I have been struck by an occasional series of Daring Fireball posts in which the question is raised: What explains the relative lack of for-profit Mac malware? The conclusion seems to be that there are three factors: a system-dynamics answer, which says that Windows' overwhelming market share keeps the attention of malware authors; an aspect of user culture; and some element of technical superiority in the platform.
I would argue that there is yet another factor: developer education. After all, no matter how secure the OS, if an application that is widely distributed on it has flaws, the machine is vulnerable. But Objective-C is not exactly a trivial language to pick up and learn, and writing in Carbon doesn’t buy you any development ease points either. So Mac software vendors have to be fairly well educated, and presumably in that process they learn how to avoid introducing flaws like potential buffer overflows. Of course, Apple makes the majority of the software that many casual Mac users use every day, too: Mail, Safari, iTunes, iPhoto… and the company has proven to have a reasonable track record with avoiding exploitable issues there.
The big exception, though, is QuickTime: cross-platform, widely used on websites, widely installed on PCs thanks to iTunes, and vulnerable. There are in the range of 50-60 CERT vulnerability notes indicating potentially exploitable flaws in QuickTime. So clearly Apple doesn’t have a virtuous monopoly on programming skill.
Compare this, though, to the history for iTunes, also cross-platform, also widely installed: just two vulnerabilities that affect iTunes directly. The rest are vulnerabilities arising from QuickTime. The suggestion here is that Apple programmers defer much more to the lower-level frameworks provided by Apple than their Windows counterparts do. Consider the wide impact of something like the exploitable buffer overflow in JPEG handling on Windows, which affected the OS, AND Microsoft Office, AND half a dozen other Microsoft products, because the vulnerable code was distributed with each application rather than centralized at the OS level.
So, OK, a new theory: because Mac OS X apps rely heavily on centralized system frameworks, more attention can be devoted to keeping those central resources secure. That in turn keeps the OS secure.
Which is why the news that Apple built a way to hide program behavior into its port of the DTrace tool is so alarming. For the uninitiated, DTrace is an open-source tracing and debugging utility that allows inspecting the internals of program operations, something essential for software development and also for security testing. Apple’s implementation introduces a way for software authors to flag their process so that it is ignored by DTrace (the soon-to-be-infamous PT_DENY_ATTACH request).
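To make the mechanism concrete, here is a minimal sketch of what opting out looks like from the application side. PT_DENY_ATTACH is an Apple-specific ptrace() request; a process that issues it refuses future debugger attachment, and Apple's DTrace port skips such processes. The surrounding program is purely illustrative.

```c
/* Sketch: a Mac OS X process opting out of tracing via PT_DENY_ATTACH. */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the kernel to refuse any future attempts to attach to this process. */
    if (ptrace(PT_DENY_ATTACH, 0, NULL, 0) == -1) {
        perror("ptrace(PT_DENY_ATTACH)");
        return 1;
    }

    printf("pid %d is now hidden from debuggers and, on Apple's port, from DTrace\n",
           (int)getpid());
    sleep(30); /* long enough to try attaching from another terminal and watch it fail */
    return 0;
}
```

One line in a shipping application is all it takes, which is exactly why the precedent is troubling.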
From a security perspective, this won’t have much effect on researchers’ ability to observe Apple applications. Security researchers who follow Chris Wysopal’s admonition to use tools that are free from interaction with the platform being analyzed will simply use non-crippled tools to see what’s going on. But it sets a very bad precedent. By creating a way to hide what software is doing on Mac OS X, Apple has created something that smells a lot like a rootkit. And we know what sort of trouble that gets people into.