By Michael H. Warfield

Summary
Even though conventional wisdom may lead some to the equation "open source equals open equals not secure," reality tells a different story. Some of the most secure systems available are based on the open source model. Find out why in this first installment of Michael Warfield's LinuxWorld column devoted to security issues in the open source world. (2,300 words)
Original link at LinuxWorld
Common sense would seem to indicate open source software is insecure because, for many, secure means hidden, secret. Recent discussions on several security-related mailing lists have revolved around keeping the names of servers secret, as if hiding information makes networks secure.
Others see open source as the means to secure operating systems. Some of the more secure systems available are based on the open source model. Many don't trust closed, proprietary systems that can't be examined and verified for secure coding. To them, the widely held belief that open source systems are somehow inherently insecure is a myth.
In cryptography circles, they have a saying: The security of an algorithm should not depend on its secrecy. Now, this maxim is equally applicable to security software in general. Algorithms can be reverse-engineered. Protocols can be cracked through analysis. That which is hidden and secret will eventually be revealed. A secret, once lost, is gone forever and cannot be regained. Security through secrecy is largely a myth.
The closed source camp then argues that secrecy or obscurity, when applied to an otherwise secure system, improves the security. Obscurity slows down intruders and, even when the secrecy is broken, the security remains what it would have been with open source.
So, all other things being equal, a secure system that isn't open source should be more secure than a secure system that is open source. Sounds reasonable.
Is it reasonable, though? Can all other things be equal? Are there ways in which secrecy and closed source code can actually compromise security?
The nature of common sense
There are many opinions and definitions as to what comprises "common sense."
By one definition, common sense is the "future application of past
experience." This definition allows for the possibility that what some
refer to as common sense may, in fact, be wrong.
People unfamiliar with the open source model are accustomed to keeping their source secret. When their source does become public, it's almost always related to a security breach or the threat of a security breach.
Revelation of code developed and maintained in secret also often results in the discovery of previously undetected flaws and security holes. It's no wonder, therefore, that those accustomed to the closed source model of development view open source as insecure. Their past experience with security breaches colors their conviction that security goes down the tubes when source code becomes public.
It's in their "future application" of that "past experience" that common sense fails. Their past experience really no longer applies, since the conditions have changed.
The nature of secure software
Secure systems shouldn't depend on the secrecy of the source. What is it,
then, that makes a system secure?
I offer this as a guideline to security: Secure systems require quality software, utilizing secure coding techniques, implemented and installed in a manner consistent with security guidelines and policy.
Myths regarding security and open source software
The following are some of the myths that contribute to the belief that open source is
insecure:
Myth 1. There is no source control in open source software.
Those of us who develop open source software tend to howl with laughter over this one, but it's a criticism I hear quite a lot. Some folks actually believe open source software is developed with total disregard for tracking, accountability, or control. Nothing could be farther from the truth. With large and diverse development teams being the norm in the open source world, source control is a necessity, not an option.
Recently, a researcher at a national laboratory made such a remark to me. My reaction was to wait until I saw him again, weeks later, then innocently regale him with a number of stories and incidents arising out of "source control incidents" in open source. Many of these were taken from CVS (Concurrent Versions System) log announcements. He was utterly amazed at the level of source control in use and hasn't remarked on the lack of source control since.
Myth 2. No one really looks at the source.
The open source camp proclaims the source is available for anyone to examine. The closed source camp counters that people either don't have the time or the skill or won't invest the effort to examine it. Since the source isn't examined by everyone or examined by them personally, the argument goes, it's as good as if it were examined by nobody, and any errors in the code are unlikely to be made public.
After the release of PGP 2.6, someone examining the code noticed an error in a random-number generator. The mistake was very minor: a statement that should have been an XOR-with-assignment operation (^=) was, in fact, a plain assignment (=). The result was that the random-number seed was somewhat less random than expected. This didn't seriously compromise the security of PGP, but it did reduce the strength of the random keys.
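
To make the difference concrete, here is a minimal sketch in C (not the actual PGP source; the function and variable names are invented for illustration) showing how a single dropped caret turns an entropy-mixing step into an overwrite:

    #include <stddef.h>

    /* Illustrative only, not the real PGP 2.6 code: mix fresh input
       into an existing random pool by XORing it in, byte by byte. */
    static void mix_into_pool(unsigned char *pool,
                              const unsigned char *input, size_t len)
    {
        size_t i;

        for (i = 0; i < len; i++) {
            pool[i] ^= input[i];   /* correct: XOR folds new input into the
                                      randomness already in the pool */
            /* Buggy form:  pool[i] = input[i];
               A plain assignment overwrites the pool, discarding earlier
               randomness and leaving the seed less random than intended. */
        }
    }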
This error was quickly corrected, and the incident does illustrate some important points. It shows that the code is examined by others and that coding errors (intentional or unintentional) do get spotted. Due to the minor, obscure nature of this buglette, it also indicates that the probability of a more serious bug or backdoor going undetected is rather low.
Myth 3. Anyone could put a backdoor or trapdoor in open source software.
The simplest response to this is: How? Open source uses source control, it uses code examination and analysis by others, and it puts the personal reputations of the authors on the line. Who would personally risk his or her reputation by putting a backdoor in source that is openly available in public forums?
This can be contrasted with closed source programs, which have an amazing array of "Easter eggs," those cute little surprises programmers leave in their code. What does the existence of such surprises say about the state of code review and source control in closed source circles?
This raises the question: Are there backdoors and serious surprises we can't see? Easter eggs are cute and backdoors aren't. Outside of that, there isn't much difference between them. Placing such secret surprises in open source would certainly seem much more difficult to do, if not a paradox in itself.
Myth 4. Hackers are going to find all the security holes in open source software.
Well, this really isn't a myth. The real myth is that they won't find the security holes in closed source software. One only has to look at the security warnings and advisories attached to any of the closed source systems to realize this. It has become almost a joke in the industry that hackers (good and bad) probably have better debugging, analysis, and reverse-engineering tools than developers have. Unfortunately, this joke often ends up with a decidedly unfunny punch line for network administrators.
The closed source camp likes to point out every open source security advisory as evidence that open source is insecure. In doing so, they conveniently ignore the counterexamples in their own advisories. They also conveniently overlook the fact that open source problems are often found and fixed before they're widely exploited, while some closed source problems go unaddressed for months or longer.
Recently, Alan Cox announced a Solaris security hole on the Bugtraq mailing list after waiting over a year for Sun to fix the problem. Sun's response was that Alan had failed to notify the "correct people."
In contrast, the "Ping 'O Death" bug was fixed in Linux only a few hours after it was announced. The same bug remained unsolved in some closed source systems for weeks or months. The author of the "teardrop" exploit only released its source after seeing David Miller commit the fix to the Linux source tree.
Open source contributes to code quality
In typical corporate, closed source development projects, personal responsibility and the programmer's investment in terms of his or her professional reputation are relatively limited. Serious errors may reflect badly on one's performance review or may result in commentary from one's coworkers, but rarely does even the worst of mistakes result in a penalty to one's personal or professional reputation.
Since the source is kept secret, the mistakes and foul-ups are certainly going to be kept quiet. The skills and workmanship of a closed source programmer are rarely common knowledge to the greater community of programmers in the profession.
In contrast, the open source programmer puts his reputation on the line with every line of code he writes. Poor code quality can result in ridicule or worse. Releasing software under open source exposes one to a level of peer review (and peer pressure) that is impossible under the closed source model.
A level of personal accountability exists in open source development that doesn't exist in the closed source model. When the source and changes to it are present in public for anyone to examine, it becomes personally incumbent on the developer to ensure the code is right and that it hasn't been tampered with by any unknown parties.
Peer review and code review
There is something frightening about releasing a program in open source.
Thoughts run through your head that would never be entertained with a closed
source release. "What if I missed something?" "What if there's a hole
someone can spot?" "What if someone finds something and I look like a fool?"
"What if someone finds something and it looks like I did it deliberately?"
Every time I've posted a patch or made a commit to an open source project, these thoughts have gone through my mind. In a closed source project, it's a little embarrassing to make a mistake and have it known to your coworkers. In an open source project, to make a mistake and have it known to the entire development community and your friends is mortifying in the extreme. That last moment before hitting the Enter key -- to commit a change or send a patch out into the cold cruel world of your peers -- is the longest moment imaginable.
This is a time when you do not want to make a mistake.
Open source promotes secure coding techniques
The open source programmer tends to be a participant in his or her
profession and in the community of developers into which the source is released.
Open source programmers exchange ideas, thoughts, problems, critiques, and solutions
to a variety of development issues as they arise.
In this environment, debate rages over this technique or those practices. The open source programmer is exposed to new ideas and cutting-edge theory regarding coding practice and security. When one technique is exposed as flawed, new techniques take its place and, hopefully, future code is improved as older code is fixed.
Contrast this with closed source programmers, who work in isolation from others in the profession and take little opportunity to swap ideas with peers. The projects they work on may be considered proprietary, or their work a trade secret. In this secrecy they labor, not knowing how their work may or may not stand up against the current standards of engineering practice. They may get some of their ideas and techniques from textbooks, and those books may be decades out of date.
OK, I'll admit I've exaggerated a bit here. Of course not all closed source programmers work in such isolation. But many do. With a foot in both development models, I've worked with open source programmers, with closed source programmers who are participants in their professions and professional communities, and with isolated closed source programmers who come to work, do a job, and never see the greater profession outside. I've experienced all three environments and can say the last is, unfortunately, all too common.
Commercial advantage issues
When companies tout proprietary advantages, they're usually promoting
something that is to their commercial advantage. Let me be clear -- there's
nothing wrong with commercial advantage, and nothing inherently
incompatible between commercial advantage and secure systems.
The problems begin when commercial advantage takes priority over security. Unfortunately, in the closed source model of development, consumers have no way to make that distinction. As a consequence, closed source developers are free to place commercial advantage over security and then market the advantage as if it were security.
When we select an application, a package, or a system to use, our choice is affected by a variety of factors. Features, reliability, price, and performance can all affect our choices in both the open source and closed source packages. In the open source model, we also have the ability to base our choice on the reputation of the developers, the coding styles, the history of the source, and even the programmers' coding comments. Our choices and our responsibilities as consumers increase with open source packages.
Fearing what we don't know
The paradox between security and open source turns out to be a myth. What
little security may be derived from keeping source closed and secret is
more than offset by the security problems potentially introduced by that
same secrecy. Within the myth is the riddle of why people still believe
closed source is more secure.
Fear is one of the most powerful of human motivators. Fear of the unknown is among the strongest of fears, since the cause cannot be addressed by the reasoning mind. And still, even with the indefinable unknowns that exist in closed source software, many fear it less than open source software.
The real modern paradox isn't that security and open source are compatible. The paradox is that there are those who fear what is known and embodied in the open source model, and choose to embrace what is hidden, unknown, and uncertain in the closed source model.
In so doing, they assume the unknown is somehow secure.
About the author
Michael H. Warfield is a senior researcher with Internet Security Systems Inc.
A Unix systems engineer, Unix consultant, security consultant, and
network administrator on the Internet for well over a decade, he has been
involved in computer security for over 23 years. Mike is one of the resident
Unix gurus at the Atlanta UNIX Users Group and is a founding member
of the Atlanta Linux Enthusiasts.