Originally Posted By: haertig
If you think about that, you'll have to admit that opensource *IS* inherently more secure.

Open source is more secure because...? The way so many people parrot that same mantra boils down to a nearly circular kind of logic: anyone can contribute code to an open source project and anyone can see the code, ergo, the thinking goes, it is more (pick your adjective) secure/correct/efficient/yada-yada. But as this major GnuTLS flaw so starkly illustrates, just because anyone CAN look at the code for open source software doesn't mean that anyone actually DOES, at least with a knowledgeable and critical eye. Obviously, with this major GnuTLS vulnerability, nobody did (well, except for the bad guys who may have been exploiting this hole for years).
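
To give a feel for how a flaw like that can sit in plain sight, here is a simplified sketch of the general class of bug reported in the GnuTLS verification code: an error path returns a negative error code, but the caller treats any nonzero value as "true", so a failed check slips through as success. This is an illustration only, with made-up names and values, not the actual GnuTLS source.

```c
#include <stdio.h>

#define ERR_INVALID_CERT (-43)   /* hypothetical error code */

/* Hypothetical check: intended to return 1 if the certificate chains to a
 * trusted CA and 0 if it does not -- but on an internal error it falls
 * through to cleanup with a negative error code instead. */
static int check_if_trusted(int cert_is_malformed)
{
    int result;

    if (cert_is_malformed) {
        result = ERR_INVALID_CERT;  /* meant as "error", not "trusted" */
        goto cleanup;
    }

    result = 0;  /* verification genuinely failed: not trusted */

cleanup:
    return result;
}

int main(void)
{
    /* Buggy caller: treats any nonzero return as "trusted". A malformed
     * certificate yields -43, which is nonzero, so it sails right through. */
    if (check_if_trusted(1))
        printf("certificate accepted (oops)\n");
    else
        printf("certificate rejected\n");

    return 0;
}
```

Nothing about the code being publicly visible catches a bug like that; only a reviewer actually reading the error paths does.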

Actually, I think MS has come a long way since the days about a decade ago when Internet Explorer and the IIS webserver had more holes than a block of Swiss cheese. People would groan when a report about a new IE or IIS vulnerability was published, because the reports just kept coming and coming, and the holes were often quite big. People were losing trust in MS products entirely, so Bill Gates made some big decisions.

I remember thinking that when Gates launched his Trustworthy Computing Initiative, it was a huge business and mental shift. Remember, we were coming out of the Dot Com days, when "get the code out first, get it out fast" was the mantra in software development. Bill Gates essentially said that MS needed to think of our computers more like appliances or utilities--they need to "just work"--and that that was the level of functionality users expected. So the mindset, the procedures, the design, the software tools, etc. were changed to emphasize the quality and security of their code. Code written after TCI was initiated seems light years ahead in security compared to what came before. (And most of Win XP was written before TCI, by the way, which is a huge reason to move on.) When was the last big IE or IIS vulnerability, the kind you'd read about in the mainstream press? I can't think of one.

Can an open source developer do the same thing? Sure. Do they? Not necessarily. So which model is inherently more likely to produce secure code: a closed one that systematically checks for problems, or an open one that merely can check? There is nothing "inherently" more secure about open source. It is inherently more transparent, but that only matters if someone acts on that transparency. A one-person open source project, like a small app, could be riddled with security vulnerabilities that nobody else ever bothers to review, and yet tons of people may merrily go about using it, feeling confident because they're using open source code and it seems to run just fine. That's akin to "walking by faith, not by sight".