Open Source: A false sense of security
The Heartbleed vulnerability dropped like a bombshell: a large majority of web servers on the internet were sharing their memory with the world. The even bigger bombshell was that the vulnerability had existed for over two years. Most people consider open source more secure than proprietary code since anyone can verify that it is safe. The problem is that most people assume someone else has already done that!
OpenSSL and its Heartbleed
In early 2014, several security companies simultaneously publicized the Heartbleed bug found in OpenSSL. Since OpenSSL is used by a majority of web servers online, the bug was a big issue. But the bug wasn't new: it had been checked into the OpenSSL project more than two years earlier. In December 2011, a German developer named Robin Seggelmann checked in some minor bug fixes and a new piece of functionality, heartbeat.
Heartbeat was designed to keep a connection alive: the client sends up to 64 kB of data to the server, which responds by echoing the same data back. The problem was that a heartbeat message was built from two things, the payload and a length field. By sending 1 kB but telling the server you sent 64 kB, the server would return 63 kB of unrelated memory back to you. The heartbeat function was also enabled by default. When Cloudflare put up a capture-the-flag competition with a vulnerable server, it took only hours before several people had extracted the private SSL key.
Private SSL keys on the loose
SSL is a great way to protect against man-in-the-middle attacks. If an attacker puts himself between a user and a target, it can be detected by the loss of a secure connection (SSL) or by unverified certificates. But if the attacker is in possession of the private key, he or she can present a secure connection to the end user, and no one will be the wiser that somebody is listening in on the traffic.
When this bug was patched, it had existed for over two years in code that anyone could read. I usually say that against an attacker with enough will and resources, the only way to protect yourself is to disconnect your system. We now know, from the Snowden documents, that the NSA was aware of this bug for a long time. By collecting private keys from big sites, they were able to access encrypted information.
The cost of replacing all the potentially compromised certificates was estimated to be over 100 million USD. The problem is that 1 out of 10 was replaced with a new certificate using the same private key, so those connections are still compromised! On top of that, almost 1,000 sites among the 150,000 most popular sites online still have this issue.
When using open source in an important production system, why not check the code yourself, or hire someone to do an audit? At the very least, read up on as much information as possible; as I wrote above, there are still a lot of systems out there with this issue. When you update to a new version, check which new features are enabled by default, and disable them until you have tested them. Very few companies actually do penetration testing on their own systems; you should always do that before you let the curious public loose on yours!
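As a small example of checking what a server exposes, `openssl s_client` can print the TLS extensions a server advertises, including heartbeat. The hostname below is a placeholder, and no output from `grep` only means the extension is not advertised, not that the server is patched:

```shell
# Rough check (example.com is a placeholder): list the server's TLS
# extensions and look for "heartbeat".
openssl s_client -connect example.com:443 -tlsextdebug </dev/null 2>&1 \
  | grep -i heartbeat
```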
Larger companies tend to use proprietary software, not really because it's safer, but because they paid for it and someone is responsible under a contract. This is the other end of the problem: we paid for it, so it's someone else's problem. Yes and no; we still need to secure our own systems and make sure we don't have these kinds of issues. So if there is a better open source alternative for a system you need, use it, and spend the money on a code review or audit instead. That will be money well spent in the long run!