It didn't take long for substantive exploration and explanation to emerge concerning the Heartbleed vulnerability in the OpenSSL cryptographic software library. Within a day or two of its disclosure on the OpenSSL Project site, the Internet was replete with remarks about it, ranging from wildly speculative conspiracy theories to academics' comprehensive analyses of how the bug landed in the code and what the OpenSSL team did to fix it.
However, as veteran cryptographer Bruce Schneier, chief technology officer at Co3 Systems, wrote in his Schneier On Security blog, "This may be a massive computer vulnerability, but all of the interesting aspects of it are human."
Indeed, the flaw itself was revealed to be a missing bounds check in OpenSSL's implementation of the Internet Engineering Task Force's (IETF) Request For Comments (RFC) 6520, the "Heartbeat Extension" of the Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) protocols. The extension, first implemented in OpenSSL in December 2011 and made a recognized IETF standard in February 2012, essentially allows TLS to keep sessions alive without requiring continuous data transfer.
There are numerous concise explanations of how the flaw, present in versions 1.0.1 through 1.0.1f of the library, allowed those with malicious intent to obtain crucial data. As briefly stated by Paul Ducklin of security technology vendor Sophos Ltd., the missing bounds check allowed a party with bad intent to falsely declare to the TLS server that a virtually empty heartbeat request payload was 64 kilobytes long.
"Then, OpenSSL will uncomplainingly copy 65535 bytes from your request packet, even though you didn't send across that many bytes," Ducklin wrote in a Sophos explainer. "That means OpenSSL runs off the end of your data and scoops up whatever else is next to it in memory at the other end of the connection, for a potential data leakage of approximately 64KB each time you send a malformed heartbeat request."
Subsequent research into exactly how nasty the effects of the flaw were on the global network revealed vexing possibilities: since the 64KB of data a bad actor could swipe was random, that information could have been completely useless. On the other hand, as Web performance and security vendor Cloudflare reported – after the flaw had been detected and fixed – it was also possible to obtain SSL private keys in a matter of hours. It took one researcher 2.5 million requests to do so, another just 100,000.
The answer, as has been well-documented by now, was for end users to change passwords on any of their usually-visited sites shown to be vulnerable, and for operators of servers running OpenSSL to revoke security certificates issued prior to April 7, 2014 – the day the flaw was announced – and to obtain new ones.
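For server operators, the cleanup can be sketched with standard OpenSSL commands. The filenames and hostname below are illustrative; certificate revocation itself must go through the issuing certificate authority.

```shell
# Check the installed OpenSSL version: 1.0.1 through 1.0.1f were
# vulnerable; 1.0.1g (released April 7, 2014) contains the fix.
openssl version

# Since the old private key may have leaked, generate a fresh key and
# certificate signing request (CSR) to submit for a new certificate.
# "example.key", "example.csr", and the CN are illustrative values.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.key -out example.csr \
    -subj "/CN=example.com"
```

Patching alone was not sufficient: any certificate whose private key was exposed while a vulnerable version was running had to be reissued from a new key.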
In a larger sense, the flaw brought to the fore the question about exactly how open source projects such as OpenSSL should be supported, especially if a significant number of enterprises use such code in their businesses – and how massive forking of open source projects multiply the amount of uncertainty around a technology exactly when it needs an injection of stability.
While the old open source proverb (given enough eyes, all bugs are shallow) was borne out with Heartbleed, it also became evident that, while it may have been 64 kilobytes shallow, it was also 7,900 miles wide – the ramifications extended across the entire globe, at the very least in inconvenience. Whether networks suffered catastrophic compromises is still unknown.
"The 'with enough eyes' adage is valid, but the key part of that is 'enough eyes,'" Steve Marquess, founding partner of the OpenSSL Software Foundation, the small group of developers responsible for the protocol, told CACM. "The fact that the code is completely visible doesn't help if no one has the time/inclination/motivation to actually look at it."
On his blog Speeds And Feeds, Marquess explained exactly how shoestring an operation OpenSSL was: the foundation receives about $2,000 annually in donations, and also makes money by selling commercial consulting contracts. It has never made more than $1 million in a single year. The foundation also has only one "full-time" developer.
"There should be at least a half-dozen full-time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work," he wrote. "If you’re a corporate or government decision maker in a position to do something about it, give it some thought. Please."
Since the post was published, Marquess said, "We're seeing more interest than ever before in sustainable funding of OpenSSL, but nothing has been finalized yet. Here's a sound bite for you: before Heartbleed happened, I would tell people how important OpenSSL was and they would nod and say, 'yes, yes, of course it is,' and move on. Now, they are calling me to tell me how critical OpenSSL is.
"Talk is cheap, though, so we'll see."
Andy Grant, principal security engineer at San Francisco-based security consultancy iSec Partners, said one large community effort to clean up OpenSSL is already under way.
"While not a rewrite of the OpenSSL code, the OpenBSD team has started a major cleanup attempt of the OpenSSL code," Grant said. "Their focus is minimizing attack surface and stripping down the code base to the essentials for their needs. Hopefully the OpenSSL project can benefit from this massive effort."
In fact, the OpenBSD effort does not appear to be intended to benefit OpenSSL: the team has publicly forked OpenSSL into an effort called LibreSSL. Its minimally designed Web site takes some oblique swipes at the existing OpenSSL effort:
"We know you all want this tomorrow. We are working as fast as we can but our primary focus is good software that we trust to run ourselves. We don't want to break your heart."
As might be expected from an OpenBSD project, the site's maintainers say the first implementation of the new code will be for the OpenBSD operating system, and that support for other OSes will be provided once they have rewritten and fixed enough of the code to have a stable baseline they can trust and maintain, and the right portability team in place.
They also say they need a stable commitment of funding. Left as an open question, of course, is whether the technical community at large will choose to supply enough eyes and money to support either or both of these efforts, or whether the fork will leave even smaller numbers of loyalists in each camp.
Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.