6 months on – Have we learnt from our Heartbleed Mistakes?

‘The first time is an accident, the second time a coincidence, but the third time is stupidity’ has long been the mantra of infuriated parents, exasperated at their children’s ability to make the same mistake over and over. Oddly, it was also the phrase that came to mind as news of the Shellshock bug in open-source code broke – just six months on from Heartbleed.

Shellshock allows hackers to easily exploit many web servers that use the free and open-source Bash command-line shell. So far, hackers have focused their efforts on exploiting the weakness to place malware on vulnerable web servers, with the intention of building armies of bots for future Distributed Denial of Service (DDoS) attacks, flooding website networks with traffic and taking them offline. While it was initially thought that the vulnerability would only affect machines running Bash as their default command-line interface, it is now suspected that machines using related code could also be exploited.
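To make the mechanics concrete, here is a minimal sketch – not part of the original advisory, and assuming a Unix-like machine with Python and Bash on the PATH – that wraps the widely published test for the original Shellshock flaw (CVE-2014-6271). It exports a crafted environment variable shaped like a Bash function definition and checks whether Bash executes the command smuggled in after the function body; in real attacks, the same kind of payload was delivered through HTTP headers that web servers passed into Bash-backed CGI scripts.

```python
import os
import subprocess

# Hypothetical local check based on the widely published test for CVE-2014-6271:
# a patched bash merely imports the function definition, while an unpatched bash
# also runs the trailing "echo VULNERABLE" when it parses the variable.
env = dict(os.environ, x="() { :;}; echo VULNERABLE")

result = subprocess.run(
    ["bash", "-c", "echo shellshock parser test"],  # assumes bash is on the PATH
    env=env,
    capture_output=True,
    text=True,
)

if "VULNERABLE" in result.stdout:
    print("bash executed the injected command (original Shellshock flaw present).")
else:
    print("bash did not execute the injected command for this particular test.")
```

The point of the sketch is how trivially the flaw can be probed: no exploit kit is needed, just a single crafted environment variable handed to a shell that trusts its input.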

While very different in nature, there are alarming similarities between the Heartbleed and Shellshock attacks. Both exploited simple coding errors – the type of error that any developer could make. And, just like its predecessor, the main issue with Shellshock was not the error itself, but rather the assumptions made by thousands of people globally about the integrity and security of open-source code. This series of assumptions once again led to the widely reported problems caused by the bug, making global headlines and sparking a desperate scramble to close off the vulnerability.

· Assumption #1: someone had double-checked and security-tested the code before it was made available and widely deployed.

· Assumption #2: because open-source development has a community of programmers working together free of commercial imperatives, it leads to software with fewer bugs.

· Assumption #3: because the code is used by tens of thousands of organizations worldwide, it is fully tested, robust and secure.

With Shellshock, just like Heartbleed, all of these assumptions were proved wrong. Of course, these assumptions are not a reflection on open-source software development, nor on the individuals involved in writing open-source code. Such errors are just as likely to be made by commercial software providers: code worked on by huge development teams and used by hundreds of thousands of companies is just as prone to serious, widespread bugs and vulnerabilities. If it wasn’t, we wouldn’t have organizations running masses of patches on a regular basis.

Setting ‘blame’ and ‘fault’ to one side, Shellshock once again highlights that effective Internet security has to be based on more than idle assumptions, or the child’s favorite excuse of ‘he did it first, so I thought I could get away with it’. Just because thousands of other companies worldwide use the same code in their applications, services and solutions does not mean that it is secure.

There is no global Internet security task force that actively seeks out these types of vulnerabilities and closes them off when they are discovered. The blind rush to deploy open-source code – because everyone else was using it and, being open-source, it was cheap – played a major role in the scale and seriousness of the Heartbleed flaw. Six months further down the line that lesson remained unlearned, and Shellshock was able to strike. Once again people trusted, but failed to verify that their trust was deserved.

Lesson of the Day
All solutions need to be rigorously developed, tested and re-tested to ensure any vulnerabilities are removed. IT must be built on trust, and that trust must be founded on a solid technical basis. And that applies to consumers, major websites and IT vendors alike.

Despite its recent failings, open-source clearly has a role to play, and value to add, in modern IT infrastructure, which is why it is so widely used. However, it is abundantly clear, first with Heartbleed and now with Shellshock, that hackers are acutely aware that organizations are relying on vast amounts of untested code in websites, apps, security solutions and more, offering them a plethora of opportunities to exploit. If organizations want to continue to use open-source and realize its benefits, that code must be rigorously tested to remove potential vulnerabilities before it is deployed and assumed to be secure.

Thinking back to the parents’ mantra, we can excuse Heartbleed as an accident and pass off Shellshock as a coincidence, but the next large-scale open-source security flaw? Well, I’ll let you draw your own conclusions.

Image: Heartbleed via Shutterstock