TECHNOLOGY CONNECTED

Security in the Wireless World

Is Open Source Wireless Connectivity Worth the Security Risk?

The Heartbleed security breach, based on OpenSSL, raises the spectre of attacks across a range of wirelessly connected embedded devices. Rigorous software development processes are critical for protecting wirelessly connected devices in the Internet of Things.

BY DAVE HUGHES, HCC EMBEDDED


Open Secure Sockets Layer (OpenSSL) is widely used to provide network security in many different kinds of computing systems, including wirelessly connected embedded systems in the emerging Internet of Things. OpenSSL is also the open source security library whose defect enabled the widely publicized Heartbleed vulnerability. While there are advantages to open source libraries such as OpenSSL, there are clearly risks as well, many of which stem from the development process itself. The main process used to develop OpenSSL is simple: a programmer develops code, a reviewer checks it, and the code is released.

This is how most of the world’s software is developed. Look behind the scenes of OpenSSL development, however, and you find a team of about four programmers, only one of whom works on it full time. This leads to a fairly obvious question: why do huge companies, often with access to significant engineering resources, trust their customers’ data and their own reputation to such a small team, especially one outside their control that may expose them to unquantifiable quality and security risks? “Because it has always been done that way” would seem to be an insufficient response in light of the chaos caused by Heartbleed.

In retrospect, Heartbleed seems to be more of a warning tremor than a full earthquake. It showed the potential scope and depth of harm, but the consequences of this particular fault were relatively mild. Continuing to follow the same path, however, will undoubtedly lead to similar problems, and the ubiquity of the software is in itself a weakness, which can be exploited by those who choose to do harm.

Better Software Development Methods Needed

If the methods of development used by OpenSSL were demonstrably the state of the art in robust software development, there would not be much to debate. However, security problems such as Heartbleed, Apple’s “goto fail” and the GnuTLS certificate-verification flaw were caused by defects in the software itself, not necessarily in the protocols or their design. Across various industries there are well-established methods for developing high-quality software. The aerospace, industrial, medical and transport industries use software processes based on the “V” model of development defined by IEC 61508, and the data shows that this model not only reduces defects significantly, but in many cases also reduces the cost of managing software over its lifecycle.

How would such methods have helped in the case of the OpenSSL Heartbleed bug? Let’s look at some specific development approaches that can help address security.

“V” Model Development

In the Heartbleed situation, the available information states that the software failed to check the bounds of a protocol field and then processed it blindly. Standard V model development includes unit testing and boundary-case analysis/testing that would have instantly alerted developers to the issue (Figure 1). Other elements of the process would also have picked up these kinds of issues. For example, a decent static analysis tool would have picked up Apple’s recent issue with its TLS software, reconstructed in the sketch below.
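To make that concrete, here is a minimal, self-contained reconstruction of the “goto fail” control-flow defect (CVE-2014-1266). The function names are simplified stand-ins for the real SecureTransport hash calls, but the control flow mirrors the published bug: a duplicated goto fail; line skips the decisive signature check while the compiler stays silent, and any analyzer that flags unreachable code reports it immediately.

#include <stdio.h>

/* Illustrative stand-in for a hash/verification step. */
static int step(const char *name, int result)
{
    printf("running %s\n", name);
    return result;
}

static int verify_signature(void)
{
    int err = 0;
    if ((err = step("update(serverRandom)", 0)) != 0)
        goto fail;
    if ((err = step("update(signedParams)", 0)) != 0)
        goto fail;
        goto fail;                /* the duplicated line: always taken */
    /* unreachable: the decisive check below is silently skipped */
    if ((err = step("final(signature check)", -1)) != 0)
        goto fail;
fail:
    return err;                   /* returns 0, i.e. success, without checking */
}

int main(void)
{
    /* Prints 0 even though the final verification step never ran. */
    printf("verify_signature() = %d\n", verify_signature());
    return 0;
}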

Figure 1
V model of the systems engineering process from: “Systems Engineering Process II” by Osborne, Brummond et al.
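The Heartbleed defect itself is just as easy to express, and just as easy to catch with the boundary-case unit tests the V model mandates. The sketch below is hypothetical (it is not the OpenSSL heartbeat code, and the names are illustrative), but it shows the essential pattern: a record carries a claimed payload length, and the one-line check comparing that claim against the bytes actually received is exactly what was missing.

#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Hypothetical heartbeat-style record: a claimed payload length
   followed by the payload itself. */
typedef struct {
    const uint8_t *data;   /* raw record bytes      */
    size_t         len;    /* actual bytes received */
} record_t;

/* Copy the echoed payload into out (capacity out_cap).
   Returns the payload length on success, -1 on a malformed record. */
static int parse_heartbeat(const record_t *rec, uint8_t *out, size_t out_cap)
{
    if (rec->len < 2)                 /* need the 16-bit length field */
        return -1;
    size_t claimed = ((size_t)rec->data[0] << 8) | rec->data[1];
    /* The Heartbleed defect was trusting 'claimed' blindly. The fix is
       this boundary check against what actually arrived. */
    if (claimed > rec->len - 2 || claimed > out_cap)
        return -1;
    memcpy(out, rec->data + 2, claimed);
    return (int)claimed;
}

/* Boundary-case unit tests of the kind V model development mandates. */
int main(void)
{
    uint8_t buf[16];
    const uint8_t ok[]  = {0x00, 0x03, 'a', 'b', 'c'};
    const uint8_t lie[] = {0xFF, 0xFF, 'a'};   /* claims 65535, sends 1 */
    record_t r1 = { ok,  sizeof ok  };
    record_t r2 = { lie, sizeof lie };
    record_t r3 = { ok,  1 };                  /* truncated record */
    assert(parse_heartbeat(&r1, buf, sizeof buf) == 3);
    assert(parse_heartbeat(&r2, buf, sizeof buf) == -1);
    assert(parse_heartbeat(&r3, buf, sizeof buf) == -1);
    return 0;
}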

It would be impractical, from either a cost or a resource point of view, to propose that full V model development be used for all software, and it is not the intention of this article to state that open-source methodologies are “bad.” Open source software is open not just in its source code but also in the processes used to develop it. It is no secret in the industry what processes could be used to achieve a low software defect level. However, no open-source software today goes to these lengths, and in the area of security the question is: is this approach good enough? Indeed, can it ever be good enough?

Verification of Software Components

When a company uses a piece of equipment in a highly sensitive application, you would expect the manufacturer of that equipment to verify that every component reaches the required level of quality. In manufacturing this happens as a matter of course: supplier history, component strength, ISO 9001 compliance and so on are all checked. Strangely, it is unclear whether companies managing large amounts of potentially sensitive customer data apply the same discipline to the security of their software components.

There are deeper issues to consider. If it were possible to create a perfect TLS implementation, would that mean the system was secure? More secure, maybe, but if a defect were sitting elsewhere in the target system (e.g., in the TCP/IP stack), it could still be possible to expose memory. It is much less likely that this would yield sensitive data, but it remains possible. Eventually we conclude that every part of a sensitive system must be designed to a verifiable standard.

The only practical solution is to carefully partition what belongs in the critical part of a system and what does not. Bringing the whole of Linux (used in many of the systems that run OpenSSL) to this standard, for instance, is clearly unrealistic. A mistake in a Linux or BSP update, in entirely unrelated code, could leave systems vulnerable, and companies basing their products on such a system would have very little ability to protect themselves.
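One way to enforce such a partition is to put the verified, security-critical code behind a deliberately narrow interface, so that nothing in the larger system, Linux and BSP updates included, can reach its internals. The header below is purely hypothetical, a sketch of what such a boundary might look like rather than any existing product’s API.

/* secure_channel.h - hypothetical narrow boundary around the verified,
   security-critical part of a system. Application code (and any Linux or
   BSP code) sees only this interface; keys and plaintext never cross it
   except through these calls. All names are illustrative. */
#ifndef SECURE_CHANNEL_H
#define SECURE_CHANNEL_H

#include <stddef.h>
#include <stdint.h>

/* Opaque handle: the channel's internals stay inside the verified zone. */
typedef struct secure_channel secure_channel_t;

/* All functions return 0 on success, a negative error code otherwise. */
int  sc_open (secure_channel_t **ch, const char *peer);
int  sc_send (secure_channel_t *ch, const uint8_t *buf, size_t len);
int  sc_recv (secure_channel_t *ch, uint8_t *buf, size_t cap, size_t *got);
void sc_close(secure_channel_t *ch);

#endif /* SECURE_CHANNEL_H */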

The Problem with “Free” Software and Security

If we accept that mistakes will always be made and that systems will tend to become more complex, then continuing as things are now will probably result in further problems. The commercial devaluation of software does not help. The idea that software can be created and obtained for free is a bizarre concept for commercial companies to believe in, and it focuses only on the initial capital cost of software while ignoring ongoing maintenance. If the lifetime cost of developing and maintaining “free” software were truly accounted for, it would probably raise some corporate eyebrows.

Things could also become quite difficult for a company involved in a “Toyota-style” legal case, where the consequences of software errors are much worse than compromised data. Imagine a defect, introduced by a hobby programmer in Australia and reviewed by a programmer working in his spare time in Argentina, that resulted in injury or loss of life.

Again, this is not an attack on open source projects; they are open and transparent. The problem is blind usage of any software without a proper assessment of context and risk. Developers should choose development processes that reflect how much they value the security of customer data (Figure 2).

Figure 2
Developers should choose development processes that reflect how much they value the security of customer data.

The argument that because software is open everyone will fix everything is clearly not sustainable anymore: the Heartbleed bug existed for two years before anyone noticed the problem. That would not be acceptable in any safety-critical or secure environment. Several distinct issues are at play.

First, the only way to exploit the Heartbleed bug was by challenging a system that used OpenSSL. High-security systems (weapons systems, nuclear power stations, etc.) publish as little information as possible about their internals to make the attacker’s starting point as difficult as possible. Transport layer security (TLS) poses practical problems in this respect, since its whole point is secure interoperability; the communication protocols used must therefore be in the public domain. But OpenSSL is so widely used that, once an issue is discovered, it is relatively easy to find a victim. Concealing the details of an implementation reduces the likelihood that an attack can be mounted effectively.

Second, attacks on TLS-related algorithms in recent years have revolved around side-channel methods, such as observing changes in power consumption or response times, rather than attacking the algorithms directly. These attacks are normally only possible with direct knowledge of the specific algorithm used. In the TLS case there are again limitations, because the algorithm to be used must be publicly negotiated. However, the specific realization does not need to be public knowledge.
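A concrete example of a realization detail that can stay private, and that defeats the timing variant of these attacks, is comparing secret values in constant time. A routine like the standard library’s memcmp returns as soon as it finds a mismatch, so response times leak how many leading bytes an attacker has guessed correctly. The sketch below is a well-known technique, not any specific library’s code: its running time depends only on the length.

#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Constant-time comparison: runtime depends only on len, not on where
   (or whether) the buffers differ, so response-time measurements reveal
   nothing about the secret. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];      /* accumulate differences, never branch */
    return diff == 0;
}

int main(void)
{
    const uint8_t secret[] = { 1, 2, 3, 4 };
    const uint8_t guess[]  = { 1, 2, 0, 4 };  /* differs midway */
    assert(!ct_equal(secret, guess, 4));
    assert( ct_equal(secret, secret, 4));
    return 0;
}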

Moving to Secure Embedded Software Components

The commercial market for standard software components has been damaged by free software from many sources. How this affects professional companies that need good quality code and support is not obvious; it seems that developers lose the benefits of scale that specialist providers bring. HCC, an embedded software vendor, has always focused on high-quality, reliable components, such as failsafe file systems, and we are now working on components developed to verifiable standards. Ultimately many of these will achieve certification to IEC 61508 SIL 3. We strongly believe that key components of embedded software should be developed once and reused in many environments. Providing these components with the necessary life-cycle support and documentation can make this level of quality more affordable across the industry.

The security of devices has become a critical issue for both device manufacturers and consumers. Wireless embedded devices have specific security issues based on their applications, though a large part of making them secure requires a rigorous approach to the development of software for them. As in similarly sensitive fields such as aerospace and medical, a formal approach to development will significantly reduce the probability of a major incident with a product (Figure 3).

Figure 3
Formal development methods are well understood and will reduce the likelihood of security issues caused by software defects.

HCC Embedded USA
New York, NY
(212) 734-1345
www.hcc-embedded.com