Virtualization solutions for real-time and embedded applications are available now, and there are multiple implementations and approaches available to choose from. But what are the right questions to ask and the things to consider? It is important to look at the different approaches and understand some of the key attributes that an embedded, real-time hypervisor should possess.
BY GERD LAMMERS, REAL-TIME SYSTEMS
It is amazing just how long it sometimes takes for a new technology to mature to the point where it can be applied successfully without too many unwanted side effects. Hypervisor technology on Intel x86 platforms is a case in point. Engineers stayed away not merely out of distrust of consolidated systems running on a single piece of hardware; simply put, the building blocks needed for secure and deterministic real-time hypervisors were not there a decade ago. Over the last few years this has changed. Low-power multicore processors are state of the art, and hardware-assisted virtualization technology (e.g., Intel VT) now allows the makers of hypervisors to build secure, high-performance virtualization products. As always, there are trade-offs, and no "one size fits all" solution when it comes to hypervisors. While their general use is the consolidation of multiple systems onto a single piece of hardware (Figure 1), the specific selection depends on the particular requirements and application. Here we examine hypervisors used in products for industrial automation, though the discussion applies equally to other fields where hard real-time performance and security are essential.
Figure 1: The purpose of a hypervisor is "system consolidation" and coordination of multiple CPUs on a single die.
When choosing a hypervisor for an embedded application with real-time requirements, there are a number of important things to consider, for example: the “Type” of hypervisor used, security aspects, real-time performance, scalability, portability and ease of use.
First of all, there are several different types of hypervisors available. The so-called "host-based" hypervisor is implemented as an application on top of a host operating system like Windows or Linux. Host-based hypervisors not only depend on the host operating system being up and running; scheduling and access to hardware devices also go through the host operating system, adding a non-deterministic layer between the guest operating system and the hardware. By default they can therefore provide only virtual or emulated devices for the guests to work with, because all the real, physical devices are already serviced by the host operating system installed on the hardware. These restrictions obviously rule out real-time applications (Figure 2).
Figure 2: A "host-based" hypervisor is implemented on top of a host operating system running on the physical hardware.
On the other hand, a "bare-metal" hypervisor runs directly on the hardware with no host operating system getting in the way (Figure 3). This is the only way to provide determinism and direct hardware access with unmodified drivers to an OS. So while a "host-based" hypervisor may be great for IT or server virtualization, it is clearly not suitable for hard real-time, robust applications in an industrial controller or medical device.
Figure 3: A "bare-metal" hypervisor runs directly on the hardware and offers virtual or real hardware interfaces to the guest operating systems.
When considering "bare-metal" hypervisors, there are again different implementations. There are bare-metal hypervisors that use hardware-assisted virtualization, such as Intel VT, to monitor and potentially limit a guest operating system's access to protected resources, such as memory or the devices of other guests running in parallel. This approach of course means that the hypervisor must occasionally "step in" to modify or block access to the hardware, which introduces jitter in the system running on top and therefore costs some determinism.
Real Time and/or Security
At the other end of the spectrum, there are solutions in which the operating systems are configured or patched to limit their access to only a portion of the underlying hardware, allowing a guest to run alongside a second operating system that is virtualized the same way. The big benefit of this approach is that it eliminates any and all virtualization overhead, because the operating systems run directly on their assigned portions of the hardware without a hypervisor getting between them. As a consequence, however, there is no provision for security: no hypervisor monitors, limits or prevents an operating system from accessing resources owned by other guests. At a time when security plays an ever more important role, this might not always be the best choice; direct, "unmonitored" hardware access should probably not be given to operating systems that are targeted by malware and hackers, such as Microsoft Windows, the most commonly used operating system for human machine interfaces (HMIs). On the other hand, when it comes to a real-time operating system (RTOS), eliminating both virtualization overhead and jitter can be very advantageous for best performance. Like everything else, it is a trade-off. An ideal hypervisor would offer a choice per guest: hardware-monitored separation for the secure execution of guest operating systems, or a configuration in which a guest executes unmonitored for best-possible real-time performance.
The question of security has become more and more important over the last few years. To satisfy basic security concerns, operating systems should execute completely independent of each other, utilizing hardware separation with Intel VT. But security starts the moment the hardware is turned on. A perfect hypervisor for industrial automation would support a full “chain of trust,” starting with the initial execution of the BIOS, the boot loader and then the hypervisor. The hypervisor should provide end-to-end security in the startup phase, making sure that everything that is loaded, including its own configuration, is signed and unmodified before and during the boot process. Then, at runtime, the hypervisor would still have to manage access rights to programming interfaces (APIs) and shared memory sections or any other intersystem communication or guest operating system control functionality.
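The chain-of-trust idea described above can be sketched as a running measurement that each boot stage extends before handing off to the next. This is a minimal, illustrative sketch: real systems use a cryptographic hash such as SHA-256 anchored in hardware (e.g., a TPM), and all names here are hypothetical; FNV-1a stands in for the hash only so the example stays self-contained.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-in "measurement" of a boot-stage image. A real chain of trust
 * would use a cryptographic hash (e.g., SHA-256); FNV-1a keeps the
 * sketch self-contained. */
static uint64_t fnv1a(const uint8_t *data, size_t len, uint64_t seed)
{
    uint64_t h = seed;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Extend the running measurement with the next stage, mirroring the
 * BIOS -> boot loader -> hypervisor -> configuration sequence. */
static uint64_t extend_measurement(uint64_t chain, const char *stage_image)
{
    return fnv1a((const uint8_t *)stage_image, strlen(stage_image), chain);
}

/* Replay the measurements of all stages and compare the result to a
 * known-good value; any modified stage changes every later link. */
static int boot_is_trusted(const char *const stages[], size_t n,
                           uint64_t expected)
{
    uint64_t chain = 0xcbf29ce484222325ULL; /* FNV offset basis as root */
    for (size_t i = 0; i < n; i++)
        chain = extend_measurement(chain, stages[i]);
    return chain == expected;
}
```

Because each link seeds the next, tampering with any single stage invalidates the final value, which is what lets the hypervisor refuse to boot modified images or configurations.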
Getting to Real Time
Next on the list of things to consider is the real-time performance and determinism of an RTOS running as a guest operating system on a hypervisor. Because this is one of the most challenging hypervisor requirements in industrial applications, for products such as motion controllers, it sits right at the top of the list of criteria when hypervisors are evaluated. Even a bare-metal design is no guarantee of real-time performance, and quite a few things must be done properly when developing a hypervisor with hard real-time, deterministic behavior. Among the first things to be implemented, but only after thinking them through carefully, are interrupt handling for and scheduling of guest operating systems. Fortunately, on the x86 hardware used for hypervisor designs and system consolidation, the days of single-core processors lie in the past. Dual-core, quad-core or even larger processors can be used nowadays, and these new chips are often cheaper and require less power than a single-core Pentium did only a few years ago.
Multicore processors permit each guest operating system to “own” one or more CPUs exclusively, which eliminates the need for a hypervisor to schedule operating systems. In addition, each guest can execute without interruption just as if it had been deployed on its own dedicated hardware board (Figure 4).
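Such a static core assignment can be pictured as a small partition table held by the hypervisor. The sketch below mirrors the quad-core example from Figure 4 (two cores for Windows, one core each for two RTOS instances); the structure and names are hypothetical, not any vendor's actual configuration format.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical static partition table: each guest owns its cores
 * exclusively, so the hypervisor never has to schedule operating
 * systems against one another. */
struct guest_partition {
    const char *name;
    uint32_t    core_mask;   /* bit n set => guest owns CPU core n */
};

static const struct guest_partition partitions[] = {
    { "Windows (GPOS)", 0x3 },  /* cores 0-1 */
    { "RTOS #1",        0x4 },  /* core 2 */
    { "RTOS #2",        0x8 },  /* core 3 */
};

/* A valid static partitioning assigns every core to at most one guest;
 * that exclusivity is what lets each guest run without interruption. */
static int partitioning_is_exclusive(const struct guest_partition *p,
                                     size_t n)
{
    uint32_t seen = 0;
    for (size_t i = 0; i < n; i++) {
        if (p[i].core_mask & seen)
            return 0;             /* core claimed twice */
        seen |= p[i].core_mask;
    }
    return 1;
}
```

A check like this would run once at configuration time; afterward no runtime scheduling decision is ever needed, which is precisely why the approach is deterministic.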
Figure 4: Partitioning of cores. Example of a quad-core processor configured with a dual core for Windows and a single core each for two RTOSs.
Interrupt handling can also be a problem when implementing applications with hard real-time constraints. If interrupts must first be captured by a hypervisor before they can be passed to the target guest operating systems, this clearly increases interrupt-latency times and jeopardizes determinism, all the more so if it happens at high frequency. The best real-time performance is therefore achieved when interrupts never have to go through software at all, and the hypervisor instead uses the capability of the Intel architecture, whether on a standard laptop or a small embedded board, to route interrupts directly in hardware to the target CPU(s). In this fashion, just as on native systems, target guest operating systems execute without first making a detour through software. Even after satisfying all of the above criteria—a hypervisor that runs directly on the hardware, provides security and doesn't interfere with RTOS scheduling or interrupt latencies—there are still more aspects to consider.
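On x86, routing a device interrupt directly to one core amounts to programming a redirection entry in the I/O APIC so delivery targets that core's Local APIC with no software on the path. The sketch below only constructs the 64-bit entry value (field layout per Intel's I/O APIC documentation: bits 0-7 vector, bits 8-10 delivery mode, bit 11 destination mode, bit 16 mask, bits 56-63 destination APIC ID); a real hypervisor would additionally write it, as two 32-bit halves, into the memory-mapped redirection table.

```c
#include <stdint.h>

/* Field constants for an I/O APIC redirection entry (physical
 * destination mode, fixed delivery, unmasked). */
#define IOAPIC_DELIVERY_FIXED   (0ULL << 8)
#define IOAPIC_DEST_PHYSICAL    (0ULL << 11)
#define IOAPIC_UNMASKED         (0ULL << 16)

/* Build an entry that delivers interrupt `vector` straight to the core
 * whose Local APIC has ID `dest_apic_id`, bypassing hypervisor software. */
static uint64_t ioapic_route(uint8_t vector, uint8_t dest_apic_id)
{
    return (uint64_t)vector
         | IOAPIC_DELIVERY_FIXED
         | IOAPIC_DEST_PHYSICAL
         | IOAPIC_UNMASKED
         | ((uint64_t)dest_apic_id << 56);
}
```

With the RTOS pinned to its own core, pointing its devices' interrupts at that core's APIC ID gives native interrupt latency, which is the property the paragraph above calls out.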
On an Intel x86 board, there are more things to be aware of. Intel keeps adding clever features that reduce power consumption by lowering CPU voltage or clock speed, or, when needed, gain additional computing power by overclocking one CPU while slowing down a second. This is of course completely counterproductive for an RTOS running on that second core. A hypervisor used in hard real-time applications has to take all of this into consideration and be able to block access to features like Turbo Boost, power management or other functions that could potentially have a negative system-wide impact.
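One common place to enforce such blocking is the hypervisor's WRMSR intercept: when a guest tries to write a model-specific register with system-wide effect, the write is denied. The MSR indices below are real Intel registers (IA32_PERF_CTL for P-state/Turbo requests, IA32_MISC_ENABLE for miscellaneous feature control), but the filter policy itself is a hypothetical sketch, not any product's actual list.

```c
#include <stdint.h>
#include <stddef.h>

#define MSR_IA32_PERF_CTL     0x199  /* P-state / Turbo requests */
#define MSR_IA32_MISC_ENABLE  0x1A0  /* misc features, incl. Turbo enable */

/* Illustrative deny-list: MSRs whose modification by one guest could
 * disturb an RTOS running on another core. */
static const uint32_t denied_msrs[] = {
    MSR_IA32_PERF_CTL,
    MSR_IA32_MISC_ENABLE,
};

/* Called from the hypervisor's WRMSR intercept:
 * returns 1 to pass the write through, 0 to block it. */
static int wrmsr_allowed(uint32_t msr)
{
    for (size_t i = 0; i < sizeof denied_msrs / sizeof denied_msrs[0]; i++)
        if (denied_msrs[i] == msr)
            return 0;
    return 1;
}
```

In practice the policy would also cover ACPI-mediated paths to the same features, but the intercept-and-deny pattern is the same.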
And then there are certain other resources that require protection—like the last-level cache of a processor, or the bus to the main memory of the system, both of which are shared between guest operating systems. If, as in typical industrial automation applications, a hypervisor runs Microsoft Windows as a graphical user interface and an RTOS in parallel, the RTOS should always have priority over Windows when accessing shared resources such as the memory bus or last-level cache. Whenever a conflict over resources occurs in a particular system, a hypervisor should always be able to prioritize the RTOS over the GPOS (Figure 5).
Figure 5: CPUs each with L1 and L2 caches and a shared L3 cache, with a GPOS and an RTOS, illustrating the shared cache topology.
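One concrete mechanism for protecting the shared last-level cache is Intel's Cache Allocation Technology (CAT), which assigns each class of service a contiguous capacity bitmask over the cache ways. The sketch below only does the bitmask arithmetic, assuming a 12-way LLC; the particular RTOS/GPOS split is illustrative, and programming the masks into the relevant MSRs is left out.

```c
#include <stdint.h>

#define LLC_WAYS 12   /* assumed way count of the shared L3 cache */

/* Build a contiguous capacity bitmask covering `count` ways
 * starting at way `first` (CAT masks must be contiguous). */
static uint32_t cat_mask(unsigned first, unsigned count)
{
    return ((1u << count) - 1u) << first;
}

/* A dedicated RTOS cache partition means the GPOS mask must not
 * overlap it; otherwise Windows can still evict RTOS cache lines. */
static int masks_isolated(uint32_t rtos, uint32_t gpos)
{
    return (rtos & gpos) == 0 && rtos != 0 && gpos != 0;
}
```

Reserving a few ways exclusively for the RTOS bounds the cache interference from a busy GPOS, which is exactly the kind of prioritization the paragraph above asks of a hypervisor.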
After having reviewed the security and real-time aspects of hypervisor design, we come to a short but very essential topic, namely communication. If multiple operating systems are deployed with strict separation, how are they to communicate with one another? Before the trend toward consolidating systems, an Ethernet connection was typically used for intersystem communication. A hypervisor should therefore make consolidation quick and easy by providing a virtual network over which the operating systems can communicate. Such a network could also be used, for example, to provide remote display functionality and access to network drives or a physical network. Communication via shared memory, an interrupt-based event system and time synchronization between guests should of course be available as well.
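The shared-memory channel mentioned above can be pictured as a single-producer/single-consumer ring buffer laid out in a region both guests have mapped. This is a minimal sketch with hypothetical names; a real intersystem channel would add an interrupt-based doorbell so the receiver need not poll, and memory barriers appropriate to the platform.

```c
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 8
#define MSG_SIZE   32

/* One producer guest writes `head`, one consumer guest writes `tail`;
 * with single ownership of each index, no lock is needed. */
struct shm_ring {
    volatile uint32_t head;            /* written by producer only */
    volatile uint32_t tail;            /* written by consumer only */
    char msgs[RING_SLOTS][MSG_SIZE];
};

static int ring_send(struct shm_ring *r, const char *msg)
{
    uint32_t next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail)
        return 0;                      /* full: sender must retry */
    strncpy(r->msgs[r->head], msg, MSG_SIZE - 1);
    r->msgs[r->head][MSG_SIZE - 1] = '\0';
    r->head = next;                    /* publish only after the copy */
    return 1;
}

static int ring_recv(struct shm_ring *r, char *out)
{
    if (r->tail == r->head)
        return 0;                      /* empty */
    memcpy(out, r->msgs[r->tail], MSG_SIZE);
    r->tail = (r->tail + 1) % RING_SLOTS;
    return 1;
}
```

Pairing such a ring with the hypervisor's event mechanism gives guests a low-latency replacement for the Ethernet link they used before consolidation.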
Although we already discussed security, hard real-time performance and intersystem communication, we should now briefly consider the equally important questions of usability, portability, scalability and flexibility. Unlike consumer products, which are often stable over their life-cycles, industrial automation systems are generally longer-lived. Constantly driven forward by demands for better performance and efficiency, industrial systems must often be upgraded within a few years of initial product deployment.
Since the technology on which industrial systems are based advances so rapidly, it is also likely that such systems will have to be re-hosted on a new industrial PC, probably equipped with next-generation processors. If such a system depends on hypervisor technology for its proper functioning, the underlying hypervisor must be correspondingly upgradeable.
Therefore, the ideal hypervisor will run on many different platforms, from Atom to Xeon, from dual-core to many-core processors and from small embedded modules to server boards. It will provide developers with a hassle-free, out-of-the-box experience without help from specialists. The hypervisor that meets all these requirements by design will accelerate time-to-market while keeping development and maintenance costs in check. At a time when multicore is everywhere, embedded hypervisors are here for good. But not every hypervisor is good for every application.
+49 (0)751 359 558-0