Developing for Multicore Systems
Embedded Technology, Multicore, and Virtualization. Are You Keeping Up?
The trend toward multicore processors and massively parallel architectures is just beginning. In order to keep pace and realize the potential of these developments, new tools and approaches will be needed.
BY CASEY WELTZIN, NATIONAL INSTRUMENTS
The embedded design world has changed markedly over the last decade, and the progress shows no sign of slowing. Multicore processing, in the form of both symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP), is becoming commonplace, with embedded multicore CPU revenue expected to grow 6X from 2007 to 2011. In addition, field programmable gate arrays (FPGAs) have grown in capability and come down in cost, providing high-speed functionality that could once be achieved only with application specific integrated circuits (ASICs). Finally, virtualization is blurring the line between hardware and software by enabling multiple operating systems to run on a single processor. With the rapid evolution of these technologies, how can embedded developers possibly keep up? These technologies bring big challenges as well as huge potential for developers of embedded designs, and it is vital to take advantage of them now while keeping development time to a minimum.
It goes without saying that multicore processing represents an enormous shift in embedded design. With just one processor core on a chip, embedded designers have traditionally been able to use sequential programming languages such as C even for the most complex applications. The presence of multiple processing cores on one physical chip, however, complicates the design process considerably. So far, most commercial compilers cannot automatically analyze which sections of code can run in parallel. That has forced embedded designers looking to take advantage of multicore processors to use parallel programming APIs that add overhead to code and are difficult to debug. In addition, traditional sequential programs make parallel routines very difficult to visualize, creating a big problem for designers inheriting legacy code or struggling with their own complex applications. If today's parallel programming is difficult, imagine how it will be when the next generation of processors with 16 or more cores arrives.
The most obvious solution to this challenge is using better programming tools and methods to abstract away the complexity of multicore hardware. While APIs such as OpenMP and POSIX threads have become commonplace in parallel applications, newer APIs such as the Multicore Communications API (MCAPI) promise to be more scalable and to support a wide variety of parallel hardware architectures for both SMP and AMP. In addition, new tool suites such as Intel Parallel Studio aim to provide better debugging tools than previously available. Finally, graphical dataflow languages such as NI LabVIEW provide an inherently parallel programming model for SMP that can greatly reduce time-to-market. It makes no sense to program serially when your application is supposed to run in parallel. By automatically analyzing parallel sections of code and mapping those sections onto multiple threads, dataflow languages allow you to focus on your main task: developing code quickly and concisely (Figure 1).
Figure 1: Using a dataflow language such as LabVIEW can speed development of parallel embedded applications.
Envision the typical embedded software design process for one of your projects. A large embedded application likely starts with a flow chart, and then individual pieces of the flow chart are translated into code and implemented. With dataflow programming, you can skip a step; code can be implemented in parallel as laid out on your flow chart without translation into a sequential language. In this way, investing in parallel programming tools, including new APIs and IDEs that support dataflow languages, will help you make the most of advances in multicore technology for your embedded designs.
Next, FPGAs have changed the way that high-speed and massively parallel embedded designs are implemented, and they will no doubt continue to evolve. In the past, implementing custom signal processing routines such as digital filtering in hardware meant designing an ASIC at significant up-front expense. While this may have been cost-effective for high-volume applications, low-volume embedded designs were forced to use a combination of existing ASICs or to run signal processing code in software on a considerably slower processor. FPGAs have been a game changer. Now you can simply download custom signal processing applications to an FPGA and run them in hardware, at a cost of only tens of dollars. In addition, because FPGAs implement your embedded applications in hardware, they are by nature massively parallel. With all of these advantages, it is important to make better use of FPGAs in embedded designs, and to develop in less time.
One major challenge embedded developers face is the difference in the design tools used to program FPGAs and microprocessors. While many developers are comfortable writing high-level C code, at least for sequential microprocessor applications, FPGA programming is typically done in a hardware description language (HDL) such as VHDL. This gap, tantamount to developers speaking different languages, adds a major hurdle to the development cycle, especially when FPGAs and processors are both used in a single design. To solve this problem, a number of tools, such as Impulse CoDeveloper, have been developed to translate C applications into HDL code. These tools enable you to specify applications at a high level and then target them to FPGAs. In addition, graphical dataflow languages such as LabVIEW allow you to develop for FPGAs without specific HDL knowledge. Because dataflow provides an inherently parallel approach to programming, it also lets you automatically take advantage of the massively parallel nature of FPGAs. The message here is simple: using high-level FPGA design strategies, such as dataflow languages and C-to-HDL translators, can maximize the efficiency of your design team and reduce your time-to-market.
Finally, one of the most recent technologies to enter the embedded scene is virtualization. The main idea behind this technology is to make better use of processing hardware by abstracting the details of the specific hardware platform away from operating systems and applications. Specifically, one way to use virtualization in embedded designs is to install a piece of software called a hypervisor, which allows multiple operating systems to run simultaneously on the same machine. This has positive implications for both the overall capability of an embedded system and its use of multicore hardware. In a system with multiple homogeneous processor cores, a hypervisor makes it easy to construct an AMP software architecture in which individual operating systems are assigned one or more cores. At a high level, you can think of virtualization technology as making your multicore hardware multitalented (Figure 2).
Figure 2: Installing a hypervisor enables asymmetric multiprocessing (AMP) on a set of homogeneous processor cores.
Though designers often program entire embedded systems from the ground up, pressure to reduce development time, and therefore cost, has led to greater use of operating systems in the embedded domain. This, however, presents a problem: how do engineers balance the need for the services and user interface provided by a commercial OS with the real-time performance required by an embedded application? Imagine, for example, that you are designing a medical imaging machine. How can you take advantage of the built-in UI capabilities of an OS such as Linux while processing imaging data in real time? A hypervisor can meet both needs: running a feature-rich commercial OS and a real-time OS in parallel can reduce development time for your embedded applications while maintaining determinism.
Though trends in embedded technology, including multicore processing, FPGAs, and virtualization, represent a big departure from traditional development techniques, there are clear steps you can take to harness them and stay competitive. First, adopt programming tools that abstract away hardware features such as multiple processing cores or FPGA gates. By concentrating on implementing your design, while spending minimal time adjusting for the underlying hardware architecture, you can bring your embedded products to market faster. Programming environments with parallel debugging features, new parallel programming APIs, dataflow programming, and C-to-HDL converters can all help you achieve these goals. In addition, employing virtualization lets you take advantage of both real-time processing and commercial OS services, reducing development time and making the most of your multicore hardware. As the next generation of embedded systems grows more powerful than ever, taking advantage of these technologies will help your company stay ahead of the curve.