2eSST VME Eases Design of Network-Centric Embedded Systems

The traditional embedded computing model has been fraught with obstacles and headaches. That is changing as technology advances create the possibility for a new “network-centric” approach to embedded computing that is shaking up the traditional industry.

DAVE BARKER AND BOB TUFFORD, MOTOROLA EMBEDDED COMMUNICATIONS COMPUTING GROUP


At the upper end of the embedded computing spectrum, there isn’t much of a notion of a “stand-alone” device anymore. Embedded computing isn’t just about a single processor monitoring and controlling some devices. Today’s high-end equipment can more accurately be described as embedded communications computing, which consists of multiple processors, multiple boards or multiple systems operating in a network with other equipment or systems. The processors in this “network” need to communicate to make the equipment function properly. What has been commonly referred to as “distributed processing” has been a growing part of embedded computing for a number of years. The “network” in a distributed processing application could be the VMEbus, the PCI bus, Ethernet or a proprietary interconnect like Race++.

The software environment in the traditional embedded computing arena has been fragmented, with many small Real-Time Operating System (RTOS) vendors and equipment manufacturers developing their own proprietary kernels. This resulted in a fragmented software market with many RTOSs that did not work together and required considerable effort to maintain and to port to new hardware.

In addition, the only standard, widely used communications protocol was TCP/IP, but this was not generally adopted in the embedded computing space because of a lack of performance and software stacks in the fragmented RTOS arena. As a result, many proprietary communications protocols and transports were developed over the years to provide the performance required by embedded computing applications. Because there were no standardized data formats or protocols, it was not practical to interconnect heterogeneous equipment from multiple vendors.

Network-Centric Computing

Network-centric computing is based on interconnecting all of the computing elements using a common networking architecture (e.g., Gigabit Ethernet). Because the hardware implementation is abstracted, applications need not have any knowledge of the underlying hardware, and the same application software can run on any computing element in the network.
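This abstraction can be pictured as a publish/subscribe layer in which applications name topics of data rather than hardware endpoints. The following minimal Python sketch illustrates the idea; the class and method names are assumptions for illustration only, not any particular middleware's API:

```python
# Hypothetical sketch of the abstraction a network-centric middleware
# provides: applications exchange data by topic name and never see the
# underlying transport or hardware. All names here are illustrative.

class Middleware:
    """Minimal in-process stand-in for a publish/subscribe layer."""

    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        # An application registers interest in a topic, not in a
        # specific board, bus address or IP endpoint.
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        # Delivery could go over Gigabit Ethernet, the VMEbus or a
        # local queue; the publisher neither knows nor cares.
        for cb in self._subscribers.get(topic, []):
            cb(sample)

received = []
mw = Middleware()
mw.subscribe("sensor/temperature", received.append)
mw.publish("sensor/temperature", 21.5)
```

Because neither side names a physical address, the same application code runs unchanged on any computing element in the network.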

This shift to network-centric computing is being driven by necessity. Market demands are requiring equipment manufacturers to create bigger, faster, better systems, which equates to more complex environments. Traditional distributed processing methods are too hard to scale as the complexity of systems increases. Equipment manufacturers need an easier way to make multiple processors/boards/systems communicate so they can focus on serving the needs of their market. The network-centric computing model allows this. There are two major factors enabling this shift to network-centric computing.

The first of these factors is the advancement in networking technologies. Ethernet and the Internet Protocol (IP) are ubiquitous. Networking using Ethernet is inexpensive and easy to implement, and it allows companies to leverage their existing networking infrastructure. The performance of Ethernet has continued to increase, from 10 Mbits/s a few years ago, to 100 Mbits/s, to Gigabit Ethernet today, and it is moving to 10 Gbit Ethernet in the near future, with TCP/IP Offload Engines (TOEs) available to boost performance even further.

The other factor has to do with software. The embedded computing software market has matured, resulting in considerable consolidation in the RTOS market. Linux is finally gaining traction within the embedded computing market with the release of Linux 2.6. These changes in the embedded computing software arena have made it practical for companies to develop middleware to make it easier for multiple processors, boards and systems to communicate and share data. With the right middleware, multiple heterogeneous hardware platforms can be supported and data can be freely exchanged between them.

Embedded communications middleware is a key element supporting the shift to the network-centric model because it allows developers to focus their efforts on their application rather than having to deal with writing code for specific hardware platforms. The equipment manufacturers are no longer tied to specific hardware implementations or specific hardware vendors. They are free to use the hardware platform that is best suited for each area of their application such as the user interface, data processing, real-time control, data storage, etc.

Various industries have standardized on communication protocols and middleware to support their chosen communication mediums. The telecom market has been a leader with the standardization on Ethernet and communication and management middleware to support the development of complex telecommunications infrastructure equipment. The industrial automation market has shifted from proprietary networks to factory networks based on Ethernet. The medical industry has standardized on Picture Archiving and Communications Systems (PACS) and Ethernet. The United States Navy has been working on its Open Architecture (OA) Initiative, which is centered on Gigabit Ethernet, Data Distribution Service (DDS) and Common Object Request Broker Architecture (CORBA) middleware.

Hardware Platforms for Network-Centric Applications

Commercial Off-The-Shelf (COTS) has been a buzzword for a number of years, but it means different things to different people. Some people equate COTS with PCs and would like to use PCs for all aspects of their embedded application. PCs appear to offer benefits such as low cost, ease of use and software availability, and they are well-suited to a number of embedded computing roles: providing the user interface, supplying processing power where cost matters more than power consumption, cooling or long lifecycles, and handling non-real-time data processing.

If PCs met all of the requirements of today’s embedded computing network-centric applications, it would be relatively straightforward for equipment manufacturers to build their embedded communications infrastructure using Gigabit Ethernet and the right communication middleware. However, there are several real-world factors that cannot be ignored when designing a large-scale network-centric application.

One of the factors is that PCs and Gigabit Ethernet cannot satisfy all of the requirements. PCs may not meet the ruggedization requirements of the embedded application. They have power consumption and cooling issues; they tend to have shorter lifecycles; and they sometimes aren’t well-suited to handle real-time, event-driven aspects of embedded applications.

The second factor is that most of the companies and industries moving to a network-centric model are coming from a traditional embedded computing model with legacy hardware and software that they cannot afford to completely discard. The VMEbus architecture has been the most widely used COTS architecture in the embedded computing industry. Many companies moving to a network-centric model have significant investments in the VMEbus infrastructure that they need to preserve.

The Best of Both Worlds

Do equipment manufacturers have to choose between the traditional distributed embedded computing model based on architectures such as the VMEbus and the network-centric model based on PCs? Not any more. Middleware (such as DDS) is not limited to communicating over Ethernet—it can be ported to communicate over any physical transport layer, including the VMEbus. Since the middleware abstracts the hardware, equipment manufacturers can take advantage of the same network-centric approach to their distributed processing systems that they employ in other parts of their system design. This provides four distinct advantages.
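Because the middleware abstracts the transport, application code can stay identical whether data moves over an Ethernet link or a memory-mapped VMEbus window. The sketch below illustrates that porting idea with two interchangeable toy transports; it is not DDS itself, and every name in it is an illustrative assumption:

```python
# Illustrative sketch of a pluggable transport layer beneath a
# middleware API. Real middleware such as DDS would add discovery,
# QoS and serialization; here only the send/receive seam is shown.

from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, data: bytes) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

class LoopbackEthernetTransport(Transport):
    """Stand-in for a UDP/Ethernet path; modeled as an in-memory queue."""
    def __init__(self):
        self._queue = []
    def send(self, data):
        self._queue.append(data)
    def receive(self):
        return self._queue.pop(0)

class MemoryMappedTransport(Transport):
    """Stand-in for a memory-mapped VMEbus window: the sender writes
    into a shared buffer that the receiver reads directly."""
    def __init__(self, size=4096):
        self._buf = bytearray(size)
        self._len = 0
    def send(self, data):
        self._buf[:len(data)] = data
        self._len = len(data)
    def receive(self):
        return bytes(self._buf[:self._len])

def exchange(transport: Transport, payload: bytes) -> bytes:
    # Application code is identical regardless of the transport chosen.
    transport.send(payload)
    return transport.receive()
```

Swapping the transport object is the only change needed to move a data path from Gigabit Ethernet onto the VMEbus, which is the portability property described above.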

One advantage is that it allows equipment manufacturers to leverage their investment in VMEbus-based distributed processing systems and continue to use VME where it makes sense. Second, it allows their VME-based distributed processing systems to integrate seamlessly into their network-centric environment via Gigabit Ethernet. Third, with the middleware communicating over the VMEbus, it gives them the increased performance they need in the real-time portions of their application. And fourth, it makes the VMEbus architecture easier to use: one of the downsides of VME over the years has been that the integration of boards and software was left as an exercise for the customer, a burden that the middleware largely removes.

Utilizing 2eSST and Gigabit Ethernet

Gigabit Ethernet is the most common interconnect between distributed processing nodes today, across a range of platforms from embedded systems all the way to corporate computing infrastructures, owing to its low cost, ubiquity and ease of deployment. However, Gigabit Ethernet can also have some disadvantages in certain real-time embedded applications. These include:

• Limited bandwidth

• Low bandwidth utilization

• Non-determinism

• High protocol stack CPU processing overhead

• Lots of cabling

• The requirement for external switches

Gigabit Ethernet's raw line rate of 1 Gbit/s corresponds to 125 Mbytes/s in each direction, of which roughly 100 Mbytes/s is usable payload bandwidth after framing and protocol overhead. In practice, however, Gigabit Ethernet rarely sustains more than 90% of that figure (about 90 Mbytes/s).
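The arithmetic behind these bandwidth figures can be laid out explicitly (using the article's working figure of roughly 100 Mbytes/s of usable payload):

```python
# Worked arithmetic behind the Gigabit Ethernet bandwidth figures.
# The ~100 Mbytes/s payload number is the article's working figure;
# the raw rate follows directly from the 1 Gbit/s signaling rate.

RAW_BITS_PER_S = 1_000_000_000            # Gigabit Ethernet line rate
raw_bytes_per_s = RAW_BITS_PER_S / 8      # 125 Mbytes/s raw, each direction

payload_bytes_per_s = 100e6               # ~100 Mbytes/s after framing/protocol overhead
utilization = 0.90                        # typical best-case utilization per the text
achieved_bytes_per_s = payload_bytes_per_s * utilization  # ~90 Mbytes/s delivered
```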

Another disadvantage of Gigabit Ethernet is the high CPU processing overhead incurred by the protocol stack. In an actual test scenario with two boards running a TCP/IP stack, each having a RISC processor running at 1.267 GHz, a Gigabit Ethernet link transferring data from one board to the other ran at 91% bandwidth utilization (91 Mbytes/s), but the CPU bandwidth consumed was 83% on the sending board and 98% on the receiving board!

It is simply not efficient in real-time applications to have an average of 90% of a CPU's bandwidth consumed implementing protocol stacks and managing connections. If other processes run on the CPU simultaneously, they throttle the bandwidth of the Gigabit Ethernet link, lowering its bandwidth utilization. TCP/IP Offload Engines (TOEs) can help reduce some of the CPU processing load, but they are not yet cost-effective.

Ethernet is generally viewed as having high latency and being non-deterministic, although that perception is a carry-over from the days of shared 10 Mbit/s Ethernet, where the latency and non-determinism resulted mainly from the mechanism by which an Ethernet node obtained the "wire". Since that mechanism isn't used in the point-to-point switched implementations employed by Gigabit Ethernet, both determinism and latency have improved. The main remaining contributors are the switches, which add latency, and the protocol stack, which introduces non-determinism.

Since Gigabit Ethernet uses point-to-point connections, non-backplane implementations require cables between nodes, or between nodes and switches. For networks with more than two nodes, hubs or switches are required. Both of these requirements—cabling and hubs/switches—increase system cost and reduce system reliability.

The 2eSST protocol, ratified as ANSI/VITA 1.5-2003, was added to the VMEbus to provide a significant bandwidth improvement, with sustained bandwidths up to 300 Mbytes/s. This represents more than a 3X improvement over the roughly 90 Mbytes/s that Gigabit Ethernet typically delivers. Another benefit the 2eSST VMEbus transport has over Gigabit Ethernet is lower latency and greater determinism. In fact, these two well-known attributes have made the VMEbus a preferred choice in hard real-time applications for over 20 years.

The 2eSST VMEbus transport utilizes significantly less CPU bandwidth than Gigabit Ethernet. When implemented in a memory-mapped architecture with DMA, 2eSST VMEbus transport CPU utilization can be as low as 10% compared to an average of 90% for Gigabit Ethernet using TCP/IP. Also, 2eSST VMEbus system implementations don’t have the cables and switches that are required for Gigabit Ethernet implementations. This provides a system cost and reliability advantage.
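The CPU-utilization figures above translate directly into application headroom. A back-of-the-envelope comparison, treating the quoted 90% and 10% figures as representative averages rather than measurements of any particular board:

```python
# Rough arithmetic comparing the CPU headroom left for application code,
# using the utilization figures quoted in the text (assumed to be
# representative averages, not measurements of a specific system).

gige_cpu_overhead = 0.90   # TCP/IP over Gigabit Ethernet, average per the text
vme_cpu_overhead = 0.10    # 2eSST with memory-mapped DMA transfers

gige_headroom = 1.0 - gige_cpu_overhead   # ~10% of the CPU left for the application
vme_headroom = 1.0 - vme_cpu_overhead     # ~90% left

headroom_ratio = vme_headroom / gige_headroom  # roughly 9x more CPU available
```

Under these assumptions a 2eSST-based data path leaves roughly nine times as much CPU available for application processing, on top of its raw bandwidth advantage.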

Thus, while both Gigabit Ethernet and the 2eSST VMEbus protocol have their respective places in a network-centric architecture, the 2eSST VMEbus protocol is preferred for use within a VME chassis instead of Gigabit Ethernet (for all the reasons outlined above). This is illustrated in Figure 1. The left-hand diagram in Figure 1 shows a traditional network-centric system implementation using Gigabit Ethernet as the primary system interconnect. The right-hand diagram in Figure 1 shows how the 2eSST VMEbus can be integrated into the network-centric architecture and can provide a high-performance communication path between VMEbus CPU boards. Standards-based middleware such as DDS can be used to abstract the VMEbus 2eSST transport from the application, allowing VMEbus platforms to seamlessly integrate into network-centric architectures along with Gigabit Ethernet as shown in Figure 2.

The VMEbus architecture has some very attractive attributes for embedded communications computing applications. It is well-suited to real-time data processing and dealing with real-time, event-driven applications. It can be used where ruggedization, power consumption, cooling, processing density (defined as the amount of compute power within a specified amount of space) and long lifecycles are important.

Although more and more PC technology is finding its way into embedded applications, there is still a need for distributed processing systems based on the VMEbus or similar architectures. With the shift that is taking place in the industry to a network-centric approach to embedded communications computing, these two contrasting technologies can work together to allow manufacturers to build bigger, faster, better and more robust equipment that meets the needs of their customers today and into the future.

Motorola Embedded
Communications Computing Group
Tempe, AZ.
(602) 438-3000.
[www.motorola.com].