
VXS is Like the Battery in that Bunny–It Keeps VME Technology Going and Going

VXS is poised to evolve using the same philosophy as VMEbus, maintaining compatibility between current and future interconnects. Ultimately, VXS systems span past, present and future, offering a wide range of options to keep VMEbus-based systems viable for the next 20 years.

ANDREW REDDIG, TEK MICROSYSTEMS


The VMEbus has become one of the most successful and long-lived buses in the history of embedded computing, extending a compatible infrastructure across 20 years and several generations of computing technology. The philosophy of VMEbus technology has allowed a large number of technology insertions—D64, MBLT, RACE++, StarFabric and now 2eSST, to name a few—while maintaining complete mechanical, electrical and software compatibility with all pre-existing legacy solutions.

This philosophy has allowed VMEbus system integrators to mix custom and off-the-shelf cards with the assurance that their investment in hardware and software would be protected as portions of the system migrated to newer technologies. It has also allowed incremental upgrades to systems after deployment, making VMEbus technology an excellent choice for defense and industrial applications with long product lifecycles.

Over the last several years, the need for bandwidth and scalability in high-performance applications has resulted in the development of several standards that extend the technology. Using the extensibility provided by the user-defined regions of P2 and P0, standards such as RACE++, SKYchannel, Myrinet and StarFabric added switched fabric interconnect to VME while maintaining compatibility with legacy VME cards. These standards used parallel connections between nodes due to the signal integrity limitations of the connectors. Because there were no open switched fabric protocols, each standard’s underlying technology tended to be controlled by one company, resulting in open standards but limited choices for users.

VXS Architecture

The next generation of scalable embedded computing solutions will be based on open interconnect standards such as PCI Express and Serial RapidIO. Because these standards are being widely adopted, the underlying technology is supported by multiple companies offering endpoints, switches, software and IP core solutions.

However, unlike parallel bus interfaces such as VME and PCI, new switched fabric interconnects are all based on high-speed serial links. Each serial link combines clock and data into a single differential pair, eliminating the need for low-skew layout even when multiple links are used between nodes. One 3.125 Gbit/s differential pair supports data throughput of 312.5 Mbytes/s using two pins. By comparison, a single parallel RACE++ port requires 41 pins to provide throughput of 267 Mbytes/s.
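That pin-count comparison amounts to a few lines of arithmetic. The Python sketch below reproduces it, assuming 8b/10b encoding on the 3.125 Gbit/s lane (which is what yields the 312.5 Mbytes/s figure) and taking the RACE++ numbers as quoted; the function and variable names are purely illustrative.

```python
# Throughput-per-pin comparison using the figures quoted above.
# Assumes 8b/10b encoding on the serial link (80% of the line rate is payload).

def serial_lane_mbytes_per_s(line_rate_gbps: float, coding_efficiency: float = 0.8) -> float:
    """Payload throughput of one serial lane in Mbytes/s."""
    return line_rate_gbps * 1000 * coding_efficiency / 8  # Gbits/s -> Mbytes/s

serial_mb_s = serial_lane_mbytes_per_s(3.125)  # ~312.5 Mbytes/s
serial_pins = 2                                # one differential pair

racepp_mb_s = 267.0                            # parallel RACE++ port, as quoted
racepp_pins = 41

print(f"Serial lane: {serial_mb_s / serial_pins:.1f} Mbytes/s per pin")
print(f"RACE++ port: {racepp_mb_s / racepp_pins:.1f} Mbytes/s per pin")
```

Run as written, the sketch shows roughly 156 Mbytes/s per pin for the serial lane against about 6.5 Mbytes/s per pin for the parallel port, a density advantage of more than 20x.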

While the density advantages of high-speed serial links are obvious, the existing P2 and P0 connectors cannot support the 2.5 and 3.125 Gbit/s signaling rates used by PCI Express and Serial RapidIO because of their signal integrity limitations. To support the next generation of interconnects, a new connector solution was required.

The VITA 41 VXS standard adds support for next-generation serial interconnects in a way that is consistent with the VMEbus philosophy of incremental evolution. First and foremost, VXS maintains complete mechanical and electrical compatibility with legacy VMEbus technology through the use of the same 6U form-factor and P1 / P2 DIN connectors. This allows a VXS-enabled system to support any existing VMEbus card that uses the traditional P1 and P2 connectors.

VXS defines two types of cards: payload cards and switch cards. Payload cards look much like legacy VMEbus cards: they retain the traditional P1 and P2 connectors supporting standard VME64x with 2eSST, and add a new higher-speed P0 connector carrying eight full-duplex serial links, each providing up to 2.5 Gbits/s of throughput in each direction. Switch cards use higher-density connectors to provide dense interconnect between payload cards, but do not support legacy VME connections through P1 and P2.

VXS-compatible backplanes can support any mix of payload and switch cards in a variety of topologies. VXS backplanes are completely passive, providing high-speed interconnect between cards but without active switching on the backplane itself. This offers an advantage over interconnects such as RACE++ and SKYchannel, which require active interlink modules mounted behind the backplane to provide the switching function.

VXS is inherently fabric agnostic, supporting a range of high-speed serial fabrics using the same pinout, connector and backplane. The fabric essentially becomes an agreement between the payload and switch cards in the system, allowing VXS infrastructure to be used for open standard fabrics such as PCI Express, Serial RapidIO, InfiniBand and Gigabit Ethernet, as well as application-specific interfaces.

One advantage of the VXS architecture is support for a range of topologies with common payload and switch card interfaces. Larger VXS systems will typically use redundant switch cards to implement a dual-star topology, as shown in Figure 1(a). Smaller systems with flexible payload cards can implement switchless topologies to avoid the overhead of a separate switch card. In some systems a mesh arrangement may make sense, with the logical choices being 3-card meshes with x4 links, 5-card meshes with x2 links, or 9-card meshes with x1 links, as shown in Figure 1(b). A ring topology, shown in Figure 1(c), is another possibility. A further variant, shown in Figure 1(d), uses an asymmetric ring to maximize left-to-right bandwidth for pipelined signal or image processing applications.
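Those mesh sizes are the logical choices because they exactly fill the lane budget of a payload slot: with eight full-duplex links available on P0 (per the payload card description earlier), a full mesh consumes (cards - 1) times the link width in lanes per card. A minimal Python sketch of that check, with illustrative names only:

```python
# Lane-budget check for switchless full-mesh topologies, assuming each
# payload slot exposes 8 full-duplex serial links on P0 (see above).

LANES_PER_SLOT = 8

def mesh_lanes_used(cards: int, link_width: int) -> int:
    """In a full mesh, every card links directly to the other (cards - 1) cards."""
    return (cards - 1) * link_width

for cards, width in [(3, 4), (5, 2), (9, 1)]:
    used = mesh_lanes_used(cards, width)
    fits = "fits" if used <= LANES_PER_SLOT else "exceeds the slot"
    print(f"{cards}-card mesh with x{width} links: {used}/{LANES_PER_SLOT} lanes per card ({fits})")
```

Each of the three configurations uses exactly eight lanes per card, which is why larger meshes require narrower links.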

In addition to in-system fabric interconnect, VXS also provides for enhanced rear panel I/O in two ways. First, because the fabric is implemented on P0 rather than P2, the 110 lower-speed I/O pins of P2 remain available for rear panel I/O. Second, VXS allocates an additional 31 pins (14 differential pairs and 3 single-ended signals) on the P0 connector for high-speed connection to rear transition modules, allowing standard PMC I/O signals or multi-gigabit XMC I/O signals to be brought to the rear panel.
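Taking those pin counts at face value, a quick tally of the rear panel resources per payload slot (illustrative only):

```python
# Rear panel I/O pins per payload slot, using the counts quoted above.
p2_user_io_pins = 110                  # lower-speed I/O freed up on P2
p0_rtm_pins = 14 * 2 + 3               # 14 differential pairs + 3 single-ended = 31

print(f"P0 pins reserved for the rear transition module: {p0_rtm_pins}")
print(f"Total rear panel I/O pins per slot: {p2_user_io_pins + p0_rtm_pins}")
```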

VXS Flexibility

The flexibility of the VXS standard presents systems integrators with challenges along with benefits. While the ability to support a range of protocols, speeds, bandwidths and topologies gives users a lot of choices, it also creates a range of options that are not necessarily interoperable. A straightforward implementation of a payload card using PCI Express or Serial RapidIO would logically connect a crossbar switch ASIC to the backplane. This approach will typically work for a dual-star topology, but would not interoperate with alternate topologies or with other switched fabrics.

Fortunately, FPGA technology provides the tools needed to increase the compatibility range of a given hardware platform. Several vendors, including Xilinx, Altera and Lattice, offer FPGAs that combine high-density logic with on-chip multi-gigabit serial interfaces. By inserting an FPGA between the on-board fabric interconnect and the off-board VXS interface, a payload card can be designed to bridge the on-board bus or fabric to a range of off-board interconnects and topologies.

VXS Examples

This approach is used in Tekmicro’s JazzStream product, shown in Figure 2. This board uses a PLX Technology 8532 switch to provide on-board PCI Express fabric, interconnecting two PowerPC processors, two PMC / XMC sites and the VXS P0 backplane interface.

One set of FPGAs is used to provide adaptable fabric bridging between the PLX switch and the XMC sites, providing support for PCI Express or Serial RapidIO XMCs as well as signal or image processing functions within the FPGAs.

Another FPGA is located between the PLX switch and the VXS P0 interface. In the simplest case, this FPGA simply forwards PCI Express traffic in both directions, allowing the card to participate in a system-wide PCI Express topology. In a more complex case, the FPGA “hides” the internal architecture behind an endpoint, creating an abstraction boundary at the card level and bridging between the on-board PCI Express fabric and an off-board fabric. The off-board fabric in this case can be PCI Express, Serial RapidIO, InfiniBand, or even Gigabit or 10G Ethernet.
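The value of that card-level abstraction boundary is easier to see in pseudocode than in a block diagram. The sketch below is plain Python and purely conceptual; the real logic is FPGA gateware, and none of these class or method names correspond to an actual API. The point it illustrates is that the on-board PCI Express side talks to one fixed interface, while the off-board fabric behind P0 is a swappable implementation.

```python
# Conceptual sketch of the card-level abstraction boundary described above.
# Illustrative only: the actual bridging is done in FPGA gateware, not software.

from abc import ABC, abstractmethod

class BackplaneFabric(ABC):
    """Whatever protocol runs across the VXS backplane (PCIe, Serial RapidIO, Ethernet...)."""
    @abstractmethod
    def send(self, destination_slot: int, payload: bytes) -> None: ...

class LoopbackFabric(BackplaneFabric):
    """Stand-in so the sketch runs; a real design would map this onto packets on the P0 lanes."""
    def send(self, destination_slot: int, payload: bytes) -> None:
        print(f"-> slot {destination_slot}: {len(payload)} bytes")

class PayloadCardBridge:
    """Presents a single endpoint to the on-board PCI Express fabric and forwards
    outbound traffic to whichever backplane fabric the card was configured with."""
    def __init__(self, fabric: BackplaneFabric):
        self.fabric = fabric

    def on_outbound(self, destination_slot: int, payload: bytes) -> None:
        self.fabric.send(destination_slot, payload)

# Swapping the fabric implementation changes the off-board protocol without
# touching anything on the on-board side of the boundary.
bridge = PayloadCardBridge(LoopbackFabric())
bridge.on_outbound(destination_slot=3, payload=b"\x00" * 4096)
```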

Because of the flexibility inherent in the FPGA-based implementation, the FPGA is also used to provide interfaces to high-capacity memory buffers, often required for streaming applications. Finally, the FPGA has access to the traditional P2 I/O pins, allowing the FPGA to incorporate a bridge from legacy RACE++ or StarFabric interfaces into the VXS fabric should that be a requirement.

A similar approach is used in QinetiQ's family of high-speed A/D and D/A cards, shown in Figure 3. These products are VXS-enabled using Xilinx Virtex-II Pro FPGA devices, which provide both signal processing functionality and fabric interfaces. Through selection of the desired IP core, these cards support several standard interconnects as well as lower-level FPGA-to-FPGA links, using the VXS backplane to provide card-to-card interconnect.

VXS Integration

One of the challenges of COTS technology is the integration of off-the-shelf hardware, software and IP cores from multiple vendors into a larger system, and the management of that integration as technology moves forward and the system is incrementally upgraded.

The scalability and bandwidth capability of VXS has resulted in a growing ecosystem of products from companies such as Tekmicro, QinetiQ, Bustronic, Pentek and Transtech. As more VXS products appear on the market, vendors will need to clearly indicate which protocols their products interoperate with. Some vendors will choose to make products that support specific protocols and topologies, either using ASIC-based switches or FPGAs with limited IP support. Other vendors will implement IP-based FPGA solutions and leave interoperability up to the user, allowing a wide range of solutions but more limited guarantees of drop-in interoperability. All of these solutions have value, but the expectations of interoperability need to be clearly defined so that systems integrators can select the optimum components for each application.

The use of fabric-agnostic standards such as VXS and XMC mandates the development of platforms that are fully interoperable at the physical level (connectors, pin assignments, electrical characteristics). Interoperability of specific switched fabric interconnects with standard topologies will develop as the market endorses those configurations.

The integration of FPGA-based bridges, along with comprehensive software drivers and operating environment support, will allow a comparable degree of interoperability at the protocol level while offering an unprecedented degree of flexibility to systems integrators. The combination of FPGA IP cores and software will manage different interconnect standards, topologies and configurations and provide abstraction boundaries to minimize integration issues between multiple vendors’ products.

TEK Microsystems
Chelmsford, MA.
(978) 244-9200.
[www.tekmicro.com].