Serial Interconnects Move to the Next Generations
Serial RapidIO 2.0 Moves into the Wireless Infrastructure
The rapid growth of mobile wireless users, and their incessant demand for more and richer applications, continues to put bandwidth per user and overall processing capacity within the wireless infrastructure at a premium. This, in turn, has a dramatic impact on interconnect performance between processing elements in the base station.
STEPHEN M. NOLAN AND DEVASHISH PAUL, IDT
The wireless industry continues to look to Serial RapidIO to address the needs of mobile users and increase the quality of service. The latest version, Serial RapidIO 2.0, addresses all of these needs and maintains backward compatibility with the previous versions of the specification—helping the installed infrastructure base to be easily upgraded for the addition of new subscriber services.
Clusters of processors—be they digital signal processors (DSPs), field programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs)—process large amounts of data within wireless infrastructure equipment such as cellular base stations. Moving the large volume of data to, from and in between these processors is no small challenge given the constraints of real-time applications, such as voice and video. Entire traffic flows must be received, processed and transmitted in microseconds. This means the end-to-end latency between switching elements needs to occur in the sub-200-nanosecond range.
The required bandwidth in the wireless infrastructure continues to multiply as mobile traffic increases and as subscribers demand more bandwidth-intensive services on their handsets, such as video on demand. Moreover, in dense urban environments, there is a desire to support the maximum number of subscribers off one base station by adding advanced 4G modulation schemes, for example Orthogonal Frequency Division Multiple Access (OFDMA) PHY processing, supporting more antennae per base station and more processing cards per chassis. With the increasing subscriber service needs of 4G and subscriber density, there is a need to distribute the processing over multiple processors and over multiple boards and chassis. Wireless base stations can be best viewed as a peer-to-peer network of processing elements doing a pipeline of tasks replicated multiple times over numerous subscribers.
Given that wireless base stations are implemented as a peer-to-peer network of processing elements, it is important to show how RapidIO enables this topology with a simpler approach than PCI Express (PCIe)-based options. A PCIe network is built around the root complex of a host processor, and the other processing elements hang off that root complex. In complex multiprocessor systems, the address spaces of multiple roots must be separated by non-transparent bridges, which can be embedded in PCIe switches but are nevertheless complex to manage in wireless base stations. RapidIO has been widely adopted for this application because it does not use a root complex-based architecture and supports true peer-to-peer networking. It is based on simple lookup tables, with packets forwarded to destination IDs (Figure 1). RapidIO also supports messaging, with the receiver managing its own memory system and deciding how to handle each message on arrival, as shown in Figure 2.
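The destination-ID forwarding described above can be sketched in a few lines. This is an illustrative model only, not any vendor's switch firmware: the class name, table size and IDs are invented for the example. The key point it demonstrates is that each switch simply maps a packet's destination ID to an output port, with no address windows or root complex to manage.

```python
# Illustrative model of RapidIO destination-ID routing: a flat lookup
# table in each switch maps destination IDs to output ports.
class RapidIOSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.route_table = {}  # destination ID -> output port

    def program_route(self, dest_id, out_port):
        """Management software writes one table entry per destination ID."""
        if not 0 <= out_port < self.num_ports:
            raise ValueError("no such port")
        self.route_table[dest_id] = out_port

    def forward(self, dest_id):
        """Return the output port for a packet, or None if no route exists."""
        return self.route_table.get(dest_id)

# Hypothetical 8-port switch on a baseband card
switch = RapidIOSwitch(num_ports=8)
switch.program_route(dest_id=0x12, out_port=3)  # e.g., a DSP in the cluster
switch.program_route(dest_id=0x20, out_port=5)  # e.g., the control processor

assert switch.forward(0x12) == 3
assert switch.forward(0x99) is None  # unknown destination
```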
The history of RapidIO goes back to 1997 when work began to create a standardized interface that could be implemented on processors and peripherals. In 1999, an open standard, RapidIO, was published that addressed the needs of reliability, increased bandwidth, low latency and other key needs in intrasystem interconnect. The specification gained rapid and widespread acceptance. The RapidIO Trade Association (RTA) was formed in 2000 and continues to provide updated standards that address the needs of the industry.
Migration to Serial
The migration to 2G and 3G cellular base station architectures, along with the expected services, required even larger clusters of processors in base station architectures. The original RapidIO interface, however, was a parallel architecture and did not easily support the required numbers of DSPs. Because of the number of interconnects required, the limitations of board space for traces and chip pin-count restrictions, migration to a serial interface was necessary.
The Serial RapidIO standard version 1.2 was released in 2002 and subsequently upgraded to version 1.3 in 2005. This has become the embedded interconnect of choice, adopted by major DSP, FPGA, switch-fabric and microprocessor providers for DSP clusters in cellular base stations, backplanes, military, imaging and industrial control applications. Due to the interoperability ensured by the standard, a robust ecosystem of devices, software and development tools is based on Serial RapidIO. Because of the multiplicity of Serial RapidIO endpoint devices available, the need for bridging to other interconnects is minimized.
Today, Serial RapidIO is the key data plane interconnect in 3.5 and 4G base stations. Virtually all new base station designs are incorporating Serial RapidIO architectures.
The typical base station architecture partitions the baseband processing onto one or multiple baseband cards. These cards utilize multiple DSPs and possibly other elements, such as control plane microprocessors, “Chip Rate ASICs” and FPGAs on a single board. A Serial RapidIO switch device acts as the switch fabric and aggregation point on the card. Integrated Device Technology (IDT) even offers a pre-processing switch (PPS) family of devices that, in addition to standard switching functions, can offload some of the simple data pre-processing activities of the DSP, ASIC or FPGA to allow those devices to function more efficiently. If multiple baseband cards are used in a system, typically, they are interconnected across a backplane using Serial RapidIO to a control/switch card that would also use RapidIO devices.
A standard implementation of a base station architecture uses the Common Public Radio Interface (CPRI) serial interface for the link between the baseband cards and the radio cards. IDT also offers a stand-alone Fabric Inter-Connect (FIC) device that bridges from the Serial RapidIO interface on the baseband card to a CPRI interface. Another FIC bridges from the CPRI interface to the TDM interface on the RF card side (Figure 3).
The Serial RapidIO 1.3 standard has many attributes that make it optimal for chip-to-chip communication in wireless infrastructure applications. It is a reliable, low-latency transport that supports 1x and 4x lane configurations per port at 1.25 Gbit/s, 2.5 Gbit/s and 3.125 Gbit/s lane rates, allowing up to 10 Gbit/s on a single 4x port. Combinations of 1x and 4x ports at different lane rates vary widely, depending on the bandwidth and user-capacity targets of any given base station architecture. The Serial RapidIO 1.3 standard also imposes lower header overhead than similar serial data communication standards, such as Ethernet and PCI Express. As a result, RapidIO switches offer lower power per payload gigabit than PCIe, because a smaller fraction of each transmitted bit is spent on headers before the user data in the payload is reached.
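The 10 Gbit/s figure for a 4x port follows from simple arithmetic, assuming the 8b/10b line coding used by Serial RapidIO 1.x (so 80% of the baud rate carries data). A minimal sketch of that calculation:

```python
# Worked example of the lane-rate arithmetic, assuming 8b/10b encoding
# (10 line bits carry 8 data bits, i.e., 80% coding efficiency).
def port_bandwidth_gbps(lanes, lane_rate_gbaud, coding_efficiency=0.8):
    """Raw data capacity of a Serial RapidIO port in Gbit/s."""
    return lanes * lane_rate_gbaud * coding_efficiency

# A 4x port at 3.125 Gbaud: 4 * 3.125 * 0.8 = 10 Gbit/s
print(port_bandwidth_gbps(4, 3.125))   # -> 10.0
# A 1x port at 1.25 Gbaud: 1 * 1.25 * 0.8 = 1 Gbit/s
print(port_bandwidth_gbps(1, 1.25))    # -> 1.0
```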
Serial RapidIO also allows processor-to-processor communication (peer to peer) without a root complex, which is a key to the architectural topology of wireless baseband systems with multiple distributed processors. There is no need to manage a complex memory map over multiple processing elements, making processing much more efficient than a corresponding PCIe solution.
Next-Generation—Serial RapidIO 2.0
During the past few years, Internet traffic has shown tremendous, rapid growth in mobile access markets. The recent increased demand for 3G smart phones and wireless-enabled PDA-type devices will continue to drive the increase of mobile data traffic in the enterprise market space. On the general consumer side, the biggest driver for increased data traffic for the foreseeable future will be uploads and downloads of photos and video for watching and sharing. Resulting user demand for faster access and shorter download times—while looking for a richer multimedia experience—will encourage service providers to add higher capacity 3.5G and 4G base stations to their existing wireless infrastructure mix, be they WiMAX or 3G LTE.
In 3.5G and 4G base station architectures, increasing data rate capability and the push for a higher capacity of users per base station led to increasing backplane speeds between radio and baseband cards. This overall bandwidth increase, in turn, requires more multicore DSPs interconnected in a standard cluster configuration on the baseband card. The multicore DSPs now offer architectures with three to six cores, each operating in the Gigahertz range. Moving forward, the industry migration to 45nm and lower fabrication processes will allow even higher operating frequencies and possibly the incorporation of more processing cores on-chip.
Despite all of the benefits that Serial RapidIO 1.3 brings to wireless applications, continued demands for greater bandwidth motivated the RapidIO Trade Association to publish the Serial RapidIO 2.0 standard in 2007. All leading wireless infrastructure equipment providers are adopting this technology in their next-generation platform designs to increase overall system performance and to support more subscribers per base station with more revenue-generating real-time multimedia features. But Serial RapidIO 2.0 is not limited to the rollout of new platforms or upgrades of base stations to 4G standards.
In some cases, Serial RapidIO 2.0 is ideal for cost reduction and power-saving measures in legacy platform updates. Why? Simply because Serial RapidIO 2.0 supports more bandwidth per serial link, doubling the baud rate from 3.125 Gbaud in Serial RapidIO 1.3 to 6.25 Gbaud. Moreover, signals can be driven up to 100 cm across two connectors, making it ideal for cascading multiple chassis co-located in one physical rack and thereby immediately expanding the processing capacity of a base station.
Two of the key benefits that Serial RapidIO 2.0 provides over its predecessor are bandwidth and port flexibility, while still maintaining backward compatibility with the Serial RapidIO 1.3 standard. The addition of 2x per-port lane configurations allows the same port rates as Serial RapidIO 1.3 with half the lane count, easing trace routing and making better use of available traces.
Serial RapidIO 2.0 also adds support for Virtual Channels (VCs) and Virtual Output Queuing (VOQs), improving overall traffic management and allowing more efficient use of the switch fabric.
The concept of VCs is to take one physical “pipe” and subdivide it into multiple smaller pipes. Consider a busy highway, such as Highway 101 through Silicon Valley. The highway has the capacity to support a large number of cars (packets) travelling at 60 miles per hour. One can view the entire highway as a single pipe, or as multiple smaller pipes: a commuter lane, for example, is one of those smaller pipes, carrying cars (packets) of a given priority. On the highway this division is physical; on a Serial RapidIO link, it is managed logically. Packets of different priorities are transmitted on different VCs, and each VC can be allocated a certain share of the bandwidth of the physical link. In terms of the commuter-lane analogy, latency-sensitive video can be allocated a larger bandwidth share and a higher priority on one VC, while traffic that is less sensitive to real-time constraints is transmitted on other channels over the same physical link. The RapidIO Trade Association summarizes this well:
“The last piece of the performance engineering story for RapidIO Specification 2.0 is the virtual channel. Virtual channels (VCs) provide the ability to aggregate traffic flows with similar characteristics, while segregating them from flows with different characteristics, and guaranteeing bandwidth to each flow. The comprehensive flow control mechanisms in RapidIO Specification 2.0 can be used to ensure that the traffic in each virtual channel meets its quality of service (QoS) requirements, without interfering with any other traffic type.”
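The bandwidth-partitioning idea behind VCs can be illustrated with a weighted arbiter. This is a conceptual sketch only, not the RapidIO wire protocol: the channel names and weights are invented, and real link-layer arbitration works on packets and credits rather than Python queues. It shows how one physical link's transmit slots can be divided among channels in proportion to assigned weights.

```python
# Conceptual sketch: weighted round-robin arbitration dividing one
# physical link's transmit slots among virtual channels.
from collections import deque

class VirtualChannelLink:
    def __init__(self, weights):
        self.weights = weights                        # VC name -> slots per round
        self.queues = {vc: deque() for vc in weights} # one queue per VC

    def enqueue(self, vc, packet):
        self.queues[vc].append(packet)

    def transmit_round(self):
        """One arbitration round: each VC gets slots proportional to its weight."""
        sent = []
        for vc, weight in self.weights.items():
            for _ in range(weight):
                if self.queues[vc]:
                    sent.append(self.queues[vc].popleft())
        return sent

# Hypothetical allocation: video gets 3 slots per round, best-effort gets 1
link = VirtualChannelLink({"video": 3, "best_effort": 1})
for i in range(4):
    link.enqueue("video", f"v{i}")
    link.enqueue("best_effort", f"b{i}")

print(link.transmit_round())   # -> ['v0', 'v1', 'v2', 'b0']
```

Note that the lower-priority channel still makes progress every round; the weights bound its share of the link rather than starving it.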
The Serial RapidIO 2.0 virtual output queue-based backpressure mechanism allows switches and endpoints to learn which destinations are congested, and to send traffic to uncongested destinations. This feature enables distributed decision making, ensuring that available switching bandwidth is optimally used. The latency of decision making is low, as congestion information is exchanged using control symbols that are embedded within Serial RapidIO packets.
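A hedged sketch of the decision-making idea described above: an endpoint keeps one output queue per destination (the virtual output queues) and steers new traffic toward destinations that have not signalled congestion. The control-symbol exchange itself is abstracted here into a simple congestion set, and all names are invented for illustration.

```python
# Conceptual sketch of VOQ-based backpressure: steer traffic away from
# destinations that have signalled congestion via control symbols
# (the signalling itself is abstracted into a set here).
from collections import deque

class VOQEndpoint:
    def __init__(self, destinations):
        self.voqs = {d: deque() for d in destinations}  # one queue per destination
        self.congested = set()                          # destinations under backpressure

    def backpressure(self, dest, is_congested):
        """Update congestion state learned from received control symbols."""
        if is_congested:
            self.congested.add(dest)
        else:
            self.congested.discard(dest)

    def dispatch(self, work_item):
        """Queue work for the least-loaded uncongested destination."""
        candidates = [d for d in self.voqs if d not in self.congested]
        if not candidates:
            return None                                 # everything congested: hold
        dest = min(candidates, key=lambda d: len(self.voqs[d]))
        self.voqs[dest].append(work_item)
        return dest

# Hypothetical cluster of three DSPs; dsp1 reports congestion
ep = VOQEndpoint(["dsp0", "dsp1", "dsp2"])
ep.backpressure("dsp1", True)
assert ep.dispatch("frame") in ("dsp0", "dsp2")  # congested dsp1 is avoided
```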
There are also improvements to Quality of Service (QoS) that will appeal to carrier-grade communications applications. Within the physical layer, the new standard adds receive-equalization capabilities that allow it to support extended long-reach (100 cm) traces at the full data rate, even on conventional FR4-based PC board materials.
The continuous advances in the wireless infrastructure to meet the growing demands of mobile users create the need for significant increases in system compute capability and bandwidth. The industry has responded with the evolution of its most successful interface standard to meet the needs for more bandwidth, more flexibility and better quality of service. The Serial RapidIO 2.0 standard addresses all of these needs and maintains backward compatibility with the previous versions of the specification—helping to keep the installed infrastructure base from becoming obsolete. The device provider community is developing devices to support the new interface, including DSPs, CPUs, FPGAs and switch fabrics.
Integrated Device Technology
San Jose, CA.