PCI Express vs. Ethernet – Selecting the Superior Technology for Real-Time, Embedded Systems

PCI Express has become so widely adopted and is so flexible that it is included on most microprocessors, making it possible to implement all the connections in a rack with one interface and one protocol. Savings in cost, power, latency and complexity are all made possible through a unified design.

by Krishna Mallampati, Avago Technologies

PCI Express (PCIe) is growing up. As Ethernet matured, PCIe settled into the role of a chip-to-chip interconnect while Ethernet served as a system-to-system technology. Embedded-system designers have long used both technologies this way without a second thought, because there have been reasons (right or wrong) why these boundaries have endured. Regardless, the two technologies have co-existed. While nothing on the horizon is about to fundamentally change this, PCIe is showing every sign of growing into and competing for space that was once solely the domain of Ethernet – specifically, within the rack.

What benefits can PCIe offer embedded-system designers to compete effectively and win against Ethernet? The answer matters in real-time computing and embedded applications, where designers are constantly looking for lower price and power without compromising performance. To get the attention of real-time and embedded-systems users, PCIe must be more than incrementally better than the competing technology: it must deliver significant savings in power and price while improving, or at least preserving, performance. Real-time computing and embedded-systems users are very demanding!

PCI and its successor PCIe have been around for decades, and PCIe has built a huge ecosystem in that time, so much so that, aside from a few vendors, almost all semiconductor products come with native PCIe. The PCI-SIG has more than 800 members, reflecting the reach and breadth of adoption of this popular interface. With its latest incarnation, Gen3, running at 8GT/s, PCIe is now expanding from a chip-to-chip interconnect into an interface of choice within the rack, in many cases displacing Ethernet. Real-time and embedded engineers won't be easily swayed to replace Ethernet with PCIe for modest savings; they need an order of magnitude of savings in both power and cost before changing anything. PCIe is definitely up to that task!

Current Architecture
Today's typical real-time and embedded systems employ several interconnect technologies that must be supported; Fibre Channel and Ethernet are two examples (Figure 1). This architecture has several shortcomings. Among them are the co-existence of multiple I/O interconnect technologies, low utilization rates of I/O endpoints, and high system power and cost due to the need for multiple I/O endpoints. In addition, the I/O is fixed early in the design and build process, leaving no flexibility to change it later, and the management software must handle multiple I/O protocols along with their overhead.


Figure 1: Example of a traditional I/O system in use today

The use of multiple I/O interconnect technologies increases latency, cost, board space, design complexity and power. The endpoints are under-utilized, meaning system users pay for the overhead of these various endpoints despite their limited use. The added latency comes from converting the PCIe interface native to these systems' processors into each of the other protocols. Embedded designers can reduce system latency by using the PCIe that is native on the processors and by converging all endpoints on PCIe.
Sharing I/O endpoints, as illustrated in Figure 2, is the most obvious solution to these shortcomings. The concept appeals to system designers because it lowers price and power, improves performance and utilization, and simplifies designs. A shared-I/O architecture has several advantages.


Figure 2: A traditional I/O system using PCI Express for shared I/O 

As I/O speeds increase, the only additional investment needed is to change the I/O adapter cards. In earlier deployments, when multiple I/O technologies existed on the same card, designers would have to re-design the entire system, whereas in the shared-I/O model, they can simply replace an existing card with a new one when an upgrade is needed for one particular I/O technology.

Since multiple I/O endpoints don’t need to exist on the same cards, designers can either manufacture smaller cards to further reduce cost and power, or choose to retain the existing form factor and differentiate their products by adding multiple CPUs, memory and/or other endpoints in the space saved by eliminating multiple I/O endpoints from the card.

Designers can also reduce the number of cables that crisscross a system. Multiple interconnect technologies require different (and multiple) cables to carry each protocol's bandwidth and overhead. Consolidating the range of I/O interconnect technologies simplifies the design and reduces the number of cables needed for the system to function properly, cutting design complexity and delivering cost savings.

Implementing shared I/O in a PCIe switch is the key enabler for the architecture illustrated in Figure 2. Single-root I/O virtualization (SR-IOV) technology implements I/O virtualization in hardware for improved performance, and makes use of hardware-based security and quality-of-service (QoS) features in a single physical server. SR-IOV also allows an I/O device to be shared by multiple guest operating systems running on the same server. An embedded system most often has complete control over the OS, which makes this easier to accomplish, as the sketch below illustrates.
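
On a Linux host, for example, SR-IOV virtual functions (VFs) can be enabled through the standard sysfs interface. The following is a minimal sketch, not vendor-specific code: the device address 0000:03:00.0 is a hypothetical example, and the sketch assumes root privileges plus an SR-IOV-capable adapter and driver.

/* Minimal sketch: enable SR-IOV virtual functions from user space on
 * Linux via the standard sysfs attribute. The PCI address 0000:03:00.0
 * is a hypothetical example; substitute your adapter's address. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");            /* no such device, or not root */
        return EXIT_FAILURE;
    }
    /* Request four virtual functions; the driver then exposes one
     * logical adapter per VF, each assignable to a different guest OS. */
    if (fprintf(f, "4\n") < 0) {
        perror("fprintf");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    return EXIT_SUCCESS;
}

Each VF then enumerates as a PCIe function in its own right and can be handed directly to a guest operating system.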

PCIe offers a simplified solution by allowing all I/O adapters (10GbE, FC, IB or others) to be moved outside the server. With a PCIe switch fabric providing virtualization support, each adapter can be shared across multiple servers while presenting each server with a logical adapter. The servers (or the virtual machines on each server) continue to have direct access to their own set of hardware resources on the shared adapter. The resulting virtualization allows for better scalability, since the I/O and the servers can be scaled independently of each other. I/O virtualization avoids over-provisioning the servers or the I/O resources, leading to cost and power reduction.

Adding to the shared-I/O implementation, the PCIe fabric extends basic PCIe capability with remote DMA (RDMA), which offers very low-latency host-to-host transfers by copying information directly from host application memory, without involving the main CPU, thereby freeing the CPU to process other useful system functions.
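
The sketch below is purely conceptual: the dma_descriptor layout and the dma_post() helper are hypothetical stand-ins (real descriptor formats are vendor-specific), and the hardware copy is emulated with memcpy so the example runs. What it illustrates is the division of labor: the CPU's involvement ends once it has built a small descriptor, and the fabric's DMA engine moves the payload between the two hosts' memories.

/* Conceptual sketch of an RDMA-style transfer across a PCIe fabric.
 * The descriptor layout and dma_post() are hypothetical; real formats
 * are vendor-specific. The copy is emulated here with memcpy so the
 * sketch runs; on real hardware the fabric's DMA engine moves the
 * payload while the CPU is free for other work. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dma_descriptor {
    uint64_t src_addr;  /* source host application buffer */
    uint64_t dst_addr;  /* destination host buffer        */
    uint32_t length;    /* payload size in bytes          */
    uint32_t flags;     /* e.g. interrupt on completion   */
};

/* Stand-in for the hardware DMA engine. */
static void dma_post(const struct dma_descriptor *d)
{
    memcpy((void *)(uintptr_t)d->dst_addr,
           (const void *)(uintptr_t)d->src_addr, d->length);
}

int main(void)
{
    char src[64] = "payload from host A";
    char dst[64] = { 0 };

    /* The CPU's only work: fill in a small descriptor and post it. */
    struct dma_descriptor d = {
        .src_addr = (uintptr_t)src,
        .dst_addr = (uintptr_t)dst,
        .length   = sizeof src,
        .flags    = 0,
    };
    dma_post(&d);
    printf("host B received: %s\n", dst);
    return 0;
}
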
Table 1 provides a high-level overview of the cost comparison, and Table 2 of the power comparison, between using PCIe and using 10G Ethernet. The price estimates are based on a broad industry survey and assume that pricing for top-of-rack (ToR) switches and adapters will vary with volume, availability and vendor relationships. The tables provide a framework for understanding the cost and power savings of using PCIe for I/O sharing, which come principally from the elimination of adapters.


Table 1: Cost savings comparison between PCIe and Ethernet

PCIe is native on an increasing number of processors from major vendors. Embedded designers can benefit from the lower latency realized by not having to use any components between a CPU and a PCIe switch. With this new generation of CPUs, those designers can place a PCIe switch directly off the CPU, thereby reducing latency and component cost.


Table 2: Power savings comparison between PCIe and Ethernet

PCIe has come to dominate the mainstream interconnect market for a variety of reasons. First is its ability to scale linearly for different bandwidth requirements: from x1 connections on server motherboards, to x2 connections to high-speed storage, to x4 and x8 connections for backplanes, and up to x16 for graphics applications. Another main advantage of PCIe is its simple, low-overhead protocol. Lastly, PCIe is ubiquitous, with almost every device in a system having at least one – and often more than one – PCIe connection. From a system designer's perspective, it makes little sense to convert this PCIe interface to another technology and then back again to PCIe – highly inefficient! The sketch below puts numbers to this scaling.
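
As a rough illustration of that linear scaling, the following computes raw per-direction bandwidth for the common link widths. It accounts only for line rate and encoding (8b/10b for Gen1 and Gen2, 128b/130b for Gen3); packet headers and flow control reduce the usable payload rate further.

/* Illustrative sketch: raw per-direction PCIe bandwidth for common
 * link widths. Line rate and encoding only; packet headers and flow
 * control lower the usable payload rate. */
#include <stdio.h>

int main(void)
{
    /* transfer rate (GT/s) and encoding efficiency per generation */
    struct { const char *name; double gts; double eff; } gens[] = {
        { "Gen1", 2.5, 8.0 / 10.0 },    /* 8b/10b encoding    */
        { "Gen2", 5.0, 8.0 / 10.0 },    /* 8b/10b encoding    */
        { "Gen3", 8.0, 128.0 / 130.0 }, /* 128b/130b encoding */
    };
    const int widths[] = { 1, 2, 4, 8, 16 };

    for (int g = 0; g < 3; g++) {
        /* MB/s per lane: GT/s x efficiency x 1000, over 8 bits/byte */
        double lane_mb = gens[g].gts * gens[g].eff * 1000.0 / 8.0;
        for (int w = 0; w < 5; w++)
            printf("%s x%-2d: %8.1f MB/s per direction\n",
                   gens[g].name, widths[w], lane_mb * widths[w]);
    }
    return 0;
}

A Gen3 x1 link thus carries roughly 985 MB/s in each direction, and a Gen3 x16 link close to 16 GB/s.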

In addition to the many advantages already cited, another key attribute of PCIe is that it is a lossless fabric at the transport layer. The PCIe specification defines a robust flow-control mechanism that prevents packets from being dropped: every PCIe packet is acknowledged at every hop, ensuring successful transmission, and in the event of a transmission error the packet is replayed – something that occurs in hardware, without any involvement of the upper layers. Data loss and corruption in PCIe-based storage systems are therefore highly unlikely.
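
The toy model below, a deliberately simplified sketch with made-up numbers, captures the idea behind that credit-based flow control: a transmitter sends only when the receiver has advertised buffer space, so traffic stalls rather than drops. The real link-layer credit accounting, with separate credit types for headers and data per virtual channel, is considerably more elaborate.

/* Toy model of credit-based flow control with made-up numbers: the
 * transmitter may send a packet only while the receiver has advertised
 * buffer space, so nothing is ever dropped for lack of buffering. */
#include <stdbool.h>
#include <stdio.h>

#define RX_BUFFER_SLOTS 8

static int credits = RX_BUFFER_SLOTS;   /* advertised by the receiver */

/* Transmitter side: send only when a credit is available. */
static bool try_send(int packet_id)
{
    if (credits == 0)
        return false;                   /* stall; do not drop */
    credits--;
    printf("sent TLP %d (credits left: %d)\n", packet_id, credits);
    return true;
}

/* Receiver side: draining a buffer returns a credit to the sender. */
static void release_buffer(void)
{
    credits++;
}

int main(void)
{
    for (int id = 0; id < 12; id++)
        while (!try_send(id))
            release_buffer();           /* receiver frees a slot */
    return 0;
}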

To satisfy the requirements of the shared-I/O and clustering market segments, technology innovators such as Avago Technologies are bringing to market high-performance, flexible, power- and space-efficient devices that help real-time and embedded-system designers and users realize the full potential of PCIe for price, power and performance benefits. These switches have been designed to fit the full range of applications cited above.

Further extending PCIe's place in embedded and real-time systems is its application as a fabric – an application that, until recently, wasn't regarded as a viable general-purpose solution. That is now changing. Designers are opting for PCIe as the main interconnect inside racks, with either Ethernet or InfiniBand connecting those racks together. The ability of these technologies to co-exist and complement one another makes the fabric a brand-new application for PCIe, bringing the technology to the forefront of new system architectures while redefining the role of traditional interconnect solutions.
PCIe-based sharing of I/O endpoints is expected to make a huge difference in the multi-billion-dollar embedded markets. Even so, Ethernet and PCIe will continue to co-exist, with Ethernet connecting systems to one another and PCIe continuing to blossom in its role within the rack.

PCIe is indeed growing up, and embedded/real-time system designers are beginning to reap the benefits of its steady maturity. Looking forward, PCIe Gen4, with speeds of up to 16GT/s per lane, will help accelerate and expand the adoption of PCIe technology into real-time and embedded market segments while making it easier and more economical to design and use.

Avago Technologies, San Jose, CA. (408) 435-7400. www.avagotech.com