TECHNOLOGY IN CONTEXT
PCI Express 2.0: The Next Frontier in Interconnect Technology
The widespread success of PCI Express continues with the rollout of the 2.0 specification. The spec brings with it wider bandwidth for users and tighter design conditions for developers.
ALI JAHANGIRI, PLX TECHNOLOGY
PCI Express (PCIe) is now the de facto standard for I/O in the server and PC interconnect arena. The PCI Special Interest Group (PCI-SIG) has recently released the updated PCIe 2.0 Base Specification, which offers significant enhancements over its predecessor, PCIe 1.1, at the physical, access control, software notification and system levels. This was accomplished while maintaining full backward compatibility with PCIe 1.1 hardware and software. PCIe 2.0 doubles PCIe 1.1’s rated speed, to 5 gigatransfers per second (GT/s), effectively increasing the aggregate bandwidth of a 16-lane link to approximately 16 Gbyte/s. Bandwidth is quoted in gigatransfers rather than gigabits because PCIe uses 8b/10b encoding, in which every eight bits of data are encoded into a 10-bit symbol, so only 80 percent of the raw bit rate carries payload.
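The bandwidth arithmetic above can be checked in a few lines. The sketch below is illustrative, not part of the specification; it simply applies the 8b/10b overhead to the raw transfer rate and lane count.

```python
# PCIe 1.x/2.0 bandwidth arithmetic: with 8b/10b encoding, only 8 of
# every 10 bits on the wire carry payload data.

def pcie_bandwidth_gbytes(transfer_rate_gt, lanes, duplex=True):
    """Payload bandwidth in Gbyte/s for a PCIe 1.x/2.0 link."""
    payload_gbits = transfer_rate_gt * (8 / 10)  # Gbit/s per lane, per direction
    gbytes = payload_gbits / 8 * lanes           # Gbyte/s per direction
    return gbytes * 2 if duplex else gbytes      # aggregate over both directions

per_direction = pcie_bandwidth_gbytes(5.0, 16, duplex=False)
aggregate = pcie_bandwidth_gbytes(5.0, 16)
print(per_direction, aggregate)   # 8.0 16.0
```

At 5 GT/s, a x16 link thus carries 8 Gbyte/s per direction, or roughly 16 Gbyte/s aggregate, matching the figure quoted above; the same formula gives 8 Gbyte/s aggregate for PCIe 1.1 at 2.5 GT/s.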
PCIe is a layered protocol composed of physical (PHY), data link (DLL) and transaction (TL) layers (Figure 1). The layered architecture has provided PCIe with a modular structure such that even though the PHY bit rate was doubled in PCIe 2.0, it did not affect the upper layers. The Physical Interface for the PCI Express Architecture (PIPE) defines the interface between the PHY and the DLL. PIPE inherently preserved its 10-bit/20-bit wide data path, while the frequency of the PIPE clock was doubled to accommodate 5 GT/s bit rates.
To maintain backward compatibility, the PHY in PCIe 2.0 has to operate at both 5 GT/s and 2.5 GT/s. (The software can also dynamically switch the link speed.) These high frequencies pose a new set of signal-integrity challenges for the PCIe PHY: There are substantial differences between 2.5 and 5.0 GT/s transmitter (Tx) and receiver (Rx) timing, based on the need to account for additional jitter effects and differences in jitter budgeting methodology (Table 1).
Jitter falls into two categories: random sources (Rj) and deterministic sources (Dj). Total jitter (Tj) is the convolution of the probability density functions of all the independent jitter sources, Rj and Dj. While the allocation between Rj and Dj was not specified in PCIe 1.1, PCIe 2.0 now explicitly defines jitter tolerance for receivers.
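The convolution statement can be illustrated numerically. The sketch below is not from the specification: it uses arbitrary example values, models Rj as a Gaussian and Dj with the common dual-Dirac model (two impulses at ±Dj/2), convolves the two densities, and checks that the result is still a valid probability density.

```python
import math

DX = 0.002                                   # sample spacing, in unit intervals
GRID = [i * DX for i in range(-250, 251)]    # -0.5 .. +0.5 UI

def gaussian_pdf(sigma):
    # Rj: zero-mean Gaussian probability density sampled on GRID
    return [math.exp(-x * x / (2 * sigma * sigma)) /
            (sigma * math.sqrt(2 * math.pi)) for x in GRID]

def dual_dirac_pdf(dj):
    # Dj: two impulses of weight 0.5 each, as samples of height 0.5/DX
    pdf = [0.0] * len(GRID)
    half = round((dj / 2) / DX)
    center = len(GRID) // 2
    pdf[center - half] = 0.5 / DX
    pdf[center + half] = 0.5 / DX
    return pdf

def convolve(a, b):
    # Discrete linear convolution scaled by DX, so densities stay normalized
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] += ai * bj * DX
    return out

# Tj density: Gaussian Rj (sigma = 0.03 UI) convolved with dual-Dirac Dj (0.2 UI)
tj = convolve(dual_dirac_pdf(0.2), gaussian_pdf(0.03))
print(sum(tj) * DX)   # ~1.0: the convolved density still integrates to one
```

The resulting density has two Gaussian lobes separated by Dj, which is why deterministic jitter eats directly into the eye width while random jitter spreads its tails.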
The PCIe 2.0 transmitter phase-locked loop (PLL) bandwidth and jitter peaking have to be tightly controlled, whereas PCIe 1.1 had sufficient margin to allow a wide (1.5-22 MHz) PLL bandwidth range. The key design factor on the Tx side is controlling the jitter and noise sources: the PLL and REFCLK have to be designed with very low jitter, and the power supply also has to be optimized for noise reduction, as the noise tolerance on the supply is around 30 percent less than it is for PCIe 1.1.
On the Rx side, the minimum receive eye voltage aperture, or VRX-EYE (Figure 2), defines the range over which a receiver must operate at all times. This range has been reduced to 120 mV (PCIe 2.0), from 175 mV (PCIe 1.1) (Table 1). Additionally, the Rx jitter compliance budget, which requires a low-latency clock recovery loop, has been reduced by around 50 percent from that of PCIe 1.1. These changes have tightened the Rx budget significantly. To offset them to a certain extent, the PCI Express Card Electromechanical (CEM) 2.0 specification has reduced the allowable impedance to 85 ohms, from 100 ohms (1-2 connector channels). All in all, board layout, material variances and inter-symbol interference (ISI) effects will play an important role in determining the system jitter for PCIe 2.0 designs.
In PCIe 1.1, there was no notification mechanism if the intended link width changed because of hardware-autonomous link retraining. Such a down-negotiated link can throttle bandwidth without software knowledge and adversely impact system performance. PCIe 2.0 addresses this issue by providing native support for generating an interrupt on a link width or speed change. The interrupt notifies PCIe-aware software whenever link bandwidth changes due to re-negotiation. This mechanism enables the software to take corrective action if certain lanes in a link fail, and the software’s ability to dynamically change the link speed can be used to maintain control over bandwidth allocation.
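Software sees these changes through the Link Control and Link Status registers of the PCI Express capability structure. The sketch below decodes a Link Status value; the bit positions follow the published PCIe register layout, but the raw register value is a made-up example, not taken from real hardware.

```python
# Bit fields in the PCIe Link Control / Link Status registers relevant
# to the PCIe 2.0 bandwidth-change notification mechanism.
LNKCTL_LBMIE = 0x0400   # Link Bandwidth Management Interrupt Enable
LNKCTL_LABIE = 0x0800   # Link Autonomous Bandwidth Interrupt Enable
LNKSTA_CLS   = 0x000F   # Current Link Speed (1 = 2.5 GT/s, 2 = 5.0 GT/s)
LNKSTA_NLW   = 0x03F0   # Negotiated Link Width (shift right by 4)
LNKSTA_LBMS  = 0x4000   # Link Bandwidth Management Status
LNKSTA_LABS  = 0x8000   # Link Autonomous Bandwidth Status

def decode_link_status(lnksta):
    """Return (speed, width, bandwidth-changed?) from a Link Status value."""
    speed = {1: "2.5 GT/s", 2: "5.0 GT/s"}.get(lnksta & LNKSTA_CLS, "unknown")
    width = (lnksta & LNKSTA_NLW) >> 4
    changed = bool(lnksta & (LNKSTA_LBMS | LNKSTA_LABS))
    return speed, width, changed

# Example: a x8 link running at 5.0 GT/s with a bandwidth change flagged
speed, width, changed = decode_link_status(0x4082)
print(speed, width, changed)   # 5.0 GT/s 8 True
```

Setting the two interrupt-enable bits in Link Control is what arms the notification described above; handling the interrupt then amounts to reading Link Status and clearing the status bits.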
Also added to PCIe 2.0 were access control services (ACS), which determine how a transaction layer packet (TLP) should be routed. ACS includes features such as source validation and peer-to-peer (P2P) control. In source validation, a downstream port checks that the requester ID in each TLP is valid for the devices below it before the TLP is forwarded toward the root complex (RC). Peer-to-peer controls determine whether to forward P2P request TLPs directly, block them, or redirect them to the RC for access validation. ACS functionality is reported and managed via ACS extended capability structures, and PCIe components are permitted to implement ACS in some, none, or all of their applicable functions.
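A port advertises which of these controls it implements through the ACS Capability register in its extended capability structure. The sketch below decodes that register; the bit positions follow the spec's layout, but the example capability value is invented for illustration.

```python
# ACS Capability register bits (ACS extended capability, ID 0x000D).
ACS_BITS = {
    0: "Source Validation",
    1: "Translation Blocking",
    2: "P2P Request Redirect",
    3: "P2P Completion Redirect",
    4: "Upstream Forwarding",
    5: "P2P Egress Control",
    6: "Direct Translated P2P",
}

def acs_features(cap_reg):
    """Return the list of ACS features a port reports as implemented."""
    return [name for bit, name in ACS_BITS.items() if cap_reg & (1 << bit)]

# Example: a downstream port implementing source validation and
# P2P request redirect (made-up capability value)
print(acs_features(0b0000101))   # ['Source Validation', 'P2P Request Redirect']
```

The matching ACS Control register uses the same bit positions, so enabling a supported feature is a matter of setting the corresponding control bit.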
Finally, a function level reset (FLR) mechanism has been introduced. FLR enables the software to stop and reset individual functions within an endpoint. Such function-level granularity guarantees that external I/O operations performed by the card are stopped and that the function’s hardware returns to its default state on an FLR.
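In register terms, a function advertises FLR support in its Device Capabilities register, and software triggers the reset by setting the initiate bit in Device Control. The sketch below follows that layout; the config-space read/write helpers and register offsets are hypothetical stand-ins, not a real driver API.

```python
# Function Level Reset sketch, per the PCIe 2.0 register layout:
# Device Capabilities bit 28 advertises FLR support; Device Control
# bit 15 initiates the reset.
DEVCAP_FLR = 1 << 28   # Function Level Reset capability
DEVCTL_FLR = 1 << 15   # Initiate Function Level Reset

def issue_flr(read_config, write_config, devcap_off, devctl_off):
    """Trigger an FLR if the function advertises support; return True if issued."""
    if not (read_config(devcap_off) & DEVCAP_FLR):
        return False
    ctl = read_config(devctl_off)
    write_config(devctl_off, ctl | DEVCTL_FLR)
    # Software must then wait (the spec allows up to 100 ms) before
    # re-accessing the function.
    return True

# Simulated config space: capability register at 0x64, control at 0x68
regs = {0x64: DEVCAP_FLR, 0x68: 0x0000}
issued = issue_flr(regs.get, regs.__setitem__, 0x64, 0x68)
print(issued, hex(regs[0x68]))   # True 0x8000
```

The capability check matters: writing the initiate bit on a function that does not advertise FLR support has no defined effect.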
Graphics, Storage Applications to Gain from PCIe 2.0
Among the applications most likely to benefit from the performance gain in PCIe 2.0, graphics cards will take advantage of its power-limit value, which has been redefined to accommodate devices that consume higher power. Other protocols such as the Serial ATA and Serial Attached SCSI standards are poised to move to 6 Gbits/s, from 3 Gbits/s, and in the future up to 12 Gbits/s. The primary advantage of PCIe 2.0 in these applications is that achieving similar performance requires half the number of lanes, at roughly half the serialization latency, compared with PCIe 1.1. This translates into less board real estate, fewer routing resources and smaller form factors. By extension, Ethernet controllers, InfiniBand and Fibre Channel could be serviced via faster system links.
The industry is gearing up and introducing PCIe 2.0-based systems in 2007. Intel has released its Stoakley platform featuring the Seaburg chipset supporting PCIe 2.0 for the workstation market. AMD is set to release a trio of chipsets supporting PCIe 2.0: the high-end RD790+, the mid-range RX740+ and the budget RS740+. And nVidia is addressing the PCIe 2.0 market with its MCP72 single-processor chipset for the AMD HT-3 / AM2+ architecture. Finally, I/O interconnect chip companies such as PLX Technology are expanding their PCIe 1.0 products, and developing PCIe 2.0 switching devices to meet ever-increasing demand for more ports and lanes.
In summary, one challenge designers could face in the deployment of PCIe 2.0 is the signal integrity of the links, but this can be addressed with careful design and layout methodology. Additionally, software will likely play a more significant role in bandwidth and functionality management, with tools such as dynamic link-speed control, ACS and FLR. Despite these challenges, what is certain is that PCIe 2.0 will usher in a new generation of applications and systems with dramatically increased performance and improved response times.