INDUSTRY WATCH

Advanced Platform Management

LAN-Attached TCA Management Controllers: How to Build and Use Them

Intelligent platform management controllers are indispensable for advanced telecommunication systems. Here is a guide to the possibilities and options for effective platform management.

MARK OVERGAARD, PIGEON POINT SYSTEMS


AdvancedTCA (ATCA) and MicroTCA (μTCA) are being aggressively deployed, creating opportunities for enhancements to the hardware platform management infrastructure that has been such a vital part of the success of these platform architectures. The local management controllers (referenced generically here as Intelligent Platform Management Controllers, or IPMCs) on Telecommunications Computing Architecture (TCA) boards and modules all have a mandatory connection via the Intelligent Platform Management Bus (IPMB) to upper-level management layers. IPMB links use the ubiquitous and low-cost, but not high-performance, Inter-Integrated Circuit (I2C) bus. It is now clear that a supplementary connection from such IPMCs to an in-shelf LAN can be very valuable, especially for the more sophisticated boards and modules.

The article entitled Using I2C for “Behind-the-Scenes” Management, published in the June issue of RTC, introduces the role of IPMB in TCA management frameworks and focuses on using I2C for non-IPMB purposes in the management of TCA shelves. Figure 1 shows the management framework for both ATCA and μTCA and two promising applications for LAN-attached IPMCs.

Two Key Applications

One of the applications concerns upgrades of the firmware and other configurable elements of a TCA board or module. PICMG HPM.1, the IPM Controller Firmware Upgrade specification, provides a common framework for doing such upgrades, even in shelves that integrate independently implemented components. (See HPM.1 Spec Defines Interoperable Firmware Upgrade for PICMG Management Controllers, published in the August 2007 issue of RTC, for an introduction to HPM.1.) HPM.1-compliant IPMCs are required to support upgrades via IPMB, and the typical size of an instance of management controller firmware, a few hundred kilobytes, is a good fit for this approach.

As Figure 1 shows, however, HPM.1 also allows up to seven additional configurable components beyond the management controller firmware to be upgraded, including one or more programmable logic devices (PLDs) such as field-programmable gate arrays (FPGAs), where the upgrade image size may be an order of magnitude larger. With such upgrade image sizes, which could apply for controller type [B] in the figure, the higher bandwidth of a LAN transport is very attractive and possibly critical.
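To see why the transport matters at these sizes, a rough back-of-the-envelope estimate helps. The throughput figures below are purely illustrative assumptions: IPMB runs at 100 kbit/s and HPM.1 transfers images in small IPMI message blocks, so effective throughput is often only a few kilobytes per second, while even a modest share of a 100 Mbit/s in-shelf LAN moves megabytes per second.

```python
# Rough, illustrative comparison of HPM.1 upgrade transfer times over IPMB
# versus an in-shelf LAN. The effective throughput figures are assumptions
# for the sake of the estimate, not measurements of any particular IPMC.

def transfer_time(image_bytes, effective_bytes_per_sec):
    """Transfer time in seconds at a given effective throughput."""
    return image_bytes / effective_bytes_per_sec

IPMB_EFFECTIVE = 3 * 1024          # ~3 KB/s: 100 kbit/s I2C plus small HPM.1 upload blocks
LAN_EFFECTIVE = 2 * 1024 * 1024    # ~2 MB/s: conservative share of a 100 Mbit/s LAN

images = {
    "IPMC firmware (300 KB)": 300 * 1024,
    "FPGA bitstream (4 MB)": 4 * 1024 * 1024,
}

for name, size in images.items():
    print(f"{name}: IPMB ~{transfer_time(size, IPMB_EFFECTIVE) / 60:.1f} min, "
          f"LAN ~{transfer_time(size, LAN_EFFECTIVE):.1f} s")
```

Under these assumptions, the management controller firmware transfers over IPMB in a couple of minutes, which is tolerable, while the FPGA image would take over twenty minutes via IPMB but only a few seconds over the LAN.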

Another application for LAN-attached IPMCs is accessing serial consoles via the in-shelf LAN that is usually present, rather than running separate serial cables to each of possibly hundreds or even thousands of console ports in a large system. The Intelligent Platform Management Interface (IPMI) specification, which provides a foundation for TCA's management framework, defines a Serial Over LAN (SOL) architecture in which a LAN-attached IPMC can communicate with an SOL client somewhere on the LAN. The SOL client uses that LAN as the transport for console traffic involving one or more console ports associated with the IPMC. As shown in Figure 1, those console ports can serve the payload processor(s) that perform the main functions of a board, or provide a console interface to the management controller itself (such as the Carrier IPMC for module type [C]).
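As a concrete illustration, the SOL client can simply be a standard IPMI utility pointed at the IPMC's LAN address. The sketch below invokes the open source ipmitool utility (discussed again later in this article); the address and credentials are placeholders, not values from the article.

```python
# Minimal sketch of launching an SOL session from a management workstation,
# with ipmitool acting as the SOL client. The IPMC address and credentials
# below are hypothetical placeholders.
import subprocess

IPMC_ADDR = "192.168.1.42"         # hypothetical in-shelf LAN address of the IPMC
USER, PASSWORD = "admin", "admin"  # hypothetical IPMI credentials

# "sol activate" attaches the local terminal to a console port behind the IPMC.
subprocess.run(
    ["ipmitool", "-I", "lanplus",
     "-H", IPMC_ADDR, "-U", USER, "-P", PASSWORD,
     "sol", "activate"],
    check=True,
)
```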

Payload processor console ports can, for example, allow monitoring of the boot phase of the payload processor during development, diagnosis, or debugging activities. Similarly, IPMC console port access can aid in monitoring the local activities of an IPMC, which can be especially important for a Carrier IPMC that is interacting with AdvancedMC modules installed on its board. In either case, using an in-shelf LAN for this traffic (one is typically already present; in ATCA shelves, for example, the mandatory Base Interface uses Ethernet) can be hugely preferable, from a logistics and operational expense point of view, to connecting individual serial console cables to each console port. But how can a single in-shelf LAN be shared between payload and IPMC traffic?

Sideband Interfaces Enable Shared LAN Attachments

One key way to accomplish such sharing, as shown in Figure 2, is to choose network controllers (NCs) that have a sideband interface for management traffic in addition to the primary interface used by the payload processor. An NC with a sideband interface routes management traffic from the LAN to the IPMC via the sideband and forwards LAN-addressed management traffic from the sideband to the LAN. Server-oriented NCs have long supported sideband interfaces. Until recently, these interfaces tended to be NC vendor- and device-specific, though often based on the SMBus variant of I2C. As a result of the sideband interface differences, distinct firmware and hardware implementations have been needed for different NC vendors and even for different devices from a single vendor.
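Conceptually, the routing decision is the same regardless of the sideband technology: inbound frames that match filters configured for the management controller go to the sideband, and everything else goes to the payload. A minimal sketch of that decision follows, assuming (for illustration only) filtering on a dedicated management MAC address and on the standard RMCP/RMCP+ UDP port 623 used for IPMI over LAN.

```python
# Conceptual sketch of how an NC with a sideband interface might steer inbound
# LAN frames. Real NCs implement such filtering in hardware/firmware; the MAC
# address here is a hypothetical placeholder.

MGMT_MAC = "00:11:22:33:44:55"   # hypothetical MAC address assigned to the IPMC
RMCP_PORT = 623                  # standard UDP port for RMCP/RMCP+ (IPMI over LAN)

def route_frame(dst_mac, udp_dst_port=None):
    """Return 'sideband' for management traffic, 'payload' for everything else."""
    if dst_mac.lower() == MGMT_MAC or udp_dst_port == RMCP_PORT:
        return "sideband"   # handed to the IPMC via SMBus or NC-SI
    return "payload"        # delivered to the payload processor as usual

print(route_frame("00:11:22:33:44:55"))       # -> sideband
print(route_frame("aa:bb:cc:dd:ee:ff", 22))   # -> payload
```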

The recently adopted Network Controller Sideband Interface (NC-SI) specification defines a sophisticated common approach that is already implemented in multiple server-oriented NCs from several companies. The Distributed Management Task Force (DMTF, www.dmtf.org) developed and maintains this specification. The NC-SI physical transport is based on the Reduced Media Independent Interface (RMII), which is otherwise used by Ethernet Media Access Controllers (MACs) to communicate with their corresponding PHYs. A conventional RMII implementation (in management controller silicon, for example) typically works for NC-SI, though the specification introduces some variations on RMII, especially regarding advanced configurations.
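On the sideband link itself, the IPMC sees two kinds of frames: ordinary pass-through Ethernet traffic and NC-SI control packets, which the specification identifies by their own EtherType (0x88F8). A minimal classification sketch, with the control packet header fields deliberately left out:

```python
# Sketch of separating NC-SI control packets from pass-through Ethernet frames
# on the sideband RMII link, based on the NC-SI EtherType (0x88F8).
import struct

NCSI_ETHERTYPE = 0x88F8

def classify_sideband_frame(frame: bytes) -> str:
    # Bytes 12-13 of an untagged Ethernet frame carry the EtherType.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    return "ncsi-control" if ethertype == NCSI_ETHERTYPE else "pass-through"

# Example: a frame whose EtherType is 0x88F8 is an NC-SI control packet.
frame = bytes(12) + struct.pack("!H", NCSI_ETHERTYPE) + bytes(46)
print(classify_sideband_frame(frame))   # -> ncsi-control
```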

Realizing LAN-Attached IPMCs

Figure 3 shows two ways to build a LAN-attached IPMC, both based on Pigeon Point Board Management Reference products for the Actel Fusion mixed-signal FPGA. Figure 3(a) shows an SMBus sideband interface, connected to a specific Intel NC, such as an 82575 (which implements dual Gigabit Ethernet ports, though the figure shows only one). An SMBus port on the IPMC requires minimal resources, so either of two Fusion devices can be used: the P1AFS600 or the P1AFS1500. These devices are distinguished by the size of their programmable FPGA fabric. The P1 prefix in the part number indicates that the device is enabled for a soft ARM Cortex-M1 core, which runs the Pigeon Point BMR IPMC firmware.

Figure 3(b) shows an NC-SI implementation, where the NC can be any NC-SI-compliant device; the Intel 82575, for instance, implements both NC-SI and SMBus as sideband alternatives. On the Fusion-based IPMC side, adding a Core10/100 Ethernet MAC to the FPGA provides the necessary RMII port. The larger P1AFS1500 Fusion device is required because of the FPGA fabric resources needed by the Core10/100 component.

In TCA, the management controller is powered unconditionally and before the payload. Therefore, SOL can provide access to the payload console interface even before the payload is powered and can enable console interaction with the payload operating system from the beginning of the boot process, which is often critical to debug/diagnosis efforts.

In addition, any PLD updates managed by the IPMC via HPM.1 can be applied while the payload is not powered. Such PLDs may well implement payload-critical logic, meaning that updates with a powered payload could be disruptive.

SOL visibility for an IPMC serial console can significantly aid diagnosis of tough problems during qualification or in the field. Serial console cabling is typically not configured in such contexts. Some tough problems may require enabling special debug tracking output. Using SOL for such output allows it to be collected and stored remotely for analysis.

Implementing Direct LAN Attachment in an IPMC

In some boards or modules, the payload does not need a LAN interface, but the board can still benefit from an interface via the IPMC. For instance, the payload may consist of one or more large PLDs with large upgrade images that are much more feasible to deliver via LAN than via IPMB. Figure 4 shows an example implementation of this configuration, again based on the Actel Fusion P1AFS mixed-signal FPGAs.

Here, a simple RMII-capable Ethernet PHY replaces the NC-SI-capable NC. The NC-SI specification does not require interoperability with generic RMII PHYs, but the Core10/100 MAC block supports them. Annex B of the NC-SI specification lists the differences in RMII as used for NC-SI. Those differences include a) no requirement for 10 Mbit/s support and b) no requirement for 5V tolerance.

NC-SI supports a range of configurations for one or more NCs connected to an IPMC. In addition to the simplest configuration with one single-port NC, NC-SI allows multi-port NCs and up to four autonomous NCs, all connected to a single IPMC. Figure 5 shows some examples. The NC-SI specification defines extensions for using RMII as a multi-drop bus, together with the corresponding protocol provisions between IPMC and NC.
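In these multi-NC configurations, every command on the shared RMII bus carries a channel ID that identifies both the target package (the NC) and the channel (the port) within it. A small sketch of that addressing follows, assuming the layout in the NC-SI specification where the upper three bits of the channel ID select the package and the lower five bits select the internal channel.

```python
# Sketch of NC-SI channel addressing on a shared (multi-drop) RMII bus.
# Assumes the channel ID layout from the NC-SI specification: bits 7..5 hold
# the package ID (which NC), bits 4..0 the internal channel ID (which port).

def make_channel_id(package_id, internal_channel):
    assert 0 <= package_id <= 7 and 0 <= internal_channel <= 31
    return (package_id << 5) | internal_channel

def split_channel_id(channel_id):
    return (channel_id >> 5) & 0x7, channel_id & 0x1F

# Example: port 1 of the second NC (package 1) in a dual-NC configuration.
cid = make_channel_id(package_id=1, internal_channel=1)
print(hex(cid), split_channel_id(cid))   # -> 0x21 (1, 1)
```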

Building LAN-Attached IPMCs

One way to build an IPMC that includes LAN-attached support, including either an NC-SI or an SMBus sideband interface, is to use Pigeon Point’s Board Management Reference (BMR) solutions based on the Actel Fusion mixed-signal FPGA. These solutions provide a board schematic, an FPGA design, and complete firmware for either an IPMC or a Carrier IPMC. They also include a bench top implementation of the reference design, which enables quick ramp-up on the TCA hardware platform management framework, including the use of a direct Ethernet connection to do Serial Over LAN and HPM.1 firmware upgrades via LAN.

A user of the BMR bench top board could experiment with LAN-attached SOL and HPM.1 firmware upgrades by attaching a logical “payload processor” to the SOL UART interface on the board, and a computer running an SOL client application and an HPM.1 upgrade agent to the physical 10/100 Ethernet port on the board. In fact, the open source application ipmitool (http://ipmitool.sourceforge.net/) can handle both the HPM.1 upgrade agent and SOL client roles.
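For the HPM.1 upgrade agent role, the same utility can first validate an upgrade image against the target IPMC and then transfer and activate it over the LAN. In the sketch below, the address, credentials and image file name are placeholders.

```python
# Sketch of using ipmitool as an HPM.1 upgrade agent over the bench top
# board's Ethernet port. Address, credentials and image name are placeholders.
import subprocess

BASE = ["ipmitool", "-I", "lanplus",
        "-H", "192.168.1.42", "-U", "admin", "-P", "admin"]

# Verify that the HPM.1 image is compatible with the target IPMC...
subprocess.run(BASE + ["hpm", "check", "bmr-upgrade.img"], check=True)

# ...then transfer it over the LAN and activate the new firmware.
subprocess.run(BASE + ["hpm", "upgrade", "bmr-upgrade.img", "activate"], check=True)
```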

Pigeon Point Systems
Scotts Valley, CA
831-438-1565
www.pigeonpoint.com