A platform is a suite of hardware and software building blocks that can serve
as the basis for many different types of network elements. By beginning with the
mature designs of these building blocks and adding some application-specific
elements, the telecommunication equipment manufacturer (TEM) can construct an
entire portfolio of network elements at a fraction of the development cost of
the traditional development mode (where all elements of each system are custom
built from scratch). Some of these platform building blocks will undoubtedly be
purchased ready-made in the COTS marketplace. Other platform building blocks will
be designed and built using the TEM’s traditional development processes
and factories, but built to conform to the Advanced Telecommunications Computing
Architecture (ATCA) standard. The ability to freely mix and match COTS and TEM-produced building
blocks is a significant advantage.
The TEM’s customers, the network operators, also benefit from the platform-based
approach. If many or most of the elements that make up their networks all rely
on a common hardware base and a small core set of platform building blocks, they
can realize significant operational cost savings. For example, they would only
need to have a single, very small pool of spare parts to cover a large network
wire center. Their expense for training their maintenance personnel and operating
the network would be greatly reduced. Because of these benefits, it has been suggested
that the various tenders that the network operators will be fielding in the coming
years may actually express a strong preference for equipment based on ATCA.
Integration is an advantage of ATCA-based platforms that applies to both
the TEMs and network operators. In highly integrated systems, the functions that
were served by many different network element boxes are combined into a single,
multi-function box. For example, many of the access and edge functions currently
performed by elements like Digital Subscriber Line Access Multiplexer (DSLAMs),
Fiber to the Home Terminals (FttH), Cable Modem Termination Systems (CMTS), edge
switches and firewalls could all be combined into a single ATCA-based universal
edge element. The high performance, density and reliability inherent in the ATCA
standard greatly simplify the creation of this sort of highly integrated box.
The TEMs benefit from the development and manufacturing efficiencies of integration.
Network operators benefit because they have far fewer boxes to install, configure,
service and maintain.
ATCA is an ideal choice to serve as the standards base for universal platforms.
It has a number of attributes that make it ideally suited to the needs of telecommunications
network elements, including:
• Scalable capacity to 2.5 Tbit/s
• Scalable availability to 99.999%
• Large (but volumetrically efficient) board form factor: 355 mm x 280 mm x 30 mm
• Large cooling capacity: 200 W per board
• Multi-protocol support for interfaces up to 40 Gbit/s
• High levels of modularity and configurability
• The ability to host large pools of DSPs, NPs, processors and storage
• Convergence of telecom access, core, optical and datacenter functions
• Advanced software infrastructure providing APIs and OAM&P
• World-class element cost and operational expense
• A healthy, dynamic, multi-vendor, interoperable ecosystem
These attributes greatly enhance ATCA's ability to serve as the basis for a universal
telecommunications platform. Platforms with these attributes will be available
in 2003, with a design-in life through 2009.
To achieve the ATCA-based universal platform vision, several key building blocks
must be created. These building blocks are used again and again on many network
element designs (potentially with widely different functionalities). A surprisingly
small number of shelves, boards and mezzanine boards can accommodate a significant
portion of many different network element designs. With the addition of some
application-specific building blocks (for example, implementing application-specific
transport technologies or algorithms), a suite of ATCA-based hardware is created
that can be drawn from to assemble nearly any network element type in a major
TEM’s product portfolio.
ATCA shelves contain four important structures. First, they include the mechanical
means of rigidly holding the boards into the proper position, for example, mounting
flanges, subracks and cardguides. Second, they include the backplane that provides
high-speed serial electrical interconnect between the boards. Third, they contain
the cooling apparatus that forces a sufficient flow of air up through the boards
to remove up to 200 watts of heat from each board. Finally, they distribute the
power for the system. Shelves also often contain shelf management processors,
various power-conditioning modules and fan control.
Several variations of ATCA shelves will be common in platform applications.
Some shelves will need to fit in 19-inch racks due to the legacy environment
in which they will be installed, and those shelves are limited to 14 board positions.
Other installations can use the 600 mm wide ETSI frames, or 23-inch racks, and
these shelves can accept up to the ATCA maximum of 16 active boards. The 16-slot
shelves often have significant density and system cost advantages over 19-inch
systems, so they are becoming the preferred solution.
Interconnect topology represents another variation among ATCA shelves. In dual-star
topologies, fabric slots one and two form the hub of a dual-star network, with
the higher order slots (numbers 3-14 or 3-16) providing the nodes of the dual-star
network. The theoretical bandwidth of 14- and 16-slot ATCA dual-star systems
is 240 and 280 Gbit/s, respectively. Alternatively, the ATCA spec supports a very high
capacity full-mesh interconnect, where every board has a dedicated path to all
other boards, and there are no centralized fabric resources. Fourteen-slot full-mesh
backplanes have a theoretical bandwidth of 1.82 Tbit/s, while sixteen-slot full-mesh
systems support 2.4 Tbit/s. Full-mesh backplanes make the most sense for applications
where huge bandwidth, pay-as-you-grow scalability or higher fault tolerance
is required. Conversely, dual-star backplanes are the choice for more cost-sensitive
or lower-power systems.
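The bandwidth figures above follow directly from the per-channel rate. One accounting that reproduces the article's numbers, sketched below, assumes each active full-duplex channel contributes 2 x 10 Gbit/s and that only one star of a dual-star pair carries traffic while the other stands by in redundancy; that assumption is ours, not stated in the text:

```python
# Reproducing the article's theoretical backplane bandwidth figures.
# Assumption: each active full-duplex 10 Gbit/s channel counts for
# 20 Gbit/s (both directions); in the dual-star case only one star
# carries traffic while the second is a redundant standby.

CHANNEL_GBPS = 10  # one fabric channel: four lanes at 2.5-3.125 Gbit/s

def dual_star_gbps(slots):
    """One active hub serving (slots - 2) node boards."""
    nodes = slots - 2
    return nodes * 2 * CHANNEL_GBPS

def full_mesh_gbps(slots):
    """Every board has a dedicated link to every other board."""
    links = slots * (slots - 1) // 2
    return links * 2 * CHANNEL_GBPS

print(dual_star_gbps(14), dual_star_gbps(16))  # 240 280
print(full_mesh_gbps(14), full_mesh_gbps(16))  # 1820 2400
```

The full-mesh advantage is quadratic in slot count, which is why it dominates for bandwidth-hungry applications while dual-star remains cheaper for modest loads.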
In some cases, ATCA boards will come with specific functions (I/O, processing,
DSPs, etc.) built directly on the board. This often leads to an absolute cost
minimum for high-volume designs. However, there are significant advantages to
providing the system’s important functions in a more modular approach.
A carrier board holds several (nominally four) mezzanine boards to perform the
system functions and provides the basic services needed to interconnect these
mezzanine boards with the ATCA backplane. In this way, it is possible to mix
and match mezzanine functions to provide exactly the correct processing and
I/O features required by a particular application. As technologies change, there
is a good chance that the system can be updated by simply swapping out the mezzanine
boards, without the need to change the carrier boards or backplanes.
Two types of carrier boards are required. The first, called a mesh carrier
board, serves two purposes: it acts as the hub of a dual-star system, and it
can also be used in all slot positions of a full-mesh implementation.
The second carrier board design is called a node carrier board. It is used in
the 12 or 14 node slots of dual-star systems. Node carrier boards are similar
in design to, but simpler and less expensive than, mesh carrier boards. Their
backplane bandwidth is only 20 Gbit/s, compared with the 150 Gbit/s theoretically
available on the mesh carrier boards.
Figure 1 is a block diagram of a reference design for a mesh carrier board.
Its primary function is to adapt the interfaces of the four mezzanine boards
it carries to the ATCA backplane and mechanical specifications.
The backplane interface/switching fabric block connects the bandwidth of up
to fifteen 10 Gbit/s backplane fabric channels (implemented as four lanes at
2.5-3.125 Gbit/s each) to the local clients on the carrier board. The
switching capability can also direct packets from any of the fifteen backplane
channels to any other backplane channel. This capability is needed if the mesh
carrier board is used as a dual-star hub, and is also valuable in full-mesh
implementations to provide additional system throughput or fault tolerance.
The switch fabric drops local bandwidth through its sixteenth port to a network
processor complex. This may be a traditional network processor chipset, or an
FPGA providing programmable interconnect and bandwidth management. This block
performs protocol conversion, bandwidth management, queuing, QoS, security and
packet address translation functions.
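The switching behavior just described can be pictured as a sixteen-port forwarding model: frames arriving on any of the fifteen backplane channels are directed to another channel, or dropped to the local network-processor complex on the sixteenth port. The sketch below is purely illustrative; the table-based forwarding and all names are assumptions, not part of the ATCA specification:

```python
# Minimal model of the mesh carrier board's switching function:
# ports 0-14 are backplane fabric channels, port 15 drops traffic
# to the local network-processor complex. Illustrative sketch only.

LOCAL_NP_PORT = 15

class CarrierSwitch:
    def __init__(self):
        self.table = {}  # destination address -> egress port

    def learn(self, addr, port):
        self.table[addr] = port

    def forward(self, ingress_port, dest_addr):
        """Return the egress port for a frame; unknown destinations
        default to the local NP complex for further processing."""
        egress = self.table.get(dest_addr, LOCAL_NP_PORT)
        if egress == ingress_port:
            return None  # never reflect a frame back out its ingress
        return egress

sw = CarrierSwitch()
sw.learn("board-7", 6)           # hypothetical address on channel 6
print(sw.forward(2, "board-7"))  # 6: hub-style channel-to-channel switching
print(sw.forward(2, "unknown"))  # 15: dropped to the local NP complex
```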
The local CPU complex performs board-level control functions. It includes a
microprocessor, a large memory and support peripherals. It performs control
functions like initialization, bandwidth management, table updates and the service
and maintenance HMI. In some implementations, this CPU will also perform core
network control and call processing functions, while in others a higher performance
CPU will be installed in one or more mezzanine positions on the carrier board.
The power infrastructure block accepts the duplicated -48V power feeds from
the ATCA backplane and produces the lower voltages used by the various chips
on the carrier board and all the mezzanines. It must combine the two -48V feeds
into a single DC bus, and isolate the board power system from the backplane
power system. Power ORing diodes are often used here, but they can be problematic
in high-availability systems; a pair of load-sharing DC-DC converters performs
much better.
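A rough loss estimate shows one reason ORing diodes struggle at board power levels. The numbers below are illustrative assumptions (a fully loaded 200 W board at a nominal 48 V feed and a typical ~0.6 V Schottky forward drop), not values from the ATCA spec:

```python
# Rough estimate of ORing-diode losses on a dual -48 V feed.
# All numbers are illustrative assumptions.

BOARD_POWER_W = 200.0  # worst-case ATCA board load
FEED_VOLTS = 48.0      # nominal feed magnitude
DIODE_DROP_V = 0.6     # typical Schottky forward drop (assumed)

current_a = BOARD_POWER_W / FEED_VOLTS      # ~4.17 A drawn from the feed
diode_loss_w = current_a * DIODE_DROP_V     # heat dissipated in the diode

print(f"feed current ~{current_a:.2f} A, "
      f"diode dissipation ~{diode_loss_w:.2f} W")
```

Beyond the roughly 2.5 W of waste heat per diode under these assumptions, simple diode ORing cannot actively balance load between the two feeds, which is why load-sharing converters are the better fit for high-availability designs.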
The board management processor block is a very small CPU that accepts the two
shelf management buses from the backplane and performs elementary board control
functions like power control, inventory, keying and reset.
Synchronization of the board is accomplished in the clock logic block. It can
accept up to six backplane distributed clock signals, manipulate their frequencies
and drive the various clock nets on the carrier board. Optionally, the clock
block can also accept a timing reference signal from the board (perhaps derived
from a received transmission facility, or a high-quality board-mounted oscillator)
and drive it out onto the backplane. This permits the construction of a distributed
clocking infrastructure for the ATCA shelf.
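As a concrete example of the frequency manipulation the clock block performs, consider deriving a SONET-class 19.44 MHz clock from an 8 kHz backplane reference; the specific frequencies are a typical telecom example chosen by us, not values mandated by the text:

```python
# Illustrative clock-synthesis arithmetic for the clock logic block.
# The frequencies are a common telecom example (an assumption):
# multiplying an 8 kHz reference up to 19.44 MHz.

from fractions import Fraction

def pll_ratio(f_out_hz, f_ref_hz):
    """Exact multiply/divide ratio a PLL would need."""
    return Fraction(f_out_hz, f_ref_hz)

ratio = pll_ratio(19_440_000, 8_000)
print(ratio)  # 2430: multiply the 8 kHz reference by 2430
```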
Finally, the carrier board has four mezzanine positions that are supplied with
interconnect bandwidth, control signals and power. These mezzanine boards are
where the real I/O and processing functions of the platform take place.
The node carrier board's architecture is nearly identical to that of the mesh
carrier board. The major difference is that instead of the fifteen backplane
interface links and the complex switching fabric, only two links (one each to
the hubs of the two stars) need to be terminated. The other blocks are the same
as the mesh carrier board.
Using the ATCA modular platform approach, nearly all of the important system
functions are provided by highly programmable mezzanine boards that are plugged
into the two types of carrier boards described above. Some systems will have
nearly all of the 64 maximum mezzanine slots on an ATCA shelf filled with the
same types of mezzanine boards, while other systems will contain a rich mixture
of different mezzanine types.
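The 64-slot maximum follows from the shelf geometry: 16 board slots, each carrier holding four mezzanine positions. A quick sketch, in which the example mezzanine mix is purely hypothetical:

```python
# Mezzanine capacity of a full 16-slot ATCA shelf: each carrier
# board holds four mezzanine positions, giving 16 * 4 = 64 slots.

SLOTS = 16
MEZZ_PER_CARRIER = 4

max_mezzanines = SLOTS * MEZZ_PER_CARRIER
print(max_mezzanines)  # 64

# A hypothetical mixed population for a media-processing element:
config = {"CPU": 8, "DSP": 40, "NP": 12, "storage": 4}
assert sum(config.values()) <= max_mezzanines
```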
Initially, the mezzanines will use the PMC spec extensively. Once the emerging
AdvancedMC specification is ratified by PICMG, it is anticipated that many of
the mezzanine functions will migrate to that hot-swappable, much higher
performance standard. Figure 2 contains example block diagrams of four of these
mezzanine board types.
The CPU mezzanine contains a carrier board interface, one or more high-performance
microprocessors, a large bank of DRAM and a peripheral chipset. It may be based
on Pentium, PowerPC, SPARC or other CPU architectures. The primary function
of the CPU mezzanine is as a compute server, performing call processing, network
control or web server functions.
A pool of perhaps a dozen DSP chips can be configured on the DSP mezzanine card.
They are tied together with an aggregator device that distributes the bandwidth
from the carrier card to each DSP chip in the farm. The aggregation function
may also perform protocol conversions, for example, converting packet-based
voice traffic to TDM timeslots. The DSP mezzanine is used for signal processing
tasks like voice processing, echo cancellation and video compression.
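The aggregator's fan-out can be pictured as a simple distribution of channels across the DSP farm. In the sketch below, the farm size of twelve (the "perhaps a dozen" above) and the round-robin policy are our assumptions:

```python
# Sketch of the DSP-mezzanine aggregation function: distribute
# incoming voice channels round-robin across a farm of DSP chips.
# Twelve DSPs and the assignment policy are assumptions.

NUM_DSPS = 12

def assign_channels(num_channels):
    """Map channel index -> DSP index, round-robin."""
    return {ch: ch % NUM_DSPS for ch in range(num_channels)}

assignment = assign_channels(30)
print(assignment[0], assignment[11], assignment[12])  # 0 11 0
```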
Although the carrier board contains some network processing capabilities, for
many platform applications supplemental network processing is required in conjunction
with a transmission facility interface. The mezzanine-mounted NPs support higher
bandwidths, more complex protocols and deeper packet inspection, and can
convert between different protocols (like IP to ATM).
The mezzanine holds a network processor chipset, a large buffer RAM, a table
RAM and a facility interface (often optical). Special variations of the NP mezzanine
may add CAM chips, network search engines or encryption engines to supplement
the basic functions of the NP. The network processor mezzanine performs packet
processing functions like routing, protocol conversion, Quality of Service,
queuing and address translation.
All the processing and I/O provided by the above building blocks must be complemented
with a high-performance storage capability. This storage is hierarchical; the
first level consists of a large (multi-gigabyte) DRAM array, acting as a cache.
The second level is an interface to an array of disk drives, typically using
redundant array of inexpensive disks (RAID) techniques. The storage mezzanine
consists of a large DRAM bank and control logic, as well as an interface to
an external disk farm. It is used for storage-intensive applications like e-commerce,
voicemail, video servers and web caches.
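The two-level hierarchy described above behaves like a DRAM cache in front of a disk array. A minimal sketch, in which the capacities and the LRU eviction policy are our assumptions:

```python
# Minimal model of the storage mezzanine's hierarchy: a DRAM-backed
# LRU cache in front of a slower disk array. Sizes and the eviction
# policy (LRU) are illustrative assumptions.

from collections import OrderedDict

class StorageHierarchy:
    def __init__(self, cache_blocks, disk):
        self.cache = OrderedDict()    # block id -> data (DRAM level)
        self.cache_blocks = cache_blocks
        self.disk = disk              # block id -> data (RAID level)
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.disk[block]             # fetch from the disk array
        self.cache[block] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least recently used
        return data

disk = {n: f"block-{n}" for n in range(100)}
store = StorageHierarchy(cache_blocks=2, disk=disk)
store.read(1); store.read(2); store.read(1); store.read(3); store.read(2)
print(store.hits, store.misses)  # 1 4
```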
Some algorithms of interest are not amenable to running on CPUs, DSPs or NPs,
and these are best implemented as custom hardware designs loaded onto FPGA mezzanines.
The FPGA mezzanine consists of a backplane interface along with approximately
six FPGA chips (providing millions of user programmable gates, and some optional
RAM banks). These mezzanines are ideal for running applications like special
protocol processors, encryption and video processing logic.
Software Platform Building Blocks
Software is at least as important a platform component as the hardware discussed
above. Benefits exactly analogous to those of the hardware exist if standards-based,
multiple-source software infrastructures are chosen as the basis of the platform's
software. The software infrastructure requires several layers. Device drivers
for each of the hardware building blocks constitute the lowest layer. Next,
an operating system is required. Carrier Grade Linux is often cited as an appropriate
choice for the OS.
A layer of middleware is above the OS. Middleware provides the common API that
isolates the application code from the often-changing details of the hardware,
and provides important services like fault tolerance and resource management
in consistent ways. The Service Availability Forum’s standard middleware
is a reasonable choice here. Finally, the application program runs on top of
the entire stack, providing the mainline functions of the application. By standardizing
the lower layers, development expense can be greatly reduced, time-to-market
can be improved and the overall customer-perceived desirability of the platform
is enhanced.
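The layering described above can be sketched as a call path from application code down through middleware to the OS and drivers. All class and method names below are hypothetical, chosen only to show how the middleware API isolates the application from hardware detail:

```python
# Sketch of the platform software stack's layering. The application
# calls a middleware API, which hides hardware detail behind common
# services. All names here are hypothetical.

class DeviceDriver:                 # lowest layer: per-building-block drivers
    def read_status(self):
        return "board OK"

class OperatingSystem:              # e.g. Carrier Grade Linux
    def __init__(self):
        self.driver = DeviceDriver()

class Middleware:                   # e.g. SA Forum-style services
    """Common API isolating applications from hardware details."""
    def __init__(self, os):
        self.os = os
    def health_check(self):
        return self.os.driver.read_status()

class Application:                  # mainline network-element logic
    def __init__(self, middleware):
        self.mw = middleware
    def run(self):
        return self.mw.health_check()

app = Application(Middleware(OperatingSystem()))
print(app.run())  # board OK
```

Because the application touches only the middleware API, the hardware underneath can change (a mezzanine swap, for instance) without disturbing the application layer, which is the economic point of the standardized stack.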
Using the ATCA platform building blocks, a TEM can construct a large percentage
of the products in a full service portfolio. The platform approach saves development
effort and provides operational expense benefits. By basing the platform building
blocks on an industry standard like ATCA, the TEM is able to optimize a sourcing
strategy by selectively buying building blocks from leading vendors or building
them in house.
Murray Hill, NJ.