Is ATCA the conundrum that wouldn’t die? By that I mean not so much the technical understanding, which is quite complex in itself, as figuring out how and when it will grow into the projected multi-billion dollar market, who will actually benefit from that growth, and what will be needed for it all to come together. It appears that the world of embedded systems is bifurcating into one area dominated by small form-factor boards armed with powerful processors and serial connectivity, and another of what might be called “the last of the great backplanes”: ATCA and its offshoot, MicroTCA.
There has been a lot of controversy over whether ATCA is real, along with what I’ve called the “moving hockey stick” phenomenon: the predicted big market upswing into the multiple billions that seemed to recede ever further into the future, drawing VCs in its wake like rats behind the Pied Piper. There were arguments that the really big OEMs had no interest in an open backplane standard, that the spec was too complex, and that the supporting technologies were not really in place.
Many of these reservations were valid. Let’s face it: ATCA is a very involved, complex technological and business issue. But then so is rewiring the planet. Since the initial announcement of the specification, it has taken longer than expected for all the elements needed to make a market to come together. The primary factor is that in order to address the demands of IPTV, data, voice and mobile communications over IP, we have had to radically expand our idea of what constitutes a “platform.” Basically, a platform is a basis upon which customers can begin to build unique added value for their own customers. There was a time when a platform consisted of a microprocessor and an RTOS, and if you were lucky, a board support package.
In the world of digital telecommunications, a platform is a whole different deal: a much more complex mix of hardware and software, of compatibility and optimization, in a world where time-to-market is literally everything. An ATCA platform starts with hardware, boards in a chassis, and because the spec allows many options, that hardware mix must be verified as compatible within it. Then there is the operating system, usually carrier grade Linux, though it can also be a combination of Linux and an RTOS. On top of that sits the high-availability middleware, which is increasingly required to support the Service Availability Forum’s interface specifications so that customers can reliably develop applications on top of it. Then there is the system management layer, which is itself another mix of hardware and software.
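The layering just described can be pictured as a simple checklist: hardware, operating system, SA Forum-capable middleware, system management. The sketch below is purely illustrative; the class, field and check names are my own invention, not any vendor's API, and real platform validation involves far more than this.

```python
from dataclasses import dataclass

# Hypothetical model of the platform layers described above:
# hardware (boards in a chassis), operating system, HA middleware
# and the system management layer.
@dataclass
class AtcaPlatform:
    chassis: str
    boards: list                # must be a compatible mix within the ATCA spec
    operating_system: str       # e.g. carrier grade Linux, or Linux plus an RTOS
    ha_middleware: str
    saf_compliant: bool         # does the middleware support the SA Forum specs?
    system_management: str

    def missing_layers(self) -> list:
        """Return the names of any layers that are absent or unverified."""
        gaps = []
        if not self.boards:
            gaps.append("boards")
        if not self.saf_compliant:
            gaps.append("SA Forum-compliant middleware")
        if not self.system_management:
            gaps.append("system management")
        return gaps

platform = AtcaPlatform(
    chassis="14-slot ATCA shelf",
    boards=["processor blade", "switch blade"],
    operating_system="carrier grade Linux",
    ha_middleware="HA middleware stack",
    saf_compliant=True,
    system_management="shelf manager plus system manager software",
)
print(platform.missing_layers())  # an empty list means no obvious gaps
```

The point of the toy check is simply that every layer has to be present and verified before a customer can build on the platform, which is the integration burden the next paragraph describes.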
For customers to feel they can meet their demanding market windows, all these elements must be verified to work together, and the customer must be able to rely on a single vendor as the point of support. That vendor will have to supply reliable, proven platforms consisting of chassis, boards, power supplies, cooling, middleware and system management, and will have to do so for several mixes of options depending on the customer’s target market. Realizing this, it is no wonder that the market has taken far longer to take off than was at first optimistically predicted. That hockey stick now appears to be swinging up in the 2009 to 2012 time frame. No, we really mean it this time!
But it has been a long and involved process, and we’re just now getting there. People in this industry are an impatient bunch, so many of the disparaging remarks and articles can be understood. Still, there is a good deal of consolidation going on. Emerson has purchased the telecommunications unit of Motorola, and RadiSys the telecom board business of Intel. Middleware companies like GoAhead and OpenClovis are forging tight partnerships with hardware vendors. Continuous Computing is launching its FlexTCA line of top-to-bottom integrated platforms with hardware, middleware, protocol and management. And orders are starting to become reality.
Once that beachhead is achieved, we can probably expect even more activity than the biggest optimists have predicted, with applications we haven’t even thought of yet. The biggest challenge then will be establishing the bandwidth, not figuring out how to fill it.