From Microcontroller to System-on-Chip: A Challenge to the Full-Custom ASIC?

By Tom Williams, Editor-in-Chief, RTC Magazine – September 2014

Consider the humble microcontroller. In the vast majority of applications—keyboards, toasters, coffee makers and more—it goes almost completely unnoticed. Yet since its appearance in the mid-1970s, and mostly in its 8-bit form, it continues to operate in the background of everything we do in our daily lives.

However, in order to adapt this little device to its huge world of functionality, it has been necessary to modify the chip design around the basic MCU with a bewildering variety of on-chip peripherals, I/O and memory options. Perhaps the best known example of this is the Intel 8051, whose variants fill countless catalog pages and for which compatible versions have been developed by over 20 vendors. Each of these also has catalog pages of variants. Now, of course, the core is also available as IP for use in SoCs and FPGAs as well. 8-bit microcontrollers are even showing up buried deep in powerful, multicore CPUs and SoCs to handle such mundane but vital tasks as on-chip power management.
This last little item leads us to the present, which is being driven by Moore’s Law and integration. There are, of course, much more complicated things for microcontrollers to do than run toasters.
In fact, the concept of a microcontroller—a small processor to attend to detailed, limited control functions—has blossomed into 32- and 64-bit multicore processors; multicore processors with different CPU, GPU or DSP architectures on the same device; multicore CPUs with FPGA fabrics on the same die; and the addition of huge numbers of peripherals, memory and I/O functions on-chip. This scale of integration blurs the distinction between the microcontroller and the SoC; for systems that must include audio and graphics, it pulls the traditional CPU with companion processors into the world of the MCU and calls into question the traditional role of the ASIC.

Take a look at the block diagrams for devices such as the PIC32MZ EC family from Microchip, the OMAP 5 family from Texas Instruments, the Atom-based Bay Trail from Intel, the NVIDIA Tegra K1, the Cyclone V from Altera and the Zynq 7000 family from Xilinx, to name a few, and be astonished. We have moved beyond the mere multicore processor to the heterogeneous SoC with a wealth of other on-chip devices, processor cores, memory options and internal buses. Many of these are product families, meaning that even at this level of integration, there are variants designed to address different market needs. The big advantage is that these devices are mass-produced and readily available at reasonable and predictable costs. The developer planning a complex and powerful product will have to weigh the cost and time-to-market implications of such devices carefully.

Even if such a device is not totally “custom,” in the sense that some on-chip peripherals may go unused in a particular application, it may still be advantageous to use it once cost and time-to-market considerations are balanced against power and size requirements. One might initially think, for example, that given their size, power requirements and volume, smartphones would be a natural home for full-custom ASICs. However, a large number of smartphones from different manufacturers are using the dual ARM Cortex-A9-based OMAP4430 and 4460 devices from TI.

Not surprisingly, the big hurdle for heterogeneous SoCs, as well as for ASICs, is software development. Diverse processors on a chip require different versions of operating systems, different compilers and different development tools.

In addition to programming the different cores, such a device must also be configured for such things as defining shared memory, shared peripherals, priorities and the “pecking order” of the on-chip processor cores. At this point, the vendors of this generation of SoCs are well aware that if they are going to offer products with a ton of peripherals, they also need to supply drivers for them so that the customer can get up and running without having to worry about low-level details.
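The details differ from one vendor’s toolchain to the next, but the flavor of that configuration work can be suggested with a short sketch. The C fragment below is purely illustrative and not any vendor’s actual API: it imagines the sort of fixed-layout, shared-memory mailbox that a main application core and a small housekeeping core might agree on, with the “pecking order” boiled down to a single owner flag. On real silicon the structure would be pinned to an on-chip SRAM address known to both cores’ linker scripts; here it is an ordinary C object so the handshake can be compiled and stepped through on a host.

```c
/* Hypothetical shared-memory mailbox between two on-chip cores.
 * Purely illustrative: on real hardware the struct would live at a
 * fixed on-chip SRAM address visible to both cores; here it is an
 * ordinary object so the handshake compiles and runs on a host. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    volatile uint32_t owner;    /* 0 = application core, 1 = control core */
    volatile uint32_t command;  /* request posted by the application core */
    volatile uint32_t status;   /* reply posted by the control core       */
} mailbox_t;

static mailbox_t mbox;          /* on real silicon: placed by the linker */

/* Control core side: service one request if the mailbox is ours. */
static void control_core_poll(void)
{
    if (mbox.owner != 1)
        return;
    mbox.status = mbox.command + 1;  /* placeholder for real housekeeping work */
    mbox.owner = 0;                  /* hand the mailbox back */
}

/* Application core side: post a request and wait for the reply. */
static uint32_t app_core_request(uint32_t cmd)
{
    mbox.command = cmd;
    mbox.owner = 1;                  /* hand the mailbox to the control core */
    while (mbox.owner != 0)          /* the cores really run in parallel;     */
        control_core_poll();         /* here we simply call the other side    */
    return mbox.status;
}

int main(void)
{
    printf("reply = %" PRIu32 "\n", app_core_request(41));  /* prints 42 */
    return 0;
}
```

Even this toy version hints at why drivers alone are not enough: every such agreement between cores has to be reflected in memory maps, interrupt priorities and boot order before the first line of application code can run.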

Software also drives the ASIC developer, but in a different way. Since the hardware device is being designed for a specific application, the idea is not to write a bunch of drivers, but to move as much of the software functionality as possible into hardware. That means starting with a software definition to begin the hardware design, and quite a bit of customization of the application and operating system for optimal performance. The expected time to accomplish this is currently about 48 months. It would appear that if heterogeneous SoCs can get over their software hurdles with innovations in the tool arena, they could achieve a considerable leap over the conventional ASIC.