It’s Not Your Father’s Eight-Bit Microcontroller Any More

BY TOM WILLIAMS, EDITOR-IN-CHIEF
December 2013

Those of us who came into what is now called the embedded computer industry oh so many years ago became accustomed to thinking of an embedded controller as a relatively small device with a single, dedicated purpose: little modules buried in machines, making them hum. How rapidly and profoundly that has changed. The urge arose to embed machine intelligence into practically everything, large, small and in between, and it is still happening. I am reminded of the old Monty Python skit about the Royal Society for Putting Things on Top of Other Things, in which the chairman, puffing unctuously on his pipe, notes that on the way to the meeting he observed quite a number of things that were not on top of other things, and that this was not acceptable.

Well, today we literally have vacuum cleaners that run Linux, but there are still quite a number of things . . . They are not little things either, and the intelligence that resides in them is no longer dedicated to a single purpose. We have entered the age of high-performance embedded computing. That all sounds very nice, but what is it really? First of all, it overturns the old assumption that a given processor or system could offer more computing power than anyone would ever really want to embed somewhere.

There is no end of large, complex, connected and vital systems, machines, vehicles, industrial operations, medical devices and more that can benefit from ever more embedded computing power. Serving those needs is limited only by the ability to cram ever more performance into ever smaller, lower-power packages and to harness it with software applications and connectivity. This is the result of the ongoing march of Moore’s Law: the inexorable increase in the density of integration on silicon. The embedded world is moving from systems based on modules or boards to systems based on systems-on-chip. Of course, those systems-on-chip can also be placed on boards and modules that can in turn be integrated into even larger systems. And so it goes.

At the same time, what we once tended to think of at the mention of the term “SoC” is undergoing a change as well. SoC once seemed at least partially synonymous with ASIC: a highly integrated silicon device that included a CPU core and a selected mix of on-chip peripherals and memory, could execute code from off- or on-chip memory, operate its own I/O and pretty much act as a stand-alone embedded computer. These devices, however, were usually not general-purpose; limited silicon area and a specific selection of on-chip functionality meant they were targeted at a specific application, hence the close association with ASIC. Both SoCs and ASICs required high volumes to justify the expense of development, the cost of fabrication and the risk of errors that could force a re-spin of the design. Now, thanks to Moore’s Law, that is changing too.

It is now possible to integrate all kinds of things onto a single silicon die, many of which would have been difficult to imagine fitting onto a reasonably sized circuit board not so long ago. So now we see devices appearing with a 32-bit core, cache and flash memory, on-chip DRAM, a host of peripherals, network controllers, high-speed interfaces such as USB, large amounts of digital I/O, graphics accelerators, A/D converters and more. These are connected by high-speed internal buses, and sometimes there are even small embedded microcontrollers whose sole purpose is to manage the power consumption of so many on-chip functions. And these devices are certainly not ASICs; they are commercial, mass-produced parts offered for general sale, albeit usually in families with a choice of variants.
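What “operating its own I/O” looks like from software has not changed much even as the scale has: on-chip peripherals are still reached through memory-mapped registers. The following is a minimal C sketch of a polled read from an on-chip A/D converter; the register addresses, bit positions and 12-bit result width are invented for illustration only, since every real part’s data sheet defines its own map.

```c
#include <stdint.h>

/* Hypothetical register map for an on-chip A/D converter.
 * Addresses and bit positions are invented for illustration;
 * a real SoC's data sheet defines the actual layout.
 */
#define ADC_BASE        0x40012000u
#define ADC_CTRL        (*(volatile uint32_t *)(ADC_BASE + 0x00u))
#define ADC_STATUS      (*(volatile uint32_t *)(ADC_BASE + 0x04u))
#define ADC_DATA        (*(volatile uint32_t *)(ADC_BASE + 0x08u))

#define ADC_CTRL_START  (1u << 0)   /* start a single conversion */
#define ADC_STATUS_DONE (1u << 0)   /* conversion-complete flag  */

/* Start one conversion and busy-wait for the result. */
static uint16_t adc_read_once(void)
{
    ADC_CTRL = ADC_CTRL_START;               /* kick off a conversion */
    while ((ADC_STATUS & ADC_STATUS_DONE) == 0u) {
        /* spin until the peripheral signals completion */
    }
    return (uint16_t)(ADC_DATA & 0x0FFFu);   /* assume a 12-bit result */
}
```

The vendor driver libraries discussed below wrap exactly this kind of register-level access, one peripheral at a time, across the whole die.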

Now the OEM gets an interesting choice. Where once a single-chip solution might have seemed out of the question due to the volume/cost issue, one of these devices can look very attractive from a cost, size and power perspective even if the target design leaves 30 percent of the on-chip functions unused. At that price, who cares? And especially since the semiconductor vendors who are starting to roll out such devices must, must, must supply the underlying software infrastructure of RTOS, drivers and libraries, which are as complex and varied as the on-chip silicon, the OEM can feel doubly blessed. Here is a cost-effective single-chip solution with a platform interface that lets the designer start quickly at the point of added value for a target product. Of course, that is not the only choice for the OEM. The choices are just bigger and more attractive for getting ever more computing power into smaller spaces, for greater connected intelligence from the smallest parts of the Internet of Things to the Cloud. Because, as I’m sure we have all noticed, there are still quite a number of things . . .
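As a rough sketch of what “starting at the point of added value” can mean, here is application-side C code written against a hypothetical vendor platform. Every plat_-prefixed name is invented for illustration and stands in for whatever the real RTOS, driver and library interfaces provide; only the loop at the bottom is the OEM’s own logic.

```c
#include <stdint.h>

/* Hypothetical platform interface, standing in for the vendor-supplied
 * RTOS, drivers and libraries.  Names and signatures are invented for
 * illustration; a real SoC's software kit defines its own.  They are
 * declared here but would be provided by the vendor's code.
 */
void     plat_init(void);                              /* clocks, power, pin mux */
uint16_t plat_adc_read(unsigned channel);              /* on-chip A/D converter  */
int      plat_net_send(const void *buf, unsigned len); /* on-chip network MAC    */
void     plat_sleep_ms(unsigned ms);                   /* RTOS delay             */

/* The OEM's value-added code: sample a sensor and push readings upstream. */
int main(void)
{
    plat_init();

    for (;;) {
        uint16_t sample = plat_adc_read(0);        /* read sensor on channel 0   */
        plat_net_send(&sample, sizeof sample);     /* hand off to the vendor's
                                                      network stack              */
        plat_sleep_ms(1000);                       /* one reading per second     */
    }
}
```

The point of the sketch is not the particular calls but the division of labor: the OEM writes the loop, and the chip vendor is on the hook for everything beneath it.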
