Ever Denser Silicon Lies at the Heart of the Future of Connected Embedded Devices

I recently made an offhand remark that, thanks to the speed of today’s processors, we don’t have to worry quite so much about counting the nanoseconds in interrupt service routines. I was immediately taken to task with the admonition that there is still such a thing as hard real time and that we definitely do have to worry about meeting deadlines. And this to someone in publishing. OK. No argument. We absolutely do have to meet hard real-time deadlines, and we have to ensure that odd glitches do not stretch out service routines in anomalous, if rare, instances. There are definitely enough critical systems that demand strict deterministic behavior.

I guess what I should have said is that it has gotten a lot easier to do that confidently, thanks to the amazing speed increases in today’s processors. Of course, the flip side is that the size and complexity of code have grown along with the processing speed. Still, there are limits to just how fast a response is required for any given interrupt. Yes, it’s fast, really fast, but meeting those deadlines has become more manageable.

This, along with the emergence of multicore processors, has given us the luxury of running familiar desktop operating systems like Windows and Linux alongside hard real-time behavior. Now in many cases, “exhibiting” what looks like real-time behavior is not the same as actually performing in real time. But then it has always been acceptable to be “real-time enough,” that is, to reliably meet the timing constraints of a given application. A reactor control system is “real-time enough” if it never misses the deadlines that prevent a core meltdown, just as a consumer app is “real-time enough” if it satisfies the user’s performance expectations.

At the bottom of all this, of course, is Moore’s Law. Along with the scale and integration of processors has come a truly epochal increase in memory capacity—both volatile and nonvolatile. This has enabled the incorporation of embedded versions of desktop operating systems such as Windows and Linux, and of complex mobile operating systems like Android (which is built on top of Linux), into phones, tablets and other mobile devices. The ability that has been bestowed on embedded systems—mainly thanks to multicore CPUs—to run both a desktop operating system and an RTOS will be of enormous value as we move further into the Internet of Things.

This lets non-real-time systems like phones and tablets interact with both non-real-time and real-time systems by way of Ethernet/Internet connections. Of course, systems that rely on real-time performance for vital parts of their functionality must also have a separate partition that can handle the decidedly non-real-time Internet communications without disturbing that vital real-time functionality.

In fact, this ability to separate not only real-time but also sensitive functionality from an Internet-connected operating system environment may be the most significant contribution that multicore processors and their attendant hypervisor and virtualization technologies have made to the future of the Internet of Things. The need for security is never far from the thoughts of developers planning industrial or other proprietary and sensitive applications that will also depend on connectivity to do their work. Partitioning, virtualization and separation kernels can make huge contributions to improved (never absolute) security for connected devices while providing controlled access for users.

Multicore enables a new architectural model for developing connected devices. The ability to separate two different operating systems, or two regions of functionality under one operating system, as well as the ability to distribute processing across two or more cores, gives the developer enormous room for creativity along with a framework for security. Of course, this still requires care. Inattention to detail in the use of shared memory, for example, can open an otherwise secure multicore design to all sorts of wormholes and crevices.

Is there a limit to the amount of computational power that we can embed into devices? The answer appears to be: not in the foreseeable future. As high-performance computing shrinks into silicon, that silicon with its low power consumption and small size will find its way into embedded systems that constantly thirst for more functionality. As these things are connected, they will rely on the raw power, the denser architectures and the inherent connectivity built into silicon to do more and more in ways we have not yet conceived.

