Latency and System Architecture: Looming Issues for Design within the Internet of Things

BY TOM WILLIAMS, EDITOR-IN-CHIEF

Does anybody remember real time? Well, of course we are all aware that many systems have timing deadlines and issues of “determinism.” In other words, there are certain timing constraints governing the response to inputs that can be critical to a system’s operation. To put it in the classic sense, “The right answer late is wrong.” But there is another saying that “Real time is real time enough.” In other words, if you get the right reaction to an interrupt reliably within whatever time constraints apply, you can claim to be real time.

But there was a time not so long ago when designing systems had engineers poring over interrupt responses to make sure they always resulted in the right answer within the right time with no anomalies. Tools were used that could graphically depict interrupts and their responses. And such tools, of course, still exist and have even been greatly improved. But for some reason we hear less talk and read fewer papers and articles on the topic. I think that part of this may simply be due to the vastly increased performance of today’s processors. We would not have rich operating systems such as Linux making such inroads into embedded systems if the silicon power were not available to overcome many of the potential timing problems. That’s not to say that there are no development projects today that demand the same close attention to timing as before, but they are definitely fewer in number.

And that is most certainly not to say that there are fewer issues of timing facing developers in this brave new world of the Internet of Things. It is hard to go to any industry event or company without hearing almost everyone chattering about “IoT, IoT.” But in the words of the old spiritual, “Half the people talkin’ ‘bout Heaven ain’t goin’ there.” What has come to be called the “Internet of Things” has been quietly evolving for some time pretty much on its own. But now that we have a name for it, we start seeking a definition, and that leads to preconceived notions of how devices and systems are to be designed to fit into this vast, interactive environment. Such devices have to simultaneously be specialized enough to achieve their intended functions and also general enough to have the interfaces, protocols and connectivity software to fit in seamlessly.

The IoT inevitably consists of a great number of connected devices and systems united for a specific purpose but still within the connected universe. Needless to say, there are a great many ideas about how such interconnected systems should be configured to form an appropriate system architecture. For example, it is possible to simply connect large numbers of sensors and/or actuators to the Cloud and collect Big Data to analyze and act on. Most architectures consist of edge devices (sensors, actuators and/or small automated devices with a fixed set of functions) that are connected to aggregation or gateway devices, which in turn connect either to enterprise servers or directly to servers in the Cloud. For some reason, we hear a lot about how users can manage systems from the Cloud by attaching to the gateway systems, and how gateway devices can preprocess raw data for easier consumption up the line, but so far there does not seem to be any systematic treatment of how to decide which functions, analysis algorithms and decisions to put where. Why is this?
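
To make that placement question concrete, here is a minimal sketch in Python of the three tiers most such architectures describe: an edge sensor, a gateway that can either act locally or forward data, and a cloud endpoint. The names and the threshold are invented for illustration only; this is not a recommended design, just the structural choice the rest of this discussion turns on.

    # Hypothetical three-tier model: edge sensor -> gateway -> cloud.
    # Names and thresholds are illustrative, not a recommended design.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        sensor_id: str
        value: float                         # e.g., temperature in degrees C

    class Cloud:
        def __init__(self):
            self.store = []                  # stand-in for Big Data collection

        def publish(self, reading: SensorReading):
            self.store.append(reading)

    class Gateway:
        def __init__(self, local_limit: float, cloud: Cloud):
            self.local_limit = local_limit   # act locally above this value
            self.cloud = cloud

        def handle(self, reading: SensorReading) -> str:
            # The placement decision: act here, or defer to the Cloud?
            if reading.value > self.local_limit:
                return f"gateway: local action for {reading.sensor_id}"
            self.cloud.publish(reading)      # defer analysis/decision upstream
            return f"gateway: forwarded {reading.sensor_id} to cloud"

    if __name__ == "__main__":
        gw = Gateway(local_limit=80.0, cloud=Cloud())
        print(gw.handle(SensorReading("boiler-1", 95.0)))  # handled at the gateway
        print(gw.handle(SensorReading("boiler-1", 42.0)))  # sent up for analysis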

It gets us back to the topic of real time and real time enough. The automated systems that were the focus in the past could be defined in terms of action and reaction under well-understood time constraints. Those same kinds of systems are now connected within the IoT, and newer ones are coming online. Has there been any systematic analysis of the nature of time constraints on overall functionality given the structure and the latencies that are built into a sensor-to-gateway-to-cloud architecture? For example, which decisions are so critical and well defined that they should be left to a program on a gateway for action, with, of course, notification to operators linked in via the Cloud? Which situations and decisions really require human input, and what are the time windows that will allow sending data up to the Cloud, awaiting a human response—or a more sophisticated automatic response—and then communicating back down to the device in question?
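
One way to frame those time windows is as a simple latency budget: the round trip up to the Cloud, plus the time a human or a heavier algorithm needs to decide, either fits inside the deadline or it does not, and that alone constrains where the logic can live. The sketch below is only an illustration of that arithmetic; all the figures are invented.

    # Hypothetical latency budget: can a decision be deferred to the Cloud?
    # All figures are in milliseconds and purely illustrative.
    def can_defer_to_cloud(deadline_ms: float,
                           sensor_to_gateway_ms: float,
                           gateway_to_cloud_ms: float,
                           decision_time_ms: float) -> bool:
        # Round trip: up through the gateway to the Cloud, decide, come back down.
        round_trip = 2 * (sensor_to_gateway_ms + gateway_to_cloud_ms) + decision_time_ms
        return round_trip <= deadline_ms

    # A tight, well-defined reaction (say, a 50 ms shutdown window) must stay local:
    print(can_defer_to_cloud(50, 5, 40, 1000))        # False -> handle at the gateway
    # A slower, human-in-the-loop decision can tolerate the trip:
    print(can_defer_to_cloud(60_000, 5, 40, 30_000))  # True -> send it up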

In a similar way, when does it make sense to send raw sensor data up to the Cloud for collection and analysis, and when would it be more practical to have a certain amount of analysis done at the gateway level, not only for reaction to inputs but also to reduce the data traffic going up? These are questions that involve not only timing, but also the availability of system resources, the efficiency of code, the available communications bandwidth and the specific functions and goals of the system.
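
As a sketch of what “a certain amount of analysis at the gateway” might mean in the simplest case, the fragment below batches raw samples and sends up only a summary, trading cloud-side detail for bandwidth. The batch size and the summary statistics are arbitrary choices for illustration, not a recommendation.

    # Hypothetical gateway-side aggregation to cut upstream traffic:
    # forward a summary of every N raw samples instead of each sample.
    from statistics import mean

    class AggregatingGateway:
        def __init__(self, batch_size: int, uplink):
            self.batch_size = batch_size
            self.uplink = uplink             # anything with a send(dict) method
            self.buffer = []

        def on_sample(self, value: float):
            self.buffer.append(value)
            if len(self.buffer) >= self.batch_size:
                summary = {
                    "count": len(self.buffer),
                    "mean": mean(self.buffer),
                    "min": min(self.buffer),
                    "max": max(self.buffer),
                }
                self.uplink.send(summary)    # one message up instead of N
                self.buffer.clear()

    class PrintUplink:
        def send(self, summary: dict):
            print("to cloud:", summary)

    gw = AggregatingGateway(batch_size=4, uplink=PrintUplink())
    for v in (21.0, 21.3, 22.1, 20.8):
        gw.on_sample(v)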

There really needs to be some analytic discipline to sort out all these factors and come up with a sensible data structure and system architecture that can meet the demands. But first we need to know what those demands are, and for that we need an approach that not only allows us to find the answers but also lets us communicate and share the methods that allow specialists to collaborate on projects that are becoming increasingly vital to industry and society in general. So far there appears to be no such defined discipline.