Autonomous UIs—A New Path for Customizing Application Look, Feel and Function
The concept of Autonomous UIs lets application developers specify generic or abstract presentation of controls, widgets and even content, giving downstream developers the freedom to brand and customize without altering the underlying application code.
ROBI KARP, CEO, FLUFFY SPIDER TECHNOLOGIES
Embedded computing has its roots in industrial automation and instrumentation. The vast majority of traditional systems were headless, or communicated with operators and other users through dedicated physical means—knobs, dials, gauges, indicator lights, etc. Today’s intelligent devices, by contrast, boast far more sophisticated operator interaction, with user interfaces (UIs) comparable in scope and capability to desktop applications. Leading this trend are mobile computing devices like smart phones, in-vehicle infotainment (IVI) systems and home entertainment (media players, HDTV, DVRs and STBs). However, nearly every design domain, from factory automation to medical devices to aerospace systems, and from networking equipment to Point-of-Sale, demands increasingly engaging user interfaces.
With traditional design methodologies, application code “owns” the particulars of UI implementation, determining the type, orientation, placement and other attributes of objects on the display (buttons, widgets, etc.). This includes the flow of their use and the callback code that powers those elements. The attributes of a UI design are thereby set in the original design and are only minimally mutable downstream by channel partners, third parties and end users. Some UI and application frameworks support theming—customization of color schemes, menu text styles, window frames, widget sets, etc. However, the fundamental structure and flow of an application UI remains set in stone—a closed box as imagined by the original design team.
Decoupling the UI from the Application
The concept of Autonomous UI design and implementation goes beyond the custom themes, icon sets and color schemes common on many mobile phones and other intelligent devices. It entails letting developers bind custom functionality to individual UI elements via runtime scripting. It supports the addition and/or removal of any item from an application UI, including images, videos and widgets, without changing any application code, i.e., working only with binary application images.
An autonomous UI further enables existing applications to react to new device events and capabilities, like shaking and orientation from an added accelerometer, location and movement (GPS), and definable data and network events such as calendars, stock quotes, sports scores, wireless traffic, etc. It also lets integrators, operators and end users easily add new UI personalities at runtime without changing shrink-wrap application code (Figure 1).
Different presentations of the same application are made possible by decoupling the UI from the application code.
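The event side of this idea can be sketched in C, the language FancyPants itself is built in. The sketch below is purely illustrative—every name in it is a hypothetical stand-in, not the FancyPants API. The shipped application binary simply reports named events; handlers for new event types, such as those from an added accelerometer, are registered at runtime by presentation code:

```c
#include <string.h>

/* Hypothetical sketch: runtime-registered handlers for device events.
   The unchanged application binary dispatches by event name;
   presentation code (e.g., driven from a Lua script) registers
   handlers after the fact. */

typedef void (*event_handler)(const char *event, double value);

#define MAX_HANDLERS 16

static struct {
    const char   *event;      /* e.g., "shake", "gps.moved" */
    event_handler handler;
} handlers[MAX_HANDLERS];
static int handler_count = 0;

/* Called by presentation-layer code at runtime; no rebuild needed. */
int ui_bind_event(const char *event, event_handler h) {
    if (handler_count >= MAX_HANDLERS)
        return -1;
    handlers[handler_count].event = event;
    handlers[handler_count].handler = h;
    handler_count++;
    return 0;
}

/* Called by the (unchanged) application when hardware reports an
   event. Unbound events are silently ignored. Returns the number of
   handlers that ran. */
int ui_dispatch_event(const char *event, double value) {
    int fired = 0;
    for (int i = 0; i < handler_count; i++) {
        if (strcmp(handlers[i].event, event) == 0) {
            handlers[i].handler(event, value);
            fired++;
        }
    }
    return fired;
}

/* Example handler an aftermarket UI might register for "shake". */
static int shake_count = 0;
static void on_shake(const char *event, double value) {
    (void)event; (void)value;
    shake_count++;
}
```

An aftermarket UI could bind “shake” to, say, shuffling a playlist, while the original application code does nothing more than report the raw event.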
Autonomous UI Architecture
Breaking out application and presentation code doesn’t require radical rethinking of the core application design. Application code can still solicit input and generate output. It is still up to the application design team to determine how much control resides inside the application itself vs. the amount exposed to subsequent modification.
But decoupling does require specific support from the underlying graphical and multimedia framework. Key enablers of an autonomous UI include a safe binding between the underlying graphical system APIs and an external, open programming environment. One very good tool for accomplishing this is the Lua scripting language (see sidebar “Lua Scripting Language,” p. 26). In addition, there must be a way to expose inventories of (public) application objects that implement UI functions. The decoupling mechanism must also support a protocol between presentation code and the application for information exchange.
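In C terms, such an inventory of public application objects can be sketched as a table of (name, function) pairs that presentation code invokes by string name—the kind of table a scripting binding layer would be generated over. All names here are hypothetical illustrations under assumed message data, not the FancyPants API:

```c
#include <string.h>

/* Hypothetical sketch of an exported object inventory: the application
   publishes (name, function) pairs, and presentation code -- scripted
   or compiled -- looks them up by name rather than linking directly. */

typedef const char *(*app_method)(const char *arg);

/* Two public application functions a UI presentation might call.
   The returned strings are stand-ins for real message data. */
static const char *sms_get_addressee(const char *arg) {
    (void)arg;
    return "Alice";
}
static const char *sms_get_body(const char *arg) {
    (void)arg;
    return "See you at 6";
}

/* The inventory the application exposes to the binding layer. */
static const struct {
    const char *name;
    app_method  fn;
} exports[] = {
    { "sms.addressee", sms_get_addressee },
    { "sms.body",      sms_get_body },
    { NULL, NULL }
};

/* Name-based lookup used by the presentation side of the protocol.
   Unknown names return NULL so the presentation can degrade
   gracefully instead of crashing. */
const char *app_invoke(const char *name, const char *arg) {
    for (int i = 0; exports[i].name != NULL; i++)
        if (strcmp(exports[i].name, name) == 0)
            return exports[i].fn(arg);
    return NULL;
}
```

A Lua-facing binding would then only need to marshal strings across this one entry point, rather than wrap every application function individually.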
It is also important to provide an open, high-level API for developer use. It is, of course, the main point of entry for developers, and it also simplifies translation of information between the language bindings using the protocol. At Fluffy Spider Technologies, we chose C for building FancyPants itself and its underlying libraries. For building Autonomous UI code, we bind to the Lua scripting language at a high level, taking advantage of Lua’s features and rapid prototyping capabilities.
SMS Client – On a mobile handset, the device manufacturer will typically include a short messaging system (SMS) application, sourced from the mobile OS supplier or a third-party independent software vendor (ISV), or created in-house. This “preload” SMS application likely includes a traditional, straightforward display of messages and addressees. A mobile network operator (MNO) or other channel partner has few options for customizing or branding this kind of application, and is often forced to pass uninspired software through to end users “as is,” or to invest considerable effort and expense in replacing preload applications.
An SMS client, or comparable application, designed around an Autonomous UI would offer MNOs and other channel participants numerous options for customization and differentiation. For example, developers at the MNO could enhance addressee information with status and location-based data supplied from the operator’s network. Similarly, a third-party ISV could offer an alternate look and feel to that same SMS client, using previously unavailable inputs like accelerometer data or GPS coordinates—all without modifying any original application code. An example of two such look-and-feel designs is shown in Figure 2.
Two presentations of a single SMS application, each targeted at a different class of users.
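The MNO customization described above can be sketched as a replaceable presentation hook. This is a minimal, hypothetical illustration in C (none of these names are the FancyPants API): the preload UI formats an addressee as a bare name, and an operator-supplied renderer installed at runtime adds network-sourced location data—without any change to the application code that requests the rendering:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: the application formats an addressee through a
   replaceable presentation hook. The shipped default shows only the
   name; a channel partner installs a richer renderer at runtime. */

typedef void (*addressee_renderer)(const char *name,
                                   const char *location,
                                   char *out, size_t outlen);

/* Preload presentation: ignores the operator's location data. */
static void default_renderer(const char *name, const char *location,
                             char *out, size_t outlen) {
    (void)location;
    snprintf(out, outlen, "%s", name);
}

/* Operator-supplied replacement adding location-based data. */
static void operator_renderer(const char *name, const char *location,
                              char *out, size_t outlen) {
    snprintf(out, outlen, "%s (near %s)", name, location);
}

static addressee_renderer current_renderer = default_renderer;

/* Called by presentation code at runtime to swap the look and feel. */
void ui_set_addressee_renderer(addressee_renderer r) {
    current_renderer = r ? r : default_renderer;
}

/* Application code only ever calls this; it never knows which
   presentation is currently installed. */
void render_addressee(const char *name, const char *location,
                      char *out, size_t outlen) {
    current_renderer(name, location, out, outlen);
}
```

The application’s call site is identical before and after the operator’s customization, which is the essence of shipping one binary with many UI personalities.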
Universal Access UIs – Many operating systems, devices and even individual applications offer some degree of support for disabled users, ranging from alternate input methods to zoomed/large typeface output to spoken and Braille-based UIs. While surely helpful, most attempts at adaptive UI customization merely augment one or two attributes of an existing interface design without fully accommodating actual user needs.
Building on the foundation of an Autonomous UI, applications could easily be augmented with alternate input methods, allowing aftermarket integration of voice-input engines and iconic input schemes like Minspeak and Bliss symbols, and accommodating the particulars of adaptive communication equipment. Similarly, application output could be conditioned for a user’s visual acuity, limited field of vision, etc.
Ruggedized Instrumentation UIs – Most instrumentation systems are designed for nominal viewing and input conditions, targeting stable, well-lit workbench, laboratory and clinical environments. Using the same device in more challenging conditions—in moving vehicles, high-noise settings, bright sunlight, etc.—presents challenges to off-the-shelf UIs and can render a device almost completely ineffective.
OEMs can’t begin to predict the ways and locales in which their wares will be deployed. By employing Autonomous UI design, both OEMs and integrators can facilitate mission-specific customization to accommodate vibration, the need for ad hoc mixtures of audio and visual output, and “off book” integration with other types of equipment.
Manufacturers Producing Multiple Product Families – Depending on the industry, a new product—not just a new product version—can require from 2 to 10 or more man-years of engineering effort to reach the market. For point-of-sale terminals, with stringent security requirements, development time can stretch longer, while mobile phones, with quicker sales cycles, have a much shorter market window but usually involve an even greater engineering investment. A significant part of the engineering effort lies in the creation of a compelling, differentiated user interface.
For OEMs creating families of products with multiple members, being able to deploy the same application code base with different user interfaces saves time and money, and can also help focus development effort on truly differentiating features. For subsequent iterations of the same product line, an Autonomous UI helps new products in the family reach market more quickly and with confidence.
The arena where this phenomenon is most evident is in common operating platforms. OEMs choose a common, interoperable COTS OS like Android or WinCE to save on non-differentiating engineering, and to leverage existing or evolving ecosystems that revolve around those platforms. However, these platforms typically leave little room for OEM branding and customization. Unless OEMs invest in significant incremental engineering, as Motorola did with its Blur UI, users will be greeted with the same UI as on every other Android gadget, relegating the new device to commodity status, as in the PC market. And if OEMs do make the required investment, they will likely need to repeat that effort with each new platform release. Figure 3 shows two home screens for an application based on the Android operating system but enabled by an autonomous user interface.
Two Android home screens can offer widely different look and feel for the same application while delivering the same functional value to different classes of users.
Autonomous UI design is not just another way to subdivide application functionality; it offers developers benefits that emerge directly from decoupling UI and application code. It shortens the development time needed to create brand new, unique user interfaces without modifying application code. At the same time, it adds intelligence to existing UI code, which can make decisions and process events unforeseen in the original design. An autonomous UI also reduces the time devoted to quality assurance, because the application code has already been quality assured and only the different versions of the user interface need to go through the process.
The ability to create a family of products based on a single application code base means that high-end devices can have different user interface presentations and behaviors than the low-end products. Product iterations can also take advantage of the shorter Q/A time and the reduction in software development time. The ability to build devices with a unique look and feel, even on commodity software and hardware platforms, leads to enhanced brand retention: a manufacturer can create a recognizable user experience that will come to be identified with that brand.
Decoupling UI and application design embodies core principles that can be applied when using other frameworks and toolkits—abstraction and limiting application dependency on platform particulars. In today’s dynamic landscape of multiple application OSs (especially in mobile), and of burgeoning device SKU counts, it’s especially important to build a strong base product that can be easily tailored for different packages, channels and markets.
Fluffy Spider Technologies.