Robotic Systems: Sense, Think, Act
Simplifying Robot Software Design Layer by Layer
There are many approaches to designing the software architecture for mobile robotics. However, an effective design requires planning. A layered approach allows developers to work in parallel by using partitions with well-defined interfaces.
MEGHAN KERRY, NATIONAL INSTRUMENTS
Robot software architectures are typically a hierarchical set of control loops, representing high-level mission planning on high-end computing platforms, all the way down to motion-control loops closed with field programmable gate arrays (FPGAs). In between, there are other loops controlling path planning, robot trajectory, obstacle avoidance, and myriad other responsibilities. These control loops may run at different rates on different computing nodes, including on desktop and real-time operating systems, and on custom processors with no operating system. Too often, robot software designers and robot hardware designers prematurely design their architectures in a way that imposes a specific mapping of software to computing nodes.
Robots, by their very nature, are built by engineers who come from several different engineering disciplines. It is tempting for these engineers to work in their own areas and ignore, to as large an extent as possible, the other disciplines. For example, a mechanical engineer may focus on the physical platform—perhaps optimizing size, weight, ruggedness, and agility. The robot platform may include a rudimentary microcontroller and a bit of software for moving the robot around. Meanwhile, a computer scientist might be developing much higher-level autonomy or “mission” software in a simulator—software that will eventually direct the platform to do something interesting and useful.
At some point, the pieces of the system have to come together. Often, this is accomplished by predetermining very simple interfaces between software and platform—perhaps as simple as controlling and monitoring heading and speed. Sharing sensor data across the layers of the software stack would be valuable, but it is often judged not worth the integration pain. Each participant brings a different view of the world, and an architecture that works well for the computer scientist may not work well for the mechanical engineer.
The proposed software architecture for mobile robotics, shown in Figure 1, takes the form of a three- to four-layer system. Each layer depends only on the specific system, hardware platform, or end goal of the robot and remains completely blind to the contents of the layers above and below it. A typical robot’s software will contain components in the driver layer, the platform layer, and the algorithm layer; only applications with some form of user interaction will include the user interface layer. For fully autonomous implementations, this layer might not be needed.
Figure 1 - Robotics Reference Architecture.
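The article implements these layers as LabVIEW block diagrams, but the partitioning itself is language-agnostic. As an illustrative sketch (all class names, method signatures, and the 0.5 m threshold below are hypothetical, not from the article), the three core layers can be modeled as classes with deliberately narrow interfaces, so each layer sees only the layer directly below it:

```python
from abc import ABC, abstractmethod

class DriverLayer(ABC):
    """Talks to real or simulated sensors and actuators."""
    @abstractmethod
    def read_raw_ir(self) -> int: ...
    @abstractmethod
    def set_motor_pwm(self, left: float, right: float) -> None: ...

class PlatformLayer:
    """Translates raw driver data into engineering units and back."""
    def __init__(self, driver: DriverLayer):
        self._driver = driver
    def ir_distance_m(self) -> float:
        # Assumed linear scale factor, purely for illustration.
        return self._driver.read_raw_ir() * 0.001

class AlgorithmLayer:
    """Makes control decisions from platform-level data only."""
    def __init__(self, platform: PlatformLayer):
        self._platform = platform
    def step(self) -> str:
        # Hypothetical 0.5 m avoidance threshold.
        return "avoid" if self._platform.ir_distance_m() < 0.5 else "cruise"
```

Because `AlgorithmLayer` never touches `DriverLayer` directly, either end can be replaced (new sensors, new mission logic) without disturbing the other layers.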
In this specific example, the architecture represents an autonomous mobile robot with a manipulator that is designed to execute tasks including path planning, obstacle avoidance and mapping. This type of robot might be used in several real-world applications, including agriculture, logistics, or search and rescue. The onboard sensors include encoders, an IMU, and a camera, in addition to several sonar and infrared (IR) sensors. Sensor fusion is used to combine the data from the encoders and IMU for localization and to define a map of the robot environment. The camera is used to identify objects for the onboard manipulator to pick up, and the position of the manipulator is controlled by kinematic algorithms executing on the platform layer. The sonar and IR sensors are used for obstacle avoidance. Finally, a steering algorithm is used to control the mobility of the robot, which might be on wheels or treads. The NASA robots shown in Figure 2, designed by SuperDroid Robots, are similar to a robot that might be described by this architecture.
Figure 2 - Mobile robots designed by SuperDroid Robots.
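The encoder-plus-IMU fusion mentioned above can be sketched minimally as differential-drive dead reckoning corrected by a complementary filter on heading. This is an illustrative stand-in for the article's LabVIEW implementation; the filter constant and all function names are assumptions:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive dead reckoning from wheel-encoder displacements."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

def fuse_heading(odom_theta, gyro_rate, dt, prev_theta, alpha=0.98):
    """Complementary filter: integrate the IMU gyro for short-term accuracy
    while pulling toward the encoder heading to bound long-term drift."""
    gyro_theta = prev_theta + gyro_rate * dt
    return alpha * gyro_theta + (1.0 - alpha) * odom_theta
```

The fused heading then feeds the localization and mapping components, which expect a pose that drifts far more slowly than either sensor alone.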
These layers will be described in more detail as they might be implemented for a mobile robot platform within the NI LabVIEW graphical development environment. A commonly used hardware platform for robotics is the NI CompactRIO, which includes an integrated real-time processor and FPGA. The LabVIEW platform includes built-in functionality for communicating data between each layer, and for sending data across a network and displaying it on a host PC.
Driver Layer
As the name suggests, the driver layer handles the low-level driver functions required to operate the robot. The components in this layer depend on the sensors and actuators used in the system as well as the hardware that the driver software will run on. In general, blocks in this level take actuator set points in engineering units (positions, velocities, forces, etc.) and generate the low-level signals that create the corresponding actuation, potentially including code to close the loop over those set points.
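Closing the loop over a set point at this level often amounts to a small velocity controller between the engineering-unit command and the PWM output. A minimal PI sketch, assuming placeholder gains and a normalized duty-cycle output (none of these values come from the article):

```python
class WheelVelocityPI:
    """Closed-loop wheel-velocity control: engineering-unit set point in,
    normalized PWM duty cycle out. Gains are illustrative placeholders."""
    def __init__(self, kp=0.8, ki=2.0, dt=0.001):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint_mps, measured_mps):
        error = setpoint_mps - measured_mps
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        # Clamp to the valid duty-cycle range before it reaches the hardware.
        return max(-1.0, min(1.0, duty))
```

On a CompactRIO this kind of loop would typically run in the FPGA fabric at a fixed rate; the Python form only shows the arithmetic.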
Similarly, this level contains blocks that take raw sensor data, turn it into meaningful engineering units, and pass the sensor values to the other levels of the architecture. The driver level code shown in Figure 3 is implemented in LabVIEW FPGA and executes on an embedded FPGA on an NI CompactRIO platform. The sonar, IR and voltage sensors are connected to digital input and output (I/O) pins on the FPGA, and the signals are processed within continuous loop structures that execute in true parallelism on the FPGA. The data output by these functions is sent to the platform layer for additional processing.
Figure 3 - The driver layer interfaces to sensors and actuators.
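A concrete example of the raw-data-to-engineering-units conversion this layer performs: a sonar ranger reports an echo pulse width, and the driver block converts it to distance. The function below is a generic sketch, not the article's FPGA code:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def sonar_distance_m(echo_pulse_s: float) -> float:
    """Convert a sonar echo pulse width (seconds) to distance (meters).
    The pulse covers the round trip to the obstacle and back, so halve it."""
    return echo_pulse_s * SPEED_OF_SOUND / 2.0
```

A 10 ms echo therefore corresponds to about 1.7 m, which is the kind of value the platform layer receives instead of a raw pulse width.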
The driver layer can connect to actual sensors or actuators, or it can interface to simulated I/O within an environment simulator. A developer should be able to switch between simulation and actual hardware without modifying any layers in the system, other than the driver layer. The LabVIEW Robotics Module 2011, shown in Figure 4, includes a physics-based environment simulator that allows users to switch between hardware and simulation without modifying any code other than the hardware I/O blocks.
Figure 4 - An environment simulator should be implemented at the driver layer if simulation is required.
Platform Layer
The platform layer contains code that corresponds to the physical hardware configuration of the robot. This layer frequently acts as a translation between the driver layer and the higher-level algorithm layer, converting low-level information into a more complete picture for the higher levels of the software and vice versa. In Figure 5, we are receiving the raw IR sensor data from the FPGA and processing it on the CompactRIO real-time controller. We are using functions in LabVIEW to convert the raw sensor data into more meaningful data—in this case, distance. We are also determining whether or not the reading is outside the range of 4-31 meters.
Figure 5 - The platform layer acts as a translation between the driver layer and algorithm layer.
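The same conversion can be sketched in a few lines. Analog IR rangers typically have a roughly inverse voltage-to-distance curve; the calibration constant, ADC parameters, and the function names below are all illustrative assumptions, while the 4-31 window mirrors the range check described for Figure 5:

```python
def ir_counts_to_distance(counts, adc_ref_v=5.0, adc_max=1023, k=27.0):
    """Convert raw IR ADC counts to a distance estimate using the roughly
    inverse voltage-distance relationship of typical analog IR rangers.
    k is a placeholder calibration constant, not a datasheet value."""
    volts = counts * adc_ref_v / adc_max
    return k / volts if volts > 0 else float("inf")

def in_range(distance, lo=4.0, hi=31.0):
    """Flag whether the reading falls inside the sensor's valid band
    (the 4-31 window used in the platform-layer example)."""
    return lo <= distance <= hi
```

Readings flagged as out of range would be discarded or substituted before the algorithm layer ever sees them.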
Algorithm Layer
Components at this level represent the high-level control algorithms for the robotic system. Figure 6 shows how blocks in the algorithm layer take system information such as position, velocity or processed video images and make control decisions based on all of the feedback, representing the tasks that the robot is designed to complete. This layer might include components that map the robot’s environment and perform path planning based on the obstacles around the robot.
Figure 6 - The algorithm layer makes control decisions based on feedback.
The piece of code in Figure 6 shows an example of obstacle avoidance using a vector field histogram (VFH). In this example, the VFH block receives distance data from a distance sensor, which was sent from the platform layer. The output of the VFH block contains the path direction, which is sent back down to the platform layer. There, the path direction is fed into the steering algorithm, which generates the low-level commands that are sent directly to the motors at the driver layer.
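To make the data flow concrete, here is a heavily simplified, illustrative reduction of the VFH idea (not the full algorithm and not the LabVIEW VI): bin obstacle readings into angular sectors weighted by closeness, then steer toward the free sector nearest the target bearing. Sector count and threshold are arbitrary placeholders:

```python
import math

def vfh_direction(obstacles, target_bearing, n_sectors=36, threshold=1.0):
    """obstacles: iterable of (bearing_rad, distance_m) pairs.
    Returns the chosen steering bearing in radians, or None if blocked."""
    width = 2 * math.pi / n_sectors
    hist = [0.0] * n_sectors
    for bearing, dist in obstacles:
        sector = int((bearing % (2 * math.pi)) / width)
        hist[sector] += 1.0 / max(dist, 0.1)  # nearer obstacles weigh more

    free = [i for i, h in enumerate(hist) if h < threshold]
    if not free:
        return None  # fully blocked: caller should stop or replan

    target_sector = int((target_bearing % (2 * math.pi)) / width)
    def angular_gap(i):
        d = abs(i - target_sector)
        return min(d, n_sectors - d)  # wrap-around distance between sectors
    best = min(free, key=angular_gap)
    return (best + 0.5) * width  # center bearing of the chosen sector
```

The returned bearing is exactly the "path direction" that flows down to the steering algorithm in the platform layer.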
Another example is a robot that might be tasked to search its environment for a red spherical object that it needs to pick up using a manipulator. The robot will have a defined way to explore the environment while avoiding obstacles—a search algorithm combined with an obstacle avoidance algorithm. While searching, a block in the platform layer will process images, returning information on whether or not the object has been found. Once the object has been detected, an algorithm will generate a motion path for the endpoint of the arm to grasp and pick up the sphere.
Each of the tasks in the example provides a high-level goal that is independent of the platform and the physical hardware. If the robot has multiple high-level goals, this layer will also need to include some arbitration to rank the goals.
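One common way to implement the arbitration the text mentions is a simple priority-ordered selection over the active goals. The sketch below is a hypothetical minimal form (goal names and the tuple layout are invented for illustration):

```python
def arbitrate(goals):
    """Pick the highest-priority active goal.
    Each goal is a (priority, name, active) tuple; lower number = more urgent.
    Returns the winning goal's name, or None if nothing is active."""
    active = [g for g in goals if g[2]]
    return min(active, key=lambda g: g[0])[1] if active else None
```

In the search-and-grasp example, obstacle avoidance would carry a higher priority than search, so the avoider preempts the explorer whenever both request control.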
User Interface Layer
Not always required in fully autonomous applications, the user interface (UI) layer allows a human operator to interact with the robot through relevant information displayed on a host PC. Figure 7 shows a graphical user interface (GUI) that displays live image data from the onboard camera, along with the X and Y coordinates of nearby obstacles on a map. The Servo Angle control allows the user to rotate the onboard servo motor that the camera is attached to. This layer can also be used to read input from a mouse or joystick, or to drive a simple text display. Some components of this layer, such as a GUI, can be very low priority; however, something like an emergency stop button must be tied into the code in a very deterministic manner.
Figure 7 - The user interface layer allows a user to interact with a robot or display information.
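The deterministic e-stop requirement usually means the stop state is checked on every control cycle rather than handled as a low-priority GUI event. A hedged sketch of that pattern, with invented names (on a CompactRIO this would live in a high-priority RT loop or the FPGA, not Python):

```python
import threading

class EStop:
    """Latching emergency stop shared between the UI layer and the
    control loops; polled every control cycle."""
    def __init__(self):
        self._event = threading.Event()
    def trigger(self):
        self._event.set()
    def engaged(self) -> bool:
        return self._event.is_set()

def control_step(estop: EStop, commanded_duty: float) -> float:
    # The e-stop overrides any command before it reaches the drivers.
    return 0.0 if estop.engaged() else commanded_duty
```

Because the check sits inside the control loop itself, a slow or frozen GUI thread cannot delay the stop once the latch is set.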
Depending on the target hardware, the software layers can be distributed across multiple targets, though in many cases all of the layers will run on one computing platform. For non-deterministic applications, the software can target a single PC running Windows or Linux; for systems that require tighter timing constraints, it should be targeted to a single processing node with a real-time operating system.
Due to their size, power requirements, and hardware architecture, the NI CompactRIO and NI Single-Board RIO make excellent computing platforms for mobile applications. The driver, platform, and algorithm layers can be distributed across the real-time processor and the FPGA, and, if required, the UI layer can run on a host PC, as shown in Figure 8. High-speed components such as motor drivers or sensor filters can run deterministically in the fabric of the FPGA without tying up clock cycles on the processor. Mid-level control code from the platform and algorithm layers can run deterministically in prioritized loops on the RT processor, and the built-in Ethernet hardware can stream information to a host PC to generate the UI layer.
Figure 8 - Mobile robotics reference architecture overlaid onto an NI CompactRIO or NI Single-Board RIO embedded system.
There is no single answer to the problem of how to structure a mobile robot’s software; any design requires forethought and planning to fit into an architecture. In return, a well-defined architecture allows developers to work on projects in parallel by partitioning the software into layers with well-defined interfaces. Furthermore, partitioning the code into functional blocks with well-defined inputs and outputs allows components of code to be reused in future projects.