High-Speed Sensors and Shared Memory

Many of today’s defense systems involve the use of high-speed sensors, such
as radar sets, that produce an almost continuous stream of data. The data must
then be processed by a signal processing system to extract the intelligence from
the data. In many cases the sensor is connected directly to the signal processor,
providing a low-cost, low-latency system.

In cases where the sensor and the processor reside within the same enclosure,
an interconnect standard called Front Panel Data Port (FPDP, VITA 17) has been
adopted. This standard specifies a flat ribbon cable of about one meter to connect
the sensor to the signal processor. As systems have grown, it has become increasingly
difficult to keep the sensors and the processors within the one-meter distance
allowed by the FPDP specification. To get around this distance problem, a serial
version of FPDP has been developed.

Serial FPDP

This new standard, called Serial FPDP (VITA 17.1), eases the one-meter restriction.
The data from the sensor is serialized and then sent to the processor, which
allows the distance between the sensor and the processor to be 10 kilometers or
more. While this addresses the distance question, the standard is still
point-to-point: it only supports connecting one sensor to one processor.

Figure 1 shows a typical system with sensors connected to signal processors
using a point-to-point interconnect. While this type of interconnection does
have certain advantages, including ease of implementation and minimal added
latency, there are some inherent limitations with a point-to-point
interconnect scheme. First, all components are connected in series, so if any
one part fails the entire chain fails. Another problem is that the sensors and
the processors are paired: if another sensor is added, another processor must
also be added.

One technique to avoid losing the entire chain to a single failure in a
point-to-point interconnect system is to include a non-blocking crossbar switch
between the sensors and the processors (Figure 2). The switch has minimal impact
on latency, but provides the freedom to connect any sensor to any processor
as the need arises.

The Serial FPDP interconnect has a bandwidth of over 200 Mbytes/s. If all of
the sensors require this much bandwidth, then there is no obstacle to using
a separate interconnect for each new sensor. A problem arises, though, if not
all sensors need that much bandwidth all of the time; with a point-to-point
interconnect there is no easy way to share excess capacity. One technique that
does allow the network’s bandwidth to be shared is to use shared memory.

With a shared-memory-based interconnect system such as the one shown in Figure 3,
each sensor is assigned an area of memory into which to write its data. The data
is then replicated by hardware to all of the other nodes. In a properly designed
shared memory network the movement of data is performed without any software
involvement, and therefore incurs no CPU overhead.
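
As a rough illustration, the memory map for such a network might look like the
following C sketch. The base address, slot size, and sensor count here are
hypothetical placeholders, not values from any particular product.

    #include <stdint.h>

    /* Hypothetical layout of the replicated shared-memory window.
     * Base address, slot size, and sensor count are illustrative only. */
    #define SHMEM_BASE    0xA0000000UL             /* mapped shared-memory window */
    #define SENSOR_SLOT   (32UL * 1024 * 1024)     /* 32 Mbytes per sensor */
    #define NUM_SENSORS   4U

    /* Each sensor writes into its own fixed slot; the network hardware
     * replicates those writes into the same slot on every other node. */
    static inline volatile uint8_t *sensor_region(unsigned id)
    {
        return (volatile uint8_t *)(SHMEM_BASE + (uintptr_t)id * SENSOR_SLOT);
    }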

In a shared memory system, the applications on a processor are aware of the
memory locations assigned to each of the sensors. By selecting an area of memory,
the processor gains access to the data from one of the sensors. By changing a
pointer into memory, an application can switch to any other sensor, which allows
a single processor to support multiple sensors as the need arises.
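
Building on the hypothetical memory map sketched above, switching sensors can
be as simple as re-aiming one pointer. The process_frame() routine below stands
in for the real signal-processing code and is an assumption, not an actual API.

    /* A minimal sketch of sensor switching: no data is copied,
     * only the input pointer moves to a different sensor's slot. */
    extern void process_frame(volatile const uint8_t *data);

    void service_sensor(unsigned id)
    {
        volatile const uint8_t *input = sensor_region(id);  /* re-aim the pointer */
        process_frame(input);
    }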

The entire shared memory network acts as a giant switch by allowing any given
sensor’s data to be processed by any available processor. This flexibility
gives the system designer the ability to plan around system bottlenecks that might
occur due to changing mission phases or system faults.

Load Sharing

During different phases of a mission, some of the sensors and/or processors may
carry a heavier or lighter load. With a shared memory system, the system architect
is able to assign multiple processors to perform a task that is critical to the
mission’s success and to delay tasks that have a lower priority.

As an example, the radar used to guide a pilot in for a landing at an airfield
would probably not be needed when flying over hostile territory and could be given
a low priority. On the other hand, the radar used for tracking targets is very
important and might warrant some additional processing horsepower. There are many
examples where radar sensors or signal processors could, if given the opportunity,
do extra duty to ensure a mission’s success.

Since all data from a sensor is available to each processor, the processors need
only be commanded to work on a portion of the task. In this case, the
application within each processor processes only a portion of the data
or generates only a subset of the outputs. This results in significantly faster
answers to a problem than the single-processor approach.
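
A minimal C sketch of this partitioning might look like the following. The frame
size, node count, and handle_sample() routine are illustrative names, and each
node is assumed to know its own rank on the network.

    #include <stddef.h>

    #define FRAME_SAMPLES  (1024U * 1024U)   /* samples per frame (assumed) */
    #define NUM_NODES      4U                /* processors sharing the task */

    extern void handle_sample(float s);      /* assumed processing routine */

    /* Each node processes only its own disjoint stripe of the
     * replicated frame, so the work divides across the nodes. */
    void process_my_share(volatile const float *frame, unsigned node_rank)
    {
        size_t chunk = FRAME_SAMPLES / NUM_NODES;
        size_t start = (size_t)node_rank * chunk;

        for (size_t i = start; i < start + chunk; i++)
            handle_sample(frame[i]);
    }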

Another example of load sharing is when a number of slower sensors don’t
require the full processing capability of a processing node. In this case, each
of the sensors is assigned a portion of the memory, and a single processor running
multiple applications (tasks) is able to process each of the data streams
with the required priority. Here, load sharing eliminates the need to run an
individual connection, or to add a separate processor, for each new sensor.
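
One way such a single-node, multi-stream arrangement might be structured is a
simple priority poll loop, sketched below. The priority table and the
sensor_has_data() helper are assumptions; a real system would more likely be
driven by interrupts or network-generated events. The sensor_region() and
process_frame() names reuse the earlier sketches.

    extern int sensor_has_data(unsigned id);   /* assumed new-data test */

    /* Higher-priority sensors are listed first (illustrative order). */
    static const unsigned priority_order[NUM_SENSORS] = { 2, 0, 3, 1 };

    void poll_sensors(void)
    {
        for (;;) {                             /* simple priority poll loop */
            for (unsigned i = 0; i < NUM_SENSORS; i++) {
                unsigned id = priority_order[i];
                if (sensor_has_data(id))
                    process_frame(sensor_region(id));
            }
        }
    }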

Fault Tolerance

The shared memory network provides natural fault tolerance to component or interconnect
failures. For example, if a sensor is damaged and no longer capable of performing
properly, a backup sensor can be assigned to that task. Sometimes, though, this
option isn’t available. Sensors tend to be physically oriented to perform
the task originally assigned. A radar pointed down toward the ground for landing
would not provide much useful data about threats located above the craft.

Moving tasks between processors, though, is a very different proposition. Generally,
tasks can be assigned to different processors, even though efficiency may
suffer. If the software has been compiled for the processor and is available
for execution, then the mission can proceed. Even if the inefficiencies are
large, it may still be possible to complete the mission, albeit at a less
aggressive rate.

Moving tasks between processors may not always be a preferred operation, but if
the original processor is no longer functioning, then moving is better than aborting
a mission. Depending upon the system designer’s plans, shared memory provides
the framework to handle several faults and still maintain a reasonable level of
system readiness.

Because a shared memory network is scalable, it enables additional sensors and/or
processors to be added as the requirements placed on the network grow. If
a new sensor is added, it must be allocated memory into which to write its data.
That data is then added to the data already being handled by the network, so
it is important to make sure the maximum network throughput is not exceeded.
To increase the processing power on the network, it is only necessary to add more
processing nodes. The applications running in these nodes will select the proper
data to work on to improve system performance.
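
Before admitting a new sensor, a bandwidth check along the following lines could
be run. The 200 Mbyte/s ceiling is the Serial FPDP figure cited earlier; the
per-sensor rate table itself is illustrative.

    #define NET_BW_MBPS  200U    /* network bandwidth ceiling, Mbytes/s */

    /* Returns nonzero if a new sensor at new_rate Mbytes/s fits alongside
     * the count existing sensors without exceeding the network bandwidth. */
    int sensor_fits(const unsigned rates_mbps[], unsigned count, unsigned new_rate)
    {
        unsigned total = new_rate;
        for (unsigned i = 0; i < count; i++)
            total += rates_mbps[i];          /* sum the existing traffic */
        return total <= NET_BW_MBPS;
    }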

It is also often important to archive data from a mission in order to review it
and compare it to previous missions. This archived data is also useful for training
future crews in proper procedures for countering threats. Because the data is
available in each node, it is only necessary to add a node with a storage device
attached. The processor within that node filters the data being sent around the
ring and records only the data required.
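
A recorder node of this kind might be structured as sketched below. The frame
layout, the FRAME_OF_INTEREST flag, and the wait_for_frame() helper are all
assumptions about how the application marks and delivers new data.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t flags;      /* application-defined markers (assumed) */
        size_t   len;        /* payload length in bytes */
        uint8_t  data[];     /* payload */
    } frame_t;

    #define FRAME_OF_INTEREST  0x1U

    extern const frame_t *wait_for_frame(unsigned sensor_id);  /* assumed */

    void record_loop(FILE *log, unsigned sensor_id)
    {
        for (;;) {
            const frame_t *f = wait_for_frame(sensor_id);
            if (f->flags & FRAME_OF_INTEREST)      /* filter the stream */
                fwrite(f->data, 1, f->len, log);   /* record only what's needed */
        }
    }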

Shared Memory Solution

The shared memory network must have sufficient bandwidth to make this approach
practical. If the sensors have a combined output greater than the network bandwidth,
the system will not work, and it may be necessary to divide the shared memory
network into multiple networks. This can easily be accomplished through the use
of the non-blocking crossbar switch, which can arrange all nodes into a single
network or subdivide them into multiple smaller networks.
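
Deciding how many networks are needed is simple ceiling arithmetic, as in this
sketch, which reuses the hypothetical NET_BW_MBPS figure from the earlier
bandwidth check.

    /* How many shared-memory networks the sensors' combined rate requires;
     * the crossbar switch would then group nodes into that many networks. */
    unsigned networks_needed(unsigned total_mbps)
    {
        return (total_mbps + NET_BW_MBPS - 1) / NET_BW_MBPS;   /* ceiling divide */
    }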

Another parameter to be cautious of is maximum memory size. If there are four
sensors that require 32 Mbytes each, a memory size of at least 128 Mbytes is required.
It’s important to realize that this would allow no room for control data
or growth.
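
The sizing arithmetic can be captured in a few lines of C. The 128-Mbyte figure
comes straight from the example above; the 25 percent headroom margin for control
data and growth is purely an assumption for illustration.

    #define SENSOR_COUNT   4U
    #define MBYTES_EACH    32U
    #define HEADROOM_PCT   25U   /* illustrative margin, not a recommendation */

    unsigned min_memory_mbytes(void)
    {
        unsigned data = SENSOR_COUNT * MBYTES_EACH;   /* 4 * 32 = 128 Mbytes */
        return data + data * HEADROOM_PCT / 100U;     /* 160 Mbytes with margin */
    }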

Curtiss-Wright Controls
Embedded Computing
Leesburg, VA.
(703) 779-7800.
[www.cwcembedded.com].