Data Acquisition Systems Track Signal Processing Technology

As processing capability continues to grow, signal processing systems are using ever larger amounts of sensor data—in resolution, bandwidth and number of channels—to perform their functions. These advanced signal processors require data acquisition and recording systems that can exercise and support their capabilities. Fortunately, the same tools and technologies that enable faster signal processing—switched fabric interconnects and FPGA-based processing—can also be used to implement advanced data acquisition systems for a wide range of applications, including radar.

Using a Network Model

The primary mission of a data acquisition system is to acquire and store data, and lots of it. The first design parameter to consider is the amount of data that needs to be stored. If the application can be implemented with a single channel to disk, typically up to 200 Mbytes/s, the system can use either embedded storage technology or a PC-based data recorder. If the application requires multiple channels to disk, from 200 Mbytes/s up to several Gbytes/s, the system will typically use a switched fabric interconnect to provide both scalability and modularity.
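To make the sizing step concrete, the short C sketch below estimates the aggregate recording rate from hypothetical channel parameters (the sample rate, sample width and channel count are illustrative, not taken from any particular system) and picks the architecture accordingly.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical sensor parameters, for illustration only */
        double sample_rate_hz   = 100e6;  /* 100 MS/s per channel */
        double bytes_per_sample = 2.0;    /* 16-bit samples       */
        int    channels         = 4;

        double per_channel = sample_rate_hz * bytes_per_sample;
        double aggregate   = per_channel * channels;

        printf("per channel: %.0f Mbytes/s\n", per_channel / 1e6);
        printf("aggregate:   %.0f Mbytes/s\n", aggregate / 1e6);

        /* Up to ~200 Mbytes/s: a single channel to disk suffices.
           Beyond that: a switched fabric with multiple storage nodes. */
        printf("architecture: %s\n",
               aggregate <= 200e6 ? "single channel to disk"
                                  : "switched fabric, multiple channels");
        return 0;
    }

With these illustrative numbers, each channel alone reaches the 200 Mbytes/s single-channel limit, so a four-channel system clearly calls for a fabric-based design.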

A variety of switched fabrics are available with off-the-shelf support for modular data recorders. Many legacy radar data acquisition systems use RACE++, which offers up to 533 Mbytes/s per 6U VME slot. Newer systems being developed today use VITA 41 (VXS) technology to scale up to 2.5 Gbytes/s per 6U slot. VXS systems can use fabrics such as PCI Express or Serial RapidIO, or point-to-point links based on the Xilinx Aurora protocol. The choice of protocol depends on the interoperability requirements within the system and the complexity of the endpoint solution, which is typically implemented in an FPGA on each VXS card.

One benefit of using a switched fabric is the built-in support for a network model for the system as a whole. The system can be viewed as a loosely coupled set of processing nodes, each with a PowerPC processor, local memory, I/O module site and bridge to the fabric (Figure 1).

Nodes can be configured as either storage or I/O nodes, depending on the type of I/O module installed. Because each I/O module has its own dedicated processor, the software model is very simple. If a Fibre Channel module is installed, the node acts as a storage server, responding to client requests through the fabric network. Alternatively, if a sensor I/O module is installed, the node acts as both an autonomous I/O server and a storage client, managing its own I/O module and requesting storage to disk through the fabric network.
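A minimal C sketch of this software model is shown below. The type and function names are illustrative rather than an actual vendor API; the point is that each node runs exactly one server loop, selected by the module it hosts.

    #include <stdio.h>

    /* Node roles in the network model (illustrative names) */
    typedef enum { MODULE_FIBRE_CHANNEL, MODULE_SENSOR_IO } module_type;

    static void storage_server_loop(void)
    {
        /* Respond to read/write requests arriving over the fabric */
        printf("storage node: serving fabric clients\n");
    }

    static void io_server_loop(void)
    {
        /* Manage the local I/O module autonomously and act as a
           storage client, requesting disk writes through the fabric */
        printf("I/O node: acquiring data, requesting storage\n");
    }

    int main(void)
    {
        module_type installed = MODULE_SENSOR_IO;  /* set at startup */

        if (installed == MODULE_FIBRE_CHANNEL)
            storage_server_loop();
        else
            io_server_loop();
        return 0;
    }

Because the role is decided entirely by the installed module, adding a channel means adding a node, not rewriting software on the existing nodes.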

High-Speed Fiber Optic Data Transfer

In many radar applications, the sensor data being recorded is converted from analog to digital outside the recorder and is transferred using high-speed fiber optic interfaces. This approach makes it easy to insert a data recorder into the system without degrading the signal integrity of the data being acquired. The data recorder typically implements a copy mode that re-broadcasts the input data, allowing the recorder to be inserted between the sensor and its signal processor without interrupting the data flow.
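Conceptually, copy mode behaves like the loop below. In the actual recorder this path runs in the FPGA at wire speed; the C version, with stubbed I/O, only illustrates that every input packet is both queued for recording and re-broadcast downstream.

    #include <stdio.h>

    typedef struct { int seq; } packet;   /* placeholder payload */

    static packet receive_from_fiber(void)
    {
        static int n;
        packet p = { n++ };               /* stub: synthesize input */
        return p;
    }

    static void enqueue_for_recording(packet p)
    {
        printf("record  #%d\n", p.seq);   /* copy 1: storage path */
    }

    static void transmit_on_fiber(packet p)
    {
        printf("forward #%d\n", p.seq);   /* copy 2: downstream */
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {     /* bounded for the demo */
            packet p = receive_from_fiber();
            enqueue_for_recording(p);
            transmit_on_fiber(p);
        }
        return 0;
    }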

The most common format for high-speed fiber optic transmission is Serial FPDP, or ANSI/VITA 17.1. Serial FPDP supports 1.062, 2.125 or 2.5 Gbit/s physical links, providing data rates of up to 247 Mbytes/s per fiber. Serial FPDP is designed to be a simple, low-latency protocol, making it well suited for FPGA-based implementations.
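The 247 Mbytes/s figure follows from the link arithmetic sketched below: Serial FPDP uses 8b/10b line encoding, so a 2.5 Gbit/s link carries at most 250 Mbytes/s of data, and framing takes a small further cut (the 1.2% overhead figure used here is an assumption for illustration).

    #include <stdio.h>

    int main(void)
    {
        double line_rate = 2.5e9;                    /* bits/s on the fiber */
        double encoded   = line_rate * 8.0 / 10.0;   /* after 8b/10b        */
        double bytes     = encoded / 8.0;            /* -> 250e6 bytes/s    */
        double payload   = bytes * (1.0 - 0.012);    /* assumed frame cost  */

        printf("payload: %.0f Mbytes/s\n", payload / 1e6);   /* ~247 */
        return 0;
    }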

In many radar data acquisition systems, the building block that provides high-speed fiber interfaces is a PMC module, such as TEK Microsystems’ JazzFiber (Figure 2). Each of these modules provides four independent fiber optic interfaces connected to an onboard FPGA. The module also includes two banks of DDR buffer memory to support wirespeed buffering of all four data channels. When installed on a PCI-X carrier, the PMC module supports full-throughput transfers of 1 Gbyte/s between all four channels and the host.

The FPGA can be used to implement a wide range of protocols, including Serial FPDP, Fibre Channel and Gigabit Ethernet, allowing the same module to support different types of interfaces through FPGA reconfiguration. Each processing chain is independent in the FPGA, allowing a single module to support a mix of protocols if required (Figure 3).

Adjunct Data Processing Channels

While the primary mission of a data acquisition system is to record high-speed sensor data, most systems also require some amount of adjunct low-speed data to be recorded as well. This low-speed data typically gives information about the platform itself, which provides the operating context necessary for analysis of the high-speed data. In some cases, the adjunct data affects the high-speed data recording process directly, modifying the type or amount of data being recorded in real time as the platform state changes.

While the adjunct data channels tend to be lower speed, they also tend to require some processing to interpret and format the data. Adjunct data channels can use off-the-shelf interfaces—such as Ethernet, 1553, SCRAMNet and the like—or they may require tailored low-level interfaces such as serial or parallel TTL, ECL, EIA-485 or LVDS. As long as the interface can be implemented on a PMC or XMC I/O module, it can easily be integrated into the data acquisition system. The network model enforces a modular approach to adjunct channels, allowing any customization or tailoring to affect only the interface in question, not the other building blocks of the system.

Applying Effective, Accurate Timestamping

Another common requirement for data acquisition systems is the need to accept an accurate timecode input and to apply a highly accurate timestamp to all of the data streams being recorded. Typically, precise sample-to-sample timing is maintained in the sensor, but the timing of each packet of data, both high-speed and low-speed, is important both for data analysis and for precisely reproducing the data at a later time.

Effective and accurate timestamping requires a number of elements, both at the system level and at the individual processing node. First, the system needs a system-wide timebase that is distributed to all of the processing nodes and is accessible to both hardware and software in each node. Second, the system needs an IRIG or other timecode input that can be precisely synchronized to the system-wide timebase and the results broadcast to all of the processing nodes. Third, each I/O channel needs a mechanism for applying the system-wide timebase to input events as close to the actual input as possible.

In TEK Microsystems’ data acquisition systems, the switched fabric interconnect is used along with hardware support on the carrier cards to implement a system-wide timebase. In RACE++-based systems, the timebase is based on the RACE++ XCLK and has a timing accuracy of 15 ns. In VXS systems, the timebase is based on an adjunct reference clock signal and has a timing accuracy of 8 ns. Hardware support on the carrier card allows IRIG or 1 pulse-per-second inputs to be precisely timestamped against the system-wide timebase. This allows software to broadcast timing reference points to the other processing nodes once per second.
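The once-per-second broadcast can be pictured as the sketch below, which pairs the hardware-latched timebase count for a 1 pps edge with the IRIG time decoded for that same edge (the structure layout and function names are hypothetical, not the actual implementation).

    #include <stdint.h>
    #include <stdio.h>

    /* A timing reference point: the timebase count latched in hardware
       at a 1 pps edge, paired with the IRIG time decoded for that edge */
    typedef struct {
        uint64_t timebase_ticks;
        uint32_t utc_seconds;
    } time_reference;

    /* Called once per second; in the real system the pair would be
       broadcast to every processing node over the fabric */
    static void publish_reference(uint64_t latched_ticks,
                                  uint32_t irig_seconds)
    {
        time_reference ref = { latched_ticks, irig_seconds };
        printf("reference: ticks=%llu utc=%u\n",
               (unsigned long long)ref.timebase_ticks,
               (unsigned)ref.utc_seconds);
    }

    int main(void)
    {
        publish_reference(1250000000ULL, 1735689600U);  /* example edge */
        return 0;
    }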

Each processing node can perform timestamping in either hardware or software, depending on the capability of the I/O interface being implemented. Off-the-shelf I/O modules such as 1553 and Ethernet do not support hardware timestamping; instead, messages or packets are timestamped in software, with accuracies of 20 to 50 microseconds.

High-speed fiber optic interfaces using FPGA-based PMC modules can support hardware-based timestamping through the FPGA, using the system-wide timebase provided by the carrier on the PMC Pn4 connector. Each fiber optic input accesses the timebase at an early stage in the processing chain, providing a timestamp with an accuracy of 100 ns or better per packet.
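Given those reference points, converting an FPGA-applied timestamp to absolute time is a small calculation, sketched below under the assumption of an 8 ns tick (a 125 MHz timebase), consistent with the VXS accuracy figure above; the names are again illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define TICK_NS 8ULL   /* assumed 125 MHz system-wide timebase */

    typedef struct {
        uint64_t timebase_ticks;   /* latched at the last 1 pps edge */
        uint32_t utc_seconds;      /* IRIG time of that edge         */
    } time_reference;

    /* Absolute time of a packet, in ns since the UTC epoch */
    static uint64_t to_absolute_ns(uint64_t packet_ticks,
                                   const time_reference *ref)
    {
        uint64_t delta = packet_ticks - ref->timebase_ticks;
        return (uint64_t)ref->utc_seconds * 1000000000ULL
               + delta * TICK_NS;
    }

    int main(void)
    {
        time_reference ref = { 1250000000ULL, 1735689600U };
        printf("%llu ns\n", (unsigned long long)
               to_absolute_ns(1250001000ULL, &ref));   /* +8000 ns */
        return 0;
    }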

During playback, the same FPGA-based approach is used to precisely reproduce high-speed data, using the timestamp and the system-wide timebase to hold back transmission of each packet until the exact time it is required. For applications where such timing is critical, this allows the system to precisely reproduce the packet timing that was recorded.
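The release rule amounts to the loop below. In practice the hold-back is done in the FPGA against the hardware timebase; this C version with a simulated timebase simply shows that a packet is not transmitted until the timebase reaches its recorded timestamp.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t now;                           /* simulated timebase */
    static uint64_t read_timebase(void) { return now++; }
    static void transmit(int pkt_id) { printf("tx packet %d\n", pkt_id); }

    /* Hold each packet until the timebase reaches its recorded
       timestamp, reproducing the original inter-packet timing */
    static void paced_transmit(int pkt_id, uint64_t release_ticks)
    {
        while (read_timebase() < release_ticks)
            ;                                      /* wait until due */
        transmit(pkt_id);
    }

    int main(void)
    {
        paced_transmit(1, 100);   /* releases at tick 100           */
        paced_transmit(2, 250);   /* preserves the recorded gap     */
        return 0;
    }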

Applying Static or Dynamic Windowing

In some applications, the high-speed data input to the system contains a mix of critical and non-critical data. Because volume, weight or cost constraints often limit the number of attached RAID storage devices, the application must selectively record or discard portions of the high-speed data streams based on the available throughput.

In some applications, the windowing algorithm is determined in advance as a part of the mission configuration. In other applications, the algorithm is driven by adjunct channel input during the mission and needs to be applied to the high-speed data in real time.

The network model can be used to minimize the complexity of these different implementations. The I/O server software that controls the high-speed data is designed to accept windowing parameters from other processing nodes. If the windowing parameters are static, they are simply defined at the beginning of the mission and are not changed. If the windowing parameters are dynamic, they can be changed in real time by the adjunct channel processing node through a simple API call.
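The sketch below shows what such an API can look like; the structure fields and function name are hypothetical, chosen only to show that a static configuration and a real-time update go through the same call.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical windowing parameters for one high-speed channel */
    typedef struct {
        uint32_t start_offset;   /* first byte of each frame to keep */
        uint32_t length;         /* bytes to keep per frame          */
        int      enabled;        /* 0 = record the full stream       */
    } window_params;

    static window_params current;   /* owned by the I/O server node */

    /* Invoked over the fabric; takes effect on the next frame */
    static void set_window(const window_params *p)
    {
        current = *p;
        printf("window: offset=%u length=%u enabled=%d\n",
               current.start_offset, current.length, current.enabled);
    }

    int main(void)
    {
        window_params mission_start = { 0, 4096, 1 };
        set_window(&mission_start);          /* static case: set once */

        window_params updated = { 1024, 2048, 1 };
        set_window(&updated);    /* dynamic case: adjunct node update */
        return 0;
    }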

To maximize the efficient use of memory and fabric resources in the system, it is usually best to apply windowing to the high-speed data as early as possible in the processing chain. For fiber optic data, the FPGA processing capability on the PMC module can easily accommodate either static or dynamic windowing as part of the high-speed data processing chain (Figure 4).

Industry Standard File System

Building a data acquisition system on a network model, supporting a mix of high-speed and adjunct channels, and implementing timestamping and windowing ensures that the storage nodes are presented with the right data, along with enough additional information to make that data useful. The user still needs to be able to access the recorded data for analysis, transcription or playback. Typically, data is recorded once but accessed many times after the mission is over, making the data format on disk a critical part of the overall usability and effectiveness of the data acquisition system.

One approach to data formatting is to use a real-time implementation of the standard FAT32 file system for all data recording and playback. This file system is directly supported by Windows, Linux and Solaris workstations, allowing RAID storage arrays to be directly accessed by standard workstations without requiring special software or drivers.

Each channel of data is written to its own file on the disk, with a common format for headers and other adjunct information such as timestamps. The use of a standard file system and a common format enables the development of a body of transcription and verification software that is common to a range of data acquisition systems and largely independent of the specific type of data being recorded.
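The record layout can be pictured as the header below, a sketch of the kind of common format described rather than the actual on-disk layout: each channel's file is a sequence of header-plus-payload records, so generic transcription and verification tools can walk a file without knowing what kind of sensor produced it.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative per-record header, not the actual disk format */
    #pragma pack(push, 1)
    typedef struct {
        uint32_t magic;           /* constant marker, allows resync   */
        uint32_t channel_id;      /* which input produced the record  */
        uint64_t timestamp_ns;    /* absolute timestamp of the packet */
        uint32_t payload_bytes;   /* length of the data that follows  */
    } record_header;
    #pragma pack(pop)

    int main(void)
    {
        printf("header size: %zu bytes\n", sizeof(record_header));
        return 0;
    }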

Signal processing systems today make use of switched fabrics to create modular, scalable solutions, using FPGA-based processing to achieve higher densities than are possible with general-purpose processors. The same tools and techniques support a scalable and flexible approach to building data acquisition systems. Leveraging off-the-shelf hardware and software, with tailoring when necessary through FPGA-based processing, allows the use of industry standard components, enclosures, backplanes, I/O modules and RAID disk arrays. This makes it possible to develop very high-performance data acquisition and playback systems, with application-specific tailoring where necessary, while reusing existing hardware, software and FPGA components for the majority of the system.

TEK Microsystems
Chelmsford, MA.
(978) 244-9200.
[www.tekmicro.com].