TECHNOLOGY IN CONTEXT

FPGA-BASED BOARD SOLUTIONS

FPGAs Are Everywhere – In Design, Test & Control

Expanding beyond the supporting role of glue logic to solve only what could not be solved by off-the-shelf logic, FPGAs are now center stage with the ability to complete a fully custom SoC design targeted specifically to the needs of the application.

WAYNE MARX, XILINX, AND VINEET AGGARWAL, NATIONAL INSTRUMENTS


Imagine if a single-board design could fulfill the needs of every project. What if you could configure an entire board, or several boards in the same platform, to meet the needs of any part of a system, or all the needs of a complete system? What if you could rapidly fashion system hardware components such as microprocessors, peripherals, filters, control loops and UART, SPI and I2C controllers (add your desired functionality here) in the right mix to meet the needs of your application specifically and exactly?

Ten years ago this would have seemed like pie-in-the-sky thinking, when most standard logic blocks like Ethernet, CAN and USB controllers; microprocessors and memory controllers; and UARTs were still relegated to off-the-shelf hard logic silicon devices. However, FPGAs are now achieving attractive volume price points and sizes, such that even a full 32-bit microprocessor constitutes only a small fraction of overall cost and size. FPGAs now represent a more viable option than ever for tackling nearly any type of application imaginable. This includes designing FPGAs into products, using FPGAs to test products or even using FPGA-based hardware for controlling and manufacturing products.

FPGAs are everywhere, and with the enormous amount of FPGA-targeted intellectual property (IP) available from FPGA hardware vendors, third-party IP suppliers and the FPGA community, you can take an idea from paper to silicon more rapidly than you might think. With all but the application-unique IP blocks already created, building a completely unique system targeted to your specific needs becomes largely a matter of integration and assembly. FPGA vendors and companies with FPGA-based products have taken this to the next level of productivity by introducing higher levels of abstraction: design tools that integrate IP components and I/O through graphical block diagrams.

The value of using FPGAs goes beyond simply being able to tackle many applications with one board, because you can also solve problems with more degrees of freedom. Previous board solutions generally contained a fixed microprocessor or application-specific standard product (ASSP) and associated hard logic in a rigid architecture, limiting the ways in which performance could be achieved; an FPGA-based board has an architecture that can be tuned for accelerated performance. Tasks can be optimally moved between hardware and software and implemented to operate in parallel: by adding more soft microprocessors to the mix, by duplicating hardware function blocks, or by a mixture of the two, such as adding coprocessing components directly to the microprocessors themselves. The combination of performance and flexibility provides clear benefits for any given application, and whether you are working with embedded designs, test equipment or control systems, FPGAs have become an integral part of our world today.

FPGAs in Embedded Designs

It is clear that FPGAs are employed in a wide range of applications today. You are probably already considering an FPGA for your next design for basic benefits such as flexibility and integration. However, as the available IP catalog continues to grow, more FPGA designs are beginning to look like system-on-a-chip (SoC) designs. With the majority of SoC designs containing a microprocessor, it is no coincidence that one of the most recent arrivals on the FPGA IP scene is the embedded 32-bit microprocessor. Creating application-specific embedded designs using FPGAs as the base technology is gaining traction; a Gartner report shows that by 2010, more than 40 percent of all FPGA designs will contain an embedded microprocessor.

With soft processor implementations running at more than 200 MHz and hard block implementations exceeding twice that speed, nearly 80 percent of all embedded 32-bit application needs are addressable inside an FPGA. Having the microprocessor inside the FPGA does not imply compromises either. As an example, Xilinx offers a soft 32-bit microprocessor called MicroBlaze with configurable instruction and data cache sizes and an optional memory management unit (MMU) for protected memory accesses. While this soft core can be targeted to any of its FPGA devices, Xilinx also offers a hard 32-bit PowerPC processor in its higher-end Virtex line. With the open-standard processor local bus (PLB) interface present on both of these processors, they can connect to a large number of supplied peripherals and acceleration logic (Figure 1).

For many reasons, it makes sense to incorporate embedded processing functionality into an FPGA—some reasons being less obvious than others. To begin, there is no fixed architecture implementation and therefore no fixed boundary between which functions are performed in hardware versus software. Therefore, there is a wide spectrum of possible solutions to explore for any application, ranging from very generic to application-specific.

On the generic end, the processor and associated peripherals can be configured on a shared bus, with the single master CPU having access to all slaves. This approach covers the widest range of functions. The advantage is simplicity of system architecture and the ability to accommodate changes in system requirements simply by modifying software code and recompiling. The downside is that it places most of the demands on the CPU, such as moving data through the system and performing computations, and it consumes shared bus bandwidth with every data transaction. The microprocessor and bus may run out of steam very quickly as they service more of the system's needs.
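As a concrete sketch of this single-master, polled style, the C fragment below models a hypothetical memory-mapped UART slave; the register layout and bit positions are illustrative, not those of any particular core. Note that the CPU issues one bus transaction per byte, which is exactly the traffic that consumes shared bus bandwidth.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical register map for a simple UART slave on a shared bus.
   Addresses and bit assignments are illustrative only. */
typedef struct {
    volatile uint32_t status;   /* bit 0: TX FIFO full */
    volatile uint32_t tx_data;  /* write a byte here to transmit */
} uart_regs_t;

#define UART_STATUS_TX_FULL 0x1u

/* Single-master polled I/O: the CPU does all the work, one bus
   transaction at a time -- the "generic" end of the spectrum. */
static void uart_send(uart_regs_t *uart, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while (uart->status & UART_STATUS_TX_FULL)
            ;                      /* spin until the TX FIFO has room */
        uart->tx_data = buf[i];    /* one bus write per byte */
    }
}
```

Every byte costs the CPU a status poll plus a data write; multiply that across several peripherals and the shared bus quickly becomes the bottleneck the text describes.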

On the opposite end of the spectrum, the system can be architecturally tuned to meet an application-specific need, or it can be scaled to support more channels or higher throughput. These changes are accommodated in the FPGA by adding “point-of-use” dedicated hardware functionality in the form of interconnect buses, direct memory access (DMA) engines or hardware acceleration logic. Optionally, entire microprocessor subsystem blocks can be replicated inside the FPGA. No matter the approach, the net result is the same—higher system-level performance.

As an example, for TCP/IP processing, it may make sense to have hardware-assisted offload functionality for checksum rather than force the CPU to perform that function. In addition, reducing the need to have the microprocessor perform block data transfers by incorporating DMA can dramatically affect the overall system performance and significantly reduce the demand on CPU bandwidth.
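To make the checksum example concrete, here is a software reference in C for the 16-bit ones'-complement Internet checksum (RFC 1071); this per-word summing and carry folding is precisely the work a hardware offload block removes from the CPU.

```c
#include <stdint.h>
#include <stddef.h>

/* 16-bit ones'-complement Internet checksum (RFC 1071), computed in
   software -- the per-packet work a checksum-offload block takes over. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {                       /* sum 16-bit big-endian words */
        sum += (uint32_t)(data[0] << 8 | data[1]);
        data += 2;
        len -= 2;
    }
    if (len)                                /* pad a trailing odd byte with zero */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                       /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;                  /* ones'-complement of the sum */
}
```

In hardware this loop becomes an adder sitting in the datapath, updating as each word streams past, so the CPU never touches the payload at all.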

Some tuned architectures necessitate a different topology. For example, it may be advantageous to keep data in one location and have the processing elements operate on the data in situ rather than moving the data through the system to each new processing stage. A multi-ported memory controller to a common memory element would fit the need in this case.

Many control applications require precision processing of wide dynamic-range device input and output voltages and currents at varying degrees of performance. These calculations are well served in the floating-point domain; however, achieving precision with performance while using floating-point emulation is not always possible. Having the ability to selectively accelerate floating-point calculations by using a floating-point coprocessor to balance performance and cost is valuable. A soft processor can have an optional floating-point unit (FPU) that is tightly coupled to the CPU and can be selectively designed in with a simple parameter option. Likewise, a full double-precision FPU is available for the PowerPC.
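As an illustration of what software falls back to when no FPU is available, here is a minimal Q16.16 fixed-point sketch in C (16 integer bits, 16 fractional bits); the format is illustrative, and a real design would pick the split to match its dynamic-range and precision requirements.

```c
#include <stdint.h>

/* Q16.16 fixed point: a common software fallback on a soft CPU
   without a floating-point unit.  Format choice is illustrative. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)   /* 1.0 in Q16.16 */

static q16_16 q16_from_double(double x) { return (q16_16)(x * Q16_ONE); }
static double q16_to_double(q16_16 x)   { return (double)x / Q16_ONE; }

/* Multiply in a 64-bit intermediate, then shift back to Q16.16. */
static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}
```

Each multiply costs a widening multiplication and a shift, and the designer must track range and rounding by hand; a tightly coupled FPU option removes that burden while keeping the cost configurable.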

Once the design is optimally tuned within an FPGA-based board, there is no reason it has to stay there forever. For many applications, you can use a more generic prototyping platform for the initial definition stages with the intent of eventually moving to a platform with lower cost, higher performance or a different form factor. A natural benefit of a soft IP core is the ability to retarget it to another hosting device should the need arise. Given that the primary FPGA vendors offer a wide range of devices varying in density, performance and cost, several retargeting options are generally available. In fact, most IP blocks supplied by the FPGA vendors themselves are qualified for all of their device families, thereby eliminating most of the retargeting task. By taking advantage of open interconnect standards for application-specific IP, developers can gain the same portability benefits for their custom blocks. In addition, as vendors introduce new devices, they continue to qualify their IP on them, which makes this essentially a future-proof approach that reduces design cycle time and concerns about device obsolescence.

FPGAs in Test Applications

While there may be clear benefits to using FPGAs in your next embedded design, taking advantage of FPGA technology in test applications might be less apparent. From design validation to automated test equipment, all test systems can benefit from the low-level I/O control and parallelism that FPGAs provide.

When characterizing the performance and functionality of a device under test (DUT), you can only be as precise as your measurement system, and using FPGA chips as the primary interface adds high-speed intelligence to your I/O. Verifying the unique features of any device can be accomplished in hardware, and incrementally adding test functionality is simplified with parallel operation.

A good example of leveraging FPGA performance in test systems is the NASA Goddard Space Flight Center’s ongoing development of the James Webb Space Telescope. To mask out unwanted light from distant solar systems, engineers at NASA are perfecting a custom microelectromechanical system (MEMS) device with thousands of tiny microshutters in a configurable grid formation. The shutters open and close by synchronizing a passing magnetic field with more than 500 digital lines that index individual shutters on the grid (Figure 2).

The lifetime of every component on the James Webb Space Telescope must be guaranteed for the 10-year mission in space, and NASA uses FPGA-based hardware for the reliability testing of the MEMS chip design. A custom ASIC would not give NASA the flexibility to iterate on its test system as the design improved, and a software-based approach would not provide the necessary low-level timing or reliability. NASA saved hundreds of man-hours and thousands of dollars by going with off-the-shelf FPGA-based hardware.

Another key advantage of using FPGAs in test equipment is the diversity of options when implementing digital communication interfaces. You can program both standard and custom digital protocols at the physical layer with complete control over timing and synchronization. There is an abundance of available IP ranging from basic SPI to high-speed serial PCI Express interfacing, and much of it is open source and free for the taking. In situations where the DUT uses digital buses that are proprietary or confidential, fixed ASIC interfaces face challenges with regard to maintenance and forward compatibility; FPGA designs, however, can be upgraded over time without any physical hardware changes.
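As a sketch of the kind of physical-layer behavior being described, the C fragment below bit-bangs one SPI mode-0 byte transfer, MSB first; the pin-driver callbacks stand in for FPGA pin logic, and the loopback wiring at the end exists purely so the routine can be exercised without hardware.

```c
#include <stdint.h>

/* Pin-level callbacks standing in for FPGA I/O drivers (illustrative). */
typedef struct {
    void (*set_clk)(int level);
    void (*set_mosi)(int level);
    int  (*get_miso)(void);
} spi_pins_t;

/* One SPI mode-0 byte transfer, MSB first: data is set up while the
   clock is low and sampled on the rising edge. */
static uint8_t spi_transfer_byte(const spi_pins_t *pins, uint8_t out)
{
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        pins->set_mosi((out >> bit) & 1);   /* set up data while CLK is low */
        pins->set_clk(1);                   /* rising edge: sample MISO */
        in = (uint8_t)((in << 1) | (pins->get_miso() & 1));
        pins->set_clk(0);                   /* falling edge: next bit */
    }
    return in;
}

/* Loopback "wiring" (MOSI tied to MISO) for demonstration only. */
static int loop_level;
static void loop_set_clk(int level)  { (void)level; }
static void loop_set_mosi(int level) { loop_level = level; }
static int  loop_get_miso(void)      { return loop_level; }
```

In an FPGA the same state machine runs in dedicated logic with cycle-accurate timing, and a proprietary DUT bus is handled by editing this protocol description rather than respinning silicon.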

FPGAs in Control Systems

Control systems provide another great example of how FPGAs have clearly grown from their roots as hardware glue logic and interface chips. Closed-loop control systems vary across many different industries, but the primary factor affecting overall system performance is often the speed of the control loop. At the most fundamental level, the loop speed is the total time needed to read sensor inputs, process the control algorithm and output the resulting values to the actuators. An FPGA-based hardware solution has the unique advantage of true hardware parallelism; independent control loops can run at different rates without relying on shared resources that might slow down their responsiveness (Figure 3). Multiple control loops running on processor-based systems, on the other hand, must compete for processor bandwidth, and different parts of the application have the potential to starve one another and induce jitter into time-critical tasks.

The ability to dedicate a particular section of FPGA circuitry for a specific control function gives system designers the hardware guarantee of reliable operation without the risks of resource unavailability. Just like the earlier examples of design and test applications, the available IP blocks for controls help system designers get a jump start on using FPGA-based hardware. Whether it is basic PID control or advanced algorithms like model-free adaptive (MFA) control, pre-built IP exists to simplify the FPGA design so you can spend more time on other aspects such as tuning the values of mathematical coefficients.
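For reference, the arithmetic that a basic PID IP block evaluates on each loop iteration can be sketched in C as follows; the struct layout is illustrative, and as the text notes, the gains in any real system come from tuning.

```c
/* Minimal discrete PID update -- the same arithmetic a pre-built PID
   IP block evaluates in hardware on every loop iteration. */
typedef struct {
    double kp, ki, kd;    /* proportional, integral, derivative gains */
    double integral;      /* accumulated error (integrator state) */
    double prev_error;    /* error from the previous iteration */
} pid_state_t;

/* One control-loop iteration: compare setpoint to measurement and
   compute the new actuator command.  dt is the loop period in seconds. */
static double pid_update(pid_state_t *s, double setpoint,
                         double measured, double dt)
{
    double error = setpoint - measured;
    s->integral += error * dt;
    double derivative = (error - s->prev_error) / dt;
    s->prev_error = error;
    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}
```

The FPGA advantage is that many such loops can run truly in parallel, each at its own rate in its own slice of fabric; in software the same loops would contend for one CPU.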

As system requirements become more complex, FPGAs also provide the scalability to add functionality without having an impact on the rest of the system. A machine control application, for example, might require the addition of sensors for monitoring the system temperature and mechanical vibrations, with the goal of detecting the early stages of machine failures. An FPGA-based system could add that functionality without affecting the machine control and even help integrate an emergency shut-down sequence to ramp down the machine safely and reliably.

Over time, parts of the actual system being controlled are likely to change, and a controller needs to be able to adapt to these changes. An FPGA-based approach provides hardware reconfigurability to evolve with application needs. With reprogrammable hardware, you can recompile the FPGA to accommodate new and improved algorithms, different types of I/O and bug fixes, all of which can be accomplished in the field. Avoiding a complete hardware redesign reduces long-term sustaining costs and system downtime.

FPGAs are everywhere, and they are growing within all different facets of design, test and control. The flexibility and performance of FPGAs combined with the rich set of available IP provide a spectrum of solutions for a range of problems, all with multiple degrees of freedom within a single piece of hardware. While making system component choices normally limits options down the road, the choice to use an FPGA-based solution actually further expands the options available to you. Even throughout the life of the device, the options continue to exist, and you can carry system upgrades into future generations with relative ease.

Xilinx

San Jose, CA.

(408) 559-7778.

[www.xilinx.com].

National Instruments

Austin, TX.

(512) 683-9300.

[www.ni.com].