By John Aldon, PhD, President, MILCOTS
Integrating a display into a subsystem, whether it is a 24-inch large-format display for an operator console or a 15-inch rugged panel PC controlling a gun system, may seem an obvious and straightforward task. Field experience tells a different story: many factors influence the performance of a video link, regardless of the display being used. We provide a few examples of real situations and emphasize the best ways to anticipate problems, saving time and frustration for all parties.
Knowing the video sources
As with all topics, a display manufacturer has to deal with various situations when discussing a new project with a customer, and aligning expectations can be a challenge if key points are not properly reviewed upfront. Most customers will not spontaneously disclose much about the architecture of the system the display will be embedded in. Even if a display may seem a pretty simple item, some of the systems we deal with, such as a weapon control station, a multi-function operator console or a large 55” 4K damage control panel, involve many customer-controlled subassemblies that may lean on legacy, obsolete video sources. Assuming that the video feed provided by the customer always meets today’s standards is like shooting in the dark: it may work, but it can also lead to a tremendous amount of time spent afterwards when, for whatever reason, the final performance of the video link falls short of expectations. A recent representative example was the request for an HD-SDI port as an alternate video input on a 17” display, without mentioning that the video feed delivered a 30 Hz signal. The video controller planned for this project handled HD-SDI at 60 Hz and was unable to accept a 30 Hz signal. This last-minute finding resulted in five weeks of delay to adjust the configuration.
Key #1: Knowing the video sources (whether a computer generating a still picture at a given resolution and frequency, a fast-moving video, or a camera feed used as a picture-in-picture input…) is mandatory to allow the display manufacturer to identify, during the design phase and in the acceptance test procedure, all the video configurations required, and to test every single one of them prior to shipping.
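The kind of mismatch described above can be caught mechanically once the sources are known. A minimal sketch (our own illustration, not production code; the controller mode list and source names are assumptions) checks every declared video source against the modes a hypothetical display controller supports, so a 30 Hz feed against a 60 Hz-only controller surfaces during design review rather than at acceptance testing:

```python
# Illustrative sketch: flag declared video sources that the planned display
# controller cannot handle. The mode list and sources below are invented
# examples, not data from any real program.

# Modes the (hypothetical) video controller supports: (width, height, refresh_hz)
CONTROLLER_MODES = {
    (1920, 1080, 60),
    (1280, 1024, 60),
    (1024, 768, 60),
}

def unsupported_sources(sources):
    """Return the declared sources whose timing the controller does not support."""
    return [s for s in sources
            if (s["width"], s["height"], s["refresh_hz"]) not in CONTROLLER_MODES]

# Declared customer sources, including a 30 Hz HD-SDI feed like the one in
# the example above.
declared = [
    {"name": "console PC, DVI", "width": 1920, "height": 1080, "refresh_hz": 60},
    {"name": "camera feed, HD-SDI", "width": 1920, "height": 1080, "refresh_hz": 30},
]

for s in unsupported_sources(declared):
    print(f"UNSUPPORTED: {s['name']} @ {s['refresh_hz']} Hz")
```

Running a check like this against the full source list during the design phase is a cheap way to turn a five-week slip into a one-line design-review finding.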
Understanding the hardware configuration
Running a rapidly changing, high-resolution video on a display is one of the factors that increase the risk of video degradation noticeable to the human eye, and should be a flag triggering a careful evaluation of the complete video link. Avoiding post-delivery issues requires spending a substantial amount of time questioning customers about the architecture of their systems. The typical questions relate to the use of KVMs, the length and nature of the video cables and connectors, the number of cable connections, the type of computer or graphics cards present in the system, the video ports that will be used to feed the display, the nature of the power supply… These questions come on top of the usual and mandatory environmental requirements. This step should be carried out thoroughly by experienced engineers. As an example, past experience has shown that KVMs are not created equal, and that five or more DVI segments, each with MIL connectors, are not a rare occurrence in operator consoles. Those two points alone are sufficient to generate poor video results, especially for high-resolution signals and regardless of the quality of the display itself. The blame is usually put on the display alone: “that’s where the problem is noticed, right…?” The experience acquired dealing with various HMI defense systems allows us to identify such situations quite rapidly and suggest standard design rules to correct potential problems or mitigate risks. Among the first that come to mind: avoid non-amplified KVMs, verify that EDID files can be read by the computer through the KVM, avoid a high number of cable segments between the computer and the display, and carefully specify custom DVI cables by paying attention to twisting and shielding the pairs close to the terminations, especially when video signals are expected to be close to the maximum bandwidth of a DVI link.
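The "verify that EDID files can be read through the KVM" rule can be checked with a few lines of code once the EDID bytes have been read back (on Linux, for instance, from `/sys/class/drm/.../edid`; the read-back path is an assumption here). Per the VESA EDID standard, the 128-byte base block starts with a fixed 8-byte header and all 128 bytes must sum to zero modulo 256, so a KVM that blocks or corrupts the EDID is easy to detect:

```python
# Minimal EDID base-block sanity check (a sketch, not a full EDID parser).
# A KVM that blocks or corrupts EDID read-back is a common cause of the
# computer falling back to a wrong default resolution.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_valid(block: bytes) -> bool:
    """Validate a 128-byte EDID base block: length, fixed header, checksum."""
    if len(block) != 128:
        return False
    if block[:8] != EDID_HEADER:
        return False
    # Per the VESA EDID spec, all 128 bytes must sum to 0 modulo 256.
    return sum(block) % 256 == 0
```

If the block read through the KVM fails this check while the same display passes it on a direct cable, the KVM (not the display) is the component to investigate.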
Another recent example, representative of such a situation, was brought to us by a customer complaining about noise on a display. The same display passed all factory tests with no issue, regardless of the video input. We ended up sending an engineer on site to investigate further, and finally discovered that the neutral and phase lines were wired incorrectly in the power supply feeding the display. Swapping those two wires resolved the issue, after a month of back and forth.
Key #2: Understanding the hardware configuration of the video link, and following basic design rules from the video source to the display, is paramount to avoid signal distortion, noise or pixelation on the display.
Understanding that low temperature affects performance
Among the other factors that influence the performance of a display, and that are not necessarily identified upfront by customers (we spend quite some time educating young engineers, and we enjoy it), is the behavior of the display with temperature. A representative example is the requirement for a display to operate at -40 °C, which is typical for most ground army applications. This requirement is usually coupled with a not-to-exceed warm-up time (usually a few minutes) and a not-to-exceed power draw (usually less than 100 W for the smallest displays, less than 200 W for the mid-size range). The question that inevitably has to be answered right after this requirement is identified is the nature of the information that will be displayed on the screen. A still picture showing various icons can be accommodated fairly easily with a combination of heaters on the LCD and on the front filter. Running a video from a surveillance system on the screen is another challenge, requiring a better understanding of how fast the image is expected to change over time and of the nature of the video signal feeding the display. A good example of a challenging low-temperature situation is a gun control panel fed by an electro-optical sensor: due to the nature of the mission, those panels are expected to provide near real-time performance over their entire operating temperature range, which extends to -40 °C. Even if the latency information and the temperature range are usually listed in the various component specs, the system requirements lead to using COTS components outside their OEM-suggested operating ranges, where information is not available and performance is not guaranteed. The mandatory step for fully assessing the performance of a display at low temperature is a real test in a thermal chamber, which requires a representative test display.
For obvious cooling reasons and their impact on the internal temperature of components, the packaging of the test display shall be thermally close to the final product to provide accurate and representative results. When it comes to integrating heaters in the test display, we always plan for extra power capability: this allows for a quick evaluation of the performance beyond the power limit specified by the customer. Past experience has shown that customers are often able to give away some extra power during the warm-up time, and tend to favor a tradeoff where video performance and a quicker warm-up time take priority over power dissipation. Even if the rest of the components constituting the video channel inside a display shall not be neglected, the electronics processing the video signal are usually a second-order matter, and the overall video performance of the display with temperature is mostly driven by the LCD (this is especially true when there is no complex video processing involved, and when the mission of the monitor is limited to displaying a digital or analog video signal through a video controller built around a specialized chip).
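The heater power tradeoff discussed above can be framed with a back-of-the-envelope estimate before the thermal-chamber test. The sketch below uses a lumped thermal-mass model that ignores losses to the environment, and every number in it (thermal mass, temperature rise, power levels) is an assumed illustration, not measured data from any program:

```python
# Back-of-the-envelope warm-up estimate: lumped thermal mass, constant
# heater power, losses to the environment ignored. Illustrative numbers
# only; only a real thermal-chamber test gives representative results.

def warmup_time_s(thermal_mass_j_per_k: float, delta_t_k: float, heater_w: float) -> float:
    """Seconds to raise the LCD stack by delta_t_k at constant heater power."""
    return thermal_mass_j_per_k * delta_t_k / heater_w

# Example: a panel stack with an assumed ~400 J/K thermal mass, warmed
# from -40 °C to an assumed -20 °C usable threshold (a 20 K rise):
for power_w in (60, 100, 150):
    t = warmup_time_s(400, 20, power_w)
    print(f"{power_w:>3} W -> {t / 60:.1f} min")
```

A quick table like this is useful in the tradeoff discussion: it shows the customer roughly how much warm-up time a given extra power allowance buys, before committing to chamber time.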
Key #3: Understanding that operating requirements might prevent the display from achieving full performance, especially when low-temperature startup and operation are required. Prioritizing the requirements will help converge faster if tradeoffs have to be discussed with the customer and end user.
Understanding that dealing with rapidly changing high resolution video adds to the challenge at low temperature
When it comes to selecting an LCD optimized for fast-moving videos (such that the human eye and brain would identify the video as “fast moving”), the response time of the LCD is often the first criterion considered by the design engineer. Even though the overall latency of the display is driven by the response time of the LCD, measurements show that the latency can be quite far from the published response times found in the LCD specification. One of the difficulties with meeting the latency listed in the customer specification for the display is the location on the screen where this value shall be measured: it is not necessarily obvious, but latency varies significantly from the top left corner of the screen to the bottom right one.
Here again, temperature plays a significant role in the response time of the LCD: liquid crystals are organic molecules, and their ability to change state (pixel switching) is strongly affected by the ambient temperature. Liquid crystals need some time to switch from one orientation to the next, and submitting them to a rapidly changing electric field (as required to properly display a fast-moving video) makes each cold pixel act as an electro-mechanical filter: part of the information is too fast to be processed properly, and the user ends up with a “choppy” video on the screen. The video degrades as the temperature decreases, down to the limit where all crystals are “frozen” and unable to change state at extreme low temperatures.
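The temperature dependence described above can be captured by a simple model: response time scales with the rotational viscosity of the liquid-crystal material, which grows roughly exponentially as temperature drops (Arrhenius-like behavior). The sketch below is an illustration of that trend only; the activation energy and the 25 °C reference response time are assumed values, not data for any particular LCD:

```python
# Illustrative Arrhenius-style model of LCD pixel response time versus
# temperature. The 8 ms reference at 25 °C and the 0.45 eV activation
# energy are assumptions chosen to show the trend, not measured values.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def response_time_ms(temp_c: float, ref_ms: float = 8.0,
                     activation_ev: float = 0.45, ref_c: float = 25.0) -> float:
    """Estimated response time at temp_c, scaled from the 25 °C reference."""
    t = temp_c + 273.15
    t_ref = ref_c + 273.15
    return ref_ms * math.exp(activation_ev / K_B * (1.0 / t - 1.0 / t_ref))

for c in (25, 0, -20, -40):
    print(f"{c:>4} °C -> {response_time_ms(c):8.1f} ms")
```

Even with made-up constants, the exponential shape makes the point: a panel that is comfortably fast at room temperature can be orders of magnitude slower near -40 °C, which is exactly the “choppy video” the user perceives.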
Key #4: Understanding that not all video signals will behave the same way: a rapidly changing, high-resolution video signal might appear degraded on the screen, especially at low temperatures.
Understanding that selecting the right LCD involves numerous criteria, including what the market has to offer
Most of the demanding applications that we deal with require the LCD to be enhanced, whether because NVIS requirements are listed in the specification or because high-bright performance is expected. For a good number of ground army applications, we usually have to deal with both NVIS and high-bright. When summing up all the LCD requirements (size, resolution, high contrast ratio, fast response time, wide temperature range, industrial grade, recent start of mass production to avoid near-term obsolescence, wide viewing angles, a rear housing design allowing for NVIS and high-bright backlight enhancements…), the display manufacturer is usually left with zero, one, or sometimes two choices to work from. This is the type of situation where the quality of the exchanges with the customer is paramount, and where the relationship usually evolves from vendor-to-client to a closer partnership with a common goal: finding the best compromise, starting with what the market has to offer, to avoid a dead end.
The first responsibility of the display manufacturer is to establish an honest list of potential deviations from the customer specification, identifying the risks and the associated technical options to address them. Assuming this step does not turn out to be a show stopper for the customer (and it frequently leads to similar discussions between the customer and the end user to validate the potential tradeoffs), the LCD can then be selected and the tests to characterize its performance can start.
Key #5: Understanding that not all LCDs are created equal. Programs with very long life cycles and demanding requirements imply starting from an industrial LCD, from a reputable OEM, designed for near-24/7 operation and optimized for wide temperature ranges. The most recent commercial LCDs, designed for office applications for example, might offer better performance and a lower price, but will not provide the key benefits of the industrial versions: long-term availability, ability to be enhanced, long-term support, and advance EOL notices.
Understanding that new challenges will have to be tackled, regardless…
One interesting question we had to deal with recently was a customer request to provide, during operation, an indication somewhere on the display that the video was degraded. This was practically done by adding a colored light indicator (an amber LED) on the bezel. The LED turns on when the degraded condition is reached. The key question is of course how to establish a discrete threshold that will trigger the amber light, knowing that the criterion of video degradation can be quite subjective (each observer may have their own idea of when the video starts to be degraded), depends on temperature, and depends on the nature of the video signal. Solving this problem can be done in a number of different ways. The “smart,” complex way (which we avoided) requires processing a set of data that includes the ambient temperature, the temperature of the LCD, the resolution of the video signal, and a monitoring of the video signal to track its variation over time. This approach would require data processing and a dedicated processor to handle the information, with a potential risk of further degrading the latency. A simpler, more pragmatic way (which we selected) requires some discussion with the customer to identify a commonly agreed visual criterion using a reference video. Playing the reference video at various low temperatures identifies when the video starts meeting the degraded criterion. Recording that temperature then gives a simple threshold that can be used to trigger the amber light and warn the user of the condition, with no impact on the latency. This threshold can eventually be adapted to new signals, on a program-by-program basis.
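The pragmatic approach described above reduces, at runtime, to comparing the LCD temperature sensor against the stored threshold. In the sketch below the -25 °C threshold is an invented placeholder (the real value would come from the reference-video chamber test), and the small hysteresis band is our own addition to keep the LED from flickering around the threshold:

```python
# Sketch of the temperature-threshold approach to the "degraded video"
# amber LED. The threshold value is a placeholder; the real one is the
# temperature at which the agreed reference video met the degraded
# criterion in the thermal chamber.

DEGRADED_THRESHOLD_C = -25.0   # placeholder; from the chamber test in practice
HYSTERESIS_C = 2.0             # keeps the LED from flickering near the threshold

def update_degraded_led(lcd_temp_c: float, led_on: bool) -> bool:
    """Return the new state of the amber LED given the LCD sensor reading."""
    if lcd_temp_c <= DEGRADED_THRESHOLD_C:
        return True   # cold enough that the reference video looked degraded
    if lcd_temp_c >= DEGRADED_THRESHOLD_C + HYSTERESIS_C:
        return False  # comfortably above the threshold
    return led_on     # inside the hysteresis band: keep the current state
```

Because the comparison involves only a sensor read and a stored constant, it adds nothing to the video path, which is precisely why this approach carries no latency penalty.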
Key #6: Regardless of all the previous keys, there is a high chance that something unexpected will pop up… :)
360 Route 59
Airmont, NY 10952