The Power Of Pixel Response
Analyzing the impact of LCD pixel response on images in motion.
“Blur,” “jaggies,” and “smear” artifacts stem from the way LCD/TFT displays present images. Unlike a CRT, which scans an image onto the screen pixel by pixel and line by line, an LCD/TFT panel presents an entire frame at once: every pixel is held on for the full duration of the frame period. A CRT's scanning beam displays one pixel or line at a time by exciting phosphor bonded to the glass of the tube. By the time the beam moves on to the next pixel or line, the previous pixels and lines have already started to decay, somewhat like a domino effect.
Because we've been accustomed to CRT displays for the last 50 or so years, we see the differences between CRT and LCD/TFT without much effort. What do we see?
Because LCD/TFT panels are progressive display devices, interlaced images must first be assembled into a single video frame. This assembly of fields into a frame is called de-interlacing: two fields are loaded into memory and combined into one image. If an image of a fast-moving second hand is de-interlaced, the two fields will appear on the progressive LCD/TFT panel as two separate images, and our eyes can perceive the difference in motion between them as two distinct objects. This doubling occurs on every frame and can appear as motion blur, because the temporal/spatial difference between the pixels of the two fields remains the same.
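The "two images" effect of a straight field-to-frame assembly can be sketched in a few lines. This is an illustrative snippet, not code from any particular display; the function name and the moving-edge test data are invented for the example:

```python
import numpy as np

def weave_deinterlace(field_odd, field_even):
    """Assemble two interlaced fields into one progressive frame.

    Each field holds alternate scan lines; weaving simply interleaves
    them. If the scene moved between the two field captures, adjacent
    lines disagree and the assembled frame shows a doubled/combed edge.
    """
    h, w = field_odd.shape
    frame = np.empty((2 * h, w), dtype=field_odd.dtype)
    frame[0::2] = field_odd   # lines 0, 2, 4, ... from the first field
    frame[1::2] = field_even  # lines 1, 3, 5, ... from the second field
    return frame

# A vertical edge that moved two pixels between field captures:
f1 = np.zeros((4, 8), dtype=np.uint8); f1[:, 2:] = 255  # edge at column 2
f2 = np.zeros((4, 8), dtype=np.uint8); f2[:, 4:] = 255  # edge now at column 4
frame = weave_deinterlace(f1, f2)
# Alternate lines now disagree in columns 2-3: the "two bodies in
# motion" the article describes.
```

On a static scene the weave is lossless; only pixels in motion betray the two capture instants.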
This same blurring phenomenon occurs in progressive frame-based images, just as it does in a still photo or a frame of film. The primary method of reducing temporal/spatial blur is to increase the sample rate. In a frame of film or a still image, this is controlled by shutter speed and exposure time. As video frame rates become faster, or as images are super-sampled at rates above the frame rate, many blur-type artifacts may be eliminated.
Another method of increasing the sample rate is to display each field as its own progressive image. This can help somewhat with the motion issues, but it requires considerable processing of the image.
Take, for example, a widescreen 525 interlaced image projected on a 1920x1080 LCD. Each field of a 525 image carries about 240 lines of image information vertically and up to 720 pixels or dots horizontally. To display a single field as a progressive image, it must be enlarged, or scaled, to 4.5 times its original size vertically and about 2.7 times horizontally, much like looking at the image through a magnifying glass. Information must be created from the original pixels (interpolation) to fill this space. Because this process magnifies any negative parts of the image, such as noise from high camera gain or the CCD itself, image aliasing is amplified and can make the final result look pretty bad.
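The scale factors follow directly from the field and panel dimensions; a quick sanity check of the arithmetic:

```python
# Dimensions of one field of widescreen 525 video and of the target panel.
field_lines, field_pixels = 240, 720
panel_lines, panel_pixels = 1080, 1920

v_scale = panel_lines / field_lines    # 1080 / 240 = 4.5x vertically
h_scale = panel_pixels / field_pixels  # 1920 / 720 ~= 2.67x horizontally

# Every original sample must be stretched into roughly
# v_scale * h_scale = 12 panel pixels, all of them interpolated.
pixels_per_sample = v_scale * h_scale
```

Any noise or aliasing in the source sample is spread across all twelve panel pixels it spawns, which is why the flaws get magnified along with the picture.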
The only fix for this is extensive image analysis and filtering in the interpolation process: in other words, keep the good stuff and throw away the bad. Many display manufacturers now employ LSIs and proprietary algorithms to make the enlarged image look very good. But you have to ask yourself, “Is it real or artificial?”
After filtering, there are two common processes used to perform field-to-frame interpolation: intra-field and inter-field. Intra-field uses a single field of data to create a progressive frame on the LCD. This method is often preferred for fast-moving images or live production because it has less propagation delay, usually one field. A downside appears when the image is scaled to fill a 1920x1080 display: no matter how you approach the problem, the image must be manipulated to fit the screen. In the case of widescreen standard definition, 240-288 lines of vertical resolution must become 1080 lines, and 720 horizontal pixels must become 1920. In the intra-field process, interpolation amounts to line and pixel repetition to fill the space. Lower-resolution screens are better for SD images because they require less manipulation of the image to fit the screen.
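The line-and-pixel-repetition approach can be sketched as a nearest-neighbour scale of one field up to panel size. Again this is an illustrative sketch with invented names, not a vendor's actual scaler:

```python
import numpy as np

def bob_scale(field, panel_h=1080, panel_w=1920):
    """Intra-field de-interlace sketch: build a full-panel frame from a
    single field by repeating lines and pixels (nearest neighbour)."""
    h, w = field.shape
    rows = np.arange(panel_h) * h // panel_h   # each panel line maps back
    cols = np.arange(panel_w) * w // panel_w   # to one source line/pixel
    return field[rows][:, cols]

# One 240x720 field of arbitrary sample values:
field = np.arange(240 * 720, dtype=np.uint16).reshape(240, 720)
frame = bob_scale(field)
# Each source line ends up repeated 4-5 times to cover 1080 panel lines.
```

Because each output pixel is a straight copy of some input pixel, the method is cheap and adds no inter-field delay, but it creates no new detail: blockiness is the price of the one-field latency.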
Inter-field interpolation uses information from two or more fields to create the progressive frame for the LCD. A simple de-interlace creates edges, or “jaggies,” on adjacent lines and pixels in motion, representing the spatial difference between the capture periods of the two fields. Jaggies can be reduced by an interpolation process called “blending,” in which the edges in motion are blurred instead. Another issue with basic de-interlace methods is that they cut the number of image samples in half: since f1+f2=F, a 60i image becomes 30p and may display steppy motion when compared to your CRT. And if you're doing live production with talking-head talent, you'll begin to notice a one-frame delay of video relative to audio with this method.
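A minimal blending sketch, assuming the simplest possible scheme (average the two fields, then line-double the result); real blenders weight and filter more carefully, and the names here are invented:

```python
import numpy as np

def blend_deinterlace(field_odd, field_even):
    """Inter-field 'blending' sketch: average two fields into one frame.

    Moving edges land at intermediate values (a smear) instead of the
    jagged comb a plain weave produces. Two fields (f1 + f2) collapse
    into one frame F, so 60i becomes 30p.
    """
    blended = (field_odd.astype(np.float32) +
               field_even.astype(np.float32)) / 2
    # Line-double the blended field to full frame height.
    return np.repeat(blended, 2, axis=0).astype(field_odd.dtype)

# The same moving vertical edge as before:
f1 = np.zeros((4, 8), dtype=np.uint8); f1[:, 2:] = 255  # edge at column 2
f2 = np.zeros((4, 8), dtype=np.uint8); f2[:, 4:] = 255  # edge at column 4
frame = blend_deinterlace(f1, f2)
# Columns 2-3 come out mid-grey: the moving edge is smeared, not jagged.
```

The trade is explicit in the code: the disagreement between fields is averaged away rather than displayed, which is exactly why blended motion looks soft.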
The best interpolation method at this time uses samples from at least three fields (current, previous, and next) and tends to eliminate most visual artifacts. However, your video will now be delayed at least three fields from the audio. The poor guy monitoring a live satellite downlink through a frame synchronizer may go crazy trying to figure out why he just can't get good lip-sync on his new LCD monitor.
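One common way three fields are put to work is motion-adaptive interpolation: compare the previous and next fields (which share line parity) to detect motion, weave where the scene is static, and fall back to intra-field interpolation where it moves. The sketch below is a simplified, hypothetical version of that idea, not any manufacturer's algorithm:

```python
import numpy as np

def missing_lines(prev_f, cur_f, next_f, threshold=16):
    """Estimate the scan lines absent from cur_f using three fields.

    prev_f and next_f carry the opposite-parity lines. Where they
    agree, the scene is static and their average (a weave) restores
    full vertical resolution; where they differ, neighbouring lines of
    cur_f are averaged instead (a bob) to avoid combing. Needing the
    'next' field is what adds the extra delay relative to audio.
    """
    p = prev_f.astype(np.int16)
    n = next_f.astype(np.int16)
    motion = np.abs(p - n) > threshold          # per-pixel motion mask
    static_est = (p + n) // 2                   # temporal estimate (weave)
    above = cur_f.astype(np.int16)
    below = np.roll(cur_f, -1, axis=0).astype(np.int16)
    moving_est = (above + below) // 2           # spatial estimate (bob)
    return np.where(motion, moving_est, static_est).astype(cur_f.dtype)

# Static scene: previous and next fields agree at value 100, so the
# temporal estimate wins even though the current field reads 50.
p = np.full((4, 8), 100, np.uint8)
c = np.full((4, 8), 50, np.uint8)
out_static = missing_lines(p, c, p.copy())

# Motion: prev/next disagree wildly, so the spatial estimate from the
# current field is used instead.
out_moving = missing_lines(np.zeros((4, 8), np.uint8), c,
                           np.full((4, 8), 255, np.uint8))
```

The full frame would interleave `cur_f` with the estimated lines; the point of the sketch is the decision itself, which buys artifact-free stills and clean motion at the cost of the multi-field latency the downlink operator is fighting.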