Picture This: Wireless Visuals and Video
Jan 1, 2008 12:00 PM, By Jeff Sauer
Sending PC and motion-video content without the cables.
Cables and cords are the virtual blood vessels of professional AV. They're the (mostly) unseen conduits that carry the content upon which the industry exists. Yet, like veins and vessels, nobody really likes to see them very much. So we mask them and hide them and, increasingly, try to get rid of them in favor of invisible wireless connectivity (and here ends the blood-vessel analogy).
There's nothing new about wireless, of course. We all use cell phones. Many of us have wireless-enabled laptop computers. And professional audio has featured wireless solutions for a few years. Yet, when it comes to professional video, the ante is a little higher. Video involves a lot more data than even multiple tracks of audio. What's more, the term “video” can be used to mean both the video display from a computer desktop and television-like motion video. And, while these are both visual, they are very different in terms of the nature of the data and how that data needs to be prioritized if it's to be transmitted wirelessly.
An XGA-resolution desktop image is roughly 18.9Mb, assuming 24 bits/pixel (more at higher bit depths), and it refreshes dozens of times every second. A single frame of standard-definition motion video, on the other hand, is about 8.4Mb (again assuming 24 bits/pixel), and it plays at 30fps in North America. However, unlike a typical computer desktop, motion video is constantly changing.
The 802.11 “a” and “g” standards supported by NewSoft's wireless adapter (reviewed on p. 64), as well as by many other wireless solutions for video, audio, and data, have a theoretical maximum bandwidth of 54Mbps (the maximum for 802.11b is only 11Mbps). However, actual usable bandwidth varies with signal strength and interference, and it is generally less than half that amount. That's not enough to send video wirelessly without some sort of compression, and the different types of video require different types of compression.
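The arithmetic behind those figures is easy to check. The short Python sketch below, using only the resolutions, bit depth, and the "under half" usable-bandwidth estimate from the text, works out the raw frame sizes and the bitrate that uncompressed SD video would demand:

```python
# Back-of-the-envelope math: raw frame sizes at 24 bits/pixel, and the
# bitrate uncompressed SD video needs versus realistic 802.11a/g throughput.

def frame_bits(width, height, bits_per_pixel=24):
    """Raw size of one uncompressed frame, in bits."""
    return width * height * bits_per_pixel

xga = frame_bits(1024, 768)   # one XGA desktop image
sd = frame_bits(720, 486)     # one SD (NTSC) video frame

print(f"XGA frame: {xga / 1e6:.1f} Mb")   # ~18.9 Mb
print(f"SD frame:  {sd / 1e6:.1f} Mb")    # ~8.4 Mb

required_mbps = sd * 30 / 1e6   # uncompressed SD at 30fps
usable_mbps = 54 / 2            # under half of the 54Mbps theoretical max
print(f"Uncompressed SD needs ~{required_mbps:.0f} Mbps; "
      f"usable Wi-Fi is roughly {usable_mbps:.0f} Mbps")
```

At roughly 252Mbps required against perhaps 27Mbps available, the order-of-magnitude gap makes the case for compression regardless of the exact radio conditions.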
Fortunately, given that limited bandwidth, a computer desktop changes very little from one moment to the next, aside from new windows opening or presentation slides advancing. Cursor movements and keystrokes occur constantly, but they affect very little of the screen. So, while a single desktop image does contain a lot of data, the full desktop needs to be transmitted only sporadically, and it isn't critical that it be displayed instantaneously. Tracking cursor movements and keystrokes in near real-time requires far less data.
The opposite is true of motion video. If any of the 30 frames each second is dropped, the result is a very visible glitch. Motion video therefore requires a compression approach very different from the one used for a computer display's output, one that focuses more on the timely delivery of bits than on rendering sharp, crisp lines for text and numbers.
Most wireless-video solutions from KVM companies such as Avocent and projector makers such as Epson, InFocus, and NEC do a reasonably good job with computer-desktop video by digitizing and compressing the signal on the fly and decompressing it again at the display side. Almost all use a variation of vector quantization (VQ), compression that can group neighboring identical pixels, or lines and shapes, into a single piece of code.
Unfortunately, that approach doesn't work well for motion video because very few neighboring pixels in video are exactly the same. Even if the sky is blue, it's rarely a single, solid shade of blue; grass is green, but it's the subtle variations of green that give it texture. What's more, these solutions must first decompress already-compressed video files and then recompress them in a less efficient format. The digital-video industry moved on from VQ-based compression some 15 years ago in favor of formats such as MPEG-1, -2, and -4, and VC-1, which use a variety of techniques better suited to the continuous stream of motion video.
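The contrast is easy to demonstrate. In the sketch below, simple run-length encoding stands in for VQ (a cruder cousin that likewise exploits runs of identical pixels): a flat desktop toolbar row collapses to a single run, while a subtly varying "sky" row barely compresses at all.

```python
# Run-length encoding as a stand-in for pixel-grouping compression.
# Real VQ matches pixel blocks against a codebook, but the failure mode
# on motion video is the same: few neighboring pixels are identical.

def run_length_encode(pixels):
    """Collapse runs of identical pixel values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

desktop_row = [255] * 100                            # flat white toolbar
video_row = [100 + (i * 7) % 5 for i in range(100)]  # subtly varying "sky"

print(len(run_length_encode(desktop_row)))  # 1 run: 100:1 reduction
print(len(run_length_encode(video_row)))    # 100 runs: no reduction at all
```

The desktop row shrinks to one code; the video row, whose neighboring pixels never quite match, gains nothing, which is why motion video needs transform-based codecs instead.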
NewSoft's approach with 802.11a/b/g succeeds affordably because it divides the problem of wireless video into two parts, addressed by two separate modes. It uses traditional methods for displaying computer desktop visuals (albeit with a faster processor to dramatically reduce latency), and it avoids the recompression problem for motion video altogether by sending the existing file to be decoded by the player.
However, there are other approaches on the horizon that should allow for much higher throughput and, thus, more flexibility.
A year ago, TZero announced a new wireless chipset that transmits HDMI using ultrawideband (UWB) instead of any of the 802.11 flavors. As the name implies, UWB is not limited to a specific frequency band the way 802.11 is (802.11b and g use 2.4GHz, while 802.11a uses 5GHz). By sending low-power bursts across a much wider frequency range (from 3.1GHz to 10.6GHz), UWB is theoretically less subject to interference and capable of much higher throughput. UWB has a theoretical maximum bandwidth of more than 600Mbps, although TZero more conservatively puts that number at 300Mbps. Gefen, the first company to announce a transmitter-receiver product (although it has yet to deliver Wireless for HDMI in volume), rates the actual performance for uninterrupted video at 65Mbps over a 30ft. line-of-sight distance. The TZero solution is designed to support motion video up to 1080p, compressed with JPEG2000 via a codec chip from Analog Devices. JPEG2000, a potentially lossless image-compression standard, is frequently used in HD digital-video and movie production.
Amimon uses a proprietary technology called Wireless High-definition Interface (WHDI), which is expected to support video resolutions up to 1080i and PC resolutions up to XGA on a 20MHz channel, and 1080p on a 40MHz channel. Amimon claims respective channel bandwidths of 1.5Gbps and 3Gbps, enough to send both motion video and computer desktop video “uncompressed.” However, because wireless bandwidth is always subject to interference and congestion, Amimon does “prioritize” video information. That sounds like semantic gamesmanship for “compression,” although Amimon's distinction is that its technology can send and receive uncompressed video in a perfect environment, and it adapts to imperfect network conditions with little or no visible image-quality degradation. WHDI defaults to the same 5GHz frequency as 802.11a, but it is agile enough to find other frequencies if necessary.
Other future wireless standards include 802.11n, which would increase the maximum bandwidth to 540Mbps while using either the 2.4GHz or 5GHz frequency of the previous 802.11 standards. The 802.11n standard is expected to be finalized sometime in 2008. WirelessHD (WiHD) is being developed by a consortium that includes some of the largest companies in the display industry, working to create an industry-wide protocol for all consumer-electronics player and display devices. It will use the unlicensed 60GHz frequency band to achieve multigigabit data rates. A finalized standard was originally expected in 2007, but it is still pending.