



Multimedia Technology
Part 2

Originally published June 1996
by Carlo Kopp
© 1996, 2005 Carlo Kopp

Bandwidth - the Achilles Heel of Current Technology

The high cost of bandwidth is without doubt the principal obstacle to the spread of multimedia techniques. This central problem applies not only to transmission between sites, but also to transmission between a storage device and a display, if we are trying to distribute raw digitised video.

The cost factor in transmission between sites bites in two ways - should a multi-megabit point-to-point link be required, telecommunications carrier charges will bleed you dry. Should you own the channel between sites, you are still up for the cost of the required transmission equipment, such as suitable very high speed point-to-point modems. Within a piece of equipment, bandwidth between storage and display is still an issue, even if it won't kill you quite as dead as site-to-site costs will.
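
To put these bandwidth figures in perspective, the following back-of-envelope calculation (a Python sketch using purely illustrative numbers) works out the raw bit rate of uncompressed digitised video and the compression ratio needed to squeeze it into a hypothetical 2 Mbit/s channel.

    # Raw bit rate of uncompressed digitised video (illustrative figures only).
    frame_width    = 640      # pixels
    frame_height   = 480      # pixels
    bits_per_pixel = 24       # 8 bits each for R, G and B
    frames_per_sec = 25       # PAL-style frame rate

    raw_bps = frame_width * frame_height * bits_per_pixel * frames_per_sec
    print("Raw video rate: %.1f Mbit/s" % (raw_bps / 1e6))   # about 184 Mbit/s

    # Compression ratio needed to fit a hypothetical 2 Mbit/s channel.
    channel_bps = 2e6
    print("Required compression ratio: about %d:1" % round(raw_bps / channel_bps))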

It is this cost pressure which has spurred the development of lossy picture compression techniques. We are all familiar with lossless compression techniques, used by common tools such as compress, zip, gzip, pkzip and their peers. Lossless compression has the property of allowing the complete reconstruction of the original data from the compressed version. Indeed, this is why it is termed lossless. The limitation of lossless techniques is that on conventional pictures they achieve only modest compression performance, which can range from as poor as 1:1 (ie incompressible) up to perhaps 3:1, the latter offering roughly a two-thirds reduction in size and hence bandwidth.
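
As a rough illustration of both points, exact reconstruction but only modest gains on typical data, the following Python sketch uses the standard zlib library on two synthetic byte patterns; real photographic images tend to land in the modest range quoted above.

    import os
    import zlib

    # A smooth, repetitive byte pattern compresses very well...
    smooth = bytes((x // 4) % 256 for x in range(100000))
    # ...while noise-like data barely compresses at all.
    noisy = os.urandom(100000)

    for name, data in (("smooth", smooth), ("noisy", noisy)):
        packed = zlib.compress(data, 9)
        assert zlib.decompress(packed) == data   # lossless: exact reconstruction
        print("%s: %d -> %d bytes (ratio %.1f:1)"
              % (name, len(data), len(packed), len(data) / len(packed)))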

This isn't, however, the only limitation of lossless techniques. Another is that they require a high quality channel to work efficiently. Channels with modest or even high error rates can devastate a compressed signal. If no error control is used, even a small number of errors can badly compromise the quality of transmission, as upon decompression each error will typically corrupt many more pixels than the number of data bytes affected in transit. If error control is used, we are still up against the problem of picture frames being delivered on time. If the channel is loaded with packet retries, attempting to burn through noise, there is a very good chance that picture frames will not arrive in order, and this too can compromise viewing quality significantly.
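
A quick way to see how fragile a compressed stream is in the face of channel errors is to flip a single bit in a zlib-compressed buffer and attempt to decompress it; the following Python sketch is only an illustration of the general point, not a model of any real transmission system.

    import zlib

    original = bytes((x // 4) % 256 for x in range(100000))
    packed = bytearray(zlib.compress(original, 9))

    packed[len(packed) // 2] ^= 0x01       # a single bit error mid-stream

    try:
        recovered = zlib.decompress(bytes(packed))
        damaged = sum(a != b for a, b in zip(original, recovered))
        print("Decompressed, but %d bytes differ from the original" % damaged)
    except zlib.error as err:
        print("Decompression failed outright:", err)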

Needless to say, the only place where lossless compression is used for image transmission is in military reconnaissance applications, where cost has traditionally not been a major issue. In such applications, lossless transmission is a must, as every detail must be captured. This is because automated searching techniques are used, in which a software tool compares pictures of selected sites taken on different occasions, looking for differences. Any loss of detail would severely compromise this searching technique. Not noticing a new ballistic missile silo could prove somewhat costlier than buying a faster channel!

Lossy Compression Techniques

The limitations of lossless compression methods have led researchers to look for alternatives better suited to the somewhat pathological environment of the real world. The requirements for such compression techniques are quite formidable, as a protocol must not only achieve better than 10:1 compression, but also be robust in its ability to handle transmission errors. Broadcast over a radio channel (TV), a cable (cable TV) or a telephone channel (videophone) will in all instances involve channels with non-zero error rates.

The central idea behind all lossy picture compression techniques is that of reconstructing an approximation to the original picture, with sufficient quality to fool the human eye. This approach is not unlike broadcast television, where many little tricks are played to save bandwidth. In lossy picture compression, the idea is taken several steps further.
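
A toy example of the underlying trade, grossly simplified relative to what real codecs do (which quantise transform coefficients rather than raw pixels), is coarse quantisation of sample values: the reconstruction is only an approximation of the input, but needs fewer bits per sample. A Python sketch on made-up values:

    # Coarse quantisation of 8-bit samples (made-up values).
    step = 16                  # quantiser step: 4 bits per sample instead of 8
    pixels = [12, 37, 140, 141, 143, 200, 255, 90]

    quantised     = [p // step for p in pixels]            # what would be sent
    reconstructed = [q * step + step // 2 for q in quantised]

    print("original:     ", pixels)
    print("reconstructed:", reconstructed)
    print("max error:    ", max(abs(a - b) for a, b in zip(pixels, reconstructed)))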

An interesting aspect of the lossy compression techniques used in such applications is that encoding is typically a one-to-one mapping: a given uncompressed picture will only ever produce one compressed picture. At the decoding end, however, the situation is quite different. A single compressed image may be decompressed into many very similar, but different, renderings of the original. Quality then becomes very much subject to the performance of the algorithm used to reconstruct the picture.

This has some interesting implications. One is that Vendor A's decoder may indeed be far superior to Vendor B's decoder. The other is that more sophisticated decoding algorithms usually devour more CPU time than less sophisticated ones. Where CPU performance is adequate to decode the stream of picture frames at the required display rate, this may not be an issue. Where it is not, the machine will begin to fall behind the incoming stream of pictures. The usual safety valve employed in this situation is to discard frames in order to catch up. Discarding frames, however, compromises viewing quality.
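
The catch-up behaviour can be sketched with a trivial simulation, using made-up timings: frames arrive every 40 ms, the decoder needs 55 ms per frame, and any frame that arrives while the decoder is still busy is discarded.

    # Toy model of a decoder that cannot keep up with the incoming frame rate.
    frame_interval_ms = 40     # 25 frames per second arriving
    decode_time_ms    = 55     # hypothetical decode cost per frame

    clock = 0
    shown = dropped = 0
    for frame in range(100):
        arrival = frame * frame_interval_ms
        if clock > arrival:    # decoder still busy when this frame arrives
            dropped += 1
            continue
        clock = arrival + decode_time_ms
        shown += 1

    print("shown %d, dropped %d of 100 frames" % (shown, dropped))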

Therefore, a sophisticated reconstruction algorithm can be a double-edged sword, particularly in the consumer marketplace, where every customer wishes to use his or her existing platform, which may or may not be up to the task. The only viable solution to this problem is to decode in dedicated hardware, and this is indeed the reason why multimedia boards are becoming increasingly common. Objective metrics for determining quality in lossy picture compression protocols are at this time lagging significantly behind the development of the compression techniques themselves. Buying the proverbial cat in the bag is thus a distinct risk to be faced when selecting a decoder product.
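
One simple objective measure that is sometimes quoted is peak signal-to-noise ratio (PSNR); it correlates only loosely with what the eye actually perceives, which is part of the measurement problem. A minimal computation, on made-up sample values, looks like this:

    import math

    # PSNR between an original and a reconstructed set of 8-bit samples.
    original      = [12, 37, 140, 141, 143, 200, 255, 90]
    reconstructed = [ 8, 40, 136, 136, 136, 200, 248, 88]

    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    psnr = 10 * math.log10(255 ** 2 / mse) if mse else float("inf")
    print("PSNR: %.1f dB" % psnr)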

Subsequent parts of this series will take a closer look at the technical issues surrounding the MPEG and H.26X series of protocols.



Artwork and text © 2005 Carlo Kopp

