



Multimedia and Analogue Video

Originally published June, 1996
by Carlo Kopp
© 1996, 2005 Carlo Kopp

The transmission and display of video signals is one of the central problems encountered with multimedia technology. While the term encompasses video, sound and graphics, the latter two types of data are significantly easier to handle than moving picture imagery.

The established technology for video transmission is analogue in nature, and the current formats for video signals used both in broadcast and tape media are all structured around the most common display device in use - the Cathode Ray Tube or CRT. These formats are very similar in function to the video signals produced by graphics cards to drive monitors.

The CRT will be with us for some time yet, as it still provides better image quality on large screens than its LCD-derived or other flat-panel competitors. For a technology with origins in the 1930s, this is indeed no mean feat.

Importantly, analogue TV video formats form the basic starting point for the video formats used in multimedia applications. The latter are specifically designed to exploit the ubiquity of analogue TV technology; indeed, any other approach would incur unnecessary cost overheads in what is very much a consumer oriented, and thus cost sensitive, mass market.

What this means is that an analogue format produced by a video camera or tape recorder must be converted into a digital format for storage or transmission through a multimedia environment, and ultimately, converted back into an analogue format for display on a user's monitor.

The starting point in understanding analogue video formats is understanding the behaviour of the CRT display, and how the analogue video signal is used to display a moving image.


In a CRT display tube, a device termed an electron gun produces a beam of electrons, which are accelerated by a high voltage (up to 35 kV) so as to impinge on the face of the tube. The face of the tube is coated with a substance called a phosphor, which glows when the electron beam hits it. By controlling the current of the electron beam, it is possible to change the brightness of the phosphor. This is the mechanism via which brightness and contrast can be introduced into the image.

To paint a picture, however, we need more than a single spot of brightness: we require a screenful of them. This is accomplished via a scheme called raster scanning. In most raster scan applications, the electron beam is deflected by magnetic fields produced by a set of deflection coils. These form an orthogonal pair, ie one coil deflects the beam horizontally and the other vertically.

The image is then built up from lines which are produced by sweeping the beam across the face of the CRT, while the beam current is manipulated to vary the brightness at every point. At the end of each line of the picture, the beam is "retraced" rapidly to the beginning of the line, and stepped down to the next line.
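
The scan geometry is easily visualised in code. The C sketch below mimics the process in the abstract; the video_level() function is purely a stand-in for the analogue brightness signal, and the line and sample counts are merely PAL-like figures chosen for illustration.

    #include <stdio.h>

    #define LINES 625   /* scan lines per frame (a PAL-like figure)  */
    #define DOTS  768   /* brightness samples taken along each line  */

    /* stand-in for the analogue video signal: a horizontal ramp */
    static double video_level(int line, int dot)
    {
        (void)line;
        return (double)dot / (DOTS - 1);
    }

    /* sweep out one complete frame, line by line */
    static void scan_frame(double frame[LINES][DOTS])
    {
        for (int line = 0; line < LINES; line++) {
            /* trace: sweep the beam left to right, varying the
               beam current (here, the sample value) as it goes  */
            for (int dot = 0; dot < DOTS; dot++)
                frame[line][dot] = video_level(line, dot);
            /* horizontal retrace: the blanked beam snaps back to
               the left edge and steps down to the next line      */
        }
        /* vertical retrace: the beam returns to the top of screen */
    }

    int main(void)
    {
        static double frame[LINES][DOTS];  /* too large for stack */
        scan_frame(frame);
        printf("centre spot brightness: %.2f\n",
               frame[LINES/2][DOTS/2]);
        return 0;
    }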

A critical issue in this scheme is synchronising the beam deflection to produce a picture, and to position it properly in the middle of the screen. This is accomplished by embedding what are termed synchronisation pulses in the video signal. These pulses are detected by the display hardware, and used to control the timing of the deflection fields to produce the desired effect. In monitor applications, the sync pulses are very often asserted on separate pins of the device interface, although some monitors will support embedded sync (sync on green).
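
In digital terms, sync separation amounts to watching for signal excursions below the black level, since the sync tips sit beneath it in a composite signal. The C sketch below illustrates the principle on a digitised signal; the threshold level and the toy waveform are illustrative assumptions only, and real sync separators are analogue circuits with rather more noise immunity.

    #include <stdio.h>

    #define SYNC_LEVEL 0.15  /* assumed: below this lie sync tips */

    /* scan digitised composite video and report the falling edges
       which mark the start of each sync pulse                     */
    static void find_sync_edges(const double *signal, int n)
    {
        int in_sync = 0;
        for (int i = 0; i < n; i++) {
            int now = signal[i] < SYNC_LEVEL;
            if (now && !in_sync)
                printf("sync pulse starts at sample %d\n", i);
            in_sync = now;
        }
    }

    int main(void)
    {
        /* a toy waveform: picture level, a sync tip, picture again */
        double signal[] = { 0.6, 0.7, 0.05, 0.05, 0.05, 0.5, 0.4 };
        find_sync_edges(signal, sizeof signal / sizeof signal[0]);
        return 0;
    }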

What has been described will produce a monochrome or single colour image. To produce a full colour image, three electron beams are used, one each for the red, green and blue components of the picture. A shadow mask arrangement and a regular pattern of red, green and blue phosphors are used to separate the colours at the face of the CRT.
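
From the perspective of a graphics card, driving such a tube means decomposing each pixel into three separate gun drive levels. The C sketch below assumes a hypothetical packed 0x00RRGGBB pixel layout, purely for illustration.

    #include <stdio.h>

    /* drive levels for the three guns, one per primary colour */
    typedef struct { unsigned char r, g, b; } GunDrive;

    /* split a packed pixel (assumed layout: 0x00RRGGBB) into the
       individual red, green and blue gun signals                 */
    static GunDrive split_pixel(unsigned long rgb)
    {
        GunDrive d;
        d.r = (rgb >> 16) & 0xFF;   /* red gun   */
        d.g = (rgb >>  8) & 0xFF;   /* green gun */
        d.b =  rgb        & 0xFF;   /* blue gun  */
        return d;
    }

    int main(void)
    {
        GunDrive d = split_pixel(0x00FF8040UL);
        printf("R=%u G=%u B=%u\n", d.r, d.g, d.b);
        return 0;
    }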

Needless to say, the technical details of designing such devices are quite interesting, and quite complex once provisions are made for adjusting and registering all three colours, compensating for geometric distortions in the picture, and correcting analogue distortions in the video brightness signal. It is fair to say that prior to the advent of the domestic PC, the television was by far the most technically complex device to be found in a household.

What interests us at this point is how to convert an analogue picture signal into a robust digital signal. The simplest method available is to run the analogue video signal(s) into an analogue to digital converter (A/D), and produce a stream of 8 bit samples. While this may be appealing conceptually, it incurs a prohibitive cost in terms of bandwidth. A 5-6 MHz video signal (TV resolution) must, by the Nyquist criterion, be sampled at no less than twice its bandwidth, ie at about 12.5 Msamples/s, and even with a modest 8 bits per sample this yields the frightening bandwidth requirement of no less than 100 Mbits/s. A fully loaded FDDI channel would hardly keep up, and conventional lossless compression methods might at best cut this to 25-50 Mbits/s.
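
The arithmetic is easily verified; the trivial C program below works through the figures quoted above, with the 12.5 Msample/s rate taken to sit a little above the 12 Msample/s Nyquist minimum for a 6 MHz signal.

    #include <stdio.h>

    int main(void)
    {
        double bandwidth   = 6.0e6;   /* analogue video bandwidth, Hz */
        double sample_rate = 12.5e6;  /* a little above the Nyquist
                                         minimum of 2 x 6 = 12 MHz    */
        double bits        = 8.0;     /* bits per sample              */

        printf("Nyquist minimum: %4.1f Msamples/s\n",
               2.0 * bandwidth / 1.0e6);
        printf("raw bit rate:   %5.1f Mbits/s\n",
               sample_rate * bits / 1.0e6);
        return 0;  /* prints 12.0 Msamples/s and 100.0 Mbits/s */
    }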

This is where the fundamental problem with multimedia video transmission arises, as the bandwidth requirements to achieve broadcast TV quality pictures are prohibitive.

Multimedia applications therefore exploit what are termed "lossy" compression techniques, where a single digitised frame of analogue video is cleverly manipulated, encoded and then sent or stored. Because consecutive frames are very often similar in content, multimedia protocols will typically send the first frame in full, and then incrementally update it with each consecutive frame. Proper application of lossy techniques can achieve compression ratios in excess of 100 to 1.
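
The simplest conceivable form of incremental updating is per-pixel frame differencing with a discard threshold, sketched below in C. The frame size, threshold value and buffer layout are illustrative assumptions only; real codecs operate on blocks of pixels using transform coding, but the underlying idea of sending only what has changed, and discarding changes too small to notice, is the same.

    #include <stdio.h>
    #include <stdlib.h>

    #define W 352          /* a CIF-sized frame, as used by H.261 */
    #define H 288
    #define THRESHOLD 8    /* assumed: smaller changes are simply
                              discarded, which is the lossy step  */

    /* emit only those pixels which changed noticeably since the
       previous frame; returns how many pixels were emitted      */
    static int encode_delta(const unsigned char *prev,
                            const unsigned char *cur,
                            int *pos, unsigned char *val)
    {
        int count = 0;
        for (int i = 0; i < W * H; i++) {
            if (abs(cur[i] - prev[i]) > THRESHOLD) {
                pos[count] = i;
                val[count] = cur[i];
                count++;
            }
        }
        return count;
    }

    int main(void)
    {
        static unsigned char prev[W*H], cur[W*H];
        static int pos[W*H];
        static unsigned char val[W*H];

        cur[1000] = 200;   /* a single pixel changes between frames */
        printf("%d of %d pixels need to be sent\n",
               encode_delta(prev, cur, pos, val), W * H);
        return 0;
    }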

Established multimedia video protocols such as MPEG-1, MPEG-2, H.261 and the planned MPEG-4 and H.263 are all based upon similar techniques, and allow the compression of usable video imagery down to bandwidths compatible with CD-ROMs or even analogue telephone lines. These will be the subject of a future tutorial.




$Revision: 1.1 $
Last Updated: Sun Apr 24 11:22:45 GMT 2005
Artwork and text © 2005 Carlo Kopp

