Television is perhaps the most important form of communication ever invented. It is certainly the most popular and influential in our society. It is an effortless window on the world, requiring of the viewer only the right time and the right channel, or for the nondiscriminating viewer, any time and any channel (except channel one).
Computer presentation of information could certainly benefit from the color, motion, and sound that television offers. Television viewers could similarly benefit from the control and personalization that is promised by computer technology.
Combining the two seems irresistible. They already seem to have much in common, such as CRT screens and programs and power cords. But they are different in significant ways, and those differences are barriers to reasonable integration.
The problems on the computer side will get fixed in the course of technical evolution, which should continue into the next century. We've been fortunate so far that not one of the early computer systems has been so popular that it couldn't be obsoleted (although we are dangerously close to having that happen with UNIX, and there is now some doubt as to whether even IBM can displace the PC). The worst features of computers, that they are underpowered and designed to be used by nerds, will improve over the long haul.
Television, unfortunately, has been spectacularly successful, and so is still crippled by the limitations of the electronics industry of 40 years ago. There are many new television systems on the horizon, a few of which promise to solve the integration problem, but for the time being we are stuck with what we've got.
These limitations are not noticed by audiences, and could be completely ignored if they were merely the esoterica of television engineers. Unfortunately, the television medium is far more specialized than you might suppose. Interface designers who ignore its limitations do so at their own peril.
Computer displays are generally designed for close viewing, usually in an office environment--most often as a solitary activity. The display is sharp and precise. Displays strongly emphasize text, sometimes exclusively so. Graphics and color are sometimes available. Displays are generally static. Only recently have computers been given interesting sound capabilities. There is still little understanding of how to use sound effectively beyond BEEPs, which usually indicate when the machine wants a human to perform an immediate action.
Television, on the other hand, was designed for distant viewing, usually in a living room environment, often as a group activity. The screen is alive with people, places, and products. The screen can present text, but viewers are not expected to receive much information by reading. The sound track is an essential part of the viewing experience. Indeed, most of the information is carried audibly. (You can prove this yourself. Try this demonstration: Watch a program with the sound turned all the way down. Then watch another program with the sound on, but with the picture brightness turned all the way down. Then stop and think.)
Television was designed for distant viewing because the electronics of the 1940s couldn't handle the additional information required to provide sufficient detail for close viewing. Television has lower resolution than most computer displays, so you have to view it from some distance for it to look good.
The correct viewing distance for a television viewer is as much as ten times what it is for a computer user. Where is the best place to sit in order to enjoy fully integrated interactive television, the arm chair or the desk chair? Many of the current generation of multimedia products, such as Compact Disc-Interactive, suffer from this ambiguity. The color images are best viewed from a distance, but the cursor-oriented interface wants to be close.
Every pixel on a computer display is precious. Because the visible window is a rectangle, and the corners of CRTs are curved, the visible rectangle is inset, with sufficient black border to assure that even the corner pixels will be visible. Television, unfortunately, does not use such a border.
The first picture tubes used in television were more oval than rectangular. It was decided that the picture should fill every bit of the face of the screen, even if that meant that viewers would be unable to see the portions of the images that were near the edges, particularly in the corners.
This was well suited to the distant-viewing assumption, but the uncertainty about what is visible on a viewer's screen (it varies from set to set) causes problems even for the producers of television programs. They have had to accept the conventions of Safe Action Area and Safe Title Area, which are smaller rounded rectangles within the television frame. Most action that happens within the Safe Action Area will be visible on most sets. All text should be confined to the Safe Title Area, which is visible on virtually all sets.
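These conventions reduce to a simple computation. A minimal sketch, using the common rule-of-thumb figures of 90% of the frame for safe action and 80% for safe title (those percentages are an assumption of this sketch; exact figures vary by specification):

```python
def safe_area(width, height, fraction):
    # Return (x, y, w, h) of a centered rectangle covering the given
    # fraction of each dimension of the frame.
    w = round(width * fraction)
    h = round(height * fraction)
    return ((width - w) // 2, (height - h) // 2, w, h)

# For a 720x480 frame:
safe_action = safe_area(720, 480, 0.9)   # (36, 24, 648, 432)
safe_title = safe_area(720, 480, 0.8)    # (72, 48, 576, 384)
```

Anything drawn outside the outer rectangle may be lost in a given set's corners; text outside the inner one may be unreadable on some sets.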
Many computer systems have displays that run 30 or 60 frames per second, because it is commonly believed that television runs at a rate of 30 frames per second. This is incorrect for two reasons: the true rate is 29.97 frames per second, and television does not transmit whole frames at all, but interlaced fields.
Timecode is a scheme for identifying every frame with a unique number, in the form hour:minute:second:frame, similar in function to the sector and track numbers on computer disk drives. For television, there are assumed to be 30 frames per second, but because the true rate is 29.97, over the course of a half hour the timecode would drift from clock time by about two seconds. There is a special form of timecode called Drop Frame Timecode, which skips every thousandth frame number, so that the final time comes out right. However, it can be madness dealing with a noncontinuous number system in a linear medium, particularly if frame accuracy is required.
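The bookkeeping can be sketched in code. The standard drop-frame convention skips two frame numbers at the start of every minute, except for minutes divisible by ten (which works out to one label in a thousand, as described above). A minimal sketch converting a running frame count to a drop-frame label:

```python
def frames_to_dropframe(frame_number):
    # Convert a 29.97 fps frame count to a drop-frame timecode label.
    # Two frame NUMBERS (not frames) are skipped each minute, except
    # every tenth minute; the semicolon is the usual drop-frame marker.
    drop = 2
    fps = 30
    frames_per_minute = fps * 60 - drop            # 1798 labels per dropped minute
    frames_per_10min = frames_per_minute * 10 + drop  # 17982 per ten minutes
    d = frame_number // frames_per_10min
    m = frame_number % frames_per_10min
    if m > drop:
        frame_number += drop * 9 * d + drop * ((m - drop) // frames_per_minute)
    else:
        frame_number += drop * 9 * d
    frames = frame_number % fps
    seconds = (frame_number // fps) % 60
    minutes = (frame_number // (fps * 60)) % 60
    hours = frame_number // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d};{frames:02d}"
```

Note the discontinuity: the frame after 00:00:59;29 is labeled 00:01:00;02. This is exactly the noncontinuous numbering that makes frame-accurate work maddening.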
Computers want to be able to deal with images as units. Television doesn't, because it interlaces. Interlace is a scheme for doubling the apparent frame rate at the price of a loss of vertical resolution and a lot of other problems. Pictures are transmitted as alternating fields of even lines and fields of odd lines.
A television camera produces 59.94 fields per second, each taken from a different instant in time. If there is any motion in the scene, it is not possible to do a clean freeze frame, because the frozen image will be made up of two fields, forcing the image to flutter forward and backward in time. A still can be made by taking a single field and doubling it to make a frame, with a loss of image quality.
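That field-doubling trick is easy to express in software. A minimal sketch, treating a frame as a list of scan lines (each line a list of pixel values):

```python
def freeze_from_field(frame, keep_even=True):
    # Build a flutter-free still from an interlaced frame by keeping
    # one field (every other scan line) and line-doubling it back to
    # full height. Trades vertical detail for temporal consistency.
    start = 0 if keep_even else 1
    field = frame[start::2]
    still = []
    for line in field:
        still.append(line)
        still.append(line)  # repeat the line to fill the gap
    return still[:len(frame)]
```

The two fields came from different instants, so either choice gives a coherent moment in time; keeping both would mix the two instants and flutter.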
Twitter is a disturbing flicker caused by the content of one line being significantly different from its interfield neighbors. In extreme cases, it can cause the fields to separate visibly. Twitter can be a big problem for computer-generated graphics, because twittery patterns are extremely common, particularly in text, boxes, and line drawings. The horizontal stripes in the Macintosh title bar cause terrible twitter. Twitter can be removed by filtering, but with a loss of detail and clarity.
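The filtering in question is typically a vertical low-pass. A minimal sketch of a 1-2-1 blend of each line with its neighbors, assuming a frame is a list of lists of 8-bit pixel values (this particular kernel is an illustrative choice, not prescribed by the article):

```python
def antitwitter(frame):
    # Vertical low-pass filter: blend each line with the lines above
    # and below in a 1:2:1 ratio. One-pixel-high detail (the worst
    # twitter offender) is smeared across neighboring lines, which
    # reduces interfield flicker at the cost of sharpness.
    h = len(frame)
    out = []
    for y in range(h):
        above = frame[max(y - 1, 0)]
        below = frame[min(y + 1, h - 1)]
        line = [(a + 2 * b + c) // 4
                for a, b, c in zip(above, frame[y], below)]
        out.append(line)
    return out
```

A single bright horizontal line (like a title-bar stripe) comes out of this filter spread over three lines at reduced intensity, which is exactly the loss of detail and clarity the trade-off implies.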
Field dominance is the convention for deciding what a frame is: an odd field followed by an even, or an even followed by an odd. There are two possible ways to do it; neither is better than the other, and neither is generally agreed upon. Some equipment is even, some is odd, some is random. This can be critical when dealing with frames as discrete objects, as in collections of stills. If the field dominance is wrong, instead of getting the two fields of a single image, you will get a field each of two different images, which looks sort of like a superimposition, except that it flickers like crazy.
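The failure mode is easy to demonstrate with a toy pairing routine (the field labels here are hypothetical, just to show which fields end up together):

```python
def frames_from_fields(fields, odd_dominant=False):
    # Pair a stream of alternating fields into frames. Choosing the
    # wrong dominance shifts the pairing by one field, so each "frame"
    # combines fields from two different source images.
    start = 1 if odd_dominant else 0
    return [(fields[i], fields[i + 1])
            for i in range(start, len(fields) - 1, 2)]

fields = ["A-even", "A-odd", "B-even", "B-odd"]
frames_from_fields(fields)                      # pairs A with A, B with B
frames_from_fields(fields, odd_dominant=True)   # pairs A's odd field with B's even
```

With the wrong choice, every still in a collection becomes a flickering superimposition of two neighbors.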
RCA Laboratories came up with an ingenious method for inserting color into a television channel that could still be viewed by unmodified black and white sets. But it didn't come for free. The placing of all of the luminance and color information into a single composite signal causes some special problems.
The color space of television is not the same as that in a computer RGB system. A computer can display colors that television can't, and trying to encode those colors into a composite television signal can cause aliasing. (Aliasing means "something you don't want.")
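The mismatch can be illustrated with the approximate FCC RGB-to-YIQ conversion used by NTSC encoding. In this sketch, the 1.2 "headroom" threshold is an illustrative assumption standing in for the channel's real amplitude limits, not a broadcast specification:

```python
import math

def rgb_to_yiq(r, g, b):
    # Approximate FCC NTSC matrix; r, g, b in 0..1.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def is_legal_composite(r, g, b, peak=1.2):
    # The composite signal peaks at luminance plus chroma amplitude.
    # If that exceeds the channel's headroom (assumed 1.2 here), the
    # color cannot be encoded cleanly as composite video.
    y, i, q = rgb_to_yiq(r, g, b)
    return y + math.hypot(i, q) <= peak
```

Neutral grays are always safe, but a fully saturated RGB yellow, for example, pushes the composite peak past the assumed headroom, which is why such colors misbehave when sent to television.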
Television cannot change colors as quickly as a computer display can. This can also cause aliasing and detail loss in computer-generated pictures on television. There are other problems, such as chroma crawl and cross-color, which are beyond the scope of this article. But they're there.
In the Golden Age, there was no good way to save programs, so all programs were produced live. Videotape was developed years later.
Our problems with videotape are due to two sources: First, the design of television gave no thought to videotape or videodisc, which results in the generation loss problem. Second, the control aspects of interactive television require greater precision than broadcasters require, which creates the frame accuracy problem.
Generation loss is the degradation in the quality of a program every time it is copied. Because videotape is not spliced, the only way to assemble material is by copying it, and with each copy it gets worse. This problem is being corrected by the application of digital technology, and can be considered solved, at least at some locations. It remains to make digital video recording cheap and widely available.
The frame accuracy problem is another story. A computer storage device that, when requested to deliver a particular sector, instead delivered a different sector would be considered defective. In the world of videotape editing, no one can notice that an edit is off by 1/29.97 of a second, so precise, accurate-to-the-frame behavior is not always demanded of professional video gear. This can make the production of computer interactive video material extremely difficult, because if your interest is in a particular frame, the off-by-one frame is totally wrong.
This chapter has mostly concentrated on the NTSC system used in the United States. Other countries use the PAL and SECAM systems, which have their own worlds of problems. These are compounded for the designer who wants to make programs that work in all nations.
A number of new television systems are being proposed to replace or enhance the existing systems. To the extent these have progressive scan (noninterlaced), component color (not composite), a frame rate that can be expressed as a whole number (60 fps, not 59.94 fps), and digital encoding (not analog), then computers and television can be integrated successfully, and the limitations listed above will be techno-historical trivia.
The convergence of television and computer media is extremely desirable. Computer technology would benefit from animated displays and high-bandwidth digital video storage. Camcorders would be wonderful computer input devices. Television technology would benefit from being less mysterious and more straightforward, eliminating the video priesthood in much the same way that good interface design will eliminate the computer priesthood.
Although desirable, this convergence is not inevitable. Some of the worst use of computers is in television engineering. Some of the worst television is "desktop video." The full power of a new medium based on the well considered unification of computer and television technology is distant and elusive. The design challenge is not technologically difficult. It requires only a commitment to excellence and a willingness to accept change.
This New Television could make the tools of production available to every individual. The New Media Literacy could grant people a significant power over the technology of the Information Age. The New Television could perhaps be the most important form of communication ever invented.