6
Terminal Insanity
Curses! Foiled Again!

Unix is touted as an interactive system, which means that programs interact with the user rather than solely with the file system. The quality of the interaction depends on, among other things, the capabilities of the display and input hardware that the user has, and the ability of a program to use this hardware.

Original Sin

Unfortunately for us, Unix was designed in the days of teletypes. Teletypes support operations like printing a character, backspacing, and moving the paper up a line at a time. Since that time, two different input/output technologies have been developed: the character-based video display terminal (VDT), which could output characters much faster than hardcopy terminals and, at the very least, place the cursor at arbitrary positions on the screen; and the bit-mapped screen, where each separate pixel could be turned on or off (and, in the case of color, each pixel could have its own color from a color map).

As soon as more than one company started selling VDTs, software engineers faced an immediate problem: different manufacturers used different control sequences to accomplish similar functions. Programmers had to find a way to deal with the differences.

Programmers at the revered Digital Equipment Corporation took a very simple-minded approach to solving the heterogeneous terminal problem. Since their company manufactured both hardware and software, they simply didn’t support terminals made by any other manufacturer. They then hard-coded algorithms for displaying information on the standard DEC VT52 (then the VT100, VT102, and so on) into their VMS operating system, application programs, scripts, mail messages, and any other system string that they could get their hands on. Indeed, within DEC’s buildings ZK1, ZK2, and ZK3, an entire tradition of writing animated “christmas cards” and mailing them to other, unsuspecting users grew up around the holidays. (Think of these as early precursors to computer worms and viruses.)

At the MIT AI Laboratory, a different solution was developed. Instead of teaching each application program how to display information on the user’s screen, these algorithms were built into the ITS operating system itself. A special input/output subsystem within the Lab’s ITS kernel kept track of every character displayed on the user’s screen and automatically handled the differences between different terminals. Adding a new kind of terminal only required teaching ITS the terminal’s screen size, control characters, and operating characteristics, and suddenly every existing application would work on the new terminal without modification.

And because the screen was managed by the operating system, rather than by each application, every program could do things like refresh the screen (if you had a noisy connection) or share part of the screen with another program. There was even a system utility that let one user see the contents of another user’s screen, useful if you wanted to answer somebody’s question without walking over to their terminal.

Unix (through the hand of Bill Joy) took a third approach. The techniques for manipulating a video display terminal were written and bundled together into a library, but then this library, instead of being linked into the kernel where it belonged (or put in a shared library), was linked with every single application program. When bugs were discovered in the so-called termcap library, the programs that were built from termcap had to be relinked (and occasionally recompiled). Because the screen was managed on a per-application basis, different applications couldn’t interoperate on the same screen. Instead, each one assumed that it had complete control (not a bad assumption, given the state of Unix at that time). And, perhaps most importantly, the Unix kernel still thought that it was displaying information on a conventional teletype.
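To make the per-application arrangement concrete, here is a minimal sketch of the bookkeeping that every termcap-using program had to carry around for itself. It uses the classic termcap calls (tgetent, tgetstr, tgetnum, tgoto, tputs) and the traditional capability names (“cl” for clear-screen, “cm” for cursor motion); the buffer sizes, the header name, and the library you link against (-ltermcap, or -lncurses on modern systems) vary from one Unix to the next, so treat the details as illustrative rather than gospel.

    /* Sketch: the terminal-handling dance each program repeated for itself.
       Compile with something like: cc demo.c -ltermcap (or -lncurses). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <termcap.h>

    static int put1(int c) { return putchar(c); }   /* tputs wants a putchar-like hook */

    int main(void)
    {
        char entry[2048];              /* raw termcap entry for this terminal type */
        char caps[512], *area = caps;  /* arena that tgetstr copies strings into   */
        char *term = getenv("TERM");

        if (term == NULL || tgetent(entry, term) <= 0) {
            fprintf(stderr, "unknown terminal type\n");
            return 1;
        }

        char *clear  = tgetstr("cl", &area);   /* clear-screen sequence     */
        char *motion = tgetstr("cm", &area);   /* cursor-motion template    */
        int  lines   = tgetnum("li");          /* screen height, if known   */

        if (clear)
            tputs(clear, lines > 0 ? lines : 1, put1);
        if (motion)                            /* move to row 5, column 10  */
            tputs(tgoto(motion, 10, 5), 1, put1);

        puts("Hello from one more program carrying its own copy of termcap.");
        return 0;
    }

The point is not that any one of these calls is hard, but that this logic, and any bugs in it, was statically baked into every binary on the system; the kernel itself knew nothing about it.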