Two famous people, one from MIT and another from Berkeley (but working on Unix), once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC² loser-ing problem.

² Program Counter. The PC is a register inside the computer’s central processing unit that keeps track of the current execution point inside a running program.

The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as an input/output operation involving I/O buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine, so that resumption of the user program after the interrupt, for example, reenters the system routine. It is called “PC loser-ing” because the PC is being coerced into “loser mode,” where “loser” is the affectionate name for “user” at MIT.

The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine always to finish; sometimes, though, an error code would be returned signaling that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.

The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff had been selected in Unix, namely that implementation simplicity was more important than interface simplicity.

The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn’t understand (I’m not sure I do either).
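As an aside, the “extra test and loop” the New Jersey guy alludes to survives today as the errno/EINTR retry idiom of POSIX. The essay names no particular routine, so the sketch below is only illustrative: it assumes the modern convention, using read() as the system call that a signal may interrupt.

    #include <errno.h>
    #include <unistd.h>

    /* Retry a slow system call that a signal may cut short.  Under the
     * POSIX convention, an interrupted read() returns -1 with errno set
     * to EINTR, and a correct user program simply tries again. */
    ssize_t read_retrying(int fd, void *buf, size_t count)
    {
        ssize_t n;
        do {
            n = read(fd, buf, count);        /* may be interrupted */
        } while (n == -1 && errno == EINTR); /* the extra test and loop */
        return n;  /* bytes read, 0 at end of file, or -1 on real error */
    }

The interface complexity the MIT guy objects to is visible here: every caller must know to wrap the call this way, while the kernel implementation stays simple.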
Now I want to argue that worse-is-better is better. C is a programming language designed for writing Unix, and it was designed using the New Jersey approach. C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret. Some have called C a fancy assembly language. Early Unix and C compilers had simple structures, were easy to port, required few machine resources to run, and provided about 50% to 80% of what you want from an operating system and programming language.

Half the computers that exist at any point are worse than the median (smaller or slower). Unix and C work fine on them. The worse-is-better philosophy means that implementation simplicity has the highest priority, which means Unix and C are easy to port to such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they?

Unix and C are the ultimate computer viruses.

A further benefit of the worse-is-better philosophy is that the programmer is conditioned to sacrifice some safety and convenience, and to put up with some hassle, to get good performance and modest resource use. Programs written using the New Jersey approach will work well on both small machines and large ones, and the code will be portable because it is written on top of a virus.

It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing. In concrete terms, even though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.

The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.

There is a final benefit to worse-is-better. Because a New Jersey language and system are not really powerful enough to build complex monolithic software, large systems must be designed to reuse components. Therefore, a tradition of integration springs up.

How does the right thing stack up? There are two basic scenarios: the “big complex system” scenario and the “diamond-like jewel” scenario.

The “big complex system” scenario goes like this: