might be useful. Few people, that is, except for Macintosh users, who have known and enjoyed file types since 1984.

No Record Lengths

Despite the number of databases stored on Unix systems, the Unix file system, by design, has no provision for storing a record length with a file. Again, storing and maintaining record lengths is left to the programmer. What if you get it wrong? Again, this depends on the program that you're using. Some programs will notice the difference. Most won't. This means that you can have one program that stores a file with 100-byte records, and you can read it back with a program that expects 200-byte records, and it won't know the difference. Maybe…

All of Unix's own internal databases—the password file, the group file, the mail aliases file—are stored as text files. Typically, these files must be processed from beginning to end whenever they are accessed. "Records" become lines that are terminated with line-feed characters. Although this method was adequate when each database typically had fewer than 20 or 30 lines, when Unix moved out into the "real world" people started trying to put hundreds or thousands of entries into these files. The result? Instant bottleneck trying to read system databases. We're talking real slowdown here. Doubling the number of users halves performance. A real system wouldn't be bothered by the addition of new users. No fewer than four mutually incompatible workarounds have now been developed to cache the information in /etc/passwd, /etc/group, and other critical databases. All have their failings. This is why you need a fast computer to run Unix.
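To make the cost concrete, here is a minimal sketch (ours, not anything shipped with any Unix) of what a lookup in a line-oriented database like /etc/passwd amounts to; the find_user function is hypothetical, but the drudgery is not:

    #include <stdio.h>
    #include <string.h>

    /* Minimal sketch: every lookup in a line-oriented database such as
     * /etc/passwd reads the whole file from the top until the record
     * turns up (or doesn't). */
    int find_user(const char *login)
    {
        FILE *fp = fopen("/etc/passwd", "r");
        char line[1024];

        if (fp == NULL)
            return -1;
        while (fgets(line, sizeof line, fp) != NULL) {
            char *colon = strchr(line, ':');   /* login name ends at the first ':' */
            if (colon != NULL && (size_t)(colon - line) == strlen(login)
                    && strncmp(line, login, strlen(login)) == 0) {
                fclose(fp);
                return 0;                      /* found it, eventually */
            }
        }
        fclose(fp);
        return -1;                             /* scanned every line for nothing */
    }

Twice as many users means twice as many lines to drag through before the loop finds, or fails to find, the one you want.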
File and Record Locking

"Record locking" is not a way to keep the IRS away from your financial records, but a technique for keeping them away during the moments that you are cooking them. The IRS is only allowed to see clean snapshots, lest they figure out what you are really up to. Computers are like this, too. Two or more users want access to the same records, but each wants private access while the others are kept at bay. Although Unix lacks direct record support, it does have provisions for record locking. Indeed, many people are surprised that modern Unix has not one, not two, but three completely different systems for record locking.

In the early days, Unix didn't have any record locking at all. Locking violated the "live free and die" spirit of this conceptually clean operating system. Ritchie thought that record locking wasn't something that an operating system should enforce—it was up to user programs. So when Unix hackers finally realized that locks had to be made and maintained, they came up with the "lock file."

You need an "atomic operation" to build a locking system. These are operations that cannot be interrupted midstream. Programs under Unix are like siblings fighting over a toy. In this case, the toy is called the "CPU," and it is constantly being fought over. The trick is to not give up the CPU at embarrassing moments. An atomic operation is guaranteed to complete without your stupid kid brother grabbing the CPU out from under you.

Unix has a jury-rigged solution called the lock file, whose basic premise is that creating a file is an atomic operation; a file can't be created when one is already there. When a program wanted to make a change to a critical database called losers, the program would first create a lock file called losers.lck. If the program succeeded in creating the file, it would assume that it had the lock and could go and play with the losers file. When it was done, it would delete the file losers.lck. Other programs seeking to modify the losers file at the same time would not be able to create the file losers.lck. Instead, they would execute a sleep call—wait for a few seconds—and try again.

This "solution" had an immediate drawback: processes wasted CPU time by attempting over and over again to create locks. A more severe problem occurred when the system (or the program creating the lock file) crashed, because the lock file would outlive the process that created it and the file would remain forever locked. The solution that was hacked up stored the process ID of the lock-making process inside the lock file, similar to an airline passenger putting name tags on her luggage. When a program finds the lock file, it searches the process table for the process that created the lock file, similar to an airline attempting to find the luggage's owner by driving up and down the streets of the disembarkation point. If the process isn't found, it means that the process died, and the lock file is deleted. The program then tries again to obtain the lock. Another kludge, another reason Unix runs so slowly.
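In code, the ritual looks roughly like this. The names losers and losers.lck come from the text above; the use of open() with O_CREAT|O_EXCL for the atomic create, and of kill() with signal 0 as the modern idiom for "search the process table," are our reconstruction of the usual technique, not code quoted from any particular Unix:

    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch of the classic lock-file dance around a database called
     * "losers". */
    int lock_losers(void)
    {
        for (;;) {
            /* O_CREAT|O_EXCL makes the create atomic: it fails if the
             * lock file already exists. */
            int fd = open("losers.lck", O_WRONLY | O_CREAT | O_EXCL, 0644);
            if (fd >= 0) {
                /* Tag the lock with our process ID -- the name tag
                 * on the luggage. */
                char buf[32];
                int len = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
                (void)write(fd, buf, len);
                close(fd);
                return 0;                   /* we hold the lock */
            }
            if (errno == EEXIST) {
                /* Someone else holds it.  Read the PID and see whether
                 * that process is still alive. */
                FILE *fp = fopen("losers.lck", "r");
                long pid = 0;
                if (fp != NULL) {
                    fscanf(fp, "%ld", &pid);
                    fclose(fp);
                }
                if (pid > 0 && kill((pid_t)pid, 0) == -1 && errno == ESRCH)
                    unlink("losers.lck");   /* owner is dead: stale lock */
                else
                    sleep(1);               /* busy: wait and try again */
                continue;
            }
            return -1;                      /* some other failure */
        }
    }

Every waiting process burns a wakeup a second polling for the lock, and the stale-lock cleanup only works if the PID in the file hasn't been recycled by some unrelated process in the meantime.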
After a while of losing with this approach, Berkeley came up with the concept of advisory locks. To quote from the flock(2) man page (we're not making this up):

    Advisory locks allow cooperating processes to perform consistent operations on files, but do not guarantee consistency (i.e., processes may still access files without using advisory locks possibly resulting in inconsistencies).
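For contrast, the Berkeley way looks something like this; the file name losers is carried over from the earlier sketch, and the body is an illustration of ordinary flock(2) usage rather than anything quoted from the man page:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Sketch of Berkeley-style advisory locking with flock(2). */
    int update_losers(void)
    {
        int fd = open("losers", O_RDWR);
        if (fd < 0)
            return -1;

        if (flock(fd, LOCK_EX) == -1) {     /* block until we get an exclusive lock */
            close(fd);
            return -1;
        }

        /* ... read and rewrite the losers file here ... */

        flock(fd, LOCK_UN);                 /* release the lock */
        close(fd);
        return 0;
    }

The lock keeps out only those processes polite enough to ask for it; a program that never calls flock() can scribble on the file anyway, which is exactly what the man page is admitting.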