Why Y2K?

The Y2K Decision Was Rooted in Imperfect Information

Bill O. Reitz is a pen name for a West Coast electronics executive.

We are fast approaching that fateful day, January 1, 2000. Whether the much-debated Y2K problem will come in with a bang or merely a whimper, only time will tell. But it is interesting to ask why we are in this situation today.

In a recent issue of The Freeman, Mark Skousen blamed the problem—which stems from storing dates with only two digits for the year—on a “classic… shortsighted blunder” based on “stupidity and incompetence.”[1] I must take issue with his characterization. Managers, engineers, and even economists live in a world of imperfect information: we all do the best we can, given the information at hand. I believe that anyone who had to make the decision at that time would have invariably come to a similar conclusion. To call this an “error” is one thing; to blame it on shortsightedness, stupidity, and incompetence is quite another.

Certain important factors are missing from many analyses of Y2K, including the time value of money, the expected probability of a particular outcome, and the uncertainty about projections far into the future. These elements played such an important part in the Y2K decision, and its eventual impact, that they must be discussed further.

In the mid-1950s, computing power was far more expensive than it is today. Programmers of the time, and the managers they worked for, would have been accused of the height of arrogance had they claimed that the software they wrote would still be in use nearly half a century later, or that any decision they made would have any effect so far into the future. Most programmers expected that the life of their products would be a few years—ten at most—and that by the 1960s new software would be written to run on newer, better machines. Even the most arrogant among them would probably not have estimated more than a 1 percent chance that anything they did then would matter to anyone in the year 2000. In hindsight we know that it was the decision these programmers made about how to store data, and not the programs themselves, that would create the legacy known as the Y2K bug.
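
To make the nature of that storage decision concrete, here is a minimal illustration (a Python sketch with invented values, not anything the original programmers wrote) of what happens when a program does arithmetic on two-digit years:

    # Illustration only: why two-digit years break at the century boundary.
    # With only "YY" stored, the year 2000 looks smaller than 1999, so elapsed-time
    # calculations suddenly come out negative.

    def years_between(yy_start, yy_end):
        """Naive two-digit arithmetic, as an early program might have done it."""
        return yy_end - yy_start

    print(years_between(65, 99))   # 1965 to 1999: prints 34, as expected
    print(years_between(99, 0))    # 1999 to 2000: prints -99 instead of 1

Any interest, age, or scheduling calculation built on that subtraction goes wrong the moment the calendar rolls over.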

Moore’s Law

If there is anything that is truly responsible for the Y2K predicament, it is Moore’s Law, or more properly, the engine of human progress that Moore’s Law describes. Gordon Moore, chairman emeritus of Intel Corporation, first observed the rapid improvement in computing power in 1965. At the time, he thought that power doubled every 12 months, but with better information he found 18 months to be more accurate.

The programmers of the original software did not have the benefit of Moore’s data to draw on. Even if they had, they most likely would have thought there was a limit to how far computing power could expand—a wall, as it is called. (Moore himself has been recently quoted as saying that there is a wall.[2])

The implication of Moore’s Law is that the cost of a megabyte of computer main memory (now a dollar or two) would have been a few billion dollars in the mid-1950s. Actually, the cost was closer to a few million, because the performance (which for memory is measured as access speed, or the number of times per second you can read or write a piece of data) was thousands of times slower. Because of the incredibly high cost of computers and their limited capabilities, even the most optimistic estimates of the time placed the total world market at a few hundred units. In hindsight we now know that this estimate was off by a factor of about a million. If it were not for this progress, the impact of the Y2K problem would be felt only by a few of the world’s largest corporations and government agencies, and its cost would have been far less significant.

Even if these people had thought their decisions might have consequences so far into the future, they would have had to discount the expected value of such a future expense by the time value of money—a factor of around 100 at a discount rate of 10 percent. If we say that the total cost of fixing the Y2K bug is $1 trillion, we see that the expected present value to those programmers and managers would have been $100! Even this improbable result would have been weighed against the present and very real cost of including the extra digits in the databases of the time.[3]
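
To spell the arithmetic out, the short sketch below (in Python) walks through the same back-of-the-envelope figures given in note 3; every number in it is an assumption taken from this article, not a measurement:

    # Note 3's back-of-the-envelope calculation, spelled out step by step.
    # Every figure below is an assumption taken from the article, not measured data.

    total_fix_cost = 1_000_000_000_000   # assumed worldwide cost of fixing Y2K, in dollars
    scale_surprise = 1_000_000           # computers in use today versus mid-1950s market forecasts
    probability = 0.01                   # generous odds that those 1950s decisions would matter in 2000
    discount_rate = 0.10                 # annual rate for the time value of money
    years_ahead = 48                     # roughly the early-to-mid 1950s to 2000

    cost_on_foreseen_scale = total_fix_cost / scale_surprise   # $1,000,000
    expected_cost = cost_on_foreseen_scale * probability       # $10,000
    discount_factor = (1 + discount_rate) ** years_ahead       # about 97, the "factor of around 100"
    present_value = expected_cost / discount_factor

    print(round(present_value))          # roughly 100 dollars, matching the article's figure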

Whether anyone actually went through such an analysis we will probably never know. But if they had, the result was a foregone conclusion—based on the best information available at the time. To conclude that this was entrepreneurial error is simply inappropriate.

In the early days of computers, the central processing unit, or CPU, used vacuum tubes for logic gates, and memory technology consisted of magnetic “drum” and “core.” The engineers and programmers who developed the initial software containing the Y2K bug had probably never heard of three researchers working quietly in a laboratory off the beaten path at AT&T Bell Labs. Walter Brattain, William Shockley, and John Bardeen were developing what would become the transistor. After the first transistors were built, practical commercial applications would still require many years of research and development. One of the early adopters of transistors was Sony Corp., at the time a small, little-known, and struggling Japanese radio manufacturer trying to eke out a profit in the difficult years of rebuilding after World War II. While the big U.S. radio manufacturers were not particularly interested in transistors, Sony realized that they might allow miniaturization and portability. This insight certainly must rank with the greatest advances in the field of electronics, for no matter how clever an innovation is, it is worth nothing until someone uses it for something consumers will buy.

Once the transistor was in production, computer makers adopted it as well. Solid-state memory (as opposed to magnetic core memory) was theoretically possible but neither cost-effective nor size-competitive at that time. Advances in transistor manufacturing made the devices smaller, faster, and cheaper. One of these advances was an imaging process whereby patterns were placed on the surface of a silicon wafer using photosensitive coatings. This made possible the production of thousands of transistors at once; more important, it led to the next major advance in electronics.

The Integrated Circuit

An integrated circuit (IC) is a silicon wafer containing the patterns of several transistors, along with a connection pattern hooking them together in a circuit to perform a function. The first IC, built in 1958 by Jack Kilby of Texas Instruments, contained just one transistor and four other components. The first production IC, designed by Robert Noyce (later co-founder, with Gordon Moore, of Intel Corp.), contained eight transistors and eight resistors. The first microprocessor, the Intel 4004 in 1971, used 2,300 transistors. Today’s Pentium II has 7.5 million.

Moore’s Law applies today primarily to the ever-decreasing scale of the patterns on ICs. The first IC used a “geometry,” as it is called, of about 100 microns. This means that the metal connections making up the transistors and wiring on the chip were about 100 microns wide, which is about 0.004 inch, or the width of a human hair. While this sounds small enough, today’s leading-edge microprocessors (CPUs) and memory ICs use a geometry of 0.25 micron. This is 400 times smaller, but because ICs are basically two-dimensional, you can get 400 squared, or 160,000, times as many transistors on the same piece of silicon as you could at 100 microns. The next generation of ICs will have 0.18-micron geometry, nearly doubling capacity yet again.
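
The scaling arithmetic in the paragraph above is easy to check; the short sketch below (in Python) uses only the geometry figures just quoted:

    # Checking the shrink factors quoted above, using only the article's geometry figures.

    first_ic_geometry = 100.0   # microns, the first integrated circuits
    current_geometry = 0.25     # microns, leading-edge parts at the time of writing
    next_geometry = 0.18        # microns, the announced next generation

    linear_shrink = first_ic_geometry / current_geometry   # 400 times smaller features
    density_gain = linear_shrink ** 2                      # chips are planar, so the gain goes as the square
    print(int(density_gain))                               # 160000 times as many transistors per unit area

    next_step_gain = (current_geometry / next_geometry) ** 2
    print(round(next_step_gain, 2))                        # about 1.93, i.e., "nearly doubling" capacity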

However, Moore’s Law also applied to the cost of computing in the pre-IC days. Just as a wall was hit with one technology (say, vacuum tubes), another came along (the transistor or the IC) to make possible the continuing reduction in cost that has so greatly benefited mankind. Moore’s statement that there is a wall in our future is conditioned on limits inherent in the continual reduction in scale of current IC processes. But who is to say that there will not be another technology to come along at that point—or earlier? One obvious possibility is to develop some sort of three-dimensional process for IC manufacture; people are working on this idea today.

Lost Opportunities

You will recall that the true cause of the Y2K bug was not the programs developed by the original programmers, but rather their decision to store the year with two characters instead of four. Recall also that, over the last half century, new software has repeatedly been put into production to take advantage of improvements in hardware capability and in software design.

While it is difficult to modify an existing program to use four-digit years, it is relatively simple to design a program that way from the start. New programs, however, had to be compatible with existing databases, which contained thousands or even millions of records representing vast value. Re-entering these databases would not only have been prohibitively expensive; it would also have introduced numerous errors, which would have to be found and fixed, adding to cost and causing disruption.

One issue that is often overlooked in this discussion, however, is that it is relatively simple to write a program that reads an existing (two-digit) database and prefixes every year field with “19,” thereby converting it to a four-digit database. While these databases may be huge, they are highly structured: every record has exactly the same format. The dates are always in the same place, and adding a couple of characters to each record would be relatively easy.
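
Such a conversion program might have looked something like the sketch below (in Python; the record layout, field offset, file names, and the assumption that every stored year belongs to the 1900s are hypothetical, chosen only for illustration):

    # Hypothetical sketch: widening a fixed-format database from YYMMDD to YYYYMMDD dates.
    # Assumes text records of identical layout, the date field at a known offset, and
    # that every stored year belongs to the twentieth century. Names and offsets are invented.

    DATE_OFFSET = 20   # illustrative position of the YYMMDD field within each record
    DATE_LENGTH = 6

    def widen_record(record):
        """Return the record with '19' inserted ahead of the two-digit year."""
        return (record[:DATE_OFFSET]
                + "19"
                + record[DATE_OFFSET:DATE_OFFSET + DATE_LENGTH]
                + record[DATE_OFFSET + DATE_LENGTH:])

    with open("old_master.dat") as old_file, open("new_master.dat", "w") as new_file:
        for line in old_file:
            new_file.write(widen_record(line.rstrip("\n")) + "\n")

A real conversion would also have to handle the occasional record whose year does not belong to the 1900s, which is one reason a blanket “19” prefix is only a starting point.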

So why didn’t this happen? As the ’70s, ’80s, and then ’90s came upon us, the problem was understood by many. The old justification for two-digit years (cost of memory) was no longer valid. Why, other than the natural human tendency to procrastinate, was this simple change not made? Perhaps it was because multiple programs used the same databases and they couldn’t all be updated at once. I don’t know the answer to this question, but if the Y2K problem turns out to have consequences as dire as some predict, it will be a question that many will wonder about.

If there is any blame to be placed in all of this, surely it does not rest on the shoulders of those who, nearly half a century ago, used their best engineering and management judgment to make what seemed then to be a simple decision. It lies, instead, with the inevitable shortcomings of making decisions based on imperfect information. Let us hope that our descendants will be more generous with us when they examine the effects of our work in the year 2050.


Notes

  1. Mark Skousen, “Y2K and Entrepreneurial Error,” The Freeman, March 1999.
  2. Gordon Moore, Intel Developer Forum, September 30, 1997, as quoted by Michael Kanellos, CNET News.com.
  3. The calculation goes like this: $1,000,000,000,000, divided by one million, multiplied by 1 percent, and divided by 100, equals $100. One million is for the greater number of computers now in use than were expected; 1 percent for the likelihood that the programmers’ decisions would affect us today; and 100 for the time value of money at 10 percent.