Was the Y2K threat real, imagined, or invented?

In response to my post regarding false predictions not being properly punished, some blog readers took exception to my argument that the hysteria surrounding Y2K was a false prophecy. Their argument is that all of the preparation leading up to Y2K averted what would otherwise have been a disaster.

That just doesn’t ring true to me. Was there not anyone, anywhere who either failed to properly prepare, or maybe happened to overlook some aspect of how Y2K might affect their systems? Did every small business and third world country catch every bug? Did anything go wrong as a result of Y2K? Did anyone ever test a system in advance of Y2K and find that had they not tested, something catastrophic would have happened?

Here is a good article from Larry Seltzer, who knows much more than I do about the subject and holds the same view.

My recollection is that programmers were getting paid far above standard wage rates due to the great demand for their services with Y2K. Could it be that there were strong incentives on their part to exaggerate the danger? Sounds logical.


Also, the problem in Excel is NOT a Y2K bug. In fact it isn't a problem at all and doesn't have anything to do with Excel; it is a setting in Windows that controls how two-digit dates are interpreted. Is there a study on the cost of a lack of information about operating-system settings?
DOS came with a book, and I think I knew more then because I had a hard-copy reference. Windows I just take for granted as long as my programs run. I wouldn't have known about the date handling, except for a friend who cares to read online documentation (I don't).


The Y2K issue was largely, but not exclusively, a COBOL problem. The abbreviated date format traces at least as much to the 80-character limit on punch cards as to core-memory constraints. The history predates the introduction of C and the C standard library, which were not developed until the '70s.
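The comment above explains where the two-digit date format came from; here is a minimal sketch (in Python, purely for illustration) of what that format does to arithmetic at the century boundary:

```python
# The core Y2K bug in miniature: with two-digit years, "00" (2000)
# compares as less than "99" (1999), so interval math goes negative.
born, today = "99", "00"       # record from Dec 1999, checked Jan 2000
elapsed = int(today) - int(born)
print(elapsed)                 # -99 instead of 1
```

The same inversion breaks sorting, expiry checks, and interest calculations wherever the two digits are treated as a full year.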

Embedded systems were also a concern. These were often used to control systems where failure to take an expected action could cause mischief.

It was plausible that actions taken or not taken because of time miscalculations and date overflows could have severe secondary or tertiary effects.

Let's put this into perspective. The great Northeast power outage of 1965 resulted from a single protective-relay failure starting an overload cascade in the power grid. The failure of the Patriot missile in the first Gulf War that killed 28 members of the Army's 14th Quartermaster Detachment was a simple but cumulative numeric round-off problem leading to a miscalculation of the intercept; crews periodically "fixed" it by resetting (rebooting) the system. For Y2K, catastrophic failure was unlikely, but not impossible. The network of consequences was unknowable in advance, but a reasonable person would test for failure where practical.
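The Patriot round-off failure mentioned above is well documented; a back-of-envelope sketch of the arithmetic, taking the per-tick truncation error figure from the GAO report on the incident as a given rather than deriving it:

```python
# Back-of-envelope for the Patriot clock-drift failure. The per-tick
# truncation error is the figure reported in the GAO account of the
# incident (GAO/IMTEC-92-26); it is assumed here, not derived.
ERROR_PER_TICK = 9.5e-8     # error in the 24-bit fixed-point value of 0.1 s
TICKS_PER_SECOND = 10       # the system clock counted tenths of a second
SCUD_SPEED = 1676           # m/s, approximate

hours_up = 100              # the battery had run ~100 hours without a reboot
ticks = hours_up * 3600 * TICKS_PER_SECOND
drift = ERROR_PER_TICK * ticks
print(f"clock drift after {hours_up} h: {drift:.3f} s")
print(f"range-gate miss: {drift * SCUD_SPEED:.0f} m")
```

The drift works out to roughly a third of a second, which at Scud speeds moves the tracking gate by hundreds of meters, more than enough to miss the intercept.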

I witnessed quite a bit of resource waste remediating low-risk Y2K issues, but much of it had to be done. Despite the overblown TEOTWAWKI ("the end of the world as we know it") hype, that truth remains.



The real Y2K problem is that most people had/have no idea how commonly computers crash and power is interrupted. Both happen all the time, and the minders figure out the problem, or at least a workaround, and life goes on without catastrophe.

Upon hearing that computers might crash, the public (NOT programmers, who know the truth well) went nuts extrapolating from their mistaken assumption that computers very rarely crash and power is constant.

As for elevators plummeting and planes falling from the sky, all anyone had to do to identify troublesome embedded devices was to check whether there was some means of telling the thing today's date. Any objection about "mysterious hidden factory date-settings" is a repetition of the false assumption that computers rarely crash; many people would already have known about such settings from prior experience of crashes.


It is plausible that catastrophe was averted by fixing some fraction (for example 80%) of the problems.

In some network topologies the system is robust to a high degree of random failure, but vulnerable to a small but targeted failure. The internet is a good example.

In other cases, a phenomenon comparable to a phase transition can be seen: the system's properties transform at some critical point. The spread of contagious disease has been modeled this way.

In the former scenario, fixing the "right" 20% might avert disaster; in the latter, fixing any random 80% might have the same effect. See for example the work of Albert-László Barabási: http://www.nd.edu/~alb/
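The contrast these comments draw can be shown with a toy simulation (pure Python, illustrative sizes only): a scale-free network shrugs off random node failures but fragments when the same number of high-degree hubs are removed.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a Barabási–Albert-style graph: each new node links to m
    existing nodes picked roughly proportionally to current degree."""
    rng = random.Random(seed)
    edges = []
    degree_list = list(range(m))        # nodes appear once per edge end
    for v in range(m, n):
        targets = {rng.choice(degree_list) for _ in range(m)}
        for t in targets:
            edges.append((v, t))
            degree_list += [v, t]
    return edges

def largest_component(n, edges, removed):
    """Size of the biggest connected component after deleting `removed`."""
    alive = set(range(n)) - removed
    adj = {v: [] for v in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

n = 1000
edges = preferential_attachment(n)
degree = {v: 0 for v in range(n)}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

k = 100  # remove 10% of nodes either way
hubs = set(sorted(degree, key=degree.get, reverse=True)[:k])
rand = set(random.Random(0).sample(range(n), k))

print("after random failure:", largest_component(n, edges, rand))
print("after targeted attack:", largest_component(n, edges, hubs))
```

The surviving giant component stays large under random failure and collapses under the targeted attack, which is the sense in which fixing the "right" 20% can matter more than fixing a random 80%.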

Dean's World

Hysteria: Remembering Y2K

Speaking of the Freakonomics guys, who always have something interesting to say, I came across this excellent

The New Market Machines » Blog Archive » Y2K: Ancient History

[...] Was the Y2K threat real, imagined, or invented? (Freakonomics) has a lot of techies up in arms. [...]


You have a fire department with two fire engines.
If one house catches fire, you send both engines and put it out.
If two houses catch fire, you send one engine to each, and do not put it out but instead concentrate on keeping nearby houses from burning down.
If three houses catch fire, you have a problem.
It's not that we were worried about one computer going down because of a soft error (a stray alpha particle flipping a bit) or something, but about many computers going down at the same time and complicating things.


Russia took the following Y2K approach: if something breaks, we will fix it. Virtually nothing was spent in preparation for the dawn of the millennium (in regard to computers, that is; lots was spent on champansky, vodka, and fireworks). Nothing much happened. I think Freakonomics is right.

anonymous economist

For the individual firm, couldn't you calculate the expected value of the total cost of suffering a visible Y2K-related software failure? Taking into account expectations of lost business activity and of the loss of investor confidence, that total cost estimate could be a very large number. Also, considering the number of companies using legacy computer code originally written by long-departed programmers ('I don't know what it does, but if we don't run this program first, the accounts payable application crashes'), I can imagine management being hesitant to assign too small of a probability to having one or more critical Y2K bugs. End result: Throw money at the I.T. department and make the problem go away.
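The expected-value reasoning above can be made concrete with made-up numbers (both the probability and the loss figure below are illustrative assumptions, not data from any firm):

```python
# Illustrative only: assumed probability and assumed loss, not real figures.
p_failure = 0.05            # assumed chance of a visible critical Y2K bug
cost_if_failure = 500e6     # assumed loss: downtime plus investor confidence
expected_loss = p_failure * cost_if_failure
print(f"expected loss: ${expected_loss:,.0f}")
```

Against an expected loss in the tens of millions, a multi-million-dollar remediation budget is easy to approve, which is the comment's point.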

Also, I can remember studying the Y2K bug in a Systems Analysis class in 1980. They knew...


Y2K can be amusing.
A friend works in a midsized retail business owned by his father and brother.
Their accounting software would not work after 1999 and they had other plans for investment. What should they do?

For each year since 1999, they pick a year from the past in which the days of the week fall on the same dates as in the current year, and run the business using that year.
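The workaround described above, reusing a past year whose calendar lines up, is easy to automate; a sketch (the helper name is hypothetical, not from the comment):

```python
import calendar

def matching_past_year(year):
    """Most recent earlier year with an identical calendar:
    same weekday for Jan 1 and same leap-year status."""
    for y in range(year - 1, 1900, -1):
        if (calendar.weekday(y, 1, 1) == calendar.weekday(year, 1, 1)
                and calendar.isleap(y) == calendar.isleap(year)):
            return y

print(matching_past_year(2000))  # 1972: its calendar matches 2000's
```

In the Gregorian calendar such a match is never far away, so the trick works year after year as long as nothing in the software cares about the actual year number.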


Compare the approach of Y2K with Global Warming.
Unknown result on a certain date vs.
certain result on an unknown date.

OK - not completely true

If the world continues to warm and the ice cap on Greenland melts (10,000 feet thick in spots, covering 0.43% of the world's surface), certainly the oceans (70% of the surface is water) will rise. Perhaps by 80 feet.

So if my teenager wants ocean-front property where should it be?


I think that the Y2K bug was one of many potential tragedies that was averted. No, I don't think the hysteria matched the real consequences of the problem. Very few people truly understood what a breakdown of computer systems could have done. Since the proper authorities had ample time to deal with the problem, a solution, or series of solutions, lessened it, and we came out the other side relatively intact.

Those of us who lived through the blackout a few years ago probably have a better idea of what can happen when systems break down. For more than a day my city didn't have functional street lights, traffic lights, or television. As well, financial institutions were closed and consumers were forced to live off the cash they had in their pockets. In my case that was about $5.45.


I worked on code that had Y2K issues; i.e., I was a programmer. That is, the programs would not have worked if they hadn't been fixed, and money would have been lost.

Sure, there was a lot of hype around y2k and much of it was commercially cunning, but the problems were real. Would we have taken it seriously if there hadn't been the hype?

In fact, here's a Y2K problem that still exists in Excel.
- Imagine you are doing long-range planning.
- Start up MS Excel and type in 1/1/29. Excel puts 1 January 2029 into the cell.
- Now type in 1/1/30. What does Excel come up with? It's not 1 January 2030.

This problem won't hit most people because they're not using dates that far in the future, but what if you, or your pension manager, are using Excel to calculate retirement dates and making investments based on how many years until you retire? I'll turn 60 in 2029, so my calculations will be fine. What if you're younger than me?
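What the comment is seeing is two-digit-year "windowing": a pivot year decides which century a two-digit entry belongs to. A sketch of the default rule (the pivot, 2029, is the Windows default and is adjustable in regional settings):

```python
def window_two_digit_year(yy):
    """Expand a two-digit year the way the default Windows/Excel
    window does: 00-29 become 2000-2029, 30-99 become 1930-1999."""
    return 2000 + yy if yy <= 29 else 1900 + yy

print(window_two_digit_year(29))  # 2029
print(window_two_digit_year(30))  # 1930
```

So 1/1/30 lands in 1930, exactly the surprise the comment describes; entering four-digit years sidesteps the window entirely.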


Charlie (Colorado)

The answer to your question is "yes."

It was real, in that there were programs written in the '60s and early '70s that didn't consider the century change. (Why? Because it meant saving 2 bytes per record at a time when 20 megabytes of disk storage cost tens of thousands of dollars.)
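The economics of those 2 bytes are easy to check with a back-of-envelope calculation (the record count and fields-per-record below are assumptions for illustration; the $20k-per-20MB price is the comment's own figure):

```python
# Back-of-envelope, with assumed numbers, for what two-digit years saved.
records = 1_000_000               # assumed file size
date_fields_per_record = 3        # assumed: e.g. issued, due, paid dates
bytes_saved = records * date_fields_per_record * 2
cost_per_mb = 20_000 / 20         # $20k for 20 MB, per the comment above
savings = bytes_saved / 1_000_000 * cost_per_mb
print(f"{bytes_saved / 1_000_000:.0f} MB saved, roughly ${savings:,.0f}")
```

Thousands of dollars per master file, every year, was a perfectly rational trade in 1970; it only became a bug when the software outlived the hardware economics.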

It was imagined, in that most of the catastrophic predictions simply didn't make sense: elevators weren't going to fail and stop, because elevators don't actually care whether it's Tuesday or Saturday. And while there might have been exceptions, not all odd behavior is a catastrophic failure.

And it was invented, in part: folks like Ed Yourdon made significant chunks of money blowing the story up, selling books about it and selling consulting services to resolve it.

Quoderat » Remembering the Y2K panic

[...] Steven Levitt (of Freakonomics fame) has started a small controversy by casually mentioning that the Y2K crisis was a false prophecy (his more detailed followup posting is here; he also points to a paper that I didn’t bother reading, but probably does a better job than my posting of going over the issue). [...]


I believe three things: some disasters were averted because of the hype; some programmers were overpaid or did unnecessary work; and some real problems occurred. I was surprised at the time that I couldn't find anyone producing a comprehensive list of the last, but I heard of three major incidents: in the US, a 911 telephone system failed to prioritise calls correctly; in the US, a nuclear power plant suffered a fail-open in its security system; in Japan, a nuclear power plant suffered a control failure in its safety systems. There have, of course, been similar disruptions in the years since Y2K when various other clock systems have rolled over.


Also, because Y2K was so anticipated, and many CEOs (and in turn CIOs) had been made responsible to their boards for the problem, they had an incentive to cover up the failures.

I experienced at least one example: a very big company here in Italy had a problem on 1/1/2000 but did not acknowledge it.

I was their client at the time, receiving a data feed from them, and their data did not make sense unless they had fixed it by hand.


"or maybe happened to overlook some aspect of how Y2K might affect their systems?"

Those things that were overlooked, and there were some, were just quietly and quickly fixed. No one had any interest in advertising their failures.


The title of this entry and the text are at odds. Do you want us to answer
"real, imagined or invented?" or
whether "the hysteria that surrounded Y2K was a false prophecy"?
Y2K was real, and there was also both hysteria and false prophecy.

At the time I was CEO of an electronics manufacturer. Our first reaction was that our products were clear of Y2K issues; we were wrong.

We learned that problems caused by Y2K bugs can be quite subtle. We fixed most and let one ride without disclosure, betting that the failure in about 2016 would come well past the date any of the devices would still be in service AND that the failure at that time would be merely inconvenient.

One of my siblings was a C_O at a large New York investment firm and was in charge of a $20+ billion bond house. Our family was together that weekend and the s__t really hit the fan on Saturday Jan 1. Interest was not being dealt with properly and the dollar amounts were significant. Stress levels were high until Sunday when the fix was made and the database(s) corrected.
Interestingly, I was asked to refrain from discussing the event even in sanitized form. A promise I kept for just over 6 years.

A lot of older computer equipment could not be easily fixed and thus folks bought new stuff.

I am a bit fuzzy on facts below so please correct mistakes but be gentle.

UNIX counts time in seconds from January 1, 1970, and the signed 32-bit number used to store it runs out in January 2038.

Many systems had hard-coded the date as 19xx, where the "19" was a fixed literal and the xx was a counter of years since 1900. So if you see an old website today with a year that reads 19106, you can just smile knowingly.
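Both of those date bugs are small enough to demonstrate in a few lines of Python:

```python
import datetime

# A 32-bit signed time_t overflows 2**31 - 1 seconds after the Unix epoch.
epoch = datetime.datetime(1970, 1, 1)
print(epoch + datetime.timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07

# The "year 19106" bug: C's struct tm stores years since 1900, and buggy
# code prepended a literal "19" instead of adding 1900.
years_since_1900 = 2006 - 1900
print("19" + str(years_since_1900))   # "19106", as in the comment above
print(1900 + years_since_1900)        # 2006, the correct rendering
```

The second bug is the mirror image of Y2K: the fix (store an offset from 1900) was fine, but the display code still assumed the century.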

Here is a subtle one.
Early Global Positioning System equipment had a bug that probably was only fixed by equipment replacement.

GPS runs on its own continuous timescale (GPS time), essentially a counter that just runs forever, like time itself. Earth time (UTC), however, has occasional leap seconds inserted. Since GPS time never jumps, the designers built in a counter for the number of weeks between leap seconds. Unfortunately they did not foresee that there would at one point be a stretch of about four years between leap seconds, and the week counter overflowed. The position error introduced by even one second is enormous.
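The overflow is simple modular arithmetic; the sketch below assumes the 8-bit week field that accounts of this bug describe in the GPS navigation message's leap-second announcement:

```python
# Sketch: an 8-bit week field wraps every 256 weeks (about 4.9 years).
# If more than 256 weeks pass between leap seconds, the stored week is
# ambiguous, and a receiver comparing it naively gets confused.
WEEK_FIELD_BITS = 8                      # assumed field width
gap_weeks = 300                          # hypothetical gap between leap seconds
stored = gap_weeks % 2**WEEK_FIELD_BITS  # what actually fits in the field
print(stored)                            # 44, not 300
```

A receiver that trusts the wrapped value applies the leap second at the wrong time, and, as the comment notes, a one-second error in a system that ranges by signal travel time is catastrophic for position.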

For further research, go find the archives of comp.risks
and look for articles by David Parnas and Peter Neumann.


Orcmid's Lair

What Was Y2K All About?

Levitt gives programmers a bad rap. I think there are three levels to this question: the catastrophe predictions, the situation in computer-based systems and information technology at the time, and the specific technical matter and the role of program...