So what happened? Nothing major, really. A little glitch here and there, but people kept living their normal lives. Why such big hype before, but so quiet afterward? It was because many programmers and engineers paid great attention to the Y2K bug and worked diligently before the millennium arrived to make sure things went okay. I clearly remember where I celebrated the new millennium: next to my servers at work, making sure nothing failed when the clock went past 0:00am on 1/1/00.
So what was the Y2K bug, really? In simple terms, a lot of programs used only two digits to store the year and assumed the first two digits would always be 19. That meant when the last two digits rolled over from 99 to 00, the first two digits stayed the same, resulting in 1900.
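A minimal sketch of that truncation (the variable names here are just for illustration):

```python
# Classic Y2K truncation: store only the last two digits of the year
# and assume the century is always "19".
year = 1999
stored = year % 100                # only "99" is kept

stored = (stored + 1) % 100        # New Year's Eve: 99 rolls over to 00
assumed_year = 1900 + stored       # the program thinks it is 1900, not 2000
print(assumed_year)                # prints 1900
```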
Interestingly enough, when everyone's clocks went from 11:59pm 12/31/2009 to 12:00am 1/1/2010, a bug similar to the Y2K bug hit the world (especially Europe) and caused all kinds of havoc:
- ATMs and point-of-sale machines in Germany rejected the debit cards of some 30 million people starting on New Year's Day
- A similar problem occurred in Australia, where point-of-sale machines skipped ahead to 2016 rather than 2010
- Symantec's software (anti-virus, anti-spam, etc.) treated all new updates from the company as old data and refused to update
- Some mobile phone users also reported receiving messages dated in the future: 2016
- Palm had to release a new version of its operating system so customers' Palm Pre phones would continue to sync and the calendar application would continue to function
A French card manufacturer, Gemalto, admitted that the problem was caused by a programming failure affecting the chips on credit and debit cards. The estimated damage is over €300m.
So how could a software bug turn 2010 into 2016? My guess is that the programmer used a hexadecimal data type instead of a decimal one for the last two digits of the year field. Let me explain.
For decimal numbers, each additional digit position is worth 10 times the one to its right. For example:
10 = 1 x 10 + 0
15 = 1 x 10 + 5
123 = 1 x 10 x 10 + 2 x 10 + 3
For hexadecimal numbers, each digit position is worth 16 times the one to its right. Therefore, the same representations from the previous examples mean very different numbers in decimal:
Hex 10 = Decimal 1 x 16 + 0 = Decimal 16
Hex 15 = Decimal 1 x 16 + 5 = Decimal 21
Hex 123 = Decimal 1 x 16 x 16 + 2 x 16 + 3 = Decimal 291
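The positional rule behind both sets of examples can be written in a few lines (the function name `from_digits` is just for illustration):

```python
def from_digits(digits, base):
    """Evaluate a string of digits in the given base, left to right."""
    value = 0
    for d in digits:
        value = value * base + int(d)  # shift left one position, add the digit
    return value

print(from_digits("123", 10))  # 1 x 10 x 10 + 2 x 10 + 3 = 123
print(from_digits("123", 16))  # 1 x 16 x 16 + 2 x 16 + 3 = 291
```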
If you look at the first example closely, you'll see that the year digits "10" read as hexadecimal turn into year 16 in decimal. This also means the program at fault still used only two digits to represent the year field (probably assuming 20 for the first two digits to save memory), but used the wrong data type.
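Under that theory, the whole bug fits in two lines. This is a hypothetical sketch, assuming the year is stored as the two text digits "10":

```python
year_digits = "10"                     # last two digits of 2010

correct = 2000 + int(year_digits, 10)  # decimal parse: 2000 + 10 = 2010
buggy   = 2000 + int(year_digits, 16)  # hex parse:     2000 + 16 = 2016
print(correct, buggy)                  # prints 2010 2016
```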
The other moral of the story is that as people rely on technology more and more, what consequences do we face if some key technology fails? We can only do so much to anticipate certain failures (such as the Y2K bug), and when failure strikes, we suffer.
Taking it further, what if a failure were caused intentionally by criminals? I don't see them taking over the world easily, but they could certainly cause a lot of damage (disruption of the power grid comes to mind). How can we get ready to deal with this kind of challenge? I guess that is an open question still waiting to be answered...
"Always code as if the person who ends up maintaining your code will be a violent psychopath who knows where you live." -- Martin Golding
Picture of the Day:
Look at all those people stranded at the Austrian ski resort!! :)