People have this funny idea that time measurements should be reliable: 24 hours in a day, 365 days in a year, and so on. Of course the physical universe doesn't work that way: a year is slightly more than 365 days, a day is ever so slightly longer than the number of seconds in 24 hours, so we need to adjust things now and then. A second is defined by a hyperfine transition of the cesium-133 atom, and counting those seconds uniformly gives you TAI time, but if we used that for our clocks, we'd get slippage, so we use UTC time, which introduces leap seconds when needed. This keeps the Sun overhead at high noon, but is more than a small problem for computers.
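To make the TAI/UTC relationship concrete, here's a minimal Python sketch. The table is a small excerpt of the real leap-second history published by IERS; a production system would load the full, current table rather than hardcode a few rows like this.

```python
from datetime import datetime, timezone

# Excerpt of the leap-second table: (effective UTC date, TAI - UTC in seconds).
# A real system would use the complete, up-to-date list (e.g. from tzdata or IERS).
LEAP_TABLE = [
    (datetime(1972, 1, 1, tzinfo=timezone.utc), 10),
    (datetime(1999, 1, 1, tzinfo=timezone.utc), 32),
    (datetime(2009, 1, 1, tzinfo=timezone.utc), 34),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def tai_minus_utc(when):
    """Return TAI - UTC, in seconds, for a given UTC datetime."""
    offset = 0
    for effective, seconds in LEAP_TABLE:
        if when >= effective:
            offset = seconds
    return offset

print(tai_minus_utc(datetime(2011, 3, 12, tzinfo=timezone.utc)))  # 34
```

The point is that the offset is a step function: it only ever grows, and only when a leap second is announced, so you can't compute it; you have to look it up.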
It's not that computers can't set their time just as you set your watch, but programs that rely on doing something at specific intervals will of course be unreliable if time changes in mid-stream. Naturally, that can be upsetting (from https://www.uwsg.iu.edu/hypermail/linux/kernel/9809.1/0219.html ):
.. proposed a similar solution: gettimeofday() will not return during 23:59:60. If a process calls gettimeofday() during a leap second, then the call will sleep until 0:00:00 when it can return the correct result. This horrified the real-time people. It is, however, strictly speaking, completely correct.
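That proposal is easy to sketch. Here's a hedged Python illustration of the semantics, not anyone's actual implementation; `is_leap_second` is a hypothetical predicate that a real system would have to answer from the kernel or an NTP daemon.

```python
import time

def is_leap_second():
    """Hypothetical predicate: true while UTC reads 23:59:60.
    A real implementation would ask the kernel or an NTP daemon;
    this stub always says no, so the sketch is runnable."""
    return False

def gettimeofday_blocking():
    """The proposed semantics: if called during a leap second,
    sleep until 0:00:00, then return a correct timestamp."""
    while is_leap_second():
        time.sleep(0.01)  # wait out 23:59:60
    return time.time()
```

You can see why the real-time people were horrified: a call they expect to return in microseconds could block for up to a full second.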
Unix time is based on seconds since Jan 1 1970 (counted as if leap seconds never happened), and of course every OS has some similar scheme. Microsoft takes its lumps on UTC leap seconds too: DateTime - Not As Simple As You Think by Olav Lerflaten discusses .NET's issues on this subject.
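A quick Python demonstration of what that epoch counting means. Because POSIX time pretends every day has exactly 86,400 seconds, date arithmetic is trivial, and leap seconds simply vanish from the count:

```python
from datetime import datetime, timezone

# Unix time 0 is the epoch: 1970-01-01 00:00:00 UTC.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00

# Every POSIX day is exactly 86400 seconds, so plain arithmetic works --
# any leap seconds that really occurred are nowhere in the count.
one_year = 365 * 86400
print(datetime.fromtimestamp(one_year, tz=timezone.utc).isoformat())
# 1971-01-01T00:00:00+00:00
```

The flip side of that convenience is that when a leap second does occur, the Unix timestamp has to repeat (or be smeared), because there's no representation for 23:59:60.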
There are proposals to fix the problem, such as making NTP timestamps leap-second-neutral (like GPS time), and these of course generate plenty of sometimes heated discussion.
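GPS time shows what leap-second-neutral looks like in practice: it has ticked uniformly since its epoch of 1980-01-06 with no leap seconds inserted, so converting to UTC only needs the single currently published offset. A small sketch, assuming a whole-second input; the 18-second offset has been correct since the leap second at the end of 2016 and will change with the next one.

```python
# GPS time is leap-second-neutral: a uniform count of seconds since
# 1980-01-06 00:00:00 UTC. Converting to UTC/Unix time only requires
# the current GPS-UTC offset, published by IERS.
GPS_MINUS_UTC = 18  # seconds; correct since the 2016-12-31 leap second

def gps_to_unix(gps_seconds):
    """Convert a whole-second count on the GPS scale to a POSIX timestamp.
    315964800 is the Unix time of the GPS epoch (1980-01-06 00:00:00 UTC)."""
    GPS_EPOCH_UNIX = 315_964_800
    return gps_seconds + GPS_EPOCH_UNIX - GPS_MINUS_UTC
```

That single lookup is the appeal of the proposals: the timekeeping scale stays monotonic and the messy leap-second table is pushed out to the display layer.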
Got something to add? Send me email.
More Articles by Tony Lawrence © 2011-03-12
Software engineering is the part of computer science which is too difficult for the computer scientist. (Friedrich Bauer)