16

I've been asking myself this after reading papers saying that around the year 2035 we (on Earth, and our computers along with us) might have to set our time back by one second to keep it in line with astronomical time.

When I read that, my first thought was that this would cause computers a lot of trouble: a clock that goes back and replays a second that has already happened wouldn't be handled well by many programs, starting with operating systems.

But then I realized that we do something similar every year: we go back one hour when switching to winter time. I don't know if it's the same in every country, but at 3 o'clock in the morning we decide that it's 2 o'clock.


How does a Linux operating system manage timestamped operations when this hour change happens and the timestamp goes from 2024-10-27 02:59:59.999999 to 2024-10-27 02:00:00.000000? Every message ordered by time would be thrown off then, for example.

But maybe this isn't what actually happens, and going from 2024-10-27 02:59:59.999999 to 2024-10-27 02:00:00.000000 is still, in terms of the system timestamp, going from 1730069999 to 1730070000 (+1 second)?

And in that case, would the problem of removing one second from our time in 2035 be different from going back one hour like we do each year? For example, it could cause us to assign: 2019682799 = 2034-12-31 23:59:59, 2019682800 = 2035-01-01 00:00:00, 2019682801 = 2035-01-01 00:00:00 (again), 2019682802 = 2035-01-01 00:00:01

But the problems here are that:

  1. Computers with a corrected OS will meet computers without the correction (an OS too old or not updated), which believe that 2019682801 = 2035-01-01 00:00:01 and 2019682802 = 2035-01-01 00:00:02 when that's not the case. But this is the same problem as having OSes around that don't know about summer and winter time changes.

  2. I guess that there aren't many examples of code like this around the world:

    timestamp = System.currentTimeMillis()   // milliseconds since the Unix epoch
    millis = timestamp % 1000                 // millisecond part
    secondsSince1970 = timestamp / 1000       // whole seconds since the epoch
    seconds = secondsSince1970 % 60
    minutesSince1970 = ...
    

    because here, the seconds variable becomes wrong in 2035 if the second removal is applied, as 2019682802 should lead to 01 and not to 02.

  • I personally understood a bit more reading the original publication in Nature ( https://www.nature.com/articles/s41586-024-07170-0.epdf?sharing_token=ledYArCOT7o5zGgzgXyhCtRgN0jAjWel9jnR3ZoTv0M8eI6W1yLpWHEpE-RIkTCyYE2WIbRIkp2z3i1LfYhEU9EWU1gdRL5O7s1BqW5HWVLDQHxIKU0i0a9yYdQDVywZivyzp-pfPCjzj1PnIYVe88YvGPhe7RUqWCe4hxbBpvvWc5jwXEMFZ9icyNpq5EuaGbubewRYGzV3kkEUmXOrg0Rp7URjtxxRNd5Ovo6HgsU%3D&tracking_referrer=www.scientificamerican.com ) which I found, explains well why this would have actually caused troubles… in the past. – MC68020 Mar 31 '24 at 17:09

4 Answers

43

In Linux, the operating system maintains a clock that runs fundamentally in UTC time, which does not have Daylight Saving time shifts.

The (usually one-hour) Daylight Saving Time shift is handled not by changing the clock, but by changing the UTC offset applied to it when displaying local time.

As a result, in a Central European timezone for example, the timestamp

2024-10-27 02:59:59.999999 UTC+2  =  2024-10-27 00:59:59.999999 UTC = 1729990799

will be followed by timestamp

2024-10-27 02:00:00.000000 UTC+1  =  2024-10-27 01:00:00.000000 UTC = 1729990800

(Note that I'm not using POSIX timezone specifiers here: since the systems the POSIX specifications were based on were developed mostly in America, and they reserved positive-integer timezone specifiers for themselves, the sign of a POSIX timezone offset is inverted from what you might expect based on the general understanding of UTC offsets.)

This is why, if a program must ever store timestamps in local-time format, it should always include some UTC offset identifier together with the timestamp. If a program needs to avoid human-scale ambiguities at the end of Daylight Saving Time, it should always store timestamps internally in some UTC-equivalent format.
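
As a quick check on a Linux system with GNU date (the exact output format can vary with locale and coreutils version):

$ TZ=Europe/Paris date --date=@1729990799
Sun Oct 27 02:59:59 CEST 2024
$ TZ=Europe/Paris date --date=@1729990800
Sun Oct 27 02:00:00 CET 2024
$ TZ=Europe/Paris date --iso-8601=seconds --date=@1729990800
2024-10-27T02:00:00+01:00

The epoch counter simply keeps incrementing; only the offset and abbreviation used for display change. The ISO 8601 form in the last command shows one way to carry the UTC offset along with a local-time timestamp.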


"Moving UTC time back one second" is equivalent to inserting one extra second in the timescale. This is not a new thing, and the UTC time standard already has a standard way to do it: leap seconds. At the end of every June and December (UTC), there is an opportunity to insert a leap second, so that e.g.

xxxx-12-31 23:59:59 UTC

will be followed by

xxxx-12-31 23:59:60 UTC

and then by

(xxxx+1)-01-01 00:00:00 UTC

Whether or not this is actually done depends on measured irregularities in Earth's rotation, as decided by the IERS (International Earth Rotation and Reference Systems Service).

The last time a leap second was inserted was at the end of year 2016:

https://hpiers.obspm.fr/eoppc/bul/bulc/UTC-TAI.history

Currently, there is no leap second insertion planned for the end of June 2024. The authoritative source for the next leap second insertion is IERS Bulletin C:

https://datacenter.iers.org/data/latestVersion/bulletinC.txt

The NTP time synchronization protocol has a leap second announcement feature to cover this. The Linux date command will dutifully display a 23:59:60 UTC timestamp at the appropriate time to demonstrate that the OS is aware of what's happening, but obviously this means not all Unix timestamp seconds are equal in length: at leap second insertion, most OSs consider the Unix time second to be stretched to the length of two seconds.
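
With a standard (non-right/) timezone database you can also see that the ordinary POSIX timescale reserves no epoch value for the inserted second; around the 2016 leap second the conversions look like this (GNU date, output format may vary slightly):

$ date -u --date='2017-01-01 00:00:00' +%s
1483228800
$ date -u --date=@1483228799
Sat Dec 31 23:59:59 UTC 2016
$ date -u --date=@1483228800
Sun Jan  1 00:00:00 UTC 2017

There is no Unix timestamp in between that maps to 23:59:60; the inserted second has to be absorbed by stretching (or repeating) an existing one, as described above.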

(NTP also has a facility for a "negative leap second", but so far, this has never been required in practice, and is not expected to be needed in foreseeable future.)

A newer alternative pragmatic solution is leap smearing: the extra second is handled by slowing the system clock so that the extra second will be accounted for within a day or so. This is based on the idea that the uniformity of the length of each second is more important than the absolute accuracy of the timestamp at the +/- 1 second range. It can be a valid solution for most "general purpose" uses of time.

The leap second is obviously an issue for those that need sub-second timing accuracy at all times, or an exactly accurate count of seconds over all timespans longer than half a year or so. However, it turns out that people who need this kind of accuracy are mostly already aware of the fact and are already dealing with it.

If you need such high-precision timekeeping in a Linux/Unix system, you could set up a local time synchronization facility (e.g. a modified NTP server) that distributes TAI (International Atomic Time) instead of UTC. Then you could have your system clocks run in TAI instead of UTC, and use the right/ variants of the timezones in the IANA/Olson timezone database (i.e. right/Europe/Paris instead of just Europe/Paris: these will include leap seconds).
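
A quick way to see the difference between the two conventions on an ordinary system (assuming your distribution ships the right/ zones; on some, they live in a separate tzdata package):

$ date -u
$ TZ=right/UTC date

If your system clock is the usual POSIX/UTC-based one, the second command should display a time 27 seconds behind the first (the number of leap seconds inserted so far), because the right/ zones expect the counter to already include the leap seconds. This is also why the clock and the timezone data must follow the same convention.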

ilkkachu
  • 138,973
telcoM
  • 96,466
  • 3
    @ilkkachu I assume this means what's prioritised is the uniformity of the length of each "adjacent" second - in the end there will still be two instances of seconds differing in "duration" (at the start and end of the smearing period), but that difference (1:1+1/86400) will be much smaller than the 1:2 ratio around the leap second (where a specific second lasts twice as long). – N1ark Mar 31 '24 at 20:48
  • 9
    @ilkkachu eg. if you set a 5 second timer, you wouldn't want it to last 6 seconds, whereas it lasting ~5.00005787 seconds is probably acceptable – N1ark Mar 31 '24 at 20:50
12

Lots of good answers here already, but I didn't see a mention of a far more prosaic fact: this kind of second-shifting happens all the time already on your computer. Sometimes the shift is even more than just one second - and it's OK.

You see, the hardware clocks in your regular PC/server are notoriously imprecise. Without frequent clock synchronization they will drift by entire minutes per year. Source: I've seen/had to deal with it on more than one occasion, both on home PCs and servers.

I don't know if it's too difficult/expensive to make a clock that would be stable all year round, but my guess is that NTP is such a simple and efficient solution that there's just no demand for more precise hardware clocks. At least not under normal circumstances.

And - as you might have noticed - all of our computers are completely fine with this. In fact, it's no different than when you change the clock on your computer manually. By the way - this is another operation which can mess with the clock wildly, yet never causes any problems.

The reality is that most programs just don't care about the clock moving around. If they need a stable monotonic clock (like a computer game calculating what to draw in a frame) there are separate functions in the OS for that. They don't return a timestamp but rather "nanoseconds since the computer booted" or something like that. Perfect for measuring elapsed time between two events.
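
On Linux you can peek at one such boot-relative counter directly; the first field of /proc/uptime is seconds since boot, the second is aggregate idle time across all CPUs:

$ cat /proc/uptime

Programs would typically use clock_gettime(CLOCK_MONOTONIC) for this instead, but the idea is the same: a counter that only moves forward and doesn't care what anyone does to the wall clock.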

Timestamps are usually used for something like logging or other places where precision isn't that big of a deal. It's very rare that you need an actual millisecond-precise timestamp all the time.

Added: Just remembered another scenario with wild clock jumps: when your computer goes into sleep or hibernate mode and wakes up afterwards. Now this actually is an operation that some applications are unable to cope with, but even then it's fairly rare. And obviously the OS is fine.

Added 2: TL;DR - the support for leap seconds at OS/application level is irrelevant. It's support for clock synchronization that matters. As long as the computer can successfully synchronize its clock to some external time source, it will pick up the leap seconds as a matter of course and won't even notice anything out of the ordinary. And clock synchronization today is bog standard on all devices and enabled by default.
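
On a typical systemd-based distribution you can confirm this at a glance (field wording differs a bit between versions):

$ timedatectl

Look for "System clock synchronized: yes" and "NTP service: active" in the output.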

Added 3: I'm not saying that this will NEVER cause any problems. Obviously this can be an issue under the right circumstances. It's just that in practice it's very rare.

Vilx-
  • 497
  • If you have a cellular radio or GPS receiver connected to your computer, that can be used instead of NTP for more precise synchronisation. – Jasen Mar 31 '24 at 22:50
  • 2
    @Jasen - True. Though it wouldn't be any different from the OS/app point of view - just a periodic sync to an external clock. – Vilx- Mar 31 '24 at 23:20
6

I'm not adding anything new to the other answers, but I'll try to be more concise and clear.

Timezone

The actual hardware clock of the computer shows the time (in seconds) since the Epoch (1970-01-01 00:00:00 UTC). The system itself, through glibc functions (such as strftime(3)), knows how to convert it to a human-readable time in a specific timezone.

For instance, for the US/Pacific timezone, you can check when the UTC offset changes in 2024 using the zdump command:

$ zdump -V -c 2024,2025 US/Pacific 
US/Pacific  Sun Mar 10 09:59:59 2024 UT = Sun Mar 10 01:59:59 2024 PST isdst=0 gmtoff=-28800
US/Pacific  Sun Mar 10 10:00:00 2024 UT = Sun Mar 10 03:00:00 2024 PDT isdst=1 gmtoff=-25200
US/Pacific  Sun Nov  3 08:59:59 2024 UT = Sun Nov  3 01:59:59 2024 PDT isdst=1 gmtoff=-25200
US/Pacific  Sun Nov  3 09:00:00 2024 UT = Sun Nov  3 01:00:00 2024 PST isdst=0 gmtoff=-28800

So, if you want to convert this to seconds since the Epoch, you can use the following command:

$ date --date='Sun Mar 10 01:59:59 PST 2024' +%s
1710064799

So at this specific time in PST, 1710064799 seconds have passed since the Epoch (1970-01-01 00:00:00 UTC).

Now, if you check the US/Pacific time at this exact second, you'll see:

$ TZ=US/Pacific date --date='@1710064799'
Sun Mar 10 01:59:59 PST 2024

This is still PST (Pacific Standard Time). But if you add just one second:

$ TZ=US/Pacific date --date='@1710064800'
Sun Mar 10 03:00:00 PDT 2024

You can see it "jumped" one hour: local time goes straight from 01:59:59 to 03:00:00 (the 02:00 hour never happens), and the time zone switched from PST (Pacific Standard Time) to PDT (Pacific Daylight Time). The seconds in the hardware clock keep running the same way; only the human representation (which depends on your specific time zone) changed.

Leap second

How does your system get the right time in the first place? It uses NTP (the Network Time Protocol) to poll the correct time from time servers (often it's your router). It also uses various algorithms to sync the time when there's a difference between the local hardware clock and the time polled from the time server. It's then the job of the NTP servers to add or remove the leap second. There are different approaches for that.
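
On the Linux client side you can ask the time daemon whether a leap second has been announced. For example, with chrony (assuming chronyd is the NTP client in use; ntpd's ntpq -c rv exposes a similar leap indicator):

$ chronyc tracking | grep 'Leap status'
Leap status     : Normal

The status switches to "Insert second" or "Delete second" once the upstream servers announce a pending leap.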

For instance, in Cisco routers, the leap second gets added to or deleted from the last second of the relevant month.

vl-7500-6#show clock
23:59:59.123 UTC Sun Dec 31 2006
vl-7500-6#show clock
23:59:59.627 UTC Sun Dec 31 2006
<< 59th second occurring twice
vl-7500-6#show clock
23:59:59.131 UTC Sun Dec 31 2006    
vl-7500-6#show clock
23:59:59.627 UTC Sun Dec 31 2006

Google uses a Leap Smear approach: over a 24-hour window around the leap second, every second runs slightly slower or faster, until the added/removed leap second has been completely absorbed by the end of those 24 hours.

In this example, we will suppose there is a leap second at the end of December 2022, although the actual schedule has not yet been announced.

The smear period starts at 2022-12-31 12:00:00 UTC and continues through 2023-01-01 12:00:00 UTC. Before and after this period, smeared clocks and time service agree with clocks that apply leap seconds.

During the smear, clocks run slightly slower than usual. Each second of time in the smeared timescale is about 11.6 μs longer than an SI second as realized in Terrestrial Time.

[...]

Over the 86,401 SI seconds of the smear, the stretch in the 86,400 indicated seconds adds up to the one additional SI second required by the leap.
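
As a quick sanity check of the quoted figure:

$ awk 'BEGIN { printf "%.1f microseconds\n", (86401/86400 - 1) * 1e6 }'
11.6 microseconds

Stretching 86,401 SI seconds over 86,400 indicated seconds makes each indicated second about 11.6 μs too long, and over the whole day that adds up to exactly the one extra leap second.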

aviro
  • 5,532
5

Timezones, daylight saving time, and leap seconds are only a human representation of time. Computers generally track time as elapsed time since a fixed reference such as the Unix epoch; this allows monotonic clocks to be made available for processes which don't want to deal with such variations.

Changing to and from daylight saving time implies a change in the current timezone, so even human-readable timestamps are fine (as long as the timezone is represented in the output).

Things get more complex with human-generated timestamps, e.g. for job scheduling. Administrators generally schedule jobs based on local time, not a fixed timezone. There are a few general rules for administrators which help avoid problems:

  • avoid scheduling jobs during times that may be repeated or skipped (in Europe, 2-3am)
  • make jobs idempotent, and/or aware of their previous execution
  • if a job runs too soon after its previous run, skip it (e.g. a daily job which last ran less than 23h ago; see the sketch after this list)
  • make jobs take care of all the work accumulated since the previous run (if an hourly job runs at 1:15, and the 2:15 run is skipped because 2:15 doesn't happen, the 3:15 run must use 1:15 in the previous timezone as its reference)
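
As an illustration of the third rule, here is a minimal sketch of a guard a "daily" job could start with (the stamp-file path and the 23-hour threshold are made up for the example):

#!/bin/sh
# Skip this run if the previous run happened less than 23 hours ago
# (e.g. because the 02:00-03:00 hour was replayed at the end of DST).
stamp=/var/tmp/nightly-job.stamp
now=$(date +%s)
if [ -e "$stamp" ]; then
    last=$(date -r "$stamp" +%s)    # mtime of the stamp file, as epoch seconds
    if [ $(( now - last )) -lt $(( 23 * 3600 )) ]; then
        exit 0                      # too soon since the previous run; skip
    fi
fi
# ... the real work goes here ...
touch "$stamp"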

Another approach is to skew time: starting far enough in advance, time is slowed down or accelerated, in such a way that any given instant (in the desired frame of reference) occurs once and never repeats, but at some point the skewed time is aligned with the external reference again. Google does this for leap seconds (see “leap smearing” in telcoM’s answer), but I’ve seen it done more generally, if only conceptually. When skipping ahead (moving to daylight savings time), if a scheduler handles jobs defined in local time, and sees that from one minute to the next, an hour was lost, it can run all the jobs that would have been started during that hour.

Stephen Kitt
  • 434,908
  • And I'm not sure I understand the last part about the skewed time, time being slowed down or accelerated and being aligned with some "external reference". What kind of reference? How is it being aligned? I mean, I can understand if we're talking about a leap month, for instance, but how can you skew it for a second? Or for an hour (especially if the hour is "compensated" every 6 months anyway)? – aviro Mar 31 '24 at 08:17
  • The only practical example I could think of is a leap year, where if, for instance, you want to run a job once a year, you'll run it every 365 days + 24/4 hours (to add the extra day every 4 years). Is that what you mean? – aviro Mar 31 '24 at 08:34
  • Do you mean that we will assign: 2019682799 = 2034-12-31 23:59:59, 2019682800 = 2035-01-01 00:00:00, 2019682801 = 2035-01-01 00:00:00 too and 2019682802 = 2035-01-01 00:00:01? – Marc Le Bihan Mar 31 '24 at 09:32
  • @aviro stretching time is what Google invented to deal with leap seconds, since POSIX time normally has issues with it. It's called leap smearing (and there are likely numerous other sources discussing it too). I think it's also often used to adjust if the system time is off, but I don't think it's really relevant for daylight saving. Except for the xkcd joke: https://xkcd.com/2266/ – ilkkachu Mar 31 '24 at 10:09
  • 2
    @MarcLeBihan, if you're talking about seconds since the epoch as POSIX defines it, then no, see https://unix.stackexchange.com/a/758951/170373 and esp. the sources linked there. – ilkkachu Mar 31 '24 at 10:19
  • 1
    Thanks! But if the formula tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 + (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 - ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400 mentioned in the post cannot become false, how will you lose a second? – Marc Le Bihan Mar 31 '24 at 10:25
  • @MarcLeBihan, note how the equation makes days exactly 86400 seconds long, regardless of anything. When you go from 23:59:59 to the leap second 23:59:60, tm_sec and the time increases by one. Then, when you go to 00:00:00 on the following day, tm_sec + tm_min*60 + tm_hour*3600 decreases by 23*3600 + 59*60 + 60 = 86400, and tm_yday*86400 increases by 86400, so the total count stays the same. The count of 2019682801 corresponds to 2035-01-01 00:00:01 and no leap second comes into it. – ilkkachu Mar 31 '24 at 18:37
  • 1
    @ilkkachu then the problem would only be to switch computers from normal working mode to a mode where they'll have to consider one specific minute being longer in "physical time" than the other ones? What's the Linux command to provoke this behavior (making a minute last 61 seconds)? I have a watch; I would enjoy checking this. – Marc Le Bihan Mar 31 '24 at 18:53