22

Disclaimerish thingy:

I just went through the list of StackExchange sites for about 20 minutes trying to figure out where to post this. If you know of a more suitable site, please move this question there. I'm posting it here because Unix time got me thinking.


So as we all know, there is Unix time and there is UTC. Unix time just keeps on ticking, counting seconds (one second per second), whereas UTC tries to keep the time in the human-readable formats we use aligned with the Earth's phase in its rotation. To do this, UTC inserts leap seconds from time to time.

Since time is relative to the gravitational force the object experiencing it is subjected to, to other kinds of acceleration, and to relative speed, this leads to two questions. Let's get the simple one out of the way first: Where is Unix time measured? If Alice and Bob start out agreeing that the current time is 1467932496.42732894722748 while they are at the same place (a second of course being defined as 9'192'631'770 cycles of the radiation corresponding to the transition between two energy levels of the caesium-133 atom at rest and at 0 K), and then experience a twin paradox because Alice lives at sea level and Bob lives high up in the mountains, or Alice lives at the north pole and Bob lives at the equator, they won't agree any more. So how is Unix time defined precisely?

You might not see the problem with UTC at first, because surely everyone can agree on when the Earth has completed a rotation, no matter whether they are on a mountain, at sea level, at the equator, or at the north pole. (This ignores continental plate movement, but I think we have that figured out pretty well: GPS makes it possible to measure plate movement very precisely, so we can assume fixed positions in our model that do not move as the plates shift.) There might be some time differences, but they don't accumulate.

But a second is defined as 9'192'631'770 cycles of the radiation corresponding to the transition between two energy levels of the caesium-133 atom at rest and at 0 K, and caesium-133 atoms don't care about the Earth's rotation. So UTC decides when to insert a leap second, but there has to be a measured or predicted shift between the phase of the Earth's rotation and the time measured somewhere by an atomic clock. Where is that somewhere?

UTF-8
  • 3,237
  • 6
    "Unix time just keeps on ticking, counting seconds – one second per second" — actually, it doesn't. Things would be simpler if it did. – hobbs Jul 08 '16 at 05:16
  • 3
    The question I think you meant to ask would have been on topic on [physics.SE] - but it's a question about time standards, like UTC, and has nothing to do with UNIX time. See also this and this and other related questions. – David Z Jul 08 '16 at 07:53
  • 7
    I'm voting to close this question as off-topic because it is about physics, politics, and temporal standards, but not Unix. – Michael Homer Jul 08 '16 at 08:27
  • 3
    There is probably an on-topic question somewhere in this area that you could have asked, but I don't think this is it. It just has "... and how about Unix?" tossed in occasionally to an unrelated question, as the answers illustrate. – Michael Homer Jul 08 '16 at 08:29
  • @hobbs Thank you! I was under the impression it did because I was told so and quite liked the idea. I thought date functions were updated from time to time to adjust for changes in other time systems and that there is a functional relation between unix time and UTC but not the other way around. Apparently there isn't one in either direction. – UTF-8 Jul 08 '16 at 15:35

3 Answers

33

Your headline question doesn't have a real answer; Unix time isn't a real timescale, and isn't "measured" anywhere. It's a representation of UTC, albeit a poor one because there are moments in UTC that it can't represent. Unix time insists on there being 86,400 seconds in every day, but UTC deviates from that due to leap seconds.

As to your broader question, there are four important timescales of interest:

  1. UT1 (Universal Time), which is calculated by observatories around the world that measure the rotation of the Earth with respect to the fixed stars. With these observations and a little math, we get a more modern version of the old Greenwich Mean Time, which was based on the moment of solar noon at the Royal Observatory in Greenwich. Universal Time is calculated by an organization called the IERS (the International Earth Rotation and Reference Systems Service, formerly the International Earth Rotation Service).

  2. TAI (International Atomic Time), which is kept by hundreds of atomic clocks around the world, maintained by national standards bodies and such. The keepers of the clocks that contribute to TAI use time transfer techniques to steer their clocks towards each other, canceling out any small errors of individual clocks and creating an ensemble time; that ensemble is TAI, published by the International Bureau of Weights and Measures (BIPM), the stewards of the SI system of units. To answer your question about time dilation, TAI is defined to be atomic time at sea level (actually, at the geoid, which is a fancier version of the same idea), and each clock corrects for the effects of its own altitude.

  3. UTC (Coordinated Universal Time), which was set equal to ten seconds behind TAI on 1 January 1972, and since that date it has ticked forwards at exactly the same rate as TAI, except when a leap second is added or subtracted. The IERS makes the decision to announce a leap second in order to keep the difference between UTC and UT1 within 0.9 seconds (in practice, within about 0.6 seconds; an added leap second causes the difference to go from −0.6 to +0.4). In theory, leap seconds can be both positive and negative, but because the rotation of the Earth is slowing down compared to the standard established by SI and TAI, a negative leap second has never been necessary and probably never will be.

  4. Unix time, which does its best to represent UTC as a single number. Every Unix time that is a multiple of 86,400 corresponds to midnight UTC. Since not all UTC days are 86,400 seconds long, but all "Unix days" are, there is an irreconcilable difference that has to be patched over somehow. There's no Unix time corresponding to an added leap second. In practice, systems will either act as though the previous second occurred twice (with the Unix timestamp jumping backwards one second, then proceeding forward again), or apply a technique like leap smearing that warps time for a longer period of time on either side of a leap second. In either case there's some inaccuracy, although at least the second one is monotonic. In both cases, the amount of time that passes between two distant Unix timestamps a and b isn't equal to b − a; it's equal to b − a plus the number of intervening leap seconds.
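
To make that last point concrete, here is a minimal C sketch of recovering the real elapsed time between two Unix timestamps. The helper leap_seconds_between() and its two hard-coded entries are purely illustrative; real software would consult the full IERS leap-second list.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical helper: counts the positive leap seconds inserted between
     * two instants. The two entries below (end of June 2015 and the leap
     * second scheduled for the end of 2016) are only examples. */
    static int leap_seconds_between(time_t a, time_t b)
    {
        static const time_t leaps[] = { 1435708800, 1483228800 };
        int n = 0;
        for (size_t i = 0; i < sizeof leaps / sizeof leaps[0]; i++)
            if (leaps[i] > a && leaps[i] <= b)
                n++;
        return n;
    }

    int main(void)
    {
        time_t a = 1420070400;   /* 2015-01-01 00:00:00 UTC */
        time_t b = 1467932496;   /* 2016-07-07 22:28:16 UTC, the timestamp from the question */

        double naive   = difftime(b, a);                      /* b - a */
        double elapsed = naive + leap_seconds_between(a, b);  /* real SI seconds */

        printf("naive: %.0f s, actual elapsed: %.0f s\n", naive, elapsed);
        return 0;
    }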

Since UT1, TAI, UTC, and the IERS are all worldwide, multinational efforts, there is no single "where", although the IERS bulletins are published from the Paris Observatory and the BIPM is also based in Paris, so that is one possible answer. An organization that requires precise, traceable time might state its timebase as something like "UTC(USNO)", meaning that its timestamps are in UTC as derived from the time kept at the US Naval Observatory. Given the problems I mentioned with Unix time, though, it is basically incompatible with that level of precision; anyone dealing with really precise time will have an alternative to Unix time.

hobbs
  • 898
  • 6
  • 11
  • 1
    You've overlooked the existence of the right/ timezones in the Olson system, and how they regard time_t. – JdeBP Jul 08 '16 at 05:25
  • 1
    @JdeBP I actually hadn't heard of that. I think it's a bit dubious to call that Unix time when it clearly goes against both POSIX and long-standing convention, but it's valuable information anyway. Perhaps you can add an answer about it? – hobbs Jul 08 '16 at 05:30
  • 1
    The easiest way for common people to get a highly accurate time source is a GPS receiver. The clocks on the satellites are synchronised to TAI and the signal is accurate to about 10⁻⁸ s (without corrections; with corrections it can be improved to 10⁻¹⁰). – Jan Hudec Jul 08 '16 at 08:05
  • Hmm, but date and xdaliclock clearly are aware of leap-seconds. Do they get this information through NTP? – gerrit Jul 08 '16 at 09:14
  • 1
    @JanHudec It's not like ordinary people can tell the difference between a clock accurate to 10⁻² or 10⁻¹⁰. – gerrit Jul 08 '16 at 09:17
  • 1
    Just a hint on why UNIX does not include leap second support. This has been discussed many times in the Austin Group teleconference and the result was that adding support for leap seconds would cause more problems than omitting support would cause. – schily Jul 08 '16 at 09:29
  • It's quite evident that there are several locations at which TAI is measured. However, I guess the time they agree on isn't just some statistical mean but has some physics background. The theory of relativity works pretty well, and given a precise time span it should be easy to tell how much time passed at any location on the (idealized) planet. So a deviation between a clock at the equator and one at the north pole is quite expected. Is there any location for which the TAI guys try to make time spans work out more precisely according to the definition of the second than for other locations? – UTF-8 Jul 08 '16 at 15:48
  • @UTF-8 they're corrected for the geoid, which is what you get if you take the idea of "mean sea level" and define it as a surface of constant gravitational potential, instead of a sphere of constant distance from the center of the earth. So again, it's not a single location. – hobbs Jul 08 '16 at 16:15
  • I read that you wrote that but you also have to consider that we live in an accelerated system. If you're close to the equator, you travel faster than if you're farther away from it (and experience more centrifugal force). – UTF-8 Jul 08 '16 at 16:18
  • @UTF-8 the geoid already accounts for that as well. – hobbs Jul 08 '16 at 16:26
15

UNIX time is measured on your computer, running UNIX.

This answer is going to expect you to know what Coördinated Universal Time (UTC), International Atomic Time (TAI), and the SI second are. Explaining them is well beyond the scope of Unix and Linux Stack Exchange. This is neither the Physics nor the Astronomy Stack Exchange.

The hardware

Your computer contains various oscillators that drive clocks and timers. Exactly what it has varies from computer to computer depending on its architecture. But usually, and in very general terms:

  • There is a programmable interval timer (PIT) somewhere, that can be programmed to count a given number of oscillations and trigger an interrupt to the central processing unit.
  • There is a cycle counter on the central processor that simply counts 1 for each instruction cycle that is executed.

The theory of operation, in very broad terms

The operating system kernel makes use of the PIT to generate ticks. It sets up the PIT to free-run, counting the right number of oscillations for a time interval of, say, one hundredth of a second, generating an interrupt, and then automatically resetting the count to go again. There are variations on this, but in essence this causes a tick interrupt to be raised with a fixed frequency.

In software, the kernel increments a counter every tick. It knows the tick frequency, because it programmed the PIT in the first place. So it knows how many ticks make up a second. It can use this to know when to increment a counter that counts seconds. This latter is the kernel's idea of "UNIX Time". It does, indeed, simply count upwards at the rate of one per SI second if left to its own devices.
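
As a toy illustration of that bookkeeping, here is a C sketch of a tick handler, assuming a PIT programmed for HZ interrupts per second. All of the names (HZ, ticks, unix_seconds, tick_interrupt) are made up for this sketch; a real kernel is far more elaborate.

    #include <stdio.h>

    #define HZ 100                        /* ticks per second, as programmed into the PIT */

    static unsigned long ticks;           /* incremented on every PIT interrupt */
    static long long unix_seconds = 1467932496;   /* the kernel's idea of "UNIX Time" */

    static void tick_interrupt(void)      /* what the tick interrupt handler would do */
    {
        ticks++;
        if (ticks % HZ == 0)              /* HZ ticks make up one second */
            unix_seconds++;
    }

    int main(void)
    {
        for (int i = 0; i < 3 * HZ; i++)  /* simulate three seconds' worth of ticks */
            tick_interrupt();
        printf("unix_seconds advanced to %lld\n", unix_seconds);
        return 0;
    }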

Four things complicate this, which I am going to present in very general terms.

Hardware isn't perfect. A PIT whose data sheet says that it has an oscillator frequency of N Hertz might instead have a frequency of (say) N.00002 Hertz, with the obvious consequences.

This scheme interoperates very poorly with power management, because the CPU is waking up hundreds of times per second to do little more than increment a number in a variable. So some operating systems have what are known as "tickless" designs. Instead of making the PIT send an interrupt for every tick, the kernel works out (from the low level scheduler) how many ticks are going to go by with no thread quanta running out, and programs the PIT to count for that many ticks into the future before issuing a tick interrupt. It knows that it then has to record the passage of N ticks at the next tick interrupt, instead of 1 tick.

Application software has the ability to change the kernel's current time. It can step the value or it can slew the value. Slewing involves adjusting the number of ticks that have to go by to increment the seconds counter. So the seconds counter does not necessarily count at the rate of one per SI second anyway, even assuming perfect oscillators. Stepping involves simply writing a new number in the seconds counter, which isn't usually going to happen until 1 SI second since the last second ticked over.
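
For what that looks like from userland, here is a minimal sketch using the traditional BSD-style interfaces that Linux and the BSDs provide. Both calls require privilege, and the particular adjustments shown are purely illustrative.

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval now, delta = { .tv_sec = 0, .tv_usec = 500000 };

        /* Slewing: ask the kernel to absorb a 0.5 s correction gradually,
         * by running the software clock slightly fast or slow. */
        if (adjtime(&delta, NULL) != 0)
            perror("adjtime");

        /* Stepping: write a new value into the clock outright. */
        if (gettimeofday(&now, NULL) == 0) {
            now.tv_sec += 1;              /* jump one second ahead, as an example */
            if (settimeofday(&now, NULL) != 0)
                perror("settimeofday");
        }
        return 0;
    }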

Modern kernels not only count seconds but also count nanoseconds. But it is ridiculous and often outright unfeasible to have a once-per-nanosecond tick interrupt. This is where things like the cycle counter come into play. The kernel remembers the cycle counter value at each second (or at each tick) and can work out, from the current value of the counter when something wants to know the time in nanoseconds, how many nanoseconds must have elapsed since the last second (or tick). Again, though, power and thermal management plays havoc with this as the instruction cycle frequency can change, so kernels do things like rely on additional hardware like (say) a High Precision Event Timer (HPET).
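
Here is a sketch of that interpolation, assuming an x86 machine with GCC or Clang (for __rdtsc()) and a fixed, known cycle frequency, which is exactly the assumption that power and thermal management break. The names and the 3 GHz figure are illustrative only.

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc(); x86-specific */

    static uint64_t cycles_at_last_tick;                        /* recorded at each tick */
    static const uint64_t cycles_per_second = 3000000000ull;    /* a nominal 3 GHz part */

    static uint64_t nanoseconds_since_last_tick(void)
    {
        uint64_t elapsed = __rdtsc() - cycles_at_last_tick;
        return elapsed * 1000000000ull / cycles_per_second;
    }

    int main(void)
    {
        cycles_at_last_tick = __rdtsc();    /* pretend a tick has just happened */
        /* ... some work happens here ... */
        printf("about %llu ns since the last tick\n",
               (unsigned long long)nanoseconds_since_last_tick());
        return 0;
    }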

The C language and POSIX

The Standard library of the C language describes time in terms of an opaque type, time_t, a structure type tm with various specified fields, and various library functions like time(), mktime(), and localtime().

In brief: the C language itself merely guarantees that time_t is one of the available numeric data types and that the only reliable way to calculate time differences is the difftime() function. It is the POSIX standard that provides the stricter guarantees that time_t is in fact one of the integer types and that it counts seconds since the Epoch. It is also the POSIX standard that specifies the timespec structure type.
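
A minimal example of that portable view, using only what the C standard guarantees plus the POSIX reading of time() as seconds since the Epoch:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);            /* "seconds since the Epoch" under POSIX */
        struct tm broken = *gmtime(&start);   /* broken-down time, nominally UTC */

        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               broken.tm_year + 1900, broken.tm_mon + 1, broken.tm_mday,
               broken.tm_hour, broken.tm_min, broken.tm_sec);

        time_t later = time(NULL);
        /* difftime() is the only portable way to subtract two time_t values. */
        printf("elapsed: %.0f seconds\n", difftime(later, start));
        return 0;
    }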

The time() function is sometimes described as a system call. In fact, it hasn't been the underlying system call on many systems for quite a long time, nowadays. On FreeBSD, for example, the underlying system call is clock_gettime(), which has various "clocks" available that measure in seconds or seconds+nanoseconds in various ways. It is this system call by which applications software reads UNIX Time from the kernel. (A matching clock_settime() system call allows them to step it and an adjtime() system call allows them to slew it.)
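
Reading the clocks through that system call looks like this; CLOCK_REALTIME is the UNIX Time clock, and CLOCK_MONOTONIC is a clock that is never stepped, which makes it the better choice for measuring intervals. (On some older systems this needs -lrt at link time.)

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec rt, mono;

        if (clock_gettime(CLOCK_REALTIME, &rt) == 0)
            printf("CLOCK_REALTIME:  %lld.%09ld\n", (long long)rt.tv_sec, rt.tv_nsec);

        if (clock_gettime(CLOCK_MONOTONIC, &mono) == 0)
            printf("CLOCK_MONOTONIC: %lld.%09ld\n", (long long)mono.tv_sec, mono.tv_nsec);

        return 0;
    }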

Many people wave the POSIX standard around with very definite and exact claims about what it prescribes. Such people have, more often than not, not actually read the POSIX standard. As its rationale sets out, the idea of counting "seconds since the Epoch", which is the phrase that the standard uses, intentionally doesn't specify that POSIX seconds are the same length as SI seconds, nor that the result of gmtime() is "necessarily UTC, despite its appearance". The POSIX standard is intentionally loose enough so that it allows for (say) a UNIX system where the administrator goes and manually fixes up leap second adjustments by re-setting the clock the week after they happen. Indeed, the rationale points out that it's intentionally loose enough to accommodate systems where the clock has been deliberately set wrong to some time other than the current UTC time.

UTC and TAI

The interpretation of UNIX Time obtained from the kernel is up to library routines running in applications. POSIX specifies an identity between the kernel's time and a "broken down time" in a struct tm. But, as Daniel J. Bernstein once pointed out, the 1997 edition of the standard got this identity embarrassingly wrong, messing up the Gregorian Calendar's leap year rule (something that schoolchildren learn) so that the calculation was in error from the year 2100 onwards. "More honour'd in the breach than the observance" is a phrase that comes readily to mind.
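
For reference, the Gregorian rule that the 1997 text fumbled is short enough to state in full: every fourth year is a leap year, except centuries, except centuries divisible by 400 (so 2000 was a leap year and 2100 will not be).

    int is_leap_year(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }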

And honoured in the breach it indeed is. Several systems nowadays base this interpretation upon library routines written by Arthur David Olson, which consult the infamous "Olson timezone database", usually encoded in database files under /usr/share/zoneinfo/. The Olson system had two modes:

  • The kernel's "seconds since the Epoch" is considered to count UTC seconds since 1970-01-01 00:00:00 UTC, except for leap seconds. This uses the posix/ set of Olson timezone database files. All days have 86400 kernel seconds and there are never 61 seconds in a minute, but they aren't always the length of an SI second and the kernel clock needs slewing or stepping when leap seconds occur.
  • The kernel's "seconds since the Epoch" is considered to count TAI seconds since 1970-01-01 00:00:10 TAI. This uses the right/ set of Olson timezone database files. Kernel seconds are 1 SI second long and the kernel clock never needs slewing or stepping to adjust for leap seconds, but broken down times can have values such as 23:59:60 and days are not always 86400 kernel seconds long.
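
The practical difference between the two modes is easy to see, assuming the right/ set of files is actually installed and reachable through TZ. This sketch interprets the same time_t both ways; the 26-second gap it shows is the number of leap seconds accumulated by mid-2016, matching the figure mentioned below.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void show(const char *zone, time_t t)
    {
        char buf[64];
        setenv("TZ", zone, 1);
        tzset();
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&t));
        printf("%-10s %s\n", zone, buf);
    }

    int main(void)
    {
        time_t t = 1467932496;
        show("UTC", t);          /* posix/ interpretation: 2016-07-07 22:28:16 */
        show("right/UTC", t);    /* right/ interpretation: 26 seconds earlier */
        return 0;
    }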

M. Bernstein wrote several tools, including his daemontools toolset, that required right/ because they simply added 10 to time_t to get TAI seconds since 1970-01-01 00:00:00 TAI. He documented this in the manual page.

This requirement was (perhaps unknowingly) inherited by toolsets such as daemontools-encore and runit and by Felix von Leitner's libowfat. Use Bernstein multilog, Guenter multilog, or Pape svlogd with an Olson posix/ configuration, for example, and all of the TAI64N timestamps will be (at the time of writing this) 26 seconds behind the actual TAI second count since 1970-01-01 00:00:10 TAI.
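
Here is roughly how those tools derive a TAI64N label, as a hedged sketch rather than their exact code: take time_t, add 10, and add the 2⁶² offset that the TAI64 label format requires, which only yields correct TAI when time_t comes from a right/ configuration.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Illustrative only; the real tools do this arithmetic on their own types. */
    static void print_tai64n(time_t secs, long nsec)
    {
        uint64_t label = (uint64_t)secs + 10 + 0x4000000000000000ull;  /* 2^62 offset + 10 */
        printf("@%016llx%08lx\n", (unsigned long long)label, (unsigned long)nsec);
    }

    int main(void)
    {
        struct timespec now;
        clock_gettime(CLOCK_REALTIME, &now);
        print_tai64n(now.tv_sec, now.tv_nsec);   /* correct TAI only with a right/ time_t */
        return 0;
    }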

Laurent Bercot and I addressed this in s6 and nosh, albeit that we took different approaches. M. Bercot's tai_from_sysclock() relies on a compile-time flag. nosh tools that deal in TAI64N look at the TZ and TZDIR environment variables to auto-detect posix/ and right/ if they can.

Interestingly, FreeBSD documents time2posix() and posix2time() functions that allow the equivalent of the Olson right/ mode, with time_t as TAI seconds. They are apparently not enabled, however.

Once again…

UNIX time is measured on your computer running UNIX, by oscillators contained in your computer's hardware. It doesn't use SI seconds; it isn't UTC even though it may superficially resemble it; and it intentionally permits your clock to be wrong.


JdeBP
  • 68,745
13

The adjustments to the clock are co-ordinated by the IERS. They schedule the insertion of a leap second into the time stream as required.

From The NTP Timescale and Leap Seconds

The International Earth Rotation Service (IERS) at the Paris Observatory uses astronomical observations provided by USNO and other observatories to determine the UT1 (navigator's) timescale corrected for irregular variations in Earth rotation.

To the best of my knowledge 23:59:60 (Leap Second) and 00:00:00 the next day are considered the same second in Unix Time.
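
That folding is visible at the C library level with an ordinary (posix/-style) configuration: give mktime() a broken-down time of 23:59:60 and it normalizes it to the same timestamp as 00:00:00 the next day, because no separate Unix timestamp exists for the leap second. A minimal sketch, assuming setenv()/tzset() behave as on a typical GNU or BSD libc:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        setenv("TZ", "UTC", 1);
        tzset();

        /* 2016-12-31 23:59:60 UTC, the leap second scheduled for the end of 2016 */
        struct tm leap = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31,
                           .tm_hour = 23, .tm_min = 59, .tm_sec = 60 };
        /* 2017-01-01 00:00:00 UTC */
        struct tm next = { .tm_year = 117, .tm_mon = 0, .tm_mday = 1 };

        printf("23:59:60 -> %lld\n", (long long)mktime(&leap));
        printf("00:00:00 -> %lld\n", (long long)mktime(&next));   /* same value */
        return 0;
    }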

BillThor
  • 8,965