
Why is a signed integer used to represent timestamps? There is a clearly defined start at 1970 that's represented as 0, so why would we need numbers before that? Are negative timestamps used anywhere?
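(For reference, a minimal sketch of the premise in the question, assuming an ordinary C environment with a POSIX-style gmtime(): a time_t of 0 is the start of 1970 in UTC.)

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* A time_t of 0 is the Unix epoch: 1970-01-01 00:00:00 UTC. */
        time_t t = 0;
        char buf[64];
        struct tm *tm = gmtime(&t);
        if (tm != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", tm) > 0)
            printf("time_t 0 -> %s\n", buf);
        return 0;
    }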

Michael Mrozek
Bakudan
  • That's why Nostradamus couldn't use his computer to write his predictions for the years 3000+... it would cause an overflow and show his dates as negative. I think they called it the Y3K bug or something! – Jeach Dec 01 '11 at 18:48
  • The ancient Romans had an even worse problem when year numbers switched from negative to positive. They would have called it the Y0K problem if they'd had a way to express the number zero. 8-)} – Keith Thompson Jan 22 '12 at 23:06

2 Answers


Early versions of C didn't have unsigned integers. (Some programmers used pointers when they needed unsigned arithmetic.) I don't know which came first, the time() function or unsigned types, but I suspect the representation was established before unsigned types were universally available. And 2038 was far enough in the future that it probably wasn't worth worrying about. I doubt that many people thought Unix would still exist by then.
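To make the 2038 figure concrete, here is a minimal sketch (assuming a time_t wide enough to hold INT32_MAX and a POSIX-style gmtime()) of where a signed 32-bit counter of seconds runs out:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void)
    {
        /* The largest value a signed 32-bit time_t can hold: 2^31 - 1 seconds. */
        time_t t = (time_t)INT32_MAX;
        char buf[64];
        struct tm *tm = gmtime(&t);
        if (tm != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", tm) > 0)
            printf("a signed 32-bit time_t overflows just after %s\n", buf);
        return 0;
    }

On a typical system this prints 2038-01-19 03:14:07 UTC, which is where the "year 2038" problem gets its name.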

Another advantage of a signed time_t is that extending it to 64 bits (which is already happening on some systems) lets you represent times several hundred billion years into the future without losing the ability to represent times before 1970. (That's why I oppose switching to a 32-bit unsigned time_t; we have enough time to transition to 64 bits.)
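A rough back-of-the-envelope check of that range (a sketch, using an average Gregorian year of 365.2425 days):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Seconds in an average Gregorian year (365.2425 days). */
        const double seconds_per_year = 365.2425 * 24.0 * 60.0 * 60.0;

        /* A signed 64-bit time_t covers INT64_MAX seconds on either side of 1970. */
        double years = (double)INT64_MAX / seconds_per_year;
        printf("a signed 64-bit time_t spans about %.0f billion years "
               "before and after 1970\n", years / 1e9);
        return 0;
    }

That works out to roughly 292 billion years in each direction, comfortably "several hundred billion".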

  • The time function is older than the epoch: Unix v1 (in 1971) counted in units of 1/60th of a second, from midnight on 1971/01/01. It was already a known bug that “The chronologically-minded user will note that 2**32 sixtieths of a second is only about 2.5 years.” unsigned was introduced by K&R in 1978, well after the 1970 epoch was established. – Gilles 'SO- stop being evil' Nov 26 '11 at 00:37
  • I did a quick test on my 64-bit Linux box: gmtime and localtime max out in the year 2147483647 (the next second after that gives -2147483648 as the year). So to get much past 55 bits of time somebody will have to update the output routine to use a 64-bit int for the year instead of a signed 32-bit int. Hopefully somebody will take care of that bug sometime in the next couple billion years. – freiheit Dec 02 '11 at 19:11
  • @freiheit: Interesting. The problem there is that the struct tm type has a member tm_year (representing years since 1900) which is of type int. 64-bit systems can easily have a 64-bit time_t, but they typically have a 32-bit int. (If char is 8 bits and int is 64 bits, then short can be either 16 or 32 bits, and there will be no predefined type for the other size.) But time() is probably the only function in <time.h> that really requires system-level support; you can write your own code to convert time_t values to human-readable strings. – Keith Thompson Dec 02 '11 at 20:18
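A minimal sketch of the limit discussed in the two comments above, assuming a typical 64-bit Linux system (64-bit time_t, 32-bit int): time_t itself has plenty of range, but struct tm stores the year as a plain int counted from 1900.

    #include <stdio.h>
    #include <limits.h>
    #include <time.h>

    int main(void)
    {
        struct tm example = {0};

        printf("time_t is %zu bits wide\n", sizeof(time_t) * 8);
        printf("tm_year is %zu bits wide\n", sizeof example.tm_year * 8);

        /* tm_year holds years since 1900 as an int, so the largest year
           that gmtime()/localtime() can report is INT_MAX + 1900. */
        printf("largest year representable in struct tm: %lld\n",
               (long long)INT_MAX + 1900LL);
        return 0;
    }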

It's to support timestamps and dates before January 1st, 1970.
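A minimal sketch of what a negative time_t means in practice (assuming a POSIX-style gmtime() that accepts values before the epoch, as common implementations do):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Negative values count seconds before the epoch:
           -86400 is exactly one day before 1970-01-01 00:00:00 UTC. */
        time_t t = (time_t)-86400;
        char buf[64];
        struct tm *tm = gmtime(&t);
        if (tm != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", tm) > 0)
            printf("time_t %lld -> %s\n", (long long)t, buf);
        return 0;
    }

This prints 1969-12-31 00:00:00 UTC, a date that a purely unsigned counter could not represent.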

amphetamachine
  • That only reaches 68 years into the past - back to 1902. That seems like rather little. – Bakudan Nov 25 '11 at 20:43
  • POSIX doesn't require time_t to be only 32 bits; it's already 64 bits on many systems. – Keith Thompson Nov 25 '11 at 20:47
  • The mktime() function returns -1 in case of error, so it's probably impossible to distinguish a correct timestamp before 1970-01-01 from an error. Apparently dates before 1970-01-01 are prohibited. – DimG Apr 21 '17 at 10:51
  • @DimG: It's difficult to distinguish between an error and the specific timestamp 1969-12-31 23:59:59 UTC. A negative value other than -1 is unambiguous. – Keith Thompson May 18 '17 at 17:14
  • @KeithThompson that breaks the whole idea, no? – DimG May 18 '17 at 19:14
  • @DimG: How so? It is mildly unfortunate that mktime() uses in-band error signalling, but it only means that one particular moment, unlikely to appear in practice, cannot be easily distinguished from an error. With that one exception, mktime() works just fine for times before and after the epoch. If I were designing the interface today, I'd probably either use some other mechanism to indicate an error, or use a defined constant with an implementation-defined value rather than -1. – Keith Thompson May 18 '17 at 19:55
  • @DimG Set errno = 0 right before calling mktime and check the value of errno immediately afterwards, before calling any other library function. That's your actual error indicator; don't think of the -1 as indicating an error, think of it as just a garbage value, because in the event of an error the return value still has to contain something (see the sketch after these comments). – mtraceur May 23 '18 at 09:29
  • @KeithThompson See my above comment re: errno for out-of-band error indication on mktime. – mtraceur May 23 '18 at 09:30
  • @mtraceur: The C standard doesn't require a failing mktime() call to set errno. (POSIX does.) – Keith Thompson May 23 '18 at 21:50
  • @KeithThompson Thanks for pointing that out. I neglected to do due diligence and double-check C's spec before commenting. – mtraceur May 24 '18 at 22:17
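A minimal sketch of the errno pattern described in the comments above (note the caveat that POSIX requires a failing mktime() to set errno, but plain ISO C does not, so this is only as portable as that guarantee):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        /* A legitimate pre-epoch local time whose mktime() result may
           coincide with the in-band error value (time_t)-1. */
        struct tm when = {0};
        when.tm_year  = 69;   /* years since 1900 -> 1969 */
        when.tm_mon   = 11;   /* December (months are 0-based) */
        when.tm_mday  = 31;
        when.tm_hour  = 23;
        when.tm_min   = 59;
        when.tm_sec   = 59;
        when.tm_isdst = -1;   /* let the library determine DST */

        errno = 0;                              /* clear before the call...  */
        time_t t = mktime(&when);
        if (t == (time_t)-1 && errno != 0)      /* ...and test it afterwards */
            printf("mktime failed: %s\n", strerror(errno));
        else
            printf("mktime returned %lld\n", (long long)t);
        return 0;
    }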