How can a file have been 'created' in 1641?

75

9

A few years ago, I stumbled on this file on our fileserver.

Screenshot of a file properties dialog in Microsoft Windows

So I wonder: how can a file claim it was created in 1641? As far as I know, time on a PC is counted as the number of seconds since January 1st, 1970. If that counter glitches out, you can get December 31, 1969 (the counter probably holds -1), but I'm stumped by this seemingly random date, which predates even the founding of the United States of America.

So how could a file be dated in 1641?

PS: The dates are in French. Février is February.

Fredy31

Posted 2018-03-27T14:33:25.123

Reputation: 1 005

29

"As far as I know, time on pc is defined by the number of seconds since Jan 1st, 1970." - Even for systems where it is, dates before 1970 are easily represented by making that number (known as a Unix timestamp) negative. For example, Unix time -1000000000 corresponds to 1938-04-24 22:33:20. – marcelm – 2018-03-27T15:50:08.213

1

@marcelm Yes, but the minimum possible date there is in 1901 due to the limited range of 32-bit integers. – slhck – 2018-03-27T15:53:58.663

14

@slhck: I think marcelm was assuming a 64-bit timestamp, because that's what current Unix / Linux filesystems, kernels, and user-space software use. See the clock_gettime(2) man page for a definition of struct timespec, which is what stat and other system calls use to pass timestamps between user-space and the kernel. It's a struct with a time_t in seconds, and a long tv_nsec nanoseconds. On 64-bit systems, both are 64-bit, so the whole timestamp is 128 bits (16 bytes). (Sorry for too much detail, I got carried away.)

– Peter Cordes – 2018-03-27T16:03:21.127

@PeterCordes I actually appreciate the level of detail! I know 64-bit systems use a different time_t now; in hindsight I probably shouldn't even have brought up the year 2038 issue. – slhck – 2018-03-27T16:14:33.053

3

You got a good answer to how a date in the 1600s can be stamped onto the file; now it is time to ponder how it happened. I'd look at the contents of that wp file very closely to see what might have been added, as that might shed light on how it happened. I'd look at the installed plugins and validate none are shady. I am thinking something modified that file and tried to manually stamp modified/created dates to hide that the file was modified, but specified a Unix time instead of a Windows time. – Thomas Carlisle – 2018-03-28T12:35:38.440

1

FYI, in Linux you can backdate a file with touch -d "20 Feb 1641" file. It's occasionally useful when testing code like build systems or source repos which use timestamps to determine some behavior. – Karl Bielefeldt – 2018-03-29T14:04:07.327

2

King Louis XIII the Just was demonstrating the royal line's commitment to PHP? – Jesse Slicer – 2018-03-29T14:29:04.673

1

It may have involved a DeLorean. Just saying. – phyrfox – 2018-03-31T00:59:57.983

Answers

106

Why is a date from the 1600s possible?

Windows does not store file modification timestamps like Unix systems do. According to the Windows Dev Center (emphasis mine):

A file time is a 64-bit value that represents the number of 100-nanosecond intervals that have elapsed since 12:00 A.M. January 1, 1601 Coordinated Universal Time (UTC). The system records file times when applications create, access, and write to files.

So, by setting a wrong value here, you can easily get dates from the 1600s.
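
The tick-to-date arithmetic is easy to sketch in a few lines of Python. This is not how Windows implements it internally, just the same math; the helper name is mine:

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME: a 64-bit count of 100-nanosecond ticks
# since 1601-01-01 00:00 UTC.
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ticks: int) -> datetime:
    """Interpret a FILETIME tick count as a UTC datetime (a sketch)."""
    # 10 ticks of 100 ns each make one microsecond.
    return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

# A tiny (e.g. zeroed-out or miscalculated) value lands in the 1600s:
print(filetime_to_datetime(0))  # 1601-01-01 00:00:00+00:00
```

So any small or wrongly converted tick count will produce a date in the 1600s rather than the 1969/1970 dates familiar from Unix systems.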

Of course, another important question is: how was this value set? What is the actual date? I think you'll never be able to find out, as that could have simply been a calculation error in the file system driver. Another answer hypothesizes that the date is actually a Unix timestamp interpreted as a Windows timestamp, but they're actually calculated on different intervals (seconds vs. nanoseconds).

How does this relate to the Year 2038 problem?

The use of a 64-bit data type means that Windows is (generally) not affected by the Year 2038 problem that traditional Unix systems have: Unix initially used a signed 32-bit integer, which overflows much sooner than the 64-bit value Windows uses, even though Unix counts seconds while Windows counts 100-nanosecond intervals.

Windows is still affected when using 32-bit programs that were compiled with old versions of Visual Studio, of course.

Newer Unix operating systems have already expanded the data type to 64 bits, thus avoiding the issue. (In fact, since Unix timestamps operate in seconds, the new wraparound date will be 292 billion years from now.)
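
The 32-bit rollover is easy to demonstrate by forcing a timestamp through a signed 32-bit integer; the small helper below is purely illustrative:

```python
import struct
from datetime import datetime, timezone

def as_int32(seconds: int) -> int:
    """Simulate storing a Unix timestamp in a signed 32-bit time_t."""
    return struct.unpack('<i', struct.pack('<I', seconds & 0xFFFFFFFF))[0]

last_good = 2**31 - 1  # largest value a signed 32-bit time_t can hold
print(datetime.fromtimestamp(last_good, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
print(as_int32(last_good + 1))  # -2147483648, i.e. it wraps back to 1901
```

One second past 2038-01-19 03:14:07 UTC, the 32-bit counter wraps to its minimum and the date jumps back to December 1901.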

What is the maximum date that can be set?

For the curious ones – here's how to calculate that:

  • The maximum value of a signed 64-bit integer is 2⁶³ − 1 = 9223372036854775807.
  • Each tick represents 100 nanoseconds, which is 0.1 µs or 0.0000001 s.
  • The maximum time range is therefore 9223372036854775807 ⨉ 0.0000001 s ≈ 922,337,203,685 s, i.e. hundreds of billions of seconds.
  • One hour has 3600 seconds, one day has 86400 seconds, and one year has 365 days, so there are 86400 ⨉ 365 s = 31536000 s in a year. This is, of course, only an average, ignoring leap years, leap seconds, or any calendar changes that future postapocalyptic regimes might dictate on the remaining earthlings.
  • 9223372036854775807 ⨉ 0.0000001 s / 31536000 s ≈ 29247 years
  • @corsiKa explains how we can subtract leap years: 29247 / 365 / 4 ≈ 20
  • So your maximum year is 1601 + 29247 – 20 = 30828.
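
The bullet-point estimate above can be redone in Python. (Note that Python's own datetime can't be used for the end result, since it tops out at year 9999.)

```python
max_ticks = 2**63 - 1              # largest signed 64-bit tick count
seconds = max_ticks * 1e-7         # each tick is 100 ns = 1e-7 s
years = seconds / 31536000         # 86400 * 365 seconds per "year"
leap_years = years / 4 / 365       # extra leap days, expressed in years

print(round(years))                        # 29247
print(round(1601 + years - leap_years))    # 30828
```

The same simplifications as above (365-day years, one leap day every 4 years) land on the year 30828.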

Some folks have actually tried to set this and came up with the same year.

slhck

Posted 2018-03-27T14:33:25.123

Reputation: 182 472

7For the record, Unix (or at least Linux) has already solved the 2038 problem for the kernel / filesystems on 64-bit architectures where time_t is a 64-bit type, so hopefully applications use time_t instead of an integer type that's still 32-bit and would truncate timestamps... Anyway, in case anyone was curious about the status of the problem, it's mostly solved except for legacy 32-bit. IDK if 32-bit ARM will still be relevant in 2038, but this will force at least an ABI change. 32-bit x86 is pretty much gone in the Unix world; it's only Windows where 32-bit code is still common. – Peter Cordes – 2018-03-27T16:00:00.230

Comments are not for extended discussion; this conversation has been moved to chat.

– Journeyman Geek – 2018-03-29T03:24:26.747

1VMS time is "a 64-bit value that represents the number of 100-nanosecond intervals that have elapsed since" 17-Nov-1858. :) – RonJohn – 2018-03-29T14:08:33.530

Year 30k is ... surprisingly low – John Dvorak – 2018-03-29T18:21:40.300

Do you know why they chose 1601? There weren't any computers back then, so I don't really see the purpose of supporting dates that are that early. I'm curious to know why they made it that way. – Donald Duck – 2018-03-30T18:51:17.510

2

@DonaldDuck https://blogs.msdn.microsoft.com/oldnewthing/20090306-00/?p=18913/ Also, around 1600 people were still using the Julian calendar, making the representation of any date before that more complicated.

– Maciej Piechotka – 2018-03-30T22:20:14.363

16

If you don't feel too bad about some guessing, let me offer an explanation. And I don't mean "someone set the value to nonsense", that's obviously always possible :)

Unix time usually counts the number of seconds since 1970. Windows, on the other hand, uses 1600 as its starting year. So if we assume (and that's a big assumption!) that the problem is a wrong conversion between the two times, we can imagine that the date that was supposed to be represented is actually sometime in 2011 (1970 + 41), which got incorrectly converted to 1641 (1600 + 41). EDIT: Actually, I made a mistake in the Windows starting year; it's 1601, not 1600. So it's possible that the actual creation time was in 2010 (1970 + 40, converted to 1601 + 40 = 1641), or that there was another error involved (off-by-one errors are pretty common in software :D).

Given that this year happens to be another of the tracking dates associated with the file in question, I think it's a pretty plausible explanation :)

Luaan

Posted 2018-03-27T14:33:25.123

Reputation: 703

So it could be that the date was written in Unix, but is read in UTC? – Fredy31 – 2018-03-27T19:09:47.757

Windows uses 1601, not 1600. – Joey – 2018-03-27T21:14:44.683

You also have to assume that one UNIX time unit is the same as one Windows time unit. – Eric Duminil – 2018-03-27T22:40:49.010

3

A few issues with that theory: According to the screenshot, the file would've been created 11 days after it was last accessed. Also, Windows timestamps are counted in 100-nanosecond intervals, while Unix time is in seconds. – Nolonar – 2018-03-28T06:27:04.503

@Nolonar I'm not suggesting the same integer value was used for both - just that the conversion was done wrong. I've seen plenty of code that does something like "Add X days/seconds to 1970". As for the last access time, I ignored days for a reason - there's plenty of extra errors that can add up; even just leap years would make for a large error if ignored. – Luaan – 2018-03-28T08:29:00.067

@EricDuminil No, I don't. I'm assuming the conversion involved some library that understands dates (not just integers), but does the filetime conversion wrong. – Luaan – 2018-03-28T08:29:34.213

@Nolonar as per Joey's comment then it could have been created on 02/20/2010 and last accessed on 02/09/2011 (assuming Luaan's theory) – Rafalon – 2018-03-28T08:40:09.613

2

@Joey Ugh, my mistake. That makes the fit a lot worse than at first glance :) I think the same basic idea should still work, but there's probably more errors involved than I assumed. Or, of course, the file was created in 2010, not 2011. – Luaan – 2018-03-28T08:40:21.473

2

Seconds between the Windows epoch and the shown file date/time is 1,266,705,294. When added to the Unix epoch, this yields 2010-02-20 23:34:54 CET, which was a Saturday.

– SQB – 2018-03-29T07:15:49.447

@Nolonar Sadly, such desyncs seem pretty common in Windows. Here's a screenshot of a file I have that was apparently created three years after it was last modified.

– Luke Sawczak – 2018-03-30T02:38:58.457

@LukeSawczak Most applications don't bother updating the accessed times, but happily update the creation times (and if you're lucky, last modified). Almost as if nobody bothered reading the documentation or thinking for five seconds :D – Luaan – 2018-03-30T08:29:21.950

@Nolonar SOME Unix filesystems count seconds, like ext2/ext3/ReiserFS, while others, such as ext4/btrfs, count in nanoseconds – hanshenrik – 2018-03-30T20:07:43.560

5

As has been written by others, the Windows epoch is 1601-01-01 00:00 UTC.

The number of seconds between that epoch and the file time displayed is 1,266,705,294.
If we add that many seconds to the Unix epoch instead, we arrive at 2010-02-20 23:34:54 CET, a Saturday. This is about a year before the last access date, which makes it somewhat plausible. So it may have been a Unix timestamp interpreted against the wrong epoch.
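
For the curious, the calculation is easy to reproduce in Python. The exact time of day isn't stated in the question, so the 22:34:54 UTC value below is a hypothetical reconstruction back-filled from the 1,266,705,294-second figure:

```python
from datetime import datetime, timedelta, timezone

windows_epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Hypothetical reconstruction of the timestamp from the screenshot:
shown = datetime(1641, 2, 20, 22, 34, 54, tzinfo=timezone.utc)

# Seconds between the Windows epoch and the displayed date...
seconds = int((shown - windows_epoch).total_seconds())
print(seconds)  # 1266705294

# ...reinterpreted against the Unix epoch:
print(unix_epoch + timedelta(seconds=seconds))  # 2010-02-20 22:34:54+00:00
```

Adding one hour for CET turns that into the 23:34:54 figure quoted above.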

SQB

Posted 2018-03-27T14:33:25.123

Reputation: 629

Strangely enough, the .NET epoch is 0001-01-01 00:00 UTC (which is 1600 years earlier). See https://msdn.microsoft.com/en-us/library/system.datetime.ticks(v=vs.110).aspx

– Nayuki – 2018-03-31T03:27:19.343

2

As usual for these types of questions, Raymond Chen's blog has an answer regarding this from the "Why is the Win32 epoch January 1, 1601?" entry from March 6, 2009:

The FILETIME structure records time in the form of 100-nanosecond intervals since January 1, 1601. Why was that date chosen?

The Gregorian calendar operates on a 400-year cycle, and 1601 is the first year of the cycle that was active at the time Windows NT was being designed. In other words, it was chosen to make the math come out nicely.
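
That 400-year cycle is easy to verify: any 400 consecutive Gregorian years contain the same number of days (146097), and since 146097 is divisible by 7, the weekday pattern repeats as well. A quick Python check:

```python
import calendar

def days_in_400_years(start: int) -> int:
    """Total days in the 400 Gregorian years beginning at `start`."""
    return sum(366 if calendar.isleap(y) else 365
               for y in range(start, start + 400))

print(days_in_400_years(1601))      # 146097
print(days_in_400_years(2001))      # 146097, same for any start year
print(days_in_400_years(1601) % 7)  # 0, so weekdays repeat too
```

With 1601 as the epoch, every timestamp calculation starts at the beginning of a clean 400-year cycle, which is exactly the "math comes out nicely" point above.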

I actually have the email from Dave Cutler confirming this.

ErikF

Posted 2018-03-27T14:33:25.123

Reputation: 249