In the beginning, time was simple for computers. We just did everything in our local time. Sure, calendars were tricky, what with months having different numbers of days, months whose length changes from year to year, and so on.

But some places had this daylight saving thing, where the time changed twice a year, often by an hour in either direction. So, the intelligent among us decided to use UTC everywhere, and make adjustments to local time when needed.
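To make the daylight saving problem concrete: the same place can have two different UTC offsets depending on the date. A minimal sketch using Python's zoneinfo module (Europe/Paris is just an illustrative choice; any DST-observing zone shows the same effect):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+, backed by a timezone database

    paris = ZoneInfo("Europe/Paris")

    # The same time of day in January and July lands on different offsets.
    winter = datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc).astimezone(paris)
    summer = datetime(2024, 7, 15, 12, 0, tzinfo=timezone.utc).astimezone(paris)

    print(winter.utcoffset())  # 1:00:00  (CET)
    print(summer.utcoffset())  # 2:00:00  (CEST, daylight saving in force)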

However, real people use local time. For example, if I ask a customer when a problem happened, they will answer in local time. That means that to convert from the computer's time to local time, we need to know which timezone was in use. So we used the combination (UTC, timezone name). But that did not solve the problem everywhere, because calculating the local time requires a set of timezone data, and that data is not always available, correct, or up to date.
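As a rough sketch of that approach in Python (the to_local helper and the example values are made up for illustration), converting a stored (UTC, timezone name) pair to local time only works if the timezone database the code can see actually knows the zone and has correct rules for that instant:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def to_local(utc_time: datetime, tz_name: str) -> datetime:
        """Convert a UTC timestamp plus an IANA timezone name to local time.

        The answer is only as good as the timezone database ZoneInfo reads.
        """
        return utc_time.astimezone(ZoneInfo(tz_name))

    # A stored (UTC, timezone name) pair, picked near a DST transition.
    stored = (datetime(2024, 3, 10, 6, 30, tzinfo=timezone.utc), "America/New_York")
    print(to_local(*stored))  # 2024-03-10 01:30:00-05:00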

It then seemed that using (local time, timezone offset) or (UTC, timezone offset) would do it; these are semantically identical. I prefer, where possible, to use local time + timezone offset, for no justifiable reason.
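A quick sketch of that equivalence, using only the standard datetime module (the +02:00 offset and the example instant are arbitrary):

    from datetime import datetime, timedelta, timezone

    offset = timezone(timedelta(hours=2))  # a fixed UTC+02:00 offset

    # (local time, offset): the wall-clock reading plus the offset in force.
    local_with_offset = datetime(2024, 7, 1, 14, 0, tzinfo=offset)

    # (UTC, offset): the same instant in UTC, with the offset kept alongside.
    utc_instant = datetime(2024, 7, 1, 12, 0, tzinfo=timezone.utc)

    # Both forms name the same instant; each can be derived from the other.
    assert local_with_offset == utc_instant
    assert local_with_offset.astimezone(timezone.utc) == utc_instant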

That works just fine for times whose timezone offsets are correct – that is, for times in the past that the timezone database describes correctly. However, timezones are political, and there are errors in everything, so the timezone database can change, either because a government changes its timezone rules or because bugs in the database get fixed.

This means that you need to store three things about a timestamp (a sketch of such a record follows the list):

  • the actual time, as either a local time or UTC
  • the timezone offset from UTC at that time
  • the timezone name, in case the time is in the future, the offset changes, and we still need to show the time in local time
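Here is a minimal sketch of a record holding all three, assuming Python and its zoneinfo module; the Timestamp class, its field names, and the example values are hypothetical, not any particular library's API.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    @dataclass
    class Timestamp:
        utc: datetime        # the actual instant, stored in UTC
        offset_minutes: int  # offset from UTC that applied when the value was stored
        tz_name: str         # IANA zone name, for re-interpreting future times

        def local_as_recorded(self) -> datetime:
            """Local time using the offset captured at write time."""
            return self.utc.astimezone(timezone(timedelta(minutes=self.offset_minutes)))

        def local_with_current_rules(self) -> datetime:
            """Local time using whatever rules the timezone database has now;
            for future times this is what you want if the rules have changed."""
            return self.utc.astimezone(ZoneInfo(self.tz_name))

    ts = Timestamp(datetime(2030, 6, 1, 9, 0, tzinfo=timezone.utc), 60, "Europe/London")
    print(ts.local_as_recorded())         # uses the stored +01:00
    print(ts.local_with_current_rules())  # recomputed from the timezone database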