
Unix Timestamp Explained Without the Usual Confusion

Understand Unix timestamps, seconds vs milliseconds, and how to convert between epoch values and readable dates.

Intro

Unix timestamps show up everywhere: logs, JWT claims, databases, caches, and analytics pipelines. They are compact and reliable for machines, but opaque for humans.

The most common source of confusion is not the epoch itself. It is whether a system is using seconds or milliseconds.

What is it?

A Unix timestamp is the number of seconds (or, in many systems, milliseconds) elapsed since 00:00:00 UTC on January 1, 1970, a reference point known as the Unix epoch. Leap seconds are not counted.

Computers use it as a neutral, sortable time representation that avoids locale-specific date formatting.
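
As a quick illustration, here is a minimal TypeScript sketch of both units. Date.now() returns milliseconds, so whole epoch seconds need a divide-and-floor:

// Date.now() returns the epoch in milliseconds.
const epochMs: number = Date.now();

// Divide by 1000 and floor to get whole epoch seconds,
// the classic Unix timestamp.
const epochSeconds: number = Math.floor(epochMs / 1000);

console.log(epochMs);      // e.g. 1710500000000
console.log(epochSeconds); // e.g. 1710500000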

Why it matters

  • A timestamp converter is essential when debugging logs or token claims (see the sketch after this list).
  • Understanding UTC and units prevents off-by-1000 mistakes.
  • Epoch values make cross-language and cross-system comparisons easier.
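
For example, JWT time claims such as exp are defined as epoch seconds, while JavaScript's Date constructor expects milliseconds. A minimal sketch of making a logged claim readable:

// JWT time claims (exp, iat, nbf) are epoch seconds, but the
// JavaScript Date constructor expects milliseconds.
const exp = 1710500000; // the example value used later in this article

const expiry = new Date(exp * 1000);
console.log(expiry.toISOString()); // "2024-03-15T10:53:20.000Z"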

Examples

Seconds timestamp

A 10-digit epoch usually means seconds.

1710500000

Milliseconds timestamp

A 13-digit epoch usually means milliseconds.

1710500000000
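
Both values name the same instant once the units are respected. A minimal sketch using the numbers above:

const seconds = 1710500000;    // 10 digits: epoch seconds
const millis = 1710500000000;  // 13 digits: epoch milliseconds

// Date expects milliseconds, so scale the seconds value up.
console.log(new Date(seconds * 1000).toISOString());
// "2024-03-15T10:53:20.000Z"

console.log(new Date(millis).toISOString());
// "2024-03-15T10:53:20.000Z" (same instant)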

Common mistakes

  • Treating milliseconds as seconds (or the reverse) and landing millennia in the future, or back in January 1970 (see the sketch after this list).
  • Forgetting that Unix timestamps are UTC-based.
  • Comparing ISO dates and epoch values without normalizing timezone assumptions.
  • Using local time when you actually need deterministic UTC.
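
To make the first mistake concrete, here is a sketch of what happens when the units get crossed:

const millis = 1710500000000; // epoch milliseconds

// Correct: Date takes milliseconds directly.
console.log(new Date(millis).getUTCFullYear()); // 2024

// Wrong: treating the value as seconds and scaling it again.
console.log(new Date(millis * 1000).getUTCFullYear());
// roughly the year 56,000: tens of millennia off

// Wrong the other way: treating epoch seconds as milliseconds.
const seconds = 1710500000;
console.log(new Date(seconds).getUTCFullYear()); // 1970, weeks after the epoch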

Use the tool

Ready to try Timestamp Converter?

Convert Unix timestamps to ISO dates and back.


FAQ

How do I know if a timestamp is seconds or milliseconds?

As a quick heuristic, 10 digits usually means seconds and 13 digits usually means milliseconds. The source system should still be your real authority.
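
A minimal sketch of that heuristic in TypeScript, assuming you only ever see the two common magnitudes:

// Heuristic: values below 1e12 are almost certainly seconds.
// (1e12 ms is September 2001; 1e12 s is tens of millennia away.)
function epochToDate(value: number): Date {
  const ms = value < 1e12 ? value * 1000 : value;
  return new Date(ms);
}

console.log(epochToDate(1710500000).toISOString());    // seconds input
console.log(epochToDate(1710500000000).toISOString()); // milliseconds input
// Both print "2024-03-15T10:53:20.000Z"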

Is Unix time always UTC?

Yes. A Unix timestamp counts from a fixed UTC instant, so the value itself carries no timezone; timezones only enter when you format it for display.
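
A short sketch of that distinction: the epoch value is fixed, and only the formatting step introduces a timezone:

const t = new Date(1710500000 * 1000);

// UTC rendering is deterministic everywhere.
console.log(t.toISOString()); // "2024-03-15T10:53:20.000Z"

// Local rendering depends on the machine's timezone setting.
console.log(t.toString()); // output varies by environment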