🕒 Go, macOS, and the Illusion of Nanosecond Precision

By nevillecain on May 10, 2025

For a long time, I assumed calling time.Now() in Go provided nanosecond precision on my macOS machine.

That assumption fell apart recently while working on some unit tests. I noticed the output from time.Now() consistently showed zeros beyond the microsecond mark. However, when running the same code on a Linux machine, I observed nanosecond precision (though I'm not sure if those are real values or just noise).

This discovery was surprising (especially since I hadn't noticed it for years). Given that many developers code on a Mac and deploy to Linux, this difference is exactly the kind of thing that produces the famous excuse:

"But it works on my machine!"

This motivated me to dig into what's happening behind the scenes. Initially, I wondered whether this was an issue in Go itself or simply a limitation of macOS. Testing with other languages pointed towards a macOS limitation, but let's verify.

🔎 Understanding time.Time Internals

First, let's define two key concepts:

  • Wall clock: The calendar time provided by the OS, affected by timezone, daylight saving, or manual adjustments.
  • Monotonic clock: A continuously increasing clock, ideal for measuring durations, typically based on system uptime.

Go's time.Time struct embeds both clocks, allowing accurate real-world timestamps and precise internal duration tracking:

type Time struct {
    wall uint64
    ext  int64
}

🧮 wall Field Layout (64 bits)

From high to low, the wall field in Go's struct is packed as follows:

[63]      [62............30]          [29..................0]
┌───┬───────────────────────┬──────────────────────────────┐
│ 1 │ 33-bit seconds since  │ 30-bit nanoseconds           │
│   │ Jan 1, 1885           │ within the current second    │
└───┴───────────────────────┴──────────────────────────────┘
↑
hasMonotonic flag (1 bit)
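The field widths are easy to sanity-check: 30 bits are just enough for one second's worth of nanoseconds, and 33 bits of seconds from an 1885 epoch reach to roughly the year 2157 (a back-of-the-envelope check, not Go source):

```go
package main

import "fmt"

func main() {
	// 30 bits must hold 0..999,999,999 nanoseconds.
	fmt.Println(1<<30 > 1_000_000_000) // true: 2^30 = 1,073,741,824

	// 33 bits of seconds cover about 272 years, so an epoch of
	// Jan 1, 1885 keeps the compact form valid until roughly 2157.
	years := float64(uint64(1)<<33) / (365.25 * 24 * 3600)
	fmt.Printf("~%.0f years of range\n", years)
}
```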

📦 ext Field (64 bits)

The ext field's meaning depends on the hasMonotonic flag:

  • If hasMonotonic == 1: ext contains the monotonic clock reading (nanoseconds since process start).
  • If hasMonotonic == 0: ext contains the full signed wall-clock seconds since January 1, year 1.

This lets Go provide both wall and monotonic clocks as needed.

🔬 What's Happening on macOS?

On macOS, time.Now() is implemented by the runtime package, which supplies both the wall and monotonic readings.

Let's start with the wall clock.

(sys_darwin.go)

package runtime

//go:build !faketime && !windows && !(linux && amd64) && !plan9
...
//go:linkname time_now time.now
func time_now() (sec int64, nsec int32, mono int64) {
    sec, nsec = walltime()
    return sec, nsec, nanotime()
}

This implementation eventually calls clock_gettime(CLOCK_REALTIME) via assembly:

(sys_darwin_arm64.s)

TEXT runtime·walltime_trampoline(SB),NOSPLIT,$0
    MOVD    R0, R1          // arg 2 timespec
    MOVW    $CLOCK_REALTIME, R0 // arg 1 clock_id
    BL  libc_clock_gettime(SB)
    RET

To confirm the behavior, let's simulate it in Go:

package main

/*
#include <time.h>
#include <stdint.h>

int64_t get_realtime_nsec() {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return ts.tv_nsec;
}
*/
import "C"
import "fmt"

func main() {
    nsec := C.get_realtime_nsec()
    fmt.Printf("Raw nanoseconds from clock_gettime: %d\n", nsec)
}

Output on macOS:

% go run main.go
Raw nanoseconds from clock_gettime: 840924000
# (always ends in 000: microsecond precision only)

Repeated runs confirm the last three digits are always zero, indicating microsecond precision only.

✅ Nanotime & Monotonic Precision

Interestingly, Go's monotonic clock (nanotime()) on macOS uses mach_absolute_time() converted with a scaling factor from mach_timebase_info(), offering true nanosecond precision:

package main

/*
#include <mach/mach_time.h>
#include <stdint.h>

uint64_t get_mach_absolute_time() {
    return mach_absolute_time();
}

void get_timebase(uint32_t* numer, uint32_t* denom) {
    struct mach_timebase_info info;
    mach_timebase_info(&info);
    *numer = info.numer;
    *denom = info.denom;
}
*/
import "C"
import "fmt"

func main() {
    raw := C.get_mach_absolute_time()

    var numer, denom C.uint32_t
    C.get_timebase(&numer, &denom)

    nsec := uint64(raw) * uint64(numer) / uint64(denom)

    fmt.Printf("mach_absolute_time(): %d (raw ticks)\n", raw)
    fmt.Printf("Converted to nanoseconds: %d ns\n", nsec)
}

Output example:

% go run main.go
mach_absolute_time(): 1295314387163 (raw ticks)
Converted to nanoseconds: 53971432798458 ns

This confirms true nanosecond precision for monotonic durations.

โš ๏ธ The High Sierra Issue

The limitation of the wall-clock nanosecond precision on macOS began with the High Sierra update. Monotonic time still provides nanosecond precision; however, some articles suggest caution.

📌 Summary

| Platform  | time.Now() (wall) | time.Since() (monotonic) |
|-----------|-------------------|--------------------------|
| **macOS** | Microsecond       | Nanosecond               |
| **Linux** | Nanosecond        | Nanosecond               |

Understanding these differences can prevent confusion (and maybe subtle bugs in your CI/CD).
