TickCount64's Fixed ~15ms Resolution Hurts .NET's Semi-Realtime Responsiveness Despite timeBeginPeriod(1) #122680

@kdbotts

Description

(Written up by Claude, based on info and code from Karl.)

Summary

Environment.TickCount64 deliberately ignores system timer resolution changes made by timeBeginPeriod(), maintaining ~15ms granularity even when the OS supports 1ms resolution. Since the .NET BCL uses TickCount64 extensively for timeout checks in WaitHandle, Timer dispatch, and other blocking primitives, this creates a significant responsiveness bottleneck for semi-realtime applications.

Background

I've been writing trading industry network programs for 40 years.

The Problem

Performance Benchmark Data

Testing on Windows (Release build, December 2025), 200 million operations each:

ClocksSpeedTest: E=7.82 nTest=3 nOp=200000000=200m bench=TickRes sumTestE=7.13
  Label    Elapsed      PerS   PerS/b   b/PerS
TickRes    .513984      389m    1.000    1.000 :  nChg=33 chgFrac=.000000 chg/s=64.2 nGap=33 maxGap=6255304 muGap=6045922.3
HighRes   2.500514     80.0m     .206    4.865 :  nChg=24.8m chgFrac=.124 chg/s=9.92m nGap=24.8m maxGap=8 muGap=7.07
 LowRes   4.115232     48.6m     .125    8.007 :  nChg=40.8m chgFrac=.204 chg/s=9.91m nGap=40.8m maxGap=6 muGap=3.91

Where:

  • TickRes = Environment.TickCount64
  • HighRes = Stopwatch (QueryPerformanceCounter)
  • LowRes = DateTime.UtcNow
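
The benchmark harness itself is not shown above. As a rough guide to what the columns mean, here is a minimal sketch of a loop that produces similar statistics; it assumes the gap columns count reads between consecutive value changes (which matches the magnitudes shown), and the class and method names are illustrative, not the author's actual harness.

// Minimal sketch, not the author's actual harness: read a clock nOps times,
// count how often the returned value changes (nChg) and how many reads occur
// between consecutive changes (the gap statistics).
using System;
using System.Diagnostics;

static class ClockGranularityProbe
{
    public static void Probe(string label, Func<long> clock, int nOps)
    {
        long last = clock();
        long nChg = 0, maxGap = 0, sumGap = 0, lastChangeAt = 0;
        var sw = Stopwatch.StartNew();

        for (long i = 1; i <= nOps; i++)
        {
            long now = clock();
            if (now != last)
            {
                long gap = i - lastChangeAt;   // reads since the previous change
                nChg++;
                sumGap += gap;
                if (gap > maxGap) maxGap = gap;
                lastChangeAt = i;
                last = now;
            }
        }

        sw.Stop();
        double secs = sw.Elapsed.TotalSeconds;
        Console.WriteLine($"{label,8}  elapsed={secs:F6}s  nChg={nChg}  chg/s={nChg / secs:F1}  " +
                          $"maxGap={maxGap}  muGap={(nChg > 0 ? (double)sumGap / nChg : 0):F2}");
    }
}

// Usage (mirrors the three rows above):
// ClockGranularityProbe.Probe("TickRes", () => Environment.TickCount64, 200_000_000);
// ClockGranularityProbe.Probe("HighRes", Stopwatch.GetTimestamp,        200_000_000);
// ClockGranularityProbe.Probe("LowRes",  () => DateTime.UtcNow.Ticks,   200_000_000);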

System Timer Resolution

My application sets the system timer resolution at startup:

static bool TrySetSystemTimerResolution()
{
    try {
        int rv = NtQueryTimerResolution(out _minSystemTimerRes, out _maxSystemTimerRes, out _prevSystemTimerRes);
        /* timeBeginPeriod(x) "requests" that the opsys timer interrupt frequency be set to <x> milliseconds.
        That is the actual kernel interrupt frequency, equivalent to HZ on Linux; it affects not only
        SystemTime resolution, but the frequency of kernel-scheduled context switches.
        It is remarkable that, apparently, this can be assigned from an unprivileged app!
        (It can be, apparently, because games and such need it: it is in "winmm.dll", which means "Multi-Media".)
        Actually, near as I can figure out, at any given time it will be the max frequency (min interval)
        that any currently running process has "requested". Microsoft docs "request" that any process that sets it
        also unset it with timeEndPeriod(); however, when a process exits, the opsys seems to reset it
        for the process (which I am depending on here: see comments for StaticExit(), below).
        The major concern about setting it faster seems to be power consumption on laptops.
        Note that DomainTime2, our NTP client software (which we affectionately call
        "pancakes" because that's what its "system tray" icon looks like) sets the frequency to 1 millisec.
        So on most machines, setting it here is redundant: but a few (including one of mine) do not have DomainTime.
           Later: That comment, and most of this file, were originally written around 2010 or so. KarlAtToad, 2025Dec.
        */
        if ( _newSystemTimerResMilliseconds > 0 ) {
            // timeBeginPeriod returns an MMRESULT status, not the previous period:
            // TIMERR_NOERROR (0) on success, TIMERR_NOCANDO if the value is out of range.
            uint mmResult = timeBeginPeriod(_newSystemTimerResMilliseconds);
            if ( mmResult == 0 /* TIMERR_NOERROR */ )
                return true;
        }
    } catch(Exception) {}
    return false;
}

[DllImport("winmm.dll", EntryPoint="timeBeginPeriod")]
static extern uint timeBeginPeriod(uint uMilliseconds);

[DllImport("winmm.dll", EntryPoint="timeEndPeriod")]
static extern uint timeEndPeriod(uint uMilliseconds);

[DllImport("ntdll.dll", SetLastError=true)]
static extern int NtQueryTimerResolution(out int minRes, out int maxRes, out int actualRes);

Log output shows the timer resolution change:

SystemTimerInterruptFrequency: curr=1002.707310 prev=64.014749 min=64.000000 max=2000.000000

The system timer went from ~64 Hz to ~1000 Hz (1ms resolution).
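
For reference, NtQueryTimerResolution reports resolutions in 100-nanosecond units, so the frequencies in the log line above can be derived as 10,000,000 divided by the reported value. The helper below is an assumption about how the log was produced, not the author's code.

// NtQueryTimerResolution values are in 100-ns units; convert to interrupts/second.
static double ResolutionToHz(uint resolutionIn100ns) => 10_000_000.0 / resolutionIn100ns;

// Examples matching the log line above:
//   156_250 units (15.625 ms) -> 64.0 Hz    (default)
//     9_973 units (0.9973 ms) -> ~1002.7 Hz (after timeBeginPeriod(1))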

The Issue: TickCount64 Ignores Timer Resolution

Despite the system timer running at 1ms resolution:

  • DateTime.UtcNow updates at fine granularity (the benchmark above shows ~9.9M value changes per second)
  • Stopwatch/QueryPerformanceCounter updates at microsecond resolution
  • Environment.TickCount64 still only updates ~64 times/second

According to Microsoft documentation and investigation by Bruce Dawson:

  • GetTickCount64 "simulates the worst-case timer interval"
  • It is "not affected by adjustments made by GetSystemTimeAdjustment"
  • This appears to be a deliberate backwards compatibility decision

Impact on .NET BCL

Looking at the .NET Core reference source, TickCount64 is used extensively throughout the BCL for timeout checks:

  • WaitHandle.Wait...() methods
  • Timer dispatch and scheduling
  • Many other blocking/async primitives
  • Various timeout mechanisms

This means:

  • Timeout granularity is ~15ms even when the OS supports 1ms
  • Timers can overshoot by up to ~15ms before being detected (see the measurement sketch after this list)
  • In environments where timeBeginPeriod(1) is active (games, multimedia apps, NTP clients like DomainTime), the OS supports 1ms responsiveness but .NET doesn't utilize it
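
One easy way to see this in practice is to measure short waits that go through the BCL's timeout machinery; a minimal sketch follows. It only measures observed behavior: how much a nominally 1 ms wait overshoots depends on the runtime version, the system timer state, and which primitive is used, so treat the results as data points rather than a specification.

// Measure how long nominally short waits actually take. Stopwatch (QPC-backed)
// is used as the measuring stick since it is not quantized to ~15.6 ms.
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

static class TimeoutOvershootProbe
{
    static double MeasureMs(Action wait)
    {
        long start = Stopwatch.GetTimestamp();
        wait();
        return (Stopwatch.GetTimestamp() - start) * 1000.0 / Stopwatch.Frequency;
    }

    public static void Main()
    {
        using var neverSignaled = new ManualResetEventSlim(false);

        for (int i = 0; i < 5; i++)
        {
            double waitMs  = MeasureMs(() => neverSignaled.Wait(1));   // 1 ms timeout
            double delayMs = MeasureMs(() => Task.Delay(1).Wait());    // 1 ms Task.Delay
            Console.WriteLine($"event.Wait(1) took {waitMs:F2} ms, Task.Delay(1) took {delayMs:F2} ms");
        }
    }
}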

Why This Is A Problem

  1. Wasted Potential: The OS supports better resolution (especially when applications request it via timeBeginPeriod), but .NET ignores it

  2. Inconsistent Behavior: Native Windows APIs benefit from increased timer resolution, but .NET managed code does not

  3. Semi-Realtime Scenarios Suffer:

    • High-frequency trading systems
    • Game loops and frame timing
    • Audio/video processing
    • Industrial control systems
    • Network protocol implementations
    • Any scenario requiring sub-10ms responsiveness
  4. The "Consistency" Argument Is Weak: Applications that care about low latency are already calling timeBeginPeriod(1). The ship has sailed on "consistent worst-case" behavior.

Proposed Solution

Replace TickCount64 usage in BCL timeout mechanisms with QueryPerformanceCounter (Stopwatch.GetTimestamp()). A sketch of what such a deadline check could look like follows the comparison below.

Performance comparison from benchmark:

  • TickCount64: 389M ops/sec, ~15ms resolution
  • QueryPerformanceCounter: 80M ops/sec, microsecond resolution

QueryPerformanceCounter is:

  • Still very fast (5x slower than TickCount64, but 80M ops/sec is plenty for timeout checks)
  • High resolution (microsecond-level, responds to actual system capabilities)
  • More appropriate for timing decisions in modern applications
  • Already used successfully in Stopwatch
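
For concreteness, a deadline check built on Stopwatch.GetTimestamp() could look something like the sketch below. This illustrates the pattern only; it is not the actual BCL code and not a proposed API.

// Sketch only: a QPC-backed deadline, analogous to the "start + timeout"
// checks that BCL wait loops perform with TickCount64 today.
using System;
using System.Diagnostics;

readonly struct QpcDeadline
{
    readonly long _deadlineTimestamp;   // in Stopwatch ticks

    public QpcDeadline(int millisecondsTimeout)
        => _deadlineTimestamp = Stopwatch.GetTimestamp()
                                + millisecondsTimeout * Stopwatch.Frequency / 1000;

    public bool Expired => Stopwatch.GetTimestamp() >= _deadlineTimestamp;

    // Remaining time, clamped at zero, suitable for passing to an OS-level wait.
    public int RemainingMilliseconds
    {
        get
        {
            long remaining = _deadlineTimestamp - Stopwatch.GetTimestamp();
            return remaining <= 0 ? 0 : (int)(remaining * 1000 / Stopwatch.Frequency);
        }
    }
}

A wait loop would construct one such deadline up front and consult the remaining time before each retry, much as remaining-timeout values are recomputed from TickCount64 today.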

Alternative Solutions

  1. Add a new API: Environment.TickCount64HighRes that respects timer resolution
  2. Make it configurable: Runtime flag or AppContext switch to use QPC for timeouts
  3. Use different mechanisms for different scenarios:
    • Keep TickCount64 for coarse timeouts (>100ms)
    • Use QPC for fine-grained timeouts (<100ms)
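
Option 3 could be prototyped with something like the sketch below: the clock is chosen once, when the wait starts, and used consistently for that wait. The threshold and names here are illustrative assumptions, not a proposed API.

// Illustrative sketch of option 3: coarse timeouts keep the cheap TickCount64
// path, fine-grained timeouts use the QPC-backed Stopwatch clock.
using System;
using System.Diagnostics;

readonly struct HybridTimeout
{
    const int FineGrainedThresholdMs = 100;   // illustrative cutoff

    readonly bool _useQpc;
    readonly long _deadline;   // ms for TickCount64, Stopwatch ticks for QPC

    public HybridTimeout(int millisecondsTimeout)
    {
        _useQpc = millisecondsTimeout < FineGrainedThresholdMs;
        _deadline = _useQpc
            ? Stopwatch.GetTimestamp() + millisecondsTimeout * Stopwatch.Frequency / 1000
            : Environment.TickCount64 + millisecondsTimeout;
    }

    public bool Expired => _useQpc
        ? Stopwatch.GetTimestamp() >= _deadline
        : Environment.TickCount64 >= _deadline;
}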

Related Issues

This relates to previous discussions about .NET timer resolution:

Environment

  • .NET: Core/5.0+ (issue exists across versions)
  • OS: Windows (where timeBeginPeriod is relevant)
  • Application Type: Any requiring semi-realtime responsiveness

Additional Context

The timeBeginPeriod(1) scenario is extremely common in real-world Windows environments:

  • Games routinely set 1ms resolution
  • Multimedia applications (audio/video) set 1ms resolution
  • NTP clients (like DomainTime) set 1ms resolution
  • Chrome browser sets high resolution when playing media

On most developer and user machines, the system is already running at 1ms resolution due to these applications. .NET's decision to ignore this and maintain ~15ms granularity for internal timeouts is increasingly anachronistic.


Questions for .NET Team

  1. Is there a technical reason beyond "backwards compatibility" why TickCount64 couldn't respect timer resolution?
  2. Would the team consider using QueryPerformanceCounter for timeout checks in performance-critical paths?
  3. Is there concern about the 5x performance difference, or is 80M ops/sec sufficient for BCL timeout checking?
  4. Would a configurable approach (AppContext switch) be acceptable as a compromise?

Author's Note: I have 40 years of experience writing trading industry network programs where sub-10ms response times are critical. This issue represents a significant limitation in .NET's ability to compete in semi-realtime scenarios. I'm happy to provide additional benchmark data or clarification as needed.
