
low precision of nanoTimestamp on windows #22460

Closed
BeanHeaDza opened this issue Jan 10, 2025 · 0 comments · Fixed by #22871
Labels: bug, contributor friendly, os-windows, standard library

Comments


BeanHeaDza commented Jan 10, 2025

Zig Version

0.13.0

Steps to Reproduce and Observed Behavior

nanoTimestamp on Windows currently uses GetSystemTimeAsFileTime; however, there is a more precise Windows API, GetSystemTimePreciseAsFileTime. I tried searching for a discussion on why GetSystemTimeAsFileTime is used over GetSystemTimePreciseAsFileTime, but couldn't find anything on the topic. The MSDN documentation states that GetSystemTimePreciseAsFileTime has been available since Windows 8, and based on some issue searching it seems that Zig no longer supports Windows 7 anyway (see this).

Based on that, I'd recommend that nanoTimestamp use GetSystemTimePreciseAsFileTime instead. I've written an example program (sorry, it targets 0.13.0, not master) to show that GetSystemTimePreciseAsFileTime gives better precision for nanoTimestamp on Windows.

const std = @import("std");
const builtin = @import("builtin");

pub extern "kernel32" fn GetSystemTimePreciseAsFileTime(*std.os.windows.FILETIME) callconv(std.os.windows.WINAPI) void;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const start = std.time.nanoTimestamp();
    const startPrec = nanoTimestamp();

    const heap = try allocator.alloc(usize, 10000);
    defer allocator.free(heap);

    const end = std.time.nanoTimestamp();
    const endPrec = nanoTimestamp();

    std.debug.print("The heap alloc took {d}ns\n", .{end - start});
    std.debug.print("The heap alloc took {d}ns (precise)\n", .{endPrec - startPrec});
}

fn nanoTimestamp() i128 {
    if (builtin.os.tag != .windows) {
        return std.time.nanoTimestamp();
    }

    // FileTime has a granularity of 100 nanoseconds and uses the NTFS/Windows epoch,
    // which is 1601-01-01.
    const epoch_adj = std.time.epoch.windows * (std.time.ns_per_s / 100);
    var ft: std.os.windows.FILETIME = undefined;
    GetSystemTimePreciseAsFileTime(&ft);
    const ft64 = (@as(u64, ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    return @as(i128, @as(i64, @bitCast(ft64)) + epoch_adj) * 100;
}

Output:

The heap alloc took 0ns
The heap alloc took 13600ns (precise)

(The non-precise measurement reads 0ns because GetSystemTimeAsFileTime typically only advances once per system timer tick, so both reads land within the same tick.)

Expected Behavior

nanoTimestamp on Windows has better precision.
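
For reference, here's a sketch of what the change inside std.time.nanoTimestamp could look like. It essentially mirrors the helper above; the extern declaration is an assumption made for the sake of a self-contained example, since the function may need to be added to std.os.windows.kernel32 (or declared elsewhere) as part of the change:

const std = @import("std");
const windows = std.os.windows;

// Assumed declaration: declared extern here, as in the example program above.
extern "kernel32" fn GetSystemTimePreciseAsFileTime(lpSystemTimeAsFileTime: *windows.FILETIME) callconv(windows.WINAPI) void;

/// Sketch of the Windows branch of std.time.nanoTimestamp using the precise API.
/// Everything except the API call is unchanged from the reproduction above.
pub fn nanoTimestamp() i128 {
    // FILETIME counts 100ns intervals since the Windows epoch (1601-01-01),
    // so shift to the Unix epoch and scale to nanoseconds.
    const epoch_adj = std.time.epoch.windows * (std.time.ns_per_s / 100);
    var ft: windows.FILETIME = undefined;
    GetSystemTimePreciseAsFileTime(&ft);
    const ft64 = (@as(u64, ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    return @as(i128, @as(i64, @bitCast(ft64)) + epoch_adj) * 100;
}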

@BeanHeaDza added the bug label Jan 10, 2025
@andrewrk added the standard library and os-windows labels Jan 25, 2025
@andrewrk added this to the 0.16.0 milestone Jan 25, 2025
@andrewrk added the contributor friendly label Jan 25, 2025
@jacobly0 modified the milestones: 0.16.0, 0.14.0 Feb 16, 2025