Wildly different results from simply swapping the order of benchmarks #2004

@space-alien

Description

Faced with seemingly inconsistent and inscrutable benchmark results, I followed a hunch and swapped the order of appearance of my two benchmark methods.
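For context, here is a minimal sketch of the class layout. The real method bodies are not part of this report; only the method names, the declaration order, and the baseline attribution implied by the Ratio column (pinned to 1.00 for MethodA) are taken from the results below.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Benchmarks
{
    // Declared first in the "A then B" run and second in the swapped run.
    // Baseline = true matches the Ratio column, which pins MethodA at 1.00.
    [Benchmark(Baseline = true)]
    public int MethodA() => 0; // placeholder body; the real ones are nanosecond-scale

    [Benchmark]
    public int MethodB() => 0; // placeholder body
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<Benchmarks>();
}
```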

The following results are consistently repeatable.

A then B:

|           Method |     Mean |     Error |    StdDev | Ratio |
|----------------- |---------:|----------:|----------:|------:|
|          MethodA | 1.541 ns | 0.0033 ns | 0.0029 ns |  1.00 |
|          MethodB | 3.451 ns | 0.0079 ns | 0.0074 ns |  2.24 |

Swap the order:

|           Method |      Mean |     Error |    StdDev | Ratio | RatioSD |
|----------------- |----------:|----------:|----------:|------:|--------:|
|          MethodB | 4.9731 ns | 0.0951 ns | 0.1057 ns | 21.52 |    0.64 |
|          MethodA | 0.2302 ns | 0.0024 ns | 0.0020 ns |  1.00 |    0.00 |

Nothing has changed between these two runs aside from swapping the order in which the two benchmark methods appear on the benchmark class. (Note that Ratio is computed against the baseline within each run: 3.451 / 1.541 ≈ 2.24 in the first run, and 4.9731 / 0.2302 ≈ 21.6 in the second, so the jump in Ratio mostly reflects MethodA's own mean collapsing from 1.541 ns to 0.230 ns while MethodB's mean grew.)

How should I diagnose what is going on here?
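One thing I could try (my assumption about a sensible first step, not something the results above confirm) is attaching BenchmarkDotNet's disassembly diagnoser, so the JIT-compiled code for each ordering can be compared directly:

```csharp
using BenchmarkDotNet.Attributes;

// DisassemblyDiagnoser dumps the JIT-generated assembly for each benchmark,
// which would show whether the machine code itself changes when the
// declaration order changes (e.g. different code alignment) or whether
// the difference comes from somewhere else.
[DisassemblyDiagnoser(maxDepth: 2)]
public class Benchmarks
{
    [Benchmark(Baseline = true)]
    public int MethodA() => 0; // placeholder

    [Benchmark]
    public int MethodB() => 0; // placeholder
}
```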

Environment info:
```
BenchmarkDotNet=v0.13.1
AMD Ryzen 9 5900HX with Radeon Graphics, 1 CPU, 16 logical and 8 physical cores
.NET SDK=6.0.202
  [Host]     : .NET 6.0.4 (6.0.422.16404), X64 RyuJIT
  Job-MPHSCA : .NET 6.0.4 (6.0.422.16404), X64 RyuJIT

Runtime=.NET 6.0
```

I'm running Windows 11, but BenchmarkDotNet reports it as Windows 10, so I have stripped the OS line from the output above.
