Mastering Performance Testing in .NET with BenchmarkDotNet

In modern software development, delivering functional features is only part of the equation — ensuring your code performs well under load, uses memory efficiently, and scales with your user base is equally critical. That’s where benchmarking comes in. For .NET developers, the BenchmarkDotNet library simplifies measuring and analysing the performance of your code, so you can rely on data rather than intuition.

In this article we’ll explore what BenchmarkDotNet is, why you might use it, how to get started, how to interpret the results, and some best-practices to make your micro-benchmarks meaningful.

What is BenchmarkDotNet?

BenchmarkDotNet is an open-source benchmarking library for .NET (including .NET Framework, .NET Core / .NET, Mono, and other runtimes) which allows you to mark methods as benchmarks, run them in a controlled environment, collect timing, memory and other diagnostics, and produce structured reports.

With BenchmarkDotNet, you can transform ordinary methods into benchmarks, have them executed under different runtime configurations or versions, and generate results with meaningful statistical information. The library takes care of much of the “boilerplate” and pitfalls of benchmarking — warming up the code, running multiple iterations, isolating overhead, etc.

Why use BenchmarkDotNet?

Here are several compelling reasons:

  • Reliable measurements: BenchmarkDotNet handles warm-up, repeat iterations, and statistical processing so you get more trustworthy numbers than ad-hoc stopwatch code.

  • Multiple runtimes/environments: You can compare performance across .NET versions, runtimes, platforms (x86/x64/ARM), JIT vs AOT, etc.

  • Memory/GC insights: You can analyse not only how fast code executes but how much memory is allocated or how often GC runs.

  • Easy to integrate: It uses a simple attribute-based model; in many cases you can set up a benchmark in minutes.

  • Reports and exports: Output can be generated in human-readable tables, markdown, HTML, CSV, plots, and more.

  • Avoid premature optimisation myths: With real measurements you can validate whether one implementation truly outperforms another — rather than guessing.

Getting Started: A Basic Workflow

Here is a typical flow for using BenchmarkDotNet:

  1. Create a console project
    Create a new .NET console application (so that you have full control over execution).

  2. Install the NuGet package
    Use the command line or IDE to add BenchmarkDotNet to your project (e.g. dotnet add package BenchmarkDotNet).

  3. Write your benchmark class
    Create a public class with public methods you want to benchmark. Mark each such method with the [Benchmark] attribute. Example:
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    public class StringConcatBenchmarks
    {
        private const int N = 1000;
    
        [Benchmark]
        public string UsingStringPlus()
        {
            string s = "";
            for (int i = 0; i < N; i++)
              s += "x";
            return s;
        }
    
        [Benchmark]
        public string UsingStringBuilder()
        {
            var sb = new System.Text.StringBuilder();
            for (int i = 0; i < N; i++)
              sb.Append("x");
            return sb.ToString();
        }
    }
    
    public static class Program
    {
        public static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<StringConcatBenchmarks>();
        }
    }
    
  4. Build in Release mode
    For accurate results, compile your code in the Release configuration (optimised). By default, BenchmarkDotNet refuses to run non-optimised Debug builds, because the unoptimised code and debugging overhead distort results.
  5. Run your benchmarks
    From the command line, run your application (e.g. dotnet run -c Release). BenchmarkDotNet will execute many iterations, gather timings, and print a summary.
  6. Interpret the output
    The summary will show for each benchmark method things like: mean execution time, error margin, standard deviation, ratio compared to a baseline, memory allocations, GC counts (if configured).
  7. Optional: Add attributes for advanced features

    • Use [GlobalSetup] and [GlobalCleanup] to prepare or clean up state before/after benchmarks.

    • Use [Params] to run a benchmark across multiple input sizes or parameter values.

    • Use [MemoryDiagnoser] to capture allocation/GC information.

    • Use job attributes (e.g. [SimpleJob(...)]) to specify runtimes or configuration combinations.
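The advanced attributes above can be combined in a single class. The following is a minimal sketch (the class, method names, and chosen runtimes are illustrative, not from the original article):

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;

[MemoryDiagnoser]                       // report allocations and GC counts
[SimpleJob(RuntimeMoniker.Net60)]       // run the benchmarks under .NET 6
[SimpleJob(RuntimeMoniker.Net70)]       // ...and again under .NET 7 for comparison
public class SortBenchmarks
{
    [Params(100, 10_000)]               // each benchmark runs once per value
    public int Size;

    private int[] data;

    [GlobalSetup]
    public void Setup()                 // runs once before the measured iterations
    {
        var rnd = new Random(42);       // fixed seed keeps runs reproducible
        data = new int[Size];
        for (int i = 0; i < Size; i++)
            data[i] = rnd.Next();
    }

    [Benchmark]
    public int[] SortCopy()
    {
        var copy = (int[])data.Clone(); // copy so each iteration sorts unsorted data
        Array.Sort(copy);
        return copy;                    // return the result so it is not optimised away
    }

    [GlobalCleanup]
    public void Cleanup() => data = null;   // runs once after all iterations
}
```

With multiple [SimpleJob] attributes, the summary table will contain one row per benchmark, per job, per parameter value, which makes cross-runtime comparisons easy to read.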

Example: Comparison of Two Approaches

Suppose we want to compare two methods of converting a list of strings to uppercase: method A uses a foreach loop, method B uses LINQ. We can write something like:

[MemoryDiagnoser]
public class UppercaseBenchmarks
{
    private List<string> data;

    [Params(1000, 10000)]
    public int Size;

    [GlobalSetup]
    public void Setup()
    {
        data = Enumerable.Range(0, Size)
                 .Select(i => Guid.NewGuid().ToString())
                 .ToList();
    }

    [Benchmark]
    public List<string> ForLoop()
    {
        var list = new List<string>(data.Count);
        foreach (var s in data)
            list.Add(s.ToUpperInvariant());
        return list;
    }

    [Benchmark]
    public List<string> LinqSelect()
    {
        return data.Select(s => s.ToUpperInvariant()).ToList();
    }
}

When you run this, BenchmarkDotNet will produce a result table showing how runtime and memory usage compare for each method at each size. If one method is consistently faster and/or uses less memory, you have evidence to pick the “better” implementation in this context.

Interpreting Results & Tips

  • Focus on the Mean (average) execution time, but also observe StdDev (standard deviation) to see how stable the measurements are.

  • If you mark one method with Baseline = true (in its [Benchmark] attribute), then other methods will have a Ratio column that shows how many times slower/faster they are relative to the baseline.

  • MemoryDiagnoser will report how many bytes were allocated and how many garbage-collections (GC) happened in Gen0/Gen1/Gen2.

  • Beware of misleading results from a “small N”: if the workload is too trivial, overhead from the runtime, JIT, and GC will dominate the measurement.

  • Always run a Release build, keep your machine reasonably idle, and avoid other heavy CPU/GPU load during benchmarking.

  • Warm-up iterations happen automatically, so the reported numbers reflect steady-state performance rather than a “cold” first run.

  • Use parameterised benchmarks ([Params]) or multiple job configurations to test different scenarios (small vs large input, different frameworks, etc).

  • Export results when you want to share or archive: BenchmarkDotNet supports Markdown, CSV, HTML, etc.
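The Baseline option mentioned in the tips above can be sketched as follows (the class and method names are illustrative):

```csharp
using System;
using System.Linq;
using BenchmarkDotNet.Attributes;

public class SumBenchmarks
{
    private readonly int[] numbers = Enumerable.Range(1, 1_000).ToArray();

    [Benchmark(Baseline = true)]   // other benchmarks get a Ratio column relative to this
    public int ForLoopSum()
    {
        int sum = 0;
        for (int i = 0; i < numbers.Length; i++)
            sum += numbers[i];
        return sum;
    }

    [Benchmark]
    public int LinqSum() => numbers.Sum();
}
```

In the summary, the baseline method’s Ratio is 1.00, and every other row shows its mean time relative to that, which is much easier to scan than comparing raw nanosecond values.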

Best Practices & Common Pitfalls

Best practices

  • Benchmark only what matters — target hot-paths, algorithms in your application that might actually influence performance.

  • Use input sizes that reflect real-world scenarios (not “toy sizes”).

  • Compare different implementations side-by-side in the same benchmark class so they share the same environment.

  • Document what you are measuring and why; include these benchmarks as part of your development workflow if performance is a concern.

  • Keep baseline results, so when you change code later you can compare “before” vs “after”.

  • If memory allocation is important, include [MemoryDiagnoser] and check that your implementation doesn’t allocate excessively.

Common pitfalls

  • Running in Debug mode: results will be unreliable.

  • Not isolating the benchmark environment: other background work, VM, system activity can skew results.

  • Using extremely small workloads: overhead of measuring might dwarf the actual work.

  • Forgetting to test for correctness: if one implementation is faster but wrong, it doesn’t matter.

  • Comparing apples to oranges: ensure the benchmarked methods really do the same work.

  • Optimiser/elimination effects: if the work inside your method is too trivial, the compiler or JIT might optimise it away.
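The elimination pitfall in the last bullet can be made concrete with a small sketch (the names here are mine, not from the article):

```csharp
using System;
using BenchmarkDotNet.Attributes;

public class EliminationBenchmarks
{
    private readonly double x = 42.0;

    // Risky: the result is discarded, so the JIT is free to remove the
    // computation entirely and you may end up timing an empty method.
    [Benchmark]
    public void SqrtDiscarded()
    {
        Math.Sqrt(x);
    }

    // Safer: returning the value forces the computation to be observable,
    // so it cannot be eliminated as dead code.
    [Benchmark]
    public double SqrtReturned() => Math.Sqrt(x);
}
```

As a rule of thumb, always return (or otherwise consume) the result of the work your benchmark performs; suspiciously fast results near zero nanoseconds are a strong hint that dead-code elimination kicked in.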

When Should You Use BenchmarkDotNet?

If you are working on a .NET application or library and you care about performance (speed, memory, GC, scalability), BenchmarkDotNet is an excellent choice when you want to:

  • Compare two or more implementations of a function to pick the faster one.

  • Measure how your code behaves under different frameworks or runtimes (e.g., .NET 6 vs .NET 7).

  • Identify bottlenecks in frequently-called code paths.

  • Validate performance regressions after a code change.

  • Create reproducible, shareable performance results that your team can review.

If performance is not a concern (e.g., one-off scripts, non-critical code paths), full micro-benchmarking is probably unnecessary; occasional profiling or simple stopwatch timing will suffice.

Summary

BenchmarkDotNet is a powerful, mature library for .NET performance testing that helps you write clear benchmarks, execute them in a controlled way, and interpret results to make data-driven decisions about your code’s performance. By properly setting up benchmarks, running them in Release mode, and interpreting the results, you’ll avoid guesswork and ensure your code performs as well as it should.
