Microbenchmark instrumentation arguments

Configure the behavior of Microbenchmark with the following instrumentation arguments. You can either add these to your Gradle configuration or apply them directly when running instrumentation from the command line. To set these arguments for all Android Studio and command line test runs, add them to testInstrumentationRunnerArguments:

android {
    defaultConfig {
        // ...
        testInstrumentationRunnerArguments["androidx.benchmark.dryRunMode.enable"] = "true"
    }
}

You can also set up instrumentation arguments when running the benchmarks from Android Studio. To change the arguments, do the following:

  1. Edit the run configuration by clicking Edit Configurations and selecting the configuration you want to change.
    Figure 1. Edit the run configuration.
  2. Edit the instrumentation arguments by clicking the button next to the Instrumentation arguments field.
    Figure 2. Edit the instrumentation argument.
  3. Click the add button and add the required instrumentation argument name and value.
    Figure 3. Add the instrumentation argument.

If you're running the benchmark from the command line, use -P android.testInstrumentationRunnerArguments.[name of the argument]:

./gradlew :benchmark:connectedAndroidTest -P android.testInstrumentationRunnerArguments.androidx.benchmark.profiling.mode=StackSampling

If you're invoking the am instrument command directly (which may be the case in CI testing environments), pass the argument to am instrument with -e:

adb shell am instrument -e androidx.benchmark.profiling.mode StackSampling -w com.example.benchmark/androidx.benchmark.junit4.AndroidBenchmarkRunner

For more information about configuring benchmarks in CI, see Benchmarking in CI.

additionalTestOutputDir

Configures where JSON benchmark reports and profiling results are saved on device.

  • Argument type: file path string
  • Defaults to: test APK's external directory
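
For example, you could redirect report output by adding a line like the following to the testInstrumentationRunnerArguments block shown earlier; the directory path here is only an illustrative value, not a required location:

testInstrumentationRunnerArguments["additionalTestOutputDir"] = "/sdcard/Download/benchmark-output"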

androidx.benchmark.dryRunMode.enable

Lets you run benchmarks in a single loop to verify that they work properly.

This means:

  • Configuration errors aren't enforced (for example, to make it easier to run with regular correctness tests on emulators)
  • Benchmark runs only a single loop, with no warmup
  • To reduce runtime, measurements and traces aren't captured

Dry run mode prioritizes test throughput and validation of benchmark logic over build and measurement correctness.

  • Argument type: boolean
  • Defaults to: false

androidx.benchmark.iterations

Overrides the time-driven target iteration count so that a fixed amount of work is performed. This is typically only useful with profiling enabled, where it keeps the amount of work captured within a profiling trace consistent when comparing different implementations or runs. In other scenarios, it likely reduces the accuracy or stability of measurements.

  • Argument type: integer
  • Defaults to: not specified
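
For example, to pin a profiled benchmark run to a fixed number of iterations (50 is an arbitrary illustrative value), you could add this line to the testInstrumentationRunnerArguments block shown earlier:

testInstrumentationRunnerArguments["androidx.benchmark.iterations"] = "50"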

androidx.benchmark.junit4.SideEffectRunListener

You might get inconsistent benchmark results if unrelated background work gets executed while the benchmark is running.

To disable background work during benchmarking, set the listener instrumentation argument to androidx.benchmark.junit4.SideEffectRunListener.

  • Argument type: string
  • Available options:
    • androidx.benchmark.junit4.SideEffectRunListener
  • Defaults to: not specified
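
For example, because this value is passed through the listener instrumentation argument, a minimal way to set it in the testInstrumentationRunnerArguments block shown earlier is:

testInstrumentationRunnerArguments["listener"] = "androidx.benchmark.junit4.SideEffectRunListener"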

androidx.benchmark.output.enable

Enables writing the result JSON file to external storage.

  • Argument type: boolean
  • Defaults to: true
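
For example, if you don't need the JSON report during quick local iteration, you could disable it in the testInstrumentationRunnerArguments block shown earlier:

testInstrumentationRunnerArguments["androidx.benchmark.output.enable"] = "false"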

androidx.benchmark.profiling.mode

Allows capturing trace files while running the benchmarks. See Profile a Microbenchmark for available options.

  • Argument type: string
  • Available options:
    • MethodTracing
    • StackSampling
    • None
  • Defaults to: None
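
For example, to capture method traces rather than the stack samples shown in the command-line examples earlier, you could set the mode in the testInstrumentationRunnerArguments block:

testInstrumentationRunnerArguments["androidx.benchmark.profiling.mode"] = "MethodTracing"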

androidx.benchmark.suppressErrors

Accepts a comma-separated list of errors to turn into warnings.

  • Argument type: list of strings
  • Available options:
    • DEBUGGABLE
    • LOW-BATTERY
    • EMULATOR
    • CODE-COVERAGE
    • UNLOCKED
    • SIMPLEPERF
    • ACTIVITY-MISSING
  • Defaults to: an empty list
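
For example, to downgrade the emulator and debuggable-build errors to warnings during local verification runs, you could pass both error IDs as a comma-separated value:

testInstrumentationRunnerArguments["androidx.benchmark.suppressErrors"] = "EMULATOR,DEBUGGABLE"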

androidx.benchmark.startupMode.enable (Deprecated)

Reconfigures looping behavior to support benchmarking code during startup. Benchmarks run without warmup and capture 10 measurements. To minimize overhead in microbenchmarks, loop averaging is disabled.

  • Argument type: boolean
  • Defaults to: false