AAudio

AAudio is a new Android C API introduced in the Android O release. It is designed for high-performance audio applications that require low latency. Apps communicate with AAudio by reading and writing data to streams.

The AAudio API is minimal by design. It doesn't perform these functions:

- Audio device enumeration
- Audio device routing between audio endpoints
- File I/O
- Decoding of compressed audio
- Automatic presentation of all input/output in a single callback

Audio streams

AAudio moves audio data between your app and the audio inputs and outputs on the device that's running your app. Your app passes data in and out by reading from and writing to audio streams, represented by the structure AAudioStream. The read/write calls can be blocking or non-blocking.

A stream is defined by the following:

- The audio device that is the source or sink for the data in the stream
- The sharing mode that determines whether the stream has exclusive access to the audio device
- The format of the audio data in the stream

Audio device

Each stream is attached to a single audio device.

An audio device is a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data. (Don't confuse an audio device with the Android device that is running your app. They are two different things.)

You can use the AudioManager method getDevices() to discover the audio devices that are available on your Android device. The method returns information about the type of each device.

Each audio device has a unique ID on the Android device. You can use the ID to bind an audio stream to a specific audio device. However, in most cases you can let AAudio choose the default primary device rather than specifying one yourself.
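For example, assuming a deviceId that was obtained on the Java side from getDevices() and passed down to native code through JNI (the variable is hypothetical in this sketch), binding the stream is a single builder call, described further under "Creating an audio stream" below:

// deviceId: assumed to arrive from AudioManager.getDevices() via JNI.
// Passing AAUDIO_UNSPECIFIED instead lets AAudio pick the default device.
AAudioStreamBuilder_setDeviceId(builder, deviceId);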

The audio device attached to a stream determines whether the stream is for input or output. A stream can only move data in one direction. When you define a stream you also set its direction. When you open a stream Android checks to ensure that the audio device and stream direction agree.

Sharing mode

Each stream must specify its sharing mode:

- AAUDIO_SHARING_MODE_EXCLUSIVE: the stream has exclusive access to its audio device; no other audio stream can use the device. Exclusive streams are likely to have lower latency, but they are also more likely to be disconnected. Close exclusive streams as soon as you no longer need them, so that other apps can access the device.
- AAUDIO_SHARING_MODE_SHARED: AAudio mixes the stream with all the other shared streams assigned to the same device.
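For example, a sketch (using the builder API described under "Creating an audio stream" below) that requests exclusive access and then checks what was actually granted, since AAudio may open the stream in shared mode if exclusive access is unavailable:

AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE);
AAudioStream *stream;
aaudio_result_t result = AAudioStreamBuilder_openStream(builder, &stream);
if (result == AAUDIO_OK &&
        AAudioStream_getSharingMode(stream) != AAUDIO_SHARING_MODE_EXCLUSIVE) {
    // Fell back to shared mode; expect somewhat higher latency.
}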

Audio data

The data passed through a stream has the usual digital audio attributes, which you must specify when you define a stream. These are as follows:

- Sample data format
- Samples per frame (the channel count)
- Sample rate

AAudio permits four audio data formats:

aaudio_audio_format_t      C data type    Notes
AAUDIO_FORMAT_PCM_I16      int16_t        common 16-bit samples, Q0.15 format
AAUDIO_FORMAT_PCM_I8_24    int32_t        Q9.23 format
AAUDIO_FORMAT_PCM_I32      int32_t        Q1.31 format
AAUDIO_FORMAT_PCM_FLOAT    float          -1.0 to +1.0

AAudio might perform sample conversion on its own. For example, if an app is writing FLOAT data but the HAL uses PCM_I16, AAudio might convert the samples automatically. Conversion can happen in either direction. If your app processes audio input, it is wise to verify the input format and be prepared to convert data if necessary, as in this example:

aaudio_audio_format_t dataFormat = AAudioStream_getFormat(stream);
//... later
if (dataFormat == AAUDIO_FORMAT_PCM_I16) {
    convertFloatToPcm16(...);
}
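A helper like convertFloatToPcm16() is not part of AAudio; here is a minimal sketch of such a conversion, assuming interleaved samples and simple clipping:

#include <stdint.h>

// Hypothetical helper: convert interleaved float samples in [-1.0, 1.0]
// to Q0.15 int16_t samples, clipping out-of-range values.
static void convertFloatToPcm16(const float *input, int16_t *output,
                                int32_t numSamples) {
    for (int32_t i = 0; i < numSamples; i++) {
        float sample = input[i];
        if (sample > 1.0f) sample = 1.0f;
        else if (sample < -1.0f) sample = -1.0f;
        output[i] = (int16_t)(sample * 32767.0f);
    }
}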

Creating an audio stream

The AAudio library follows a builder design pattern and provides AAudioStreamBuilder.

  1. Create an AAudioStreamBuilder:

    AAudioStreamBuilder *builder;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    

  2. Set the audio stream configuration in the builder, using the builder functions that correspond to the stream parameters. These set functions are available:

    AAudioStreamBuilder_setDeviceId(builder, deviceId);
    AAudioStreamBuilder_setDirection(builder, direction);
    AAudioStreamBuilder_setSharingMode(builder, mode);
    AAudioStreamBuilder_setSampleRate(builder, sampleRate);
    AAudioStreamBuilder_setSamplesPerFrame(builder, spf);
    AAudioStreamBuilder_setFormat(builder, format);
    AAudioStreamBuilder_setBufferCapacityInFrames(builder, frames);
    

    Note that these methods don't report errors, such as an undefined constant or value out of range. To be safe, check the state of the audio stream when you create it, which is explained in step 4.

    If you don't specify any properties, the builder defaults to an endpoint-specific channel count, data format, and sample rate on the primary output device. Be sure to check the default properties as described in step 4, below.

  3. When the AAudioStreamBuilder is configured, use it to create a stream:

    AAudioStream *stream;
    result = AAudioStreamBuilder_openStream(builder, &stream);
    

  4. After creating the stream, verify its configuration. The actual configuration of the created stream depends on the capabilities of the audio device to which it's attached and the Android device on which it's running. AAudio does its best to provide the settings you specify, but if a setting is not available, it tries to assign another valid value. As a matter of good defensive programming, you should check the stream's configuration before using it. There are functions to retrieve the stream setting that corresponds to each builder setting:

    Builder set function                              Stream get function
    AAudioStreamBuilder_setDeviceId()                 AAudioStream_getDeviceId()
    AAudioStreamBuilder_setDirection()                AAudioStream_getDirection()
    AAudioStreamBuilder_setSharingMode()              AAudioStream_getSharingMode()
    AAudioStreamBuilder_setSampleRate()               AAudioStream_getSampleRate()
    AAudioStreamBuilder_setSamplesPerFrame()          AAudioStream_getSamplesPerFrame()
    AAudioStreamBuilder_setFormat()                   AAudioStream_getFormat()
    AAudioStreamBuilder_setBufferCapacityInFrames()   AAudioStream_getBufferCapacityInFrames()
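
    For example, a short sketch of such a check (requestedSampleRate and requestedFormat are hypothetical variables holding the values you passed to the builder):

    int32_t actualRate = AAudioStream_getSampleRate(stream);
    aaudio_audio_format_t actualFormat = AAudioStream_getFormat(stream);

    // The granted values may differ from the requested ones.
    if (actualRate != requestedSampleRate || actualFormat != requestedFormat) {
        // Either convert your data to match, or close the stream and
        // try a different configuration.
    }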
  5. You can save the builder and reuse it in the future to make more streams. But if you don't plan to use it any more, you should delete it.

    AAudioStreamBuilder_delete(builder);
    

Using an audio stream

State transitions

An AAudio stream is usually in one of five stable states (the error state, Disconnected, is described at the end of this section):

- Open
- Started
- Paused
- Flushed
- Stopped

Data only flows through a stream when the stream is in the Started state. To move a stream between states, use one of the four functions that request a state transition:

aaudio_result_t result;
result = AAudioStream_requestStart(stream);
result = AAudioStream_requestStop(stream);
result = AAudioStream_requestPause(stream);
result = AAudioStream_requestFlush(stream);
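
Each request returns an aaudio_result_t, which you should check; here is a minimal sketch (AAudio_convertResultToText() converts a result code to a readable string):

aaudio_result_t result = AAudioStream_requestStart(stream);
if (result != AAUDIO_OK) {
    // The stream did not start; log the reason. (Requires <stdio.h>.)
    printf("requestStart failed: %s\n", AAudio_convertResultToText(result));
}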

These functions are asynchronous, so the state change doesn't happen immediately. When you request a state change, the stream moves to one of the corresponding transient states:

- Starting
- Pausing
- Stopping
- Flushing

The state diagram below shows the stable states as rounded rectangles, and the transient states as dotted rectangles:

[State diagram: AAudio stream lifecycle]

AAudio doesn't provide callbacks to alert you to state changes. One special function, AAudioStream_waitForStateChange(), can be used to wait for a state change after issuing a request. For example, after requesting a pause, a stream passes through the transient Pausing state and arrives at the Paused state. Instead of waiting for the Paused state (which might not occur), wait for any state other than Pausing. Here's how that's done:

aaudio_stream_state_t inputState = AAUDIO_STREAM_STATE_PAUSING;
aaudio_stream_state_t nextState = AAUDIO_STREAM_STATE_UNINITIALIZED;
int64_t timeoutNanos = 100 * AAUDIO_NANOS_PER_MILLISECOND;
result = AAudioStream_waitForStateChange(stream,
            inputState, &nextState, timeoutNanos);

If the stream's state is not inputState, the function returns immediately. Otherwise, it blocks until the state is no longer inputState or the timeout expires. When the function returns, check nextState to determine the current state of the stream.

You can use this same technique after calling request start, stop, or flush, using the corresponding transient state as the inputState.
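
For example, a hypothetical helper that requests a stop and then waits for the Stopping state to pass:

// Sketch: stop the stream and wait until it leaves the transient
// Stopping state, or until the timeout expires.
aaudio_result_t stopAndWait(AAudioStream *stream, int64_t timeoutNanos) {
    aaudio_result_t result = AAudioStream_requestStop(stream);
    if (result != AAUDIO_OK) return result;
    aaudio_stream_state_t nextState = AAUDIO_STREAM_STATE_UNINITIALIZED;
    return AAudioStream_waitForStateChange(stream,
            AAUDIO_STREAM_STATE_STOPPING, &nextState, timeoutNanos);
}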

Reading and writing to an audio stream

After the stream is started you can read or write to it:

result = AAudioStream_write(stream, buffer, numFrames, timeoutNanos);
result = AAudioStream_read(stream, buffer, numFrames, timeoutNanos);

For a blocking read or write that transfers the specified number of frames, set timeoutNanos greater than zero. For a non-blocking call, set timeoutNanos to zero. In either case, the result is the actual number of frames transferred.

You can prime the stream's buffer before starting the stream by writing data or silence into it. This must be done in a non-blocking call with timeoutNanos set to zero.

The data in the buffer must match the data format returned by AAudioStream_getFormat().
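
For example, a sketch that primes an output stream with silence before starting it, assuming the stream format is AAUDIO_FORMAT_PCM_I16 (requires <stdlib.h>):

int32_t primeFrames = AAudioStream_getBufferSizeInFrames(stream);
int32_t samplesPerFrame = AAudioStream_getSamplesPerFrame(stream);
// calloc() returns zeroed memory, which is silence for PCM data.
int16_t *silence = (int16_t *) calloc(primeFrames * samplesPerFrame,
                                      sizeof(int16_t));
if (silence != NULL) {
    // timeoutNanos must be zero: the stream is not started yet, so a
    // blocking write could wait forever.
    AAudioStream_write(stream, silence, primeFrames, 0);
    free(silence);
}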

Closing an audio stream

When you are finished using a stream, close it:

AAudioStream_close(stream);

Disconnected audio stream

An audio stream can become disconnected at any time if one of these events happens:

- The associated audio device is no longer connected (for example, when headphones are unplugged).
- An error occurs internally.
- An audio device is no longer the primary audio device.

When a stream is disconnected, it enters the Disconnected state, and any attempt to execute write() or other functions returns AAUDIO_ERROR_DISCONNECTED. Once a stream is disconnected, all you can do is close it.
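
For example, a sketch of recovering from disconnection during playback; recreateStream() stands in for your own stream-creation code from earlier in this guide:

aaudio_result_t result = AAudioStream_write(stream, buffer, numFrames,
                                            timeoutNanos);
if (result == AAUDIO_ERROR_DISCONNECTED) {
    // The old stream is unusable; close it and build a new one.
    AAudioStream_close(stream);
    stream = recreateStream();  // hypothetical app function
}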

Optimizing performance

You can optimize the performance of an audio application by adjusting its internal buffers and by using special high-priority threads.

Tuning buffers to minimize latency

AAudio passes data in and out of internal buffers that it maintains, one for each audio device.

The buffer's capacity is the total amount of data a buffer can hold. Capacity is limited by the hardware device. You can call AAudioStreamBuilder_setBufferCapacityInFrames() to request a capacity; the capacity you can allocate is limited to the maximum that the device permits. Use AAudioStream_getBufferCapacityInFrames() to verify the actual capacity of the buffer.

An app doesn't have to use the entire capacity of a buffer. AAudio fills a buffer only up to a size that you can set. The size of a buffer can be no larger than its capacity, and it is often smaller. By controlling the buffer size you determine the number of bursts needed to fill it, and thus control latency. Use the methods AAudioStream_setBufferSizeInFrames() and AAudioStream_getBufferSizeInFrames() to work with the buffer size.

When an application plays audio out, it writes to a buffer and blocks until the write is complete. AAudio reads from the buffer in discrete bursts. Each burst contains multiple audio frames and is usually smaller than the size of the buffer being read. The system controls burst size and rate. Though you can't change the size of a burst or the burst rate, you can set the size of the internal buffer according to the number of bursts it contains. Generally, you get the lowest latency if you match the reported burst size, which you can determine by calling AAudioStream_getFramesPerBurst().
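
For example, a minimal sketch that requests a buffer size of two bursts, a common low-latency starting point:

int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
// Request two bursts; the call returns the size actually granted,
// which may be different.
int32_t actualSize = AAudioStream_setBufferSizeInFrames(stream,
                                                        2 * framesPerBurst);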

[Diagram: AAudio buffering]

One way to optimize the buffer size is to start with a large buffer and gradually lower it until underruns begin, then nudge it back up. Alternatively, you can start with a small buffer size and if that produces underruns, increase the buffer size until the output flows cleanly again.

This tuning process can take place very quickly, possibly before the user plays the first sound. You may want to do the initial tuning using silence so that the user won't hear any audible glitches.

Here is an example of a buffer optimization loop:

int32_t previousUnderrunCount = 0;
int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
int32_t bufferSize = AAudioStream_getBufferSizeInFrames(stream);

int32_t bufferCapacity = AAudioStream_getBufferCapacityInFrames(stream);

while (go) {
    result = writeSomeData();
    if (result < 0) break;

    // Are we getting underruns?
    if (bufferSize < bufferCapacity) {
        int32_t underrunCount = AAudioStream_getXRunCount(stream);
        if (underrunCount > previousUnderrunCount) {
            previousUnderrunCount = underrunCount;
            // Try increasing the buffer size by one burst
            bufferSize += framesPerBurst;
            bufferSize = AAudioStream_setBufferSizeInFrames(stream, bufferSize);
        }
    }
}

You can use the same technique to optimize the buffer size for an input stream. In that case your code should be looking for overruns rather than underruns.

Using a high priority thread

AAudio provides a special high priority thread for running streams with low latency. This thread has better scheduling performance than a normal application thread.

To run in a high-priority thread, define a thread function according to this prototype:

void *(*aaudio_thread_function)(void *);

For example:

std::atomic<bool> s_audioEnabled{true};
void * myAAudioThreadProc(void *arg) {
    MyData *data = (MyData *) arg;
    aaudio_result_t result = AAUDIO_OK;
    // Create a stream
    . . .
    // Play audio in a loop.
    while (s_audioEnabled.load() && result == AAUDIO_OK) {
        getMidiFromFifo();
        synthesizeAudio(buffer, numFrames);
        result = AAudioStream_write(
                stream, buffer, numFrames, timeoutNanos);
    }
    // Clean up
    . . .
    return NULL;
}

The following example shows how to create and start a high-priority thread:

static MyAAudioThreadData myAAudioThreadData = {0};
int64_t nanosPerWakeup = AAUDIO_NANOS_PER_SECOND *
                         burstsPerWakeup * framesPerBurst / framesPerSecond;
result = AAudioStream_createThread(stream,
                                nanosPerWakeup,
                                myAAudioThreadProc,
                                &myAAudioThreadData);

The nanosPerWakeup parameter is an estimate of the wakeup period. It is a hint that allows the thread scheduler to optimize the thread priority.

The framesPerSecond term is the same as the sample rate. Calling it framesPerSecond lets you do a unit analysis of the calculation, showing that all the units cancel except nanosPerWakeup.
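
For example, with hypothetical values of burstsPerWakeup = 1, framesPerBurst = 192, and framesPerSecond = 48000:

nanosPerWakeup = 1,000,000,000 ns/s * 1 * 192 frames / 48,000 frames/s
               = 4,000,000 ns

that is, a 4 millisecond wakeup period.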

To stop the thread, have your thread function return gracefully. The AAudioStream_joinThread() function waits for the thread to exit:

s_audioEnabled.store(false); // tell thread loop to exit
void *returnArg = NULL;
int64_t timeoutNanoseconds = AAUDIO_NANOS_PER_SECOND / 2;
result = AAudioStream_joinThread(stream,
                              &returnArg,
                              timeoutNanoseconds);

Thread safety

The AAudio API is not completely thread safe. This is because AAudio avoids using mutexes, which can cause thread preemption and glitches.

To be safe, don't call AAudioStream_waitForStateChange() or read or write to a stream from two different threads. Similarly, don't close a stream in one thread while reading or writing to it in another thread.

Calls that return stream settings, like AAudioStream_getSampleRate() and AAudioStream_getSamplesPerFrame(), are thread safe.

These calls are also thread safe:

- AAudio_convert*ToText()
- AAudio_createStreamBuilder()
- AAudioStream_get*(), except for AAudioStream_getTimestamp()

Code samples

Two small AAudio demo apps are available on our GitHub page:

Known issues
