Attribution Reporting API: integration guide

As you read through the Privacy Sandbox on Android documentation, use the Developer Preview or Beta button to select the program version that you're working with, as instructions may vary.


The Attribution Reporting API is designed to support key use cases for attribution and conversion measurement across apps and the web without reliance on cross-party user identifiers. Compared to common designs in use today, implementers of the Attribution Reporting API should factor in some important high-level considerations:

  • Event-level reports include low-fidelity conversion data, so they work best with a small number of distinct conversion values.
  • Aggregatable reports include higher-fidelity conversion data. Design your aggregation keys based on your business requirements and the 128-bit key size limit.
  • Your solution's data models and processing should factor in rate limits for available triggers, time delays for sending trigger events, and noise applied by the API.

To help you with integration planning, this guide provides a comprehensive view, which may include features that are not yet implemented at the current stage of the Privacy Sandbox on Android Developer Preview. In these cases, timeline guidance is provided.

On this page, we use source to represent either a click or a view, and trigger to represent a conversion.

The chart below displays the different workflow options for attribution integration. Sections listed in the same column (circled in green) can be worked on in parallel; for example, partner engagement can be done at the same time as app-to-app event-level attribution.


Figure 1. The attribution integration workflow.

Prerequisites and setup

Complete the steps in this section to improve your understanding of the Attribution Reporting API. These steps will set you up to gather meaningful results when using the API in the ad tech ecosystem.

Familiarize yourself with the API

  1. Read the design proposal to familiarize yourself with the Attribution Reporting API and its capabilities.
  2. Read the developer guide to learn how to incorporate the code and API calls that you will need for your use cases.
  3. Submit any feedback you have for the documentation, especially regarding the open questions.
  4. Sign up to receive updates on the Attribution Reporting API. This will help you stay current on new features that are introduced in future releases.

Set up and test the sample app

  1. Once you are ready to begin your integration, get yourself set up with the latest Developer Preview in Android Studio.
  2. Set up mock server endpoints for event registrations and report deliveries. We have provided mocks that you can use in tandem with tools available online; a minimal endpoint sketch also follows this list.
  3. Download and run the code in our sample app to familiarize yourself with registering sources and triggers.
    1. Set the time window for sending reports. The API supports windows of 2 days, 7 days, or a custom period between 2 and 30 days.
    2. Once you have registered sources and triggers by running the sample app, and the configured time window has passed, verify that you have received an event-level report and an encrypted aggregatable report. If you need to debug reports, you can generate them more quickly by force-running the reporting jobs.
    3. Review the results for app-to-app attribution. Confirm that the data in these results is as expected for both last-touch and post-install cases.

  4. After you have a feel for how the client API and server work together, use the sample app as an example to guide your own integration. Set up your own production server and add event registration calls to your apps.
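If you want a quick local stand-in before wiring up a real server, the sketch below shows one possible shape for mock registration endpoints (see step 2). The port, paths, and JSON payloads are placeholder assumptions; the response header names follow the Attribution Reporting registration spec, but production registrations must come from an enrolled HTTPS reporting origin, so treat this only as a local mock.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

// Minimal local mock of an ad tech reporting server. It answers source and trigger
// registration requests with the Attribution-Reporting-* response headers that the
// API parses. All paths and JSON values here are illustrative placeholders.
fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)

    server.createContext("/source") { exchange ->
        exchange.responseHeaders.add(
            "Attribution-Reporting-Register-Source",
            """{"source_event_id": "123001", "destination": "android-app://com.example.advertiser", "expiry": "604800"}"""
        )
        exchange.sendResponseHeaders(200, -1) // headers only, no body needed
        exchange.close()
    }

    server.createContext("/trigger") { exchange ->
        exchange.responseHeaders.add(
            "Attribution-Reporting-Register-Trigger",
            """{"event_trigger_data": [{"trigger_data": "1", "priority": "100"}]}"""
        )
        exchange.sendResponseHeaders(200, -1)
        exchange.close()
    }

    server.start()
    println("Mock registration server listening on :8080")
}
```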

Pre-integration

Enroll your organization with the Privacy Sandbox on Android. Enrollment is designed to prevent an ad tech from unnecessarily registering as multiple platforms, which would grant access to more information about a user's activity than necessary.

Partner engagement

Ad tech partners (MMPs, SSPs, DSPs) often create integrated attribution solutions. The steps in this section help you prepare for successful engagement with your ad tech partners.

  1. Schedule discussions with your top measurement partners about testing and adoption of the Attribution Reporting API. Measurement partners can include ad tech networks, SSPs, DSPs, advertisers, or any other partner that you currently work with or would like to work with.
  2. Collaborate with your measurement partners to define timelines for integration, from initial testing to adoption.
  3. Clarify with your measurement partners which areas each of you will cover in attribution design.
  4. Establish channels of communication between measurement partners to sync on timelines and end-to-end testing.
  5. Design high-level data flows across measurement partners. Key considerations include the following:
    • How will measurement partners register attribution sources with the Attribution Reporting API?
    • How will ad tech networks register triggers with the Attribution Reporting API?
    • How will each ad tech validate API requests and return responses to complete source and trigger registrations?
    • Are there any reports that need to be shared across partners outside of the Attribution Reporting API?
    • Are there any other integration points or alignment needed across partners? For example, do you and your partners need to work on deduplicating conversions, or align on aggregation keys?
  6. If app-to-web attribution is applicable, schedule discussions with your web measurement partners about design, testing, and adoption of the Attribution Reporting API. Refer to the questions in the previous step as you begin conversations with web partners.

Prototype app-to-app event-level attribution

This section helps you set up basic app-to-app attribution with event-level reports in your app or SDK. Completion of this section is required before you can begin prototyping aggregation server attribution.

  1. Set up a collection server for event records. You can do this by using the provided spec to generate a mock server, or by setting up your own server with the sample server code.
  2. Add source registration calls to your SDK or app when ads are shown (see the registration sketch after this list).
    • Critical considerations include the following:
      • Ensure that source event IDs are available and passed correctly to the source registration API calls.
      • Make sure you can also pass in an `InputEvent` to register click sources.
      • Determine how you will configure source priority for different types of events. For example, assign higher priority to events that you consider high value, such as clicks over views.
      • The default value for expiry is OK for testing. Alternatively, different expiration windows can be configured.
      • Filters and attribution windows can be left as defaults for testing.
    • Optional considerations include the following:
      • Design aggregation keys if you are ready for them.
      • Consider your redirect strategy when you establish how you want to work with other measurement partners.
  3. Add trigger registration calls to your SDK or app to record conversion events.
    • Critical considerations include the following:
      • Define trigger data with the limited fidelity in mind: how will you map the conversion types your advertisers need onto the 3 bits available for clicks and the 1 bit available for views?
      • Plan for the limits on available triggers in event reports: how will you work within the cap on the total number of conversions per source that event reports can include?
    • Optional considerations include the following:
      • Skip creating deduplication keys until you are doing accuracy tests.
      • Skip creating aggregation keys and values until simulation testing support is ready.
      • Skip redirects until you establish how you want to work with other measurement partners.
      • Trigger priority is not essential for testing.
      • Filters can likely be ignored for initial testing.
  4. Test that source events are being generated for ads, and that triggers are leading to the creation of event reports.
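For reference when adding these calls, here is a hedged sketch of what source and trigger registration can look like with MeasurementManager from the AdServices APIs. The registration URIs, executor, and callback handling are placeholders you would replace with your own; confirm the method signatures against the current API reference for the release you are targeting.

```kotlin
import android.adservices.measurement.MeasurementManager
import android.content.Context
import android.net.Uri
import android.os.OutcomeReceiver
import android.view.InputEvent
import java.util.concurrent.Executors

// Placeholder registration endpoints; replace with your enrolled reporting origin.
// The server's response headers (Attribution-Reporting-Register-Source / -Trigger)
// carry the actual source_event_id, trigger_data, priority, and other metadata.
private val SOURCE_URI = Uri.parse("https://adtech.example/source?ad_id=123")
private val TRIGGER_URI = Uri.parse("https://adtech.example/trigger?conversion=purchase")

private val executor = Executors.newSingleThreadExecutor()

// Call when an ad is shown or clicked. Passing the click's InputEvent registers a
// click source; passing null registers a view source.
fun registerAdSource(context: Context, clickEvent: InputEvent?) {
    val manager = context.getSystemService(MeasurementManager::class.java) ?: return
    manager.registerSource(
        SOURCE_URI,
        clickEvent,
        executor,
        object : OutcomeReceiver<Any, Exception> {
            override fun onResult(result: Any) { /* registration accepted */ }
            override fun onError(error: Exception) { /* log and apply your retry policy */ }
        }
    )
}

// Call when a conversion happens in the advertiser app.
fun registerConversion(context: Context) {
    val manager = context.getSystemService(MeasurementManager::class.java) ?: return
    manager.registerTrigger(
        TRIGGER_URI,
        executor,
        object : OutcomeReceiver<Any, Exception> {
            override fun onResult(result: Any) { /* registration accepted */ }
            override fun onError(error: Exception) { /* handle failure */ }
        }
    )
}
```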

Simulation testing

This section walks you through testing the likely impact on your reporting and optimization systems of moving your current conversions to event and aggregatable reports. This lets you start impact testing before you finish your integration.

Testing is done by simulating the generation of event and aggregatable reports based on historical conversion records you have, and then getting the aggregated results from a simulated aggregation server. These results can be compared with historical conversion numbers to see how reporting accuracy would change.

Optimization models, such as predicted conversion rate models, can be trained on these reports, and their accuracy can be compared with that of models built on current data. This is also a chance to experiment with different aggregation key structures and their impact on results.

  1. Set up the Measurement Simulation Library on a local machine.
  2. Read the spec on how your conversion data must be formatted to be compatible with the simulated report generator.
  3. Design your aggregation keys based on business requirements.
    • Critical considerations include the following:
      • Consider the critical dimensions your clients or partners need to aggregate and focus your evaluation on those.
      • Determine the minimum number of aggregate dimensions and cardinalities needed for your requirements.
      • Ensure that source- and trigger-side key pieces don't exceed 128 bits.
      • If your solutions involve contributing to multiple values per trigger event, be sure to scale the values against the maximum contribution budget, L1. This will help minimize the impact of noise.
      • For an example that sets one key to collect aggregate conversion counts at the campaign level and another to collect aggregate purchase values at the geo level, see the sketch after this list.
  4. Run the report generator to create event and aggregatable reports.
  5. Run the aggregatable reports through the simulated aggregation servers to get summary reports.
  6. Perform utility experiments:
    • Compare conversion totals from event-level and summary reports with historical conversion data to determine conversion reporting accuracy. For best results, run the reporting tests and comparisons on a broad, representative portion of the advertiser base.
    • Retrain your models based on event-level report data, and potentially summary report data. Compare accuracy with models built on historical training data.
    • Try different batching strategies and see how they impact your results.
      • Critical considerations include the following:
        • Timeliness of summary reports for adjusting bids.
        • Average frequency of attributable events on the device; for example, lapsed users returning, based on historical purchase event data.
        • Noise level: more batches mean smaller aggregates, and smaller aggregates mean relatively more noise is applied.
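The sketch below is one possible shape for the campaign-level count key and geo-level purchase-value key mentioned in step 3. The field names (aggregation_keys, aggregatable_trigger_data, aggregatable_values) and the 65,536 L1 contribution budget follow the published registration spec, but the specific key pieces, scale factors, and values are illustrative assumptions only.

```kotlin
// Source-side registration JSON (Attribution-Reporting-Register-Source header).
// Each named key piece contributes the source-side bits of a 128-bit aggregation key:
// here, hypothetical campaign ID 345 (0x159) and geo region 5 (0x5).
val sourceAggregationKeys = """
{
  "aggregation_keys": {
    "campaignCounts": "0x159",
    "geoValue": "0x5"
  }
}
""".trimIndent()

// Trigger-side registration JSON (Attribution-Reporting-Register-Trigger header).
// The trigger ORs in its own key pieces and supplies the values to aggregate.
// Values are pre-scaled so both metrics share the 65,536 L1 contribution budget:
// for example, one conversion count scaled to 32,768, and a $52 purchase scaled by 32 to 1,664.
val triggerAggregationData = """
{
  "aggregatable_trigger_data": [
    { "key_piece": "0x400", "source_keys": ["campaignCounts"] },
    { "key_piece": "0xA80", "source_keys": ["geoValue"] }
  ],
  "aggregatable_values": {
    "campaignCounts": 32768,
    "geoValue": 1664
  }
}
""".trimIndent()
```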

Prototype aggregation server attribution: Setup

These steps will ensure you are able to receive aggregatable reports of your source and trigger events.

  1. Set up your aggregation server.
  2. Design your aggregation keys based on business requirements. If you have already completed this task in the app-to-app event-level section, you may skip this step.
  3. Set up a collection server for aggregatable reports. If you have already created one in the app-to-app event-level section, you may reuse it.

Prototype aggregation server attribution: Integration

To proceed past this point, you must have completed the Prototype aggregation server attribution: Setup section or the Prototype app-to-app event-level attribution section.

  1. Add aggregation key data to your source and trigger events. This will likely require passing more data about the ad event, such as the campaign ID, into your SDK or app to include in the aggregation key.
  2. Collect app-to-app aggregatable reports from the source and trigger events that you registered with aggregation key data.
  3. Test different batching strategies as you run these aggregatable reports through the aggregation server, and see how they impact your results.

Iterate design with optional features

The following are additional features that you can include in your measurement solution.

  1. Setting a debug key will allow you to receive an unaltered report of a source or trigger event along with the reports generated by the Attribution Reporting API. You can use debug keys to compare reports and find bugs during integration.
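As a rough illustration, a debug key is a 64-bit value included in the registration response; the sketch below shows where it might sit in a source registration, with the surrounding fields and the key value as placeholders. Trigger registrations can carry a debug_key in the same way, and receiving debug reports may also depend on ad ID availability on the device.

```kotlin
// Hypothetical Attribution-Reporting-Register-Source payload with a debug key.
// Matching debug keys on the source and trigger let you join the unaltered debug
// reports with the noised reports produced by the API during integration testing.
val sourceWithDebugKey = """
{
  "source_event_id": "123001",
  "destination": "android-app://com.example.advertiser",
  "debug_key": "987654321"
}
""".trimIndent()
```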

Customize attribution behaviors

  1. Attribution for post install triggers
    • This feature can be used in the case where post-install triggers need to be attributed to the same attribution source that drove the install, even if there are other eligible attribution sources that occurred more recently.
    • For example, there may be a case where a user clicks an ad that drives an install. After the app is installed, the user clicks another ad and makes a purchase. In this case, the ad tech company may want the purchase to be attributed to the first click rather than the re-engagement click.
  2. Use filters to fine-tune the data in your event-level reports
    • Conversion filters can be set to ignore selected triggers and exclude them from event reports. Because there are limits on the number of triggers per attribution source, the filters allow you to only include the triggers that provide the most useful information in your event reports.
    • Filters can also be used to selectively filter out some triggers, effectively ignoring them. For example, if you have a campaign targeting app installs, you may want to filter out post-install triggers from being attributed to sources from that campaign.
    • Filters can also be used to customize trigger data based on source data. For example, a source can specify "product": ["1234"], where product is the filter key and 1234 is the value. Any trigger with a filter key of "product" that has a value other than "1234" is ignored (see the sketch after this list).
  3. Customized source and trigger priority
    • In the case that multiple attribution sources can be associated with a trigger, or multiple triggers can be attributed to a source, you can use a signed 64-bit integer to prioritize certain source/trigger attributions over others.
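The sketch below shows one way the product filter example above could look in registrations, assuming the filter_data field on the source and the filters field on trigger data from the registration spec; the IDs and values are placeholders.

```kotlin
// Source side: declare the dimensions this source can be filtered on.
val sourceWithFilterData = """
{
  "source_event_id": "123001",
  "destination": "android-app://com.example.advertiser",
  "filter_data": { "product": ["1234"] }
}
""".trimIndent()

// Trigger side: this trigger data only applies when the attributed source's
// "product" filter data contains "1234"; otherwise it is ignored.
val triggerWithFilters = """
{
  "event_trigger_data": [
    {
      "trigger_data": "2",
      "filters": { "product": ["1234"] }
    }
  ]
}
""".trimIndent()
```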

Working with MMPs and others

  1. Redirects to other third parties for source and trigger events
    • You can set redirect URLs to allow multiple ad tech platforms to register a request. This can be used to enable cross-network deduplication in attribution (see the sketch after this list).
  2. Deduplication keys
    • When an advertiser uses multiple ad tech platforms to register the same trigger event, a deduplication key can be used to disambiguate these repeated reports. If no deduplication key is provided, duplicate triggers may be reported back to each ad tech platform as unique.
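The sketch below shows one plausible shape for both features, assuming the Attribution-Reporting-Redirect response header (its exact value format should be confirmed against the current spec) and the deduplication_key field on event trigger data; the partner URLs and key values are placeholders.

```kotlin
// Redirects: alongside your own registration header, the response can name additional
// reporting origins that should also receive this registration.
// (Assumed header name and list format: Attribution-Reporting-Redirect.)
val redirectHeader = "Attribution-Reporting-Redirect" to
    """["https://partner-mmp.example/trigger", "https://partner-dsp.example/trigger"]"""

// Deduplication: triggers that share a deduplication_key for the same attributed source
// are counted once instead of being reported as separate conversions.
val triggerWithDedupKey = """
{
  "event_trigger_data": [
    {
      "trigger_data": "1",
      "deduplication_key": "98765"
    }
  ]
}
""".trimIndent()
```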

Working with cross-platform measurement

  1. Cross app and web attribution (available in late Q4)
    • Supports use cases where a user sees an ad in an app and then converts in a mobile or in-app browser, or vice versa.