Android 6.0 (M) offers new features for users and app developers. This document provides an introduction to the most notable APIs.
To better optimize your app for devices running Android 6.0, set your targetSdkVersion to
"23", install your app on an Android 6.0
system image, test it, then publish the updated app with this change.
You can use Android 6.0 APIs while also supporting older
versions by adding conditions to your code that check for the system API level
before executing APIs not supported by your minSdkVersion.
To learn more about maintaining backward compatibility, read Supporting
Different Platform Versions.
For more information about how API levels work, read What is API Level?
This release offers new APIs to let you authenticate users by using their fingerprint scans on supported devices. Use these APIs in conjunction with the Android Keystore system.
To authenticate users via fingerprint scan, get an instance of the new
FingerprintManager class and call the
authenticate() method. Your app must be running on a compatible
device with a fingerprint sensor. You must implement the user interface for the fingerprint
authentication flow in your app, and use the standard Android fingerprint icon in your UI.
The Android fingerprint icon (ic_fp_40px.png) is included in the
Fingerprint Dialog sample. If you are
developing multiple apps that use fingerprint authentication, note that each app must authenticate
the user’s fingerprint independently.
To use this feature in your app, first add the
USE_FINGERPRINT permission in your manifest.
<uses-permission android:name="android.permission.USE_FINGERPRINT" />
To see an app implementation of fingerprint authentication, refer to the Fingerprint Dialog sample. For a demonstration of how you can use these authentication APIs in conjunction with other Android APIs, see the video Fingerprint and Payment APIs.
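The authentication call described above might look like the following minimal sketch. The CryptoObject setup and the callback body are placeholders for your own code:

```java
// Minimal sketch of a fingerprint authentication call (API level 23).
// cryptoObject is a FingerprintManager.CryptoObject you have already set up.
FingerprintManager fingerprintManager =
        context.getSystemService(FingerprintManager.class);
if (fingerprintManager.isHardwareDetected()
        && fingerprintManager.hasEnrolledFingerprints()) {
    CancellationSignal cancel = new CancellationSignal();
    fingerprintManager.authenticate(cryptoObject, cancel, 0 /* flags */,
            new FingerprintManager.AuthenticationCallback() {
                @Override
                public void onAuthenticationSucceeded(
                        FingerprintManager.AuthenticationResult result) {
                    // Use the cipher in result.getCryptoObject() and proceed.
                }

                @Override
                public void onAuthenticationFailed() {
                    // A fingerprint was read but did not match; update your UI.
                }
            }, null /* handler */);
}
```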
To test this feature on an emulator, emulate fingerprint touch events with the following command:
adb -e emu finger touch <finger_id>
On Windows, you may have to run
telnet 127.0.0.1 <emulator-id> followed by
finger touch <finger_id>.
Your app can authenticate users based on how recently they last unlocked their device. This feature frees users from having to remember additional app-specific passwords, and avoids the need for you to implement your own authentication user interface. Your app should use this feature in conjunction with a public or secret key implementation for user authentication.
To set the timeout duration for which the same key can be re-used after a user is successfully
authenticated, call the new
setUserAuthenticationValidityDurationSeconds() method when you set up a
KeyGenerator or KeyPairGenerator.
Avoid showing the re-authentication dialog excessively -- your apps should try using the
cryptographic object first and, if the timeout expires, use the
createConfirmDeviceCredentialIntent() method to re-authenticate the user within your app.
To see an app implementation of this feature, refer to the Confirm Credential sample.
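The flow described above might be sketched as follows. The prompt strings and the request code constant are placeholders:

```java
// Sketch: fall back to device-credential confirmation when the key's
// user-authentication timeout has expired (API level 23).
// REQUEST_CODE_CONFIRM_CREDENTIALS is an arbitrary request code you define.
KeyguardManager keyguardManager =
        (KeyguardManager) getSystemService(Context.KEYGUARD_SERVICE);
try {
    cipher.doFinal(data);  // try the time-bound key first
} catch (UserNotAuthenticatedException e) {
    // Timeout expired: ask the user to confirm their lock-screen credential.
    Intent intent = keyguardManager.createConfirmDeviceCredentialIntent(
            "Confirm your screen lock", "Re-authenticate to continue");
    if (intent != null) {
        startActivityForResult(intent, REQUEST_CODE_CONFIRM_CREDENTIALS);
    }
}
```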
This release enhances Android’s intent system by providing more powerful app linking. This feature allows you to associate an app with a web domain you own. Based on this association, the platform can determine the default app to use to handle a particular web link and skip prompting users to select an app. To learn how to implement this feature, see Handling App Links.
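An app link declaration might look like the following manifest sketch; the activity name and domain are hypothetical, and verification additionally requires a Digital Asset Links file hosted on that domain:

```xml
<activity android:name=".LinkActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="https" android:host="www.example.com" />
    </intent-filter>
</activity>
```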
The system now performs automatic full data backup and restore for apps. Your app must target Android 6.0 (API level 23) to enable this behavior; you do not need to add any additional code. If users delete their Google accounts, their backup data is deleted as well. To learn how this feature works and how to configure what to back up on the file system, see Configuring Auto Backup for Apps.
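If you need to exclude files from backup, you can point the manifest's android:fullBackupContent attribute at an XML resource such as the following sketch; the database name is hypothetical:

```xml
<!-- res/xml/backup_rules.xml, referenced from the manifest via
     <application android:fullBackupContent="@xml/backup_rules"> -->
<full-backup-content>
    <exclude domain="database" path="cache.db" />
</full-backup-content>
```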
This release provides you with APIs to make sharing intuitive and quick for users. You can now define direct share targets that launch a specific activity in your app. These direct share targets are exposed to users via the Share menu. This feature allows users to share content to targets, such as contacts, within other apps. For example, the direct share target might launch an activity in another social network app, which lets the user share content directly to a specific friend or community in that app.
To enable direct share targets, you must define a class that extends the
ChooserTargetService class. Declare your
service in the manifest. Within that declaration, specify the
BIND_CHOOSER_TARGET_SERVICE permission and an
intent filter specifying the SERVICE_INTERFACE action.
The following example shows how you might declare the
ChooserTargetService in your manifest.
<service android:name=".ChooserTargetService"
        android:label="@string/service_name"
        android:permission="android.permission.BIND_CHOOSER_TARGET_SERVICE">
    <intent-filter>
        <action android:name="android.service.chooser.ChooserTargetService" />
    </intent-filter>
</service>
For each activity that you want to expose to
ChooserTargetService, add a
<meta-data> element with the name
"android.service.chooser.chooser_target_service" in your app manifest.
<activity android:name=".MyShareActivity"
        android:label="@string/share_activity_label">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
    </intent-filter>
    <meta-data
        android:name="android.service.chooser.chooser_target_service"
        android:value=".ChooserTargetService" />
</activity>
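The service itself might be sketched as follows. The target title, icon resource, and intent extras are placeholders for your own data:

```java
// Sketch of a ChooserTargetService exposing one direct share target.
public class ChooserTargetService extends android.service.chooser.ChooserTargetService {
    @Override
    public List<ChooserTarget> onGetChooserTargets(
            ComponentName targetActivityName, IntentFilter matchedFilter) {
        ComponentName target = new ComponentName(getPackageName(),
                MyShareActivity.class.getCanonicalName());
        Bundle extras = new Bundle();
        extras.putString("contact_id", "alice");  // hypothetical extra

        List<ChooserTarget> targets = new ArrayList<>();
        targets.add(new ChooserTarget(
                "Alice",                                               // title
                Icon.createWithResource(this, R.drawable.ic_contact),  // icon
                1.0f,                                                  // ranking score
                target,                                                // activity to launch
                extras));                                              // merged into the share intent
        return targets;
    }
}
```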
This release provides a new voice interaction API which, together with Voice Actions,
allows you to build conversational voice experiences into your apps. Call the
isVoiceInteraction() method to determine if a voice action triggered
your activity. If so, your app can use the
VoiceInteractor class to request a voice confirmation from the user, select
from a list of options, and more.
Most voice interactions originate from a user voice action. A voice interaction activity can
also, however, start without user input. For example, another app launched through a voice
interaction can also send an intent to launch a voice interaction. To determine if your activity
launched from a user voice query or from another voice interaction app, call the
isVoiceInteractionRoot() method. If another app launched your
activity, the method returns
false. Your app may then prompt the user to confirm that
they intended this action.
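A confirmation request might be sketched as follows; the prompt text is a placeholder:

```java
// Sketch: confirming a voice action with VoiceInteractor (API level 23).
if (isVoiceInteraction()) {
    getVoiceInteractor().submitRequest(
            new VoiceInteractor.ConfirmationRequest(
                    new VoiceInteractor.Prompt("Send the message now?"),
                    null /* extras */) {
                @Override
                public void onConfirmationResult(boolean confirmed, Bundle result) {
                    if (confirmed) {
                        // Carry out the action, then finish the activity.
                    }
                }
            });
}
```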
To learn more about implementing voice actions, see the Voice Actions developer site.
This release offers a new way for users to engage with your apps through an assistant. To use this feature, the user must enable the assistant to use the current context. Once enabled, the user can summon the assistant within any app, by long-pressing on the Home button.
Your app can elect to not share the current context with the assistant by setting the
FLAG_SECURE flag. In addition to the
standard set of information that the platform passes to the assistant, your app can share
additional information by using the new Assist API. To provide the assistant with additional
context from your app, override the
onProvideAssistData() callback and, optionally, the new onProvideAssistContent() callback.
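Overriding these callbacks in an activity might look like the following sketch; the bundle key and web URI are hypothetical:

```java
// Sketch: supplying extra context to the assistant from an activity.
@Override
public void onProvideAssistData(Bundle data) {
    super.onProvideAssistData(data);
    data.putString("com.example.myapp.CURRENT_ITEM", "item-42");  // hypothetical key
}

@Override
public void onProvideAssistContent(AssistContent outContent) {
    super.onProvideAssistContent(outContent);
    // Point the assistant at a web page representing the current screen.
    outContent.setWebUri(Uri.parse("https://www.example.com/items/42"));
}
```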
With this release, users can adopt external storage devices such as SD cards. Adopting an
external storage device encrypts and formats the device to behave like internal storage. This
feature allows users to move both apps and private data of those apps between storage devices. When
moving apps, the system respects the android:installLocation preference in the manifest.
If your app accesses the following APIs or fields, be aware that the file paths they return will dynamically change when the app is moved between internal and external storage devices. When building file paths, it is strongly recommended that you always call these APIs dynamically. Don’t use hardcoded file paths or persist fully-qualified file paths that were built previously.
To debug this feature, you can enable adoption of a USB drive that is connected to an Android device through a USB On-The-Go (OTG) cable, by running this command:
$ adb shell sm set-force-adoptable true
This release adds the following API changes for notifications:
- New INTERRUPTION_FILTER_ALARMS filter level that corresponds to the new Alarms only do not disturb mode.
- New CATEGORY_REMINDER category value that is used to distinguish user-scheduled reminders from other events (CATEGORY_EVENT) and alarms (CATEGORY_ALARM).
- New Icon class that you can attach to your notifications via the setSmallIcon() and setLargeIcon() methods. Similarly, the addAction() method now accepts an Icon object instead of a drawable resource ID.
- New getActiveNotifications() method that allows your apps to find out which of their notifications are currently alive. To see an app implementation that uses this feature, see the Active Notifications sample.
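The Icon-based builder calls might be sketched as follows; the drawable resource and bitmap are placeholders:

```java
// Sketch: building a notification with the new Icon class (API level 23).
Notification notification = new Notification.Builder(context)
        .setSmallIcon(Icon.createWithResource(context, R.drawable.ic_small))
        .setLargeIcon(Icon.createWithBitmap(largeBitmap))
        .setContentTitle("Reminder")
        .setCategory(Notification.CATEGORY_REMINDER)
        .build();

// Later, find out which of your notifications are still alive.
NotificationManager nm = context.getSystemService(NotificationManager.class);
StatusBarNotification[] active = nm.getActiveNotifications();
```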
This release provides improved support for user input using a Bluetooth stylus. Users can pair
and connect a compatible Bluetooth stylus with their phone or tablet. While connected, position
information from the touch screen is fused with pressure and button information from the stylus to
provide a greater range of expression than with the touch screen alone. Your app can listen for
stylus button presses and perform secondary actions, by registering
View.OnContextClickListener and GestureDetector.OnContextClickListener objects in your activity.
To detect stylus button interactions, use the new MotionEvent methods and constants: the
getButtonState() method returns
BUTTON_STYLUS_PRIMARY when the user presses the primary stylus button. If the stylus has a second button, the same method returns
BUTTON_STYLUS_SECONDARY when the user presses it. If the user presses both buttons simultaneously, the method returns both values OR'ed together (BUTTON_STYLUS_PRIMARY|BUTTON_STYLUS_SECONDARY). On a device running an older platform version, the method returns
BUTTON_SECONDARY (for a primary stylus button press), BUTTON_TERTIARY (for a secondary stylus button press), or both.
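Listening for the primary stylus button and checking button state might be sketched as:

```java
// Sketch: reacting to stylus button presses (API level 23).
view.setOnContextClickListener(new View.OnContextClickListener() {
    @Override
    public boolean onContextClick(View v) {
        // Primary stylus button press: perform a secondary action.
        return true;
    }
});

// Inspecting button state directly from a MotionEvent:
boolean primaryPressed =
        (event.getButtonState() & MotionEvent.BUTTON_STYLUS_PRIMARY) != 0;
boolean secondaryPressed =
        (event.getButtonState() & MotionEvent.BUTTON_STYLUS_SECONDARY) != 0;
```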
If your app performs Bluetooth Low Energy scans, use the new
setCallbackType() method to specify that you want the system to notify callbacks when it first finds, or sees after a
long time, an advertisement packet matching the set ScanFilter. This
approach to scanning is more power-efficient than what’s provided in the previous platform version.
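Configuring such a scan might look like this sketch; scanCallback is your existing ScanCallback, and the service UUID filter is a placeholder:

```java
// Sketch: power-efficient BLE scanning with first-match/match-lost callbacks.
ScanSettings settings = new ScanSettings.Builder()
        .setScanMode(ScanSettings.SCAN_MODE_LOW_POWER)
        .setCallbackType(ScanSettings.CALLBACK_TYPE_FIRST_MATCH
                | ScanSettings.CALLBACK_TYPE_MATCH_LOST)
        .setMatchMode(ScanSettings.MATCH_MODE_STICKY)
        .build();

List<ScanFilter> filters = Collections.singletonList(
        new ScanFilter.Builder()
                .setServiceUuid(ParcelUuid.fromString(
                        "0000180d-0000-1000-8000-00805f9b34fb"))  // placeholder UUID
                .build());

bluetoothLeScanner.startScan(filters, settings, scanCallback);
```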
This release adds support for the Hotspot 2.0 Release 1 spec on Nexus 6 and Nexus 9 devices. To
provision Hotspot 2.0 credentials in your app, use the new methods of the
WifiEnterpriseConfig class, such as
setRealm(). In the
WifiConfiguration object, you can set the
FQDN and the providerFriendlyName fields. The new
isPasspointNetwork() method indicates if a detected
network represents a Hotspot 2.0 access point.
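Setting these fields might be sketched as follows; the realm, FQDN, and friendly name are placeholders:

```java
// Sketch: provisioning Hotspot 2.0 (Passpoint) credentials (API level 23).
WifiEnterpriseConfig enterpriseConfig = new WifiEnterpriseConfig();
enterpriseConfig.setRealm("example.com");  // placeholder realm

WifiConfiguration config = new WifiConfiguration();
config.FQDN = "hotspot.example.com";            // placeholder FQDN
config.providerFriendlyName = "Example Provider";
config.enterpriseConfig = enterpriseConfig;

if (config.isPasspointNetwork()) {
    // Treat this configuration as a Hotspot 2.0 access point.
}
```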
The platform now allows apps to request that the display resolution be upgraded to 4K rendering
on compatible hardware. To query the current physical resolution, use the new
Display.Mode APIs. If the UI is drawn at a lower logical resolution and is
upscaled to a larger physical resolution, be aware that the physical resolution the
getPhysicalWidth() method returns may differ from the logical
resolution reported by getSize().
You can request the system to change the physical resolution in your app as it runs, by setting
the WindowManager.LayoutParams.preferredDisplayModeId property of your app’s
window. This feature is useful if you want to switch to 4K display resolution. While in 4K display
mode, the UI continues to be rendered at the original resolution (such as 1080p) and is upscaled to
4K, but SurfaceView objects may show content at the native resolution.
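Selecting a 4K mode at runtime might look like this sketch, run from an activity:

```java
// Sketch: requesting a specific display mode, such as a 4K mode, at runtime.
Display display = getWindowManager().getDefaultDisplay();
for (Display.Mode mode : display.getSupportedModes()) {
    if (mode.getPhysicalWidth() == 3840 && mode.getPhysicalHeight() == 2160) {
        WindowManager.LayoutParams params = getWindow().getAttributes();
        params.preferredDisplayModeId = mode.getModeId();
        getWindow().setAttributes(params);
        break;
    }
}
```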
Theme attributes are now supported in
ColorStateList for devices running on Android 6.0 (API level 23). The
Resources.getColor() and Resources.getColorStateList() methods have been
deprecated. If you are calling these APIs, call the new
Context.getColor() and Context.getColorStateList() methods instead. These methods are
also available in the v4 appcompat library via ContextCompat.
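The migration might be sketched as follows; the color resource name is a placeholder:

```java
// Sketch: theme-aware color lookup on API level 23, with the support-library
// fallback for older devices.
int accent;
if (Build.VERSION.SDK_INT >= 23) {
    accent = context.getColor(R.color.accent);                 // replaces Resources.getColor()
} else {
    accent = ContextCompat.getColor(context, R.color.accent);  // support v4 helper
}
```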
This release adds enhancements to audio processing on Android, including:
- New android.media.midi APIs. Use these APIs to send and receive MIDI events.
- New AudioRecord.Builder and AudioTrack.Builder classes to create digital audio capture and playback objects respectively, and configure audio source and sink properties to override the system defaults.
- New APIs for associating audio and input devices. The system invokes the onSearchRequested() callback when the user starts a search. To determine if the user's input device has a built-in microphone, retrieve the InputDevice object from that callback, then call the new hasMicrophone() method.
- New getDevices() method which lets you retrieve a list of all audio devices currently connected to the system. You can also register an AudioDeviceCallback object if you want the system to notify your app when an audio device connects or disconnects.
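Enumerating devices and listening for changes might be sketched as:

```java
// Sketch: enumerating connected audio output devices (API level 23).
AudioManager audioManager = context.getSystemService(AudioManager.class);
for (AudioDeviceInfo device
        : audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
    if (device.getType() == AudioDeviceInfo.TYPE_BLUETOOTH_A2DP) {
        // A Bluetooth audio sink is connected.
    }
}

audioManager.registerAudioDeviceCallback(new AudioDeviceCallback() {
    @Override
    public void onAudioDevicesAdded(AudioDeviceInfo[] addedDevices) {
        // An audio device was connected.
    }
}, null /* handler */);
```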
This release adds new capabilities to the video processing APIs, including:
- New MediaSync class which helps applications to synchronously render audio and video streams. The audio buffers are submitted in non-blocking fashion and are returned via a callback. It also supports dynamic playback rate.
- New EVENT_SESSION_RECLAIMED event, which indicates that a session opened by the app has been reclaimed by the resource manager. If your app uses DRM sessions, you should handle this event and make sure not to use a reclaimed session.
- New ERROR_RECLAIMED error code, which indicates that the resource manager reclaimed the media resource used by the codec. With this exception, the codec must be released, as it has moved to a terminal state.
- New getMaxSupportedInstances() interface to get a hint for the maximum number of supported concurrent codec instances.
- New setPlaybackParams() method to set the media playback rate for fast or slow motion playback. It also stretches or speeds up the audio playback automatically in conjunction with the video.
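Using the new playback-rate API might be sketched as follows; mediaPlayer is an already-prepared MediaPlayer:

```java
// Sketch: slow-motion playback with the new setPlaybackParams() method.
PlaybackParams params = new PlaybackParams();
params.setSpeed(0.5f);  // half speed; audio is time-stretched to match
mediaPlayer.setPlaybackParams(params);
mediaPlayer.start();
```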
This release includes the following new APIs for accessing the camera’s flashlight and for camera reprocessing of images:
If a camera device has a flash unit, you can call the setTorchMode()
method to switch the flash unit’s torch mode on or off without opening the camera device. The app
does not have exclusive ownership of the flash unit or the camera device. The torch mode is turned
off and becomes unavailable whenever the camera device becomes unavailable, or when other camera
resources keeping the torch on become unavailable. Other apps can also call setTorchMode()
to turn off the torch mode. When the last app that turned on the torch mode is closed, the torch
mode is turned off.
You can register a callback to be notified about torch mode status by calling the
registerTorchCallback()
method. The first time the callback is registered, it is immediately called with the torch mode
status of all currently known camera devices with a flash unit. If the torch mode is turned on or
off successfully, the onTorchModeChanged()
method is invoked.
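The torch workflow might be sketched as follows, finding the first camera with a flash unit:

```java
// Sketch: toggling the torch without opening the camera (API level 23).
CameraManager cameraManager = context.getSystemService(CameraManager.class);
cameraManager.registerTorchCallback(new CameraManager.TorchCallback() {
    @Override
    public void onTorchModeChanged(String cameraId, boolean enabled) {
        // Update your flashlight UI to reflect the new state.
    }
}, null /* handler */);

try {
    for (String id : cameraManager.getCameraIdList()) {
        CameraCharacteristics chars = cameraManager.getCameraCharacteristics(id);
        if (Boolean.TRUE.equals(chars.get(CameraCharacteristics.FLASH_INFO_AVAILABLE))) {
            cameraManager.setTorchMode(id, true);  // turn the torch on
            break;
        }
    }
} catch (CameraAccessException e) {
    // The camera subsystem is unavailable.
}
```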
The Camera2 API is extended to support YUV and private
opaque format image reprocessing. To determine if these reprocessing capabilities are available,
call getCameraCharacteristics() and check for the
REPROCESS_MAX_CAPTURE_STALL key. If a
device supports reprocessing, you can create a reprocessable camera capture session by calling
createReprocessableCaptureSession(), and create requests for input buffer reprocessing.
Use the ImageWriter class to connect the input buffer flow to the camera
reprocessing input. To get an empty buffer, follow this programming model:
1. Call the dequeueInputImage() method.
2. Fill the data into the input buffer.
3. Send the buffer to the camera by calling the queueInputImage() method.
If you are using an
ImageWriter object together with an
ImageFormat.PRIVATE image, your app cannot access the image
data directly. Instead, pass the
PRIVATE image directly to the
ImageWriter by calling the queueInputImage() method
without any buffer copy.
The ImageReader class now supports
ImageFormat.PRIVATE format image streams. This support allows your app to
maintain a circular image queue of
ImageReader output images, select one or
more images, and send them to the
ImageWriter for camera reprocessing.
This release includes the following new APIs for Android for Work:
- Silent install and uninstall of apps by Device Owner: A Device Owner can now silently install and uninstall applications using the PackageInstaller APIs, independent of Google Play for Work. You can now provision devices through a Device Owner that fetches and installs apps without user interaction. This feature is useful for enabling one-touch provisioning of kiosks or other such devices without activating a Google account.
- Silent enterprise certificate access: When an app calls choosePrivateKeyAlias(), prior to the user being prompted to select a certificate, the Profile or Device Owner can now call the onChoosePrivateKeyAlias() method to provide the alias silently to the requesting application. This feature lets you grant managed apps access to certificates without user interaction.
- Auto-acceptance of system updates: By setting a system update policy with setSystemUpdatePolicy(), a Device Owner can now auto-accept a system update, for instance in the case of a kiosk device, or postpone the update and prevent it being taken by the user for up to 30 days. Furthermore, an administrator can set a daily time window in which an update must be taken, for example during the hours when a kiosk device is not in use. When a system update is available, the system checks if the device policy controller app has set a system update policy, and behaves accordingly.
- Delegated certificate installation: A Profile or Device Owner can now grant a third-party app the ability to call the DevicePolicyManager certificate management APIs.
- Data usage tracking: A Profile or Device Owner can now query data usage statistics using the new NetworkStatsManager methods. Profile Owners are automatically granted permission to query data on the profile they manage, while Device Owners get access to usage data of the managed primary user.
- Runtime permission management: A Profile or Device Owner can set a permission policy for all runtime requests of all applications using setPermissionPolicy(), to either prompt the user to grant the permission or automatically grant or deny the permission silently. If the latter policy is set, the user cannot modify the selection made by the Profile or Device Owner within the app’s permissions screen in Settings.
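Setting a permission policy might be sketched as follows; adminComponent is the ComponentName of your DeviceAdminReceiver:

```java
// Sketch: a Device Owner auto-granting runtime permission requests (API level 23).
DevicePolicyManager dpm =
        (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
dpm.setPermissionPolicy(adminComponent,
        DevicePolicyManager.PERMISSION_POLICY_AUTO_GRANT);
```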