Machine learning (ML) lets you supercharge your app and add features to process images, sound, and text.

You can add machine learning features to your app, whether you are a seasoned developer or just getting started.

Unlock new user experiences by processing text, audio, and video in real time.
Perform inference locally, without sending user data to the cloud.
No network connection or cloud-hosted service required.
Reduce your cloud bill by running your ML features on-device.

Supercharge your Android app with Gemini

Gemini API
The Gemini API lets you run Gemini model inference on Google's servers. You can either call the API from your backend or integrate the Google AI SDK, a client SDK for Android, directly in your app.
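As an illustration, a minimal call through the Google AI client SDK for Android might look like the following Kotlin sketch. The model name and exact SDK surface here are assumptions; check the current SDK documentation before relying on them.

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Hedged sketch: assumes the Google AI client SDK artifact is on the
// classpath and that "gemini-pro" is an available model name.
suspend fun generateSummary(apiKey: String, text: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-pro", // assumed model name
        apiKey = apiKey
    )
    // Sends the prompt to Google's servers and returns the generated text.
    val response = model.generateContent("Summarize in one sentence: $text")
    return response.text
}
```

Because inference runs on Google's servers, this path needs a network connection and an API key, in contrast to the on-device options below.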
Android AICore
Android AICore is a new system capability, available starting with Android 14, that enables running foundation models, such as Gemini Nano, directly on-device.

Ready-to-use or custom ML?

ML Kit provides production-ready solutions to common problems and requires no ML expertise. Models are built-in and optimized for mobile. ML Kit is easy to use and lets you focus on feature development rather than model training and optimization.
If you need more control or want to deploy your own ML models, Android provides a custom ML stack built on top of TensorFlow Lite and Google Play services, covering the essentials needed to deploy high-performance ML features.

ML Kit SDKs: Ready-to-use, for common user flows

ML Kit provides access to production-ready ML models on-device. ML Kit APIs are optimized for mobile and don’t require ML expertise. Examples of ML Kit APIs include:
Detect if a picture has a face and how many faces are present, in real time and on-device.
Recognize text in Chinese, Devanagari, Japanese, Korean, or any Latin-script language.
Read data encoded in barcodes for the most popular linear and 2D (QR code) formats.
ML Kit offers more than ten vision and language APIs, including image labeling, pose detection, translation, and smart reply.
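For example, counting the faces in a bitmap with ML Kit's on-device face detection could be sketched like this. It assumes the `com.google.mlkit:face-detection` dependency is on the classpath; the callback shape is illustrative.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

// Hedged sketch: runs ML Kit face detection entirely on-device and reports
// the number of faces found via a callback.
fun countFaces(bitmap: Bitmap, onResult: (Int) -> Unit) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    FaceDetection.getClient()
        .process(image)
        .addOnSuccessListener { faces -> onResult(faces.size) }
        .addOnFailureListener { onResult(0) }
}
```

The detector returns asynchronously, so results arrive on a callback rather than inline; no image data leaves the device.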

Android’s custom ML stack: high-performance ML

The essentials for deploying high-performance, custom ML features in your Android app.

TensorFlow Lite for ML runtime: Use TensorFlow Lite via Google Play services, Android’s official ML inference runtime, to run high-performance ML inference in your app. Learn more

Hardware acceleration with TensorFlow Lite Delegates: Use TensorFlow Lite delegates distributed via Google Play services to run accelerated ML on specialized hardware such as GPUs, NPUs, or DSPs. This can help you deliver more fluid, lower-latency user experiences by accessing advanced on-device compute capabilities.

We currently support the GPU and NNAPI delegates, and we’re working with partners to provide access to their custom delegates via Google Play services to support advanced use cases. Learn more
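As a sketch, checking whether the GPU delegate is available through the Play services TFLite support library might look like this. The artifact and class names are assumptions based on the `play-services-tflite-gpu` module; verify them against the current documentation.

```kotlin
import android.content.Context
import com.google.android.gms.tflite.gpu.support.TfLiteGpu

// Hedged sketch: asynchronously asks Play services whether the GPU delegate
// is supported on this device before opting into GPU acceleration.
fun checkGpuAcceleration(context: Context, onChecked: (Boolean) -> Unit) {
    TfLiteGpu.isGpuDelegateAvailable(context)
        .addOnSuccessListener { available -> onChecked(available) }
        .addOnFailureListener { onChecked(false) }
}
```

Checking availability first lets you fall back to CPU inference on devices without a supported GPU driver.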

Enabled by Google Play services: Use Play services to access the TensorFlow Lite runtime and delegates. This ensures use of the latest stable versions while minimizing the impact on your app’s binary size. Learn more
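Putting this together, initializing the Play-services-provided runtime before creating an interpreter could be sketched as follows. Names follow the `play-services-tflite-java` artifact; treat the exact signatures as assumptions to verify against the current API reference.

```kotlin
import android.content.Context
import java.nio.ByteBuffer
import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime

// Hedged sketch: initialize the TFLite runtime shipped in Google Play
// services, then create an interpreter bound to that system runtime.
fun createInterpreter(
    context: Context,
    model: ByteBuffer, // your .tflite model, e.g. memory-mapped from assets
    onReady: (InterpreterApi) -> Unit
) {
    TfLite.initialize(context).addOnSuccessListener {
        val options = InterpreterApi.Options()
            .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
        onReady(InterpreterApi.create(model, options))
    }
}
```

Because the runtime ships inside Play services rather than inside your APK, initialization is asynchronous and the interpreter should only be created after it succeeds.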

Review the TensorFlow Lite Android code samples and test ML features on your device.
Download the example code and get started with TensorFlow Lite and Android.
TensorFlow Lite in Google Play services also includes a new API that allows you to safely pick the optimal hardware acceleration configuration at runtime, without having to worry about the underlying device hardware and drivers.

Latest news