Create smarter apps
On-device machine learning (ML) lets you supercharge your app with features that process images, sound, and text. Whether you are a seasoned developer or just getting started, you can add on-device ML features to your app.
Low latency: Inference runs directly on the device, with no server round trip.
Keep data on-device: User data can stay on the device, which helps protect privacy.
Works offline: Features keep working without a network connection.
Cost savings: No server-side compute costs for inference.
Supercharge your Android app with Gemini
Run Gemini on the server: Call Gemini models hosted in the cloud for the most capable generative AI features (see the sketch after this list).
Run Gemini on-device: Use Gemini Nano to run generative AI directly on supported devices, with the on-device benefits listed above.
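As one possible starting point for the server path, here is a minimal sketch using the Firebase Vertex AI Kotlin SDK. It assumes Firebase is already configured in your app; the model name, prompt, and function name are illustrative placeholders.

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.vertexai.vertexAI

// Minimal sketch: send a prompt to a server-hosted Gemini model.
// Assumes Firebase is already initialized in this app; the model
// name and prompt are illustrative placeholders.
suspend fun summarizeReview(review: String): String? {
    val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")
    val response = model.generateContent("Summarize this user review in one sentence: $review")
    return response.text // null if the model returned no text
}
```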
Ready-to-use or custom ML?
Common user flows with ML Kit SDKs: Ready-to-use models for everyday tasks, detailed below.
High-performance custom ML: Android’s custom ML stack for running your own models with hardware acceleration, also detailed below.
ML Kit SDKs: ready-to-use ML for common user flows
Face detection: Detect faces and facial landmarks in images and video.
Text recognition: Recognize and extract text from images.
Barcode scanning: Scan and process most standard barcode formats (a minimal sketch follows this list).
More ML APIs: Explore further APIs such as image labeling, pose detection, and translation.
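To show how little code a ready-to-use flow needs, here is a minimal sketch of ML Kit barcode scanning in Kotlin. The function name and the source of the bitmap are assumptions for the example.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Minimal sketch: scan a bitmap for barcodes with ML Kit's
// on-device barcode scanner. The bitmap is assumed to come from
// the camera or an image picker.
fun scanBarcodes(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val scanner = BarcodeScanning.getClient()
    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                // rawValue holds the decoded payload, e.g. a URL or product code.
                Log.d("BarcodeDemo", "Found: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("BarcodeDemo", "Scanning failed", e)
        }
}
```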
Android’s custom ML stack: high-performance ML
TensorFlow Lite for the ML runtime: Use TensorFlow Lite via Google Play services, Android’s official ML inference runtime, to run high-performance ML inference in your app, as in the sketch below. Learn more
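A minimal sketch of that flow: initialize the runtime from Play services, then run inference through InterpreterApi. The model buffer, tensor shapes, and function name are illustrative assumptions.

```kotlin
import android.content.Context
import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime
import java.nio.ByteBuffer

// Minimal sketch: run inference with the TensorFlow Lite runtime
// provided by Google Play services. `modelBuffer` is assumed to be
// a direct ByteBuffer holding your .tflite model; shapes below are
// placeholders for a typical image classifier.
fun runInference(context: Context, modelBuffer: ByteBuffer) {
    TfLite.initialize(context).addOnSuccessListener {
        val options = InterpreterApi.Options()
            .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY) // use the Play services runtime
        val interpreter = InterpreterApi.create(modelBuffer, options)

        val input = Array(1) { FloatArray(224 * 224 * 3) } // placeholder input shape
        val output = Array(1) { FloatArray(1000) }         // placeholder output shape
        interpreter.run(input, output)
        interpreter.close()
    }
}
```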
Hardware acceleration with TensorFlow Lite delegates: Use TensorFlow Lite delegates, distributed via Google Play services, to run accelerated ML on specialized hardware such as a GPU, NPU, or DSP. Accessing advanced on-device compute capabilities this way can help you deliver smoother, lower-latency user experiences.
GPU and NNAPI delegates are currently supported, and we’re working with partners to provide access to their custom delegates via Google Play services for advanced use cases; enabling the GPU delegate is sketched below. Learn more
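Here is a hedged sketch of enabling the GPU delegate through Play services, following the documented check-then-initialize pattern; the function name and model buffer are assumptions.

```kotlin
import android.content.Context
import com.google.android.gms.tflite.client.TfLiteInitializationOptions
import com.google.android.gms.tflite.gpu.support.TfLiteGpu
import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime
import org.tensorflow.lite.gpu.GpuDelegateFactory
import java.nio.ByteBuffer

// Sketch: check whether the GPU delegate is available on this
// device, initialize TFLite from Play services with GPU support,
// and attach the delegate to the interpreter.
fun createAcceleratedInterpreter(context: Context, modelBuffer: ByteBuffer) {
    TfLiteGpu.isGpuDelegateAvailable(context).addOnSuccessListener { gpuAvailable ->
        val initOptions = TfLiteInitializationOptions.builder()
            .setEnableGpuDelegateSupport(gpuAvailable)
            .build()
        TfLite.initialize(context, initOptions).addOnSuccessListener {
            val options = InterpreterApi.Options()
                .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
            if (gpuAvailable) {
                options.addDelegateFactory(GpuDelegateFactory()) // run on the GPU
            }
            val interpreter = InterpreterApi.create(modelBuffer, options)
            // ...run inference as in the previous sketch, then close().
        }
    }
}
```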
Enabled by Google Play services: Use Play services to access the TensorFlow Lite runtime and delegates. Because the runtime is delivered by Play services rather than bundled into your APK, you always get the latest stable versions while minimizing the impact on your app’s binary size. Learn more
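In practice this means declaring the Play services TFLite artifacts instead of the standalone TensorFlow Lite library; the coordinates below match the published artifacts, but the version numbers are illustrative and should be checked against the current releases.

```kotlin
// Module-level build.gradle.kts (version numbers are illustrative).
dependencies {
    // TensorFlow Lite runtime delivered by Google Play services
    implementation("com.google.android.gms:play-services-tflite-java:16.1.0")
    // Optional: support library for common tensor operations
    implementation("com.google.android.gms:play-services-tflite-support:16.1.0")
    // Optional: GPU delegate support via Play services
    implementation("com.google.android.gms:play-services-tflite-gpu:16.2.0")
}
```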