The Google AI client SDK lets you call the Gemini API and use the Gemini family of models directly from your Android app.
A free-of-charge tier lets you experiment. For other pricing details, see the pricing guide.
Getting started
Before you can interact with the Gemini API from your app, you need to get familiar with prompting, generate an API key, and set up your app to use the SDK.
Experiment with prompts
Start by prototyping your prompt in Google AI Studio.
Google AI Studio is an IDE for prompt design and prototyping. It lets you upload files to test prompts with text and images and save a prompt to revisit it later.
Creating the right prompt for your use case is more art than science, which makes experimentation critical. You can learn more about prompting in the official Google AI documentation.
To learn more about the advanced capabilities of Google AI Studio, see the Google AI Studio quickstart.
Generate your API key
Once you're satisfied with your prompt, click Get API key to generate your Gemini API key. The key will be bundled with your application, which is fine for experimentation and prototyping but not recommended for production use cases.
Also, to prevent your API key from being committed to your source code repository, use the Secrets Gradle plugin, as sketched below.
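For example, here's a minimal sketch of one way to wire this up, assuming you store the key in local.properties under the name apiKey and apply the plugin to your app module (the plugin version shown is illustrative, not prescribed by this guide):

Kotlin

// Root build.gradle.kts: make the plugin available (version shown is illustrative)
plugins {
    id("com.google.android.libraries.mapsplatform.secrets-gradle-plugin") version "2.0.1" apply false
}

// app/build.gradle.kts: apply the plugin so entries in local.properties
// (for example, apiKey=YOUR_API_KEY) are exposed as BuildConfig fields
plugins {
    id("com.google.android.libraries.mapsplatform.secrets-gradle-plugin")
}

android {
    buildFeatures {
        // Needed on recent AGP versions so generated BuildConfig fields are available
        buildConfig = true
    }
}

With a setup along these lines, the key is read from local.properties at build time and can be referenced as BuildConfig.apiKey in the snippets below, so it never has to appear in your source code.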
Add the Gradle dependency
Add the dependency for the Google AI client SDK to your app:
Kotlin
dependencies {
    [...]

    implementation("com.google.ai.client.generativeai:generativeai:0.7.0")
}
Java
dependencies {
    [...]

    implementation("com.google.ai.client.generativeai:generativeai:0.7.0")

    // Required to use `ListenableFuture` from Guava Android for one-shot generation
    implementation("com.google.guava:guava:31.0.1-android")

    // Required to use `Publisher` from Reactive Streams for streaming operations
    implementation("org.reactivestreams:reactive-streams:1.0.4")
}
Create a GenerativeModel
Start by instantiating a GenerativeModel by providing the following:
- The model name: gemini-1.5-flash, gemini-1.5-pro, or gemini-1.0-pro
- Your API key generated with Google AI Studio
You can optionally define the model parameters and provide values for the temperature, topK, topP, and the maximum output tokens.
You can also define safety settings for the following harm categories:
- HARASSMENT
- HATE_SPEECH
- SEXUALLY_EXPLICIT
- DANGEROUS_CONTENT
Kotlin
val model = GenerativeModel(
    modelName = "gemini-1.5-flash-001",
    apiKey = BuildConfig.apiKey,
    generationConfig = generationConfig {
        temperature = 0.15f
        topK = 32
        topP = 1f
        maxOutputTokens = 4096
    },
    safetySettings = listOf(
        SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.MEDIUM_AND_ABOVE),
        SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE),
        SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, BlockThreshold.MEDIUM_AND_ABOVE),
        SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.MEDIUM_AND_ABOVE),
    )
)
Java
GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.temperature = 0.15f;
configBuilder.topK = 32;
configBuilder.topP = 1f;
configBuilder.maxOutputTokens = 4096;

ArrayList<SafetySetting> safetySettings = new ArrayList<>();
safetySettings.add(new SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.MEDIUM_AND_ABOVE));
safetySettings.add(new SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE));
safetySettings.add(new SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, BlockThreshold.MEDIUM_AND_ABOVE));
safetySettings.add(new SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.MEDIUM_AND_ABOVE));

GenerativeModel gm = new GenerativeModel(
    "gemini-1.5-flash-001",
    BuildConfig.apiKey,
    configBuilder.build(),
    safetySettings
);
Use the Google AI client SDK in your app
Now that you've got an API key and set up your app to use the SDK, you're ready to interact with the Gemini API.
Generate text
To generate a text response, call generateContent() with your prompt.
Kotlin
scope.launch {
    val response = model.generateContent("Write a story about a green robot.")
}
Java
// In Java, create a GenerativeModelFutures from the GenerativeModel.
// generateContent() returns a ListenableFuture.
// Learn more:
// https://developer.android.com/develop/background-work/background-tasks/asynchronous/listenablefuture
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content = new Content.Builder()
    .addText("Write a story about a green robot.")
    .build();

Executor executor = // ...

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
Note that generateContent() is a suspend function, which integrates well with existing Kotlin code.
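For instance, here's a minimal sketch of calling it from a ViewModel's viewModelScope; the StoryViewModel class and its state flow are hypothetical, not part of the SDK:

Kotlin

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import com.google.ai.client.generativeai.GenerativeModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch

// Hypothetical ViewModel showing one way to call the suspend function
// from a lifecycle-aware coroutine scope.
class StoryViewModel(private val model: GenerativeModel) : ViewModel() {

    private val _story = MutableStateFlow("")
    val story: StateFlow<String> = _story

    fun generateStory() {
        viewModelScope.launch {
            // Suspends until the model returns; no callbacks or futures needed.
            val response = model.generateContent("Write a story about a green robot.")
            _story.value = response.text.orEmpty()
        }
    }
}

Errors from the API call can be handled with a standard try/catch inside the coroutine.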
Generate text from images and other media
You can also generate text from a prompt that includes text plus images or other
media. When you call generateContent()
, you can pass the media as inline data
(as shown in the example below).
Kotlin
scope.launch {
    val response = model.generateContent(
        content {
            image(bitmap)
            text("What is the object in this picture?")
        }
    )
}
Java
Content content = new Content.Builder()
    .addImage(bitmap)
    .addText("What is the object in this picture?")
    .build();

Executor executor = // ...

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
Multi-turn chat
You can also support multi-turn conversations. Initialize a chat with the startChat() function. You can optionally provide a message history. Then call the sendMessage() function to send chat messages.
Kotlin
val chat = model.startChat(
    history = listOf(
        content(role = "user") { text("Hello, I have 2 dogs in my house.") },
        content(role = "model") { text("Great to meet you. What would you like to know?") }
    )
)

scope.launch {
    val response = chat.sendMessage("How many paws are in my house?")
}
Java
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

// (Optional) Create message history
Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

Content.Builder userMessageBuilder = new Content.Builder();
userMessageBuilder.setRole("user");
userMessageBuilder.addText("How many paws are in my house?");
Content userMessage = userMessageBuilder.build();

Executor executor = // ...

ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
Stream the response
You can achieve faster interactions by not waiting for the model to generate the entire result, and instead using streaming to handle partial results. Use generateContentStream() to stream a response.
Kotlin
scope.launch {
    var outputContent = ""

    model.generateContentStream(inputContent)
        .collect { response ->
            outputContent += response.text
        }
}
Java
// In Java, the method generateContentStream() returns a Publisher
// from the Reactive Streams library.
// https://www.reactive-streams.org/
Publisher<GenerateContentResponse> streamingResponse = model.generateContentStream(content);

StringBuilder outputContent = new StringBuilder();

streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
    @Override
    public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE); // Request all messages
    }

    @Override
    public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        outputContent.append(chunk);
    }

    @Override
    public void onComplete() {
        // ...
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
    }
});
Android Studio
Android Studio provides additional tools to help you get started.
- Gemini API starter template: This starter template helps you create an API key directly from Android Studio and generates a project that includes the necessary Android dependencies to use the Gemini APIs.
- Generative AI sample: This sample lets you import the Google AI client SDK for Android sample app in Android Studio.
Next steps
- Review the Google AI client SDK for Android sample app on GitHub.
- Learn more about the Google AI client SDK in the official Google AI documentation.