If you are new to the Gemini API, the Gemini Developer API is the recommended API provider for Android developers. But if you have specific data location requirements, or you are already embedded in the Vertex AI or Google Cloud environment, you can use the Vertex AI Gemini API.
Migration from Vertex AI in Firebase
If you originally integrated the Gemini Flash and Pro models using Vertex AI in Firebase, you can migrate to the Firebase AI Logic SDK and continue using Vertex AI as your API provider. Read the Firebase documentation for a detailed migration guide.
Getting started
Before you interact with the Vertex AI Gemini API directly from your app, you can experiment with prompts in Vertex AI Studio.
Set up a Firebase project and connect your app to Firebase
Once you're ready to call the Vertex AI Gemini API from your app, follow Step 1 of the Firebase AI Logic getting started guide to set up Firebase and the SDK in your app.
Add the Gradle dependency
Add the following Gradle dependency to your app module:
dependencies {
    // ... other androidx dependencies

    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.13.0"))

    // Add the dependency for the Firebase AI Logic library. When using the BoM,
    // you don't specify versions in Firebase library dependencies.
    implementation("com.google.firebase:firebase-ai")
}
Initialize the generative model
Start by instantiating a GenerativeModel
and specifying the model name:
Kotlin
val model = Firebase.ai(backend = GenerativeBackend.vertexAI())
.generativeModel("gemini-2.0-flash")
Java
GenerativeModel firebaseAI = FirebaseAI.getInstance(GenerativeBackend.vertexAI())
.generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(firebaseAI);
In the Firebase documentation, you can learn more about the available models for use with the Vertex AI Gemini API. You can also learn about configuring model parameters.
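As a sketch of what configuring model parameters can look like, the snippet below uses the generationConfig builder from the Firebase AI Logic Kotlin API; the specific parameter values are illustrative assumptions, so check the Firebase documentation for the full set of supported options:

Kotlin
// Sketch: passing a generation config when creating the model.
// The parameter values here are examples, not recommendations.
val configuredModel = Firebase.ai(backend = GenerativeBackend.vertexAI())
    .generativeModel(
        modelName = "gemini-2.0-flash",
        generationConfig = generationConfig {
            temperature = 0.7f      // higher values produce more varied output
            maxOutputTokens = 512   // cap the length of the response
        }
    )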
Generate text
To generate a text response, call generateContent()
with your prompt.
Kotlin
// Note: generateContent() is a suspend function, which integrates well
// with existing Kotlin code.
scope.launch {
    val response = model.generateContent("Write a story about a magic backpack.")
}
Java
Content prompt = new Content.Builder()
        .addText("Write a story about a magic backpack.")
        .build();

ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        [...]
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
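If you want to display output as it arrives rather than waiting for the full response, the SDK also exposes a streaming variant. The Kotlin sketch below assumes generateContentStream() returns a Flow of partial responses, as in the Firebase AI Logic Kotlin API:

Kotlin
// Sketch: streaming the response chunk by chunk instead of waiting
// for the complete result. Each emitted chunk carries partial text.
scope.launch {
    model.generateContentStream("Write a story about a magic backpack.")
        .collect { chunk ->
            print(chunk.text)
        }
}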
Similar to the Gemini Developer API, you can also pass images, audio, video, and files with your text prompt (see "Interact with the Gemini Developer API from your app").
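As a sketch of a multimodal prompt, the snippet below uses the content builder from the Firebase AI Logic Kotlin API; bitmap is a placeholder for an android.graphics.Bitmap your app has already loaded:

Kotlin
// Sketch: combining an image with a text prompt in a single request.
// `bitmap` stands in for an image your app has loaded elsewhere.
scope.launch {
    val response = model.generateContent(
        content {
            image(bitmap)
            text("Describe what is in this image.")
        }
    )
}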
To learn more about Firebase AI Logic SDK, read the Firebase documentation.