Data and privacy

When you use Gemini, the AI-powered coding assistant that's integrated with Android Studio, Google handles your data in accordance with our Privacy Policy and the Gemini Privacy Notice. Keep reading to learn more about how Gemini conforms to Google's AI principles. On this page, "Gemini" refers to the chatbot, AI code completion, and other AI functionality in Android Studio unless otherwise noted.

Note that to access Gemini, you need to be signed in to Android Studio and accept the Gemini terms and conditions.

Data submitted and received

Here are the different types of data submitted to and received from Gemini:

  • Prompts and responses: The questions that you ask Gemini, including any input information or code that you submit for Gemini to analyze or complete, are called prompts. The answers or code completions that you receive from Gemini are called responses.
  • Feedback signals: Thumbs-up and thumbs-down votes and any other feedback that you provide.
  • Context (optional): Gemini might send additional information from your codebase, such as pieces of your code, file types, and any other information that's necessary to provide context to the large language model (LLM). This helps Gemini provide higher-quality, more relevant responses, and also lets Gemini provide additional experimental capabilities such as AI code completion.
  • Usage statistics: To see and edit the usage statistics that you share with the Android Studio team, go to Android Studio > Settings > Appearance & Behavior > Data Sharing.

Block context sharing with .aiexclude files

Gemini is designed with privacy in mind. We've heard feedback from our users and understand that the privacy of your codebase is a primary concern. That's why we provide an additional layer of control: the ability to explicitly block context sharing using .aiexclude files.

.aiexclude files limit which files are shared with the backend servers. Much like a .gitignore file, an .aiexclude file consists of a series of patterns. Files or directories matching the patterns in an .aiexclude file are never used as context for the AI models. AI features that operate in the editor, like intention actions and code completion, are also disabled in files covered by .aiexclude rules.
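For example, an .aiexclude file placed in a directory of your project keeps files matching its patterns out of Gemini's context. The following is a minimal sketch: the file and directory names are hypothetical, and the # comment lines assume the same comment support as .gitignore syntax:

    # Keep local credentials out of Gemini's context
    secrets.properties
    # Exclude everything under the keys directory
    keys/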

How to write .aiexclude files

An .aiexclude file follows the same syntax as a .gitignore file, with the following differences:

  • Unlike an empty .gitignore file, an empty .aiexclude file blocks all files in its directory and all sub-directories, recursively. This is the same as if the file contained just * or **.
  • .aiexclude files don't support negation (prefixing patterns with !).

Here are some example patterns, followed by a sample file that combines several of them:

  • KEYS blocks all files named "KEYS" (with no file extension) at or below the .aiexclude directory.
  • KEYS.* blocks all files named "KEYS" with any file extension at or below the .aiexclude directory.
  • *.kt blocks all Kotlin files at or below the .aiexclude directory.
  • /*.kt blocks all Kotlin files in the .aiexclude directory, but not below.
  • my/sensitive/dir/ blocks all files in the my/sensitive/dir directory and below, relative to the .aiexclude directory.
  • my/sensitive/dir/**/*.txt blocks all .txt files in or under my/sensitive/dir.
  • my/sensitive/dir/*.txt blocks all .txt files in my/sensitive/dir, but not in sub-directories.
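Taken together, a sample .aiexclude file that combines several of the patterns above might look like the following. This is only an illustrative sketch; the directory name is hypothetical, and the # comments assume the same comment support as .gitignore:

    # Block credential files anywhere at or below this directory
    KEYS
    KEYS.*
    # Block an entire sensitive directory, including its sub-directories
    my/sensitive/dir/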

FAQ

What data is collected? How is it used?

By default, your code stays private. Gemini cannot see the code in the editor window and only uses the prompts and conversation history in the chatbot to respond.

However, you can opt in to sharing context from your codebase to enable higher quality responses and access to experimental features such as AI code completion. This is available at Android Studio > Settings > Gemini > Augment responses with information from your codebase. To block context sharing for certain portions of your codebase, see Block context sharing with .aiexclude files.

Your feedback data, such as thumbs-up and thumbs-down signals, and context (if you've opted in) might be used to fine-tune the models. Google uses this data to provide, improve, and develop our products and services, including enterprise products such as Google Cloud. To help with quality and improve Gemini, human reviewers might read, annotate, and process your prompts, generated output, feature usage information, and feedback.

The data is stored in a way that prevents Google from telling who provided it, and as a result it can't be deleted upon request. The data is retained for up to 18 months. For more information, see the Gemini Privacy Notice.

Is my code used to train Gemini?

No, your code isn't used to train generative models. Your feedback data, such as thumbs-up and thumbs-down signals, and context (if you've opted in) might be used to fine-tune the models.

Note that the Ask Gemini feature only sends code that you explicitly authorize. If you opt in to the AI code completion feature, we use context from your codebase to provide higher-quality responses.

How and when does Gemini cite sources in its responses?

AI coding in Android Studio, like some other standalone LLM experiences, is intended to generate original content and not replicate existing content at length. We've designed our systems to limit the chances of this occurring, and we will continue to improve how these systems function. If Gemini directly quotes at length from a source, it cites that source.

Can I access Gemini without sharing context?

Yes. By default, Gemini can't see the code in the editor window and only uses the prompts and conversation history in the chatbot to respond. However, you can opt in to sharing context from your codebase to enable higher quality responses and access to experimental features such as AI code completion.

How can I give feedback about a specific AI response?

To help us improve, rate the generated output with a thumbs up or thumbs down. If you get an AI response that you feel is unsafe, unhelpful, inaccurate, or harmful in some other way, let us know by submitting feedback.