Conversation

@lethargicpanda
Collaborator

TEST PR

@gemini-code-assist
Contributor

Summary of Changes

Hello @lethargicpanda, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the AI sample catalog by adding comprehensive documentation for a wide array of new AI-powered features. The additions cover real-time voice interactions, multimodal input processing, advanced video analysis, on-device AI capabilities with Gemini Nano, and creative image generation and editing with Imagen. The overall goal is to provide clearer, more diverse examples of how to integrate and utilize Google's AI models in various applications, making it easier for developers to understand and implement these technologies.

Highlights

  • New Sample Documentation: This pull request introduces a significant number of new README.md files, providing detailed documentation for various AI samples. These new samples showcase a broad spectrum of AI capabilities, from real-time interactions to advanced image and video processing.
  • Gemini Live API Integration: A new sample for the Gemini Live API is added, demonstrating real-time, voice-based interactions for applications like a ToDo list, highlighting conversational AI capabilities.
  • Multimodal and Video AI Samples: New samples are included for Gemini's multimodal capabilities (processing both image and text inputs) and for video analysis, covering tasks such as video summarization and metadata generation using the Gemini 2.5 Flash model.
  • On-Device AI with Gemini Nano: Several new samples leverage Gemini Nano for on-device AI, including image description, text summarization, and writing assistance (proofreading and rewriting), showcasing efficient local AI processing.
  • Imagen Image Generation and Editing: Documentation for new Imagen samples is added, demonstrating text-to-image generation and advanced image editing features like inpainting, expanding the creative AI offerings.
  • Hybrid AI Applications: A 'Magic Selfie' sample is introduced, illustrating how different AI technologies (ML Kit Subject Segmentation and Imagen API) can be combined to create innovative applications, such as replacing image backgrounds.
  • Documentation Link Updates: Minor updates to existing README.md files ensure that documentation links are current and point to the most relevant resources for the Gemini API.
Ignored Files
  • Ignored by pattern: .gemini/** (1)
    • .gemini/styleguide.md
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request is a great initiative to improve the project's documentation by adding individual README files for each sample. The new files are well-structured and informative. I've identified a few areas for improvement, mainly concerning link consistency and correctness. There are some broken links, absolute URLs that should be relative, and one link pointing to the wrong API documentation. Addressing these points will enhance the quality and accuracy of the documentation.


The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.

Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
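Since the snippet itself lives in the repository, here is a minimal sketch of what a proofreading inference call against the ML Kit GenAI API can look like. The class, builder, and field names (`Proofreading`, `ProofreaderOptions`, `ProofreadingRequest`, `results`) are assumptions drawn from the public ML Kit GenAI documentation, not the sample's actual `runProofreadingInference` code:

```kotlin
import android.content.Context
import com.google.mlkit.genai.proofreading.Proofreader
import com.google.mlkit.genai.proofreading.ProofreaderOptions
import com.google.mlkit.genai.proofreading.Proofreading
import com.google.mlkit.genai.proofreading.ProofreadingRequest
import kotlinx.coroutines.guava.await

// Minimal sketch, assuming the API surface from the public ML Kit GenAI docs.
// Feature-status checks and model download handling are omitted for brevity.
suspend fun proofread(context: Context, input: String): String {
    // Configure the on-device proofreader for English keyboard input.
    val options = ProofreaderOptions.builder(context)
        .setInputType(ProofreaderOptions.InputType.KEYBOARD)
        .setLanguage(ProofreaderOptions.Language.ENGLISH)
        .build()
    val proofreader: Proofreader = Proofreading.getClient(options)

    // Run inference on Gemini Nano and return the first suggested correction.
    val request = ProofreadingRequest.builder(input).build()
    val result = proofreader.runInference(request).await()
    return result.results.first().text
}
```

In the sample itself, the equivalent logic lives in the ViewModel linked above.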


high

This link to the ViewModel is broken: the relative path starts with `.src` instead of `./src`. Please add the missing slash to fix the link.

Suggested change

```diff
-Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
+Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
```


Read more about the [Gemini API](https://developer.android.com/ai/gemini) in the Android Documentation.


high

This sample uses the Imagen API to generate the background, not the Gemini API. The link should point to the Imagen documentation for accuracy.

Suggested change

```diff
-Read more about the [Gemini API](https://developer.android.com/ai/gemini) in the Android Documentation.
+Read more about [Imagen](https://developer.android.com/ai/imagen) in the Android Documentation.
```


Read more about [getting started with Gemini](https://developer.android.com/ai/gemini) in the Android Documentation.


medium

For consistency with other documentation changes in this pull request (like in gemini-chatbot/README.md), it would be better to use "the Gemini API" as the link text instead of "getting started with Gemini".

Suggested change

```diff
-Read more about [getting started with Gemini](https://developer.android.com/ai/gemini) in the Android Documentation.
+Read more about [the Gemini API](https://developer.android.com/ai/gemini) in the Android Documentation.
```


## How it works

The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.


medium

For consistency and portability (e.g., in forks), it's better to use a relative path for the link to the source file instead of an absolute GitHub URL.

Suggested change

```diff
-The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
+The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](./src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
```
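As a rough illustration of the flow this paragraph describes, the sketch below shows an image-description call with a streaming callback. The class and builder names are assumptions taken from the public ML Kit GenAI documentation rather than the sample's actual ViewModel code:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.mlkit.genai.imagedescription.ImageDescriber
import com.google.mlkit.genai.imagedescription.ImageDescriberOptions
import com.google.mlkit.genai.imagedescription.ImageDescription
import com.google.mlkit.genai.imagedescription.ImageDescriptionRequest

// Minimal sketch, assuming the API surface from the public ML Kit GenAI docs.
// Feature-status checks and model download handling are omitted for brevity.
fun describeImage(context: Context, bitmap: Bitmap, onPartialResult: (String) -> Unit) {
    val options = ImageDescriberOptions.builder(context).build()
    val imageDescriber: ImageDescriber = ImageDescription.getClient(options)

    // runInference streams chunks of the description back through the callback
    // while Gemini Nano generates it on-device.
    val request = ImageDescriptionRequest.builder(bitmap).build()
    imageDescriber.runInference(request) { partialResult -> onPartialResult(partialResult) }
}
```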


## How it works

The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.


medium

For consistency and portability, it's better to use a relative path for the link to the source file instead of an absolute GitHub URL.

Suggested change

```diff
-The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
+The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
```


## How it works

The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.


medium

This link uses an absolute GitHub URL. For consistency with other links in this file and across the project, please use a relative path.

Suggested change

```diff
-The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
+The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](./src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
```
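To make the two-model flow concrete, here is a minimal sketch of the generation step, assuming the `imagenModel` and `generateImages` entry points from the public Firebase AI documentation; the editing step is only summarized in a trailing comment because its exact `editImage` signature isn't reproduced in this README:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.PublicPreviewAPI

// Minimal sketch of the generation step, assuming the public Firebase AI API.
@OptIn(PublicPreviewAPI::class)
suspend fun generateBaseImage(prompt: String): ByteArray {
    // Generation model named in the README above.
    val generationModel = Firebase.ai(backend = GenerativeBackend.googleAI())
        .imagenModel("imagen-4.0-ultra-generate-001")

    // generateImages returns one or more images; take the first one's bytes.
    val response = generationModel.generateImages(prompt)
    return response.images.first().data
}

// The editing step would build an `imagen-3.0-capability-001` model the same way
// and call its editImage method with the source image, the user-drawn mask, and a
// text prompt to perform the inpainting described above.
```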

@lethargicpanda
Collaborator Author

/gemini review


@gemini-code-assist bot left a comment


Code Review

This pull request is a great addition: it adds individual README files for each sample to improve documentation and user guidance. My review focuses on ensuring consistency and correctness in these new files. I've pointed out a few instances of absolute URLs that should be relative for better portability, and a broken link that needs fixing. Additionally, I've included a suggestion to improve code maintainability in one of the Kotlin files by replacing a hardcoded string with a constant.


The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.

Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):


high

This relative link to the ViewModel is broken: the path reads `.src/...` instead of `./src/...`, which makes it invalid.

Suggested change

```diff
-Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
+Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
```

```diff
 private val generativeModel by lazy {
     Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
-        "gemini-2.5-flash",
+        "gemini-2.0-flash",
```


medium

For better maintainability and consistency, consider extracting the hardcoded model name "gemini-2.0-flash" into a constant within a companion object. This makes it easier to update the model name in the future and aligns with practices seen elsewhere in the project (e.g., ImagenEditingDataSource.kt).

Example:

```kotlin
private companion object {
    private const val GEMINI_MODEL = "gemini-2.0-flash"
}

// ...

generativeModel(
    GEMINI_MODEL,
    // ...
)
```


## How it works

The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.


medium

The link to the ViewModel is an absolute GitHub URL. For consistency with other README files in this repository and to ensure links work correctly in forks, please use a relative path instead.

Suggested change

```diff
-The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
+The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](./src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
```


## How it works

The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.


medium

This link to the ViewModel is an absolute GitHub URL. Please use a relative path for consistency and to ensure the link works in forks of this repository.

Suggested change

```diff
-The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
+The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
```


## How it works

The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.


medium

This link is an absolute GitHub URL. For consistency and robustness, please change it to a relative path.

Suggested change

```diff
-The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
+The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](./src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
```

@lethargicpanda
Collaborator Author

/gemini review


@gemini-code-assist bot left a comment


Code Review

This pull request adds individual README.md files to several samples, which is a great improvement for documentation. The changes are mostly good, but I've identified a few areas for improvement. Specifically, there's a potential model name inconsistency in GeminiChatbotViewModel.kt that should be reviewed. Additionally, several of the new README files use absolute GitHub URLs instead of relative paths to link to source files, and one README contains a broken link. Using relative paths would improve consistency and ensure the links work correctly in various environments.

```diff
 private val generativeModel by lazy {
     Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
-        "gemini-2.5-flash",
+        "gemini-2.0-flash",
```


high

The model name has been changed to gemini-2.0-flash, which appears to be a downgrade or a typo, as other samples in this repository use gemini-2.5-flash. Please verify if this change is intentional. If not, it should be updated to maintain consistency and use the intended model version.

Suggested change

```diff
-"gemini-2.0-flash",
+"gemini-2.5-flash",
```


## How it works

The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.


medium

The link to GenAIImageDescriptionViewModel.kt is an absolute GitHub URL. For consistency with other README files in the repository and to ensure links work correctly in different environments (like local clones), it's better to use a relative path.

Suggested change

```diff
-The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-image-description/src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
+The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](./src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
```

Comment on lines +15 to +17
The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.

Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):


medium

There are two issues with the links in this section:

  1. The link on line 15 to GenAIWritingAssistanceViewModel.kt is an absolute GitHub URL. It should be a relative path for consistency.
  2. The link on line 17 to the same file has a broken relative path (.src/... instead of ./src/...).

Please use a consistent, correct relative path for linking to source files.

Suggested change

```diff
-The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](https://github.com/android/ai-samples/blob/main/samples/genai-writing-assistance/src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
-Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](.src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
+The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
+Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
```


## How it works

The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.


medium

The link to ImagenEditingDataSource.kt is an absolute GitHub URL. For consistency and to ensure the link works correctly in different environments (like local clones), it's better to use a relative path.

Suggested change

```diff
-The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](https://github.com/android/ai-samples/blob/main/samples/imagen-editing/src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
+The application uses the Firebase AI SDK (see [How to run](../../#how-to-run)) for Android to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](./src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
```

@lethargicpanda deleted the te/test-samples-readmes branch November 14, 2025 20:45