# [TEST PR] Add individual README.md files to each sample #135
```diff
@@ -50,7 +50,7 @@ class GeminiChatbotViewModel @Inject constructor() : ViewModel() {
     private val generativeModel by lazy {
         Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
-            "gemini-2.5-flash",
+            "gemini-2.0-flash",
             generationConfig = generationConfig {
                 temperature = 0.9f
                 topK = 32
```
**Contributor:** The model name has been changed to `gemini-2.0-flash`.
@@ -0,0 +1,39 @@

# Gemini Live Todo Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to use the Gemini Live API for real-time, voice-based interactions in a simple ToDo application. Users can add, remove, and update tasks by speaking to the app, showcasing a hands-free, conversational user experience powered by the Gemini API.

<div style="text-align: center;">
  <img width="320" alt="Gemini Live Todo in action" src="gemini_live_todo.png" />
</div>

## How it works

The application uses the Firebase AI SDK for Android (see [How to run](../../#how-to-run)) to interact with the `gemini-2.0-flash-live-preview-04-09` model. The core logic is in the [`TodoScreenViewModel.kt`](./src/main/java/com/android/ai/samples/geminilivetodo/ui/TodoScreenViewModel.kt) file. A `liveModel` is initialized with a set of function declarations (`addTodo`, `removeTodo`, `toggleTodoStatus`, `getTodoList`) that allow the model to interact with the ToDo list. When the user starts a voice conversation, the model processes the spoken commands and executes the corresponding functions to manage the tasks.

Here is the key snippet of code that initializes the model and connects to a live session:

```kotlin
val generativeModel = Firebase.ai(backend = GenerativeBackend.vertexAI()).liveModel(
    "gemini-2.0-flash-live-preview-04-09",
    generationConfig = liveGenerationConfig,
    systemInstruction = systemInstruction,
    tools = listOf(
        Tool.functionDeclarations(
            listOf(getTodoList, addTodo, removeTodo, toggleTodoStatus),
        ),
    ),
)

try {
    session = generativeModel.connect()
} catch (e: Exception) {
    Log.e(TAG, "Error connecting to the model", e)
    liveSessionState.value = LiveSessionState.Error
}
```
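
The function declarations passed to the model are plain name/description/schema triples; the exact ones live in `TodoScreenViewModel.kt`. As a rough sketch of what one such declaration could look like with the Firebase AI SDK (the name, description, and schema here are illustrative, not copied from the sample):

```kotlin
// Illustrative sketch; see TodoScreenViewModel.kt for the sample's real declarations.
val addTodo = FunctionDeclaration(
    name = "addTodo",
    description = "Adds a new task to the todo list.",
    parameters = mapOf(
        "task" to Schema.string("The text of the task to add"),
    ),
)
```

When the model decides to call one of these functions, the app runs the matching Kotlin implementation, as described above.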
Read more about the [Gemini Live API](https://developer.android.com/ai/gemini/live) in the Android Documentation.
@@ -0,0 +1,53 @@

# Gemini Multimodal Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates a multimodal (image and text) prompt using the `Gemini 2.5 Flash` model. Users can select an image and provide a text prompt, and the generative model will respond based on both inputs. This showcases how to build a simple yet powerful multimodal AI feature with the Gemini API.

<div style="text-align: center;">
  <img width="320" alt="Gemini Multimodal in action" src="gemini_multimodal.png" />
</div>

## How it works

The application uses the Firebase AI SDK for Android (see [How to run](../../#how-to-run)) to interact with the `gemini-2.5-flash` model. The core logic is in the [`GeminiDataSource.kt`](./src/main/java/com/android/ai/samples/geminimultimodal/data/GeminiDataSource.kt) file. A `generativeModel` is initialized, and a `chat` session is started from it. When a user provides an image and a text prompt, they are combined into a multimodal prompt and sent to the model, which then generates a text response.

Here is the key snippet of code that initializes the generative model:

```kotlin
private val generativeModel by lazy {
    Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        "gemini-2.5-flash",
        generationConfig = generationConfig {
            temperature = 0.9f
            topK = 32
            topP = 1f
            maxOutputTokens = 4096
        },
        safetySettings = listOf(
            SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.MEDIUM_AND_ABOVE),
            SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.MEDIUM_AND_ABOVE),
            SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.MEDIUM_AND_ABOVE),
            SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.MEDIUM_AND_ABOVE),
        ),
    )
}
```

Here is the [`generateText`](./src/main/java/com/android/ai/samples/geminimultimodal/data/GeminiDataSource.kt) function, which combines the image and prompt into a single multimodal request:

```kotlin
suspend fun generateText(bitmap: Bitmap, prompt: String): String {
    val multimodalPrompt = content {
        image(bitmap)
        text(prompt)
    }
    val result = generativeModel.generateContent(multimodalPrompt)
    return result.text ?: ""
}
```
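
For context, a call site for `generateText` could look like the following sketch (the ViewModel scope, data source, and state names below are hypothetical, not taken from the sample):

```kotlin
// Hypothetical call site; the sample's own ViewModel wiring may differ.
viewModelScope.launch {
    try {
        val answer = geminiDataSource.generateText(selectedBitmap, "Describe this picture.")
        _uiState.value = UiState.Success(answer)
    } catch (e: Exception) {
        _uiState.value = UiState.Error(e.message)
    }
}
```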
Read more about [the Gemini API](https://developer.android.com/ai/gemini) in the Android Documentation.
@@ -0,0 +1,50 @@

# Gemini Video Metadata Creation Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to generate various types of video metadata (description, hashtags, chapters, account tags, links, and thumbnails) using the `Gemini 2.5 Flash` model. Users can select a video, and the generative model will analyze its content to provide relevant metadata, showcasing how to enrich video content with AI-powered insights.

<div style="text-align: center;">
  <img width="320" alt="Gemini Video Metadata Creation in action" src="gemini_video_metadata.png" />
</div>

## How it works

The application uses the Firebase AI SDK for Android (see [How to run](../../#how-to-run)) to interact with the `gemini-2.5-flash` model. The core logic involves several functions (e.g., [`generateDescription`](./src/main/java/com/android/ai/samples/geminivideometadatacreation/GenerateDescription.kt), `generateHashtags`, `generateChapters`, `generateAccountTags`, `generateLinks`, `generateThumbnails`) that send video content to the Gemini API for analysis. The model processes the video and returns structured metadata based on the specific prompt.

Here is a key snippet of code that generates a video description:

```kotlin
suspend fun generateDescription(videoUri: Uri): @Composable () -> Unit {
    val response = Firebase.ai(backend = GenerativeBackend.vertexAI())
        .generativeModel(modelName = "gemini-2.5-flash")
        .generateContent(
            content {
                fileData(videoUri.toString(), "video/mp4")
                text(
                    """
                    Provide a compelling and concise description for this video in less than 100 words.
                    Don't assume if you don't know.
                    The description should be engaging and accurately reflect the video's content.
                    You should output your responses in HTML format. Use styling sparingly. You can use the following tags:
                    * Bold: <b>
                    * Italic: <i>
                    * Underline: <u>
                    * Bullet points: <ul>, <li>
                    """.trimIndent(),
                )
            },
        )

    val responseText = response.text
    return if (responseText != null) {
        { DescriptionUi(responseText) }
    } else {
        { ErrorUi(response.promptFeedback?.blockReasonMessage) }
    }
}
```
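
The other helpers follow the same pattern, varying only the prompt (and, for structured results like chapters, the response handling). As a sketch, a hashtag generator built from the same API calls might look like this (not the sample's exact code):

```kotlin
// Sketch only: same Firebase AI calls as generateDescription, with a different prompt.
suspend fun generateHashtags(videoUri: Uri): String? {
    val response = Firebase.ai(backend = GenerativeBackend.vertexAI())
        .generativeModel(modelName = "gemini-2.5-flash")
        .generateContent(
            content {
                fileData(videoUri.toString(), "video/mp4")
                text("Suggest five short, relevant hashtags for this video as a single space-separated line.")
            },
        )
    return response.text
}
```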
Read more about [the Gemini API](https://developer.android.com/ai/gemini) in the Android Documentation.
@@ -0,0 +1,34 @@

# Gemini Video Summarization Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to generate a text summary of a video using the `Gemini 2.5 Flash` model. Users can select a video, and the generative model will analyze its content to provide a concise summary, showcasing how to extract key information from video content with the Gemini API.

<div style="text-align: center;">
  <img width="320" alt="Gemini Video Summarization in action" src="gemini_video_summarization.png" />
</div>

## How it works

The application uses the Firebase AI SDK for Android (see [How to run](../../#how-to-run)) to interact with the `gemini-2.5-flash` model. The core logic is in the [`VideoSummarizationViewModel.kt`](./src/main/java/com/android/ai/samples/geminivideosummary/viewmodel/VideoSummarizationViewModel.kt) file. A `generativeModel` is initialized. When a user requests a summary, the video content and a text prompt are sent to the model, which then generates a text summary.

Here is the key snippet of code that calls the generative model:

```kotlin
val generativeModel =
    Firebase.ai(backend = GenerativeBackend.vertexAI())
        .generativeModel("gemini-2.5-flash")

val requestContent = content {
    fileData(videoSource.toString(), "video/mp4")
    text(promptData)
}
val outputStringBuilder = StringBuilder()
generativeModel.generateContentStream(requestContent).collect { response ->
    outputStringBuilder.append(response.text)
}
```
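
Because `generateContentStream` emits partial responses, the summary can also be surfaced incrementally in the UI rather than buffered in a `StringBuilder`. A minimal sketch (the `MutableStateFlow` here is an assumption, not the sample's state type):

```kotlin
// Assumed UI state holder; the sample exposes its own state instead.
val summaryText = MutableStateFlow("")

generativeModel.generateContentStream(requestContent).collect { chunk ->
    summaryText.update { current -> current + (chunk.text ?: "") }
}
```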
Read more about [getting started with Gemini](https://developer.android.com/ai/gemini) in the Android Documentation.

**Contributor:** For consistency with other documentation changes in this pull request (like in …)
@@ -0,0 +1,39 @@

# Image Description with Nano Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to generate short descriptions of images on-device using the GenAI API powered by Gemini Nano. Users can select an image, and the model will generate a descriptive text, showcasing the power of on-device multimodal AI.

<div style="text-align: center;">
  <img width="320" alt="Image Description with Nano in action" src="nano_image_description.png" />
</div>

## How it works

The application uses the ML Kit GenAI Image Description API to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIImageDescriptionViewModel.kt`](./src/main/java/com/android/ai/samples/genai_image_description/GenAIImageDescriptionViewModel.kt) file. An `ImageDescriber` client is initialized. When a user provides an image, it's converted to a bitmap and sent to the `runInference` method, which streams back the generated description.
**Contributor:** For consistency and portability (e.g., in forks), it's better to use a relative path for the link to the source file instead of an absolute GitHub URL.

**Contributor:** The link to the ViewModel is an absolute GitHub URL. For consistency with other README files in this repository and to ensure links work correctly in forks, please use a relative path instead.
Here is the key snippet of code that calls the generative model:

```kotlin
private var imageDescriber: ImageDescriber = ImageDescription.getClient(
    ImageDescriberOptions.builder(context).build(),
)
// ...

private suspend fun generateImageDescription(imageUri: Uri) {
    _uiState.value = GenAIImageDescriptionUiState.Generating("")
    val bitmap = MediaStore.Images.Media.getBitmap(context.contentResolver, imageUri)
    val request = ImageDescriptionRequest.builder(bitmap).build()

    imageDescriber.runInference(request) { newText ->
        _uiState.update {
            (it as? GenAIImageDescriptionUiState.Generating)?.copy(partialOutput = it.partialOutput + newText) ?: it
        }
    }.await()
    // ...
}
```
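
On-device GenAI features may require a one-time model download before inference can run. The ML Kit GenAI APIs expose a feature-status check for this; here is a sketch of how it could be used with the `ImageDescriber` (method names follow the ML Kit GenAI docs, and the callback bodies are illustrative):

```kotlin
// Sketch: make sure the on-device feature is available before calling runInference.
when (imageDescriber.checkFeatureStatus().await()) {
    FeatureStatus.AVAILABLE -> { /* ready: run inference */ }
    FeatureStatus.DOWNLOADABLE -> imageDescriber.downloadFeature(object : DownloadCallback {
        override fun onDownloadStarted(bytesToDownload: Long) {}
        override fun onDownloadProgress(totalBytesDownloaded: Long) {}
        override fun onDownloadCompleted() { /* ready: run inference */ }
        override fun onDownloadFailed(e: GenAiException) { /* surface the error */ }
    })
    else -> { /* still downloading, or unavailable on this device */ }
}
```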
Read more about [Gemini Nano](https://developer.android.com/ai/gemini-nano/ml-kit-genai) in the Android Documentation.
@@ -0,0 +1,38 @@

# Summarization with Nano Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to summarize articles and conversations on-device using the GenAI API powered by Gemini Nano. Users can input text, and the model will generate a concise summary, showcasing the power of on-device text processing with AI.

<div style="text-align: center;">
  <img width="320" alt="Summarization with Nano in action" src="nano_summarization.png" />
</div>

## How it works

The application uses the ML Kit GenAI Summarization API to interact with the on-device Gemini Nano model. The core logic is in the `GenAISummarizationViewModel.kt` file. A `Summarizer` client is initialized. When a user provides text, it's passed to the `runInference` method, which streams back the generated summary.

Here is the key snippet of code that calls the generative model from [`GenAISummarizationViewModel.kt`](./src/main/java/com/android/ai/samples/genai_summarization/GenAISummarizationViewModel.kt):

```kotlin
private suspend fun generateSummarization(summarizer: Summarizer, textToSummarize: String) {
    _uiState.value = GenAISummarizationUiState.Generating("")
    val summarizationRequest = SummarizationRequest.builder(textToSummarize).build()

    try {
        // Instead of using await() here, you can alternatively attach a FutureCallback<SummarizationResult>
        summarizer.runInference(summarizationRequest) { newText ->
            (_uiState.value as? GenAISummarizationUiState.Generating)?.let { generatingState ->
                _uiState.value = generatingState.copy(generatedOutput = generatingState.generatedOutput + newText)
            }
        }.await()
    } catch (genAiException: GenAiException) {
        // ...
    }
    // ...
}
```
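
The `Summarizer` itself is configured with the type of input and the desired output shape when the client is created. A sketch of that setup (the specific option values are illustrative; check the sample's ViewModel for its actual configuration):

```kotlin
// Illustrative configuration: summarize an article into three bullet points.
val summarizerOptions = SummarizerOptions.builder(context)
    .setInputType(SummarizerOptions.InputType.ARTICLE)
    .setOutputType(SummarizerOptions.OutputType.THREE_BULLETS)
    .setLanguage(SummarizerOptions.Language.ENGLISH)
    .build()
val summarizer = Summarization.getClient(summarizerOptions)
```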
Read more about [Gemini Nano](https://developer.android.com/ai/gemini-nano/ml-kit-genai) in the Android Documentation.
@@ -0,0 +1,31 @@

# Writing Assistance with Nano Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to proofread and rewrite short content on-device using the ML Kit GenAI APIs powered by Gemini Nano. Users can input text and choose to either proofread it for grammar and spelling errors or rewrite it in various styles, showcasing on-device text manipulation with AI.

<div style="text-align: center;">
  <img width="320" alt="Writing Assistance with Nano in action" src="nano_rewrite.png" />
</div>

## How it works

The application uses the ML Kit GenAI Proofreading and Rewriting APIs to interact with the on-device Gemini Nano model. The core logic is in the [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt) file. `Proofreader` and `Rewriter` clients are initialized. When a user provides text, it's passed to either the `runProofreadingInference` or `runRewritingInference` method, which then returns the polished text.
**Contributor:** For consistency and portability, it's better to use a relative path for the link to the source file instead of an absolute GitHub URL.

**Contributor:** This link to the ViewModel is an absolute GitHub URL. Please use a relative path for consistency and to ensure the link works in forks of this repository.
Here is the key snippet of code that runs the proofreading inference from [`GenAIWritingAssistanceViewModel.kt`](./src/main/java/com/android/ai/samples/genai_writing_assistance/GenAIWritingAssistanceViewModel.kt):
**Contributor:** This link to the ViewModel is broken due to an extra …

**Contributor:** This relative link to the ViewModel is broken. It's missing a leading …

**Contributor** (on lines +15 to +17): There are two issues with the links in this section: … Please use a consistent, correct relative path for linking to source files.
```kotlin
private suspend fun runProofreadingInference(textToProofread: String) {
    val proofreadRequest = ProofreadingRequest.builder(textToProofread).build()
    // More than one result may be generated. Results are returned in descending
    // order of confidence. Here we use the first result, which has the highest
    // confidence.
    _uiState.value = GenAIWritingAssistanceUiState.Generating
    val results = proofreader.runInference(proofreadRequest).await()
    _uiState.value = GenAIWritingAssistanceUiState.Success(results.results[0].text)
}
```
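
The rewriting path is analogous: a `Rewriter` is configured with an output style and run on a `RewritingRequest`. A sketch under the same API (the chosen output type and names are illustrative):

```kotlin
// Illustrative: rewrite the input text in a more elaborate style.
val rewriterOptions = RewriterOptions.builder(context)
    .setOutputType(RewriterOptions.OutputType.ELABORATE)
    .setLanguage(RewriterOptions.Language.ENGLISH)
    .build()
val rewriter = Rewriting.getClient(rewriterOptions)

val rewriteRequest = RewritingRequest.builder(textToRewrite).build()
// As with proofreading, results arrive in descending order of confidence.
val rewritten = rewriter.runInference(rewriteRequest).await().results[0].text
```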
Read more about [Gemini Nano](https://developer.android.com/ai/gemini-nano/ml-kit-genai) in the Android Documentation.
@@ -0,0 +1,37 @@

# Imagen Image Editing Sample

This sample is part of the [AI Sample Catalog](../../). To build and run this sample, you should clone the entire repository.

## Description

This sample demonstrates how to edit images using the Imagen editing model. Users can generate an image, then draw a mask on it and provide a text prompt to inpaint (fill in) the masked area, showcasing advanced image manipulation capabilities with the Imagen API.

<div style="text-align: center;">
  <img width="320" alt="Imagen Image Editing in action" src="imagen_editing.png" />
</div>

## How it works

The application uses the Firebase AI SDK for Android (see [How to run](../../#how-to-run)) to interact with the `imagen-4.0-ultra-generate-001` and `imagen-3.0-capability-001` models. The core logic is in the [`ImagenEditingDataSource.kt`](./src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt) file. It first generates a base image using the generation model. Then, for editing, it takes the source image, a user-drawn mask, and a text prompt, and sends them to the editing model's `editImage` method to perform inpainting.
**Contributor:** This link uses an absolute GitHub URL. For consistency with other links in this file and across the project, please use a relative path.

**Contributor:** This link is an absolute GitHub URL. For consistency and robustness, please change it to a relative path.
Here is the key snippet of code that performs inpainting from [`ImagenEditingDataSource.kt`](./src/main/java/com/android/ai/samples/imagenediting/data/ImagenEditingDataSource.kt):

```kotlin
@OptIn(PublicPreviewAPI::class)
suspend fun inpaintImageWithMask(sourceImage: Bitmap, maskImage: Bitmap, prompt: String, editSteps: Int = DEFAULT_EDIT_STEPS): Bitmap {
    val imageResponse = editingModel.editImage(
        referenceImages = listOf(
            ImagenRawImage(sourceImage.toImagenInlineImage()),
            ImagenRawMask(maskImage.toImagenInlineImage()),
        ),
        prompt = prompt,
        config = ImagenEditingConfig(
            editMode = ImagenEditMode.INPAINT_INSERTION,
            editSteps = editSteps,
        ),
    )
    return imageResponse.images.first().asBitmap()
}
```
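
The base image mentioned above comes from the generation model before any editing happens. A sketch of that first step using the Firebase AI Imagen API (the function name and wiring here are illustrative, not the sample's exact code):

```kotlin
// Sketch: generate the base image with the Imagen generation model.
@OptIn(PublicPreviewAPI::class)
suspend fun generateBaseImage(prompt: String): Bitmap {
    val generationModel = Firebase.ai(backend = GenerativeBackend.vertexAI())
        .imagenModel("imagen-4.0-ultra-generate-001")
    return generationModel.generateImages(prompt).images.first().asBitmap()
}
```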
Read more about [Imagen](https://developer.android.com/ai/imagen) in the Android Documentation.
**Contributor:** For better maintainability and consistency, consider extracting the hardcoded model name `"gemini-2.0-flash"` into a constant within a `companion object`. This makes it easier to update the model name in the future and aligns with practices seen elsewhere in the project (e.g., `ImagenEditingDataSource.kt`). Example: