
feat(completions): add native gRPC stages for /v1/completions #772

Draft
vschandramourya wants to merge 2 commits into feat/completions-response-infra from feat/completions-native-stages

Conversation

@vschandramourya
Collaborator

Description

Problem

This PR completes the native /v1/completions rollout by wiring
completion-specific stages into the regular gRPC router.

After adding the request contract, native pipeline typing, backend request
builders, and completion-aware response infrastructure, the regular gRPC
router still needed endpoint-specific completion stages and final router
wiring to make /v1/completions operational end to end.

Solution

Add completion-specific stages to the regular gRPC router and wire them into
the existing stage delegators and router entrypoint.

This makes /v1/completions a true native gRPC pipeline endpoint with
the same overall stage architecture as chat:
preparation, worker selection, client acquisition, request building,
dispatch metadata, request execution, and response processing.

Changes

  • Add completion stage module exports
  • Add CompletionPreparationStage
  • Add CompletionRequestBuildingStage
  • Add CompletionResponseProcessingStage
  • Wire completion preparation, request-building, and response-processing into the regular stage delegators
  • Update the regular gRPC router completion entry behavior
  • Finalize the fully wired native gRPC pipeline for the /v1/completions path
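The delegation pattern described above can be sketched in a few lines. This is a hypothetical, self-contained illustration of how a shared stage delegator dispatches a completion request to an endpoint-specific stage via a new match arm; the names (`RequestType`, `Stage`, `PreparationStage`) mirror the PR's terminology but are not the crate's real API.

```rust
// Illustrative sketch only: a shared delegator gains a match arm that routes
// RequestType::Completion to the new completion-specific stage.

#[derive(Debug, Clone, Copy, PartialEq)]
enum RequestType {
    Chat,
    Generate,
    Completion,
}

trait Stage {
    fn execute(&self, req: RequestType) -> Result<String, String>;
}

struct CompletionPreparationStage;

impl Stage for CompletionPreparationStage {
    fn execute(&self, _req: RequestType) -> Result<String, String> {
        // The real stage would resolve the prompt, tokenize it, and build a
        // stop decoder; here we just report success.
        Ok("completion prepared".to_string())
    }
}

struct PreparationStage {
    completion_stage: CompletionPreparationStage,
}

impl PreparationStage {
    fn new() -> Self {
        Self {
            completion_stage: CompletionPreparationStage,
        }
    }

    fn execute(&self, req: RequestType) -> Result<String, String> {
        match req {
            // The match arm this PR adds: delegate completions natively.
            RequestType::Completion => self.completion_stage.execute(req),
            other => Err(format!("unhandled request type: {other:?}")),
        }
    }
}

fn main() {
    let stage = PreparationStage::new();
    assert_eq!(
        stage.execute(RequestType::Completion),
        Ok("completion prepared".to_string())
    );
    assert!(stage.execute(RequestType::Chat).is_err());
    println!("ok");
}
```

The same shape repeats for the request-building and response-processing delegators, which is why the PR touches all three `stages/*.rs` files in the same way.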

Test Plan

  • cargo test -p smg completion --quiet
  • cargo test -p smg --quiet
  • cargo clippy -p smg --all-targets --all-features -- -D warnings
Checklist
  • cargo +nightly fmt passes
  • cargo clippy --all-targets --all-features -- -D warnings passes
  • (Optional) Documentation updated
  • (Optional) Please join us on Slack #sig-smg to discuss, review, and merge PRs

Signed-off-by: VS Chandra Mourya <msrinivasa@together.ai>
(cherry picked from commit b3b8e531d5831dd74b8c35764083038f6f831ee9)
@github-actions github-actions bot added the grpc (gRPC client and router changes) and model-gateway (Model gateway crate changes) labels on Mar 16, 2026
@coderabbitai

coderabbitai bot commented Mar 16, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request completes the integration of native /v1/completions functionality into the gRPC router. It establishes a full pipeline for completion requests, mirroring the architecture of the existing chat and generate endpoints. The changes enable end-to-end operation of the /v1/completions endpoint by adding and wiring the necessary stages for request preparation, request building, and response processing, ensuring consistent and robust handling of completion requests.

Highlights

  • Completion Stage Module Exports Added: New module exports were introduced for completion-specific pipeline stages.
  • Completion Preparation Stage Implemented: A dedicated CompletionPreparationStage was added to handle prompt resolution, tokenization, and stop decoder creation for completion requests.
  • Completion Request Building Stage Implemented: A CompletionRequestBuildingStage was added to construct backend proto GenerateRequest objects from CompletionRequest data.
  • Completion Response Processing Stage Implemented: A CompletionResponseProcessingStage was introduced to manage both streaming and non-streaming completion response processing.
  • Pipeline Integration: The new completion preparation, request-building, and response-processing stages were wired into the regular gRPC stage delegators.
  • gRPC Router Update: The main gRPC router was updated to include a route_completion_impl and route_completion entry point, finalizing the native gRPC pipeline for /v1/completions.
  • Dead Code Expectations Removed: Several #[expect(dead_code)] attributes were removed from RequestContext and RequestPipeline as the completion-related accessors and execution paths are now actively used.
Changelog
  • model_gateway/src/routers/grpc/context.rs
    • Removed #[expect(dead_code)] attribute from completion_request_arc accessor, indicating its active use.
    • Removed #[expect(dead_code)] attribute from FinalResponse enum, signifying the Completion variant is now produced.
  • model_gateway/src/routers/grpc/pipeline.rs
    • Removed #[expect(dead_code)] attribute from execute_completion method, confirming its integration into the router.
  • model_gateway/src/routers/grpc/regular/stages/completion/mod.rs
    • Added new module to export CompletionPreparationStage, CompletionRequestBuildingStage, and CompletionResponseProcessingStage.
  • model_gateway/src/routers/grpc/regular/stages/completion/preparation.rs
    • Added CompletionPreparationStage for handling prompt tokenization and stop decoder creation for completion requests.
  • model_gateway/src/routers/grpc/regular/stages/completion/request_building.rs
    • Added CompletionRequestBuildingStage responsible for converting CompletionRequest into the backend proto format.
  • model_gateway/src/routers/grpc/regular/stages/completion/response_processing.rs
    • Added CompletionResponseProcessingStage to manage both streaming and non-streaming completion response processing.
  • model_gateway/src/routers/grpc/regular/stages/mod.rs
    • Added completion module to the list of regular gRPC stages.
  • model_gateway/src/routers/grpc/regular/stages/preparation.rs
    • Imported CompletionPreparationStage.
    • Instantiated CompletionPreparationStage in PreparationStage::new.
    • Added a match arm to delegate RequestType::Completion to completion_stage.execute.
  • model_gateway/src/routers/grpc/regular/stages/request_building.rs
    • Imported CompletionRequestBuildingStage.
    • Instantiated CompletionRequestBuildingStage in RequestBuildingStage::new.
    • Added a match arm to delegate RequestType::Completion to completion_stage.execute and removed RequestType::Completion from the unhandled request types.
  • model_gateway/src/routers/grpc/regular/stages/response_processing.rs
    • Imported CompletionResponseProcessingStage.
    • Instantiated CompletionResponseProcessingStage in ResponseProcessingStage::new.
    • Added a match arm to delegate RequestType::Completion to completion_stage.execute and removed RequestType::Completion from the unhandled request types.
  • model_gateway/src/routers/grpc/router.rs
    • Imported StringOrArray and CompletionRequest from openai_protocol.
    • Imported error module.
    • Added route_completion_impl asynchronous function to handle completion requests, including validation for batch prompts, streaming logprobs, and Harmony models, and retry logic.
    • Implemented route_completion method for the RouterTrait to call route_completion_impl.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request successfully adds native gRPC support for /v1/completions by introducing and integrating completion-specific pipeline stages. The implementation follows the existing architecture for other endpoints, ensuring consistency. The changes are well-structured and logical. I have one suggestion to enhance the robustness of the request building stage by making the contract between pipeline stages more explicit, which will help in preventing potential silent failures.

Comment on lines +63 to +80
let mut proto_request = builder_client
    .build_completion_request(
        request_id,
        &completion_request,
        prep.original_text.clone().unwrap_or_default(),
        prep.token_ids.clone(),
    )
    .map_err(|e| {
        error!(
            function = "CompletionRequestBuildingStage::execute",
            error = %e,
            "Failed to build completion request"
        );
        error::bad_request(
            "invalid_request_parameters",
            format!("Invalid request parameters: {e}"),
        )
    })?;


medium

The CompletionPreparationStage always sets original_text, so prep.original_text should not be None at this point. Using unwrap_or_default() can hide a potential logic error if it were ever None, which could lead to silent failures where an empty string is used as the prompt. It would be more robust to explicitly handle the None case as an internal error. This makes the contract between stages explicit and ensures the system fails fast if an invariant is broken.

        let original_text = prep.original_text.as_ref().ok_or_else(|| {
            error!(
                function = "CompletionRequestBuildingStage::execute",
                "original_text not found in preparation output for completion request"
            );
            error::internal_error(
                "missing_preparation_output",
                "original_text not found in preparation output for completion request",
            )
        })?;

        let mut proto_request = builder_client
            .build_completion_request(
                request_id,
                &completion_request,
                original_text.clone(),
                prep.token_ids.clone(),
            )
            .map_err(|e| {
                error!(
                    function = "CompletionRequestBuildingStage::execute",
                    error = %e,
                    "Failed to build completion request"
                );
                error::bad_request(
                    "invalid_request_parameters",
                    format!("Invalid request parameters: {e}"),
                )
            })?;
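The behavioral difference the reviewer is pointing at can be shown with a tiny standalone example, using plain `Option<String>` rather than the gateway's real types (the function names below are illustrative):

```rust
// Standalone illustration of the review's point: unwrap_or_default() silently
// turns a missing value into "", while ok_or_else() surfaces it as an error.

fn build_prompt_silently(original_text: Option<String>) -> String {
    // Hides a broken invariant: a None prompt becomes an empty string.
    original_text.unwrap_or_default()
}

fn build_prompt_fail_fast(original_text: Option<String>) -> Result<String, String> {
    // Fails fast: a None prompt becomes an explicit error the caller must handle.
    original_text.ok_or_else(|| "original_text missing from preparation output".to_string())
}

fn main() {
    // The silent version would send an empty prompt to the backend.
    assert_eq!(build_prompt_silently(None), "");

    // The fail-fast version makes the broken invariant visible.
    assert!(build_prompt_fail_fast(None).is_err());
    assert_eq!(
        build_prompt_fail_fast(Some("hello".to_string())),
        Ok("hello".to_string())
    );
    println!("ok");
}
```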

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request adds support for /v1/completions to the native gRPC router by introducing completion-specific pipeline stages. The changes are well-structured and follow existing patterns for other endpoints like chat and generate. I've identified a couple of areas with code duplication in the main router logic that could be refactored for better maintainability. Specifically, there's a duplicated validation check and repeated retry logic that has been copied for this new endpoint.

Comment on lines +298 to +302
if matches!(body.prompt, StringOrArray::Array(_)) {
    return error::bad_request(
        "batch_prompts_not_supported",
        "Batched prompt arrays are not supported for gRPC /v1/completions yet",
    );


medium

This validation for batch prompts appears to be duplicated. The same check exists in CompletionPreparationStage. To improve maintainability and avoid logic drift, consider removing this check from the router and letting the preparation stage be the single source of truth for this validation. This would centralize request validation logic within the pipeline stages.

References
  1. Extract duplicated logic into a shared helper function to improve maintainability and reduce redundancy.

Comment on lines +286 to +354
async fn route_completion_impl(
    &self,
    headers: Option<&HeaderMap>,
    body: &CompletionRequest,
    model_id: Option<&str>,
) -> Response {
    debug!(
        "Processing completion request for model: {}, stream={}",
        model_id.unwrap_or(UNKNOWN_MODEL_ID),
        body.stream
    );

    if matches!(body.prompt, StringOrArray::Array(_)) {
        return error::bad_request(
            "batch_prompts_not_supported",
            "Batched prompt arrays are not supported for gRPC /v1/completions yet",
        );
    }

    if body.stream && body.logprobs.is_some() {
        return error::bad_request(
            "streaming_logprobs_not_supported",
            "Streaming logprobs are not supported for gRPC /v1/completions",
        );
    }

    if HarmonyDetector::is_harmony_model_in_registry(&self.worker_registry, &body.model) {
        return error::bad_request(
            "harmony_completion_not_supported",
            "Completion requests are not supported with Harmony models".to_string(),
        );
    }

    let request = Arc::new(body.clone());
    let headers_cloned = headers.cloned();
    let model_id_cloned = model_id.map(|s| s.to_string());
    let components = self.shared_components.clone();
    let pipeline = &self.pipeline;

    RetryExecutor::execute_response_with_retry(
        &self.retry_config,
        |_attempt| {
            let request = Arc::clone(&request);
            let headers = headers_cloned.clone();
            let model_id = model_id_cloned.clone();
            let components = Arc::clone(&components);
            async move {
                pipeline
                    .execute_completion(request, headers, model_id, components)
                    .await
            }
        },
        |res, _attempt| is_retryable_status(res.status()),
        |delay, attempt| {
            Metrics::record_worker_retry(
                metrics_labels::WORKER_REGULAR,
                metrics_labels::ENDPOINT_COMPLETIONS,
            );
            Metrics::record_worker_retry_backoff(attempt, delay);
        },
        || {
            Metrics::record_worker_retries_exhausted(
                metrics_labels::WORKER_REGULAR,
                metrics_labels::ENDPOINT_COMPLETIONS,
            );
        },
    )
    .await
}


medium

The implementation of route_completion_impl is very similar to other route_*_impl functions, especially the RetryExecutor logic. This PR adds to the existing code duplication. Consider refactoring this common retry logic into a generic helper function or macro to improve maintainability and reduce boilerplate.

References
  1. Extract duplicated logic into a shared helper function to improve maintainability and reduce redundancy.
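The refactor the reviewer suggests could look roughly like the sketch below: a generic helper owning the attempt loop, with the per-endpoint operation and retryability check passed in as closures. This is a simplified synchronous sketch under assumed names (`execute_with_retry` is hypothetical, and the real `RetryExecutor` is async and also records metrics and backoff):

```rust
// Hedged sketch of extracting the shared retry loop out of each route_*_impl.
// A real version would be async and hook in metrics/backoff callbacks.

fn execute_with_retry<T, F, R>(max_attempts: u32, mut operation: F, mut is_retryable: R) -> T
where
    F: FnMut(u32) -> T,
    R: FnMut(&T) -> bool,
{
    let mut attempt = 0;
    loop {
        let result = operation(attempt);
        attempt += 1;
        // Stop when attempts are exhausted or the result is not retryable.
        if attempt >= max_attempts || !is_retryable(&result) {
            return result;
        }
        // A real implementation would back off and record a retry metric here.
    }
}

fn main() {
    // Simulate an operation that fails on the first two attempts.
    let result = execute_with_retry(
        5,
        |attempt| if attempt < 2 { Err(attempt) } else { Ok(attempt) },
        |r: &Result<u32, u32>| r.is_err(),
    );
    assert_eq!(result, Ok(2));
    println!("ok");
}
```

Each `route_*_impl` would then shrink to its endpoint-specific validation plus a single call into the shared helper.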
