feat(render): implement VllmRender gRPC service for GPU-less rendering #784
hyeongyun0916 wants to merge 20 commits into lightseekorg:main
Conversation
…ing RPCs Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Note: Reviews paused. This branch appears to be under active development, so to avoid overwhelming you with review comments from an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable; use the following commands to manage reviews:
📝 Walkthrough
Adds a GPU-less vLLM render gRPC surface: new vllm_render proto, client build/export updates, RenderGrpcServicer implementation, proto↔Pydantic conversion utilities and field transforms, packaging/dependency bumps, and comprehensive unit tests for the new functionality.
Sequence Diagram(s)
sequenceDiagram
participant Client as Client
participant RenderServicer as RenderGrpcServicer
participant ProtoUtils as ProtoUtils
participant Pydantic as PydanticModel
participant Renderer as vLLM_Render
Client->>RenderServicer: RenderChat(RenderChatRequest proto)
RenderServicer->>ProtoUtils: from_proto(proto, transforms)
ProtoUtils->>ProtoUtils: MessageToDict + _apply_transforms
ProtoUtils->>Pydantic: construct request model
Pydantic-->>RenderServicer: request instance
RenderServicer->>Renderer: render_chat_request(request)
Renderer-->>RenderServicer: GenerateRequest (Pydantic)
RenderServicer->>ProtoUtils: pydantic_to_proto(GenerateRequest)
ProtoUtils-->>RenderServicer: GenerateRequestProto
RenderServicer-->>Client: RenderChatResponse(GenerateRequestProto)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Highlights
Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Code Review
This pull request introduces a new VllmRender gRPC service for GPU-less rendering. The implementation is well-structured, separating concerns into protobuf definitions, conversion utilities, and the service logic. The code is clean, robustly handles errors, and is accompanied by a comprehensive test suite. I have one minor suggestion to improve code clarity in the servicer implementation by removing some unreachable code. Overall, this is an excellent contribution.
            grpc.StatusCode.UNIMPLEMENTED,
            "RenderChat is not configured on this server.",
        )
        return
The return statement here is unreachable because context.abort() raises a grpc.aio.AbortError exception, which terminates the method's execution. Removing this unreachable code improves clarity.
This same pattern of an unreachable return after context.abort() also occurs on lines 77, 95, and 109. All of them can be removed.
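To make the point concrete, here is a dependency-free sketch of the pattern under discussion; `FakeContext` and `AbortError` below are stand-ins for the grpc.aio types, not the real API:

```python
import asyncio

class AbortError(Exception):
    """Stand-in for grpc.aio.AbortError."""

class FakeContext:
    async def abort(self, code, details):
        # Like ServicerContext.abort in grpc.aio, this never returns normally.
        raise AbortError(f"{code}: {details}")

async def render_chat(serving_render, context):
    if serving_render is None:
        await context.abort("UNIMPLEMENTED", "RenderChat is not configured.")
        return  # unreachable: abort() above always raises

outcome = None
try:
    asyncio.run(render_chat(None, FakeContext()))
except AbortError as e:
    outcome = str(e)
```

Since `abort()` always raises, the trailing `return` can never run, which is why removing it (or keeping it purely as a readability aid) changes nothing observable.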
Force-pushed from a7c5574 to 4da7a0b
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a7c5574205
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
FIELD_TRANSFORMS: dict[str, tuple[str, Any]] = {
    "parameters_json": ("parameters", json.loads),
    "content_parts": ("content", None),
    "prompt": ("prompt", flatten_completion_prompt),
    "messages": ("messages", _ensure_message_content),
Parse structured tool_choice before building ChatCompletionRequest
RenderChatRequest documents that tool_choice may be JSON ("none"/"auto"/"required" or JSON), but the transform table never deserializes that field, so from_proto(...) forwards raw JSON text into ChatCompletionRequest instead of an object. Requests that force a specific tool (for example a named function choice) will therefore be interpreted as a plain string and fail validation or behave incorrectly, which breaks tool-calling render flows.
except Exception as e:
    await context.abort(grpc.StatusCode.INTERNAL, str(e))
Map request-decoding failures to INVALID_ARGUMENT
The broad except Exception path converts all parsing/validation failures into INTERNAL, so malformed client payloads (for example bad JSON in parameters_json during proto→dict transforms) are reported as server faults instead of request errors. This misclassifies user input bugs as 500s, can trigger unnecessary retries/alerts, and makes debugging client-side request issues harder.
…AbortError handling Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…ethods Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d86106a2db
return vllm_engine_pb2.GetModelInfoResponse(
    model_path=model_config.model,
    is_generation=model_config.runner_type == "generate",
    max_context_length=model_config.max_model_len,
    vocab_size=model_config.get_vocab_size(),
Populate served_model_name in render model info
GetModelInfo builds a vllm_engine_pb2.GetModelInfoResponse but never sets served_model_name, so this RPC always reports an empty alias even when the server is configured with a custom served model name. In deployments where served_model_name differs from model_path, downstream discovery/routing that prioritizes this label (for example model ID selection in model_gateway/src/core/steps/worker/local/create_worker.rs) can pick the wrong identifier and misroute traffic; this should mirror the engine servicer behavior by filling served_model_name from model config.
Already addressed in 9cd6eb2 — GetModelInfo now sets served_model_name from model_config.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/grpc_client/proto/vllm_render.proto`:
- Around line 43-49: The ContentPart message currently allows multiple payload
fields to be set simultaneously; change its definition to enforce exclusivity by
wrapping text, image_url, input_audio, and video_url inside a oneof (e.g., oneof
payload) so only one variant can be present at a time; update the ContentPart
message (and any generated/consuming code expectations) to use the oneof payload
for the fields referenced as text, image_url (ImageUrlContent), input_audio
(InputAudioContent), and video_url (VideoUrlContent) to match the Rust enum
semantics.
In `@grpc_servicer/pyproject.toml`:
- Around line 33-35: The dev extra is missing the vllm dependency which causes
pip install -e .[dev] to fail because tests import smg_grpc_servicer.vllm.*
(top-level imports from vllm like vllm.logger, vllm.outputs); update the dev
extra (the "dev" entry in pyproject.toml) to include vllm (e.g., add
"vllm>=0.16.0" to the list or reference the vllm extra via ".[vllm]") so
installing the dev extras pulls in vllm.
In `@grpc_servicer/smg_grpc_servicer/vllm/field_transforms.py`:
- Line 41: Replace the silent "return None" in the CompletionPrompt handling
code with an explicit ValueError so malformed prompt dicts fail fast; locate the
branch that checks/handles CompletionPrompt shapes in field_transforms.py (the
code that currently returns None for unknown prompt dicts) and raise
ValueError("Unsupported CompletionPrompt shape") (or a similarly descriptive
message including the offending value) instead of returning None so the caller
can map it to an INVALID_ARGUMENT error.
In `@grpc_servicer/smg_grpc_servicer/vllm/proto_utils.py`:
- Around line 47-50: The pydantic_to_proto function currently calls
ParseDict(..., ignore_unknown_fields=True) which silently drops unknown fields;
change it to fail-fast or explicitly whitelist fields: either remove
ignore_unknown_fields=True so ParseDict raises on unknown keys, or derive an
allowlist from the target proto (e.g., use message_class.DESCRIPTOR.fields to
get allowed field names) and filter the dict returned by
model.model_dump(mode="json", exclude_none=True) to only those keys before
calling ParseDict; reference the pydantic_to_proto function, the message_class
parameter, and ParseDict when making the change.
In `@grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py`:
- Around line 40-46: The GetModelInfoResponse / GetServerInfoResponse currently
rely on proto defaults for shared fields; explicitly set served_model_name,
active_requests, is_paused, kv_connector, and kv_role when constructing
responses in render_servicer.py (the GetModelInfoResponse return and the
analogous GetServerInfoResponse around lines 49-53) so consumers aren’t left
with ambiguous defaults—use the appropriate values from model_config or server
state (e.g., served model identifier from model_config, current active request
count, paused state flag, and KV connector/role info) and fall back to explicit
zero/empty values only if the source is absent, then run the
request_verification mentioned to ensure no consumer expects implicit defaults.
- Around line 86-87: Replace the direct exposure of internal exception text in
the except blocks that call await context.abort(grpc.StatusCode.INTERNAL,
str(e)) (occurrences around the context.abort calls at lines referenced) by
logging the full exception server-side (use logger.exception(...) or create
module logger = logging.getLogger(__name__) and call
logger.exception("render_servicer error")) and then aborting with a generic
message such as await context.abort(grpc.StatusCode.INTERNAL, "Internal server
error"); update both places that use str(e) (the except blocks referencing
variable e) to follow this pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4f4cb66a-2f59-41a7-b57f-d246aa9f1b0f
📒 Files selected for processing (13)
crates/grpc_client/proto/vllm_render.proto
crates/grpc_client/python/pyproject.toml
crates/grpc_client/python/smg_grpc_proto/__init__.py
grpc_servicer/pyproject.toml
grpc_servicer/smg_grpc_servicer/vllm/__init__.py
grpc_servicer/smg_grpc_servicer/vllm/field_transforms.py
grpc_servicer/smg_grpc_servicer/vllm/proto_utils.py
grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py
grpc_servicer/tests/__init__.py
grpc_servicer/tests/conftest.py
grpc_servicer/tests/test_field_transforms.py
grpc_servicer/tests/test_proto_utils.py
grpc_servicer/tests/test_render_servicer.py
…ypes Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…r empty dict and unknown keys Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…s parameter Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…de served_model_name and additional server info Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…letion methods Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/grpc_client/proto/vllm_render.proto`:
- Around line 119-121: The TokenIdSequence.message and GenerateRequestProto use
different scalar types for token IDs (TokenIdSequence.token_ids is uint32 while
GenerateRequestProto.token_ids is int32); pick one consistent scalar (either
change TokenIdSequence.token_ids to int32 or change
GenerateRequestProto.token_ids to uint32), update both message definitions to
the chosen type, and then regenerate protobuf artifacts and update any code that
relies on TokenIdSequence or GenerateRequestProto to the unified type to avoid
type mismatches.
In `@grpc_servicer/smg_grpc_servicer/vllm/proto_utils.py`:
- Line 20: The FieldTransforms type alias currently uses Any for the transform
which weakens static checking; update FieldTransforms to use a callable type
such as Callable[[Any], Any] | None (e.g. FieldTransforms = dict[str, tuple[str,
Callable[[Any], Any] | None]]) and add the necessary import for Callable from
typing so editors and type-checkers can validate transform functions used by the
code.
In `@grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py`:
- Around line 9-24: Move the module-level logger initialization so all imports
are grouped above it: relocate the line defining logger =
logging.getLogger(__name__) to after the import block (after the last import,
e.g. after the import of ErrorResponse) so no imports are interleaved with
module-level code in render_servicer.py; ensure any code relying on the logger
still references the same symbol name (logger).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 401e2da5-518d-4628-8277-b476a83f9857
📒 Files selected for processing (9)
crates/grpc_client/proto/vllm_render.proto
grpc_servicer/pyproject.toml
grpc_servicer/smg_grpc_servicer/vllm/field_transforms.py
grpc_servicer/smg_grpc_servicer/vllm/proto_utils.py
grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py
grpc_servicer/tests/conftest.py
grpc_servicer/tests/test_field_transforms.py
grpc_servicer/tests/test_proto_utils.py
grpc_servicer/tests/test_render_servicer.py
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9ee771ad64
repeated ChatCompletionTool tools = 3;
optional string tool_choice = 4; // "none"/"auto"/"required" or JSON
optional string chat_template = 5;
bool add_generation_prompt = 6;
Make render bool overrides presence-aware
RenderChatRequest.add_generation_prompt (and similarly RenderCompletionRequest.add_special_tokens) is defined as a plain proto3 bool, but request decoding goes through MessageToDict in from_proto, which drops false-valued scalar fields without presence. That means a client explicitly sending false cannot be distinguished from “unset”, so downstream request defaults are used instead of the caller’s override; in render flows this can flip prompt templating/tokenization behavior for valid inputs that require disabling these flags.
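A sketch of the decode-side consequence: with a plain proto3 bool, MessageToDict-style decoding drops `false`, so a dict-based consumer can only honor an explicit `false` if the field has presence. Assuming `optional bool` (so the key survives in the decoded dict whenever the client set it), resolution could look like:

```python
_UNSET = object()

def resolve_flag(decoded: dict, key: str, default: bool) -> bool:
    # With proto3 `optional bool`, the key appears in the decoded dict
    # whenever the client set the field -- even to False. A missing key
    # therefore means "unset", and the model default applies.
    value = decoded.get(key, _UNSET)
    return default if value is _UNSET else value

a = resolve_flag({}, "add_generation_prompt", True)   # unset -> default
b = resolve_flag({"add_generation_prompt": False}, "add_generation_prompt", True)
```

Without field presence, case `b` is indistinguishable from case `a`, which is exactly the templating-flag flip described above.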
Changed add_generation_prompt, continue_final_message, and add_special_tokens to optional bool for presence awareness.
new_key, fn = transforms[key]
result[new_key] = fn(value) if fn else value
Reject transformed-key collisions during proto mapping
_apply_transforms silently overwrites existing keys when a rename targets a key already present (for example content_parts -> content while content is also set in the same chat message). Because RenderChatRequest allows both fields (not a oneof), contradictory payloads are currently accepted with lossy, order-dependent data selection instead of returning INVALID_ARGUMENT, which can corrupt rendered prompts in malformed-but-possible client requests.
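A sketch of what such a guard could look like; the surrounding loop is reconstructed from the quoted lines, and the collision check is the addition:

```python
def _apply_transforms(data: dict, transforms: dict) -> dict:
    result = {}
    for key, value in data.items():
        if key in transforms:
            new_key, fn = transforms[key]
            if new_key != key and new_key in data:
                # e.g. both content_parts and content set on the same message
                raise ValueError(f"conflicting fields: {key!r} and {new_key!r}")
            result[new_key] = fn(value) if fn else value
        else:
            result[key] = value
    return result

TRANSFORMS = {"content_parts": ("content", None)}
ok = _apply_transforms({"content_parts": [{"type": "text"}]}, TRANSFORMS)

error = None
try:
    _apply_transforms({"content_parts": ["x"], "content": "y"}, TRANSFORMS)
except ValueError as e:
    error = str(e)
```

The raised `ValueError` can then be mapped to INVALID_ARGUMENT by the servicer's existing error handling instead of silently picking one payload.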
Current FIELD_TRANSFORMS mappings don't have source/target key overlap with proto field names, so collision can't occur. Will add a guard if the transform table grows.
…tProto Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…Request to optional Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ace448ac24
if "text" in prompt_dict:
    return prompt_dict["text"]
if "texts" in prompt_dict:
    return list(prompt_dict["texts"]["texts"])
Guard empty completion prompt branches before indexing
flatten_completion_prompt assumes nested keys like texts.texts always exist, but MessageToDict drops empty repeated fields, so a request that selects the texts oneof with an empty list can deserialize as {"texts": {}} and trigger KeyError here. That exception is then surfaced as INTERNAL by RenderCompletion (it only maps ValueError/TypeError), misclassifying a client-side payload issue and breaking empty-prompt edge cases.
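A guarded sketch of `flatten_completion_prompt` along these lines; the `token_ids` branch and the exact error text are assumptions mirroring the `TokenIdSequence` message and the match string used by the tests elsewhere in this PR:

```python
def flatten_completion_prompt(prompt_dict: dict):
    # MessageToDict drops empty repeated fields, so nested containers like
    # {"texts": {}} are possible; .get() keeps those from raising KeyError.
    if "text" in prompt_dict:
        return prompt_dict["text"]
    if "texts" in prompt_dict:
        return list(prompt_dict["texts"].get("texts", []))
    if "token_ids" in prompt_dict:
        return list(prompt_dict["token_ids"].get("token_ids", []))
    raise ValueError("CompletionPrompt: no supported oneof field set")

empty_err = None
try:
    flatten_completion_prompt({})
except ValueError as e:
    empty_err = str(e)
```

The explicit `ValueError` lets the servicer classify an unsupported shape as INVALID_ARGUMENT rather than INTERNAL.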
Guarded nested key access in flatten_completion_prompt with .get() to handle empty repeated fields dropped by MessageToDict.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/grpc_client/proto/vllm_render.proto`:
- Around line 82-85: Wrap content and content_parts in a proto oneof so they are
mutually exclusive: in ChatCompletionMessage inside
crates/grpc_client/proto/vllm_render.proto replace the two standalone fields
content and content_parts with a oneof (e.g. oneof content_oneof { string
content = 2; repeated ContentPart content_parts = 3; }) so gRPC/protobuf
enforces exclusivity; after regenerating protos, update any code paths that
reference the fields (notably ChatCompletionMessage handling and the remapping
logic in grpc_servicer/smg_grpc_servicer/vllm/field_transforms.py) to read from
the oneof (check which oneof case is set) before mapping to
ChatCompletionRequest.
- Around line 1-20: The build script currently only compiles common.proto,
sglang_scheduler.proto, vllm_engine.proto, and trtllm_service.proto, so the
VllmRender service and its messages (VllmRender, RenderChatRequest/Response,
RenderCompletionRequest/Response) are not being generated; update
crates/grpc_client/build.rs to include "proto/vllm_render.proto" in the list
passed to tonic_build/prost_build (or whichever proto compiler is used), ensure
the include path that contains vllm_render.proto is present, and re-run the
build to generate bindings for the VllmRender service and its RPCs (RenderChat
and RenderCompletion).
In `@grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py`:
- Around line 70-96: The ValueError/TypeError handler currently wraps both input
parsing and response serialization in RenderChat, causing server-side
serialization errors to return INVALID_ARGUMENT; refactor the handler so input
validation (from_proto and render_chat_request call) is inside a narrow try that
catches ValueError/TypeError and aborts with INVALID_ARGUMENT, then perform
pydantic_to_proto(...) and response assembly in a separate block so any
serialization errors bubble to the broader Exception handler (which logs and
aborts with INTERNAL); apply the same split to the RenderCompletion handler and
preserve the existing grpc.aio.AbortError re-raise behavior and ErrorResponse
handling (references: from_proto, render_chat_request, pydantic_to_proto,
RenderChat and RenderCompletion handlers).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: f658d408-ad63-45fd-987e-562ba8c45d2c
📒 Files selected for processing (2)
crates/grpc_client/proto/vllm_render.proto
grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py
Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…ilures in RenderChat and RenderCompletion Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
…ty fields and batches Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Hi @hyeongyun0916, this PR has merge conflicts that must be resolved before it can be merged. Please rebase your branch:

git fetch origin main
git rebase origin/main
# resolve any conflicts, then:
git push --force-with-lease
…m-render Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
@hyeongyun0916 i get what this is trying to do
reading more into the design, i think i have a better understanding now
Thanks for the detailed suggestion! One concern: vLLM uses custom tokenizer implementations for several models (e.g., …). I'm not sure how many of these cases SMG currently covers, and whether the resulting token_ids would match vLLM's output for all of them.
thanks for the reply
…ple files Signed-off-by: HyunKyun Moon <mhg5303@gmail.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py`:
- Around line 98-103: The RenderCompletion method may continue executing after
calling context.abort(), so add an explicit return immediately after the await
context.abort(...) call inside RenderCompletion to ensure the method exits when
self.state.openai_serving_render is None; update the RenderCompletion function
(reference: RenderCompletion, context.abort, self.state.openai_serving_render)
to place a return following the abort invocation.
- Around line 65-70: In RenderChat, after calling await context.abort(...) add
an explicit return to make control flow clear and satisfy linters; specifically,
update the async method RenderChat so that immediately after the abort call you
return (so subsequent references like result on RenderChat's later code path are
not flagged as potentially uninitialized).
In `@grpc_servicer/tests/test_field_transforms.py`:
- Around line 51-61: Move the inline imports of pytest out of the test functions
into a single module-level import at the top of the file; specifically remove
the "import pytest" lines inside test_empty_dict_raises_value_error and
test_unknown_keys_raises_value_error and add a single "import pytest" at the
file scope so both functions (and any other tests) use the module-level pytest
import.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 87a0f179-a5e8-40f4-b89c-1706285c49ec
📒 Files selected for processing (5)
grpc_servicer/smg_grpc_servicer/vllm/field_transforms.py
grpc_servicer/smg_grpc_servicer/vllm/render_servicer.py
grpc_servicer/tests/test_field_transforms.py
grpc_servicer/tests/test_proto_utils.py
grpc_servicer/tests/test_render_servicer.py
async def RenderChat(self, request, context):
    if self.state.openai_serving_render is None:
        await context.abort(
            grpc.StatusCode.UNIMPLEMENTED,
            "RenderChat is not configured on this server.",
        )
🧹 Nitpick | 🔵 Trivial
Add explicit return after context.abort() for clarity.
context.abort() raises grpc.aio.AbortError internally, so the code after it won't execute. However, adding an explicit return makes the control flow clearer and prevents linter warnings about potentially unbound variables (result on line 92).
♻️ Suggested fix
async def RenderChat(self, request, context):
if self.state.openai_serving_render is None:
await context.abort(
grpc.StatusCode.UNIMPLEMENTED,
"RenderChat is not configured on this server.",
)
+        return  # abort raises, but explicit return aids readability
async def RenderCompletion(self, request, context):
    if self.state.openai_serving_render is None:
        await context.abort(
            grpc.StatusCode.UNIMPLEMENTED,
            "RenderCompletion is not configured on this server.",
        )
🧹 Nitpick | 🔵 Trivial
Same suggestion: add explicit return after context.abort().
Same clarity improvement applies to RenderCompletion.
♻️ Suggested fix
async def RenderCompletion(self, request, context):
if self.state.openai_serving_render is None:
await context.abort(
grpc.StatusCode.UNIMPLEMENTED,
"RenderCompletion is not configured on this server.",
)
+        return  # abort raises, but explicit return aids readability
def test_empty_dict_raises_value_error(self):
    import pytest

    with pytest.raises(ValueError, match="no supported oneof field set"):
        flatten_completion_prompt({})

def test_unknown_keys_raises_value_error(self):
    import pytest

    with pytest.raises(ValueError, match="no supported oneof field set"):
        flatten_completion_prompt({"unknown": "value"})
🧹 Nitpick | 🔵 Trivial
Move pytest import to module level.
pytest is imported inline within test methods (lines 52 and 58). Since pytest is already a test dependency and used elsewhere, move the import to the top of the file for consistency.
♻️ Suggested fix
from smg_grpc_servicer.vllm.field_transforms import (
FIELD_TRANSFORMS,
_ensure_message_content,
_parse_tool_choice,
flatten_completion_prompt,
)
+import pytest

Then remove the inline imports:

 def test_empty_dict_raises_value_error(self):
-    import pytest
-
     with pytest.raises(ValueError, match="no supported oneof field set"):
         flatten_completion_prompt({})

 def test_unknown_keys_raises_value_error(self):
-    import pytest
-
     with pytest.raises(ValueError, match="no supported oneof field set"):
         flatten_completion_prompt({"unknown": "value"})

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def test_empty_dict_raises_value_error(self):
    with pytest.raises(ValueError, match="no supported oneof field set"):
        flatten_completion_prompt({})

def test_unknown_keys_raises_value_error(self):
    with pytest.raises(ValueError, match="no supported oneof field set"):
        flatten_completion_prompt({"unknown": "value"})
Description
Context
This PR adds render gRPC support (VllmRender service) to smg-grpc-proto and smg-grpc-servicer, required by vllm-project/vllm#36102.
Per review feedback, the render servicer should live in this package rather than in the vllm repo, following the same pattern as VllmEngineServicer (#36169).
Problem
vLLM's disaggregated serving architecture requires a GPU-less render node that applies chat templates and tokenizes requests without running inference. Currently there is no gRPC interface for this render-only functionality, limiting communication between prefill/decode nodes and the render node to HTTP only.
Solution
Implement a new VllmRender gRPC service with management RPCs (HealthCheck, GetModelInfo, GetServerInfo) and rendering RPCs (RenderChat, RenderCompletion). The service converts protobuf messages to vLLM's Pydantic request models, delegates to openai_serving_render, and serializes responses back to proto.
Changes
- vllm_render.proto defining the VllmRender service, chat/completion rendering messages, and GenerateRequestProto
- RenderGrpcServicer implementing all VllmRender RPCs with proper gRPC status code error handling
- proto_utils.py with generic protobuf ↔ Pydantic/dict conversion utilities (proto_to_dict, from_proto, pydantic_to_proto)
- field_transforms.py with transform rules bridging proto field naming limitations to vLLM's OpenAI-compatible Python models
- Export vllm_render_pb2 / vllm_render_pb2_grpc from the smg-grpc-proto package
- Bump smg-grpc-proto to 0.5.0 and smg-grpc-servicer to 0.6.0
Test Plan
- pytest grpc_servicer/tests/ -v — 51 passed
- pip install -e crates/grpc_client/python/ builds vllm_render_pb2 stubs successfully
Checklist
- cargo +nightly fmt passes
- cargo clippy --all-targets --all-features -- -D warnings passes
Summary by CodeRabbit