Conversation

Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings.

Use the following commands to manage reviews, or use the checkboxes below for quick actions:
📝 Walkthrough

Adds a polling endpoint to retrieve asynchronous LLM job status/results (`GET /llm/call/{job_id}`).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant API as API Route<br/>(GET /llm/call/{job_id})
    participant DB as Database<br/>(JobCrud)
    participant Calls as LLMCallsCrud<br/>(get_llm_calls_by_job_id)
    Client->>API: GET /llm/call/{job_id}
    API->>DB: Fetch job by UUID
    DB-->>API: Job (found / not found)
    alt Job not found
        API-->>Client: HTTP 404
    else Job found
        API->>API: Inspect job.status
        alt status == "SUCCESS"
            API->>Calls: get_llm_calls_by_job_id(job_id, project_id)
            Calls-->>API: LLM calls list
            API->>API: Map first call -> LLMCallResponse + Usage
        else status != "SUCCESS"
            API->>API: llm_response = null
        end
        API-->>Client: APIResponse[LLMJobPublic]
    end
```
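The branching in the diagram can be condensed into a small, framework-free sketch (illustrative only — the real handler uses FastAPI dependencies, `JobCrud`, and `LLMCallsCrud`; the function name and the payload field names below are assumptions):

```python
from typing import Any, Optional


def build_job_payload(job: Optional[Any], llm_calls: list[Any]) -> dict[str, Any]:
    """Mirror the polling flow: 404 when the job is missing, and attach the
    first LLM call only when the job finished with SUCCESS."""
    if job is None:
        return {"status_code": 404, "detail": "Job not found"}
    llm_response = None
    if job.status == "SUCCESS" and llm_calls:
        # the real route maps this record to LLMCallResponse + Usage
        llm_response = llm_calls[0]
    return {"status_code": 200, "job_status": job.status, "llm_response": llm_response}
```

For any non-SUCCESS status the client simply polls again, since `llm_response` stays `null` until the job completes.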
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Codecov Report

❌ Patch coverage is

📢 Thoughts on this report? Let us know!
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/routes/llm.py`:
- Around line 109-113: The endpoint currently fetches Job via
JobCrud.get(job_id=job_id) and returns its status without verifying ownership or
job type; update the logic to authorize the job before responding by ensuring
the retrieved job belongs to the current user/tenant and is an LLM job: either
extend JobCrud.get to accept a tenant/project/user filter (e.g., pass project_id
or owner_id from _current_user) or, after fetching, check job.project_id ==
_current_user.project_id (or job.owner_id == _current_user.id) and check
job.type == "llm" (or JobType.LLM); if either check fails, raise
HTTPException(status_code=404, detail="Job not found") to avoid leaking job
existence or non-LLM jobs.
- Around line 100-104: The new route handler get_llm_call_status has typed
parameters but is missing a return type annotation; update the function
signature for get_llm_call_status to include an explicit return type that
matches what the endpoint actually returns (for example -> models.LLMCallStatus,
-> dict, or -> fastapi.Response / starlette.responses.JSONResponse), ensuring it
aligns with the response model used elsewhere in the module and other route
handlers.
- Around line 132-138: The code constructs LLMCallResponse with a possibly
missing usage (Usage(**llm_call.usage) if llm_call.usage else None) while
LLMCallResponse.usage is required; ensure you always pass a valid Usage instance
to LLMCallResponse by creating a default/minimal Usage when llm_call.usage is
absent (e.g., construct Usage() or Usage with zeroed fields) instead of None,
updating the creation of llm_call_response to use that guaranteed Usage object
(referencing Usage, llm_call.usage, LLMCallResponse, llm_call_response, and
llm_response).
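The `Usage` fallback described above can be sketched with a small helper (a dataclass stand-in for the project's Pydantic `Usage` model; the token field names are assumptions):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    # stand-in for the Pydantic Usage model; zeroed defaults give a valid
    # instance even when the stored call has no usage data
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0


def usage_or_default(raw: Optional[dict]) -> Usage:
    """Always hand LLMCallResponse a Usage instance, never None."""
    return Usage(**raw) if raw else Usage()
```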
In `@backend/app/models/llm/response.py`:
- Around line 124-128: The LLMJobPublic Pydantic schema is missing the timestamp
fields that the GET handler includes in its response (causing schema/route
drift); update the schema by adding the job_inserted_at and job_updated_at
attributes (appropriate datetime types) to LLMJobPublic (or to LLMJobBasePublic
if shared across responses) so the model matches the route payload and
documented shape (ensure any necessary imports/annotations are present).
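The schema addition can be sketched as follows (a dataclass stand-in for the Pydantic `LLMJobPublic`; the existing fields are elided and only the two new timestamps are shown):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LLMJobPublic:
    # ...existing job fields elided for brevity...
    status: str
    job_inserted_at: datetime  # new: matches the timestamp the route returns
    job_updated_at: datetime   # new: matches the timestamp the route returns
```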
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5bf106a3-52b5-4bad-b006-89cbaf04510e
📒 Files selected for processing (5)
- backend/app/api/docs/llm/get_llm_call.md
- backend/app/api/routes/llm.py
- backend/app/models/__init__.py
- backend/app/models/llm/__init__.py
- backend/app/models/llm/response.py
```python
job_crud = JobCrud(session=session)
job = job_crud.get(job_id=job_id)

if not job:
    raise HTTPException(status_code=404, detail="Job not found")
```
Authorize job_id before returning status.
This lookup is scoped only by raw UUID, and _current_user is otherwise unused here. Any caller with project access who learns a job ID can poll another tenant's job, and this endpoint will also return non-LLM jobs unless you enforce ownership and expected job type before responding.
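A framework-free sketch of the ownership-plus-type check (attribute names such as `project_id` and the `"llm"` job type are assumptions; the real route would raise `HTTPException(404)` instead of returning `None`):

```python
from typing import Any, Optional


def authorize_llm_job(job: Optional[Any], project_id: Any, llm_type: str = "llm") -> Optional[Any]:
    """Return the job only if it exists, belongs to the caller's project,
    and is an LLM job; otherwise behave exactly as if it were not found."""
    if job is None:
        return None
    if getattr(job, "project_id", None) != project_id:
        return None  # do not leak cross-tenant job existence
    if getattr(job, "type", None) != llm_type:
        return None  # do not expose non-LLM jobs on this endpoint
    return job
```

Returning the same "not found" result for missing, foreign, and non-LLM jobs keeps the 404 indistinguishable, which is the point of the review comment.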
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/routes/llm.py` around lines 109 - 113, The endpoint currently
fetches Job via JobCrud.get(job_id=job_id) and returns its status without
verifying ownership or job type; update the logic to authorize the job before
responding by ensuring the retrieved job belongs to the current user/tenant and
is an LLM job: either extend JobCrud.get to accept a tenant/project/user filter
(e.g., pass project_id or owner_id from _current_user) or, after fetching, check
job.project_id == _current_user.project_id (or job.owner_id == _current_user.id)
and check job.type == "llm" (or JobType.LLM); if either check fails, raise
HTTPException(status_code=404, detail="Job not found") to avoid leaking job
existence or non-LLM jobs.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/api/routes/llm.py (1)
51-53: ⚠️ Potential issue | 🟡 Minor

Add the return type annotation to `llm_call`. The function parameters are typed, but the return type is missing.

Suggested fix

```diff
 def llm_call(
     _current_user: AuthContextDep, session: SessionDep, request: LLMCallRequest
-):
+) -> APIResponse[LLMJobImmediatePublic]:
```

As per coding guidelines, "Always add type hints to all function parameters and return values in Python code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/routes/llm.py` around lines 51 - 53, Add an explicit return type annotation to the llm_call function signature (e.g., def llm_call(... ) -> LLMCallResponse:) matching the actual response object returned by the function; import or define the appropriate response type (LLMCallResponse or the concrete Response/Dict type used across the module) and update the signature for llm_call to include it so all parameters and the return value are fully typed.
♻️ Duplicate comments (1)
backend/app/api/routes/llm.py (1)
109-113: ⚠️ Potential issue | 🔴 Critical

Authorize `job_id` before returning status. This was flagged in a previous review and remains unresolved. `JobCrud.get(job_id)` performs an unscoped lookup by UUID only (see `backend/app/crud/jobs.py:41-42`). While `get_llm_calls_by_job_id` on lines 117-119 correctly filters by `project_id`, the job's `status` and `error_message` are exposed in the response (lines 148, 150) before that filter takes effect. Any authenticated user who discovers a job UUID can poll another project's job status, leaking information about cross-tenant job existence and error messages.

Suggested approach

Either extend `JobCrud.get()` to accept a `project_id` filter, or verify ownership after fetching:

```diff
 job_crud = JobCrud(session=session)
 job = job_crud.get(job_id=job_id)
-if not job:
+if not job or job.project_id != _current_user.project_.id:
     raise HTTPException(status_code=404, detail="Job not found")
```

Note: This assumes the `Job` model has a `project_id` field. If not, you may need to join through `LlmCall` or add the field to `Job`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/routes/llm.py` around lines 109 - 113, The handler currently calls JobCrud.get(job_id) and exposes job.status and job.error_message before verifying tenant ownership; update the logic to enforce project scoping: either extend JobCrud.get to accept a project_id and fetch by (job_id, project_id), or after calling JobCrud.get(job_id) validate that job.project_id == project_id (or, if Job has no project_id, perform a join/query via LlmCall to confirm the job is associated with the requested project) and raise HTTPException(404) if not owned, ensuring job.status/error_message are only returned for owned jobs; adjust references to JobCrud.get and get_llm_calls_by_job_id accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@backend/app/api/routes/llm.py`:
- Around line 51-53: Add an explicit return type annotation to the llm_call
function signature (e.g., def llm_call(... ) -> LLMCallResponse:) matching the
actual response object returned by the function; import or define the
appropriate response type (LLMCallResponse or the concrete Response/Dict type
used across the module) and update the signature for llm_call to include it so
all parameters and the return value are fully typed.
---
Duplicate comments:
In `@backend/app/api/routes/llm.py`:
- Around line 109-113: The handler currently calls JobCrud.get(job_id) and
exposes job.status and job.error_message before verifying tenant ownership;
update the logic to enforce project scoping: either extend JobCrud.get to accept
a project_id and fetch by (job_id, project_id), or after calling
JobCrud.get(job_id) validate that job.project_id == project_id (or, if Job has
no project_id, perform a join/query via LlmCall to confirm the job is associated
with the requested project) and raise HTTPException(404) if not owned, ensuring
job.status/error_message are only returned for owned jobs; adjust references to
JobCrud.get and get_llm_calls_by_job_id accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f3466bdd-8be7-418f-b6ca-df6ad2f2922d
📒 Files selected for processing (2)
- backend/app/api/routes/llm.py
- backend/app/crud/llm.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/tests/api/routes/test_llm.py`:
- Around line 21-24: The llm_job fixture currently constructs and persists a Job
inline using JobCrud.create with JobType.LLM_API; refactor this to use the test
factory for jobs (e.g., JobFactory or equivalent) by replacing direct JobCrud
usage in the llm_job fixture with a call to the job factory and returning its
persisted instance, and update any related fixtures that create LLM-call records
(lines 27-64) to consume corresponding LLMCallFactory (or similarly named
factories) instead of manual creation so all fixtures use the standard factory
pattern.
- Around line 323-329: The test function test_get_llm_call_success currently
accepts an unused fixture parameter llm_response_in_db which triggers Ruff
ARG001; remove llm_response_in_db from the function signature and annotate the
test with `@pytest.mark.usefixtures`("llm_response_in_db") so the fixture still
runs for side effects while eliminating the unused-argument lint error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 60eab00f-9de7-4b8b-b267-d19d80f7b25b
📒 Files selected for processing (2)
- backend/app/api/routes/llm.py
- backend/app/tests/api/routes/test_llm.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/api/routes/llm.py
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/app/tests/api/routes/test_llm.py (1)
277-281: Add missing type hints for `llm_job` in the new polling tests. Line 280 and line 328 should annotate `llm_job` (e.g., `llm_job: Job`) to keep test signatures fully typed. As per coding guidelines, "**/*.py: Always add type hints to all function parameters and return values in Python code." Also applies to: 324-329
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/api/routes/test_llm.py` around lines 277 - 281, The tests `test_get_llm_call_pending` (and the related polling test around lines 324-329) are missing a type annotation for the `llm_job` parameter; update the function signatures to annotate `llm_job: Job` and add the necessary import for `Job` (where the Job test model/fixture lives) so the test functions are fully typed (e.g., change `llm_job` to `llm_job: Job` in `test_get_llm_call_pending` and the other polling test).

backend/app/tests/utils/llm.py (1)
21-27: Fix the inconsistent helper type contract in `create_llm_call_with_response`. Line 23 leaves `job_id` untyped, and lines 26/71 declare `LLMCallResponse` while returning the persisted call record object. This makes fixture typing misleading and leaks incorrect types into tests. As per coding guidelines, "**/*.py: Always add type hints to all function parameters and return values in Python code." Also applies to: 71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/utils/llm.py` around lines 21 - 27, The helper create_llm_call_with_response has an untyped parameter job_id and an incorrect return annotation LLMCallResponse; update the function signature to add an explicit type for job_id (e.g., job_id: Optional[int] or the correct concrete type used elsewhere) and change the return type to the actual persisted model class (e.g., LLMCall) to reflect that it returns the persisted call record; also update any imports (typing.Optional) and any other occurrences (including the other declaration around the same helper) so tests use the concrete model type instead of LLMCallResponse.
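The corrected helper contract might look like this sketch (the `LLMCall` dataclass stands in for the persisted ORM model; UUID keys and the field set are assumptions):

```python
import uuid
from dataclasses import dataclass


@dataclass
class LLMCall:
    # stand-in for the persisted ORM record, not the LLMCallResponse API schema
    id: uuid.UUID
    job_id: uuid.UUID
    response: str


def create_llm_call_with_response(job_id: uuid.UUID, response: str = "ok") -> LLMCall:
    """Typed contract: job_id is explicit and the return type names the
    persisted model the helper actually returns."""
    return LLMCall(id=uuid.uuid4(), job_id=job_id, response=response)
```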
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/tests/crud/test_llm.py`:
- Around line 33-34: The fixture function test_organization currently has an
unused parameter test_project (Ruff ARG001); remove the test_project parameter
from the test_organization signature so it only accepts (db: Session), or if you
relied on test_project for ordering/side-effects, instead keep the parameter out
and annotate the function with `@pytest.mark.usefixtures`("test_project"); update
the function definition (test_organization) accordingly and run linters to
confirm the ARG001 warning is resolved.
---
Nitpick comments:
In `@backend/app/tests/api/routes/test_llm.py`:
- Around line 277-281: The tests `test_get_llm_call_pending` (and the related
polling test around lines 324-329) are missing a type annotation for the
`llm_job` parameter; update the function signatures to annotate `llm_job: Job`
and add the necessary import for `Job` (where the Job test model/fixture lives)
so the test functions are fully typed (e.g., change `llm_job` to `llm_job: Job`
in `test_get_llm_call_pending` and the other polling test).
In `@backend/app/tests/utils/llm.py`:
- Around line 21-27: The helper create_llm_call_with_response has an untyped
parameter job_id and an incorrect return annotation LLMCallResponse; update the
function signature to add an explicit type for job_id (e.g., job_id:
Optional[int] or the correct concrete type used elsewhere) and change the return
type to the actual persisted model class (e.g., LLMCall) to reflect that it
returns the persisted call record; also update any imports (typing.Optional) and
any other occurrences (including the other declaration around the same helper)
so tests use the concrete model type instead of LLMCallResponse.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 07719275-896f-4c7a-a870-2c879078508d
📒 Files selected for processing (6)
- backend/app/crud/llm.py
- backend/app/models/llm/request.py
- backend/app/models/llm/response.py
- backend/app/tests/api/routes/test_llm.py
- backend/app/tests/crud/test_llm.py
- backend/app/tests/utils/llm.py
✅ Files skipped from review due to trivial changes (1)
- backend/app/models/llm/request.py
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/app/crud/llm.py
- backend/app/models/llm/response.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/tests/crud/test_llm.py`:
- Around line 38-39: The fixture function test_job lacks a return type
annotation; update its signature to include the proper return type that
create_llm_job returns (e.g., the LLM job model or DTO) and import any needed
typing symbols if missing, so change def test_job(db: Session) -> <ReturnType>:
return create_llm_job(db) (referencing test_job and create_llm_job to locate the
code).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f72e5f36-7172-4c04-ba4e-92e8d3172a4b
📒 Files selected for processing (1)
backend/app/tests/crud/test_llm.py
```diff
 def test_job(db: Session):
     """Create a test job for LLM call tests."""
-    crud = JobCrud(db)
-    return crud.create(job_type=JobType.LLM_API, trace_id="test-llm-trace")
+    return create_llm_job(db)
```
Add the missing return type hint on test_job fixture.
Line [38] defines test_job without a return annotation, which violates the repo typing rule for Python functions.
Suggested fix
```diff
-from app.models import Project, Organization
+from app.models import Job, Project, Organization
@@
 @pytest.fixture
-def test_job(db: Session):
+def test_job(db: Session) -> Job:
     return create_llm_job(db)
```

As per coding guidelines, "**/*.py: Always add type hints to all function parameters and return values in Python code."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/tests/crud/test_llm.py` around lines 38 - 39, The fixture
function test_job lacks a return type annotation; update its signature to
include the proper return type that create_llm_job returns (e.g., the LLM job
model or DTO) and import any needed typing symbols if missing, so change def
test_job(db: Session) -> <ReturnType>: return create_llm_job(db) (referencing
test_job and create_llm_job to locate the code).
Summary
Target issue is #623
Checklist

Before submitting a pull request, please ensure that you mark these tasks.

Run `fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.

Notes

Please add here if any other information is required for the reviewer.
Summary by CodeRabbit