feat: implement build pipeline — doc/code generation + deployer (#8, #9, #10)#32
feat: implement build pipeline — doc/code generation + deployer (#8, #9, #10)#32
Conversation
Walkthrough이 풀 리퀘스트는 에이전트의 세 핵심 노드(코드 생성, 문서 생성, 배포)를 실제 기능으로 구현하고, 프롬프트 템플릿을 체계화하며, 프론트엔드에 필수 UI 컴포넌트와 의존성을 추가합니다. Changes
Sequence Diagram(s)sequenceDiagram
participant State as Agent State
participant CodeGen as Code Generator Node
participant LLM as Claude LLM
participant Parser as JSON Parser
State->>CodeGen: idea, generated_docs
CodeGen->>CodeGen: Build context from state
CodeGen->>LLM: System prompt + Frontend request
LLM->>Parser: JSON response with files
Parser->>CodeGen: Parsed frontend_code
CodeGen->>LLM: System prompt + Backend request
LLM->>Parser: JSON response with files
Parser->>CodeGen: Parsed backend_code
CodeGen->>State: frontend_code, backend_code, phase="code_generated"
sequenceDiagram
participant State as Agent State
participant Deployer as Deployer Node
participant GitHub as GitHub API
participant DO as DigitalOcean API
State->>Deployer: frontend_code, backend_code, idea
Deployer->>Deployer: Merge frontend/backend files
Deployer->>GitHub: Create repo with merged files
GitHub->>Deployer: repo URL, repo details
Deployer->>DO: Build app spec, deploy
DO->>Deployer: app_id, deployment status
Deployer->>DO: Wait for completion (if app_id exists)
DO->>Deployer: live_url when ready
Deployer->>State: app_id, live_url, github_repo, status, phase="deployed"
sequenceDiagram
participant State as Agent State
participant DocGen as Doc Generator Node
participant LLM as Claude LLM
participant Parser as JSON/YAML Parser
State->>DocGen: idea, council_analysis, scoring
DocGen->>DocGen: Build planning context
DocGen->>LLM: Generate PRD
LLM->>Parser: Markdown response
Parser->>DocGen: prd content
DocGen->>LLM: Generate Tech Spec
LLM->>Parser: Markdown response
Parser->>DocGen: tech_spec content
DocGen->>LLM: Generate API Spec
LLM->>Parser: Markdown response
Parser->>DocGen: api_spec content
DocGen->>LLM: Generate DB Schema
LLM->>Parser: Markdown response
Parser->>DocGen: db_schema content
DocGen->>LLM: Generate App Spec YAML
LLM->>Parser: YAML response
Parser->>DocGen: app_spec_yaml content
DocGen->>State: generated_docs dict, phase="docs_generated"
Estimated code review effort🎯 4 (Complex) | ⏱️ ~60 minutes Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Summary of ChangesHello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces the complete build pipeline for the VibeDeploy project, encompassing LLM-powered document generation, full-stack code generation, and automated deployment to DigitalOcean. The changes transform stub implementations into robust, AI-native functionalities, ensuring that generated applications are domain-specific and deeply integrate AI into their core logic. The frontend also receives a significant upgrade with new UI components and libraries to support richer content and interactions. Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request introduces a significant feature, implementing the core pipeline for document generation, code generation, and deployment. However, the system is vulnerable to prompt injection due to direct concatenation of untrusted user input into LLM prompts, and it lacks validation for LLM-generated file paths and content. This could lead to Remote Code Execution (RCE) on deployed applications or the GitHub environment. Additionally, the code has instances of duplication, particularly for utility functions like JSON parsing and slug generation, and could benefit from improved error handling and refactoring for better maintainability and robustness. Remediation should prioritize secure prompt engineering practices and strict validation of all LLM-generated outputs.
| }, | ||
| { | ||
| "role": "user", | ||
| "content": f"Generate frontend files from this product context:\n\n{context}", |
There was a problem hiding this comment.
At this line, the code_generator node directly concatenates untrusted user input (from idea and generated_docs state) into the LLM prompt. This creates a prompt injection vulnerability, which could lead to Remote Code Execution (RCE) on deployed applications or malicious GitHub Actions. It is crucial to use clear delimiters (e.g., ### Context ### or XML-style tags like <context>) to separate instructions from untrusted data. Additionally, consider refactoring the functions _generate_frontend_files and _generate_backend_files (lines 43-90) into a single, more generic helper function to reduce duplication and improve maintainability.
| parsed = _parse_json_response(response.content, {"files": {}}) | ||
| files = parsed.get("files", {}) | ||
| return _normalize_files_dict(files) |
There was a problem hiding this comment.
The code_generator node accepts arbitrary file paths and content from the LLM's output without validation. The _normalize_files_dict function only ensures that keys and values are strings, but does not restrict the paths. These files are then pushed to a GitHub repository in deployer.py. An attacker who successfully performs prompt injection can cause the LLM to generate sensitive files (e.g., .github/workflows/malicious.yml, .env, or overwriting critical application files) which will then be deployed. Implement a strict allow-list for file paths and extensions. Ensure that the LLM cannot generate files in sensitive directories like .github/ or overwrite critical configuration files.
| }, | ||
| { | ||
| "role": "user", | ||
| "content": f"Create the document from this planning context:\n\n{context}", |
There was a problem hiding this comment.
The doc_generator node constructs LLM prompts by directly concatenating untrusted user input (the idea object) into the prompt string without proper sanitization or the use of secure delimiters. This makes the system vulnerable to prompt injection attacks, where a malicious user can provide an "idea" that contains instructions to override the system prompt. Use clear delimiters (e.g., ### Context ### or XML-style tags like <context>) to separate instructions from untrusted data.
| def _merge_files(frontend_code: dict, backend_code: dict) -> dict[str, str]: | ||
| merged: dict[str, str] = {} | ||
|
|
||
| for path, content in backend_code.items(): | ||
| if isinstance(path, str) and isinstance(content, str): | ||
| merged[path] = content | ||
|
|
||
| for path, content in frontend_code.items(): | ||
| if not isinstance(path, str) or not isinstance(content, str): | ||
| continue | ||
|
|
||
| normalized_path = path if path.startswith("web/") else f"web/{path}" | ||
| merged[normalized_path] = content | ||
|
|
||
| return merged |
There was a problem hiding this comment.
The _merge_files function merges frontend and backend code into a single dictionary of files to be pushed to GitHub. However, it does not validate the file paths provided in the backend_code dictionary. If the LLM is manipulated via prompt injection to generate malicious file paths (e.g., .github/workflows/attack.yml), this function will include them in the final file set, leading to potential RCE on the GitHub runner or deployment of backdoored code. Implement path validation to ensure that only allowed directories and file types are included.
| def _slugify(value: str) -> str: | ||
| clean = re.sub(r"[^a-zA-Z0-9\s-]", "", value).strip().lower() | ||
| clean = re.sub(r"[\s_]+", "-", clean) | ||
| clean = re.sub(r"-+", "-", clean) | ||
| return clean or "vibedeploy-app" |
There was a problem hiding this comment.
This _slugify function is very similar to the slug generation logic in _build_repo_name in agent/nodes/deployer.py. However, it's missing the logic to truncate the slug to 45 characters and strip trailing hyphens. This could lead to inconsistencies where the generated repo placeholder URL in the documentation does not match the actual repository name created by the deployer. These two functions should be consolidated into a single, shared utility to ensure consistency.
References
- Avoid code duplication (DRY principle). When two or more pieces of code are very similar, they should be refactored into a single reusable component or function to improve maintainability and ensure consistency.
| except json.JSONDecodeError: | ||
| pass |
There was a problem hiding this comment.
Silently passing on a json.JSONDecodeError can hide issues with the LLM's output and make debugging difficult. If the regex match is not valid JSON, this error will be swallowed, and the function will return a default value, potentially masking an underlying problem. It is better to log this exception to aid in debugging.
References
- Error handling should not silently ignore exceptions, as this can hide bugs and make debugging difficult. At a minimum, exceptions should be logged to provide visibility into potential issues.
| frontend_code = state.get("frontend_code", {}) or {} | ||
| backend_code = state.get("backend_code", {}) or {} | ||
| idea = state.get("idea", {}) or {} |
There was a problem hiding this comment.
The or {} is redundant in expressions like state.get("key", {}) or {}. The state.get("key", {}) part already provides an empty dictionary if the key is missing. If the key's value could be None, a more idiomatic way to handle this is state.get("key") or {}. This change improves code clarity and conciseness.
| frontend_code = state.get("frontend_code", {}) or {} | |
| backend_code = state.get("backend_code", {}) or {} | |
| idea = state.get("idea", {}) or {} | |
| frontend_code = state.get("frontend_code") or {} | |
| backend_code = state.get("backend_code") or {} | |
| idea = state.get("idea") or {} | |
| def _parse_json_response(content: str, default: dict) -> dict: | ||
| content = content.strip() | ||
| if content.startswith("```"): | ||
| content = re.sub(r"^```(?:json)?\n?", "", content) | ||
| content = re.sub(r"\n?```$", "", content) | ||
|
|
||
| try: | ||
| return json.loads(content) | ||
| except json.JSONDecodeError: | ||
| json_match = re.search(r"\{[\s\S]*\}", content) | ||
| if json_match: | ||
| try: | ||
| return json.loads(json_match.group()) | ||
| except json.JSONDecodeError: | ||
| pass | ||
|
|
||
| result = dict(default) | ||
| result["raw_response"] = content[:500] | ||
| return result |
There was a problem hiding this comment.
The function _parse_json_response is identical to the one in agent/nodes/code_generator.py. This code duplication violates the DRY (Don't Repeat Yourself) principle and makes maintenance harder, as any changes would need to be applied in multiple places. Consider creating a shared utils.py module within the agent package to house common utility functions like this one.
References
- Avoid code duplication (DRY principle). When two or more pieces of code are very similar, they should be refactored into a single reusable component or function to improve maintainability and reduce the chance of introducing bugs when making changes.
There was a problem hiding this comment.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a1ddb2bc71
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
|
|
||
|
|
||
| def _slugify(value: str) -> str: | ||
| clean = re.sub(r"[^a-zA-Z0-9\s-]", "", value).strip().lower() |
There was a problem hiding this comment.
Coerce idea name before slugifying app spec name
_slugify applies re.sub directly to value, but _generate_app_spec_yaml_doc passes idea.get("name")/idea.get("tagline") without type coercion; if the LLM returns a non-string field (for example a list/object), this raises TypeError and the docs phase fails entirely. Guarding/casting to string here (like _build_repo_name does) prevents one malformed idea field from crashing document generation.
Useful? React with 👍 / 👎.
| "phase": "deployed", | ||
| } | ||
|
|
||
| app_spec = build_app_spec(app_name, github_clone_url) |
There was a problem hiding this comment.
Pass the actual repo branch into deployment spec
build_app_spec is called without a branch argument, so it always deploys branch main; this breaks in orgs/users whose default branch is not main, where App Platform will track a nonexistent branch and deployment fails despite successful repo creation. Capture the created repository’s default branch and pass it through so deployment follows the branch that actually exists.
Useful? React with 👍 / 👎.
There was a problem hiding this comment.
Actionable comments posted: 2
🧹 Nitpick comments (5)
agent/nodes/doc_generator.py (2)
116-120: slug 생성 로직 중복
_slugify는deployer.py의_build_repo_name과 유사한 정규화 로직을 사용합니다. 공통 유틸리티로 추출하면 일관성을 유지할 수 있습니다.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/nodes/doc_generator.py` around lines 116 - 120, The slug normalization logic in _slugify duplicates the regex-based normalization in deployer.py's _build_repo_name; extract a shared utility (e.g., normalize_name/normalize_slug) into a common module (utility or utils) and replace both _slugify in agent/nodes/doc_generator.py and _build_repo_name in deployer.py to call that shared function; ensure the new utility preserves current behavior (remove non-alphanumerics, collapse whitespace/underscores to single hyphens, collapse repeated hyphens, lowercase, and fallback to "vibedeploy-app") and update imports in the two modules.
92-97: 프롬프트 지시문 불일치
APP_SPEC_SYSTEM_PROMPT는 "Return YAML only. No markdown fences."라고 지시하지만, Line 96에서 "Return JSON with one key: 'content' containing only YAML."로 덮어씁니다. LLM에 혼란을 줄 수 있으므로,doc_templates.py의APP_SPEC_SYSTEM_PROMPT에서 원본 반환 지시문을 제거하거나 JSON 래퍼 패턴을 명시하는 것을 권장합니다.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/nodes/doc_generator.py` around lines 92 - 97, The system prompts are inconsistent: DOC_GENERATION_BASE_SYSTEM_PROMPT + APP_SPEC_SYSTEM_PROMPT (from doc_templates.py) currently instruct YAML-only, but doc_generator.py builds a system message that says "Return JSON with one key: 'content' containing only YAML."; fix by making the instructions unambiguous — either remove the "Return YAML only. No markdown fences." sentence from APP_SPEC_SYSTEM_PROMPT in doc_templates.py, or update APP_SPEC_SYSTEM_PROMPT to explicitly state the JSON wrapper pattern (e.g., "Return JSON with one key: 'content' whose value is YAML, no markdown fences"), and ensure the assembled system message in doc_generator.py (where the f-string combines DOC_GENERATION_BASE_SYSTEM_PROMPT and APP_SPEC_SYSTEM_PROMPT) reflects that single, consistent instruction.agent/nodes/deployer.py (1)
94-97: 타임스탬프 접미사 충돌 가능성
time.time()의 마지막 6자리를 사용하면 약 11.5일 주기로 값이 반복되어 동일한 slug 기반에서 충돌 가능성이 있습니다. MVP에서는 허용 가능하나, 프로덕션에서는 UUID 일부 사용을 고려해보세요.♻️ 더 고유한 접미사 사용 제안
+import uuid + def _build_repo_name(idea: dict) -> str: # ... slug generation ... - suffix = str(int(time.time()))[-6:] + suffix = uuid.uuid4().hex[:8] return f"{slug}-{suffix}"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/nodes/deployer.py` around lines 94 - 97, The timestamp-based suffix generation for slug in deployer.py (the slug variable and the current suffix = str(int(time.time()))[-6:]) can collide every ~11.5 days; replace it with a more unique suffix by combining the timestamp with a truncated UUID (e.g., use int(time.time()) or its last digits plus uuid.uuid4().hex[:6] or similar) so the return value f"{slug}-{suffix}" becomes much less likely to collide; update the code that computes suffix to use uuid.uuid4() (or uuid.uuid4().hex slice) alongside or instead of the time-based portion and keep the same final return format.agent/nodes/code_generator.py (2)
43-90: 중복 코드 리팩토링 권장
_generate_frontend_files와_generate_backend_files는 프롬프트와 메시지만 다르고 거의 동일한 구조입니다. 공통 헬퍼로 추출하면 유지보수성이 향상됩니다.♻️ 공통 헬퍼 추출 제안
async def _generate_files( llm: ChatGradient, system_prompt: str, user_message: str, context: str ) -> dict[str, str]: response = await llm.ainvoke([ { "role": "system", "content": ( f"{CODE_GENERATION_BASE_SYSTEM_PROMPT}\n\n" f"{system_prompt}\n\n" "Return JSON object with exactly one top-level key: 'files'." ), }, {"role": "user", "content": f"{user_message}\n\n{context}"}, ]) parsed = _parse_json_response(response.content, {"files": {}}) return _normalize_files_dict(parsed.get("files", {}))🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/nodes/code_generator.py` around lines 43 - 90, Both _generate_frontend_files and _generate_backend_files duplicate the same LLM invocation/response parsing logic; extract a shared helper (e.g., _generate_files) that accepts the ChatGradient llm, the system prompt fragment (FRONTEND_SYSTEM_PROMPT or BACKEND_SYSTEM_PROMPT), and the user message prefix, then performs the ainvoke call, calls _parse_json_response(..., {"files": {}}) and returns _normalize_files_dict(parsed.get("files", {})). Replace _generate_frontend_files and _generate_backend_files to call this new _generate_files helper with the appropriate system_prompt and user message to preserve behavior and responses.
33-34: LLM 호출 실패 시 예외 처리 부재
_generate_frontend_files와_generate_backend_files호출 시 LLM 오류가 발생하면 예외가 전파되어 전체 파이프라인이 중단됩니다. deployer.py처럼 부분 실패를 허용하는 구조를 고려해보세요.♻️ 예외 처리 추가 제안
+ try: + frontend_code = await _generate_frontend_files(llm, context) + except Exception as e: + frontend_code = {"error": f"Frontend generation failed: {str(e)[:200]}"} + + try: + backend_code = await _generate_backend_files(llm, context) + except Exception as e: + backend_code = {"error": f"Backend generation failed: {str(e)[:200]}"} - frontend_code = await _generate_frontend_files(llm, context) - backend_code = await _generate_backend_files(llm, context)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/nodes/code_generator.py` around lines 33 - 34, LLM 호출 실패 시 예외가 전파되어 파이프라인이 중단되므로 `_generate_frontend_files`와 `_generate_backend_files` 호출을 각각 try/except로 감싸 예외를 잡고 처리하세요: 각 호출에서 Exception을 잡아(예: `except Exception as e`) 오류를 로그하고(`logger.error` 또는 프로젝트의 로거 사용), 실패 시 해당 결과 변수(`frontend_code`, `backend_code`)를 안전한 기본값(예: None 또는 빈 구조)으로 설정하여 나머지 파이프라인이 계속 실행되도록 변경하세요; 동작 방식은 이미 부분 실패를 허용하는 `deployer.py`의 처리 패턴을 참고해 일관되게 적용하세요.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agent/nodes/code_generator.py`:
- Around line 104-122: Duplicate implementation of _parse_json_response across
code_generator.py, doc_generator.py, and vibe_council.py should be refactored
into a single utility: create a new function (keep name _parse_json_response or
rename to parse_json_response) in a shared module agent/utils/json_utils.py
implementing the exact same logic (strip, remove triple-backticks, attempt
json.loads, fallback regex extract, and attach raw_response on failure), then
replace the three in-file definitions with an import from agent.utils.json_utils
and update callers in code_generator.py, doc_generator.py, and vibe_council.py
to use the centralized function so behavior and signature remain identical.
In `@web/src/components/ui/progress.tsx`:
- Line 25: The progress indicator uses value directly in the transform (style={{
transform: `translateX(-${100 - (value || 0)}%)` }}) which can push the
indicator outside its container for values <0 or >100; clamp the incoming value
in the Progress component (or where prop 'value' is handled) to the 0–100 range
(e.g., via Math.max(0, Math.min(100, value || 0)) or a small clamp utility) and
use that clampedValue in the transform calculation so the indicator never
escapes the container.
---
Nitpick comments:
In `@agent/nodes/code_generator.py`:
- Around line 43-90: Both _generate_frontend_files and _generate_backend_files
duplicate the same LLM invocation/response parsing logic; extract a shared
helper (e.g., _generate_files) that accepts the ChatGradient llm, the system
prompt fragment (FRONTEND_SYSTEM_PROMPT or BACKEND_SYSTEM_PROMPT), and the user
message prefix, then performs the ainvoke call, calls _parse_json_response(...,
{"files": {}}) and returns _normalize_files_dict(parsed.get("files", {})).
Replace _generate_frontend_files and _generate_backend_files to call this new
_generate_files helper with the appropriate system_prompt and user message to
preserve behavior and responses.
- Around line 33-34: LLM 호출 실패 시 예외가 전파되어 파이프라인이 중단되므로
`_generate_frontend_files`와 `_generate_backend_files` 호출을 각각 try/except로 감싸 예외를
잡고 처리하세요: 각 호출에서 Exception을 잡아(예: `except Exception as e`) 오류를
로그하고(`logger.error` 또는 프로젝트의 로거 사용), 실패 시 해당 결과 변수(`frontend_code`,
`backend_code`)를 안전한 기본값(예: None 또는 빈 구조)으로 설정하여 나머지 파이프라인이 계속 실행되도록 변경하세요; 동작
방식은 이미 부분 실패를 허용하는 `deployer.py`의 처리 패턴을 참고해 일관되게 적용하세요.
In `@agent/nodes/deployer.py`:
- Around line 94-97: The timestamp-based suffix generation for slug in
deployer.py (the slug variable and the current suffix =
str(int(time.time()))[-6:]) can collide every ~11.5 days; replace it with a more
unique suffix by combining the timestamp with a truncated UUID (e.g., use
int(time.time()) or its last digits plus uuid.uuid4().hex[:6] or similar) so the
return value f"{slug}-{suffix}" becomes much less likely to collide; update the
code that computes suffix to use uuid.uuid4() (or uuid.uuid4().hex slice)
alongside or instead of the time-based portion and keep the same final return
format.
In `@agent/nodes/doc_generator.py`:
- Around line 116-120: The slug normalization logic in _slugify duplicates the
regex-based normalization in deployer.py's _build_repo_name; extract a shared
utility (e.g., normalize_name/normalize_slug) into a common module (utility or
utils) and replace both _slugify in agent/nodes/doc_generator.py and
_build_repo_name in deployer.py to call that shared function; ensure the new
utility preserves current behavior (remove non-alphanumerics, collapse
whitespace/underscores to single hyphens, collapse repeated hyphens, lowercase,
and fallback to "vibedeploy-app") and update imports in the two modules.
- Around line 92-97: The system prompts are inconsistent:
DOC_GENERATION_BASE_SYSTEM_PROMPT + APP_SPEC_SYSTEM_PROMPT (from
doc_templates.py) currently instruct YAML-only, but doc_generator.py builds a
system message that says "Return JSON with one key: 'content' containing only
YAML."; fix by making the instructions unambiguous — either remove the "Return
YAML only. No markdown fences." sentence from APP_SPEC_SYSTEM_PROMPT in
doc_templates.py, or update APP_SPEC_SYSTEM_PROMPT to explicitly state the JSON
wrapper pattern (e.g., "Return JSON with one key: 'content' whose value is YAML,
no markdown fences"), and ensure the assembled system message in
doc_generator.py (where the f-string combines DOC_GENERATION_BASE_SYSTEM_PROMPT
and APP_SPEC_SYSTEM_PROMPT) reflects that single, consistent instruction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ff5ff231-9c91-424e-b4ae-e4cc8e99f4c1
⛔ Files ignored due to path filters (1)
web/package-lock.jsonis excluded by!**/package-lock.json
📒 Files selected for processing (13)
agent/nodes/code_generator.pyagent/nodes/deployer.pyagent/nodes/doc_generator.pyagent/prompts/code_templates.pyagent/prompts/doc_templates.pyweb/package.jsonweb/src/components/ui/alert.tsxweb/src/components/ui/avatar.tsxweb/src/components/ui/dialog.tsxweb/src/components/ui/progress.tsxweb/src/components/ui/scroll-area.tsxweb/src/components/ui/skeleton.tsxweb/src/components/ui/tooltip.tsx
| def _parse_json_response(content: str, default: dict) -> dict: | ||
| content = content.strip() | ||
| if content.startswith("```"): | ||
| content = re.sub(r"^```(?:json)?\n?", "", content) | ||
| content = re.sub(r"\n?```$", "", content) | ||
|
|
||
| try: | ||
| return json.loads(content) | ||
| except json.JSONDecodeError: | ||
| json_match = re.search(r"\{[\s\S]*\}", content) | ||
| if json_match: | ||
| try: | ||
| return json.loads(json_match.group()) | ||
| except json.JSONDecodeError: | ||
| pass | ||
|
|
||
| result = dict(default) | ||
| result["raw_response"] = content[:500] | ||
| return result |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
rg -n "_parse_json_response" --type=pyRepository: Two-Weeks-Team/vibeDeploy
Length of output: 889
🏁 Script executed:
# Extract _parse_json_response from doc_generator.py
sed -n '123,141p' agent/nodes/doc_generator.py
# Extract _parse_json_response from vibe_council.py
sed -n '259,277p' agent/nodes/vibe_council.py
# Extract _parse_json_response from code_generator.py (already shown in review)
sed -n '104,122p' agent/nodes/code_generator.pyRepository: Two-Weeks-Team/vibeDeploy
Length of output: 1951
_parse_json_response 중복 - 3개 파일에서 발견됨
이 함수는 code_generator.py, doc_generator.py, vibe_council.py 3개 파일에서 동일하게 구현되어 있습니다. 공통 유틸리티 모듈(예: agent/utils/json_utils.py)로 추출하여 중복을 제거하고 유지보수성을 향상시키세요.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/nodes/code_generator.py` around lines 104 - 122, Duplicate
implementation of _parse_json_response across code_generator.py,
doc_generator.py, and vibe_council.py should be refactored into a single
utility: create a new function (keep name _parse_json_response or rename to
parse_json_response) in a shared module agent/utils/json_utils.py implementing
the exact same logic (strip, remove triple-backticks, attempt json.loads,
fallback regex extract, and attach raw_response on failure), then replace the
three in-file definitions with an import from agent.utils.json_utils and update
callers in code_generator.py, doc_generator.py, and vibe_council.py to use the
centralized function so behavior and signature remain identical.
| <ProgressPrimitive.Indicator | ||
| data-slot="progress-indicator" | ||
| className="h-full w-full flex-1 bg-primary transition-all" | ||
| style={{ transform: `translateX(-${100 - (value || 0)}%)` }} |
There was a problem hiding this comment.
진행률 값은 0~100으로 클램프하는 게 안전합니다.
Line 25 계산은 value가 음수/100 초과일 때 인디케이터가 컨테이너 범위를 벗어날 수 있습니다. 렌더 전 클램프를 권장합니다.
수정 제안
function Progress({
className,
value,
...props
}: React.ComponentProps<typeof ProgressPrimitive.Root>) {
+ const clampedValue = Math.min(100, Math.max(0, value ?? 0))
return (
<ProgressPrimitive.Root
@@
<ProgressPrimitive.Indicator
data-slot="progress-indicator"
className="h-full w-full flex-1 bg-primary transition-all"
- style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
+ style={{ transform: `translateX(-${100 - clampedValue}%)` }}
/>
</ProgressPrimitive.Root>
)
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/components/ui/progress.tsx` at line 25, The progress indicator uses
value directly in the transform (style={{ transform: `translateX(-${100 - (value
|| 0)}%)` }}) which can push the indicator outside its container for values <0
or >100; clamp the incoming value in the Progress component (or where prop
'value' is handled) to the 0–100 range (e.g., via Math.max(0, Math.min(100,
value || 0)) or a small clamp utility) and use that clampedValue in the
transform calculation so the indicator never escapes the container.
Summary
ChatGradient(claude-4.6-sonnet). Each document is informed by Vibe Council analysis and scoring results.frontend_code,backend_code).Key Design Decisions
ai_service.pycalling DO Serverless Inference API via httpx.ChatGradient → _parse_json_response → return dictpattern.Files Changed
agent/nodes/doc_generator.py— 141 lines (was 15-line stub)agent/nodes/code_generator.py— 122 lines (was 10-line stub)agent/nodes/deployer.py— 97 lines (was 14-line stub)agent/prompts/doc_templates.py— 95 lines (rich system prompts replacing placeholder Jinja2)agent/prompts/code_templates.py— 50 lines (rich system prompts replacing stubs)web/package.json— Added framer-motion, recharts, react-markdown, react-syntax-highlighter, canvas-confettiweb/src/components/ui/*— Added 7 shadcn components (scroll-area, skeleton, progress, alert, avatar, dialog, tooltip)Verification
ruff checkpasses on all modified Python filespython -m py_compilepasses on all modified Python filesnpm run buildpasses in web/ with new dependenciesSummary by CodeRabbit
릴리즈 노트
새로운 기능
Chores