Skip to content

feat: implement build pipeline — doc/code generation + deployer (#8, #9, #10)#32

Open
ComBba wants to merge 1 commit intomainfrom
feat/issue-8-9-10-build-pipeline
Open

feat: implement build pipeline — doc/code generation + deployer (#8, #9, #10)#32
ComBba wants to merge 1 commit intomainfrom
feat/issue-8-9-10-build-pipeline

Conversation

@ComBba
Copy link
Contributor

@ComBba ComBba commented Mar 4, 2026

Summary

Key Design Decisions

  • AI-native apps only: Prompts explicitly forbid chatbot wrappers and generic CRUD. Generated apps must embed AI in core business workflows (prediction, recommendation, classification, generation).
  • DO Gradient throughout: Generated backend includes ai_service.py calling DO Serverless Inference API via httpx.
  • Consistent patterns: All nodes follow the established ChatGradient → _parse_json_response → return dict pattern.

Files Changed

  • agent/nodes/doc_generator.py — 141 lines (was 15-line stub)
  • agent/nodes/code_generator.py — 122 lines (was 10-line stub)
  • agent/nodes/deployer.py — 97 lines (was 14-line stub)
  • agent/prompts/doc_templates.py — 95 lines (rich system prompts replacing placeholder Jinja2)
  • agent/prompts/code_templates.py — 50 lines (rich system prompts replacing stubs)
  • web/package.json — Added framer-motion, recharts, react-markdown, react-syntax-highlighter, canvas-confetti
  • web/src/components/ui/* — Added 7 shadcn components (scroll-area, skeleton, progress, alert, avatar, dialog, tooltip)

Verification

  • ruff check passes on all modified Python files
  • python -m py_compile passes on all modified Python files
  • npm run build passes in web/ with new dependencies

Summary by CodeRabbit

릴리즈 노트

  • 새로운 기능

    • 자동 코드 생성 기능 구현 (프론트엔드/백엔드)
    • GitHub 및 DigitalOcean 배포 자동화
    • 문서 자동 생성 (요구사항, 기술 사양, API 명세, 데이터베이스 스키마)
    • 새로운 UI 컴포넌트 추가 (알림, 아바타, 다이얼로그, 진행률, 스크롤 영역, 스켈레톤, 툴팁)
    • 코드 구문 강조 및 마크다운 렌더링 기능 추가
  • Chores

    • 시각화 및 UI 향상을 위한 의존성 라이브러리 추가

@coderabbitai
Copy link

coderabbitai bot commented Mar 4, 2026

Walkthrough

이 풀 리퀘스트는 에이전트의 세 핵심 노드(코드 생성, 문서 생성, 배포)를 실제 기능으로 구현하고, 프롬프트 템플릿을 체계화하며, 프론트엔드에 필수 UI 컴포넌트와 의존성을 추가합니다.

Changes

Cohort / File(s) Summary
Agent Logic Nodes
agent/nodes/code_generator.py, agent/nodes/deployer.py, agent/nodes/doc_generator.py
세 노드 모두 플레이스홀더를 실제 작동 로직으로 교체. 코드 생성기는 Claude LLM으로 프론트엔드/백엔드 파일 생성; 배포기는 GitHub 저장소 생성 후 DigitalOcean 배포; 문서 생성기는 PRD, 기술 사양, API 사양 등 여러 문서 생성. JSON 파싱 및 파일 정규화 헬퍼 함수 추가.
Prompt Templates
agent/prompts/code_templates.py, agent/prompts/doc_templates.py
코드 템플릿: 단순 플레이스홀더를 CODE_GENERATION_BASE_SYSTEM_PROMPT, FRONTEND_SYSTEM_PROMPT, BACKEND_SYSTEM_PROMPT로 교체. 문서 템플릿: PRD_TEMPLATE 등 레거시 명칭을 DOC_GENERATION_BASE_SYSTEM_PROMPT, PRD_SYSTEM_PROMPT, TECH_SPEC_SYSTEM_PROMPT, DB_SCHEMA_SYSTEM_PROMPT, APP_SPEC_SYSTEM_PROMPT로 재명명.
Frontend UI Components
web/src/components/ui/alert.tsx, web/src/components/ui/avatar.tsx, web/src/components/ui/dialog.tsx, web/src/components/ui/progress.tsx, web/src/components/ui/scroll-area.tsx, web/src/components/ui/skeleton.tsx, web/src/components/ui/tooltip.tsx
Radix UI 기반의 재사용 가능한 컴포넌트 라이브러리 추가. 각 컴포넌트는 기본 스타일링, 데이터 슬롯 속성, 타입 안정성 제공.
Frontend Dependencies
web/package.json
canvas-confetti, framer-motion, react-markdown, react-syntax-highlighter, recharts 및 관련 타입 선언 추가.

Sequence Diagram(s)

sequenceDiagram
    participant State as Agent State
    participant CodeGen as Code Generator Node
    participant LLM as Claude LLM
    participant Parser as JSON Parser
    
    State->>CodeGen: idea, generated_docs
    CodeGen->>CodeGen: Build context from state
    CodeGen->>LLM: System prompt + Frontend request
    LLM->>Parser: JSON response with files
    Parser->>CodeGen: Parsed frontend_code
    CodeGen->>LLM: System prompt + Backend request
    LLM->>Parser: JSON response with files
    Parser->>CodeGen: Parsed backend_code
    CodeGen->>State: frontend_code, backend_code, phase="code_generated"
Loading
sequenceDiagram
    participant State as Agent State
    participant Deployer as Deployer Node
    participant GitHub as GitHub API
    participant DO as DigitalOcean API
    
    State->>Deployer: frontend_code, backend_code, idea
    Deployer->>Deployer: Merge frontend/backend files
    Deployer->>GitHub: Create repo with merged files
    GitHub->>Deployer: repo URL, repo details
    Deployer->>DO: Build app spec, deploy
    DO->>Deployer: app_id, deployment status
    Deployer->>DO: Wait for completion (if app_id exists)
    DO->>Deployer: live_url when ready
    Deployer->>State: app_id, live_url, github_repo, status, phase="deployed"
Loading
sequenceDiagram
    participant State as Agent State
    participant DocGen as Doc Generator Node
    participant LLM as Claude LLM
    participant Parser as JSON/YAML Parser
    
    State->>DocGen: idea, council_analysis, scoring
    DocGen->>DocGen: Build planning context
    DocGen->>LLM: Generate PRD
    LLM->>Parser: Markdown response
    Parser->>DocGen: prd content
    DocGen->>LLM: Generate Tech Spec
    LLM->>Parser: Markdown response
    Parser->>DocGen: tech_spec content
    DocGen->>LLM: Generate API Spec
    LLM->>Parser: Markdown response
    Parser->>DocGen: api_spec content
    DocGen->>LLM: Generate DB Schema
    LLM->>Parser: Markdown response
    Parser->>DocGen: db_schema content
    DocGen->>LLM: Generate App Spec YAML
    LLM->>Parser: YAML response
    Parser->>DocGen: app_spec_yaml content
    DocGen->>State: generated_docs dict, phase="docs_generated"
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • feat: finalize agent + web scaffold (#1, #2) #25: 주요 PR에서 구현한 agent/nodes/code_generator.py, doc_generator.py, deployer.py 및 프롬프트 파일들이 이전 PR #25에서 스텁/추가된 동일 파일들을 수정하며, 동일 심볼과 기능을 다루고 있습니다.

Poem

🐰 코드 생성되고 문서 작성되어,
깃허브에 푸시하고 클라우드로 날아가네!
UI 컴포넌트들 반짝반짝 정렬되고,
프롬프트는 체계적으로 단정해졌구나.
배포 완료, 모두가 행복한 여정! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main changes: implementing a build pipeline with doc/code generation and deployer functionality, referencing the related issue numbers.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/issue-8-9-10-build-pipeline

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Copy link

Summary of Changes

Hello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the complete build pipeline for the VibeDeploy project, encompassing LLM-powered document generation, full-stack code generation, and automated deployment to DigitalOcean. The changes transform stub implementations into robust, AI-native functionalities, ensuring that generated applications are domain-specific and deeply integrate AI into their core logic. The frontend also receives a significant upgrade with new UI components and libraries to support richer content and interactions.

Highlights

  • Doc Generator Implementation: The [M3] Doc Generator: PRD + Tech Spec + API Spec + DB Schema #8 Doc Generator has been fully implemented, leveraging LLMs (claude-4.6-sonnet) via ChatGradient to produce PRD, Tech Spec, API Spec, DB Schema, and DO App Spec YAML documents. Each document is informed by Vibe Council analysis and scoring results.
  • Code Generator Implementation: The [M3] Code Generator: Frontend + Backend code generation #9 Code Generator is now fully functional, generating full-stack application code including a Next.js 15 frontend and a FastAPI backend. It integrates domain-specific AI features via the DigitalOcean Serverless Inference API and returns file maps for both frontend and backend code.
  • Deployer Implementation: The [M3] Deployer: GitHub repo + DO App Platform deployment #10 Deployer has been implemented to automate the deployment process. It creates a GitHub repository using PyGithub, pushes the generated code, builds the DigitalOcean App Platform specification, deploys the application, and polls for the live URL, including graceful partial-failure handling.
  • AI-Native Design Principle: A core design decision enforces that generated applications must be AI-native, explicitly forbidding generic chatbot wrappers or CRUD-only systems. AI features must be embedded in core business workflows like prediction, recommendation, classification, or generation.
  • DigitalOcean Gradient Integration: The entire pipeline consistently integrates DigitalOcean Gradient, with the generated backend's ai_service.py calling the DO Serverless Inference API via httpx, and all nodes following a ChatGradient -> _parse_json_response -> return dict pattern.
  • Frontend UI Enhancements: The web frontend has been significantly enhanced with the addition of several new dependencies, including framer-motion, recharts, react-markdown, react-syntax-highlighter, and canvas-confetti, along with 7 new Shadcn UI components (scroll-area, skeleton, progress, alert, avatar, dialog, tooltip).
Changelog
  • agent/nodes/code_generator.py
    • Implemented LLM-driven frontend and backend code generation.
    • Added utility functions for JSON parsing and file normalization.
    • Integrated ChatGradient for LLM interaction.
  • agent/nodes/deployer.py
    • Implemented logic for creating GitHub repositories and pushing generated code.
    • Integrated DigitalOcean App Platform deployment, including spec building and status polling.
    • Added robust error handling for GitHub and DigitalOcean operations.
    • Introduced helper functions for file merging and repository name generation.
  • agent/nodes/doc_generator.py
    • Implemented LLM-driven generation of PRD, Tech Spec, API Spec, DB Schema, and DigitalOcean App Spec YAML.
    • Integrated ChatGradient for LLM interaction, leveraging product idea and council analysis.
    • Added utility functions for context building and JSON response parsing.
  • agent/prompts/code_templates.py
    • Replaced generic code templates with specific system prompts for frontend (Next.js 15) and backend (FastAPI) generation.
    • Defined constraints for AI-native apps and DigitalOcean Serverless Inference API integration.
  • agent/prompts/doc_templates.py
    • Replaced generic document templates with detailed system prompts for PRD, Tech Spec, API Spec, DB Schema, and DigitalOcean App Spec YAML.
    • Established rules for AI-native, domain-specific app generation and architecture targets.
  • web/package-lock.json
    • Synchronized dependency versions and added new entries for canvas-confetti, framer-motion, react-markdown, react-syntax-highlighter, and recharts.
    • Included various transitive dependencies and their integrity hashes.
  • web/package.json
    • Introduced canvas-confetti, framer-motion, react-markdown, react-syntax-highlighter, and recharts as new dependencies.
    • Added corresponding TypeScript type definitions for new dependencies.
  • web/src/components/ui/alert.tsx
    • Introduced a new Alert component for displaying contextual messages.
    • Provided AlertTitle and AlertDescription for structured content within alerts.
  • web/src/components/ui/avatar.tsx
    • Added Avatar component for user profile pictures, including image, fallback, and badge elements.
    • Implemented AvatarGroup and AvatarGroupCount for displaying collections of avatars.
  • web/src/components/ui/dialog.tsx
    • Introduced a full-featured Dialog component for modal windows.
    • Included sub-components like DialogTrigger, DialogContent, DialogHeader, DialogFooter, DialogTitle, and DialogDescription.
  • web/src/components/ui/progress.tsx
    • Implemented a Progress component to visualize the completion status of tasks.
  • web/src/components/ui/scroll-area.tsx
    • Introduced ScrollArea and ScrollBar components for customizable scrollable regions.
  • web/src/components/ui/skeleton.tsx
    • Implemented a Skeleton component to indicate loading states in the UI.
  • web/src/components/ui/tooltip.tsx
    • Added Tooltip component for displaying informative text on hover or focus.
    • Provided TooltipProvider, TooltipTrigger, and TooltipContent for complete tooltip functionality.
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces a significant feature, implementing the core pipeline for document generation, code generation, and deployment. However, the system is vulnerable to prompt injection due to direct concatenation of untrusted user input into LLM prompts, and it lacks validation for LLM-generated file paths and content. This could lead to Remote Code Execution (RCE) on deployed applications or the GitHub environment. Additionally, the code has instances of duplication, particularly for utility functions like JSON parsing and slug generation, and could benefit from improved error handling and refactoring for better maintainability and robustness. Remediation should prioritize secure prompt engineering practices and strict validation of all LLM-generated outputs.

},
{
"role": "user",
"content": f"Generate frontend files from this product context:\n\n{context}",

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-high high

At this line, the code_generator node directly concatenates untrusted user input (from idea and generated_docs state) into the LLM prompt. This creates a prompt injection vulnerability, which could lead to Remote Code Execution (RCE) on deployed applications or malicious GitHub Actions. It is crucial to use clear delimiters (e.g., ### Context ### or XML-style tags like <context>) to separate instructions from untrusted data. Additionally, consider refactoring the functions _generate_frontend_files and _generate_backend_files (lines 43-90) into a single, more generic helper function to reduce duplication and improve maintainability.

Comment on lines +61 to +63
parsed = _parse_json_response(response.content, {"files": {}})
files = parsed.get("files", {})
return _normalize_files_dict(files)

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-high high

The code_generator node accepts arbitrary file paths and content from the LLM's output without validation. The _normalize_files_dict function only ensures that keys and values are strings, but does not restrict the paths. These files are then pushed to a GitHub repository in deployer.py. An attacker who successfully performs prompt injection can cause the LLM to generate sensitive files (e.g., .github/workflows/malicious.yml, .env, or overwriting critical application files) which will then be deployed. Implement a strict allow-list for file paths and extensions. Ensure that the LLM cannot generate files in sensitive directories like .github/ or overwrite critical configuration files.

},
{
"role": "user",
"content": f"Create the document from this planning context:\n\n{context}",

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-high high

The doc_generator node constructs LLM prompts by directly concatenating untrusted user input (the idea object) into the prompt string without proper sanitization or the use of secure delimiters. This makes the system vulnerable to prompt injection attacks, where a malicious user can provide an "idea" that contains instructions to override the system prompt. Use clear delimiters (e.g., ### Context ### or XML-style tags like <context>) to separate instructions from untrusted data.

Comment on lines +72 to +86
def _merge_files(frontend_code: dict, backend_code: dict) -> dict[str, str]:
merged: dict[str, str] = {}

for path, content in backend_code.items():
if isinstance(path, str) and isinstance(content, str):
merged[path] = content

for path, content in frontend_code.items():
if not isinstance(path, str) or not isinstance(content, str):
continue

normalized_path = path if path.startswith("web/") else f"web/{path}"
merged[normalized_path] = content

return merged

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-high high

The _merge_files function merges frontend and backend code into a single dictionary of files to be pushed to GitHub. However, it does not validate the file paths provided in the backend_code dictionary. If the LLM is manipulated via prompt injection to generate malicious file paths (e.g., .github/workflows/attack.yml), this function will include them in the final file set, leading to potential RCE on the GitHub runner or deployment of backdoored code. Implement path validation to ensure that only allowed directories and file types are included.

Comment on lines +116 to +120
def _slugify(value: str) -> str:
clean = re.sub(r"[^a-zA-Z0-9\s-]", "", value).strip().lower()
clean = re.sub(r"[\s_]+", "-", clean)
clean = re.sub(r"-+", "-", clean)
return clean or "vibedeploy-app"

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

This _slugify function is very similar to the slug generation logic in _build_repo_name in agent/nodes/deployer.py. However, it's missing the logic to truncate the slug to 45 characters and strip trailing hyphens. This could lead to inconsistencies where the generated repo placeholder URL in the documentation does not match the actual repository name created by the deployer. These two functions should be consolidated into a single, shared utility to ensure consistency.

References
  1. Avoid code duplication (DRY principle). When two or more pieces of code are very similar, they should be refactored into a single reusable component or function to improve maintainability and ensure consistency.

Comment on lines +117 to +118
except json.JSONDecodeError:
pass

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Silently passing on a json.JSONDecodeError can hide issues with the LLM's output and make debugging difficult. If the regex match is not valid JSON, this error will be swallowed, and the function will return a default value, potentially masking an underlying problem. It is better to log this exception to aid in debugging.

References
  1. Error handling should not silently ignore exceptions, as this can hide bugs and make debugging difficult. At a minimum, exceptions should be logged to provide visibility into potential issues.

Comment on lines +11 to +13
frontend_code = state.get("frontend_code", {}) or {}
backend_code = state.get("backend_code", {}) or {}
idea = state.get("idea", {}) or {}

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The or {} is redundant in expressions like state.get("key", {}) or {}. The state.get("key", {}) part already provides an empty dictionary if the key is missing. If the key's value could be None, a more idiomatic way to handle this is state.get("key") or {}. This change improves code clarity and conciseness.

Suggested change
frontend_code = state.get("frontend_code", {}) or {}
backend_code = state.get("backend_code", {}) or {}
idea = state.get("idea", {}) or {}
frontend_code = state.get("frontend_code") or {}
backend_code = state.get("backend_code") or {}
idea = state.get("idea") or {}

Comment on lines +123 to +141
def _parse_json_response(content: str, default: dict) -> dict:
content = content.strip()
if content.startswith("```"):
content = re.sub(r"^```(?:json)?\n?", "", content)
content = re.sub(r"\n?```$", "", content)

try:
return json.loads(content)
except json.JSONDecodeError:
json_match = re.search(r"\{[\s\S]*\}", content)
if json_match:
try:
return json.loads(json_match.group())
except json.JSONDecodeError:
pass

result = dict(default)
result["raw_response"] = content[:500]
return result

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The function _parse_json_response is identical to the one in agent/nodes/code_generator.py. This code duplication violates the DRY (Don't Repeat Yourself) principle and makes maintenance harder, as any changes would need to be applied in multiple places. Consider creating a shared utils.py module within the agent package to house common utility functions like this one.

References
  1. Avoid code duplication (DRY principle). When two or more pieces of code are very similar, they should be refactored into a single reusable component or function to improve maintainability and reduce the chance of introducing bugs when making changes.

Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a1ddb2bc71

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".



def _slugify(value: str) -> str:
clean = re.sub(r"[^a-zA-Z0-9\s-]", "", value).strip().lower()

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Coerce idea name before slugifying app spec name

_slugify applies re.sub directly to value, but _generate_app_spec_yaml_doc passes idea.get("name")/idea.get("tagline") without type coercion; if the LLM returns a non-string field (for example a list/object), this raises TypeError and the docs phase fails entirely. Guarding/casting to string here (like _build_repo_name does) prevents one malformed idea field from crashing document generation.

Useful? React with 👍 / 👎.

"phase": "deployed",
}

app_spec = build_app_spec(app_name, github_clone_url)

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Pass the actual repo branch into deployment spec

build_app_spec is called without a branch argument, so it always deploys branch main; this breaks in orgs/users whose default branch is not main, where App Platform will track a nonexistent branch and deployment fails despite successful repo creation. Capture the created repository’s default branch and pass it through so deployment follows the branch that actually exists.

Useful? React with 👍 / 👎.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🧹 Nitpick comments (5)
agent/nodes/doc_generator.py (2)

116-120: slug 생성 로직 중복

_slugifydeployer.py_build_repo_name과 유사한 정규화 로직을 사용합니다. 공통 유틸리티로 추출하면 일관성을 유지할 수 있습니다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/doc_generator.py` around lines 116 - 120, The slug normalization
logic in _slugify duplicates the regex-based normalization in deployer.py's
_build_repo_name; extract a shared utility (e.g., normalize_name/normalize_slug)
into a common module (utility or utils) and replace both _slugify in
agent/nodes/doc_generator.py and _build_repo_name in deployer.py to call that
shared function; ensure the new utility preserves current behavior (remove
non-alphanumerics, collapse whitespace/underscores to single hyphens, collapse
repeated hyphens, lowercase, and fallback to "vibedeploy-app") and update
imports in the two modules.

92-97: 프롬프트 지시문 불일치

APP_SPEC_SYSTEM_PROMPT는 "Return YAML only. No markdown fences."라고 지시하지만, Line 96에서 "Return JSON with one key: 'content' containing only YAML."로 덮어씁니다. LLM에 혼란을 줄 수 있으므로, doc_templates.pyAPP_SPEC_SYSTEM_PROMPT에서 원본 반환 지시문을 제거하거나 JSON 래퍼 패턴을 명시하는 것을 권장합니다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/doc_generator.py` around lines 92 - 97, The system prompts are
inconsistent: DOC_GENERATION_BASE_SYSTEM_PROMPT + APP_SPEC_SYSTEM_PROMPT (from
doc_templates.py) currently instruct YAML-only, but doc_generator.py builds a
system message that says "Return JSON with one key: 'content' containing only
YAML."; fix by making the instructions unambiguous — either remove the "Return
YAML only. No markdown fences." sentence from APP_SPEC_SYSTEM_PROMPT in
doc_templates.py, or update APP_SPEC_SYSTEM_PROMPT to explicitly state the JSON
wrapper pattern (e.g., "Return JSON with one key: 'content' whose value is YAML,
no markdown fences"), and ensure the assembled system message in
doc_generator.py (where the f-string combines DOC_GENERATION_BASE_SYSTEM_PROMPT
and APP_SPEC_SYSTEM_PROMPT) reflects that single, consistent instruction.
agent/nodes/deployer.py (1)

94-97: 타임스탬프 접미사 충돌 가능성

time.time()의 마지막 6자리를 사용하면 약 11.5일 주기로 값이 반복되어 동일한 slug 기반에서 충돌 가능성이 있습니다. MVP에서는 허용 가능하나, 프로덕션에서는 UUID 일부 사용을 고려해보세요.

♻️ 더 고유한 접미사 사용 제안
+import uuid
+
 def _build_repo_name(idea: dict) -> str:
     # ... slug generation ...
-    suffix = str(int(time.time()))[-6:]
+    suffix = uuid.uuid4().hex[:8]
     return f"{slug}-{suffix}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/deployer.py` around lines 94 - 97, The timestamp-based suffix
generation for slug in deployer.py (the slug variable and the current suffix =
str(int(time.time()))[-6:]) can collide every ~11.5 days; replace it with a more
unique suffix by combining the timestamp with a truncated UUID (e.g., use
int(time.time()) or its last digits plus uuid.uuid4().hex[:6] or similar) so the
return value f"{slug}-{suffix}" becomes much less likely to collide; update the
code that computes suffix to use uuid.uuid4() (or uuid.uuid4().hex slice)
alongside or instead of the time-based portion and keep the same final return
format.
agent/nodes/code_generator.py (2)

43-90: 중복 코드 리팩토링 권장

_generate_frontend_files_generate_backend_files는 프롬프트와 메시지만 다르고 거의 동일한 구조입니다. 공통 헬퍼로 추출하면 유지보수성이 향상됩니다.

♻️ 공통 헬퍼 추출 제안
async def _generate_files(
    llm: ChatGradient,
    system_prompt: str,
    user_message: str,
    context: str
) -> dict[str, str]:
    response = await llm.ainvoke([
        {
            "role": "system",
            "content": (
                f"{CODE_GENERATION_BASE_SYSTEM_PROMPT}\n\n"
                f"{system_prompt}\n\n"
                "Return JSON object with exactly one top-level key: 'files'."
            ),
        },
        {"role": "user", "content": f"{user_message}\n\n{context}"},
    ])
    parsed = _parse_json_response(response.content, {"files": {}})
    return _normalize_files_dict(parsed.get("files", {}))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/code_generator.py` around lines 43 - 90, Both
_generate_frontend_files and _generate_backend_files duplicate the same LLM
invocation/response parsing logic; extract a shared helper (e.g.,
_generate_files) that accepts the ChatGradient llm, the system prompt fragment
(FRONTEND_SYSTEM_PROMPT or BACKEND_SYSTEM_PROMPT), and the user message prefix,
then performs the ainvoke call, calls _parse_json_response(..., {"files": {}})
and returns _normalize_files_dict(parsed.get("files", {})). Replace
_generate_frontend_files and _generate_backend_files to call this new
_generate_files helper with the appropriate system_prompt and user message to
preserve behavior and responses.

33-34: LLM 호출 실패 시 예외 처리 부재

_generate_frontend_files_generate_backend_files 호출 시 LLM 오류가 발생하면 예외가 전파되어 전체 파이프라인이 중단됩니다. deployer.py처럼 부분 실패를 허용하는 구조를 고려해보세요.

♻️ 예외 처리 추가 제안
+    try:
+        frontend_code = await _generate_frontend_files(llm, context)
+    except Exception as e:
+        frontend_code = {"error": f"Frontend generation failed: {str(e)[:200]}"}
+
+    try:
+        backend_code = await _generate_backend_files(llm, context)
+    except Exception as e:
+        backend_code = {"error": f"Backend generation failed: {str(e)[:200]}"}
-    frontend_code = await _generate_frontend_files(llm, context)
-    backend_code = await _generate_backend_files(llm, context)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/code_generator.py` around lines 33 - 34, LLM 호출 실패 시 예외가 전파되어
파이프라인이 중단되므로 `_generate_frontend_files`와 `_generate_backend_files` 호출을 각각
try/except로 감싸 예외를 잡고 처리하세요: 각 호출에서 Exception을 잡아(예: `except Exception as e`)
오류를 로그하고(`logger.error` 또는 프로젝트의 로거 사용), 실패 시 해당 결과 변수(`frontend_code`,
`backend_code`)를 안전한 기본값(예: None 또는 빈 구조)으로 설정하여 나머지 파이프라인이 계속 실행되도록 변경하세요; 동작
방식은 이미 부분 실패를 허용하는 `deployer.py`의 처리 패턴을 참고해 일관되게 적용하세요.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@agent/nodes/code_generator.py`:
- Around line 104-122: Duplicate implementation of _parse_json_response across
code_generator.py, doc_generator.py, and vibe_council.py should be refactored
into a single utility: create a new function (keep name _parse_json_response or
rename to parse_json_response) in a shared module agent/utils/json_utils.py
implementing the exact same logic (strip, remove triple-backticks, attempt
json.loads, fallback regex extract, and attach raw_response on failure), then
replace the three in-file definitions with an import from agent.utils.json_utils
and update callers in code_generator.py, doc_generator.py, and vibe_council.py
to use the centralized function so behavior and signature remain identical.

In `@web/src/components/ui/progress.tsx`:
- Line 25: The progress indicator uses value directly in the transform (style={{
transform: `translateX(-${100 - (value || 0)}%)` }}) which can push the
indicator outside its container for values <0 or >100; clamp the incoming value
in the Progress component (or where prop 'value' is handled) to the 0–100 range
(e.g., via Math.max(0, Math.min(100, value || 0)) or a small clamp utility) and
use that clampedValue in the transform calculation so the indicator never
escapes the container.

---

Nitpick comments:
In `@agent/nodes/code_generator.py`:
- Around line 43-90: Both _generate_frontend_files and _generate_backend_files
duplicate the same LLM invocation/response parsing logic; extract a shared
helper (e.g., _generate_files) that accepts the ChatGradient llm, the system
prompt fragment (FRONTEND_SYSTEM_PROMPT or BACKEND_SYSTEM_PROMPT), and the user
message prefix, then performs the ainvoke call, calls _parse_json_response(...,
{"files": {}}) and returns _normalize_files_dict(parsed.get("files", {})).
Replace _generate_frontend_files and _generate_backend_files to call this new
_generate_files helper with the appropriate system_prompt and user message to
preserve behavior and responses.
- Around line 33-34: LLM 호출 실패 시 예외가 전파되어 파이프라인이 중단되므로
`_generate_frontend_files`와 `_generate_backend_files` 호출을 각각 try/except로 감싸 예외를
잡고 처리하세요: 각 호출에서 Exception을 잡아(예: `except Exception as e`) 오류를
로그하고(`logger.error` 또는 프로젝트의 로거 사용), 실패 시 해당 결과 변수(`frontend_code`,
`backend_code`)를 안전한 기본값(예: None 또는 빈 구조)으로 설정하여 나머지 파이프라인이 계속 실행되도록 변경하세요; 동작
방식은 이미 부분 실패를 허용하는 `deployer.py`의 처리 패턴을 참고해 일관되게 적용하세요.

In `@agent/nodes/deployer.py`:
- Around line 94-97: The timestamp-based suffix generation for slug in
deployer.py (the slug variable and the current suffix =
str(int(time.time()))[-6:]) can collide every ~11.5 days; replace it with a more
unique suffix by combining the timestamp with a truncated UUID (e.g., use
int(time.time()) or its last digits plus uuid.uuid4().hex[:6] or similar) so the
return value f"{slug}-{suffix}" becomes much less likely to collide; update the
code that computes suffix to use uuid.uuid4() (or uuid.uuid4().hex slice)
alongside or instead of the time-based portion and keep the same final return
format.

In `@agent/nodes/doc_generator.py`:
- Around line 116-120: The slug normalization logic in _slugify duplicates the
regex-based normalization in deployer.py's _build_repo_name; extract a shared
utility (e.g., normalize_name/normalize_slug) into a common module (utility or
utils) and replace both _slugify in agent/nodes/doc_generator.py and
_build_repo_name in deployer.py to call that shared function; ensure the new
utility preserves current behavior (remove non-alphanumerics, collapse
whitespace/underscores to single hyphens, collapse repeated hyphens, lowercase,
and fallback to "vibedeploy-app") and update imports in the two modules.
- Around line 92-97: The system prompts are inconsistent:
DOC_GENERATION_BASE_SYSTEM_PROMPT + APP_SPEC_SYSTEM_PROMPT (from
doc_templates.py) currently instruct YAML-only, but doc_generator.py builds a
system message that says "Return JSON with one key: 'content' containing only
YAML."; fix by making the instructions unambiguous — either remove the "Return
YAML only. No markdown fences." sentence from APP_SPEC_SYSTEM_PROMPT in
doc_templates.py, or update APP_SPEC_SYSTEM_PROMPT to explicitly state the JSON
wrapper pattern (e.g., "Return JSON with one key: 'content' whose value is YAML,
no markdown fences"), and ensure the assembled system message in
doc_generator.py (where the f-string combines DOC_GENERATION_BASE_SYSTEM_PROMPT
and APP_SPEC_SYSTEM_PROMPT) reflects that single, consistent instruction.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ff5ff231-9c91-424e-b4ae-e4cc8e99f4c1

📥 Commits

Reviewing files that changed from the base of the PR and between a854673 and a1ddb2b.

⛔ Files ignored due to path filters (1)
  • web/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (13)
  • agent/nodes/code_generator.py
  • agent/nodes/deployer.py
  • agent/nodes/doc_generator.py
  • agent/prompts/code_templates.py
  • agent/prompts/doc_templates.py
  • web/package.json
  • web/src/components/ui/alert.tsx
  • web/src/components/ui/avatar.tsx
  • web/src/components/ui/dialog.tsx
  • web/src/components/ui/progress.tsx
  • web/src/components/ui/scroll-area.tsx
  • web/src/components/ui/skeleton.tsx
  • web/src/components/ui/tooltip.tsx

Comment on lines +104 to +122
def _parse_json_response(content: str, default: dict) -> dict:
content = content.strip()
if content.startswith("```"):
content = re.sub(r"^```(?:json)?\n?", "", content)
content = re.sub(r"\n?```$", "", content)

try:
return json.loads(content)
except json.JSONDecodeError:
json_match = re.search(r"\{[\s\S]*\}", content)
if json_match:
try:
return json.loads(json_match.group())
except json.JSONDecodeError:
pass

result = dict(default)
result["raw_response"] = content[:500]
return result
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -n "_parse_json_response" --type=py

Repository: Two-Weeks-Team/vibeDeploy

Length of output: 889


🏁 Script executed:

# Extract _parse_json_response from doc_generator.py
sed -n '123,141p' agent/nodes/doc_generator.py

# Extract _parse_json_response from vibe_council.py
sed -n '259,277p' agent/nodes/vibe_council.py

# Extract _parse_json_response from code_generator.py (already shown in review)
sed -n '104,122p' agent/nodes/code_generator.py

Repository: Two-Weeks-Team/vibeDeploy

Length of output: 1951


_parse_json_response 중복 - 3개 파일에서 발견됨

이 함수는 code_generator.py, doc_generator.py, vibe_council.py 3개 파일에서 동일하게 구현되어 있습니다. 공통 유틸리티 모듈(예: agent/utils/json_utils.py)로 추출하여 중복을 제거하고 유지보수성을 향상시키세요.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agent/nodes/code_generator.py` around lines 104 - 122, Duplicate
implementation of _parse_json_response across code_generator.py,
doc_generator.py, and vibe_council.py should be refactored into a single
utility: create a new function (keep name _parse_json_response or rename to
parse_json_response) in a shared module agent/utils/json_utils.py implementing
the exact same logic (strip, remove triple-backticks, attempt json.loads,
fallback regex extract, and attach raw_response on failure), then replace the
three in-file definitions with an import from agent.utils.json_utils and update
callers in code_generator.py, doc_generator.py, and vibe_council.py to use the
centralized function so behavior and signature remain identical.

<ProgressPrimitive.Indicator
data-slot="progress-indicator"
className="h-full w-full flex-1 bg-primary transition-all"
style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

진행률 값은 0~100으로 클램프하는 게 안전합니다.

Line 25 계산은 value가 음수/100 초과일 때 인디케이터가 컨테이너 범위를 벗어날 수 있습니다. 렌더 전 클램프를 권장합니다.

수정 제안
 function Progress({
   className,
   value,
   ...props
 }: React.ComponentProps<typeof ProgressPrimitive.Root>) {
+  const clampedValue = Math.min(100, Math.max(0, value ?? 0))
   return (
     <ProgressPrimitive.Root
@@
       <ProgressPrimitive.Indicator
         data-slot="progress-indicator"
         className="h-full w-full flex-1 bg-primary transition-all"
-        style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
+        style={{ transform: `translateX(-${100 - clampedValue}%)` }}
       />
     </ProgressPrimitive.Root>
   )
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/components/ui/progress.tsx` at line 25, The progress indicator uses
value directly in the transform (style={{ transform: `translateX(-${100 - (value
|| 0)}%)` }}) which can push the indicator outside its container for values <0
or >100; clamp the incoming value in the Progress component (or where prop
'value' is handled) to the 0–100 range (e.g., via Math.max(0, Math.min(100,
value || 0)) or a small clamp utility) and use that clampedValue in the
transform calculation so the indicator never escapes the container.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant