A comprehensive workspace for Technical Artists/Directors covering AI, LLMs, Houdini, and production pipelines.

🚀 START HERE - GETTING_STARTED.md

The project has been restructured and extended with LLM integration:
- 🤖 FastAPI Backend (api-server/)
  - LLM endpoints (chat, code generation, debug)
  - Houdini integration endpoints
  - Pipeline automation APIs
  - WebSocket support
  - RAG system with ChromaDB
- 🌐 Next.js Frontend (web-dev/llm-assistant/)
  - Modern chat interface
  - Houdini remote control
  - Pipeline dashboard
  - Real-time updates
  - Responsive design
- 📚 Enhanced Documentation
  - GETTING_STARTED.md - Complete setup guide
  - PROJECT_STRUCTURE.md - Architecture details
  - WORKSPACE_GUIDE.md - Workflows & examples
- 🎨 Houdini Integration (houdini-integration/)
  - Python panels structure
  - HDA templates
  - API client for Houdini
```
TD/
├── 📚 TC/                      # 16-week AI Course (131 modules)
│   ├── week08*/                # ✅ LLM content (base for this project)
│   └── [Complete documentation]
│
├── 🤖 api-server/              # FastAPI LLM Backend ✨ NEW
│   ├── main.py
│   ├── routers/                # LLM, Houdini, Pipeline
│   ├── services/               # Business logic
│   └── core/                   # Config & logging
│
├── 🌐 web-dev/llm-assistant/   # Next.js Frontend ✨ NEW
│   ├── src/app/                # Pages & components
│   └── package.json
│
├── 🎨 houdini-integration/     # Houdini Tools ✨ NEW
│   ├── python/                 # Python panels
│   └── hdas/                   # Digital assets
│
├── 📊 data/                    # Data & Models
│   ├── models/                 # LLM models
│   ├── prompts/                # Prompt templates
│   └── knowledge-base/         # RAG database
│
├── 🐍 scripts-local/           # Python automation
├── 🎬 houdini-files/           # Scene files
└── 📁 projects/                # Active projects
```
- Python 3.9+ (Backend)
- Node.js 18+ (Frontend)
- Houdini 19.5+ (Optional, for integration)
- PostgreSQL 15+ (Optional, for production)
- Redis 7+ (Optional, for caching)
```bash
cd api-server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # macOS/Linux
# venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env with your API keys:
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Start server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

- API Server: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
```bash
cd web-dev/llm-assistant

# Install dependencies
npm install

# Configure environment
cp .env.example .env.local
# Edit .env.local:
# NEXT_PUBLIC_API_URL=http://localhost:8000

# Start development server
npm run dev
```

- Web App: http://localhost:3000
```bash
# PostgreSQL
docker run -d --name postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=llm_assistant \
  -p 5432:5432 postgres:15

# Redis
docker run -d --name redis \
  -p 6379:6379 redis:7-alpine
```

```bash
cd TC
./setup.sh && source venv/bin/activate
jupyter notebook week08*/  # Study LLM modules
```

📖 Full Guide: GETTING_STARTED.md
| Feature | Description | API Endpoint |
|---|---|---|
| 🤖 LLM Chat | Natural language interface for Houdini questions | POST /api/llm/chat |
| 💻 Code Generation | Generate Python, VEX, HScript code | POST /api/llm/generate-code |
| 🐛 Debugging | AI-powered code debugging | POST /api/llm/debug-code |
| ⚡ Code Optimization | Optimize code for performance/readability | POST /api/llm/optimize-code |
| 🎨 Houdini Control | Remote Houdini control via web | POST /api/houdini/execute-code |
| 🏗️ Node Creation | Create Houdini nodes programmatically | POST /api/houdini/create-node |
| 🔗 Network Generation | Generate node networks from description | POST /api/houdini/generate-network |
| ⚙️ Pipeline Automation | Automated workflows | GET /api/pipeline/jobs |
| 📊 Monitoring | Real-time job monitoring | WebSocket /ws/{client_id} |
| 🔍 RAG Search | Context-aware responses with ChromaDB | Integrated in LLM service |
| 🎯 Fine-tuning | Custom model training | Future feature |
| 📝 Text Completion | Streaming text generation | POST /api/llm/completion |
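As an illustration, a minimal client for the chat endpoint might look like the sketch below. The payload field names (`message`, `context`) are assumptions; confirm the real schema in the Swagger UI at /docs before relying on them.

```python
import json
import urllib.request

API_URL = "http://localhost:8000"

def build_chat_request(message, context=None):
    # Hypothetical payload shape -- verify against the generated
    # OpenAPI schema at /docs.
    payload = {"message": message}
    if context is not None:
        payload["context"] = context
    return payload

def send_chat(message):
    # POST the JSON body to the chat endpoint and decode the reply.
    req = urllib.request.Request(
        API_URL + "/api/llm/chat",
        data=json.dumps(build_chat_request(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires the server to be running):
# send_chat("How do I scatter points on a grid in Houdini?")
```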
| Module | Learn | Apply To |
|---|---|---|
| m01 | LLM basics | Architecture |
| m02 | Houdini AI Bot | Base implementation ✅ |
| m03 | Fine-tuning | Custom models |
| m04 | Transformers | Technical depth |
| m05 | Pipeline assistants | Automation |
| m06 | LLM techniques | Prompt engineering |
| m07 | Synthetic data | Data generation |
| m10 | Digital Human | Production workflow ✅ |
```
User → "Create VEX scatter points with noise"
  ↓
Web UI → API (/api/llm/generate-code)
  ↓
GPT-4/Claude → Generate optimized VEX
  ↓
User ← Formatted code + explanation
  ↓
Apply to Houdini Point Wrangle
```
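The final "apply" step can be scripted inside Houdini itself. A minimal sketch follows; the VEX snippet is illustrative rather than actual service output, the `/obj/geo1` path is an assumption, and `hou` is only importable inside a Houdini session.

```python
# Illustrative VEX: push points along a noise field.
VEX_SNIPPET = """\
float amp = chf("amp");
vector n = noise(v@P * chf("freq"));
v@P += n * amp;
"""

def apply_to_point_wrangle(vex_code, parent_path="/obj/geo1"):
    # Create an Attribute Wrangle under the given geo node and paste
    # the generated VEX into its snippet parameter.
    import hou  # available only inside Houdini's embedded Python
    parent = hou.node(parent_path)
    wrangle = parent.createNode("attribwrangle", "llm_generated")
    wrangle.parm("snippet").set(vex_code)
    return wrangle
```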
```
JSON spec → API → Houdini Integration
  ↓
1. ML Mesh topology generation
2. Diffusion model textures
3. Body matching (scan library)
4. Clothing system
  ↓
USD Export → Deployment
```
```
Error in Houdini
  ↓
Send code + error to API
  ↓
LLM analyzes & fixes
  ↓
Return: analysis + fixed code + explanation
```
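The "send code + error" step reduces to one POST request. A hedged sketch, assuming field names `code`, `error`, and `language` (check /docs for the actual schema):

```python
import json
import urllib.request

API_URL = "http://localhost:8000"

def build_debug_request(code, error, language="python"):
    # Hypothetical field names for POST /api/llm/debug-code.
    return {"code": code, "error": error, "language": language}

def debug_code(code, error, language="python"):
    # Send the failing code plus its error message; the response
    # should carry the analysis, fixed code, and explanation.
    req = urllib.request.Request(
        API_URL + "/api/llm/debug-code",
        data=json.dumps(build_debug_request(code, error, language)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (server must be running):
# debug_code("hou.node('/obj/geo1').parm('missing')", "AttributeError: ...")
```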
| Technology | Version | Purpose |
|---|---|---|
| FastAPI | 0.109.0 | Modern Python web framework |
| Python | 3.9+ | Programming language |
| Uvicorn | 0.27.0 | ASGI server |
| Pydantic | 2.5.3 | Data validation |
| PostgreSQL | 15+ | Relational database |
| Redis | 7+ | Cache & sessions |
| ChromaDB | 0.4.22 | Vector database (RAG) |
| SQLAlchemy | 2.0.25 | ORM |
| Alembic | 1.13.1 | Database migrations |
| Technology | Version | Purpose |
|---|---|---|
| Next.js | 14+ | React framework (App Router) |
| TypeScript | 5+ | Type safety |
| Tailwind CSS | 3+ | Utility-first CSS |
| Radix UI | Latest | Accessible components |
| React Query | 5+ | Server state management |
| Zustand | 4+ | Client state management |
| Service | Model | Context Window | Purpose |
|---|---|---|---|
| OpenAI | GPT-4 Turbo | 128K | Primary LLM |
| OpenAI | GPT-3.5 Turbo | 16K | Fast responses |
| Anthropic | Claude 3 Opus | 200K | Advanced reasoning |
| Anthropic | Claude 3 Sonnet | 200K | Balanced performance |
| Local | Mistral 7B | 8K | Offline inference |
| Embeddings | all-MiniLM-L6-v2 | - | Vector embeddings |
- Houdini Python API (hou) - Node manipulation
- Houdini RPC Server - Remote execution (Port 9090)
- HDAs - Digital assets
- Python Panels - In-app UI
- GETTING_STARTED.md ← START HERE
- ARCHITECTURE.md - System architecture ✅
- PROJECT_STRUCTURE.md - Project structure
- WORKSPACE_GUIDE.md - Workflows
- TC/00-START-HERE.md
- TC/INDEX.md - 131 modules
- TC/TOPICS.md - A-Z lookup
- Swagger UI: http://localhost:8000/docs - Interactive API documentation
- ReDoc: http://localhost:8000/redoc - Alternative API docs
- OpenAPI Schema: http://localhost:8000/openapi.json
- POST /chat - Chat with LLM assistant
- POST /completion - Text completion (with streaming)
- POST /generate-code - Generate code (Python, VEX, HScript)
- POST /debug-code - Debug code with AI assistance
- POST /optimize-code - Optimize code
- GET /models - List available LLM models

- POST /execute-code - Execute Python code in Houdini
- POST /create-node - Create Houdini node
- GET /scene-info - Get current scene information
- POST /generate-network - Generate node network from description

- GET /jobs - List pipeline jobs
- POST /jobs - Create new job
- GET /jobs/{id} - Get job details

- WS /ws/{client_id} - Real-time communication
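A client for the real-time WebSocket route might look like the sketch below. The third-party `websockets` package, the client_id value, and the subscribe message shape are all assumptions; the server only defines the /ws/{client_id} route.

```python
import json

WS_URL = "ws://localhost:8000/ws/monitor-01"  # "monitor-01" is an arbitrary client_id

def build_subscribe_message(job_id):
    # Hypothetical message format -- the actual protocol is whatever
    # the server's WebSocket handler expects.
    return json.dumps({"action": "subscribe", "job_id": job_id})

async def listen(url=WS_URL):
    import websockets  # third-party: pip install websockets
    async with websockets.connect(url) as ws:
        await ws.send(build_subscribe_message("job-123"))
        async for message in ws:  # iterate incoming frames until the server closes
            print("update:", message)

# Example (server must be running):
# import asyncio; asyncio.run(listen())
```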
- Project restructuring
- API server architecture (FastAPI)
- FastAPI endpoints (LLM, Houdini, Pipeline, Assistant)
- WebSocket support for real-time communication
- Configuration management (Pydantic Settings)
- Logging system
- CORS middleware
- Comprehensive documentation
- Architecture documentation
- Getting started guide
- Project structure guide
- Workspace guide
- Integration design
- Complete LLM service implementation
- Build RAG system with ChromaDB
- Create chat UI components (Next.js)
- Houdini Python panel
- Database migrations (Alembic)
- Authentication & authorization (JWT)
- Production deployment (Docker)
- CI/CD pipeline
- Advanced monitoring (Prometheus, Grafana)
- Multi-tenant support
- Mobile app
- Fine-tuning pipeline
- Advanced RAG with fine-tuning
- Technical Artists: Generate code, debug scripts, automate tasks
- Pipeline TDs: Monitor workflows, schedule jobs, optimize pipelines
- VFX Producers: Track progress, analyze metrics, generate reports
```bash
# Server
HOST=0.0.0.0
PORT=8000
ENVIRONMENT=development
DEBUG=true

# LLM Providers
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo-preview
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-opus-20240229

# Local LLM
LOCAL_LLM_MODEL=mistral-7b-instruct
LOCAL_LLM_PATH=../data/models/llm/

# Vector Database
CHROMA_PATH=../data/knowledge-base/vector-db/chroma
CHROMA_COLLECTION=houdini_docs
EMBEDDING_MODEL=all-MiniLM-L6-v2

# Databases
DATABASE_URL=postgresql://user:password@localhost:5432/llm_assistant
REDIS_URL=redis://localhost:6379/0

# Security
SECRET_KEY=change-this-secret-key-in-production
ACCESS_TOKEN_EXPIRE_MINUTES=30

# Houdini
HOUDINI_HOST=localhost
HOUDINI_PORT=9090
HOUDINI_RPC_ENABLED=true

# Rate Limiting
RATE_LIMIT_PER_MINUTE=60
```

Frontend (.env.local):

```bash
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_WS_URL=ws://localhost:8000
```

- Start Here: GETTING_STARTED.md - Complete setup guide
- Architecture: ARCHITECTURE.md - System architecture details
- Structure: PROJECT_STRUCTURE.md - Project organization
- Workflows: WORKSPACE_GUIDE.md - Usage examples
- API Docs: http://localhost:8000/docs - Interactive API documentation
- TC Course: Study TC Week 08 modules for LLM fundamentals
- TC Course: https://github.com/aerovfx/TechnicalArtist.git
- FastAPI: https://fastapi.tiangolo.com/
- Next.js: https://nextjs.org/
- OpenAI: https://platform.openai.com/
- Houdini: https://www.sidefx.com/
- CPU: 4 cores
- RAM: 8GB
- Storage: 10GB free space
- Network: Internet connection for LLM APIs
- CPU: 8+ cores
- RAM: 16GB+
- Storage: 50GB+ (for local models)
- GPU: NVIDIA GPU with 8GB+ VRAM (for local LLM inference)
- CPU: 16+ cores
- RAM: 32GB+
- Storage: 100GB+ SSD
- Database: PostgreSQL 15+ with replication
- Cache: Redis 7+ cluster
- Load Balancer: Nginx or similar
- Never commit .env files to version control
- Change SECRET_KEY in production
- Use strong passwords for databases
- Enable HTTPS in production
- Implement rate limiting
- Use environment-specific API keys
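Several of these rules come down to reading secrets from the environment instead of hardcoding them. A minimal stdlib stand-in for the Pydantic Settings layer mentioned earlier (the real api-server presumably uses pydantic-settings; the field names here just mirror the .env keys above):

```python
import os

class Settings:
    # Stdlib stand-in for a pydantic-settings BaseSettings class;
    # reads configuration from environment variables only.
    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.secret_key = env.get("SECRET_KEY", "")
        self.database_url = env.get("DATABASE_URL", "")
        self.openai_api_key = env.get("OPENAI_API_KEY", "")
        if not self.secret_key:
            # Fail fast instead of booting with an empty secret.
            raise ValueError("SECRET_KEY must be set in the environment")

# Example:
# settings = Settings({"SECRET_KEY": "s3cr3t"})
```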
This project is for educational and professional use by Technical Artists.
Built with ❤️ by Technical Artists, for Technical Artists
Version: 1.0.0
Last Updated: 2026-01-06
Ready to start? → GETTING_STARTED.md 🚀