
aerovfx/TechnicalArtist

TD - Technical Director Workspace with LLM Integration

A unified workspace for Technical Artists/Directors covering AI, LLMs, Houdini, and the production pipeline.

πŸš€ START HERE - GETTING_STARTED.md


🎯 What's New - LLM Integration

The project has been restructured and extended with LLM integration:

βœ… What Was Added

  1. πŸ€– FastAPI Backend (api-server/)

    • LLM endpoints (chat, code generation, debug)
    • Houdini integration endpoints
    • Pipeline automation APIs
    • WebSocket support
    • RAG system with ChromaDB
  2. 🌐 Next.js Frontend (web-dev/llm-assistant/)

    • Modern chat interface
    • Houdini remote control
    • Pipeline dashboard
    • Real-time updates
    • Responsive design
  3. πŸ“š Enhanced Documentation

  4. 🎨 Houdini Integration (houdini-integration/)

    • Python panels structure
    • HDA templates
    • API client for Houdini

πŸ“ Project Structure

```
TD/
β”œβ”€β”€ πŸ“š TC/                          # 16-week AI Course (131 modules)
β”‚   β”œβ”€β”€ week08*/                    # ⭐ LLM content (base for this project)
β”‚   └── [Complete documentation]
β”‚
β”œβ”€β”€ πŸ€– api-server/                  # FastAPI LLM Backend ✨ NEW
β”‚   β”œβ”€β”€ main.py
β”‚   β”œβ”€β”€ routers/                    # LLM, Houdini, Pipeline
β”‚   β”œβ”€β”€ services/                   # Business logic
β”‚   └── core/                       # Config & logging
β”‚
β”œβ”€β”€ 🌐 web-dev/llm-assistant/       # Next.js Frontend ✨ NEW
β”‚   β”œβ”€β”€ src/app/                    # Pages & components
β”‚   └── package.json
β”‚
β”œβ”€β”€ 🎨 houdini-integration/         # Houdini Tools ✨ NEW
β”‚   β”œβ”€β”€ python/                     # Python panels
β”‚   └── hdas/                       # Digital assets
β”‚
β”œβ”€β”€ πŸ“Š data/                        # Data & Models
β”‚   β”œβ”€β”€ models/                     # LLM models
β”‚   β”œβ”€β”€ prompts/                    # Prompt templates
β”‚   └── knowledge-base/             # RAG database
β”‚
β”œβ”€β”€ 🐍 scripts-local/               # Python automation
β”œβ”€β”€ 🎬 houdini-files/               # Scene files
└── πŸ“ projects/                    # Active projects
```

πŸš€ Quick Start

Prerequisites

  • Python 3.9+ (Backend)
  • Node.js 18+ (Frontend)
  • Houdini 19.5+ (Optional, for integration)
  • PostgreSQL 15+ (Optional, for production)
  • Redis 7+ (Optional, for caching)

1. API Server Setup

```bash
cd api-server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # macOS/Linux
# venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env with your API keys:
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Start server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

API Server: http://localhost:8000
API Docs: http://localhost:8000/docs
Health Check: http://localhost:8000/health

2. Web Frontend Setup

```bash
cd web-dev/llm-assistant

# Install dependencies
npm install

# Configure environment
cp .env.example .env.local
# Edit .env.local:
# NEXT_PUBLIC_API_URL=http://localhost:8000

# Start development server
npm run dev
```

Web App: http://localhost:3000

3. Database Setup (Optional)

```bash
# PostgreSQL
docker run -d --name postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=llm_assistant \
  -p 5432:5432 postgres:15

# Redis
docker run -d --name redis \
  -p 6379:6379 redis:7-alpine
```
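The docker commands above only start the containers; a quick way to confirm both services are reachable is a plain TCP check. A minimal sketch using only the standard library (host and ports match the defaults above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports from the docker commands above
    print("PostgreSQL:", "up" if port_open("localhost", 5432) else "down")
    print("Redis:     ", "up" if port_open("localhost", 6379) else "down")
```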

4. Learn Foundation

```bash
cd TC
./setup.sh && source venv/bin/activate
jupyter notebook week08*/  # Study LLM modules
```

πŸ“– Full Guide: GETTING_STARTED.md


✨ Key Features

| Feature | Description | API Endpoint |
| --- | --- | --- |
| πŸ€– LLM Chat | Natural language interface for Houdini questions | `POST /api/llm/chat` |
| πŸ’» Code Generation | Generate Python, VEX, HScript code | `POST /api/llm/generate-code` |
| πŸ› Debugging | AI-powered code debugging | `POST /api/llm/debug-code` |
| ⚑ Code Optimization | Optimize code for performance/readability | `POST /api/llm/optimize-code` |
| 🎨 Houdini Control | Remote Houdini control via web | `POST /api/houdini/execute-code` |
| πŸ—οΈ Node Creation | Create Houdini nodes programmatically | `POST /api/houdini/create-node` |
| 🌐 Network Generation | Generate node networks from description | `POST /api/houdini/generate-network` |
| βš™οΈ Pipeline Automation | Automated workflows | `GET /api/pipeline/jobs` |
| πŸ“Š Monitoring | Real-time job monitoring | `WebSocket /ws/{client_id}` |
| πŸ” RAG Search | Context-aware responses with ChromaDB | Integrated in LLM service |
| 🎯 Fine-tuning | Custom model training | Future feature |
| πŸ“ Text Completion | Streaming text generation | `POST /api/llm/completion` |

πŸŽ“ Learning Path

Based on TC Week 08 Content

| Module | Learn | Apply To |
| --- | --- | --- |
| m01 | LLM basics | Architecture |
| m02 | Houdini AI Bot | Base implementation ⭐ |
| m03 | Fine-tuning | Custom models |
| m04 | Transformers | Technical depth |
| m05 | Pipeline assistants | Automation |
| m06 | LLM techniques | Prompt engineering |
| m07 | Synthetic data | Data generation |
| m10 | Digital Human | Production workflow ⭐ |

πŸ”„ Example Workflows

1. Generate VEX Code

```
User β†’ "Create VEX scatter points with noise"
  ↓
Web UI β†’ API (/api/llm/generate-code)
  ↓
GPT-4/Claude β†’ Generate optimized VEX
  ↓
User ← Formatted code + explanation
  ↓
Apply to Houdini Point Wrangle
```
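The final step above can be automated from inside a Houdini Python session. A minimal sketch using the `hou` API (the `/obj/llm_geo` path and node names are illustrative assumptions; the wrangle's run-over mode is left at its Points default):

```python
def apply_vex_to_wrangle(vex_code: str, geo_path: str = "/obj/llm_geo"):
    """Create an Attribute Wrangle under a geo container and paste VEX into it.

    Must run inside Houdini, where the `hou` module is importable.
    """
    import hou  # deferred import: only available inside a Houdini session

    geo = hou.node(geo_path)
    if geo is None:
        geo = hou.node("/obj").createNode("geo", "llm_geo")
    wrangle = geo.createNode("attribwrangle", "llm_wrangle")
    wrangle.parm("snippet").set(vex_code)  # the generated VEX code
    return wrangle
```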

2. Automated Digital Human

```
JSON spec β†’ API β†’ Houdini Integration
  ↓
1. ML Mesh topology generation
2. Diffusion model textures
3. Body matching (scan library)
4. Clothing system
  ↓
USD Export β†’ Deployment
```

3. Debug Python Script

```
Error in Houdini
  ↓
Send code + error to API
  ↓
LLM analyzes & fixes
  ↓
Return: analysis + fixed code + explanation
```

πŸ› οΈ Tech Stack

Backend

| Technology | Version | Purpose |
| --- | --- | --- |
| FastAPI | 0.109.0 | Modern Python web framework |
| Python | 3.9+ | Programming language |
| Uvicorn | 0.27.0 | ASGI server |
| Pydantic | 2.5.3 | Data validation |
| PostgreSQL | 15+ | Relational database |
| Redis | 7+ | Cache & sessions |
| ChromaDB | 0.4.22 | Vector database (RAG) |
| SQLAlchemy | 2.0.25 | ORM |
| Alembic | 1.13.1 | Database migrations |

Frontend

| Technology | Version | Purpose |
| --- | --- | --- |
| Next.js | 14+ | React framework (App Router) |
| TypeScript | 5+ | Type safety |
| Tailwind CSS | 3+ | Utility-first CSS |
| Radix UI | Latest | Accessible components |
| React Query | 5+ | Server state management |
| Zustand | 4+ | Client state management |

AI/ML

| Service | Model | Context Window | Purpose |
| --- | --- | --- | --- |
| OpenAI | GPT-4 Turbo | 128K | Primary LLM |
| OpenAI | GPT-3.5 Turbo | 16K | Fast responses |
| Anthropic | Claude 3 Opus | 200K | Advanced reasoning |
| Anthropic | Claude 3 Sonnet | 200K | Balanced performance |
| Local | Mistral 7B | 8K | Offline inference |
| Embeddings | all-MiniLM-L6-v2 | - | Vector embeddings |
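With several providers offering different context windows (table above), a backend typically routes each request by prompt size and preferred tier. A hypothetical routing helper (model names mirror the table; the tier labels and selection policy are assumptions):

```python
# Context windows from the AI/ML table, in tokens
MODELS = {
    "gpt-4-turbo":     {"context": 128_000, "tier": "primary"},
    "gpt-3.5-turbo":   {"context": 16_000,  "tier": "fast"},
    "claude-3-opus":   {"context": 200_000, "tier": "reasoning"},
    "claude-3-sonnet": {"context": 200_000, "tier": "balanced"},
    "mistral-7b":      {"context": 8_000,   "tier": "local"},
}

def pick_model(prompt_tokens: int, prefer: str = "primary") -> str:
    """Return the preferred-tier model if the prompt fits its window,
    otherwise the smallest model whose window still holds the prompt."""
    candidates = [(n, m) for n, m in MODELS.items() if m["context"] >= prompt_tokens]
    if not candidates:
        raise ValueError(f"prompt of {prompt_tokens} tokens exceeds every context window")
    for name, meta in candidates:
        if meta["tier"] == prefer:
            return name
    return min(candidates, key=lambda nm: nm[1]["context"])[0]
```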

Houdini Integration

  • Houdini Python API (hou) - Node manipulation
  • Houdini RPC Server - Remote execution (Port 9090)
  • HDAs - Digital assets
  • Python Panels - In-app UI

πŸ“– Documentation

Setup & Usage

TC Course

API Documentation

  • Swagger UI: http://localhost:8000/docs - Interactive API documentation
  • ReDoc: http://localhost:8000/redoc - Alternative API docs
  • OpenAPI Schema: http://localhost:8000/openapi.json

API Endpoints Summary

LLM Endpoints (/api/llm)

  • POST /chat - Chat with LLM assistant
  • POST /completion - Text completion (with streaming)
  • POST /generate-code - Generate code (Python, VEX, HScript)
  • POST /debug-code - Debug code with AI assistance
  • POST /optimize-code - Optimize code
  • GET /models - List available LLM models

Houdini Endpoints (/api/houdini)

  • POST /execute-code - Execute Python code in Houdini
  • POST /create-node - Create Houdini node
  • GET /scene-info - Get current scene information
  • POST /generate-network - Generate node network from description
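As a sketch of how a pipeline script might drive the node-creation route: the endpoint path is from the list above, while the field names (`node_type`, `parent_path`, `name`) are illustrative assumptions, not a documented schema.

```python
import json
from typing import Optional

def create_node_request(node_type: str, parent: str = "/obj",
                        name: Optional[str] = None) -> str:
    """Serialize a JSON body for POST /api/houdini/create-node.

    Field names are hypothetical; check the Swagger UI for the real schema.
    """
    body = {"node_type": node_type, "parent_path": parent}
    if name is not None:
        body["name"] = name
    return json.dumps(body)
```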

Pipeline Endpoints (/api/pipeline)

  • GET /jobs - List pipeline jobs
  • POST /jobs - Create new job
  • GET /jobs/{id} - Get job details

WebSocket

  • WS /ws/{client_id} - Real-time communication
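The WebSocket route embeds a client identifier in the path. Building the URL is straightforward; the `ws://localhost:8000` base matches `NEXT_PUBLIC_WS_URL` below, and the connection snippet in the comments assumes the third-party `websockets` package:

```python
WS_BASE = "ws://localhost:8000"

def ws_url(client_id: str) -> str:
    """Build the real-time endpoint URL for a client (route: /ws/{client_id})."""
    if not client_id or "/" in client_id:
        raise ValueError("client_id must be a non-empty single path segment")
    return f"{WS_BASE}/ws/{client_id}"

# Connecting (assumes `pip install websockets`):
# import asyncio, websockets
# async def listen():
#     async with websockets.connect(ws_url("ta-01")) as ws:
#         async for message in ws:
#             print(message)
# asyncio.run(listen())
```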

πŸ“Š Project Status

βœ… Completed (v1.0.0 - 2026-01-06)

  • Project restructuring
  • API server architecture (FastAPI)
  • FastAPI endpoints (LLM, Houdini, Pipeline, Assistant)
  • WebSocket support for real-time communication
  • Configuration management (Pydantic Settings)
  • Logging system
  • CORS middleware
  • Comprehensive documentation
    • Architecture documentation
    • Getting started guide
    • Project structure guide
    • Workspace guide
  • Integration design

🚧 In Progress

  • Complete LLM service implementation
  • Build RAG system with ChromaDB
  • Create chat UI components (Next.js)
  • Houdini Python panel
  • Database migrations (Alembic)
  • Authentication & authorization (JWT)

πŸ“‹ Planned Features

  • Production deployment (Docker)
  • CI/CD pipeline
  • Advanced monitoring (Prometheus, Grafana)
  • Multi-tenant support
  • Mobile app
  • Fine-tuning pipeline
  • Advanced RAG with fine-tuning

🎯 Use Cases

  • Technical Artists: generate code, debug scripts, automate tasks
  • Pipeline TDs: monitor workflows, schedule jobs, optimize pipelines
  • VFX Producers: track progress, analyze metrics, generate reports


βš™οΈ Configuration

Environment Variables

API Server (.env)

```bash
# Server
HOST=0.0.0.0
PORT=8000
ENVIRONMENT=development
DEBUG=true

# LLM Providers
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo-preview
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-opus-20240229

# Local LLM
LOCAL_LLM_MODEL=mistral-7b-instruct
LOCAL_LLM_PATH=../data/models/llm/

# Vector Database
CHROMA_PATH=../data/knowledge-base/vector-db/chroma
CHROMA_COLLECTION=houdini_docs
EMBEDDING_MODEL=all-MiniLM-L6-v2

# Databases
DATABASE_URL=postgresql://user:password@localhost:5432/llm_assistant
REDIS_URL=redis://localhost:6379/0

# Security
SECRET_KEY=change-this-secret-key-in-production
ACCESS_TOKEN_EXPIRE_MINUTES=30

# Houdini
HOUDINI_HOST=localhost
HOUDINI_PORT=9090
HOUDINI_RPC_ENABLED=true

# Rate Limiting
RATE_LIMIT_PER_MINUTE=60
```
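FastAPI projects usually load this file through Pydantic Settings (listed in the stack above). For illustration, a minimal dotenv-style parser that handles the comment and `KEY=value` lines shown here (a sketch, not the project's actual loader):

```python
def parse_env(text: str) -> dict:
    """Parse KEY=value lines; skip blanks and # comments; keep '=' inside values."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```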

Frontend (.env.local)

```bash
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_WS_URL=ws://localhost:8000
```

πŸ’‘ Getting Help

  1. Start Here: GETTING_STARTED.md - Complete setup guide
  2. Architecture: ARCHITECTURE.md - System architecture details
  3. Structure: PROJECT_STRUCTURE.md - Project organization
  4. Workflows: WORKSPACE_GUIDE.md - Usage examples
  5. API Docs: http://localhost:8000/docs - Interactive API documentation
  6. TC Course: Study TC Week 08 modules for LLM fundamentals


πŸ“ˆ System Requirements

Minimum Requirements

  • CPU: 4 cores
  • RAM: 8GB
  • Storage: 10GB free space
  • Network: Internet connection for LLM APIs

Recommended Requirements

  • CPU: 8+ cores
  • RAM: 16GB+
  • Storage: 50GB+ (for local models)
  • GPU: NVIDIA GPU with 8GB+ VRAM (for local LLM inference)

Production Requirements

  • CPU: 16+ cores
  • RAM: 32GB+
  • Storage: 100GB+ SSD
  • Database: PostgreSQL 15+ with replication
  • Cache: Redis 7+ cluster
  • Load Balancer: Nginx or similar

πŸ” Security Notes

⚠️ Important:

  • Never commit .env files to version control
  • Change SECRET_KEY in production
  • Use strong passwords for databases
  • Enable HTTPS in production
  • Implement rate limiting
  • Use environment-specific API keys
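For the "Change SECRET_KEY in production" item, Python's standard library can generate a suitable value with no extra dependencies:

```python
import secrets

# Generate a URL-safe secret for the SECRET_KEY setting
# (32 random bytes encode to a 43-character token)
print(secrets.token_urlsafe(32))
```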

πŸ“ License

This project is for educational and professional use by Technical Artists.


Built with ❀️ by Technical Artists, for Technical Artists

Version: 1.0.0
Last Updated: 2026-01-06

Ready to start? β†’ GETTING_STARTED.md πŸš€
