AI-powered applicant tracking system demonstrating production backend patterns for distributed job processing, external integrations, and type-safe domain modeling.
| Layer | Technologies |
|---|---|
| Frontend | React 19, TanStack Query, Shadcn/ui, Tailwind CSS 4, Framer Motion |
| Backend | Next.js 16 App Router, Drizzle ORM, BullMQ, Zod |
| AI | OpenAI GPT-4, Vercel AI SDK |
| Database | PostgreSQL (Supabase), Redis |
| Auth & Storage | Supabase Auth, Supabase Storage (signed URLs) |
| Email | Resend (transactional emails) |
| Payments | Polar.sh (subscriptions + webhooks) |
| Infrastructure | Google Cloud Run, Docker, Cloud Build |
| DevOps | Husky (git hooks), Pino (structured logging), TypeScript strict mode |
| Analytics | Microsoft Clarity |
```
┌─────────────────────────────────────────────────────────────────┐
│                          Client Layer                           │
│  └─ Next.js App Router (SSR + Server Actions)                   │
└────────────────────────────┬────────────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────────────┐
│                   API Layer (Route Handlers)                    │
│  ├─ /api/jobs              │ CRUD + filtering                   │
│  ├─ /api/candidates        │ Resume upload & parsing            │
│  ├─ /api/interviews        │ State machine transitions          │
│  └─ /api/assessment/:token │ Public endpoint (token auth)       │
└────────────────────────────┬────────────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────────────┐
│                 Service Layer (Business Logic)                  │
│  ├─ InterviewService │ Question generation, evaluation          │
│  ├─ ResumeService    │ AI parsing & metadata extraction         │
│  └─ State Machine    │ Interview lifecycle enforcement          │
└────────────────────────────┬────────────────────────────────────┘
                             │
          ┌──────────────────┴─────────────────────┐
          │                                        │
   ┌──────▼──────────┐                 ┌───────────▼───────────┐
   │   PostgreSQL    │                 │    Redis + BullMQ     │
   │  (Drizzle ORM)  │                 │  ├─ Email Queue       │
   │                 │                 │  ├─ Evaluation Queue  │
   │  • 7 tables     │                 │  └─ Reminder Queue    │
   │  • FK cascades  │                 │                       │
   │  • JSONB fields │                 │  Separate Worker      │
   └─────────────────┘                 │  Process (4 workers)  │
                                       └───────────────────────┘
```
| Problem | Solution | Result |
|---|---|---|
| Batch ops block API (30s+) | Queue to Redis, return immediately | API responses <200ms |
| Failed jobs lose data | Exponential backoff retries | 3 retry attempts with persistence |
| Workers affect API stability | Separate process (worker.ts) | Independent scaling & crash isolation |
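The idempotent enqueue pattern shown later in this README relies on a deterministic `hash(data)` job ID. A possible sketch of that helper (the name `jobIdFor` and the SHA-256 choice are assumptions, not code from the repo):

```typescript
import { createHash } from 'node:crypto';

// Deterministic job ID: the same payload always hashes to the same ID,
// so BullMQ treats a repeated enqueue as a duplicate and skips it.
function jobIdFor(payload: unknown): string {
  // JSON.stringify is key-order-sensitive; this assumes payloads are
  // built with a stable key order (or swap in a canonical-JSON helper).
  return createHash('sha256').update(JSON.stringify(payload)).digest('hex');
}
```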
The interview lifecycle is enforced by a finite state automaton that rejects invalid transitions:

```ts
// src/lib/domain/interview-state-machine.ts
type State = 'pending' | 'in_progress' | 'completed' | 'evaluated';
type Event = 'start' | 'submit' | 'evaluate';

const transitions: Record<State, Partial<Record<Event, State>>> = {
  pending: { start: 'in_progress' },
  in_progress: { submit: 'completed' },
  completed: { evaluate: 'evaluated' },
  evaluated: {}, // Terminal state
};
```

Why: state bugs cause real problems (duplicate emails, lost data). The FSM makes illegal states unrepresentable.
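Illegal moves can then be rejected with a small guard over the table. A sketch (the `transition` helper is an assumption, not code from the repo; the table is repeated to keep the example self-contained):

```typescript
type State = 'pending' | 'in_progress' | 'completed' | 'evaluated';
type Event = 'start' | 'submit' | 'evaluate';

const transitions: Record<State, Partial<Record<Event, State>>> = {
  pending: { start: 'in_progress' },
  in_progress: { submit: 'completed' },
  completed: { evaluate: 'evaluated' },
  evaluated: {}, // terminal state: no outgoing edges
};

// Look up the next state; any (state, event) pair missing from the
// table is an illegal transition and throws instead of corrupting state.
function transition(state: State, event: Event): State {
  const next = transitions[state][event];
  if (!next) {
    throw new Error(`Illegal transition: '${event}' from '${state}'`);
  }
  return next;
}
```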
All third-party SDK calls are wrapped with:
- ✅ Exponential backoff (3 attempts, 5-10s delays)
- ✅ Rate limit detection
- ✅ 30s timeout handling
- ✅ Error normalization
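A minimal sketch of such a wrapper, assuming the `retry(fn, options)` shape used elsewhere in this README (defaults and option names here are assumptions; the real implementation may differ):

```typescript
type RetryOptions = {
  attempts?: number;                  // total tries (assumed default: 3)
  backoff?: 'exponential' | 'fixed';  // delay growth strategy
  baseDelayMs?: number;               // first delay (assumed default: 5000ms)
};

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry an async operation with exponential backoff: 5s, 10s, 20s, ...
// The last error is rethrown once all attempts are exhausted.
async function retry<T>(
  fn: () => Promise<T>,
  { attempts = 3, backoff = 'exponential', baseDelayMs = 5000 }: RetryOptions = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        const delay =
          backoff === 'exponential' ? baseDelayMs * 2 ** attempt : baseDelayMs;
        await sleep(delay);
      }
    }
  }
  throw lastError;
}
```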
```ts
// Structured error handling with Zod validation
const result = await retry(
  () => openai.chat.completions.create({...}),
  { attempts: 3, backoff: 'exponential' }
);
return ParsedResumeSchema.parse(JSON.parse(result.content));
```

```ts
// Idempotent job creation (no duplicates)
await emailQueue.add('interview-invite', data, {
  jobId: hash(data), // Same input = same ID
  attempts: 3,
  backoff: { type: 'exponential', delay: 5000 },
});
```

Full TypeScript inference from schema to queries:
```ts
const result = await db.query.candidates.findMany({
  with: {
    interviews: { where: eq(interviews.status, 'completed') },
  },
});
// Type: Candidate & { interviews: Interview[] }
```

```
┌──────────────────────────────────────────────┐
│            Cloud Run (Singapore)             │
│  ├─ App Container (Next.js)                  │
│  └─ Worker Container (BullMQ)                │
└───────────────┬──────────────────────────────┘
                │
    ┌───────────┴───────────┐
    │                       │
    ▼                       ▼
PostgreSQL              Redis VM
(Supabase)          (VPC Private IP)
```
Why this setup:
- Cloud Run scales to zero (cost-efficient)
- Redis VM stays up 24/7 (queue persistence)
- VPC tunnel = Redis not exposed to internet
- Singapore region for low latency to Asia
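Because Cloud Run stops instances by sending SIGTERM, the worker container should drain in-flight jobs before exiting. A hypothetical sketch of such a shutdown hook (none of these names are from the repo; BullMQ's `Worker#close()` waits for active jobs by default):

```typescript
// Anything with an async close() — e.g. a BullMQ Worker — can be drained.
type Closeable = { close: () => Promise<void> };

// Register SIGTERM/SIGINT handlers that close all workers exactly once.
// Returns the shutdown function so it can also be invoked manually.
function registerGracefulShutdown(workers: Closeable[]): () => Promise<void> {
  let shuttingDown = false;
  const shutdown = async () => {
    if (shuttingDown) return; // ignore repeated signals
    shuttingDown = true;
    await Promise.all(workers.map((w) => w.close()));
  };
  process.once('SIGTERM', () => { shutdown().then(() => process.exit(0)); });
  process.once('SIGINT', () => { shutdown().then(() => process.exit(0)); });
  return shutdown;
}
```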
```bash
# Clone & install
git clone https://github.com/knileshh/hireneo-ai.git
cd hireneo-ai && npm install

# Environment
cp .env.example .env  # Fill in values

# Database
npm run db:push

# Run (2 terminals)
npm run dev     # API server (port 3000)
npm run worker  # Background jobs
```

```
DATABASE_URL="postgresql://..."
UPSTASH_REDIS_URL="redis://..."
OPENAI_API_KEY="sk-..."
RESEND_API_KEY="re_..."
NEXT_PUBLIC_SUPABASE_URL="https://..."
NEXT_PUBLIC_SUPABASE_ANON_KEY="..."
```

```
src/
├── app/
│   ├── api/                 # REST endpoints
│   ├── dashboard/           # Protected UI
│   └── assessment/[token]   # Public assessment page
│
├── lib/
│   ├── services/      # Business logic (InterviewService, ResumeService)
│   ├── domain/        # State machine
│   ├── integrations/  # OpenAI, Resend, Polar, Supabase wrappers
│   ├── queue/         # BullMQ queues + 4 workers
│   └── db/            # Drizzle schema + migrations
│
└── scripts/
    └── worker.ts      # Worker process entrypoint
```
MIT © Nilesh Kumar • hey@knileshh.com