Live Preview: https://fin-x-ai-six.vercel.app/
FinInsight AI is a full-stack financial market intelligence workspace built with Next.js App Router, TailwindCSS, Framer Motion, Recharts, Zustand, and a modular AI service layer. It combines a financial chatbot, document-grounded RAG, portfolio analysis, market theme exploration, visual analytics, and voice interaction in one production-oriented web application.
This application is for educational purposes only and does not provide financial advice.
- Dashboard with overview cards, quick insights, and fintech-style glassmorphism UI
- AI assistant with chat, RAG citations, streaming-like interaction flow, and optional voice playback
- Document analysis for PDF and TXT uploads with chunking, embeddings, retrieval, and grounded answers
- Portfolio analyzer with diversification scoring, sector concentration logic, risk observations, and charts
- Market insights assistant with theme summaries, momentum data, and visual analytics
- Voice mode using browser speech recognition plus ElevenLabs-ready text-to-speech
- Dark and light themes, motion-driven transitions, and reusable modular components
- Frontend: Next.js App Router, React, TailwindCSS, Framer Motion, Recharts, Zustand
- Backend: Next.js route handlers with modular controller and service layers
- AI: Provider-agnostic LLM client supporting Groq/OpenAI-style chat completion APIs
- RAG: Local chunking, lightweight embeddings, cosine similarity retrieval, and persistent JSON vector storage via Vercel Blob with a local fallback
- Voice: Web Speech API on the client, ElevenLabs text-to-speech on the backend
```
app/
  (dashboard)/
    page.tsx
  (assistant)/
    chat/
      page.tsx
    documents/
      page.tsx
    voice/
      page.tsx
  (analysis)/
    market/
      page.tsx
    portfolio/
      page.tsx
  api/
  layout.tsx
src/
  frontend/
    components/
    store/
  backend/
    ai/
    controllers/
    modules/
    routes/
    services/
  shared/
    data/
    lib/
    types/
scripts/
  smoke-test.ts
```
- `app/`: Next.js App Router entrypoints, route groups, layouts, and HTTP route handlers
- `src/frontend/`: client-facing UI, charts, page panels, app shell, and Zustand state
- `src/backend/`: server-side AI logic, controllers, business modules, and API response helpers
- `src/shared/`: reusable types, utility helpers, environment access, and static data used across both sides
- `app/(dashboard)`: landing experience and top-level overview screens
- `app/(assistant)`: conversational and document-driven workflows such as chat, RAG, and voice
- `app/(analysis)`: deeper analytical experiences such as portfolio and market intelligence
- Route groups improve readability in the codebase without changing the public URLs
Copy `.env.example` to `.env.local` and fill in the values you want to use.
```
LLM_API_KEY=
LLM_PROVIDER=groq
LLM_MODEL=llama-3.3-70b-versatile
LLM_BASE_URL=
ELEVENLABS_API_KEY=
ELEVENLABS_VOICE_ID=
BLOB_READ_WRITE_TOKEN=
PORT=3000
NEXT_PUBLIC_API_BASE_URL=http://localhost:3000
```

- If `LLM_API_KEY` is missing, the app falls back to deterministic local responses so the UI and APIs still work during setup.
- If `ELEVENLABS_API_KEY` is missing, voice output falls back to browser speech synthesis.
- If `BLOB_READ_WRITE_TOKEN` is missing, RAG storage falls back to local temporary storage instead of persistent Vercel Blob storage.
```
npm install
npm run dev
```

Open http://localhost:3000.
- `GET /api/test-env`
- `POST /api/chat`
- `POST /api/rag/upload`
- `POST /api/rag/query`
- `POST /api/portfolio/analyze`
- `POST /api/market/insights`
- `POST /api/voice/speak`
- `src/backend/ai/llmClient.ts` centralizes provider-aware chat completion calls.
- `src/backend/ai/llmService.ts` wraps model execution with safe fallbacks.
- `src/backend/services/chatService.ts` combines direct chat reasoning with optional RAG evidence.
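As a rough sketch of what a provider-aware client can look like, the helper below builds a request for an OpenAI-compatible chat completions endpoint (the shape Groq also exposes). The function name, parameter names, and default base URLs are illustrative assumptions, not the repository's actual code:

```typescript
// Sketch of a provider-aware chat-completions request builder.
// Default base URLs follow the OpenAI-compatible API shape that Groq
// also exposes; the real llmClient.ts may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  provider: "groq" | "openai",
  model: string,
  messages: ChatMessage[],
  baseUrl?: string, // corresponds to the optional LLM_BASE_URL override
): { url: string; body: string } {
  const defaults = {
    groq: "https://api.groq.com/openai/v1",
    openai: "https://api.openai.com/v1",
  };
  const root = baseUrl ?? defaults[provider];
  return {
    url: `${root}/chat/completions`,
    body: JSON.stringify({ model, messages }),
  };
}
```

Keeping URL and payload construction in one place is what lets the rest of the backend stay provider-agnostic: swapping `LLM_PROVIDER` or `LLM_BASE_URL` changes only this layer.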
The RAG flow is:
- Upload a PDF or TXT through `POST /api/rag/upload`
- Extract text with `pdf-parse` for PDFs or UTF-8 decoding for text files
- Chunk text into overlapping segments
- Generate lightweight local embeddings
- Persist chunks and embeddings to a private Vercel Blob JSON file when `BLOB_READ_WRITE_TOKEN` is configured
- Retrieve the top-k chunks by cosine similarity
- Build grounded context for the LLM
- Return answer plus citations
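The chunking and retrieval steps above can be sketched as follows. Chunk size, overlap, and the toy hashed bag-of-words embedding are illustrative assumptions, not the repository's actual implementation:

```typescript
// Sketch of overlapping chunking and cosine-similarity retrieval.
// Sizes and the toy embedding are illustrative assumptions only.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// Toy local embedding: hash each token into a fixed-size count vector.
function embed(text: string, dims = 256): number[] {
  const vec = new Array(dims).fill(0);
  for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (let i = 0; i < token.length; i++) h = (h * 31 + token.charCodeAt(i)) >>> 0;
    vec[h % dims] += 1;
  }
  return vec;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard zero vectors
}

// Retrieve the k chunks most similar to the query.
function topK(query: string, chunks: string[], k = 3): string[] {
  const q = embed(query);
  return chunks
    .map((c) => ({ c, score: cosine(q, embed(c)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.c);
}
```

The retrieved chunks become the grounded context passed to the LLM, and their document offsets are what power the citations in the answer.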
Portfolio analysis combines rule-based heuristics and LLM summarization:
- Normalizes allocations to 100%
- Maps tickers to sector/risk profiles
- Computes weighted portfolio risk
- Detects large-position concentration
- Detects sector concentration
- Generates a diversification score
- Returns allocation, sector, and trend datasets for charts
- Adds an LLM-written educational summary
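The rule-based half of the pipeline above can be sketched like this; the thresholds (30% single position, 50% sector), field names, and the Herfindahl-style score are illustrative assumptions, not the app's actual values:

```typescript
// Sketch of the portfolio normalization and concentration heuristics.
// Thresholds and the scoring formula are illustrative assumptions.
interface Position {
  ticker: string;
  weight: number; // percentage allocation
  sector: string;
}

// Scale weights so the portfolio sums to 100%.
function normalize(positions: Position[]): Position[] {
  const total = positions.reduce((s, p) => s + p.weight, 0);
  return positions.map((p) => ({ ...p, weight: (p.weight / total) * 100 }));
}

function concentrationWarnings(positions: Position[]): string[] {
  const warnings: string[] = [];
  // Large single-position concentration (assumed 30% threshold).
  for (const p of positions) {
    if (p.weight > 30) warnings.push(`Large position: ${p.ticker} at ${p.weight.toFixed(1)}%`);
  }
  // Sector concentration (assumed 50% threshold).
  const bySector: Record<string, number> = {};
  for (const p of positions) bySector[p.sector] = (bySector[p.sector] ?? 0) + p.weight;
  for (const sector of Object.keys(bySector)) {
    if (bySector[sector] > 50) warnings.push(`Sector concentration: ${sector} at ${bySector[sector].toFixed(1)}%`);
  }
  return warnings;
}

// Herfindahl-style score: more positions and lower concentration score higher.
function diversificationScore(positions: Position[]): number {
  const herfindahl = positions.reduce((s, p) => s + (p.weight / 100) ** 2, 0);
  return Math.round((1 - herfindahl) * 100);
}
```

The numeric outputs feed the charts directly, while the warning strings are handed to the LLM as context for the educational summary.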
- Uses a structured local market theme dataset
- Ranks themes by momentum
- Produces supporting data for charts
- Generates summary, opportunities, and risks
- Uses the LLM abstraction with a deterministic fallback path
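The momentum ranking and the deterministic fallback path can be sketched as below; the `Theme` shape, function names, and fallback wiring are assumptions about how such a service might be structured, not the repository's actual code:

```typescript
// Sketch of momentum-based theme ranking plus the LLM-with-fallback
// pattern. Names and data shapes are illustrative assumptions.
interface Theme {
  name: string;
  momentum: number; // higher means stronger recent momentum
}

// Rank themes by momentum, highest first, without mutating the input.
function rankThemes(themes: Theme[], top = 3): Theme[] {
  return [...themes].sort((a, b) => b.momentum - a.momentum).slice(0, top);
}

// If the LLM call fails (e.g. no API key configured), return a
// deterministic local summary so the UI keeps working.
async function withFallback(
  callLlm: () => Promise<string>,
  fallback: string,
): Promise<string> {
  try {
    return await callLlm();
  } catch {
    return fallback;
  }
}
```

This is the same shape of fallback described in the environment-variable notes: every provider-dependent feature degrades to a deterministic local path instead of erroring out.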
- Input: Browser Web Speech API captures microphone input on the client
- Output: Backend route calls ElevenLabs when configured
- Fallback: Browser `speechSynthesis` is used when ElevenLabs is unavailable
- No secrets are hardcoded
- AI and voice providers are environment-driven
- Vector persistence uses private Vercel Blob storage in production and a local fallback for development/setup
- Route handlers use a modular controller/service layout for maintainability
- The UI uses reusable components rather than page-specific one-offs
Run the local checks:

```
npm run dev
npm run check
npm run smoke
npm run build
```

- The smoke test at `scripts/smoke-test.ts` performs real HTTP calls against the local Next.js server.
- Start the app first with `npm run dev` or `npm run start`, then run `npm run smoke`.
- The script loads `.env.local` with `@next/env` so it resolves the same values used by Next.js.
- `GET /api/test-env` confirms whether `LLM_API_KEY` and `ELEVENLABS_API_KEY` are present on the server without exposing the secrets themselves.
- Only `NEXT_PUBLIC_*` variables are exposed to client-side code; server-side route handlers can safely read `process.env.LLM_API_KEY` and `process.env.ELEVENLABS_API_KEY`.
- Replace local embeddings with hosted embeddings for higher retrieval quality
- Add portfolio performance import from brokerage exports
- Add streaming responses through Server-Sent Events
- Add authentication and per-user document namespaces
- Swap the local vector store for PostgreSQL + pgvector or a hosted vector DB
Developed and maintained by Ayan113.


