MediInsight AI

MediInsight AI is a full-stack healthcare GenAI application built with React, TypeScript, Node.js, Groq, and a local retrieval pipeline. It combines multimodal imaging support, wellness planning, appointment intake, mental wellness chat, and document question answering into one interview-ready product.

Live Deployment

MediInsight AI Home

Project Overview

The application is designed to feel like a real product rather than a basic API demo:

  • Groq powers every LLM-backed response in the backend.
  • A local RAG pipeline ingests PDF and TXT documents, chunks them, generates lightweight local embeddings, and retrieves relevant context for grounded answers.
  • The frontend exposes clean healthcare workflows with environment-driven API configuration.
  • The backend is modularized into controllers, services, routes, and RAG-specific logic for easier maintenance and extension.

Features

  • AI-assisted X-ray and imaging analysis powered by Groq vision inference
  • Personalized diet and sleep plan generation powered by Groq text inference
  • Mental wellness support chat with contextual conversation history
  • Document QA page for uploading files and asking grounded questions
  • Retrieval-augmented generation with local chunk search and citations
  • Appointment intake workflow for care coordination demos

Tech Stack

  • Frontend: React, TypeScript, React Router, Tailwind CSS
  • Backend: Node.js, Express, Multer
  • LLM Inference: Groq
  • RAG: Local text extraction, chunking, hashed embeddings, local vector persistence
  • Utilities: pdf-parse, pdf-img-convert, Lucide React

Architecture

High-level Flow

graph TD
    A[React Frontend] -->|HTTP requests| B[Express API]
    B --> C[Groq Text Models]
    B --> D[Groq Vision Model]
    B --> E[Local RAG Pipeline]
    E --> F[Persisted Vector Index JSON]

RAG Pipeline

The document QA feature uses a local retrieval pipeline:

  1. A user uploads a PDF or TXT file to POST /api/rag/upload
  2. The backend extracts text from the document
  3. The text is chunked into overlapping passages
  4. Each chunk is converted into a lightweight local embedding
  5. Chunks and embeddings are stored in a local persisted vector index at healthcare-plus/backend/data/rag-index.json
  6. The user asks a question through POST /api/rag/query
  7. The backend embeds the question, retrieves the most relevant chunks locally, and sends only the grounded context plus the question to Groq
  8. The backend returns the final answer along with chunk citations and file names
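The pipeline above can be sketched in plain JavaScript. The helper names (`chunkText`, `embed`, `retrieve`) are illustrative, not the repo's actual function names; the real logic lives under healthcare-plus/backend/src/rag/.

```javascript
// Step 3: split extracted text into overlapping passages.
function chunkText(text, size = 500, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}

// Step 4: a lightweight "hashed" embedding — bucket token hashes into a
// fixed-size count vector instead of calling a paid embedding API.
function embed(text, dim = 256) {
  const vec = new Array(dim).fill(0);
  for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (let i = 0; i < token.length; i++) {
      h = (h * 31 + token.charCodeAt(i)) >>> 0;
    }
    vec[h % dim] += 1;
  }
  return vec;
}

// Cosine similarity used by step 7's local retrieval.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Step 7: rank indexed chunks against the question embedding and keep
// only the top matches as grounded context for Groq.
function retrieve(question, index, topK = 3) {
  const q = embed(question);
  return index
    .map((entry) => ({ ...entry, score: cosine(q, entry.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

A hashed bag-of-words embedding is far weaker than a learned model, but it is deterministic, dependency-free, and good enough to demonstrate grounded retrieval locally.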

Backend Structure

healthcare-plus/backend/
├── index.js
├── .env.example
├── package.json
└── src/
    ├── config/
    ├── controllers/
    ├── middleware/
    ├── rag/
    ├── routes/
    ├── services/
    └── utils/

Environment Variables

The backend supports loading .env from either:

  • healthcare-plus/backend/.env
  • the repo root .env

Required:

  • GROQ_API_KEY

Optional:

  • PORT (default: 3001)
  • GROQ_TEXT_MODEL (default: llama-3.3-70b-versatile)
  • GROQ_VISION_MODEL (default: meta-llama/llama-4-scout-17b-16e-instruct)
  • REACT_APP_API_BASE_URL (frontend only; default: http://localhost:3001/api)
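Applying these defaults might look like the sketch below. The `resolveConfig` helper is hypothetical; the repo's actual config loading lives under healthcare-plus/backend/src/config/.

```javascript
// Hypothetical sketch: validate required variables and fall back to the
// documented defaults for the optional ones.
function resolveConfig(env) {
  if (!env.GROQ_API_KEY) {
    throw new Error("GROQ_API_KEY is required");
  }
  return {
    apiKey: env.GROQ_API_KEY,
    port: Number(env.PORT) || 3001,
    textModel: env.GROQ_TEXT_MODEL || "llama-3.3-70b-versatile",
    visionModel:
      env.GROQ_VISION_MODEL || "meta-llama/llama-4-scout-17b-16e-instruct",
  };
}
```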

Example backend .env:

GROQ_API_KEY=your_groq_api_key
PORT=3001
GROQ_TEXT_MODEL=llama-3.3-70b-versatile
GROQ_VISION_MODEL=meta-llama/llama-4-scout-17b-16e-instruct

Local Setup

Prerequisites

  • Node.js 18, 20, or 22
  • npm
  • A Groq API key

Note: Node 25.x is not recommended because pdf-img-convert, the PDF-to-image dependency used by the imaging workflow, is not compatible with it.

1. Install dependencies

Backend:

cd healthcare-plus/backend
npm install

Frontend:

cd healthcare-plus/frontend
npm install

2. Configure environment variables

Create one of the supported .env files and set GROQ_API_KEY.

3. Start the backend

cd healthcare-plus/backend
npm run dev

4. Start the frontend

cd healthcare-plus/frontend
npm start

The frontend runs on http://localhost:3000 by default and targets http://localhost:3001/api unless REACT_APP_API_BASE_URL is overridden.
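A frontend helper for that environment-driven base URL might look like the following; `apiUrl` is a hypothetical name, not necessarily what the repo uses.

```javascript
// Hypothetical sketch: build request URLs from REACT_APP_API_BASE_URL,
// falling back to the documented default when it is unset.
function apiUrl(path, base = process.env.REACT_APP_API_BASE_URL) {
  const root = (base || "http://localhost:3001/api").replace(/\/$/, "");
  return `${root}/${path.replace(/^\//, "")}`;
}
```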

API Endpoints

Endpoint                | Method | Description
/api/test               | GET    | Basic backend and Groq connectivity check
/api/analyze-image      | POST   | Analyze uploaded image or PDF imaging study
/api/health-plans       | POST   | Generate a wellness plan
/api/mental-health-chat | POST   | Get a mental wellness assistant response
/api/rag/upload         | POST   | Upload and index a PDF or TXT file
/api/rag/query          | POST   | Ask grounded questions against indexed documents

Legacy compatibility:

  • /api/HealthPlans remains available as an alias for /api/health-plans
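One way to keep such an alias is to normalize legacy paths to their canonical form before dispatch. This is an illustrative sketch, not the repo's actual routing code.

```javascript
// Hypothetical sketch: map deprecated route spellings to canonical paths.
const LEGACY_ALIASES = {
  "/api/HealthPlans": "/api/health-plans",
};

function canonicalPath(path) {
  return LEGACY_ALIASES[path] || path;
}
```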

Frontend Pages

  • / home page
  • /xray-diagnosis imaging workflow
  • /health-plans wellness plan generator
  • /appointments appointment intake flow
  • /mental-health wellness chat
  • /document-qa document question answering with citations
  • /services, /about, /contact, /privacy

How Groq Is Integrated

Groq is used for all AI features:

  • Imaging analysis uses Groq vision inference in the backend service layer
  • Wellness plan generation uses Groq text inference
  • Mental wellness chat uses Groq text inference with short-lived in-memory conversation history
  • RAG answers use Groq text inference after local retrieval selects the relevant context
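The short-lived in-memory history for the wellness chat could be kept with a simple bounded window, so each Groq request carries only recent turns. The names below (`recordTurn`, `sessions`, `MAX_TURNS`) are hypothetical.

```javascript
// Hypothetical sketch: per-session chat history capped at a fixed number
// of messages; nothing is persisted, so history vanishes on restart.
const MAX_TURNS = 10;
const sessions = new Map(); // sessionId -> [{ role, content }, ...]

function recordTurn(sessionId, role, content) {
  const history = sessions.get(sessionId) || [];
  history.push({ role, content });
  // Drop the oldest messages once the window is exceeded.
  while (history.length > MAX_TURNS) history.shift();
  sessions.set(sessionId, history);
  return history;
}
```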

The Groq integration lives in:

  • healthcare-plus/backend/src/services/groqClient.js
  • healthcare-plus/backend/src/services/groqService.js

How To Test Locally

Groq-backed feature checks

  1. Start backend and frontend
  2. Open the app in the browser
  3. Test wellness plan generation on /health-plans
  4. Test mental wellness chat on /mental-health
  5. Test document QA on /document-qa

RAG test flow

  1. Upload a PDF or TXT file in the Document QA page
  2. Confirm the success message reports indexed chunks
  3. Ask a question clearly answered in the uploaded file
  4. Verify the response includes an answer and matching citations

CLI endpoint checks

curl http://localhost:3001/api/test
curl -X POST http://localhost:3001/api/rag/query \
  -H "Content-Type: application/json" \
  -d '{"question":"What does the document recommend?"}'

Production-Readiness Improvements

  • Replaced legacy provider-specific inference setup with Groq-backed services
  • Modularized the backend into maintainable layers
  • Added centralized error handling and async route wrappers
  • Added environment-aware config loading
  • Removed hardcoded frontend API URLs
  • Added a grounded document QA workflow with persisted local retrieval state
  • Removed unused frontend server stubs and dead route files

Notes

  • The local vector store uses a lightweight embedding strategy so the entire stack runs locally without paid vector infrastructure.
  • Indexed chunks are persisted to disk for local reuse across backend restarts.
  • The appointment flow remains a polished frontend demo and does not persist data to a database.

Created by Ayan113
