A comprehensive laptop comparison and recommendation system that combines AI-powered insights, specification analysis, and intelligent recommendations drawn from user reviews and technical specifications.
Demo Screen Recording: Click Here
Database Schema Documentation: Click Here
API Documentation: Click Here
Generated Dataset: Click Here
- Frontend: Next.js React application with TypeScript
- Backend API: FastAPI with PostgreSQL database
- AI Service: LangGraph agent with vector search capabilities
- Database: PostgreSQL with ChromaDB for vector storage
- Deployment: Docker containerized services
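The architecture above maps onto Docker Compose services roughly as follows. This is an illustrative sketch, not the project's actual `docker-compose.yml`; the container names match those used later in this README, but the image and build details are assumptions:

```yaml
services:
  db:
    image: postgres:15            # assumed version; see docker-compose.yml for the real one
    container_name: laptop_db
    ports: ["5432:5432"]
  backend:
    build: ./backend              # FastAPI + PostgreSQL access
    container_name: laptop_backend
    ports: ["8000:8000"]
    depends_on: [db]
  ai:
    build: ./ai_services          # LangGraph agent + ChromaDB vector search
    container_name: laptop_ai
    ports: ["8001:8001"]
    depends_on: [db]
  frontend:
    build: ./frontend             # Next.js UI
    container_name: laptop_frontend
    ports: ["3000:3000"]
```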
- Laptop Catalog: Compare four professional laptops (Lenovo ThinkPad E14 Gen 5 and HP ProBook models)
- AI-Powered Chat: Natural language queries about laptops
- Smart Recommendations: Personalized suggestions based on requirements
- Price Tracking: Historical price trends and availability monitoring (not yet implemented)
- Review Intelligence: Semantic search through user reviews and Q&A
- Detailed Specifications: Technical specs extracted from official PDFs
For a quick demo using pre-populated data:
- Docker and Docker Compose installed
- Git installed
- OpenAI API key
```bash
git clone https://github.com/HimashaRandil/laptop-intelligence-engine.git
cd laptop-intelligence-engine
```

Create a `.env` file in the project root:
```env
# Database Configuration
DB_USER=myuser
DB_PASSWORD=mysecretpassword
DB_NAME=laptops_db

# AI Configuration
OPENAI_API_KEY=your-actual-openai-key-here
DEFAULT_MODEL=gpt-4
TEMPERATURE=0.1
```

```bash
# Build and start all services
docker-compose up --build

# Or run in background
docker-compose up --build -d
```

The services are then available at:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000/docs
- AI Service: http://localhost:8001/ai/docs
- Database: localhost:5432
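The `.env` values above are consumed by the backend and AI services. As a rough illustration only (a hypothetical sketch, not the project's actual configuration code), they might be loaded like this:

```python
import os

def load_settings() -> dict:
    """Assemble service settings from environment variables (illustrative only)."""
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]
    name = os.environ["DB_NAME"]
    return {
        # Same DSN shape as the DATABASE_URL used in the manual setup
        "database_url": f"postgresql://{user}:{password}@localhost:5432/{name}",
        "openai_api_key": os.environ["OPENAI_API_KEY"],
        "model": os.environ.get("DEFAULT_MODEL", "gpt-4"),
        "temperature": float(os.environ.get("TEMPERATURE", "0.1")),
    }
```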
```bash
curl -X POST "http://localhost:8001/ai/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "What laptops are under $1000?"}'
```

To run the services manually without Docker:

```bash
# Install dependencies
pip install -e .

# Set environment variables
export DATABASE_URL="postgresql://myuser:mysecretpassword@localhost:5432/laptops_db"
export OPENAI_API_KEY="your-key-here"

# Start backend
python -m backend.src.app.main

# Start AI service
python -m ai_services.src.main
```

```bash
cd frontend

# Install dependencies
npm install

# Create .env.local
echo "NEXT_PUBLIC_API_URL=http://localhost:8000" > .env.local
echo "NEXT_PUBLIC_AI_API_URL=http://localhost:8001" >> .env.local

# Start frontend
npm run dev
```

The docker-compose setup automatically loads a database backup with sample data.
To rebuild all data from scratch:

```bash
# 1. Start the database
docker-compose up db -d

# 2. Run PDF extraction
python -m scripts.ingest_data
python -m scripts.structure_data      # LLM structures raw data into predefined formats
python -m scripts.consolidated_specs  # Preprocessing on the database

# 3. Run web scraping (most of the scrapers do not work due to anti-bot measures)
# Running these scripts is strongly discouraged, as it may violate the product websites' ToS
python -m scripts.lenovo_scraper
python -m scripts.integrated_scraper

# 3a. Instead, run the sample data loader
python -m scripts.database_reset_script
python -m scripts.sample_data_loader

# 4. Index the vector database
python backend/scripts/index_vector_data.py
```

- `GET /laptops` - List all laptops
- `GET /laptops/{id}` - Detailed laptop information
- `GET /laptops/{id}/specifications` - Laptop specifications
- `GET /laptops/{id}/reviews` - Customer reviews
- `GET /laptops/{id}/questions` - Q&A data
- `GET /laptops/compare?ids=1,2,3` - Compare multiple laptops
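The compare endpoint takes a comma-separated `ids` query parameter. A small helper for building that URL from Python might look like this (an illustrative sketch; the base URL and endpoint path are taken from this README):

```python
BASE_URL = "http://localhost:8000"

def compare_url(laptop_ids: list[int]) -> str:
    """Build the URL for GET /laptops/compare?ids=1,2,3."""
    ids = ",".join(str(i) for i in laptop_ids)
    return f"{BASE_URL}/laptops/compare?ids={ids}"
```

For example, `compare_url([1, 2, 3])` yields `http://localhost:8000/laptops/compare?ids=1,2,3`, which can then be fetched with `curl` or `requests`.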
- `POST /ai/chat` - General chat with the AI assistant
- `POST /ai/recommend` - Get laptop recommendations
- `GET /ai/health` - Service health check
```bash
# Budget queries
curl -X POST "http://localhost:8001/ai/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Show me laptops under $1200"}'

# Specification queries
curl -X POST "http://localhost:8001/ai/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "I need a laptop with Intel i7 processor"}'

# Experience-based queries
curl -X POST "http://localhost:8001/ai/recommend" \
  -H "Content-Type: application/json" \
  -d '{"message": "Best laptop for programming work"}'
```

- Natural Language Processing: Understands complex queries about laptop features
- Vector Search: Semantic search through reviews and Q&A content
- Contextual Recommendations: Suggestions based on use cases and preferences
- Multi-tool Orchestration: Combines database queries with vector search
- Citation Support: Responses include references to source data (not implemented in the UI)
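The actual agent is built with LangGraph. Purely as a simplified illustration of the multi-tool idea (hypothetical code, not the project's implementation), routing a query between a structured database lookup and semantic search over reviews might look like:

```python
def pick_tool(message: str) -> str:
    """Naive keyword router (illustrative only): experience questions go to
    vector search over reviews/Q&A; everything else goes to the database."""
    experience_words = {"review", "reviews", "opinion", "experience", "reliable"}
    words = set(message.lower().split())
    if words & experience_words:
        return "vector_search"    # semantic search through reviews and Q&A
    return "database_query"       # price/spec filters against PostgreSQL
```

The real agent combines both tools in one graph rather than picking a single one, but the sketch shows why structured and semantic retrieval are kept separate.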
```bash
# Build and start all services
docker-compose up --build

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```

- Launch an EC2 instance (t3.medium recommended)
- Install Docker:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose
sudo usermod -aG docker $USER
```

- Clone and deploy:

```bash
git clone https://github.com/your-username/laptop-intelligence-engine.git
cd laptop-intelligence-engine
# Create a .env file with your configuration
docker-compose up --build -d
```

- Configure security groups to allow ports 3000, 8000, and 8001
```
laptop-intelligence-engine/
├── backend/                 # FastAPI backend service
│   ├── src/app/
│   │   ├── models/          # SQLAlchemy database models
│   │   ├── schemas/         # Pydantic schemas
│   │   ├── core/            # Database configuration
│   │   └── main.py          # FastAPI application
│   ├── scripts/             # Vector indexing script
│   └── Dockerfile
├── ai_services/             # AI service with LangGraph
│   ├── src/
│   │   ├── services/        # AI agent and tools
│   │   ├── core/            # Configuration and database
│   │   └── main.py          # FastAPI AI service
│   └── Dockerfile
├── frontend/                # Next.js React frontend
│   ├── src/components/      # React components
│   ├── lib/                 # API utilities and types
│   └── Dockerfile
├── data/                    # Data storage
│   ├── processed/           # Database backups
│   └── vector_db/           # ChromaDB storage
├── docker-compose.yml       # Service orchestration
└── pyproject.toml           # Python dependencies
```
- CORS Errors: Ensure frontend environment variables point to correct backend URLs
- Database Connection: Verify PostgreSQL is running and credentials are correct
- AI Service Startup: Check OpenAI API key is set and vector database initializes
- Port Conflicts: Ensure ports 3000, 8000, 8001, 5432 are available
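For the port-conflict case, a quick way to check whether a port is already taken (a small helper sketch, not part of the project):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0

# Example: check the ports this project needs before running docker-compose
# for port in (3000, 8000, 8001, 5432):
#     print(port, "in use" if port_in_use(port) else "free")
```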
```bash
# Check service health
curl http://localhost:8000/health
curl http://localhost:8001/ai/health

# View container logs
docker logs laptop_backend
docker logs laptop_ai
docker logs laptop_frontend

# Check database
docker exec laptop_db psql -U myuser -d laptops_db -c "SELECT COUNT(*) FROM laptops;"
```

- Memory: AI service requires ~2GB RAM for vector operations
- Storage: Vector database needs ~500MB for indexed reviews
- CPU: LLM calls may take 3-10 seconds depending on complexity
```bash
# Test backend endpoints
curl http://localhost:8000/laptops
curl http://localhost:8000/laptops/1

# Test AI service
curl -X POST "http://localhost:8001/ai/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'
```

```bash
cd frontend
npm run test
npm run build  # Test production build
```

All services include health check endpoints:
- Backend: `GET /health`
- AI Service: `GET /ai/health`
- Database: built into Docker Compose

Services log to:
- Console output (viewable with `docker-compose logs`)
- Local files in the `./logs/` directory
- Store API keys in environment variables, never in code
- Use strong database passwords
- Configure proper CORS origins for production
- Keep dependencies up to date (e.g., `pip install --upgrade` and `npm audit`)
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Check the troubleshooting section above
- Review container logs for error details
- Ensure all environment variables are set correctly
- Verify Docker and dependencies are properly installed
This project demonstrates:
- Data Pipeline: PDF extraction, LLM-based structuring, and data loading
- Database Design: Comprehensive schema with relationships
- API Development: RESTful endpoints with documentation
- AI Integration: LLM-powered chat and recommendations
- Frontend Development: Interactive user interface
- DevOps: Containerization and cloud deployment
- Documentation: Comprehensive setup and usage guides