An enterprise-grade, full-stack AI travel planner that provides personalized, data-driven itineraries for Lucknow, India. This project showcases a decoupled, production-ready architecture, combining a FastAPI backend with a Streamlit frontend. It leverages an advanced agentic RAG system to deliver accurate, context-aware responses by integrating a local knowledge base with live, external APIs.
Note: This application is live and deployed on Render, demonstrating a complete end-to-end development cycle: from data curation and system design to quantitative evaluation and cloud deployment.
CLICK HERE TO ACCESS THE LIVE APPLICATION
- Decoupled Frontend/Backend Architecture: A scalable and maintainable client-server model, with a Streamlit UI making API calls to a robust FastAPI backend.
- Advanced Agentic Logic: The core of the application is a LangChain agent that can reason, make decisions, and intelligently choose between multiple tools to best answer a user's query.
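The decision step can be pictured with a drastically simplified sketch. In the real application, a LangChain agent lets the LLM reason about which tool to invoke; the keyword routing and function names below are purely illustrative stand-ins:

```python
# Illustrative sketch of the agent's tool-selection step, NOT the actual
# LangChain implementation (which lets the LLM choose the tool).
# All function and tool names here are hypothetical.

def lookup_knowledge_base(query: str) -> str:
    # Stand-in for the RAG tool backed by the ChromaDB vector store.
    return f"[RAG] background on: {query}"

def fetch_weather(query: str) -> str:
    # Stand-in for the Open-Meteo weather tool.
    return "[Weather] current conditions in Lucknow"

TOOLS = {
    "weather": (("weather", "rain", "temperature", "forecast"), fetch_weather),
    "rag": (("history", "food", "cuisine", "kebab", "monument"), lookup_knowledge_base),
}

def route_query(query: str) -> str:
    """Pick a tool by keyword; the real agent reasons with the LLM instead."""
    q = query.lower()
    for keywords, tool in TOOLS.values():
        if any(k in q for k in keywords):
            return tool(query)
    return lookup_knowledge_base(query)  # default to the knowledge base
```

A real agent would also be able to chain tools (e.g., weather first, then the knowledge base) within a single query.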
- High-Fidelity RAG System: Provides factually grounded answers by retrieving information from a curated knowledge base on Lucknow's history and cuisine, stored in a `ChromaDB` vector store.
- Live External API Integration: The agent can call the Open-Meteo API in real time to fetch current weather data and incorporate it into travel advice.
- Quantitative Performance Evaluation: Includes a dedicated evaluation suite using the Ragas framework to rigorously test and validate the RAG pipeline's performance, ensuring high factual consistency and relevancy.
- Blazing-Fast Inference: Powered by Groq's high-speed LPU™ Inference Engine running Meta's efficient `llama-4` model.
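The weather integration can be sketched as follows. The Open-Meteo forecast endpoint is real and key-free, but the coordinates, function names, and advice thresholds are assumptions for illustration; the actual tool lives in `agent_logic.py`:

```python
import json
import urllib.request

# Lucknow's approximate coordinates (illustrative; the real tool may differ).
LUCKNOW_LAT, LUCKNOW_LON = 26.85, 80.95

def fetch_current_weather(lat: float, lon: float) -> dict:
    """Call the free Open-Meteo forecast endpoint (no API key required)."""
    url = (
        "https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}&current_weather=true"
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["current_weather"]

def weather_advice(current: dict) -> str:
    """Turn a raw reading into a sentence the agent can fold into an itinerary."""
    temp = current["temperature"]  # degrees Celsius
    if temp >= 35:
        tip = "plan indoor stops (museums, eateries) for the afternoon"
    elif temp <= 10:
        tip = "carry a warm layer for evening walks"
    else:
        tip = "conditions are comfortable for sightseeing on foot"
    return f"It is currently {temp}°C in Lucknow; {tip}."
```

Example use: `weather_advice(fetch_current_weather(LUCKNOW_LAT, LUCKNOW_LON))` returns a one-line summary the agent can weave into its answer.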
To ensure the reliability of the system, a comprehensive evaluation was performed on the RAG pipeline. The following metrics validate the quality of the generated answers against a ground-truth dataset.
| Metric | Score (0.0 to 1.0) | Description |
|---|---|---|
| `faithfulness` | 1.00 | Measures how factually consistent the generated answer is with the retrieved context. A score of 1.0 means no hallucinations. |
| `context_recall` | 1.00 | Measures the retriever's ability to find all the necessary information in the knowledge base. |
| `context_precision` | 0.92 | Measures the signal-to-noise ratio of the retrieved context. High precision means less irrelevant information. |
Conclusion: The high scores, especially in faithfulness and recall, quantitatively prove that the RAG system provides accurate, reliable, and contextually rich answers.
This project employs a modern, decoupled architecture, which is the industry standard for scalable web applications. The frontend is completely separate from the backend, communicating via a REST API.
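As a concrete illustration of that client-server split, the Streamlit frontend might call the backend roughly like this. The `/query` route comes from the diagram below, but the payload and response field names (`"query"`, `"answer"`) are assumptions; check `main.py` for the actual schema:

```python
import json
import urllib.request

# Sketch of the frontend-to-backend call; field names are assumptions.
BACKEND_URL = "http://localhost:8000"  # overridden by an env var in deployment

def ask_backend(question: str, base_url: str = BACKEND_URL) -> dict:
    """POST the user's question to the FastAPI /query endpoint."""
    req = urllib.request.Request(
        f"{base_url}/query",
        data=json.dumps({"query": question}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def extract_answer(response: dict) -> str:
    """Pull the display text out of the backend's JSON response."""
    return response.get("answer", "Sorry, no answer was returned.")
```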

+---------------------------+ +--------------------------------+
| Frontend (Client) | | Backend (Server) |
| (Streamlit on Port 8501) | | (FastAPI on Port 8000) |
+---------------------------+ +--------------------------------+
| | | |
| - Renders UI | | - Exposes REST API (/query) |
| - Captures User Input | | - Contains Agentic Logic |
| - Displays Chat History | | - Manages Tools (RAG, Weather)|
| | | |
| | HTTP | |
| User Query | -------> | Agent Processing |
| (e.g., "Plan trip") | | |
| | | |
| Final Answer | <------- | Structured Response |
| (Formatted Itinerary) | | |
| | | |
+---------------------------+              +--------------------------------+

| Category | Technology / Service |
|---|---|
| Cloud Deployment | Render, Docker, Docker Compose |
| Frontend | Streamlit |
| Backend | FastAPI, Uvicorn |
| LLM & Agent | LangChain, Groq (Meta llama-4-maverick) |
| Vector DB | ChromaDB (Local) |
| Embeddings | Hugging Face Inference API (BAAI/bge-small-en-v1.5) |
| Evaluation | Ragas (for quantitative metrics) |
| External API | Open-Meteo (Weather) |
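To show what "vector search" in the retrieval stack means in practice, here is a toy illustration of similarity-based ranking. The real pipeline embeds text with `BAAI/bge-small-en-v1.5` and lets ChromaDB do the nearest-neighbour search; the tiny vectors below are made up for the example:

```python
import math

# Toy illustration of similarity-based retrieval; embeddings are invented.
# The real system uses 384-dim BAAI/bge-small-en-v1.5 vectors via ChromaDB.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_chunk(query_vec: list[float], chunks: list[dict]) -> dict:
    """Return the chunk whose embedding is closest to the query embedding."""
    return max(chunks, key=lambda c: cosine_similarity(query_vec, c["embedding"]))

chunks = [
    {"text": "Tunday kebabs are a Lucknow delicacy.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Bara Imambara was built in 1784.", "embedding": [0.1, 0.9, 0.2]},
]
```

A food-related query vector such as `[0.8, 0.2, 0.1]` would rank the kebab chunk first; ChromaDB performs the same idea at scale over the knowledge-base documents.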
This project is fully containerized and configured for both cloud and local execution.
This project is configured for "Infrastructure-as-Code" deployment using the render.yaml file.
- How it Works: The `render.yaml` file defines two Web Service instances (`lucknow-guide-backend` and `lucknow-guide-frontend`).
- Secrets: It uses a Render Environment Group named `lucknow-guide-secrets` to securely manage the API keys.
- Networking: It sets the `BACKEND_URL` environment variable on the frontend to the backend's public URL, so the two services can reach each other without extra configuration.
- Auto-Deploy: Both the backend and frontend auto-deploy on every push.
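Condensed, the Blueprint described above might look roughly like this. The service and environment-group names come from this README, but the remaining field values are illustrative guesses; the repository's `render.yaml` is the authoritative version:

```yaml
# Illustrative excerpt only -- see the repository's render.yaml for the real file.
services:
  - type: web
    name: lucknow-guide-backend
    env: docker
    dockerfilePath: ./backend/Dockerfile
    envVars:
      - fromGroup: lucknow-guide-secrets
  - type: web
    name: lucknow-guide-frontend
    env: docker
    dockerfilePath: ./frontend/Dockerfile
    envVars:
      - key: BACKEND_URL
        value: https://lucknow-guide-backend.onrender.com  # assumed URL
```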
- Create the `backend/.env` file with your `GROQ_API_KEY` and `HUGGINGFACEHUB_API_TOKEN`.
- From the project root, run `docker-compose up --build`.
- Access the frontend at `http://localhost:8501`.
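For orientation, the compose file the steps above rely on has roughly this shape (the repository's `docker-compose.yml` is authoritative; ports are taken from this README, everything else is an illustrative assumption):

```yaml
# backend/.env (create locally; never commit):
#   GROQ_API_KEY=...
#   HUGGINGFACEHUB_API_TOKEN=...
#
# Illustrative docker-compose.yml sketch:
services:
  backend:
    build: ./backend
    env_file: ./backend/.env
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    environment:
      - BACKEND_URL=http://backend:8000  # service-name DNS inside the compose network
    ports:
      - "8501:8501"
    depends_on:
      - backend
```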
- Start the Backend (Terminal 1): `cd advanced-lucknow-guide/backend`, then `python -m venv venv`, `venv\Scripts\activate`, and `pip install -r requirements.txt`. Finally, start the server with `uvicorn main:app --reload`.
Keep this terminal open.
- Start the Frontend (Terminal 2): `cd advanced-lucknow-guide/frontend`, then `python -m venv venv`, `venv\Scripts\activate`, and `pip install -r requirements.txt`. Finally, launch the UI with `streamlit run app.py`.
A new tab will open in your browser at http://localhost:8501.
📁 advanced-lucknow-guide/
│
├── 📁 backend/
│ ├── 📁 knowledge_base/
│ │ ├── 📄 lucknow_food.txt
│ │ └── 📄 lucknow_history.txt
│ ├── 📄 .env # (Must be created locally)
│ ├── 📄 agent_logic.py # Core AI agent and tool logic
│ ├── 📄 main.py # FastAPI server
│ ├── 📄 requirements.txt # Backend dependencies
│ ├── 📄 evaluation.py # Ragas evaluation script
│ └── 🐳 Dockerfile # Backend Docker instructions
│
├── 📁 frontend/
│ ├── 📄 app.py # Streamlit UI
│ ├── 📄 requirements.txt # Frontend dependencies
│ └── 🐳 Dockerfile # Frontend Docker instructions
│
├── 🐳 docker-compose.yml # Local orchestration
├── ☁️ render.yaml # Cloud orchestration (Render Blueprint)
└── 📖 README.md # This file