This project implements a Retrieval-Augmented Generation (RAG) system for document-based question answering.

Key features:
- Document ingestion and semantic retrieval
- MVC architecture for clean separation of concerns
- Modular support for multiple LLM providers
- Multilingual support: Arabic, English, French
- Embedding and generation using OpenAI and Cohere
- Easily extensible to add new models or providers
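Provider modularity like the above is often achieved with a small abstract interface that each backend implements. The sketch below is illustrative only: the class and registry names (`LLMProvider`, `EchoProvider`, `get_provider`) are assumptions, not this project's actual API, and the toy character-frequency embedding stands in for real OpenAI or Cohere calls.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface each provider implements (names are illustrative)."""

    @abstractmethod
    def embed(self, text: str) -> list[float]: ...

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in provider for local testing; a real OpenAIProvider or
    CohereProvider would call the vendor SDK in these methods."""

    def embed(self, text: str) -> list[float]:
        # Toy embedding: a character-frequency vector over a-z.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def generate(self, prompt: str) -> str:
        return f"Answer based on: {prompt[:40]}"


PROVIDERS: dict[str, type[LLMProvider]] = {"echo": EchoProvider}


def get_provider(name: str) -> LLMProvider:
    # Registry lookup: adding a new provider means one new entry here.
    return PROVIDERS[name]()
```

With this shape, swapping models or vendors is a one-line config change rather than an edit to the retrieval or generation code.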
Ask natural-language questions over your documents and receive accurate, context-aware answers powered by retrieval and generation.
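The retrieve-then-generate flow can be sketched as follows. This is a minimal, self-contained illustration under stated assumptions: the `embed` function is a toy character-frequency vector standing in for real OpenAI/Cohere embeddings, and all function names (`retrieve`, `build_prompt`) are hypothetical, not the project's actual API.

```python
import math


def embed(text: str) -> list[float]:
    # Toy embedding (character frequencies over a-z); a real system
    # would call an OpenAI or Cohere embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the question embedding.
    qv = embed(question)
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]


def build_prompt(question: str, docs: list[str]) -> str:
    # The retrieved chunks become the context an LLM answers from.
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The final prompt would then be passed to a generation model; the answer is grounded in the retrieved context rather than the model's parametric memory alone.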