This Django application includes an LLM-based chatbot and some features of a virtual charity website.
The chatbot utilizes the OpenAI API and implements Retrieval-Augmented Generation (RAG).
FAQ entry data is stored in Milvus Lite as the knowledge base, and text is vectorized using the Universal Sentence Encoder (USE).
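The retrieval flow follows conventional RAG. As a rough illustration (the database file, collection name, field names, and model choice below are assumptions for this sketch, not necessarily what the project uses):

```python
import tensorflow_hub as hub
from openai import OpenAI
from pymilvus import MilvusClient

# Load USE from a local directory (e.g. where download_use_model put it)
# and open the Milvus Lite database file. Both paths are illustrative.
encoder = hub.load("/path/to/use-model")
milvus = MilvusClient("faq.db")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(question: str) -> str:
    # 1. Vectorize the question with USE (512-dimensional embedding).
    query_vector = encoder([question]).numpy()[0].tolist()
    # 2. Retrieve the closest FAQ entries from the vector store.
    hits = milvus.search(
        collection_name="faq_entries",  # hypothetical collection name
        data=[query_vector],
        limit=3,
        output_fields=["question", "answer"],
    )
    context = "\n\n".join(
        f"Q: {hit['entity']['question']}\nA: {hit['entity']['answer']}"
        for hit in hits[0]
    )
    # 3. Generate an answer grounded in the retrieved entries.
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Answer using only these FAQ entries:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```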
A Django custom management command is defined to download USE from Kaggle.
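A minimal sketch of what such a command might look like (the module path and Kaggle model handle are placeholders; the project's actual command may differ):

```python
# e.g. <app>/management/commands/download_use_model.py (path is illustrative)
import kagglehub
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Download the Universal Sentence Encoder model from Kaggle."

    def handle(self, *args, **options):
        # kagglehub authenticates via KAGGLE_USERNAME / KAGGLE_KEY.
        # The handle below is a placeholder, not necessarily the exact one used.
        path = kagglehub.model_download(
            "google/universal-sentence-encoder/tensorFlow2/universal-sentence-encoder"
        )
        self.stdout.write(self.style.SUCCESS(f"USE model downloaded to {path}"))
```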
To run the application, the following environment variables must be set:
- `KAGGLE_USERNAME`
- `KAGGLE_KEY`
- `OPENAI_API_KEY`
- `CACHE_DIR` – specifies the directory for storing the USE model. If not set, it defaults to `~/.cache`.
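For illustration, a plausible way the application could resolve `CACHE_DIR` (the resolution logic shown is an assumption based on the description above):

```python
import os

# Assumed fallback: use ~/.cache when CACHE_DIR is not set.
cache_dir = os.environ.get("CACHE_DIR", os.path.expanduser("~/.cache"))
```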
Install dependencies:
```bash
pip install -r requirements.txt
```

Download the USE model:

```bash
python charityproject/manage.py download_use_model
```

Populate the database:
This command populates both the relational and vector databases, including sample data such as FAQ entries. Since vector representations are generated during this process, the USE encoder must already be downloaded; a sketch of the vector-side population follows the commands below.
```bash
python charityproject/manage.py migrate
python charityproject/manage.py populate_data
```
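As noted above, the vector side of this population might work roughly as follows (the database file, collection name, and sample entry are hypothetical):

```python
import tensorflow_hub as hub
from pymilvus import MilvusClient

encoder = hub.load("/path/to/use-model")  # illustrative model path
milvus = MilvusClient("faq.db")           # hypothetical Milvus Lite file

# A placeholder entry standing in for the project's sample data.
faq_entries = [
    {"question": "How can I donate?", "answer": "Use the donation page."},
]

# USE produces 512-dimensional embeddings.
milvus.create_collection(collection_name="faq_entries", dimension=512)

vectors = encoder([e["question"] for e in faq_entries]).numpy()
milvus.insert(
    collection_name="faq_entries",
    data=[
        {"id": i, "vector": vectors[i].tolist(), **faq_entries[i]}
        for i in range(len(faq_entries))
    ],
)
```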
Start the Redis server:

```bash
redis-server
```

Run the Django development server:

```bash
python charityproject/manage.py runserver
```

To run the tests:

```bash
pytest charityproject
```

To check test coverage:

```bash
pytest charityproject --cov charityproject
```

The performance of the developed chatbot was evaluated using the RAGAS framework.
The code used for the evaluation is provided in the `evaluation` folder.
See the following notebook for details:

`evaluation/ragas_evaluation_example.ipynb`
This notebook demonstrates how RAGAS was applied to assess the system’s performance on a sample query.
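For reference, a minimal RAGAS evaluation looks roughly like the following. The metric selection and column names follow the classic RAGAS API and may differ from the notebook or from newer RAGAS versions, and the sample row is a placeholder for illustration only:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# One illustrative sample; the notebook evaluates actual chatbot output.
data = {
    "question": ["How can I make a donation?"],
    "answer": ["You can donate through the donation page."],
    "contexts": [["FAQ: Donations can be made via the donation page."]],
    "ground_truth": ["Donations are made through the donation page."],
}

# RAGAS uses an LLM judge, so OPENAI_API_KEY must be set.
result = evaluate(Dataset.from_dict(data), metrics=[faithfulness, answer_relevancy])
print(result)
```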
This project is submitted for academic evaluation only. No license has been applied at this stage.