GPTClone is a FastAPI-based chat application that uses LangChain for conversation memory and OpenAI's language models to provide intelligent responses. A minimal sketch of the request flow follows the feature list below.
- FastAPI backend for efficient API handling
- LangChain integration for improved conversation memory
- OpenAI's GPT model for generating responses
- In-memory chat history storage
- Markdown formatting for better response presentation
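Conceptually, each incoming message is appended to an in-memory history, the accumulated conversation is sent to the OpenAI model through LangChain, and the reply is stored back and returned to the browser. The sketch below only illustrates that flow; the endpoint name, request schema, and direct use of `langchain_openai.ChatOpenAI` are illustrative assumptions, not the repository's exact code (see `app/main.py` for the real routes).

```python
# Illustrative sketch of the chat flow: FastAPI endpoint + in-memory history
# + a LangChain chat model. Names and structure are assumptions, not repo code.
from typing import Dict, List

from fastapi import FastAPI
from pydantic import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage

app = FastAPI()
llm = ChatOpenAI(model="gpt-4o-mini")  # the real app reads the model name from .env

# In-memory chat history keyed by session id; lost when the server restarts.
histories: Dict[str, List[BaseMessage]] = {}

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    history = histories.setdefault(req.session_id, [])
    history.append(HumanMessage(content=req.message))
    reply = llm.invoke(history)            # send the whole conversation so far
    history.append(AIMessage(content=reply.content))
    return {"response": reply.content}     # frontend renders this as Markdown
```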
- Python 3.7+
- OpenAI API key
- Clone the repository:

  ```bash
  git clone https://github.com/marketcalls/gptclone.git
  cd gptclone
  ```
- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```
- Install the required packages:

  ```bash
  pip install -r requirements.txt
  ```
- Create a `.env` file in the project root:

  ```bash
  cp .sample.env .env
  ```

  Then, edit the `.env` file and replace the placeholder values with your actual OpenAI API key and desired model:

  ```
  OPENAI_API_KEY=your_api_key_here
  OPENAI_MODEL=gpt-4o-mini
  ```
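The application needs these values at startup. A common way to load them (an assumption about how this project does it; the actual code in `app/` may use a different mechanism) is `python-dotenv`:

```python
# Typical .env loading pattern; the exact code in app/main.py may differ.
import os

from dotenv import load_dotenv  # provided by the python-dotenv package

load_dotenv()  # copies key=value pairs from .env into the process environment

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")

if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is not set; copy .sample.env to .env and fill it in.")
```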
- Start the FastAPI server:

  ```bash
  uvicorn app.main:app --reload
  ```
- Open a web browser and navigate to `http://localhost:8000` to use the chat interface.
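If you want to exercise the API without the web UI, you can post JSON to the chat route directly. The snippet below targets the hypothetical `/chat` endpoint from the sketch near the top of this README; check `app/main.py` for the routes the project actually exposes.

```python
# Call the chat API directly (requires `pip install requests`).
# The /chat route and payload shape follow the illustrative sketch above,
# not necessarily the routes defined in app/main.py.
import requests

resp = requests.post(
    "http://localhost:8000/chat",
    json={"session_id": "demo", "message": "Hello, GPTClone!"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```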
- `app/`: Contains the main application code
  - `main.py`: FastAPI application and route definitions
  - `models.py`: Database models
  - `schemas.py`: Pydantic schemas for request/response validation
  - `database.py`: Database connection and session management
- `static/`: Static files (CSS, JavaScript)
- `templates/`: HTML templates
- `requirements.txt`: List of Python dependencies
- `.sample.env`: Sample environment file with placeholder values
- `.env`: (Created by you) Actual environment file with your API key and model name
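For orientation, `database.py` in FastAPI projects of this shape usually wires up a SQLAlchemy engine, a session factory, and a per-request session dependency. The sketch below shows that conventional pattern; the engine URL and the use of SQLAlchemy itself are assumptions about this repository, not a copy of its code.

```python
# Conventional FastAPI + SQLAlchemy session setup; the repo's database.py may differ.
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

DATABASE_URL = "sqlite:///./gptclone.db"  # hypothetical URL, for illustration only

engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

def get_db():
    """FastAPI dependency that yields a session and closes it after the request."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
```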
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.