> **Warning**
> Work-in-progress: This project is in a very early stage. Only the minimum core functionality is implemented, and many features are still missing or incomplete.
A simple, open-source Next.js web interface for interacting with Google's Gemini AI API. It provides a ChatGPT-like chat experience with streaming responses and uses PostgreSQL to store chat history.
- Interactive, ChatGPT-inspired chat interface
- Integration with Google's Gemini AI API
- Basic streaming responses
- Persistent conversation history (PostgreSQL)
- Support for free and paid Google API keys
- Markdown and code syntax highlighting
- Modern UI built with Next.js and Tailwind CSS
```shell
git clone https://github.com/W4D-cmd/QuantaGem.git
```

Create a new file `.env.local` in the root of the repository. Copy the contents of the `.env` file into it and set your API keys.
> **Note**
> If you do not have multiple Google accounts, or only want to use the free API, simply use the same key for both entries.
```
FREE_GOOGLE_API_KEY="your_free_google_api_key"
PAID_GOOGLE_API_KEY="your_paid_google_api_key"
```

You must also set `JWT_SECRET` to a random, cryptographically strong string.
This secret is vital for securing user sessions and should be at least 32 characters (256 bits) long.
You can generate a suitable value with

```shell
node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
```

and add it to your `.env.local` file.
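If Node.js is not at hand, an equivalent secret can be produced with Python's standard library. This is an alternative to the node one-liner above, not the project's documented method:

```python
import base64
import secrets

# 32 random bytes (256 bits), base64-encoded -- the same shape of value
# the node one-liner produces.
jwt_secret = base64.b64encode(secrets.token_bytes(32)).decode()
print(jwt_secret)
```

Paste the printed value into `JWT_SECRET` in your `.env.local`.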
```
JWT_SECRET="your_jwt_secret"
```

The application uses `medium` as the default model for Speech-to-Text transcription. You can change this to any other model from the Faster Whisper family to balance performance and accuracy according to your hardware and needs.
To change the model, you need to edit the model identifier string in the STT service's source code.
- Open the file `stt-service/main.py`.
- Locate the `model_size` variable at the top of the file:

  ```python
  model_size = "medium"
  compute_type = "int8"
  ```

- Replace the string value of `model_size` (e.g., `"medium"`) with the name of your desired model from the list below (e.g., `"distil-large-v3"`).
- Save the file and rebuild the Docker container with `docker compose up --build` for the changes to take effect.
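For context, the sketch below shows how a model is typically constructed from these two settings with the faster-whisper library. It is a minimal sketch based on the public faster-whisper API, not a copy of the project's `stt-service/main.py`:

```python
model_size = "medium"   # swap for e.g. "distil-large-v3"
compute_type = "int8"   # int8 keeps CPU memory usage low

def load_model():
    # Imported lazily so this sketch stays self-contained; in a real
    # service the import would sit at the top of the module.
    from faster_whisper import WhisperModel
    return WhisperModel(model_size, device="cpu", compute_type=compute_type)

def transcribe(path: str) -> str:
    # WhisperModel.transcribe returns (segments, info); each segment
    # carries a .text attribute with the recognized speech.
    model = load_model()
    segments, _info = model.transcribe(path)
    return " ".join(segment.text.strip() for segment in segments)
```

Larger models raise accuracy at the cost of latency and memory, which is why the two settings are surfaced as plain variables.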
Available Faster Whisper Models
Here is a list of available models, grouped by type. Larger models are more accurate but slower and require more resources.
- Multilingual: `tiny`, `base`, `small`, `medium`, `large-v1`, `large-v2`, `large-v3`
- English-only: `tiny.en`, `base.en`, `small.en`, `medium.en`
- Distilled: `distil-small.en`, `distil-medium.en`, `distil-large-v2`, `distil-large-v3`
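A typo in the model name only surfaces when the container rebuilds, so it can help to check the name first. The helper below is hypothetical (not part of the project) and simply encodes the list above:

```python
# Hypothetical helper: check a Faster Whisper model name before
# editing stt-service/main.py. The sets mirror the list above.
MULTILINGUAL = {"tiny", "base", "small", "medium", "large-v1", "large-v2", "large-v3"}
ENGLISH_ONLY = {"tiny.en", "base.en", "small.en", "medium.en"}
DISTILLED = {"distil-small.en", "distil-medium.en", "distil-large-v2", "distil-large-v3"}
VALID_MODELS = MULTILINGUAL | ENGLISH_ONLY | DISTILLED

def is_valid_model(name: str) -> bool:
    return name in VALID_MODELS

print(is_valid_model("distil-large-v3"))  # True
print(is_valid_model("medium-v3"))        # False
```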
Inside the cloned repository, execute the following command to start the Docker environment, including the database and the Next.js app:

```shell
docker compose up --build
```

Then open your browser at http://localhost:3000.
Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create your feature branch (`git checkout -b feature/my-feature`).
- Commit your changes (`git commit -am 'Add new feature'`).
- Push to the branch (`git push origin feature/my-feature`).
- Create a new pull request.
Licensed under the MIT License. See LICENSE for details.