This repository contains a backend for a multi-tenant SaaS platform built with FastAPI, Celery, Redis, and PostgreSQL. Everything is containerized and can be run locally for development.
The application follows a distributed architecture pattern with the following key components:
- FastAPI Backend: Handles HTTP requests and serves as the main API gateway
- Celery Workers: Process asynchronous tasks and long-running operations
- Redis: Serves dual purposes:
  - Message broker for the Celery task queue
  - Caching layer for temporary data storage
- PostgreSQL: Persistent storage for application data
Request flow:
- Clients send requests to the FastAPI backend
- For long-running operations, the backend dispatches tasks to Celery workers via Redis
- Celery workers process tasks asynchronously and publish results to Redis
- WebSocket endpoints stream results to the client by subscribing to Redis channels
- Permanent data is stored in PostgreSQL
Prerequisites:
- Docker and Docker Compose installed on your system
```
git clone <repository-url>
cd woopdi-api
```

Create a `.env` file in the root directory (same level as `docker-compose.yml`) with the following variables, based on `.env.example`:

```
# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379

# PostgreSQL Configuration
POSTGRES_USER=development
POSTGRES_PASSWORD=devpass
POSTGRES_DB=name_your_database
POSTGRES_PORT=5432

# Google Cloud Storage (Optional)
GCP_BUCKET_NAME=your-gcp-bucket-name
GCP_SERVICE_FILE_CRED_JSON_LOCATION=your-json-service-file

# Email Configuration (SendGrid)
SENDGRID_API_KEY=your_sendgrid_api_key
SENDGRID_FROM_EMAIL=your_sendgrid_from_email

# Stripe Configuration
STRIPE_SECRET_KEY=your_stripe_secret_key
STRIPE_WEBHOOK_SECRET=your_stripe_webhook_secret

# Celery Configuration
CELERY_RESULT_BACKEND=redis://redis:6379/0
CELERY_BROKER_URL=redis://redis:6379/0

# URL Configuration
API_HOST=127.0.0.1
WEB_CLIENT_URL=http://localhost:3000

# Security Keys
JWT_SECRET=your_jwt_secret_key
ENCRYPTION_KEY=your_encryption_key

# Environment
IS_PROD=False

# AI Services (Optional)
REPLICATE_API_TOKEN=your-replicate-api-token
```

Google Cloud Storage (Optional):
If you want to use Google Cloud Storage features, get your service key from Google Cloud Console and include its location in the .env.
The .env file is gitignored so it won't be committed to the repository.
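Application code can consume these variables with plain `os.environ` lookups. A minimal sketch, with defaults matching the `.env` above (the helper names are illustrative, not from this repository):

```python
import os

def redis_url(db: int = 0) -> str:
    """Build a redis:// URL from REDIS_HOST / REDIS_PORT."""
    host = os.environ.get("REDIS_HOST", "redis")
    port = os.environ.get("REDIS_PORT", "6379")
    return f"redis://{host}:{port}/{db}"

def is_prod() -> bool:
    """IS_PROD arrives as a string; treat anything but 'true' as False."""
    return os.environ.get("IS_PROD", "False").strip().lower() == "true"
```

Centralizing lookups like this keeps the rest of the code free of string defaults scattered across modules.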
```
docker compose up
```

This command builds all containers and starts them, including:
- FastAPI application
- Celery workers
- Redis
- PostgreSQL

Open another terminal and run the seed to create test users and organizations:

```
docker exec -it woopdi_fastapi_app /bin/bash
python seed/seed_database.py
```

The seed creates:
- System users (superadmin, admin)
- Organization users with test organizations
- Various roles (ADMIN, MODERATOR, MEMBER)
- Test data for development and testing
- API: http://localhost:8000
- API OAS Documentation: http://localhost:8000/docs
- API OAS Documentation Redoc: http://localhost:8000/redoc
- Interactive terminal:
  ```
  docker exec -it woopdi_fastapi_app /bin/bash
  ```
```
# Run all tests
docker exec woopdi_fastapi_app pytest

# Interactive testing
docker exec -it woopdi_fastapi_app /bin/bash
```

Tests mirror the development seed structure for consistency. The test configuration in `tests/conftest.py` creates the same users and organizations as the development seed.
The system uses SQLAlchemy as an ORM and Alembic for migrations.
When you make changes to models:
- Get into the container:
  ```
  docker exec -it woopdi_fastapi_app /bin/bash
  ```
- Generate the migration:
  ```
  alembic revision --autogenerate -m "added a new field to the project model"
  ```
- Fix permissions (important!):
  ```
  chmod 777 alembic/versions/your-new-migration-file.py
  ```

Migrations are run automatically when you start the containers. If you need to run them manually:
```
alembic upgrade head
```

```
# Stop containers
docker compose down --remove-orphans

# Clear all volume data (fresh start)
docker compose down --remove-orphans -v

# Restart after stopping
docker compose down --remove-orphans
docker compose up

# View logs
docker compose logs -f

# View container status
docker compose ps
```

To reseed the database:

```
docker exec -it woopdi_fastapi_app /bin/bash
python seed/seed_database.py
```

When adding new models that need seed data, use AI to help update the seed files; it is usually good at following the existing pattern and adding appropriate test data.
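The seed pattern can be sketched like this. The `Organization` model and helper below are hypothetical stand-ins (the real models live in the application package), demonstrated against an in-memory SQLite database so the sketch is self-contained:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Organization(Base):
    """Hypothetical model mirroring the seed pattern."""
    __tablename__ = "organizations"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)
    is_solo = Column(Integer, nullable=False, default=1)

def seed_organizations(session: Session) -> int:
    """Insert test rows only if they are missing, so reseeding is safe."""
    rows = [("Solo Workspace", 1), ("Acme Inc", 0)]
    added = 0
    for name, solo in rows:
        if session.query(Organization).filter_by(name=name).first() is None:
            session.add(Organization(name=name, is_solo=solo))
            added += 1
    session.commit()
    return added

# Demonstrate against an in-memory SQLite database.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    first_run = seed_organizations(session)   # inserts both rows
    second_run = seed_organizations(session)  # no-op: rows already exist
```

The existence check before each insert is what makes rerunning the seed harmless.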
- System Level Users (not associated with organizations):
  - superadmin - Full system access
  - admin - Limited system access
- Organization Level Users:
  - ADMIN owner - Admin of the organization AND owner of a non-solo organization
  - ADMIN - Admin of the organization, but not necessarily the owner
  - MODERATOR - Middle tier for privileges
  - MEMBER - Regular organization user

Look at the actual seed files to understand the different user types in the system.
The API uses middleware functions to gate endpoints. This approach is simple and easily modifiable - just add new functions or modify existing ones as needed.
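A minimal sketch of such a gate, using the role names from the seed. This is a simplified stand-in, not the repository's actual middleware (the exception type and function name are invented):

```python
class Forbidden(Exception):
    """Raised when a user's role does not allow the endpoint."""

def require_role(user: dict, allowed: frozenset) -> dict:
    """Check the user's role before the handler runs; return the user on success."""
    role = user.get("role")
    if role not in allowed:
        raise Forbidden(f"role {role!r} is not one of {sorted(allowed)}")
    return user

# Usage: gate an endpoint to moderators and admins.
MODERATOR_OR_ABOVE = frozenset({"ADMIN", "MODERATOR"})
```

Adding a new gate is a matter of writing another small function like this and attaching it to the endpoints that need it.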
Solo Organizations:
- Automatically assigned to all users who enter the system (signup or invite)
- Provide personal workspace/context for each user
- All users are ADMIN owners of their solo organization
- Used for inviting people to collaborate
Non-Solo Organizations:
- Created when users need their own organization space
- Users can only own 1 non-solo organization in this starter template
- Endpoint: POST /organizations/create-organization - for users who were invited but need their own organization
Issue: Invited users don't get a solo organization automatically
Solution: Use POST /organizations/create-organization endpoint to create a non-solo organization for users who don't have one yet.
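A sketch of assembling that call. The `name` payload field and the bearer-token header are assumptions; check the actual request schema at /docs before relying on them:

```python
import json

def build_create_org_request(base_url: str, token: str, org_name: str) -> dict:
    """Assemble the create-organization call described above.

    The `name` field is an assumption about the payload schema; verify it
    against the OpenAPI docs at /docs.
    """
    return {
        "method": "POST",
        "url": f"{base_url}/organizations/create-organization",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"name": org_name}),
    }

req = build_create_org_request("http://localhost:8000", "<jwt>", "Acme Inc")
```

Send the assembled request with any HTTP client once the containers are up and the user is authenticated.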
Production setup is similar to development since everything is containerized. Consider using managed services for:
- PostgreSQL (AWS RDS, Google Cloud SQL, etc.)
- Redis (AWS ElastiCache, Google Memorystore, etc.)
- File storage (a production GCP bucket or equivalent)
This setup provides a clean, consistent development environment that mirrors production architecture.
