An open-source backend service for Resell.
Make sure npm is installed (nvm is recommended for managing Node versions).
Depending on preference, run either `yarn install` or `npm install` to ensure
all required dependencies are installed. Copy `.env_template` to a `.env`
file and fill in the necessary credentials. Sourcing the file is not
necessary.
In addition, this codebase requires these files in the project root to run:

- `.env` → copy from `.env_template` and fill with your database credentials, Firebase configs, and API keys
- `resell.pem` → private key file required for authentication
- a Resell Firebase JSON file → Firebase service account key for connecting to Firebase
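As a quick sanity check before starting the server, you can verify that the required root files are present. This is only a convenience sketch; it checks `.env` and `resell.pem` from the list above, and since the Firebase service-account JSON has a project-specific name, you would add that entry yourself:

```shell
# Pre-flight check (a sketch): confirm the required root files exist.
# Only .env and resell.pem are checked; your Firebase service-account
# JSON has a project-specific name, so add it to the list yourself.
check_required_files() {
  missing=0
  for f in .env resell.pem; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
  return "$missing"
}

check_required_files || echo "copy .env_template to .env and add the key files first"
```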
Steps to install Postgres on macOS:

1. Update Homebrew:

   ```
   brew update
   ```

2. Install and start PostgreSQL:

   ```
   brew install postgresql
   brew services start postgresql
   ```

3. Initialize the database:

   ```
   initdb /usr/local/var/postgres
   ```

4. Create the user and database. The `create` and `alter` commands must be run inside psql:

   ```
   psql postgres
   create user postgres with password 'postgres';
   alter user postgres with superuser;
   create database "resell-dev";
   ```
Use the `\l` command to see if the "resell-dev" database is owned by user postgres. If
instead it is owned by another root user, drop the database via:

```
drop database "resell-dev";
```

and log in to psql via

```
psql postgres postgres
```

Then create the database again, and it should be owned by user postgres.
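If you'd rather not open the interactive shell, the same ownership check can be done in a single command. This is just a convenience sketch of the `\l` step above:

```shell
# Print the owner of resell-dev without opening an interactive psql session.
# Usage: check_owner   (expects local postgres superuser access)
check_owner() {
  psql -U postgres -d postgres -tAc \
    "SELECT pg_get_userbyid(datdba) FROM pg_database WHERE datname = 'resell-dev';"
}
```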
On Windows:

- Download PostgreSQL from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
- During install, include Command Line Tools and pgAdmin 4. Set the default password for the `postgres` user (e.g., `postgres`).
- Add PostgreSQL's bin folder to PATH: `C:\Program Files\PostgreSQL\<version>\bin`
- Open Command Prompt or PowerShell:

  ```
  psql -U postgres -d postgres
  ```

- Inside psql:

  ```
  CREATE USER postgres WITH PASSWORD 'postgres';
  ALTER USER postgres WITH SUPERUSER;
  CREATE DATABASE "resell-dev";
  ```

- Verify with `\l`. If "resell-dev" is owned by another user:

  ```
  DROP DATABASE "resell-dev";
  CREATE DATABASE "resell-dev" OWNER postgres;
  ```
In order to connect to the database, follow these steps.

On macOS:

- Install Homebrew:

  ```
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  ```

- Update Homebrew to ensure you have the latest information:

  ```
  brew update
  ```

- Install pgAdmin:

  ```
  brew install --cask pgadmin4
  ```

- Open pgAdmin and configure the connection using the defined user and password.

On Windows, pgAdmin is installed by default with PostgreSQL. Launch it from the Start Menu and configure a new connection with:

- Host: `localhost`
- Port: `5432`
- Username: `postgres`
- Password: `postgres`
This codebase uses the pgvector extension. On macOS:

```
brew install pgvector
psql -U postgres -d resell-dev -c "CREATE EXTENSION vector;"
```
On Windows:

- Download/build pgvector from the GitHub releases.
- Copy files:
  - `vector.control` → `C:\Program Files\PostgreSQL\<version>\share\extension\`
  - `vector.dll` → `C:\Program Files\PostgreSQL\<version>\lib\`
- Restart the PostgreSQL service.
- In psql:

  ```
  CREATE EXTENSION vector;
  ```
Alternatively, with Docker:

```
docker run -d --name pgvector -e POSTGRES_PASSWORD=postgres -p 5432:5432 ankane/pgvector
```

Update `.env`:

```
DATABASE_URL=postgres://postgres:postgres@localhost:5432/resell-dev
```
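With any of the setups above running, the pieces of `DATABASE_URL` fit together as shown below, and a one-line query can confirm pgvector is usable (the `<->` L2-distance operator comes with the extension). The credentials are the local dev defaults from this README, not production values:

```shell
# Assemble the local connection URL from its parts (local dev defaults).
DB_USER=postgres
DB_PASS=postgres
DB_HOST=localhost
DB_PORT=5432
DB_NAME=resell-dev
DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"

# Smoke-test pgvector: L2 distance between two vectors (needs a running DB).
vector_smoke_test() {
  psql "$DATABASE_URL" -tAc "SELECT '[1,2,3]'::vector <-> '[1,2,4]'::vector;"
}
```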
To create/update the database objects, run:

```
npm run db:migrate
```
If you are encountering any migration errors, use this as a last resort!

1. Log into psql and run:

   ```
   drop database "resell-dev";
   ```

   This will delete all data in your database as well. Make sure you do not have any important data in your database.

2. Create the database again via:

   ```
   create database "resell-dev";
   ```

3. Delete all of the migration files in the "migrations" folder.

4. Create a new migration file titled "init" via:

   ```
   npm run db:migrate:generate init
   ```

5. Run the migration:

   ```
   npm run db:migrate
   ```
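Taken together, the last-resort reset can be scripted roughly like this. The `migrations/*` glob is an assumption about your local layout, so double-check the folder before running, and remember this destroys all local data:

```shell
# Last-resort reset (a sketch of the steps above). DESTROYS local data!
reset_local_db() {
  psql -U postgres -d postgres -c 'DROP DATABASE "resell-dev";' &&
  psql -U postgres -d postgres -c 'CREATE DATABASE "resell-dev";' &&
  rm -f migrations/* &&                    # delete existing migration files
  npm run db:migrate:generate init &&
  npm run db:migrate
}
```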
To work with the same data as the dev environment, you can import a copy of the dev database into your local setup. This works with both Docker and native postgres installations.
Before you can dump from the remote dev server, you need to configure the connection in pgAdmin:
1. Get Database Connection Details from DigitalOcean
   - Go to DigitalOcean → Databases
   - Scroll down to Database Connection Details (you'll need these for pgAdmin)

2. Add Your IP Address to the Allowlist
   - In the DigitalOcean database settings, add your IP address to the trusted sources
   - (You may have already done this)

3. Download the CA Certificate
   - In DigitalOcean, download the CA certificate
   - Save it somewhere safe (e.g., the same folder as your `resell.pem` file, like in an `appdev` folder)

4. Register the Server in pgAdmin
   - Open pgAdmin
   - Right-click Servers → Register → Server

5. Configure Connection Details
   - General tab: give it a name (e.g., "appdev-postgres")
   - Connection tab: fill in all the details from DigitalOcean:
     - Host name/address
     - Port
     - Database name
     - Username
     - Password

6. Configure SSL Settings
   - Go to the Parameters tab
   - Find `sslmode` and set it to `required`
   - Click Add (plus icon) to add a new parameter
   - Set parameter name: `root certificate`
   - Set value: the full path to the CA certificate you downloaded
   - Example: `/Users/yourname/appdev/ca-certificate.crt`

7. Save and Connect
   - Click Save
   - pgAdmin should now connect to the remote dev database
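If you prefer the command line over pgAdmin, the same SSL settings map onto a libpq-style connection string for psql. Every angle-bracket value below is a placeholder for your own DigitalOcean details, and the certificate path assumes the `appdev` folder mentioned earlier:

```shell
# Build the equivalent psql connection string (values are placeholders).
DO_HOST="<your-do-host>"
DO_PORT="<your-port>"
DO_USER="<your-username>"
CA_CERT="$HOME/appdev/ca-certificate.crt"   # path where you saved the CA cert
CONN="host=${DO_HOST} port=${DO_PORT} dbname=resell-dev user=${DO_USER} sslmode=require sslrootcert=${CA_CERT}"
# psql "$CONN"   # uncomment to connect once the placeholders are filled in
echo "$CONN"
```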
Option A: Dump from Remote Server (Recommended)

Once you have the remote dev server configured in pgAdmin (see the pgAdmin setup above), use those connection details to dump:

```
REMOTE_DB_HOST=your-dev-server-host \
REMOTE_DB_PORT=your-port \
REMOTE_DB_USER=your-username \
REMOTE_DB_NAME=resell-dev \
PGPASSWORD=your-password \
FORCE_DOCKER=1 \
./scripts/dump-dev-db.sh
```

Replace the values with your actual DigitalOcean connection details from pgAdmin. The `FORCE_DOCKER=1` flag tells the script to use your Docker container's pg_dump tool.
Option B: Dump from Local Database

If your dev database is local, simply run:

```
./scripts/dump-dev-db.sh
```

This automatically detects your Docker container or native postgres installation.
The import script automatically detects your postgres setup (Docker or native) and imports accordingly:

```
./scripts/import-dev-data.sh
```

WARNING: This will replace all data in your local resell-dev database! You'll be prompted to confirm.
What these commands do:

The first command (dump):

- Connects to the remote DigitalOcean-hosted dev database (appdev-postgres)
- Reads the `resell-dev` database from DigitalOcean (READ ONLY - no changes to the remote!)
- Saves a copy to `dumps/dev_db_dump_[timestamp].sql` on your machine

The second command (import):

- Finds the dump file you just created
- Imports it into your local `resell-dev` database (in postgres-docker)
- Shows a summary of imported tables and row counts

Important Note: Both the remote DigitalOcean database and your local database are named resell-dev, but they are completely SEPARATE. The dump script only reads from DigitalOcean and never modifies it.
Manual Override Options:

- Force Docker: `FORCE_DOCKER=1 ./scripts/import-dev-data.sh`
- Force Native: `FORCE_NATIVE=1 ./scripts/import-dev-data.sh`

Supported Setups:

- Docker containers (like `my_postgres`)
- Native PostgreSQL installations (Homebrew, apt, etc.)
- Custom PostgreSQL setups on localhost:5432
This project includes a mechanism for seeding consistent data for the dev environment using TypeORM and typeorm-seeding. The seeders generate users, posts, feedback, reviews, reports, and requests, making sure all devs work with the same data set.
To seed the database for dev, use the following command:
npm run db:seed
The seeding script checks the environment to ensure it only runs in the dev environment. If the environment variable IS_PROD is set to "true", the seeding process will be skipped to prevent accidental execution in prod.
The factories and seeders are configured to generate consistent data that is shared across all dev environments.
- Factories: Each entity has a corresponding factory that generates consistent data using an `index` to differentiate between records
- Seeders: The main seeder script (`SeedInitialData`) handles the creation of all entities, ensuring relationships are properly established
- Resetting Data: The seeder script will delete all existing data for users, posts, feedback, reviews, reports, and requests before creating new records. This ensures a clean slate for each run
- Relationships: Factories take into consideration entity relationships, such as assigning posts to users, feedback entries to users, and creating reports and reviews between users and posts
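The `IS_PROD` guard described above can be pictured as the following wrapper. The actual check lives inside the seeding script itself, so this is only an illustration of the behavior:

```shell
# Illustration of the IS_PROD guard around seeding (not the real script).
run_seed() {
  if [ "$IS_PROD" = "true" ]; then
    echo "IS_PROD=true - skipping seed"
    return 0
  fi
  npm run db:seed
}
```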