An online RPG built in TypeScript.
This is a pet project that I work on over the weekends, live at https://www.twitch.tv/kasper_573.
I'm doing this project for fun and to teach myself more about multiplayer game development and web development infrastructure.
- graphics: pixi.js
- maps: tiled (+custom loader/renderer)
- ui: preact
- database: postgres + drizzle
- network: ws, trpc, custom sync lib
- auth: keycloak
- observability: grafana
- CI/CD: Lint, test, build, deploy in pipeline.
- Highly replicable: Containerized development, test and production environments.
- (near) Zero config: Just clone and run.
- Modularity
- Authoritative server, dead simple client
- little to no optimistic operations (maybe some lerping)
- subscribe to state changes, render them (see the sketch after this list).
- Dynamically loaded and served game content
- only reusable and built-in game mechanics are part of this repo.
- game content like models, maps, npcs, monsters, etc. is not part of this repo.
- should be provided externally when deploying.
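
To make the "subscribe to state changes, render them" idea concrete, here is a minimal sketch of what the client side amounts to. It is illustrative only: the state shape and function names are made up, and it uses `@preact/signals-core` as described in the state management section further down.

```ts
// Minimal sketch of a "dead simple" client (all names are illustrative).
import { signal, effect } from "@preact/signals-core";

// The authoritative server flushes snapshots; the client just stores them.
interface GameState {
  players: Record<string, { x: number; y: number }>;
}

const gameState = signal<GameState>({ players: {} });

// The networking layer replaces the snapshot whenever the server flushes state.
function onServerFlush(snapshot: GameState) {
  gameState.value = snapshot;
}

// Rendering subscribes to state changes and redraws. No prediction, no rollback.
effect(() => {
  for (const [id, player] of Object.entries(gameState.value.players)) {
    console.log(`render ${id} at (${player.x}, ${player.y})`);
  }
});

// Simulate a server flush: the effect above re-runs with the new state.
onServerFlush({ players: { somePlayer: { x: 10, y: 20 } } });
```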
Local development is done using node and docker compose.
Check out the architecture section for an overview of the system.
- Install Docker
- Install NodeJS
- Clone this repository
- Run `cd docker && ./dockerctl.sh dev up -d`. If you're on WSL you may need to run `dockerctl.sh` with `sudo`.
- Run `sudo ./docker/install-cert.sh`. You may need to add the root certificate manually to your browser, depending on which browser you are using.
- Enable and prepare corepack for this repo
- Run `pnpm install`
- Run `pnpm -F db devenv push` to initialize your database
- Run `pnpm -F db devenv seed` to seed the database
- Run `pnpm -F keycloak devenv provision` to provision keycloak roles
- Sign in as admin to `auth.mp.localhost`, create a test account, and add yourself to the `admin` group
- Run `pnpm dev`
- Visit `https://mp.localhost` in your browser
To apply your changes you will have to run the appropriate docker compose commands via the `dockerctl.sh` script. See quirks.
Database schema changes are handled with drizzle-kit.
Run its CLI against the development environment using `pnpm -F db devenv <drizzle-kit command>`.
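
For orientation, a drizzle schema module in the db package looks roughly like the sketch below. The table and columns are hypothetical, not the actual schema; drizzle-kit commands such as `push` operate against schema definitions of this shape.

```ts
// Hypothetical drizzle schema (illustrative table and columns, not the real schema).
import { pgTable, serial, text, integer } from "drizzle-orm/pg-core";

export const characters = pgTable("characters", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  areaId: text("area_id").notNull(),
  experience: integer("experience").notNull().default(0),
});
```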
User roles are defined in TypeScript source code as a single source of truth and provisioned to keycloak via the provision script in the server package. If you make changes to the user roles, you will have to run the provisioning script to update your keycloak instance.
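
As a rough illustration of that pattern (the role names and helper below are hypothetical, not the actual definitions):

```ts
// Hypothetical single source of truth for user roles. The provision script
// can iterate over this list and create the matching roles in keycloak.
export const userRoles = ["admin", "gameMaster", "player"] as const;

// The same definition doubles as a compile-time type for the rest of the code base.
export type UserRole = (typeof userRoles)[number];

export function isUserRole(value: string): value is UserRole {
  return (userRoles as readonly string[]).includes(value);
}
```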
Production is provisioned automatically when changes are pushed to master, so you don't have to handle that manually.
While most of the repo should be fairly conventional, I've made a few choices that may be unexpected and are worth mentioning. Here's what you need to know:
In development, our own apps run on the host machine, outside the docker network, while 3rd party services run inside the docker network. This is contrary to production and testing, where everything runs inside the docker network.
This provides the best development experience, since many node development tools expect you to interact with them directly on the host machine, e.g. vscode's language service, the vite dev server, the drizzle-kit CLI, pnpm link, etc.
We don't use docker compose directly in the CLI. Instead we use a wrapper script that in turn will run the appropriate docker compose commands. See dockerctl.sh for more information.
All docker concerns reside in /docker and should be very loosely coupled with the rest of the codebase. Docker should only be aware of application and package build/dev tasks, their output artifacts and environment variables.
The /docker folder is designed to serve as the required configuration both for building docker images and for running docker containers:
Building images: You can build docker images from source; doing so requires the rest of the repository's source code to be present.
Starting containers: You can also start containers for prebuilt docker images, in which case only the /docker folder is required to be present. You can do so by running dockerctl.sh with prod or test environment. In fact, this is how the production environment is managed: The CI uploads the /docker folder to the production server as a runtime dependency, and runs dockerctl.sh prod <docker compose command>.
This repository comes with a github actions workflow that performs automatic deployments whenever the main branch receives updates. It's a simple deploy script designed to deploy to a single remote machine. It logs in to your remote machine via ssh and updates or initializes the docker stack utilizing the same docker compose file as in development but with production environment variables provided via github action variables and secrets.
Review the workflow to see which variables and secrets you need to provide.
This repository utilizes pnpm workspaces to organize and separate concerns.
Lower level workspaces may not depend on higher level workspaces.
These are the workspaces, in order:
Deployable executables. Most business logic exists here.
May depend on other apps, but it's preferable to do so via protocol (e.g. HTTP requests) rather than a direct dependency on code.
The apps are responsible for bundling.
Compositions of libraries or integrations with third party services.
May depend on libraries, but not apps, with one exception: @mp/game-service exposes event router typedefs that are okay to use. They allow clients to get a type-safe event emitter for the game service.
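
That exception might look something like the sketch below. Everything here is an assumption for illustration: `GameServiceEventRouter`, `createEventClient`, `@mp/event-router` and the URL are made-up names, and the real API may differ. The point is that only type information crosses the boundary, so no game service runtime code ends up in the client.

```ts
// Hypothetical: a client building a type-safe event emitter from the game
// service's event router typedefs (all identifiers are illustrative).
import type { GameServiceEventRouter } from "@mp/game-service";
import { createEventClient } from "@mp/event-router";

// "import type" is erased at build time, so the client depends only on the
// game service's types, never on its runtime code.
const gameEvents = createEventClient<GameServiceEventRouter>({
  url: "wss://mp.localhost", // illustrative endpoint
});
```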
Generic and low level systems. No business logic.
Should be highly configurable and modular.
Optimally each package is standalone and has no dependencies on other packages in the repo. However, this is more of a goal than a rule. Many libraries will have to depend on really core stuff like @mp/std and @mp/time, but in general you should decouple packages and instead compose them together inside an app, as sketched below.
Does not need to handle bundling; package.json may directly export untranspiled code, i.e. TypeScript.
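
A self-contained sketch of that composition guideline: the building blocks below stand in for two independent packages (their real exports are unknown to me), and the app supplies the glue code that wires them together.

```ts
// Stand-in for a generic package, e.g. something like @mp/time (illustrative API).
type TickHandler = (deltaMs: number) => void;
function createTicker(intervalMs: number, onTick: TickHandler) {
  return setInterval(() => onTick(intervalMs), intervalMs);
}

// Stand-in for another generic package, e.g. a sync/flush helper (illustrative API).
function createFlusher<T>(send: (state: T) => void): (state: T) => void {
  return (state) => send(state);
}

// App-level glue: neither building block knows about the other.
const flush = createFlusher<{ tick: number }>((state) => console.log(state));
let tick = 0;
createTicker(50, () => flush({ tick: ++tick }));
```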
There's also a reverse proxy (caddy) in front of everything, but it's not shown in the diagram because it's a proxy and doesn't help demonstrate how services interact.
```mermaid
flowchart LR
User --> WEB["Website"]
GC -->|Send game client events| GW["Gateway"]
GS -->|Flush game state to game clients| GW
GW -->|Cross service event broadcast| GS["Game service"]
GW -->|Broadcast game client events to game services| GS
WEB -->|Show game client on some pages| GC["Game client"]
GC -->|CRUD character data| API["API service"]
GS -->|Load area data required by instance| API
WEB -->|Load assets| FS["File server (caddy)"]
API -->|List game client assets| FS
GC -->|Load game client assets| FS
GW --> KC["Auth service (keycloak)"]
API --> KC
GS -->|Load/Save game state| DB["Database (postgres)"]
GW -->|Save player online state| DB
API -->|Load/Edit game state| DB
KC -->|CRUD user data| DB
```
To keep things consistent we use signals as a reusable state management system across all systems.
However, we have an encapsulation package in @mp/state that essentially just re-exports @preact/signals-core.
We do this to avoid coupling every system directly to preact.
```mermaid
flowchart LR
State["@mp/state (@preact/signals-core)"] --> GS["Game service"]
State --> GFX["Graphics (pixi.js)"]
State --> UI["UI (preact)"]
```
If you're writing code in the UI layer you're free to use preact signals directly, but if you're in any other part of the system you should depend on `@mp/state` instead of preact directly.
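
Concretely, the indirection can be as thin as a re-export plus a rule about where to import from. The sketch below assumes `@mp/state` really is just a pass-through, as described above; the consuming code is illustrative.

```ts
// @mp/state entry point (sketch): a thin facade over the signals library.
// Swapping the underlying implementation later only touches this one module.
export * from "@preact/signals-core";
```

```ts
// Anywhere outside the UI layer (illustrative): import from the facade, not preact.
import { signal, effect } from "@mp/state";

const health = signal(100);

// React to state changes, e.g. for rendering or logging.
effect(() => console.log(`health is now ${health.value}`));

health.value -= 10; // the effect re-runs and logs "health is now 90"
```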