Separate user from dev documentation #168
# Contributing

[](https://deepwiki.com/piercefreeman/waymark)

## Development

### Packaging
Use the helper script to produce distributable wheels that bundle the Rust executables with the Python package:

```bash
$ uv run scripts/build_wheel.py --out-dir target/wheels
```
The script compiles every Rust binary (release profile), stages the required entrypoints (`waymark-bridge`, `boot-waymark-singleton`) inside the Python package, and invokes `uv build --wheel` to produce an artifact suitable for publishing to PyPI.
### Local Server Runtime

The Rust runtime exposes a gRPC API (plus a gRPC health check) via the `waymark-bridge` binary:

```bash
$ cargo run --bin waymark-bridge
```
Developers can either launch it directly or rely on the `boot-waymark-singleton` helper, which finds (or starts) a single shared instance on `127.0.0.1:24117`. The helper prints the active gRPC port to stdout so Python clients can connect without additional configuration:

```bash
$ cargo run --bin boot-waymark-singleton
24117
```
The Python bridge automatically shells out to the helper unless you provide `WAYMARK_BRIDGE_GRPC_ADDR` (or `WAYMARK_BRIDGE_GRPC_HOST` + `WAYMARK_BRIDGE_GRPC_PORT`) overrides. Once the port is known, it opens a gRPC channel to the `WorkflowService`.
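As an illustration of the override behavior described above, a client could resolve the bridge address like this. This is a hedged sketch: `resolve_bridge_addr`, the default host fallback, and the precedence of the full-address variable are our own illustration, not waymark's actual implementation.

```python
from typing import Optional

DEFAULT_HOST = "127.0.0.1"  # host used by the singleton helper

def resolve_bridge_addr(env: dict) -> Optional[str]:
    """Return 'host:port' from the override env vars, or None,
    meaning the caller should shell out to boot-waymark-singleton."""
    # Assume the full-address override wins outright.
    addr = env.get("WAYMARK_BRIDGE_GRPC_ADDR")
    if addr:
        return addr
    # Otherwise combine host/port overrides, defaulting the host.
    port = env.get("WAYMARK_BRIDGE_GRPC_PORT")
    if port:
        host = env.get("WAYMARK_BRIDGE_GRPC_HOST", DEFAULT_HOST)
        return f"{host}:{port}"
    return None  # no overrides: launch or find the shared instance
```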
### Benchmarking

Run the Rust benchmark harness (defaults to `--count 1000`) via:

```bash
$ make benchmark
```
`make benchmark` builds with `--features trace`, writes a tracing-chrome file, and prints a pyinstrument-style summary via `scripts/parse_chrome_trace.py`. Override the trace path with `BENCH_TRACE=...`, the summary size with `BENCH_TRACE_TOP=...`, or the benchmark arguments with `BENCH_ARGS="--count 200 --batch-size 50"`. Set `BENCH_RELEASE=1` to run the benchmark binary from the release profile. `make benchmark-trace` is an alias if you want the explicit target name.
To inspect task waits and blocking points via tokio-console, use:

```bash
$ make benchmark-console
```
This opens a tmux session with the benchmark on the left and `tokio-console` on the right. `make benchmark-console` requires tmux, and `tokio-console` must be installed (`cargo install tokio-console --locked`). Tokio console also requires building with `RUSTFLAGS="--cfg tokio_unstable"`, which the make target sets by default (override with `BENCH_RUSTFLAGS=...`). The console listens on `127.0.0.1:6669` by default; override with `TOKIO_CONSOLE_BIND`. This is a tokio-console socket, not an HTTP endpoint, so it won't load in a browser. If tokio-console shows "RECONNECTING", reinstall it so the client/server protocols match: we track the latest `console-subscriber` (0.5.x) while the CLI is still 0.1.x, so a stale install often causes reconnect loops.
Stream benchmark output directly into our parser to summarize throughput and latency samples:

```bash
$ cargo run --bin bench -- \
    --messages 100000 \
    --payload 1024 \
    --concurrency 64 \
    --workers 4 \
    --log-interval 15 \
  | uv run python/tools/parse_bench_logs.py
```
The `bench` binary seeds raw actions to measure dequeue/execute/ack throughput. Use `bench_instances` for an end-to-end workflow run (queueing and executing full workflow instances via the scheduler) without installing a separate `waymark-worker` binary; the harness shells out to `uv run python -m waymark.worker` automatically:
```bash
$ cargo run --bin bench_instances -- \
    --instances 200 \
    --batch-size 4 \
    --payload-size 1024 \
    --concurrency 64 \
    --workers 4
```

Add `--json` to the parser if you prefer JSON output.
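The kind of summarization the parser performs can be sketched as follows. This is illustrative only: the real tool is `python/tools/parse_bench_logs.py`, and the log line format assumed here (`throughput=.../s p99=...ms`) is our invention, not the bench binary's documented output.

```python
import re
from statistics import mean

# Hypothetical log shape; the real bench output format may differ.
LINE = re.compile(r"throughput=(?P<tput>[\d.]+)/s\s+p99=(?P<p99>[\d.]+)ms")

def summarize(lines):
    """Collect throughput and p99 latency samples from log lines
    and report the average of each."""
    tputs, p99s = [], []
    for line in lines:
        m = LINE.search(line)
        if m:
            tputs.append(float(m.group("tput")))
            p99s.append(float(m.group("p99")))
    return {"avg_throughput": mean(tputs), "avg_p99_ms": mean(p99s)}
```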
## Testing

### Rust tests (unit + integration)

Integration fixtures are run by the Rust entrypoint binary `src/bin/integration_test.rs`. It runs curated fixtures from `tests/integration_tests` and checks parity:

- Baseline execution via direct inline Python workflow logic
- Runtime execution via Rust DAG execution + the in-memory backend
- Runtime execution via Rust DAG execution + the Postgres backend
- Backend results must exactly match the inline baseline (result or error payload)
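The parity requirement in the last bullet amounts to a simple comparison, which can be sketched like this. The real harness is the Rust binary above; `check_parity` and the `(kind, payload)` outcome shape are illustrative assumptions.

```python
def check_parity(baseline, backend_results):
    """baseline: the inline-Python outcome for one fixture.
    backend_results: {backend_name: outcome} from the runtime runs.
    Each outcome is modeled as a (kind, payload) tuple, where kind
    is 'result' or 'error'. Every backend must match the baseline
    exactly; any mismatch is returned for reporting."""
    return {
        name: outcome
        for name, outcome in backend_results.items()
        if outcome != baseline
    }  # empty dict means parity holds
```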
Commands:

```bash
# Everything (unit + integration)
cargo test

# Run fixture integration parity (default backends: in-memory,postgres)
cargo run --bin integration_test

# Run selected fixture case IDs only
cargo run --bin integration_test -- --case simple --case parallel

# Restrict parity backends (comma-separated)
cargo run --bin integration_test -- --backends in-memory
```
Prereqs:

- No manual Postgres startup is required for the default test harness configuration.
- Ensure `uv` is installed and `python/.venv` is prepared (`cd python && uv sync`).
### Python tests

```bash
cd python
uv run pytest
```
# README

waymark is a library that lets you build durable background tasks that withstand server restarts, task crashes, and long-running jobs. It's built for Python and Postgres without any additional deploy-time requirements. More languages are coming soon.
## Usage

We ship all client and server wheels as a Python package. Install it via your package manager of choice:
```bash
export WAYMARK_DATABASE_URL=postgresql://postgres:postgres@localhost:5432/waymark
uv run start-workers
```
Let's say you need to send welcome emails to a batch of users, but only the active ones. You want to fetch them all, filter out inactive accounts, then fan out emails in parallel. This is how you write that workflow in waymark:
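The original code example is elided in this diff. A minimal sketch of the shape such a workflow might take is below; note that the `workflow` and `action` decorators here are no-op stand-ins so the sketch runs on its own, not waymark's real API, and all names in it are our own invention.

```python
import asyncio

# Stand-in decorators; waymark's real @workflow/@action are not shown
# in this diff, so these simply return their targets unchanged.
def workflow(cls):
    return cls

def action(fn):
    return fn

@action
async def fetch_users():
    # Placeholder data source standing in for a real query.
    return [{"email": "a@example.com", "active": True},
            {"email": "b@example.com", "active": False}]

@action
async def send_welcome_email(email: str) -> str:
    return f"sent:{email}"

@workflow
class WelcomeEmails:
    async def run(self):
        users = await fetch_users()
        active = [u for u in users if u["active"]]
        # Fan out one email per active user, in parallel.
        return await asyncio.gather(
            *(send_welcome_email(u["email"]) for u in active)
        )
```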
To build truly robust background tasks, you need to consider how things can go wrong. By default, an action is attempted only once if it raises an explicit exception. Timeouts, on the other hand, are retried indefinitely, since they are usually caused by cross-device coordination issues.
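This retry policy can be modeled in a few lines. The sketch below is an illustration of the semantics just described, not waymark's scheduler code; `run_with_policy` and the loop guard are our own.

```python
def run_with_policy(fn, is_timeout, max_loops=100):
    """Run fn under the described policy: re-run on timeout, fail
    fast on any other exception. max_loops only keeps the sketch
    terminating; waymark retries timeouts indefinitely."""
    for _ in range(max_loops):
        try:
            return fn()
        except Exception as exc:
            if is_timeout(exc):
                continue  # timeouts are retried
            raise  # explicit exceptions get a single attempt
    raise RuntimeError("gave up (sketch guard only)")
```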
## Project Status

> [!IMPORTANT]
> Right now you shouldn't use waymark in any production applications. The spec is changing too quickly, and we don't guarantee backwards compatibility before 1.0.0. But we would love it if you tried it out in a side project and let us know how you find it.

Waymark is in an early alpha. Particular areas of focus include:

1. Finalizing the Waymark Runtime Language
1. Extending AST parsing logic to handle most core control flows
1. Performance tuning
1. Unit and integration tests

If you have a particular workflow that you think should be working but isn't yet producing the correct DAG (you can visualize it via the CLI with `.visualize()`), please file an issue.
### Configuration

Waymark runtime configuration is environment-variable driven. Waymark reads the process environment directly; it does not auto-load `.env` files.
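Because `.env` files are not auto-loaded, populate the environment yourself before the process starts. A sketch of doing that by hand is below; the tiny parser is a simplified stand-in for a library like python-dotenv, and the target dict replaces `os.environ` only so the example is self-contained.

```python
def load_dotenv_into(environ, text):
    """Parse KEY=VALUE lines into environ without overwriting
    values that are already set (real environment wins)."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        environ.setdefault(key.strip(), value.strip())

env = {}
load_dotenv_into(env, "WAYMARK_DATABASE_URL=postgresql://localhost:5432/waymark\n")
```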
When a worker reaches its action limit, waymark spawns a replacement worker before retiring the old one.

By default, this is set to `None` (no limit), meaning workers run indefinitely. If you notice memory growth in your workers over time, try setting this to a value like `1000` or `10000`, depending on your action characteristics.
## Philosophy

Background jobs in webapps are so frequently used that they should really be a primitive of your fullstack library: database, backend, frontend, _and_ background jobs. Otherwise you're stuck either making users wait on blocking API requests or spinning up ephemeral tasks that get killed during re-deployments or an accidental docker crash.
On the point of control flow, we shouldn't be forced into a DAG definition (decorators). Nothing on the market provides this balance; `waymark` aims to try. We don't expect to reach best-in-class load performance. Instead, we intend for this to scale _most_ applications well past product-market fit.
### How It Works

Waymark takes a different approach from replay-based workflow engines like Temporal or Vercel Workflow.
When you decorate a class with `@workflow`, Waymark parses the `run()` method's AST.

This is convenient in practice because it means that if your workflow compiles, it will run as advertised. There's no need to hack around non-deterministic stdlib functions (like time/uuid/etc.): you'll get a compilation error telling you to move these into an explicit `@action`, where all non-determinism should live.
### Other options

**When should you use Waymark?**
Almost all of these require a dedicated task broker that you host alongside your application.

Open source solutions like RabbitMQ have been battle-tested over decades, and large companies like Temporal can devote substantial resources to optimization. Both are great choices, just intended to solve for different scopes. Expect a correspondingly higher amount of setup and management complexity.
## Contributing

If you want to contribute, check out the [contributing guidelines](./CONTRIBUTING.md).