Drasi Server is a standalone server for the Drasi data change processing platform. It wraps the DrasiLib library with enterprise-ready features including a REST API, YAML-based configuration, and production lifecycle management.
Drasi is an open-source Data Change Processing platform that simplifies the creation and operation of change-driven solutions. Rather than functioning as a generic event processor, Drasi specializes in detecting meaningful data modifications through continuous monitoring.
Traditional approaches require manual polling, parsing ambiguous payloads, filtering high-volume event streams, and maintaining external state, all of which introduces brittleness and complexity. Drasi eliminates this overhead by letting you declaratively specify which changes matter to your solution through continuous queries.
- Sources: Data ingestion points that connect to your systems and stream changes
- Continuous Queries: Cypher or GQL queries that run perpetually, maintaining current results and generating notifications when they change
- Reactions: Automated responses triggered when query results change (webhooks, SSE streams, gRPC, logging)
- Bootstrap Providers: Pluggable components that deliver initial data to queries before streaming begins
- Rust 1.70 or higher
- Git with submodule support
# Start the full stack (Drasi Server + PostgreSQL)
docker compose up -d
# Or server only (bring your own database)
docker compose -f docker-compose-server-only.yml up -d
# View logs
docker compose logs -f drasi-server
# Check health
curl http://localhost:8080/health
By default, this uses the ghcr.io/drasi-project/drasi-server:0.1.0 image.
To use a different version:
# Set image via environment variable
export DRASI_SERVER_IMAGE=ghcr.io/drasi-project/drasi-server:v1.0.0
docker compose up -d
# Or inline
DRASI_SERVER_IMAGE=ghcr.io/drasi-project/drasi-server:latest docker compose up -d
# Clone with all submodules
git clone --recurse-submodules https://github.com/drasi-project/drasi-server.git
cd drasi-server
# Build the Docker image
make docker-build DOCKER_TAG_VERSION=local
# Update docker-compose to use local image
export DRASI_SERVER_IMAGE=ghcr.io/drasi-project/drasi-server:local
docker compose up -d
See DOCKER.md for detailed Docker deployment instructions.
# Ensure Rust is installed (1.70+)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone the repository with all submodules (including nested ones)
git clone --recurse-submodules https://github.com/drasi-project/drasi-server.git
cd drasi-server
# Build the server
cargo build --release
# Create a minimal configuration interactively
cargo run -- init --output config/my-config.yaml
# Start the server
cargo run -- --config config/my-config.yaml
# Check server health
curl http://localhost:8080/health
# View API documentation
open http://localhost:8080/api/v1/docs/
# List configured queries
curl http://localhost:8080/api/v1/queries
# config/server.yaml
host: 0.0.0.0
port: 8080
log_level: info
sources:
  - kind: mock
    id: test-source
    auto_start: true
queries:
  - id: my-query
    query: "MATCH (n:Node) RETURN n"
    sources:
      - source_id: test-source
reactions:
  - kind: log
    id: log-output
    queries: [my-query]
drasi-server [OPTIONS] [COMMAND]
| Option | Short | Default | Description |
|---|---|---|---|
| `--config <PATH>` | `-c` | `config/server.yaml` | Path to the configuration file |
| `--port <PORT>` | `-p` | (from config) | Override the server port |
| `--help` | `-h` | | Print help information |
| `--version` | `-V` | | Print version information |
Run the server. This is the default command when no subcommand is specified.
# These are equivalent
drasi-server --config config/server.yaml
drasi-server run --config config/server.yaml
Options:
- `--config <PATH>`: Path to configuration file (default: `config/server.yaml`)
- `--port <PORT>`: Override the server port from config
Create a new configuration file interactively. Guides you through setting up sources, queries, and reactions.
drasi-server init --output config/my-config.yaml
drasi-server init --output config/server.yaml --force  # Overwrite existing
Options:
- `--output <PATH>`, `-o`: Output path for configuration (default: `config/server.yaml`)
- `--force`: Overwrite existing configuration file
Validate a configuration file without starting the server. Useful for CI/CD pipelines.
drasi-server validate --config config/server.yaml
drasi-server validate --config config/server.yaml --show-resolved
Options:
- `--config <PATH>`: Path to configuration file to validate (default: `config/server.yaml`)
- `--show-resolved`: Display configuration with environment variables expanded
Check system dependencies and requirements.
drasi-server doctor
drasi-server doctor --all  # Include optional dependencies
Options:
- `--all`: Check for optional dependencies (Docker, etc.)
Drasi Server uses YAML configuration files. All configuration values support environment variable interpolation using ${VAR} or ${VAR:-default} syntax.
| Field | Type | Default | Description |
|---|---|---|---|
| `id` | string | (auto-generated UUID) | Unique server identifier |
| `host` | string | `0.0.0.0` | Server bind address |
| `port` | integer | `8080` | Server port |
| `log_level` | string | `info` | Log level: trace, debug, info, warn, error |
| `persist_config` | boolean | `true` | Enable saving API changes to config file |
| `persist_index` | boolean | `false` | Use RocksDB for persistent query indexes |
| `state_store` | object | (none) | State store provider for plugin state persistence |
| `default_priority_queue_capacity` | integer | `10000` | Default capacity for query/reaction event queues |
| `default_dispatch_buffer_capacity` | integer | `1000` | Default buffer capacity for event dispatching |
Example:
id: my-server
host: 0.0.0.0
port: 8080
log_level: info
persist_config: true
persist_index: false
state_store:
  kind: redb
  path: ./data/state.redb
sources: []
queries: []
reactions: []

State stores allow plugins (Sources, Bootstrap Providers, Reactions) to persist runtime state that survives server restarts. If not configured, an in-memory state store is used (state is lost on restart).
File-based persistent storage using the REDB embedded database.
state_store:
  kind: redb
  path: ./data/state.redb  # Supports ${ENV_VAR:-default}

| Field | Type | Required | Description |
|---|---|---|---|
| `kind` | string | Yes | Must be `redb` |
| `path` | string | Yes | Path to the database file |
Sources connect to data systems and stream changes to queries. Each source type has specific configuration fields.
| Field | Type | Default | Description |
|---|---|---|---|
| `kind` | string | (required) | Source type: `postgres`, `http`, `grpc`, `mock`, `platform` |
| `id` | string | (required) | Unique source identifier |
| `auto_start` | boolean | `true` | Start source automatically on server startup |
| `bootstrap_provider` | object | (none) | Bootstrap provider configuration |
Streams changes from PostgreSQL using logical replication (WAL).
sources:
  - kind: postgres
    id: my-postgres
    auto_start: true
    host: localhost
    port: 5432
    database: mydb
    user: postgres
    password: ${DB_PASSWORD}
    tables: [orders, customers]
    slot_name: drasi_slot
    publication_name: drasi_pub
    ssl_mode: prefer
    table_keys:
      - table: orders
        key_columns: [id]
    bootstrap_provider:
      type: postgres

| Field | Type | Default | Description |
|---|---|---|---|
| `host` | string | `localhost` | Database host |
| `port` | integer | `5432` | Database port |
| `database` | string | (required) | Database name |
| `user` | string | (required) | Database user |
| `password` | string | `""` | Database password |
| `tables` | array | `[]` | Tables to monitor |
| `slot_name` | string | `drasi_slot` | Replication slot name |
| `publication_name` | string | `drasi_publication` | Publication name |
| `ssl_mode` | string | `prefer` | SSL mode: disable, prefer, require |
| `table_keys` | array | `[]` | Primary key definitions for tables |
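Logical replication must also be enabled on the PostgreSQL side. A database-side sketch matching the example above, assuming the publication and replication role are not created automatically by the source (verify this for your version):

```sql
-- In postgresql.conf (requires a server restart):
--   wal_level = logical

-- Create a publication covering the monitored tables
CREATE PUBLICATION drasi_pub FOR TABLE orders, customers;

-- The connecting role needs replication privileges
ALTER ROLE postgres WITH REPLICATION;
```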
Receives events via HTTP endpoints.
sources:
  - kind: http
    id: my-http
    auto_start: true
    host: 0.0.0.0
    port: 9000
    timeout_ms: 10000

| Field | Type | Default | Description |
|---|---|---|---|
| `host` | string | (required) | Listen address |
| `port` | integer | (required) | Listen port |
| `endpoint` | string | (auto) | Custom endpoint path |
| `timeout_ms` | integer | `10000` | Request timeout in milliseconds |
Receives events via gRPC streaming.
sources:
  - kind: grpc
    id: my-grpc
    auto_start: true
    host: 0.0.0.0
    port: 50051
    timeout_ms: 5000

| Field | Type | Default | Description |
|---|---|---|---|
| `host` | string | `0.0.0.0` | Listen address |
| `port` | integer | `50051` | Listen port |
| `timeout_ms` | integer | `5000` | Connection timeout in milliseconds |
Generates test data for development.
sources:
  - kind: mock
    id: test-source
    auto_start: true
    data_type: sensor
    interval_ms: 5000

| Field | Type | Default | Description |
|---|---|---|---|
| `data_type` | string | `generic` | Type of mock data to generate |
| `interval_ms` | integer | `5000` | Data generation interval in milliseconds |
Consumes events from Redis Streams for Drasi Platform integration.
sources:
  - kind: platform
    id: platform-source
    auto_start: true
    redis_url: redis://localhost:6379
    stream_key: my-stream
    consumer_group: drasi-core
    batch_size: 100
    block_ms: 5000

| Field | Type | Default | Description |
|---|---|---|---|
| `redis_url` | string | (required) | Redis connection URL |
| `stream_key` | string | (required) | Redis stream key to consume |
| `consumer_group` | string | `drasi-core` | Consumer group name |
| `consumer_name` | string | (auto) | Consumer name within group |
| `batch_size` | integer | `100` | Events to read per batch |
| `block_ms` | integer | `5000` | Block timeout in milliseconds |
Bootstrap providers deliver initial data to queries before streaming begins. Any source can use any bootstrap provider.
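For example, a source that streams live changes over HTTP can still be seeded from files when a query starts. A sketch combining an `http` source with the `scriptfile` provider described below (file paths are illustrative):

```yaml
sources:
  - kind: http
    id: my-http
    host: 0.0.0.0
    port: 9000
    bootstrap_provider:
      type: scriptfile
      file_paths:
        - /data/initial_nodes.jsonl
```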
Loads initial data from PostgreSQL using the COPY protocol.
bootstrap_provider:
  type: postgres
  # Uses source connection details

Loads initial data from JSONL files.
bootstrap_provider:
  type: scriptfile
  file_paths:
    - /data/initial_nodes.jsonl
    - /data/initial_relations.jsonl

Loads initial data from a remote Drasi Query API.
bootstrap_provider:
  type: platform
  query_api_url: http://remote-drasi:8080
  timeout_seconds: 300

Returns no initial data.
bootstrap_provider:
  type: noop

Continuous queries process data changes and maintain materialized results.
queries:
  - id: active-orders
    query: |
      MATCH (o:Order)
      WHERE o.status = 'active'
      RETURN o.id, o.customer_id, o.total
    queryLanguage: Cypher
    sources:
      - source_id: orders-db
    auto_start: true
    enableBootstrap: true
    bootstrapBufferSize: 10000

| Field | Type | Default | Description |
|---|---|---|---|
| `id` | string | (required) | Unique query identifier |
| `query` | string | (required) | Query string (Cypher or GQL) |
| `queryLanguage` | string | `Cypher` | Query language: Cypher or GQL |
| `sources` | array | (required) | Source subscriptions |
| `auto_start` | boolean | `true` | Start query automatically |
| `enableBootstrap` | boolean | `true` | Process initial data from sources |
| `bootstrapBufferSize` | integer | `10000` | Event buffer size during bootstrap |
| `priority_queue_capacity` | integer | (global) | Override queue capacity for this query |
| `dispatch_buffer_capacity` | integer | (global) | Override buffer capacity for this query |
| `joins` | array | (none) | Synthetic join definitions |
Important Limitation: ORDER BY, TOP, and LIMIT clauses are not supported in continuous queries.
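For example, instead of ranking rows and truncating them with LIMIT, a continuous query typically states the condition of interest directly in the WHERE clause (an illustrative sketch, not taken from the project documentation):

```cypher
// Not supported in continuous queries:
// MATCH (o:Order) RETURN o.id, o.total ORDER BY o.total DESC LIMIT 10

// Supported: encode the threshold explicitly
MATCH (o:Order)
WHERE o.total > 1000
RETURN o.id, o.total
```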
sources:
  - source_id: orders-db
    nodes: [Order, Customer]      # Optional: filter node labels
    relations: [PLACED_BY]        # Optional: filter relation labels
    pipeline: [decoder, mapper]   # Optional: middleware pipeline

Create virtual relationships between nodes from different sources:
queries:
  - id: order-customer-join
    query: |
      MATCH (o:Order)-[:CUSTOMER]->(c:Customer)
      RETURN o.id, c.name
    sources:
      - source_id: orders-db
      - source_id: customers-db
    joins:
      - id: CUSTOMER
        keys:
          - label: Order
            property: customer_id
          - label: Customer
            property: id

Reactions respond to query result changes.
| Field | Type | Default | Description |
|---|---|---|---|
| `kind` | string | (required) | Reaction type |
| `id` | string | (required) | Unique reaction identifier |
| `queries` | array | (required) | Query IDs to subscribe to |
| `auto_start` | boolean | `true` | Start reaction automatically |
Writes query results to console output.
reactions:
  - kind: log
    id: log-output
    queries: [my-query]
    auto_start: true
    default_template:
      added:
        template: "Added: {{json this}}"
      updated:
        template: "Updated: {{json this}}"
      deleted:
        template: "Deleted: {{json this}}"

| Field | Type | Default | Description |
|---|---|---|---|
| `routes` | object | `{}` | Query-specific template configurations |
| `default_template` | object | (none) | Default template for all queries |
Sends query results to HTTP endpoints.
reactions:
  - kind: http
    id: webhook
    queries: [my-query]
    base_url: https://api.example.com
    timeout_ms: 5000
    token: ${API_TOKEN}
    routes:
      my-query:
        added:
          url: /events
          method: POST
          headers:
            Content-Type: application/json

| Field | Type | Default | Description |
|---|---|---|---|
| `base_url` | string | `http://localhost` | Base URL for requests |
| `timeout_ms` | integer | `5000` | Request timeout in milliseconds |
| `token` | string | (none) | Bearer token for authorization |
| `routes` | object | `{}` | Query-specific endpoint configurations |
HTTP reaction with adaptive batching and retry logic.
reactions:
  - kind: http-adaptive
    id: adaptive-webhook
    queries: [my-query]
    base_url: https://api.example.com
    timeout_ms: 5000
    adaptive_min_batch_size: 1
    adaptive_max_batch_size: 1000
    adaptive_window_size: 100
    adaptive_batch_timeout_ms: 1000

| Field | Type | Default | Description |
|---|---|---|---|
| `adaptive_min_batch_size` | integer | `1` | Minimum batch size |
| `adaptive_max_batch_size` | integer | `1000` | Maximum batch size |
| `adaptive_window_size` | integer | `100` | Window size for adaptive calculations |
| `adaptive_batch_timeout_ms` | integer | `1000` | Batch timeout in milliseconds |
Streams query results via gRPC.
reactions:
  - kind: grpc
    id: grpc-stream
    queries: [my-query]
    endpoint: grpc://localhost:50052
    timeout_ms: 5000
    batch_size: 100
    max_retries: 3

| Field | Type | Default | Description |
|---|---|---|---|
| `endpoint` | string | `grpc://localhost:50052` | gRPC endpoint URL |
| `timeout_ms` | integer | `5000` | Connection timeout in milliseconds |
| `batch_size` | integer | `100` | Events per batch |
| `batch_flush_timeout_ms` | integer | `1000` | Batch flush timeout |
| `max_retries` | integer | `3` | Maximum retry attempts |
| `connection_retry_attempts` | integer | `5` | Connection retry attempts |
| `initial_connection_timeout_ms` | integer | `10000` | Initial connection timeout |
gRPC reaction with adaptive batching.
reactions:
  - kind: grpc-adaptive
    id: adaptive-grpc
    queries: [my-query]
    endpoint: grpc://localhost:50052
    adaptive_min_batch_size: 1
    adaptive_max_batch_size: 1000

Streams query results via Server-Sent Events.
reactions:
  - kind: sse
    id: sse-stream
    queries: [my-query]
    host: 0.0.0.0
    port: 8081
    sse_path: /events
    heartbeat_interval_ms: 30000

| Field | Type | Default | Description |
|---|---|---|---|
| `host` | string | `0.0.0.0` | Listen address |
| `port` | integer | `8080` | Listen port |
| `sse_path` | string | `/events` | SSE endpoint path |
| `heartbeat_interval_ms` | integer | `30000` | Heartbeat interval in milliseconds |
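Any SSE client can consume the stream. For example, with the configuration above (port 8081, path /events):

```bash
# -N disables curl's output buffering so events are printed as they arrive
curl -N http://localhost:8081/events
```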
Publishes query results to Redis Streams in CloudEvent format.
reactions:
  - kind: platform
    id: redis-publisher
    queries: [my-query]
    redis_url: redis://localhost:6379
    emit_control_events: false
    batch_enabled: true
    batch_max_size: 100
    batch_max_wait_ms: 100

| Field | Type | Default | Description |
|---|---|---|---|
| `redis_url` | string | (required) | Redis connection URL |
| `pubsub_name` | string | (auto) | Pub/sub channel name |
| `source_name` | string | (auto) | Source identifier in events |
| `max_stream_length` | integer | (unlimited) | Maximum stream length |
| `emit_control_events` | boolean | `false` | Emit control events |
| `batch_enabled` | boolean | `false` | Enable batching |
| `batch_max_size` | integer | `100` | Maximum batch size |
| `batch_max_wait_ms` | integer | `100` | Maximum wait time for batch |
Collects performance metrics for queries.
reactions:
  - kind: profiler
    id: query-profiler
    queries: [my-query]
    window_size: 100
    report_interval_secs: 60

| Field | Type | Default | Description |
|---|---|---|---|
| `window_size` | integer | `100` | Metrics window size |
| `report_interval_secs` | integer | `60` | Report interval in seconds |
For advanced use cases requiring isolated processing environments, configure multiple DrasiLib instances:
host: 0.0.0.0
port: 8080
log_level: info

instances:
  - id: analytics
    persist_index: true
    state_store:
      kind: redb
      path: ./data/analytics-state.redb
    sources:
      - kind: postgres
        id: analytics-db
        # ... source config
    queries:
      - id: high-value-orders
        query: "MATCH (o:Order) WHERE o.total > 1000 RETURN o"
        sources:
          - source_id: analytics-db
    reactions:
      - kind: log
        id: analytics-log
        queries: [high-value-orders]

  - id: monitoring
    persist_index: false
    sources:
      - kind: http
        id: metrics-api
        host: 0.0.0.0
        port: 9001
    queries:
      - id: alert-threshold
        query: "MATCH (m:Metric) WHERE m.value > m.threshold RETURN m"
        sources:
          - source_id: metrics-api
    reactions:
      - kind: sse
        id: alert-stream
        queries: [alert-threshold]
        port: 8082

Each instance has:
- Its own isolated namespace for sources, queries, and reactions
- Optional separate state store and index persistence settings
- API access via `/api/v1/instances/{instanceId}/...`
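With the configuration above, each instance's resources are addressed through the instance-scoped routes documented in the REST API section below. For example:

```bash
# List configured instances
curl http://localhost:8080/api/v1/instances

# List queries and reactions scoped to the analytics instance
curl http://localhost:8080/api/v1/instances/analytics/queries
curl http://localhost:8080/api/v1/instances/analytics/reactions
```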
All configuration values support environment variable substitution:
host: ${SERVER_HOST:-0.0.0.0}
port: ${SERVER_PORT:-8080}
sources:
  - kind: postgres
    id: production-db
    host: ${DB_HOST}
    password: ${DB_PASSWORD}  # Required - fails if not set

Syntax:
- `${VAR}` - Required variable, fails if not set
- `${VAR:-default}` - Optional variable with default value
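For example, the required values in the configuration above can be supplied at launch time (the values shown are illustrative):

```bash
export DB_HOST=db.internal.example.com
export DB_PASSWORD='s3cr3t'
drasi-server --config config/server.yaml

# Or check the configuration first with the variables expanded
drasi-server validate --config config/server.yaml --show-resolved
```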
The server exposes a REST API at http://localhost:8080 (default).
- `GET /health` - Health check (unversioned)
- `GET /api/versions` - List available API versions
- `GET /api/v1/docs/` - Interactive Swagger UI
- `GET /api/v1/openapi.json` - OpenAPI specification
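The API can be exercised with curl; the resource endpoints listed below follow the same pattern for sources, queries, and reactions. A sketch using the query IDs from the quick start configuration:

```bash
# Discover available API versions
curl http://localhost:8080/api/versions

# List queries and fetch the current result set of one of them
curl http://localhost:8080/api/v1/queries
curl http://localhost:8080/api/v1/queries/my-query/results

# Stop and restart a query
curl -X POST http://localhost:8080/api/v1/queries/my-query/stop
curl -X POST http://localhost:8080/api/v1/queries/my-query/start
```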
GET /api/v1/instances                # List all instances

GET /api/v1/sources                  # List sources (first instance)
POST /api/v1/sources # Create source
GET /api/v1/sources/{id} # Get source details
DELETE /api/v1/sources/{id} # Delete source
POST /api/v1/sources/{id}/start # Start source
POST /api/v1/sources/{id}/stop # Stop source
# Instance-specific routes
GET /api/v1/instances/{instanceId}/sources

GET /api/v1/queries                  # List queries
POST /api/v1/queries # Create query
GET /api/v1/queries/{id} # Get query details
DELETE /api/v1/queries/{id} # Delete query
POST /api/v1/queries/{id}/start # Start query
POST /api/v1/queries/{id}/stop # Stop query
GET /api/v1/queries/{id}/results # Get current results
# Instance-specific routes
GET /api/v1/instances/{instanceId}/queries

GET /api/v1/reactions                # List reactions
POST /api/v1/reactions # Create reaction
GET /api/v1/reactions/{id} # Get reaction details
DELETE /api/v1/reactions/{id} # Delete reaction
POST /api/v1/reactions/{id}/start # Start reaction
POST /api/v1/reactions/{id}/stop # Stop reaction
# Instance-specific routes
GET /api/v1/instances/{instanceId}/reactions

{
  "success": true,
  "data": { ... },
  "error": null
}

queries:
  - id: low-stock-alert
    query: |
      MATCH (p:Product)
      WHERE p.quantity <= p.reorder_point
      RETURN p.sku, p.name, p.quantity, p.reorder_point
    sources:
      - source_id: inventory-db
reactions:
  - kind: http
    id: reorder-webhook
    queries: [low-stock-alert]
    base_url: https://purchasing.example.com
    routes:
      low-stock-alert:
        added:
          url: /reorder
          method: POST

queries:
  - id: suspicious-transactions
    query: |
      MATCH (t:Transaction)
      WHERE t.amount > 10000
        AND t.country <> t.account_country
      RETURN t.id, t.account_id, t.amount, t.country
    sources:
      - source_id: transactions-db
reactions:
  - kind: sse
    id: fraud-alerts
    queries: [suspicious-transactions]
    port: 8081

# Initialize all submodules recursively
git submodule update --init --recursive

# Use a different port
cargo run -- --port 9090

- Check source status: `GET /api/v1/sources/{id}`
- Verify query subscription: `GET /api/v1/queries/{id}`
- Enable debug logging: `RUST_LOG=debug cargo run`
RUST_LOG=debug cargo run
RUST_LOG=drasi_server=trace cargo run

# Clone with submodules
git clone --recurse-submodules https://github.com/drasi-project/drasi-server.git
cd drasi-server
# Build
cargo build --release
# Run tests
cargo test
# Format and lint
cargo fmt
cargo clippy

Apache License 2.0. See LICENSE for details.
- DrasiLib - Core event processing engine
- Drasi - Main Drasi project
- Drasi Documentation - Complete documentation
- Issues: GitHub Issues
- Discussions: GitHub Discussions