NSerf is a full, from-scratch port of HashiCorp Serf to modern C#. The project mirrors Serf's decentralized cluster membership, failure detection, and event dissemination model while embracing idiomatic .NET patterns for concurrency, async I/O, and tooling. The codebase targets .NET 8+ and is currently in beta while the team polishes APIs and round-trips real-world workloads.
- Key Differences
- Repository Layout
- Getting Started
- ASP.NET Core Integration
- Distributed Chat Example
- YARP Service Discovery Example
- Docker Deployment
- Testing
- Project Status
- License
While the behaviour and surface area match Serf's reference implementation, a few platform-specific choices differ:
- Serialization relies on the high-performance MessagePack-CSharp stack instead of Go's native MessagePack bindings, keeping message layouts identical to the original protocol.
- Compression uses the built-in .NET `System.IO.Compression.GZipStream` for gossip payload compression, replacing the Go LZW (Lempel-Ziv-Welch) adapter while preserving wire compatibility.
- Async orchestration embraces task-based patterns and the C# transaction-style locking helpers introduced during the port, matching Go's channel semantics without blocking threads.
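To illustrate the mechanism (plain .NET APIs, not NSerf's internal message framing), a gossip-style payload can be round-tripped through `GZipStream` like so:

```csharp
using System.IO;
using System.IO.Compression;

static class GzipDemo
{
    // Compress a payload the way a gossip message body might be wrapped.
    public static byte[] Compress(byte[] data)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }

    // Inflate a compressed payload back to its original bytes.
    public static byte[] Decompress(byte[] compressed)
    {
        using var input = new MemoryStream(compressed);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var output = new MemoryStream();
        gzip.CopyTo(output);
        return output.ToArray();
    }
}
```

Because both ends agree on the framing, the compressed body can be swapped into a wire message without the peer caring which runtime produced it.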
- Lighthouse - Join the cluster without hardcoding a node address
- Service Discovery - Basic service discovery ready to use from .NET applications, or from other services via the CLI
```
NSerf/
├─ NSerf.sln              # Solution entry point
├─ NSerf/                 # Core library (agent, memberlist, Serf runtime)
│  ├─ Agent/              # CLI agent runtime, RPC server, script handlers
│  ├─ Client/             # RPC client, request/response contracts
│  ├─ Extensions/         # ASP.NET Core DI integration
│  ├─ Memberlist/         # Gossip, failure detection, transport stack
│  ├─ Serf/               # Cluster state machine, event managers, helpers
│  └─ ...
├─ NSerf.CLI/             # dotnet CLI facade mirroring `serf` command
├─ NSerf.CLI.Tests/       # End-to-end and command-level test harnesses
├─ NSerf.ChatExample/     # Distributed chat demo with SignalR
├─ NSerf.YarpExample/     # YARP reverse proxy with dynamic service discovery
├─ NSerf.BackendService/  # Sample backend service for YARP example
├─ NSerfTests/            # Comprehensive unit, integration, and verification tests
└─ documentation *.md     # Test plans, remediation reports, design notes
```
- Agent – Implements the long-running daemon (`serf agent`) including configuration loading, RPC hosting, script invocation, and signal handling.
- Extensions – ASP.NET Core dependency injection support for seamless web application integration.
- Memberlist – Full port of HashiCorp's SWIM-based memberlist, including gossip broadcasting, indirect pinging, and encryption support.
- Serf – Cluster coordination, state machine transitions, Lamport clocks, and query/event processing all live here.
- Client – Typed RPC requests/responses and ergonomic helpers for building management tooling.
- CLI – A drop-in `serf` CLI replacement built on `System.CommandLine`, sharing the same RPC surface and defaults as the Go binary.
- ChatExample – Real-world distributed chat application demonstrating NSerf capabilities with SignalR and Docker.
- YarpExample – Production-ready service discovery and load balancing with Microsoft YARP reverse proxy.
- BackendService – Sample microservice demonstrating auto-registration and cluster participation.
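The Serf component above relies on Lamport clocks to order user events and queries across nodes. A minimal sketch of the increment/witness rules — a hypothetical `LamportClock`, not NSerf's actual type — looks like this:

```csharp
using System.Threading;

// Minimal Lamport clock: a monotonically increasing logical timestamp.
public sealed class LamportClock
{
    private long _counter;

    // Current logical time.
    public ulong Time => (ulong)Interlocked.Read(ref _counter);

    // Advance for a locally generated event; returns the new time.
    public ulong Increment() => (ulong)Interlocked.Increment(ref _counter);

    // Witness a timestamp seen on the wire: fast-forward past it if it is ahead.
    public void Witness(ulong seen)
    {
        long target = (long)seen + 1;
        long current;
        while ((current = Interlocked.Read(ref _counter)) < target)
        {
            if (Interlocked.CompareExchange(ref _counter, target, current) == current)
                break;
        }
    }
}
```

Every outbound event carries `Increment()`'s result, and every inbound message is passed through `Witness`, so all nodes converge on a consistent event ordering without synchronized wall clocks.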
- .NET SDK 8.0 (or newer)
- Docker Desktop (optional, for containerized deployment)
Add NSerf to your project using the .NET CLI:

```bash
dotnet add package NSerf
```

Or via Package Manager Console in Visual Studio:

```powershell
Install-Package NSerf
```

Or add directly to your .csproj file:

```xml
<PackageReference Include="NSerf" Version="0.1.6-beta" />
```

Alternatively, clone and build from source:

```bash
git clone https://github.com/BoolHak/NSerfProject.git
cd NSerfProject
dotnet build NSerf.sln
```

1. Restore and build

   ```bash
   dotnet restore
   dotnet build NSerf.sln
   ```

2. Run the agent locally

   ```bash
   dotnet run --project NSerf.CLI --agent
   ```

3. Invoke commands against a running agent

   ```bash
   dotnet run --project NSerf.CLI --members
   dotnet run --project NSerf.CLI --query "ping" --payload "hello"
   ```
NSerf provides first-class ASP.NET Core integration through the NSerf.Extensions package, allowing seamless addition of cluster membership to web applications.
Add Serf to your application with default settings:

```csharp
using NSerf.Extensions;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddNSerf();

var app = builder.Build();
app.Run();
```

Or configure the agent explicitly:

```csharp
builder.Services.AddNSerf(options =>
{
    options.NodeName = "web-server-1";
    options.BindAddr = "0.0.0.0:7946";
    options.Tags["role"] = "web";
    options.Tags["datacenter"] = "us-east-1";
    options.StartJoin = new[] { "10.0.1.10:7946", "10.0.1.11:7946" };
    options.SnapshotPath = "/var/serf/snapshot";
    options.RejoinAfterLeave = true;
});
```

Or bind the options from configuration. `appsettings.json`:
```json
{
  "Serf": {
    "NodeName": "web-server-1",
    "BindAddr": "0.0.0.0:7946",
    "Tags": {
      "role": "web",
      "datacenter": "us-east-1"
    },
    "StartJoin": ["10.0.1.10:7946", "10.0.1.11:7946"],
    "RetryJoin": ["10.0.1.10:7946", "10.0.1.11:7946"],
    "SnapshotPath": "/var/serf/snapshot",
    "RejoinAfterLeave": true
  }
}
```

`Program.cs`:

```csharp
builder.Services.AddNSerf(builder.Configuration, "Serf");
```

Inject `SerfAgent` or `Serf` into your services:
```csharp
using NSerf.Agent;
using NSerf.Serf;

public class ClusterService
{
    private readonly SerfAgent _agent;
    private readonly Serf _serf;

    public ClusterService(SerfAgent agent, Serf serf)
    {
        _agent = agent;
        _serf = serf;
    }

    public int GetClusterSize()
    {
        return _serf.Members().Length;
    }

    public async Task BroadcastEventAsync(string eventName, byte[] payload)
    {
        await _serf.UserEventAsync(eventName, payload, coalesce: true);
    }
}
```

Register custom event handlers:
```csharp
using NSerf.Agent;
using NSerf.Serf.Events;

public class CustomEventHandler : IEventHandler
{
    private readonly ILogger<CustomEventHandler> _logger;

    public CustomEventHandler(ILogger<CustomEventHandler> logger)
    {
        _logger = logger;
    }

    public void HandleEvent(Event evt)
    {
        switch (evt)
        {
            case MemberEvent memberEvent:
                foreach (var member in memberEvent.Members)
                {
                    _logger.LogInformation("Member {Name} is now {Status}",
                        member.Name, memberEvent.Type);
                }
                break;
            case UserEvent userEvent:
                _logger.LogInformation("User event {Name}", userEvent.Name);
                break;
        }
    }
}

// Register handler
builder.Services.AddSingleton<CustomEventHandler>();

// Attach in startup
public class EventHandlerRegistration : IHostedService
{
    private readonly SerfAgent _agent;
    private readonly CustomEventHandler _handler;

    public EventHandlerRegistration(SerfAgent agent, CustomEventHandler handler)
    {
        _agent = agent;
        _handler = handler;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _agent.RegisterEventHandler(_handler);
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _agent.DeregisterEventHandler(_handler);
        return Task.CompletedTask;
    }
}
```

| Property | Type | Default | Description |
|---|---|---|---|
| `NodeName` | `string` | Machine name | Unique node identifier |
| `BindAddr` | `string` | `"0.0.0.0:7946"` | Bind address (IP:Port) |
| `AdvertiseAddr` | `string?` | `null` | Advertise address for NAT |
| `EncryptKey` | `string?` | `null` | Base64 encryption key (32 bytes) |
| `RPCAddr` | `string?` | `null` | RPC server address |
| `Tags` | `Dictionary<string, string>` | Empty | Node metadata tags |
| `Profile` | `string` | `"lan"` | Network profile (lan/wan/local) |
| `SnapshotPath` | `string?` | `null` | Snapshot file path |
| `RejoinAfterLeave` | `bool` | `false` | Allow rejoin after leave |
| `StartJoin` | `string[]` | Empty | Nodes to join on startup |
| `RetryJoin` | `string[]` | Empty | Nodes to retry joining |
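`EncryptKey` expects a Base64 string that decodes to exactly 32 bytes, the same shape `serf keygen` produces. One way to generate a valid key with plain .NET (the helper name here is illustrative):

```csharp
using System;
using System.Security.Cryptography;

static class KeygenDemo
{
    // Produce a Base64 string that decodes to exactly 32 random bytes,
    // suitable for a Serf-style gossip encryption key.
    public static string NewKey() =>
        Convert.ToBase64String(RandomNumberGenerator.GetBytes(32));
}
```

Generate one key, distribute the same value to every node's `EncryptKey`, and keep it out of source control.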
The NSerf.ChatExample project demonstrates real-world NSerf usage with a distributed chat application featuring SignalR for real-time communication and Serf for cluster coordination.
- Cluster Membership: Nodes automatically discover each other
- Distributed Messaging: Messages broadcast across all nodes via Serf user events
- Automatic Failover: Nodes continue operating if others go down
- Real-time UI: SignalR provides instant updates
- Docker Support: Run a 3-node cluster with one command
```
┌─────────────────┐         ┌─────────────────┐         ┌─────────────────┐
│   Browser 1     │         │   Browser 2     │         │   Browser 3     │
│  (WebSocket)    │         │  (WebSocket)    │         │  (WebSocket)    │
└────────┬────────┘         └────────┬────────┘         └────────┬────────┘
         │ SignalR                   │ SignalR                   │ SignalR
┌────────▼────────┐         ┌────────▼────────┐         ┌────────▼────────┐
│  Chat Node 1    │◄───────►│  Chat Node 2    │◄───────►│  Chat Node 3    │
│   (ASP.NET)     │  Serf   │   (ASP.NET)     │  Serf   │   (ASP.NET)     │
└─────────────────┘         └─────────────────┘         └─────────────────┘
         └───────────────────────────┴───────────────────────────┘
                        Serf Cluster Membership
```
Run a 3-node cluster:

```bash
cd NSerf/NSerf.ChatExample
docker-compose up --build
```

Access the chat:

- Node 1: http://localhost:5000
- Node 2: http://localhost:5001
- Node 3: http://localhost:5002

Stop the cluster:

```bash
docker-compose down
```

Terminal 1 - Bootstrap Node:

```bash
cd NSerf.ChatExample
dotnet run
```

Terminal 2 - Node 2:

```bash
dotnet run chat-node-2 5001 7947 "127.0.0.1:7946"
```

Terminal 3 - Node 3:

```bash
dotnet run chat-node-3 5002 7948 "127.0.0.1:7946"
```

Then open http://localhost:5000, http://localhost:5001, and http://localhost:5002 in different browsers!
Each node exposes:

- `GET /` - Chat UI
- `GET /health` - Health check
- `GET /members` - Cluster membership (JSON)
- `/chatHub` - SignalR WebSocket endpoint
Test 1: Message Broadcasting
- Open 3 browser tabs (one per node)
- Send a message from any tab
- Verify it appears in all tabs
Test 2: Node Failure

```bash
# Stop node 2
docker-compose stop chat-node-2

# Messages still work between nodes 1 and 3

# Restart node 2
docker-compose start chat-node-2

# Node 2 rejoins automatically
```

Test 3: Cluster Status

```bash
curl http://localhost:5000/members | jq
```

The NSerf.YarpExample project demonstrates production-ready service discovery and load balancing by integrating NSerf with Microsoft YARP (Yet Another Reverse Proxy). This showcases the real production value of NSerf for microservices architectures.
This example solves a critical production problem: How do you dynamically discover and load balance backend services without external dependencies like Consul or Eureka?
NSerf + YARP provides:
- Zero-configuration service discovery - No hardcoded endpoints
- Automatic load balancing - Round-robin across healthy nodes
- Dynamic scaling - Add/remove services without restarting the proxy
- Health monitoring - Automatic removal of failed backends
- Encrypted cluster communication - AES-256-GCM secure gossip
- Persistent snapshots - Cluster state survives restarts
- No external dependencies - Everything built into your .NET application
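To illustrate the AES-256-GCM building block behind encrypted gossip (plain .NET crypto APIs, not NSerf's actual wire framing — the `nonce || ciphertext || tag` layout here is an assumption for the sketch):

```csharp
using System;
using System.Security.Cryptography;

static class GossipCrypto
{
    // Seal a message with AES-256-GCM; output layout: nonce || ciphertext || tag.
    public static byte[] Seal(byte[] key, byte[] plaintext)
    {
        byte[] nonce = RandomNumberGenerator.GetBytes(AesGcm.NonceByteSizes.MaxSize); // 12 bytes
        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[AesGcm.TagByteSizes.MaxSize];                           // 16 bytes
        using var aes = new AesGcm(key, tag.Length);
        aes.Encrypt(nonce, plaintext, ciphertext, tag);

        byte[] sealedMsg = new byte[nonce.Length + ciphertext.Length + tag.Length];
        nonce.CopyTo(sealedMsg, 0);
        ciphertext.CopyTo(sealedMsg, nonce.Length);
        tag.CopyTo(sealedMsg, nonce.Length + ciphertext.Length);
        return sealedMsg;
    }

    // Open a sealed message; throws if the key is wrong or the bytes were tampered with.
    public static byte[] Open(byte[] key, byte[] sealedMsg)
    {
        int nonceLen = AesGcm.NonceByteSizes.MaxSize;
        int tagLen = AesGcm.TagByteSizes.MaxSize;
        var nonce = sealedMsg.AsSpan(0, nonceLen);
        var ciphertext = sealedMsg.AsSpan(nonceLen, sealedMsg.Length - nonceLen - tagLen);
        var tag = sealedMsg.AsSpan(sealedMsg.Length - tagLen);
        byte[] plaintext = new byte[ciphertext.Length];
        using var aes = new AesGcm(key, tagLen);
        aes.Decrypt(nonce, ciphertext, tag, plaintext);
        return plaintext;
    }
}
```

GCM's authentication tag means a node with the wrong key (or a tampered packet) fails loudly at `Decrypt` rather than processing garbage.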
```
Client Request → YARP Proxy → [NSerf Cluster Discovery] → Backend-1
                                                        → Backend-2
                                                        → Backend-3
```
The YARP proxy:

- Joins the Serf cluster with role `proxy`
- Discovers backend services tagged with `service=backend`
- Dynamically updates YARP routing configuration
- Load balances requests across healthy backends
- Monitors backend health and removes failed nodes
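The load-balancing step above can be sketched with a small round-robin picker. This is an illustrative stand-in with hypothetical names — YARP ships its own load-balancing policies, and the example project wires discovery through YARP's configuration instead:

```csharp
using System.Collections.Generic;

// Thread-safe round-robin over a backend set that may change
// as Serf members join and leave the cluster.
public sealed class RoundRobinPicker
{
    private readonly object _gate = new();
    private List<string> _backends = new();
    private int _next = -1;

    // Replace the backend set (e.g. on a Serf member-join/member-failed event).
    public void Update(IEnumerable<string> backends)
    {
        lock (_gate)
        {
            _backends = new List<string>(backends);
            _next = -1; // restart rotation over the new set
        }
    }

    // Pick the next backend in rotation, or null if none are known.
    public string? Pick()
    {
        lock (_gate)
        {
            if (_backends.Count == 0) return null;
            _next = (_next + 1) % _backends.Count;
            return _backends[_next];
        }
    }
}
```

Calling `Update` from the cluster's membership events is what makes the rotation "zero-configuration": the picker never needs a static endpoint list.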
Run a complete setup with 1 proxy and 3 backend services:

```bash
cd NSerf/NSerf.YarpExample
docker-compose up --build
```

This starts:

- YARP Proxy on http://localhost:8080
- Backend-1 on http://localhost:5001
- Backend-2 on http://localhost:5002
- Backend-3 on http://localhost:5003
1. Verify the proxy discovered all backends:

```bash
curl http://localhost:8080/proxy/members | jq
```

2. Test load balancing (requests round-robin across backends):

```bash
for i in {1..10}; do
  curl http://localhost:8080/api/info | jq '.instance'
done
```

Output shows requests distributed: backend-1, backend-2, backend-3, backend-1, ...

3. Test dynamic service addition:

```bash
# Add a new backend - it's automatically discovered and added to load balancing!
docker-compose up -d --scale backend-3=2

# Wait 5 seconds for discovery, then:
curl http://localhost:8080/proxy/members | jq '.backends'
# Shows 4 backends now!
```

4. Test automatic failover:

```bash
# Stop a backend
docker-compose stop backend-2

# Requests automatically route around the failed node
for i in {1..10}; do
  curl http://localhost:8080/api/info | jq '.instance'
done
# Only shows backend-1 and backend-3

# Restart it - automatically rejoins
docker-compose start backend-2
```

YARP Proxy (port 8080):

- `/{any-path}` - Proxied to discovered backends
- `GET /proxy/health` - Proxy health check
- `GET /proxy/members` - View discovered services

Backend Services (ports 5001-5003):

- `GET /health` - Health check endpoint
- `GET /api/info` - Backend instance information
- `GET /api/work/{id}` - Simulates processing work
- `GET /api/cluster` - View cluster from backend perspective
1. Microservices API Gateway
- Services auto-register on startup
- Gateway discovers and routes automatically
- No manual configuration management
2. Multi-Region Load Balancing

```csharp
options.Tags["service"] = "api";
options.Tags["region"] = "us-east-1";
options.Tags["zone"] = "1a";
```

3. Canary Deployments

```csharp
options.Tags["version"] = "2.0";
options.Tags["canary"] = "true";
// Route 90% to stable, 10% to canary
```

4. Service Mesh
- Combined with mTLS
- Circuit breakers
- Advanced health checks
- Traffic shaping
This example demonstrates that NSerf provides enterprise-grade service discovery for .NET microservices:
- No Consul/Eureka needed - Gossip protocol handles discovery
- No Kubernetes required - Works anywhere .NET runs
- Production-ready - Uses patterns proven by HashiCorp and Netflix
- Simple integration - < 200 lines of integration code
- Zero external dependencies - Everything in your .NET stack
For detailed documentation, see NSerf.YarpExample/README.md.
| Service | Container Port | Host Port | Purpose |
|---|---|---|---|
| chat-node-1 | 5000 | 5000 | HTTP/WebSocket |
| chat-node-1 | 7946 | 7946 | Serf Gossip |
| chat-node-2 | 5000 | 5001 | HTTP/WebSocket |
| chat-node-2 | 7946 | 7947 | Serf Gossip |
| chat-node-3 | 5000 | 5002 | HTTP/WebSocket |
| chat-node-3 | 7946 | 7948 | Serf Gossip |
View Logs:

```bash
docker-compose logs -f
docker-compose logs -f chat-node-1
docker-compose logs --tail=50 -f
```

Check Health:

```bash
curl http://localhost:5000/health
curl http://localhost:5001/members | jq
```

Containers Won't Start:

```bash
docker-compose build --no-cache
docker-compose up --build --force-recreate
```

Nodes Can't Join:

```bash
docker network inspect nserfchatexample_serf-chat-network
docker exec chat-node-2 ping chat-node-1
```

Clear Everything:

```bash
docker-compose down -v --rmi all
```

For production:
- Use Kubernetes or Docker Swarm
- Enable encryption (`options.EncryptKey`)
- Add persistent volumes for snapshots
- Configure health checks and monitoring
- Use reverse proxy (Nginx/Traefik)
- Enable HTTPS with proper certificates
- Set resource limits
The solution ships with an extensive test suite that mirrors HashiCorp's verification matrix:

```bash
dotnet test NSerf.sln
```

Test coverage includes:
- State machine transitions
- Gossip protocols
- RPC security
- Script execution
- CLI orchestration
- Agent lifecycle
- Event handling
NSerf is feature-complete relative to Serf 1.6.x but remains in beta while the team onboards additional users, tightens compatibility, and stabilizes the public API surface. Expect minor breaking changes as interoperability edge cases are addressed.
Current Status:
- Core Serf protocol implementation
- SWIM-based memberlist gossip
- RPC client/server with authentication
- Join the cluster without hardcoding a node address
- CLI tool (drop-in replacement)
- ASP.NET Core integration
- Event handlers and queries
- Docker deployment examples
- 1400+ comprehensive tests
All source files retain the original MPL-2.0 license notices from HashiCorp Serf. The port is distributed under the same MPL-2.0 terms.