
Commit acf22a8

jadeproheshan, no-regret666, xmkdgdz, and et21ff
committed
feat(middleware): implement Middleware Architecture
This Pull Request introduces a pluggable middleware architecture to the `trpc-mcp-go` server, inspired by the robust Filter pattern in `tRPC-Go`. The architecture provides a powerful mechanism for handling cross-cutting concerns in a modular and reusable way. In addition to the core engine, this PR delivers three fundamental middleware implementations: `Recovery`, `Logging`, and `Metrics`, laying a solid foundation for future extensions. Notably, the integration effort also drove significant improvements to the core engine: we identified and fixed several key underlying bugs and established a clear, sustainable development model for future contributors to follow.

---

## Core Engine

- **Pluggable Architecture**: Implemented a `MiddlewareChain` that wraps the final request handler in an "onion model", allowing requests to flow sequentially through a series of middlewares before reaching the business logic (a minimal illustrative sketch follows this list).
- **Seamless Integration**: The engine is integrated directly into `mcpHandler.handleRequest`, ensuring that all incoming requests automatically pass through the middleware chain.
- **Robustness**: Added a defensive nil pointer check in `handler.go` to prevent panics when the `server` is not initialized (e.g., in `SSEServer` tests), ensuring backward compatibility.
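To make the onion model concrete, here is a minimal, generic sketch of how such a chain can be composed in Go. The `Handler`, `Middleware`, and `Chain` names are illustrative stand-ins only, not the actual `trpc-mcp-go` API.

```go
package main

import (
    "context"
    "fmt"
)

// Handler is a stand-in for the final request handler.
type Handler func(ctx context.Context, req string) (string, error)

// Middleware wraps a Handler and returns a new Handler.
type Middleware func(next Handler) Handler

// Chain composes middlewares so that the first one registered becomes the outermost layer.
func Chain(final Handler, mws ...Middleware) Handler {
    h := final
    for i := len(mws) - 1; i >= 0; i-- {
        h = mws[i](h)
    }
    return h
}

func main() {
    // The innermost "business logic" handler.
    final := func(ctx context.Context, req string) (string, error) {
        return "handled: " + req, nil
    }

    // A middleware layer that runs code before and after the next layer.
    logging := Middleware(func(next Handler) Handler {
        return func(ctx context.Context, req string) (string, error) {
            fmt.Println("-> request:", req)
            resp, err := next(ctx, req)
            fmt.Println("<- response:", resp)
            return resp, err
        }
    })

    h := Chain(final, logging)
    resp, _ := h(context.Background(), "ping")
    fmt.Println(resp)
}
```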
## Middlewares

### Recovery

- **Panic Safety**: Catches panics occurring anywhere in the request lifecycle (including other middlewares and the final handler).
- **Standard-Compliant Error Response**: Returns a clean, JSON-RPC 2.0 compliant `Internal Server Error` response, avoiding the exposure of stack traces or other internal details to the client.
- **Highly Configurable**: Supports advanced options such as custom loggers, stack trace control, and panic filters.
- **Usage**: The `Recovery` middleware should be registered as the first (outermost) middleware so that it can catch all subsequent panics.

```go
// Default usage for basic panic recovery
s.Use(middlewares.Recovery())

// Advanced usage, e.g., with a custom error response
s.Use(middlewares.RecoveryWithOptions(
    middlewares.WithCustomErrorResponse(func(ctx context.Context, req *mcp.JSONRPCRequest, panicErr interface{}) mcp.JSONRPCMessage {
        // Return a more user-friendly, custom error message
        return mcp.NewJSONRPCErrorResponse(
            req.ID,
            mcp.ErrCodeInternal,
            "Service is temporarily unavailable. Please try again later.",
            nil,
        )
    }),
))
```

### Logging

- **Structured Logging**: Provides structured, leveled logging for the entire request lifecycle (request start, request end, request failure).
- **Adheres to Project Standards**: Refactored to fully adopt the project's standard `mcp.Logger` interface, making it a "good citizen" within the ecosystem.
- **Rich Context**: Supports configurable logging of request/response payloads and can extract custom fields from the `context` to include in logs.
- **Usage**: The middleware offers multiple configuration options for flexible logging.

```go
// Use the project's standard logger
logger := mcp.GetDefaultLogger()

// Register the middleware, enabling all request logs and payload recording
s.Use(middlewares.NewLoggingMiddleware(logger,
    // The default is to only log errors; this option logs all requests
    middlewares.WithShouldLog(func(level logging.Level, duration time.Duration, err error) bool {
        return true
    }),
    // Log the detailed content of requests and responses
    middlewares.WithPayloadLogging(true),
))
```

### Metrics

- **Based on OpenTelemetry**: Built on the OpenTelemetry standard, ensuring broad compatibility with modern observability platforms (such as Prometheus and Jaeger).
- **Core Metrics**: Provides out-of-the-box monitoring of core metrics:
  - `mcp.server.requests.count`: Total number of requests
  - `mcp.server.errors.count`: Number of failed requests
  - `mcp.server.request.latency`: Request latency histogram
  - `mcp.server.requests.in_flight`: Number of in-flight requests
- **Usage**: The `metrics` middleware depends on an OpenTelemetry Collector and Prometheus. A `docker-compose.yaml` file in the `examples/middlewares/metrics/` directory starts all dependencies with a single command.

**Verification Steps**:
1. `cd examples/middlewares/metrics/`
2. `docker-compose up -d`
3. Run the `metrics_integration_main.go` example and send some requests.
4. Open `http://localhost:9090` to query the core metrics above in the Prometheus UI.

---

**Status**: All changes have been rebased onto the latest `main` branch (`dd0bd82e69c7ae947a66059264c20940f46a4eb5`) in the `feat/middleware-final` branch. All tests pass, so this PR can be merged without conflicts.

## Core Engine Improvements and Fixes

We not only added middleware functionality but also made necessary improvements and fixes to the core engine during the integration process.

### Standardized middleware testing (`mcptest`)

- **Symptom**: Early in development, we lacked a standard way to test middlewares in isolation.
- **Reason**: Testing a middleware requires simulating the `request`, `session`, and `next` functions, which is tedious to write by hand.
- **Solution**: We created a new `mcptest` public testing package providing the `RunMiddlewareTest` and `CheckMiddlewareFunc` helpers. This greatly simplified middleware unit testing and enabled parallel development within the team.

### Nil pointer panic after middleware integration

- **Symptom**: E2E tests exposed a `nil pointer dereference` panic after middleware integration.
- **Reason**: `SSEServer` did not provide a `server` instance when creating `mcpHandler`, so accessing `h.server.middlewares` crashed the program.
- **Solution**:
  - Implemented the `MiddlewareChain` core engine in `middleware.go`.
  - Integrated the engine into the `handleRequest` method of `handler.go` and added a nil pointer check for `handler.server`.
- **Impact**: The panic is resolved with minimal intrusion and without modifying any test files, preserving backward compatibility. The scope of the change is limited to the `handleRequest` function.

### JSON-RPC errors wrapped in the `result` field

- **Symptom**: When a middleware returned a `*JSONRPCError`, the HTTP handler would incorrectly wrap it in the `result` field, violating the JSON-RPC 2.0 specification.
- **Reason**: The `handlePostRequest` function incorrectly assumed that everything returned by `mcpHandler` was a successful result.
- **Solution**: We introduced a type check in `handlePostRequest`: if the response is already a `*JSONRPCError`, it is sent to the client directly (see the sketch after this list).
- **Impact**: The change is limited to the `handlePostRequest` function and ensures correct error handling.
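To illustrate the fix, the sketch below shows the general shape of such a type check using local stand-in types. It is a simplified illustration under assumed type definitions, not the actual `handlePostRequest` implementation from `trpc-mcp-go`.

```go
package main

import (
    "encoding/json"
    "fmt"
)

// Local stand-ins for the mcp types. The names mirror those in the fix
// description above, but these are NOT the real trpc-mcp-go definitions.
type JSONRPCMessage interface{}

type JSONRPCError struct {
    JSONRPC string      `json:"jsonrpc"`
    ID      interface{} `json:"id"`
    Error   struct {
        Code    int    `json:"code"`
        Message string `json:"message"`
    } `json:"error"`
}

// encodeResponse shows the shape of the fix: an error object is encoded as-is,
// while anything else is wrapped as a successful "result".
func encodeResponse(id interface{}, resp JSONRPCMessage) ([]byte, error) {
    if errResp, ok := resp.(*JSONRPCError); ok {
        // Already a complete JSON-RPC 2.0 error object: send it unchanged.
        return json.Marshal(errResp)
    }
    return json.Marshal(map[string]interface{}{
        "jsonrpc": "2.0",
        "id":      id,
        "result":  resp,
    })
}

func main() {
    errResp := &JSONRPCError{JSONRPC: "2.0", ID: 1}
    errResp.Error.Code = -32603
    errResp.Error.Message = "Internal error"

    out, _ := encodeResponse(1, errResp)
    fmt.Println(string(out)) // error object sent as-is, not nested under "result"

    out, _ = encodeResponse(2, map[string]string{"status": "ok"})
    fmt.Println(string(out)) // plain payload wrapped under "result"
}
```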
### Requests to the root path silently discarded

- **Symptom**: The server silently discarded all requests to the root path (`/`) when no path was explicitly configured.
- **Reason**: A logic flaw in the `isValidPath` check.
- **Solution**: Although the fix was ultimately applied in the integration tests (via `mcp.WithServerPath("")`), identifying this behavior was crucial for documenting the server's usage correctly.
- **Impact**: No code change, but it contributed important "best practice" documentation to the project.

### Exported `NewJSONRPCErrorResponse`

- **Problem**: The internal function for creating error responses (`newJSONRPCErrorResponse`) was not exported and was used inconsistently throughout the codebase.
- **Solution**: We performed a global refactoring, renaming it to `NewJSONRPCErrorResponse` and unifying all call sites.
- **Impact**: A safe refactoring guaranteed by Go's compile-time checks. It touches multiple `manager_*.go` and `server_*.go` files but is limited to call-site changes and does not alter any business logic.

---

## Development Workflow and Conventions

This feature integration allowed us to establish and document a set of best practices for future development:

1. **Write Tests First**: For any new middleware, start by writing a minimal, end-to-end integration test in `examples/middleware_usage/server/`.
2. **Let Tests Drive Development**: Run the test and let compiler and runtime errors guide all necessary fixes, refactoring, and adaptations.

This model proved invaluable for ensuring quality and accelerating development.

We adopted a two-tier convention for organizing example code:

- **Simple Middlewares**: Single-file, self-contained middlewares live directly in `examples/middlewares/`.
- **Complex Middlewares**: Modules with multiple files, documentation, and configuration (like `metrics`) get their own subdirectories and packages (e.g., `examples/middlewares/metrics/`) to maintain high cohesion.

---

## Full Example

The following is a brief example demonstrating how to configure all three new middlewares on a single server:

```go
package main

import (
    "context"
    "fmt"
    "net/http"

    mcp "trpc.group/trpc-go/trpc-mcp-go"
    "trpc.group/trpc-go/trpc-mcp-go/examples/middlewares"
    metricmw "trpc.group/trpc-go/trpc-mcp-go/examples/middlewares/metrics"
)

func main() {
    // 1. Create a new server instance.
    s := mcp.NewServer(
        "my-server", "1.0.0",
        mcp.WithStatelessMode(true),
        mcp.WithServerPath(""),
    )

    // 2. Set up and register the Metrics middleware.
    rec, shutdown, _ := metricmw.NewOtelMetricsRecorder()
    defer func() { _ = shutdown(context.Background()) }()
    s.Use(metricmw.NewMetricsMiddleware(metricmw.WithRecorder(rec)))

    // 3. Register the Logging and Recovery middlewares.
    // Note: the Recovery middleware should generally be the first (outermost) middleware.
    s.Use(middlewares.Recovery())
    s.Use(middlewares.NewLoggingMiddleware(mcp.GetDefaultLogger()))

    // 4. Register your business tools...
    // s.RegisterTool(...)

    // 5. Start the server.
    fmt.Println("Server starting on :8080")
    http.ListenAndServe(":8080", s.Handler())
}
```

Co-authored-by: Wang jia <RangelJara195@gmail.com>
Co-authored-by: Lin Yuze <jadeproheshan@gmail.com>
Co-authored-by: Chang Mingyue <xmkdgdz@foxmail.com>
Co-authored-by: Chen Lei <123213112a@gmail.com>
1 parent dd0bd82 commit acf22a8

33 files changed (+2979, -90 lines)

.gitignore

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+bbg/
+GEMINI.md
+
+# IDE configuration
+.idea/
+.vscode/
+
+# Compiled binaries
+main

README.md

Lines changed: 121 additions & 9 deletions
@@ -10,6 +10,7 @@ A Go implementation of the [Model Context Protocol (MCP)](https://github.com/mod
 - **Streaming Support**: Real-time data streaming with Server-Sent Events (SSE)
 - **Tool Framework**: Register and execute tools with structured parameter handling
 - **Struct-First API**: Generate schemas automatically from Go structs with type safety
+- **Middleware Architecture**: A pluggable middleware architecture for handling cross-cutting concerns like logging, metrics, and recovery.
 - **Resource Management**: Serve text and binary resources with RESTful interfaces
 - **Prompt Templates**: Create and manage prompt templates for LLM interactions
 - **Progress Notifications**: Built-in support for progress updates on long-running operations
@@ -80,7 +81,7 @@ import (
     "log"
     "os"
     "os/signal"
-    "syscall"
+    syscall "syscall"
 
     mcp "trpc.group/trpc-go/trpc-mcp-go"
 )
@@ -168,7 +169,7 @@ import (
 
 // initializeClient initializes the MCP client with server connection and session setup
 func initializeClient(ctx context.Context) (*mcp.Client, error) {
-    log.Println("===== Initialize client =====")
+    log.Println("===== Initialize client ====")
     serverURL := "http://localhost:3000/mcp"
     mcpClient, err := mcp.NewClient(
         serverURL,
@@ -204,13 +205,14 @@ func initializeClient(ctx context.Context) (*mcp.Client, error) {
 
 // handleTools manages tool-related operations including listing and calling tools
 func handleTools(ctx context.Context, client *mcp.Client) error {
-    log.Println("===== List available tools =====")
+    log.Println("===== List available tools ====")
     listToolsResp, err := client.ListTools(ctx, &mcp.ListToolsRequest{})
     if err != nil {
         return fmt.Errorf("failed to list tools: %v", err)
     }
 
-    tools := listToolsResp.Tools
+
+    tools := listToolsResp.Tools
     if len(tools) == 0 {
         log.Printf("No available tools.")
         return nil
@@ -250,7 +252,7 @@ func terminateSession(ctx context.Context, client *mcp.Client) error {
         return nil
     }
 
-    log.Printf("===== Terminate session =====")
+    log.Printf("===== Terminate session ====")
     if err := client.TerminateSession(ctx); err != nil {
         return fmt.Errorf("failed to terminate session: %v", err)
     }
@@ -353,6 +355,116 @@ func main() {
 }
 ```
 
+## Middleware
+
+The server supports a pluggable middleware architecture, allowing for modular and reusable handling of cross-cutting concerns. Middlewares are executed in an "onion model" before the main tool handler is called.
+
+### How to Use
+
+Here is a brief example of how to set up a server with the `Recovery`, `Logging`, and `Metrics` middlewares.
+
+```go
+package main
+
+import (
+    "context"
+    "fmt"
+    "net/http"
+    "time"
+
+    mcp "trpc.group/trpc-go/trpc-mcp-go"
+    "trpc.group/trpc-go/trpc-mcp-go/examples/middlewares"
+    "trpc.group/trpc-go/trpc-mcp-go/examples/middlewares/logging"
+    metricmw "trpc.group/trpc-go/trpc-mcp-go/examples/middlewares/metrics"
+)
+
+func main() {
+    // 1. Create a new server.
+    s := mcp.NewServer(
+        "my-server", "1.0.0",
+        mcp.WithStatelessMode(true),
+        mcp.WithServerPath(""),
+    )
+
+    // 2. Set up and register the Metrics middleware.
+    rec, shutdown, _ := metricmw.NewOtelMetricsRecorder()
+    defer func() { _ = shutdown(context.Background()) }()
+    s.Use(metricmw.NewMetricsMiddleware(metricmw.WithRecorder(rec)))
+
+    // 3. Register Logging and Recovery middlewares.
+    // Note: Recovery middleware should generally be the first (outermost) to catch all panics.
+    s.Use(middlewares.Recovery())
+    s.Use(middlewares.NewLoggingMiddleware(mcp.GetDefaultLogger()))
+
+    // 4. Register your business tools...
+    // s.RegisterTool(...)
+
+    // 5. Start the server.
+    fmt.Println("Server starting on :8080")
+    http.ListenAndServe(":8080", s.Handler())
+}
+```
+
+### Available Middlewares
+
+All middleware implementations are provided as examples in the `examples/middlewares/` directory.
+
+#### Recovery
+- **Panic Safety**: Catches panics anywhere in the request lifecycle.
+- **Clean Error Response**: Returns a clean, JSON-RPC 2.0 compliant `Internal Server Error` response.
+- **Configurable**: Supports custom logging, stack trace control, and panic filtering.
+- **Usage**:
+```go
+// Default usage
+s.Use(middlewares.Recovery())
+
+// Advanced usage with custom error response
+s.Use(middlewares.RecoveryWithOptions(
+    middlewares.WithCustomErrorResponse(func(ctx context.Context, req *mcp.JSONRPCRequest, panicErr interface{}) mcp.JSONRPCMessage {
+        return mcp.NewJSONRPCErrorResponse(
+            req.ID,
+            mcp.ErrCodeInternal,
+            "Service temporarily unavailable. Please try again later.",
+            nil,
+        )
+    }),
+))
+```
+
+#### Logging
+- **Structured Logging**: Provides structured, leveled logging for the entire request lifecycle.
+- **Standardized Interface**: Integrates with the project's standard `mcp.Logger` interface.
+- **Rich Context**: Can be configured to log request/response payloads and add custom fields from the `context`.
+- **Usage**:
+```go
+logger := mcp.GetDefaultLogger()
+
+// Register middleware to log all requests and their payloads
+s.Use(middlewares.NewLoggingMiddleware(logger,
+    // Default is to only log errors; this logs all requests
+    middlewares.WithShouldLog(func(level logging.Level, duration time.Duration, err error) bool {
+        return true
+    }),
+    // Log the full request and response content
+    middlewares.WithPayloadLogging(true),
+))
+```
+
+#### Metrics
+- **OpenTelemetry Based**: Built on the OpenTelemetry standard for broad compatibility with platforms like Prometheus and Jaeger.
+- **Core Metrics**: Tracks essential metrics out-of-the-box:
+  - `mcp.server.requests.count`: Total number of requests.
+  - `mcp.server.errors.count`: Number of failed requests.
+  - `mcp.server.request.latency`: Request duration histogram.
+  - `mcp.server.requests.in_flight`: Number of concurrent requests.
+- **Usage**:
+The `metrics` middleware requires an OpenTelemetry Collector and a Prometheus instance. A `docker-compose.yaml` file is provided in `examples/middlewares/metrics/` to easily start these dependencies.
+**Verification Steps**:
+1. `cd examples/middlewares/metrics/`
+2. `docker-compose up -d`
+3. Run the `metrics_integration_main.go` example and send some requests.
+4. Access `http://localhost:9090` to query the core metrics in the Prometheus UI.
+
 ## Configuration
 
 ### Server Configuration
@@ -735,13 +847,13 @@ Define MCP tools using Go structs for automatic schema generation and type safet
 ```go
 // Define input/output structures
 type WeatherInput struct {
-    Location string `json:"location" jsonschema:"required,description=City name"`
-    Units    string `json:"units,omitempty" jsonschema:"description=Temperature units,enum=celsius,enum=fahrenheit,default=celsius"`
+    Location string `json:"location" jsonschema:"required,description=City name"
+    Units    string `json:"units,omitempty" jsonschema:"description=Temperature units,enum=celsius,enum=fahrenheit,default=celsius"
 }
 
 type WeatherOutput struct {
-    Temperature float64 `json:"temperature" jsonschema:"description=Current temperature"`
-    Description string  `json:"description" jsonschema:"description=Weather description"`
+    Temperature float64 `json:"temperature" jsonschema:"description=Current temperature"
+    Description string  `json:"description" jsonschema:"description=Weather description"
 }
 
 // Create tool with automatic schema generation

0 commit comments
