Part of duplicate code analysis: #2197
Summary
The 6-phase DIFC enforcement pipeline is implemented nearly identically in both internal/proxy/handler.go (handleWithDIFC) and internal/server/unified.go (the guarded tool-call handler). Since the proxy was explicitly designed to reuse the same enforcement logic as the MCP gateway, this duplication violates that design goal and creates a high maintenance risk.
Duplication Details
Pattern: 6-phase DIFC pipeline
- Severity: High
- Occurrences: 2 identical implementations
- Locations:
  - `internal/proxy/handler.go` (lines 95–260, `handleWithDIFC`)
  - `internal/server/unified.go` (lines ~862–1030, guarded call-tool handler)
- Code Sample (Phases 0–6, Phase 3 omitted; representative):

```go
// Phase 0: Get agent labels
agentLabels := s.agentRegistry.GetOrCreate("proxy")

// Phase 1: Guard labels the resource
resource, operation, err := s.guard.LabelResource(ctx, toolName, args, backend, s.capabilities)
if err != nil { /* fail closed */ }

// Phase 2: Coarse-grained access check
evalResult := s.evaluator.Evaluate(agentLabels.Secrecy, agentLabels.Integrity, resource, operation)
if !evalResult.IsAllowed() {
    if operation == difc.OperationRead { /* skip */ } else { /* block */ }
}

// (Phase 3, the backend call, is caller-specific and omitted here.)

// Phase 4: Guard labels the response
labeledData, err := s.guard.LabelResponse(ctx, toolName, responseData, backend, s.capabilities)

// Phase 5: Fine-grained filtering
if collection, ok := labeledData.(*difc.CollectionLabeledData); ok {
    filtered := s.evaluator.FilterCollection(agentLabels.Secrecy, agentLabels.Integrity, collection, operation)
    // strict mode block, ToResult(), etc.
}

// Phase 6: Label accumulation (propagate mode)
if s.enforcementMode == difc.EnforcementPropagate && labeledData != nil {
    agentLabels.AccumulateFromRead(overall)
}
```
Impact Analysis
- Maintainability: Any change to DIFC enforcement semantics (e.g., a new phase, updated phase-2 read-skip logic, propagate-mode changes) must be made in two files. Given active development, drift between the two copies is likely.
- Bug Risk: Inconsistent handling of edge cases (e.g., nil `labeledData`, `ToResult` errors) has already diverged between the two implementations.
- Code Bloat: ~165 lines duplicated out of 340 total in `handler.go`.
Refactoring Recommendations
- Extract an `internal/difc/pipeline` package
- Create `internal/difc/pipeline/pipeline.go` with a `Runner` type:

```go
type Request struct {
    ToolName string
    Args     map[string]interface{}
    Body     interface{} // upstream response data
}

type Runner struct {
    Guard           guard.Guard
    Evaluator       *difc.Evaluator
    AgentRegistry   *difc.AgentRegistry
    Capabilities    *difc.Capabilities
    EnforcementMode difc.EnforcementMode
}

type Result struct {
    FinalData  interface{}
    Filtered   *difc.FilteredCollectionLabeledData
    EvalResult difc.EvaluationResult
}

func (r *Runner) Execute(ctx context.Context, req Request, backendExec BackendExec) (Result, error)
```

- Both `proxy/handler.go` and `server/unified.go` delegate to `pipeline.Runner.Execute()`
- `BackendExec` is a callback (`func(ctx context.Context) (interface{}, error)`) that lets each caller inject its own Phase 3 (HTTP forward vs. MCP backend call)
- Estimated effort: ~1 day
- Benefits: single source of truth for DIFC semantics, easier testing
Implementation Checklist
- [ ] Review duplication findings
- [ ] Design the `internal/difc/pipeline` API
- [ ] Extract shared pipeline logic
- [ ] Update `internal/proxy/handler.go` to use the pipeline
- [ ] Update `internal/server/unified.go` to use the pipeline
- [ ] Verify all existing unit + integration tests still pass (`make test-all`)
Parent Issue
See parent analysis report: #2197
Generated by Duplicate Code Detector