4 changes: 3 additions & 1 deletion apps/backend/Dockerfile
Original file line number Diff line number Diff line change
Expand Up @@ -40,7 +40,7 @@ ENV UV_COMPILE_BYTECODE=1
ENV UV_NO_MANAGED_PYTHON=1

# Use copy mode for linking dependencies (required by the cache)
ENV UV_LINK_MODE=copy
ENV UV_LINK_MODE=copy
# Skip the installation of the dev dependencies
ENV UV_NO_DEV=1

Expand Down Expand Up @@ -68,6 +68,8 @@ FROM mirror.gcr.io/library/python:3.10.17-slim AS runtime
# Copy Bun from official image (lighter than Node.js, required for bunx to run MCP servers)
COPY --from=oven/bun:latest /usr/local/bin/bun /usr/local/bin/bun
COPY --from=oven/bun:latest /usr/local/bin/bunx /usr/local/bin/bunx
# Alias npx to bunx for compatibility (configs may invoke npx; the container runs bunx)
RUN ln -s /usr/local/bin/bunx /usr/local/bin/npx

WORKDIR /app

Expand Down
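The `ln -s` line above makes any tool config that invokes `npx` transparently dispatch to `bunx` inside the image. The effect can be sketched outside Docker with a stub binary (the stub, temp paths, and script contents below are illustrative, not what ships in the image):

```python
import os
import stat
import subprocess
import tempfile

# Stand-in for /usr/local/bin/bunx: a stub script that just reports its invocation
d = tempfile.mkdtemp()
bunx = os.path.join(d, "bunx")
with open(bunx, "w") as f:
    f.write('#!/bin/sh\necho "bunx $@"\n')
os.chmod(bunx, os.stat(bunx).st_mode | stat.S_IXUSR)

# Equivalent of `RUN ln -s /usr/local/bin/bunx /usr/local/bin/npx`
npx = os.path.join(d, "npx")
os.symlink(bunx, npx)

# Invoking npx now runs bunx
out = subprocess.run([npx, "hello"], capture_output=True, text=True).stdout.strip()
```

Because this is a plain symlink, `npx`-style flags are passed through unchanged; any npx-only behavior a server relied on would need bunx to support the same flags.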
2 changes: 2 additions & 0 deletions apps/backend/Dockerfile.dev
Expand Up @@ -51,6 +51,8 @@ COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Copy Bun from official image (lighter than Node.js, required for bunx to run MCP servers)
COPY --from=oven/bun:latest /usr/local/bin/bun /usr/local/bin/bun
COPY --from=oven/bun:latest /usr/local/bin/bunx /usr/local/bin/bunx
# Alias npx to bunx for compatibility (configs may invoke npx; the container runs bunx)
RUN ln -s /usr/local/bin/bunx /usr/local/bin/npx

WORKDIR /app

Expand Down
39 changes: 29 additions & 10 deletions apps/backend/src/rhesis/backend/app/routers/tools.py
Expand Up @@ -33,23 +33,42 @@ def create_tool(
current_user: User = Depends(require_current_user_or_token),
):
"""
Create a new tool integration.
Create a new tool.
The credentials (JSON dict) will be encrypted in the database.
Examples: {"NOTION_TOKEN": "ntn_abc..."} or {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_abc..."}
A tool allows the system to connect to an external service or API. Examples of tools are:
For custom providers (provider_type="custom"), you must provide the MCP server configuration
in tool_metadata with credential placeholders. Placeholders MUST use simple format like
{{ TOKEN }} (not {{ TOKEN | tojson }}) because the JSON must be valid before Jinja2 rendering.
- MCPs
- APIs
Currently, we support the following MCP tool providers:
1. **Notion**
- Store the Notion token in the credentials dictionary with the key `"NOTION_TOKEN"`.
- Example:
```json
{"NOTION_TOKEN": "ntn_abc..."}
```
Example tool_metadata for custom provider:
2. **Custom MCP provider**
- The MCP server configuration JSON should be provided in `tool_metadata`
- The API token should be stored in the credentials dictionary
- The custom provider should use **npx** to run the MCP server.
Example `tool_metadata` for a custom provider:
```json
{
"command": "bunx",
"args": ["--bun", "@custom/mcp-server"],
"command": "npx",
"args": ["@example/mcp-server"],
"env": {
"NOTION_TOKEN": "{{ NOTION_TOKEN }}"
"API_TOKEN": "{{ TOKEN }}"
}
}
```
Where the credentials dictionary is:
```json
{"TOKEN": "your_api_token_123"}
```
"""
organization_id, user_id = tenant_context
return crud.create_tool(db=db, tool=tool, organization_id=organization_id, user_id=user_id)
Expand Down
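The docstring's constraint — placeholders must be the simple `{{ TOKEN }}` form, not `{{ TOKEN | tojson }}` — follows from the stored `tool_metadata` having to parse as JSON *before* rendering. A quoted `"{{ TOKEN }}"` is an ordinary JSON string; an unquoted `{{ TOKEN | tojson }}` is not valid JSON at all. The sketch below uses a minimal `re`-based substitution as a stand-in for the real Jinja2 render step:

```python
import json
import re

# Stored form: the placeholder is a quoted string, so this is valid JSON pre-render
raw = '{"command": "npx", "args": ["@example/mcp-server"], "env": {"API_TOKEN": "{{ TOKEN }}"}}'
json.loads(raw)  # parses fine before any credentials are substituted

# The tojson form leaves an unquoted {{ ... }} in the document: invalid JSON pre-render
bad = '{"env": {"API_TOKEN": {{ TOKEN | tojson }}}}'
try:
    json.loads(bad)
    parsed_bad = True
except json.JSONDecodeError:
    parsed_bad = False

# Minimal stand-in for the Jinja2 render step (handles simple {{ NAME }} placeholders only)
def render(template: str, credentials: dict) -> str:
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: credentials[m.group(1)], template)

config = json.loads(render(raw, {"TOKEN": "your_api_token_123"}))
```

This is why the backend can validate a custom provider's `tool_metadata` as JSON at creation time, before any credentials exist to render into it.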
Original file line number Diff line number Diff line change
Expand Up @@ -409,7 +409,7 @@ export function MCPConnectionDialog({
</Typography>

<TextField
label="Auth Token"
label="TOKEN"
fullWidth
required={!isEditMode}
type={showAuthToken ? 'text' : 'password'}
Expand Down Expand Up @@ -494,9 +494,9 @@ export function MCPConnectionDialog({
color="text.secondary"
sx={{ mb: 2 }}
>
Configure your custom MCP server. Use credential
placeholders with <code>{'{{'}</code> and{' '}
<code>{'}}'}</code> format.
Provide your API token above, then paste your MCP server config below
using <code>{"{{ TOKEN }}"}</code> as a placeholder
wherever the token is required.
</Typography>
<Typography
variant="body2"
Expand All @@ -519,10 +519,10 @@ export function MCPConnectionDialog({
}}
>
{`{
"command": "bunx",
"args": ["--bun", "@notionhq/notion-mcp-server"],
"command": "npx",
"args": ["@example/mcp-server"],
"env": {
"NOTION_TOKEN": "{{ TOKEN }}"
"API_TOKEN": "{{ TOKEN }}"
}
}`}
</Box>
Expand Down
72 changes: 24 additions & 48 deletions sdk/src/rhesis/sdk/services/mcp/agent.py
Expand Up @@ -3,8 +3,11 @@
import asyncio
import json
import logging
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union

import jinja2

from rhesis.sdk.models.base import BaseLLM
from rhesis.sdk.models.factory import get_model
from rhesis.sdk.services.mcp.client import MCPClient
Expand All @@ -28,25 +31,6 @@ class MCPAgent:
and accomplish tasks. Clients can customize behavior via system prompts.
"""

DEFAULT_SYSTEM_PROMPT = """You are an autonomous agent that can use MCP tools \
to accomplish tasks.
You operate in a ReAct loop: Reason β†’ Act β†’ Observe β†’ Repeat
For each iteration:
- Reason: Think step-by-step about what information you need and how to get it
- Act: Either call tools to gather information, or finish with your final answer
- Observe: Examine tool results and plan next steps
Guidelines:
- Break complex tasks into simple tool calls
- Use tool results to inform next actions
- When you have sufficient information, use action="finish" with your final_answer
- Be efficient: minimize unnecessary tool calls
- You can call multiple tools in a single iteration if they don't depend on each other
Remember: You must explicitly use action="finish" when done."""

def __init__(
self,
model: Optional[Union[str, BaseLLM]] = None,
Expand All @@ -69,10 +53,19 @@ def __init__(
if not mcp_client:
raise ValueError("mcp_client is required")

# Initialize template environment
templates_dir = Path(__file__).parent / "prompt_templates"
self._jinja_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(str(templates_dir)),
autoescape=False,
trim_blocks=True,
lstrip_blocks=True,
)

# Convert model to BaseLLM instance if needed
self.model = self._set_model(model)
self.mcp_client = mcp_client
self.system_prompt = system_prompt or self.DEFAULT_SYSTEM_PROMPT
self.system_prompt = system_prompt or self._load_default_system_prompt()
self.max_iterations = max_iterations
self.verbose = verbose
self.executor = ToolExecutor(mcp_client)
Expand All @@ -83,6 +76,11 @@ def _set_model(self, model: Optional[Union[str, BaseLLM]]) -> BaseLLM:
return model
return get_model(model)

def _load_default_system_prompt(self) -> str:
"""Load the default system prompt from template."""
template = self._jinja_env.get_template("system_prompt.j2")
return template.render()

async def run_async(self, user_query: str) -> AgentResult:
"""
Execute the agent's ReAct loop asynchronously.
Expand Down Expand Up @@ -383,34 +381,12 @@ def _build_prompt(
tools_text = self._format_tools(available_tools)
history_text = self._format_history(history)

prompt = f"""User Query: {user_query}
Available MCP Tools:
{tools_text}
"""
if history_text:
prompt += f"""Execution History:
{history_text}
Based on the query, available tools, and execution history above, decide what to do next.
"""
else:
prompt += """This is the first iteration. Analyze the query and decide \
what tools to call.
"""

prompt += """Your response should follow this structure:
- reasoning: Your step-by-step thinking about what to do
- action: Either "call_tool" (to execute tools) or "finish" (when you have the answer)
- tool_calls: List of tools to call if action="call_tool" (can be multiple)
- final_answer: Your complete answer if action="finish"
Think carefully about what information you need and how to get it efficiently."""

return prompt
template = self._jinja_env.get_template("iteration_prompt.j2")
return template.render(
user_query=user_query,
tools_text=tools_text,
history_text=history_text,
)

def _format_tools(self, tools: List[Dict[str, Any]]) -> str:
"""Format tool list into human-readable text with names, descriptions, \
Expand Down
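The refactor above moves the hard-coded prompt strings into `prompt_templates/*.j2` files loaded via a `jinja2.Environment` with `FileSystemLoader`. The shape of `_load_default_system_prompt` can be sketched with a file read (the real code renders through Jinja2, but `system_prompt.j2` takes no variables, so rendering reduces to reading the file; the temp directory and stub content here are illustrative):

```python
import tempfile
from pathlib import Path

# Stand-in for the prompt_templates/ directory that sits next to agent.py
templates_dir = Path(tempfile.mkdtemp())
(templates_dir / "system_prompt.j2").write_text(
    "You are an autonomous agent that can use MCP tools to accomplish tasks.\n"
)

def load_default_system_prompt(directory: Path) -> str:
    # Mirrors MCPAgent._load_default_system_prompt for a variable-free template
    return (directory / "system_prompt.j2").read_text()

prompt = load_default_system_prompt(templates_dir)
```

Keeping the prompt in a template file means prompt edits no longer touch Python source, and `iteration_prompt.j2` can express its history/first-iteration branch declaratively instead of via string concatenation in `_build_prompt`.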
4 changes: 2 additions & 2 deletions sdk/src/rhesis/sdk/services/mcp/client.py
Expand Up @@ -30,7 +30,7 @@ def __init__(
Args:
server_name: Friendly name for the server (e.g., "notionApi")
command: Command to launch the server (e.g., "bunx", "python")
command: Command to launch the server (e.g., "npx", "python")
args: Command arguments (e.g., ["--bun", "@notionhq/notion-mcp-server"])
env: Environment variables to pass to the server process
"""
Expand Down Expand Up @@ -274,7 +274,7 @@ def from_tool_config(cls, tool_name: str, tool_config: Dict, credentials: Dict[s
Example:
tool_config = {
"command": "bunx",
"command": "npx",
"args": ["--bun", "@notionhq/notion-mcp-server"],
"env": {
"NOTION_TOKEN": "{{ NOTION_TOKEN }}"
Expand Down
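Putting the two halves together, `from_tool_config` resolves the `{{ NAME }}` placeholders in `env` against the credentials dict and assembles the launch command from `command` plus `args`. A simplified sketch (the placeholder resolution here is a `strip`-based stand-in for the SDK's actual rendering, and the token value is illustrative):

```python
import os
import subprocess  # shown for illustration of the launch step only

tool_config = {
    "command": "npx",
    "args": ["--bun", "@notionhq/notion-mcp-server"],
    "env": {"NOTION_TOKEN": "{{ NOTION_TOKEN }}"},
}
credentials = {"NOTION_TOKEN": "ntn_abc"}

# Resolve simple {{ NAME }} placeholders in env; pass other values through unchanged
env = {
    k: credentials.get(v.strip("{} "), v) if v.startswith("{{") else v
    for k, v in tool_config["env"].items()
}

# The stdio MCP server is launched as command + args with the resolved env
cmd = [tool_config["command"], *tool_config["args"]]
# subprocess.Popen(cmd, env={**os.environ, **env})  # not executed in this sketch
```

Note that after this PR the `command` example is `npx`, which inside the Docker image is the symlink to `bunx`, so the `--bun` argument is still understood by the binary that actually runs.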
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
User Query: {{ user_query }}

Available MCP Tools:
{{ tools_text }}

{% if history_text %}
Execution History:
{{ history_text }}

Based on the query, available tools, and execution history above, decide what to do next.

{% else %}
This is the first iteration. Analyze the query and decide what tools to call.

{% endif %}
Your response should follow this structure:
- reasoning: Your step-by-step thinking about what to do
- action: Either "call_tool" (to execute tools) or "finish" (when you have the answer)
- tool_calls: List of tools to call if action="call_tool" (can be multiple)
- final_answer: Your complete answer if action="finish"

Think carefully about what information you need and how to get it efficiently.
17 changes: 17 additions & 0 deletions sdk/src/rhesis/sdk/services/mcp/prompt_templates/system_prompt.j2
@@ -0,0 +1,17 @@
You are an autonomous agent that can use MCP tools to accomplish tasks.

You operate in a ReAct loop: Reason β†’ Act β†’ Observe β†’ Repeat

For each iteration:
- Reason: Think step-by-step about what information you need and how to get it
- Act: Either call tools to gather information, or finish with your final answer
- Observe: Examine tool results and plan next steps

Guidelines:
- Break complex tasks into simple tool calls
- Use tool results to inform next actions
- When you have sufficient information, use action="finish" with your final_answer
- Be efficient: minimize unnecessary tool calls
- You can call multiple tools in a single iteration if they don't depend on each other

Remember: You must explicitly use action="finish" when done.
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
{
"command": "bunx",
"command": "npx",
"args": ["--bun", "mcp-remote", "https://mcp.atlassian.com/v1/sse"]
}
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
{
"command": "bunx",
"command": "npx",
"args": ["--bun", "@notionhq/notion-mcp-server"],
"env": {
"NOTION_TOKEN": {{ NOTION_TOKEN | tojson }}
Expand Down