111 changes: 111 additions & 0 deletions community_usecase/File_System_MCP/demo.py
@@ -0,0 +1,111 @@
import asyncio
import logging
import sys
from pathlib import Path

import streamlit as st
from dotenv import load_dotenv

from camel.logger import set_log_level
from camel.models import ModelFactory
from camel.toolkits import FunctionTool, MCPToolkit, SearchToolkit
from camel.types import ModelPlatformType, ModelType

from owl.utils.enhanced_role_playing import OwlRolePlaying, arun_society

logging.basicConfig(level=logging.DEBUG)

# Load environment variables and set the logger level. Windows needs the
# Proactor event loop policy so subprocess-based MCP servers can be spawned.
if sys.platform.startswith("win"):
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
load_dotenv()
set_log_level(level="DEBUG")

async def construct_society(task: str, tools: list[FunctionTool]) -> OwlRolePlaying:
    """Build a multi-agent OwlRolePlaying society for the given task."""
    models = {
        "user": ModelFactory.create(
            model_platform=ModelPlatformType.OPENAI,
            model_type=ModelType.GPT_4O,
            model_config_dict={"temperature": 0},
        ),
        "assistant": ModelFactory.create(
            model_platform=ModelPlatformType.OPENAI,
            model_type=ModelType.GPT_4O,
            model_config_dict={"temperature": 0},
        ),
    }

    user_agent_kwargs = {"model": models["user"]}
    assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
    task_kwargs = {
        "task_prompt": task,
        "with_task_specify": False,
    }

    society = OwlRolePlaying(
        **task_kwargs,
        user_role_name="user",
        user_agent_kwargs=user_agent_kwargs,
        assistant_role_name="assistant",
        assistant_agent_kwargs=assistant_agent_kwargs,
    )
    return society

async def run_task(task: str) -> str:
    """Connect to the MCP servers, run the provided task, and return the answer."""
    # Path to the MCP server config file, next to this script.
    config_path = Path(__file__).parent / "mcp_servers_config.json"
    mcp_toolkit = MCPToolkit(config_path=str(config_path))
    answer = ""
    try:
        logging.debug("Connecting to MCP server...")
        await mcp_toolkit.connect()
        logging.debug("Connected to MCP server.")

        # Combine the MCP filesystem tools with the web search tool.
        tools = [*mcp_toolkit.get_tools(), SearchToolkit().search_duckduckgo]
        society = await construct_society(task, tools)
        answer, chat_history, token_count = await arun_society(society)
    except Exception as e:
        import traceback

        st.error(f"An error occurred: {e}")
        st.text(traceback.format_exc())
    finally:
        try:
            await mcp_toolkit.disconnect()
        except Exception as e:
            answer += f"\nError during disconnect: {e}"
    return answer

def main():
    st.title("OWL X Filesystem MCP Server")

    # Get the task from the user
    task = st.text_input("Enter your task")

    if st.button("Run Task"):
        if not task.strip():
            st.error("Please enter a valid task.")
        else:
            with st.spinner("Processing the task..."):
                # Streamlit reruns the script in a worker thread that has no
                # running event loop, so create a fresh one for this task.
                new_loop = asyncio.new_event_loop()
                asyncio.set_event_loop(new_loop)
                try:
                    result = new_loop.run_until_complete(run_task(task))
                except Exception as e:
                    st.error(f"An error occurred: {e}")
                else:
                    # Only report success if the task actually completed.
                    st.success("Task completed!")
                    st.write(result)
                finally:
                    new_loop.close()

if __name__ == "__main__":
    main()
13 changes: 13 additions & 0 deletions community_usecase/File_System_MCP/mcp_servers_config.json
@@ -0,0 +1,13 @@
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir"
      ]
    }
  }
}
145 changes: 145 additions & 0 deletions community_usecase/File_System_MCP/readme.md
@@ -0,0 +1,145 @@
# 🗂️ Filesystem Task Runner (Streamlit + OWL + Filesystem MCP)

A Streamlit app powered by the [CAMEL-AI OWL framework](https://github.com/camel-ai/owl) and **MCP (Model Context Protocol)** that connects to a filesystem-based MCP server. It allows natural language task execution via autonomous agents, enabling file management, editing, and exploration using simple prompts.

---

## ✨ Features

- **Natural language interface**: Describe your file system task and let agents take care of it.
- **Multi-agent roleplay**: Uses `OwlRolePlaying` to simulate interactions between user and assistant agents.
- **Filesystem MCP Integration**: Communicates with an MCP server that exposes filesystem tools like read/write/search/edit/move.
- **Debug logs and error reporting**: Transparent error display and detailed tracebacks in the UI.

---

## 📋 Prerequisites

- Python >=3.10,<3.13
- Node.js (for MCP server)
- Docker (optional, for running the server)
- A valid OpenAI API key:
```bash
export OPENAI_API_KEY="your_openai_key_here"
```
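If you prefer to fail fast, a small startup check along these lines can verify the key before any agent is built (a sketch; `require_api_key` is a hypothetical helper, and the variable name matches the export above):

```python
import os
import sys

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the API key, or exit with a clear message if it is unset."""
    key = os.environ.get(name, "").strip()
    if not key:
        sys.exit(f"{name} is not set; export it before launching the app.")
    return key
```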

---

## 🛠️ Setup

1. **Clone the repository**

```bash
git clone https://github.com/camel-ai/owl.git
cd owl/community_usecase/File_System_MCP
```

2. **Create and activate a virtual environment**

```bash
python -m venv venv
source venv/bin/activate # macOS/Linux
venv\Scripts\activate.bat  # Windows
```

3. **Install Python dependencies**

```bash
pip install -r requirements.txt
```

4. **Run the MCP filesystem server**

**Using Docker** (the Dockerfile path below assumes a checkout of the [modelcontextprotocol/servers](https://github.com/modelcontextprotocol/servers) repository):

```bash
docker build -t mcp/filesystem -f src/filesystem/Dockerfile .
docker run -i --rm \
--mount type=bind,src=/your/data,dst=/projects \
mcp/filesystem /projects
```

**Or with NPX:**

```bash
npx -y @modelcontextprotocol/server-filesystem /your/data
```

---

## ⚙️ Configuration

1. **Environment Variables**

Create a `.env` file with:
```ini
OPENAI_API_KEY=your_openai_key_here
```

2. **MCP Server Config**

`mcp_servers_config.json` should look like:

```json
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/absolute/path/to/your/data"
]
}
}
}
```
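The allowed directories must be absolute, since the server only answers requests under the roots it is started with. A quick stdlib check can catch relative paths before launch (a sketch; `load_mcp_config` is a hypothetical helper and assumes the `args` layout shown above, flag and package name first):

```python
import json
from pathlib import Path

def load_mcp_config(path: str) -> dict:
    """Load mcp_servers_config.json and check the filesystem server's roots."""
    config = json.loads(Path(path).read_text())
    fs_args = config["mcpServers"]["filesystem"]["args"]
    # args[0] is the "-y" flag and args[1] the npm package name; the rest
    # are the directories the server is allowed to touch.
    for root in fs_args[2:]:
        if not Path(root).is_absolute():
            raise ValueError(f"Allowed directory must be absolute: {root}")
    return config
```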

---

## 🚀 Running the App

Launch the Streamlit UI:

```bash
streamlit run demo.py
```

Enter a task like:

> "Create a new file named report.txt and write 'Q2 sales increased by 20%'."

Then click **Run Task** to let the agents execute it.
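Under the hood, `demo.py` builds and closes a fresh event loop per click, because Streamlit reruns the script in a worker thread that has no running loop. The pattern looks roughly like this, with a toy coroutine standing in for `run_task`:

```python
import asyncio

async def toy_task(msg: str) -> str:
    await asyncio.sleep(0)  # stand-in for the real MCP round trip
    return f"done: {msg}"

def run_in_fresh_loop(coro):
    """Run a coroutine on a brand-new event loop, then close it."""
    loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop)
        return loop.run_until_complete(coro)
    finally:
        loop.close()
```

For example, `run_in_fresh_loop(toy_task("hello"))` returns `"done: hello"`.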

---

## 🔧 Customization

- **Agent behavior**: Modify the `construct_society` function in `demo.py`.
- **Tools**: Add custom tools to the `tools` list from `MCPToolkit` or define new ones.
- **Models**: Adjust OpenAI model type or temperature in the `ModelFactory.create()` calls.

---

## 📂 Project Structure

```
filesystem-task-runner/
├── demo.py # Streamlit interface
├── mcp_servers_config.json # MCP server configuration
├── .env # Environment variables
└── README.md
```

---

## 📚 References

- [CAMEL-AI OWL Framework](https://github.com/camel-ai/owl)
- [Anthropic MCP Protocol](https://docs.anthropic.com/en/docs/agents-and-tools/mcp)
- [Streamlit Docs](https://docs.streamlit.io/)
- [Filesystem MCP Server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem)

---

*Let your agents manipulate the filesystem for you!*
5 changes: 5 additions & 0 deletions community_usecase/File_System_MCP/requirements.txt
@@ -0,0 +1,5 @@
chunkr_ai
python-dotenv
streamlit
camel-ai[all]
docx2markdown