Commit 0a6e577
jefcoder authored and web-flow committed

Merge pull request #11 from bazinga012/working_dockerfile

README.md updated

2 parents f25953a + 9f2e3ad commit 0a6e577

File tree

1 file changed: +60 -2 lines changed


README.md

Lines changed: 60 additions & 2 deletions
@@ -1,13 +1,14 @@
 # MCP Code Executor
 [![smithery badge](https://smithery.ai/badge/@bazinga012/mcp_code_executor)](https://smithery.ai/server/@bazinga012/mcp_code_executor)
 
-The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment.
+The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.
 
 <a href="https://glama.ai/mcp/servers/45ix8xode3"><img width="380" height="200" src="https://glama.ai/mcp/servers/45ix8xode3/badge" alt="Code Executor MCP server" /></a>
 
 ## Features
 
 - Execute Python code from LLM prompts
+- Support for incremental code generation to overcome token limitations
 - Run code within a specified environment (Conda, virtualenv, or UV virtualenv)
 - Install dependencies when needed
 - Check if packages are already installed
@@ -115,7 +116,7 @@ To configure the MCP Code Executor server, add the following to your MCP servers
 The MCP Code Executor provides the following tools to LLMs:
 
 ### 1. `execute_code`
-Executes Python code in the configured environment.
+Executes Python code in the configured environment. Best for short code snippets.
 ```json
 {
   "name": "execute_code",
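
The `execute_code` example above is cut off at the hunk boundary. Purely as an illustration of what a full call could look like (the `code` argument name is an assumption for this sketch, not taken from this diff), a client might build and serialize the payload like so:

```python
import json

# Hypothetical execute_code tool-call payload. The "code" argument
# name is an illustrative assumption, not confirmed by this diff.
payload = {
    "name": "execute_code",
    "arguments": {
        "code": "print('Hello from the configured environment')",
    },
}

# MCP messages travel as JSON, so the payload must round-trip cleanly.
encoded = json.dumps(payload)
decoded = json.loads(encoded)
print(decoded["name"])
```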
@@ -169,12 +170,69 @@ Gets the current environment configuration.
 }
 ```
 
+### 6. `initialize_code_file`
+Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.
+```json
+{
+  "name": "initialize_code_file",
+  "arguments": {
+    "content": "def main():\n    print('Hello, world!')\n\nif __name__ == '__main__':\n    main()",
+    "filename": "my_script"
+  }
+}
+```
+
+### 7. `append_to_code_file`
+Appends content to an existing Python code file. Use this to add more code to a file created with `initialize_code_file`.
+```json
+{
+  "name": "append_to_code_file",
+  "arguments": {
+    "file_path": "/path/to/code/storage/my_script_abc123.py",
+    "content": "\ndef another_function():\n    print('This was appended to the file')\n"
+  }
+}
+```
+
+### 8. `execute_code_file`
+Executes an existing Python file. Use this as the final step after building up code with `initialize_code_file` and `append_to_code_file`.
+```json
+{
+  "name": "execute_code_file",
+  "arguments": {
+    "file_path": "/path/to/code/storage/my_script_abc123.py"
+  }
+}
+```
+
+### 9. `read_code_file`
+Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.
+```json
+{
+  "name": "read_code_file",
+  "arguments": {
+    "file_path": "/path/to/code/storage/my_script_abc123.py"
+  }
+}
+```
+
 ## Usage
 
 Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified `CODE_STORAGE_DIR` and running it within the configured environment.
 
 LLMs can generate and execute code by referencing this MCP server in their prompts.
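
The Usage paragraph above describes the core mechanism: write the generated code to a file under `CODE_STORAGE_DIR`, then run it with the configured environment's interpreter. A minimal sketch of that idea follows; the file naming and the use of the current interpreter are illustrative assumptions, not the server's actual implementation:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_snippet(code: str, storage_dir: Path) -> str:
    """Write `code` to a file in storage_dir, execute it, and return stdout.

    Illustrative only: the real server's file naming and interpreter
    selection (Conda, virtualenv, or UV virtualenv) may differ.
    """
    storage_dir.mkdir(parents=True, exist_ok=True)
    script = storage_dir / "snippet.py"
    script.write_text(code)
    # sys.executable stands in for the configured environment's Python.
    result = subprocess.run(
        [sys.executable, str(script)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

out = run_snippet("print(2 + 3)", Path(tempfile.mkdtemp()))
print(out.strip())  # → 5
```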
 
+### Handling Large Code Blocks
+
+For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:
+
+1. **Initialize a file** with the basic structure using `initialize_code_file`
+2. **Add more code** in subsequent calls using `append_to_code_file`
+3. **Verify the file content** if needed using `read_code_file`
+4. **Execute the complete code** using `execute_code_file`
+
+This approach allows LLMs to write complex, multi-part code without running into token limitations.
+
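
The four-step workflow above can be sketched end to end. The functions below are local stand-ins for the `initialize_code_file`, `append_to_code_file`, `read_code_file`, and `execute_code_file` tools, not the server's actual implementation:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

STORAGE = Path(tempfile.mkdtemp())  # stands in for CODE_STORAGE_DIR

def initialize_code_file(filename: str, content: str) -> Path:
    # Step 1: create the file with its basic structure.
    path = STORAGE / f"{filename}.py"
    path.write_text(content)
    return path

def append_to_code_file(path: Path, content: str) -> None:
    # Step 2: add more code in a subsequent call.
    with path.open("a") as f:
        f.write(content)

def read_code_file(path: Path) -> str:
    # Step 3: verify the current state of the file.
    return path.read_text()

def execute_code_file(path: Path) -> str:
    # Step 4: run the completed script and capture its output.
    result = subprocess.run(
        [sys.executable, str(path)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

path = initialize_code_file("my_script", "def main():\n    print('part one')\n")
append_to_code_file(path, "\nmain()\n")
assert "def main" in read_code_file(path)
print(execute_code_file(path).strip())  # → part one
```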
 ## Backward Compatibility
 
 This package maintains backward compatibility with earlier versions: configurations from previous versions that only specified a Conda environment will continue to work without any changes.
