ESBMC-Python-CPP is a work-in-progress toolkit that bridges Python code and C/C++ verification through the ESBMC model checker. It supports three primary approaches for converting and analyzing Python code (a small example of the kind of input they target follows the list):
- Static Conversion via Shedskin: converts Python to C++ code using Shedskin.
- LLM-Based Conversion from Python to C: uses local or remote LLMs to directly convert Python code to verifiable C code.
- Dynamic Tracing: traces a specific execution path to analyze behavior during runtime.
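To make the three approaches concrete, here is a small, hypothetical input of the kind the toolkit targets; it is not a file from the repository (the examples/ directory contains real ones). A typed Python function plus an assertion gives the model checker a property to prove or refute after conversion.

# Hypothetical example (not from the repo): a property-carrying Python program.
def clamp(value: int, low: int, high: int) -> int:
    # Keep value within [low, high].
    if value < low:
        return low
    if value > high:
        return high
    return value

result = clamp(42, 0, 10)
assert 0 <= result <= 10  # property to be checked after conversion to C/C++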
To set up on macOS, clone the repository and run the setup script:
git clone https://github.com/esbmc/esbmc-python-cpp.git
cd esbmc-python-cpp/mac
./setup-esbmc-mac.sh
./esbmc-mac.sh       # run ESBMC
To install the local MLX model (GLM-4.5-Air):
cd llm-mlx
./start_daemon.sh    # start the daemon
./stop_daemon.sh     # stop the daemon process

For Mac users, you can use the provided esbmc-mac.sh wrapper:
# Use the Mac-specific ESBMC executable
./verify.sh --esbmc-exec ./mac/esbmc-mac.sh <filename>
# Example with local LLM
./verify.sh --esbmc-exec ./mac/esbmc-mac.sh --local-llm <filename>
# Example with cloud LLM
./verify.sh --esbmc-exec ./mac/esbmc-mac.sh --llm --model openrouter/z-ai/glm-4.6 <filename>

To run the regression tests:

# Run with default cloud LLM
./regression.sh
# Run with local LLM
./regression.sh --local-llm
# Run with specific model
./regression.sh --model openrouter/z-ai/glm-4.6
# Run with Mac ESBMC and local LLM
./regression.sh --esbmc-exec ./mac/esbmc-mac.sh --local-llm

To verify a single Python file:

# Basic verification (uses default cloud LLM)
./verify.sh <path_to_python_file>
# With Mac ESBMC executable
./verify.sh --esbmc-exec ./mac/esbmc-mac.sh <path_to_python_file>
# With cloud LLM
./verify.sh --llm --model openrouter/z-ai/glm-4.6 <path_to_python_file>
# With local LLM
./verify.sh --local-llm <path_to_python_file>

Use example files from the examples/ directory or your own.

To run the ESBMC Python regression tests:
./esbmc_python_regressions.sh

LLM mode allows faster and more flexible code verification:
- Converts Shedskin-style Python to pure C.
- Enables verification of thread-safe code (see the sketch after this list).
- Supports both cloud LLMs and locally deployed LLMs.
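The thread-safety point above can be pictured with a small, hypothetical input (again, not a file from the repository): two Python threads increment a shared counter under a lock, and the assertion on the final count is the property handed to the model checker after translation to C.

# Hypothetical thread-safety example (not from the repo).
import threading

counter = 0
lock = threading.Lock()

def worker() -> None:
    global counter
    for _ in range(1000):
        with lock:          # remove the lock and the assertion may fail
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 2000      # property to be verified after translation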
Use cloud-based models like OpenRouter, OpenAI, or others:
# Default cloud model (openrouter/z-ai/glm-4.6)
./verify.sh --llm <filename>
# Specify a cloud model
./verify.sh --llm --model openrouter/anthropic/claude-3-sonnet <filename>
./verify.sh --llm --model openai/gpt-4 <filename>
./verify.sh --llm --model openrouter/google/gemini-2.0-flash-001 <filename>

Use locally deployed models via aider.sh:
# Default local model (openai/mlx-community/GLM-4.5-Air-4bit)
./verify.sh --local-llm <filename>
# Specify a local model
./verify.sh --local-llm --model llama-3.1-8b <filename>
./verify.sh --local-llm --model qwen2.5-coder:32b <filename>

Translation validation can be enabled with --validate-translation:

# Cloud LLM validation
./verify.sh --llm --model openrouter/z-ai/glm-4.6 examples/example_15_dictionary.py --validate-translation
# Local LLM validation
./verify.sh --local-llm examples/example_15_dictionary.py --validate-translation

Supported cloud models:
- openrouter/z-ai/glm-4.6 (default)
- openrouter/anthropic/claude-3-sonnet
- openrouter/anthropic/claude-3-haiku
- openrouter/google/gemini-2.0-flash-001
- openrouter/deepseek/deepseek-r1
- openai/gpt-4
- openai/gpt-3.5-turbo

Supported local models:
- openai/mlx-community/GLM-4.5-Air-4bit (default for Mac)
- llama-3.1-8b
- qwen2.5-coder:7b
- qwen2.5-coder:32b
- Any model available via your local LLM server
Trace and verify live execution paths using Python's built-in tracing:
# With cloud LLM
python3 dynamic_trace.py --model openrouter/z-ai/glm-4.6 aws_examples/chalice_awsclient.py
# With local LLM
python3 dynamic_trace.py --local-llm --model qwen2.5-coder:7b aws_examples/chalice_awsclient.py
# Or use Docker
python3 dynamic_trace.py --docker --image esbmc-image --model openrouter/z-ai/glm-4.6 aws_examples/chalice_awsclient.py
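dynamic_trace.py itself is not reproduced here; the sketch below only illustrates the standard-library hook, sys.settrace, that line-level tracing in Python builds on, using a hypothetical traced function. Only the lines that actually execute are recorded, so only that path needs to be converted and verified.

# Minimal sketch of line-level tracing via sys.settrace (illustration only).
import sys

executed = []

def tracer(frame, event, arg):
    # Record each executed line of a traced frame as (function name, line number).
    if event == "line":
        executed.append((frame.f_code.co_name, frame.f_lineno))
    return tracer

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
classify(3)            # only the x > 0 branch is recorded
sys.settrace(None)

print(executed)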
To use Ollama as a local LLM backend, install and start it:
brew install ollama
ollama serve

Browse available models: https://ollama.com/library

Pull a model:
ollama pull <model_name>

Tested models:
- qwen2.5-coder:7b
- qwen2.5-coder:32b

# Example with a pulled Ollama model
./verify.sh --local-llm --model qwen2.5-coder:32b jpl-examples/list_comprehension_complex.py --direct

For Mac users, the MLX-based local LLM is pre-configured:
# Start the MLX daemon (one-time setup)
cd mac/llm-mlx
./start_daemon.sh
# Use the default MLX model
./verify.sh --local-llm <filename>
# Or specify the MLX model explicitly
./verify.sh --local-llm --model openai/mlx-community/GLM-4.5-Air-4bit <filename>

If you have a custom LLM server running:
# Set environment variables for your server
export OPENAI_API_KEY=dummy
export OPENAI_API_BASE=http://localhost:8080/v1
# Run with local LLM
./verify.sh --local-llm --model your-model-name <filename>
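Before pointing verify.sh at a custom server, it can help to confirm the endpoint answers at all. The snippet below is a hedged sketch and not part of the toolkit; it assumes the server exposes the standard OpenAI-compatible /v1/models listing at the base URL configured above.

# Connectivity check for an OpenAI-compatible server (illustration only;
# assumes the standard /v1/models endpoint, adjust if your server differs).
import json
import os
import urllib.request

base = os.environ.get("OPENAI_API_BASE", "http://localhost:8080/v1")
key = os.environ.get("OPENAI_API_KEY", "dummy")

req = urllib.request.Request(f"{base}/models",
                             headers={"Authorization": f"Bearer {key}"})
with urllib.request.urlopen(req, timeout=10) as resp:
    data = json.load(resp)

print([m.get("id") for m in data.get("data", [])])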
Note: larger models offer better accuracy but are slower to run.
Note for Mac users: the MLX option (GLM-4.5-Air-4bit) is optimized for Apple Silicon.

This project is under active development. Feedback, issues, and contributions are welcome!