A command-line tool for conducting comprehensive research using OpenAI's deep research models. Features real-time streaming progress, AI-generated folder names, and automatic session cataloging.
```
┏━━━━━━━━━━━━━ Deep Research Progress ━━━━━━━━━━━━━┓
┃ Model: o4-mini-deep-research                      ┃
┃ Elapsed: 0:02:34                                  ┃
┃ Progress: 42.0% (42/100 calls)                    ┃
┃ ETA: 3 min (finishes ~02:32:15 PM)                ┃
┃                                                   ┃
┃ Web Searches: 38                                  ┃
┃ Code Calls: 4                                     ┃
┃                                                   ┃
┃ Recent Actions:                                   ┃
┃ [02:29:41 PM] Searching: quantum computing        ┃
┃ [02:29:45 PM] Opening: nature.com/article...      ┃
┃ [02:29:50 PM] Executing code                      ┃
┃                                                   ┃
┃ ████████████████████░░░░░░░░░░░░░░░░░░░ 42.0%     ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
The tool streams research progress in real time with a rich UI showing a progress bar, elapsed time, ETA, live statistics (web searches, code executions), and recent actions with timestamps.
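The ETA shown in the panel can be derived from elapsed time and tool-call progress. A minimal sketch, assuming a simple average-time-per-call estimator (`estimate_eta` is a hypothetical helper, not the tool's actual code):

```python
from datetime import timedelta

def estimate_eta(elapsed_seconds: float, calls_done: int, calls_total: int) -> timedelta:
    """Estimate remaining time, assuming future tool calls take
    the average duration of the calls completed so far."""
    if calls_done == 0:
        raise ValueError("no completed calls yet; ETA undefined")
    avg_per_call = elapsed_seconds / calls_done
    return timedelta(seconds=avg_per_call * (calls_total - calls_done))

# 42/100 calls after 0:02:34 elapsed (154 s): ~213 s remain, about 3.5 minutes
print(estimate_eta(154, 42, 100))
```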
```
research_sessions/
├── quantum_computing_developments/
│   ├── quantum_computing_developments_research.md
│   └── metadata.json
├── quantum_computing_developments_001/
│   ├── quantum_computing_developments_001_research.md
│   └── metadata.json
└── mrna_vaccine_effectiveness/
    ├── mrna_vaccine_effectiveness_research.md
    └── metadata.json
```
Every research session is automatically saved with an AI-generated folder name (using GPT-5-mini), a disambiguation suffix if needed (`_001`, `_002`, etc.), and two files: `{folder_name}_research.md` (combined input/output with timestamp) and `metadata.json` (session info and tool usage statistics).
- Install uv if you haven't already:

  ```
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Set your OpenAI API key:

  ```
  export OPENAI_API_KEY='your-api-key-here'
  ```

- Run the tool with uv:

  ```
  uv run deep_research.py "your research query"
  ```

Examples:

```
uv run deep_research.py "What are the latest developments in quantum computing?"
uv run deep_research.py --input-file example_query.md

# Edit manual_input.md with your query, then run:
uv run deep_research.py -m -tc 1000
```

Run `uv run deep_research.py --help` for the full option list:

```
usage: deep_research.py [-h]
                        [--model {o3-deep-research,o4-mini-deep-research}]
                        [--no-background] [-tc MAX_TOOL_CALLS]
                        [--no-web-search] [--code-interpreter] [--interactive]
                        [--input-file INPUT_FILE] [-m]
                        [--output-dir OUTPUT_DIR] [--no-save]
                        [query]

Conduct deep research using OpenAI's deep research models

positional arguments:
  query                 Research query to investigate

options:
  -h, --help            show this help message and exit
  --model {o3-deep-research,o4-mini-deep-research}
                        Model to use for research (default: o4-mini-deep-
                        research)
  --no-background       Run research synchronously (default: background mode)
  -tc MAX_TOOL_CALLS, --max-tool-calls MAX_TOOL_CALLS
                        Maximum number of tool calls to make (default: 500)
  --no-web-search       Disable web search
  --code-interpreter    Enable code interpreter for data analysis
  --interactive         Enter interactive mode for multi-line queries
  --input-file INPUT_FILE
                        Read query from a markdown file
  -m, --manual          Read query from manual_input.md
  --output-dir OUTPUT_DIR
                        Directory to save research sessions (default:
                        ./research_sessions)
  --no-save             Don't save research session to disk
```
```
uv run deep_research.py "Research the effectiveness of mRNA vaccines. Include peer-reviewed studies, clinical trial data, and regulatory approvals."
```

```
uv run deep_research.py --input-file example_query.md
```

The tool includes `example_query.md`, which shows how to structure complex research queries with multiple sections and requirements.
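A query file passed via `--input-file` might be structured like this (illustrative content only, not the shipped `example_query.md`):

```markdown
# Research Query

Research the effectiveness of mRNA vaccines.

## Scope
- Peer-reviewed studies
- Clinical trial data
- Regulatory approvals

## Output Requirements
- Inline citations for every claim
- A summary table of trial results
```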
```
uv run deep_research.py "Analyze the electric vehicle market in 2025. Include market share, sales trends, and major manufacturers."
```

```
uv run deep_research.py --code-interpreter "Analyze climate trends and predict future patterns"
```

- Deep research can take several minutes to complete
- Streaming is always enabled for real-time progress updates; background mode is on by default for reliability (disable with `--no-background`)
- Default model is `o4-mini-deep-research` (faster and cheaper); use `--model o3-deep-research` for complex tasks
- Tool calls are capped at 500 by default to control costs
- Research sessions are saved by default to `./research_sessions` (use `--no-save` to disable)
- The tool will show a summary of web searches and code executions made
- Results include inline citations to sources