A data analysis assistant built on the Strands Agents SDK, using an Amazon Bedrock Knowledge Base and Guardrail.
Prerequisites:

- Python 3.10 or higher
- Streamlit
- Strands Agents SDK
- MCP
```bash
# Clone the repository
git clone https://github.com/KaJunho/strands-da-assistant.git

# cd into strands-da-assistant/ and create a Python virtual environment called .venv
cd strands-da-assistant
python -m venv .venv
source .venv/bin/activate

# Install in development mode
cd strands-web-ui
pip install -e .
```

Then `cd` to `strands-da-assistant/src/strands_web_ui/` and add a `.env` file to configure environment variables. `.env` should look like:
```
aws_access_key_id = <your ak here>
aws_secret_access_key = <your sk here>
knowledge_base_id = <your kb id here>
guardrail_id = <your guardrail id here>
guardrail_version = <version>
```

Note: you can create your own Amazon Bedrock knowledge base and guardrail in the AWS Management Console. Refer to this doc for procedures.
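For illustration, the variables above could be read with a small stdlib-only loader. This is a hedged sketch, not how the app necessarily loads them (it may use a package such as `python-dotenv` instead); the `example.env` file and its values are made up for the demo:

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY = VALUE lines from a .env file into os.environ."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values

# Demo with a throwaway file (hypothetical IDs, not real credentials)
with open("example.env", "w") as f:
    f.write("knowledge_base_id = KB12345\nguardrail_id = GR67890\n")
print(load_env_file("example.env")["knowledge_base_id"])  # KB12345
```

Keeping credentials in `.env` (and out of version control) is the usual reason for this layout.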
Finally, `cd` back to `strands-da-assistant/` and run `bash start_app.sh` in your console.
Sample questions you can try:

- How many employees does the company have?
- Please tell me the name, employee ID, date of birth, gender, marital status, and position of the top-selling employee in 2024.
- Which employee had the highest total game sales in 2024 and 2025?
- Please query the total sales in 2024 and 2025, and summarize the company's sales trend.
- Please summarize the sales trends from 2024 to 2025 across the company's different divisions.
A Streamlit-based web interface for Strands Agents with thinking process visualization and MCP integration.
- 🤖 Interactive chat interface with streaming responses
- 💭 Visualization of agent thinking process
- 🔧 Tool execution and result display
- 🔌 MCP server integration for extended capabilities
- ⚙️ Configurable model and agent parameters
- 💬 Conversation history management
Prerequisites:

- Python 3.10 or higher
- Streamlit
- Strands Agents SDK
- MCP
Install with pip:

```bash
pip install strands-web-ui
```

Or install from source:

```bash
# Clone the repository
git clone https://github.com/jief123/strands-web-ui.git
cd strands-web-ui

# Install in development mode
pip install -e .
```

This installs the package in "editable" mode, which means you can modify the source code and see the changes immediately without reinstalling.
After installation, you can run the application in several ways:
```bash
# Method 1: Run directly with Streamlit
streamlit run app.py

# Method 2: Run the basic chat example
python examples/basic_chat.py

# Method 3: Run the custom tools example
python examples/custom_tools.py
```

You can also import and use the package in your own Python scripts:
```python
import streamlit as st
from strands_web_ui.app import main

# Run the Strands Web UI application
main()
```

The application can be configured through JSON files in the `config` directory:
- `config_with_thinking.json`: Main configuration file with model, agent, and UI settings
- `mcp_config.json`: MCP server configuration
- `example_config.json`: Example configuration with different settings
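A config loader like `src/strands_web_ui/utils/config_loader.py` typically reads one of these JSON files and merges it over defaults. The following is a hedged sketch under that assumption; the real loader's interface and defaults may differ, and `example_config.json` here is written on the fly for the demo:

```python
import json
import os

# Assumed defaults, mirroring values shown in this README
DEFAULT_CONFIG = {
    "model": {"provider": "bedrock", "max_tokens": 24000},
    "ui": {"update_interval": 0.1},
}

def load_config(path):
    """Load a JSON config file, merging it over the defaults (one level deep)."""
    config = {section: dict(values) for section, values in DEFAULT_CONFIG.items()}
    if os.path.exists(path):
        with open(path) as f:
            user_config = json.load(f)
        for section, values in user_config.items():
            config.setdefault(section, {}).update(values)
    return config

# A partial config only overrides what it specifies
with open("example_config.json", "w") as f:
    json.dump({"model": {"model_id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0"}}, f)
cfg = load_config("example_config.json")
print(cfg["model"]["max_tokens"])  # 24000 (default kept)
```

Merging over defaults lets a config file stay minimal, which is why `example_config.json` can differ from the main file without repeating every setting.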
Configure the model provider, model ID, region, and token limits:
```json
"model": {
  "provider": "bedrock",
  "model_id": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
  "region": "us-east-1",
  "max_tokens": 24000
}
```

Configure the system prompt, tool execution, and thinking capabilities:
```json
"agent": {
  "system_prompt": "You are a helpful assistant that provides concise, accurate information.",
  "max_parallel_tools": 4,
  "record_direct_tool_call": true,
  "hot_reload_tools": true,
  "enable_native_thinking": true,
  "thinking_budget": 16000
}
```

Adjust history window size and management:
```json
"conversation": {
  "window_size": 20,
  "summarize_overflow": true
}
```

Configure UI update intervals:
```json
"ui": {
  "update_interval": 0.1
}
```

To use MCP servers:
- Configure servers in `config/mcp_config.json`:
```json
{
  "mcpServers": {
    "server-id": {
      "command": "command-to-run",
      "args": ["arg1", "arg2"],
      "env": {"ENV_VAR": "value"}
    }
  }
}
```

- Connect to servers through the UI
- Use the tools provided by the servers in your conversations
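The `mcpServers` schema above maps each server ID to a command line and environment. As a hedged sketch of how `mcp_server_manager.py` might consume it (the actual manager's API is not shown here; the `demo` server and its command are invented for illustration):

```python
import json

def load_mcp_servers(path):
    """Read mcpServers entries and return (server_id, argv, env) tuples."""
    with open(path) as f:
        config = json.load(f)
    servers = []
    for server_id, spec in config.get("mcpServers", {}).items():
        # Build the argv the manager would pass to a subprocess launcher
        argv = [spec["command"], *spec.get("args", [])]
        servers.append((server_id, argv, spec.get("env", {})))
    return servers

# Demo with a throwaway config (hypothetical server, not from this project)
with open("mcp_config_example.json", "w") as f:
    json.dump({"mcpServers": {"demo": {
        "command": "uvx", "args": ["some-mcp-server"], "env": {"API_KEY": "x"}
    }}}, f)
for server_id, argv, env in load_mcp_servers("mcp_config_example.json"):
    print(server_id, argv)  # demo ['uvx', 'some-mcp-server']
```

Keeping launch details in JSON means new MCP servers can be added without code changes, which is the point of the config-driven design.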
```
strands_web_ui/
├── config/                       # Configuration files
│   ├── config_with_thinking.json
│   ├── mcp_config.json
│   └── example_config.json
├── examples/                     # Example scripts
│   ├── basic_chat.py
│   └── custom_tools.py
├── src/                          # Source code
│   └── strands_web_ui/
│       ├── __init__.py
│       ├── app.py                # Main application
│       ├── mcp_server_manager.py
│       ├── handlers/             # Event handlers
│       │   └── streamlit_handler.py
│       └── utils/                # Utility functions
│           └── config_loader.py
├── static/                       # Static assets
├── tests/                        # Tests
├── LICENSE
├── README.md
├── CONTRIBUTING.md
└── pyproject.toml
```
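The event handler in `handlers/streamlit_handler.py` presumably bridges streamed agent output into the Streamlit UI. One common pattern consistent with the `ui.update_interval` setting, shown here as a hedged stand-in rather than the actual implementation, is throttling redraws so the UI isn't refreshed on every token:

```python
import time

class ThrottledUpdater:
    """Buffer streamed text and only flush it every `update_interval` seconds."""

    def __init__(self, update_interval=0.1, flush=print):
        self.update_interval = update_interval
        self.flush = flush   # in the real UI this would redraw a Streamlit placeholder
        self.buffer = ""
        self._last = 0.0

    def on_token(self, token):
        """Append a streamed token; redraw only if enough time has passed."""
        self.buffer += token
        now = time.monotonic()
        if now - self._last >= self.update_interval:
            self.flush(self.buffer)
            self._last = now

    def finish(self):
        self.flush(self.buffer)  # final flush so no trailing text is lost

# Demo: with interval 0 every token flushes immediately
updates = []
u = ThrottledUpdater(update_interval=0.0, flush=updates.append)
for tok in ["Hel", "lo"]:
    u.on_token(tok)
u.finish()
print(updates[-1])  # Hello
```

A larger `update_interval` trades UI latency for fewer redraws, which matters when models stream many tokens per second.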
This application demonstrates how to:
- Create a Strands agent with thinking capabilities
- Connect to MCP servers for extended functionality
- Visualize the agent's thinking process in real-time
- Maintain conversation history across interactions
- Execute tools and display results
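The history-management point above can be sketched as a sliding window consistent with the `conversation` settings (`window_size`, `summarize_overflow`). This is a hypothetical helper for illustration, not the SDK's actual API:

```python
def trim_history(messages, window_size=20, summarize_overflow=True):
    """Keep only the most recent `window_size` messages; optionally stub a summary."""
    if len(messages) <= window_size:
        return messages
    overflow, recent = messages[:-window_size], messages[-window_size:]
    if summarize_overflow:
        # A real implementation might ask the model to summarize `overflow`;
        # here we just insert a placeholder so context isn't silently dropped.
        summary = {"role": "system",
                   "content": f"[summary of {len(overflow)} earlier messages]"}
        return [summary] + recent
    return recent

# Demo: 25 messages with a window of 20 leaves 1 summary + 20 recent
history = [{"role": "user", "content": f"msg {i}"} for i in range(25)]
trimmed = trim_history(history, window_size=20)
print(len(trimmed))  # 21
```

Windowing keeps prompts within the model's context limit while the summary stub preserves a trace of older turns.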
The thinking process visualization shows how the agent reasons through problems, making the decision-making process transparent to users.
Contributions are welcome! Please see CONTRIBUTING.md for details.
This project is licensed under the MIT License - see the LICENSE file for details.