This project provides a reference architecture for building a Physical AI system at the convergence of Agentic AI, IoT, and Robotics. It demonstrates how Physical AI bridges the gap between digital intelligence and physical action, combining Agentic AI reasoning, robotic autonomy, and IoT sensing into systems capable of autonomous decision-making and real-world action in industrial environments.
The demo is an industrial safety management solution built on Amazon Bedrock AgentCore, AWS IoT services, and robotics. An intelligent robotic system autonomously patrols hazardous areas, collecting data and performing edge inference, while an AI agent analyzes this information to adjust patrol routes and responses in real time. The result is faster hazard response and better accident prevention in industrial environments where human access is difficult or dangerous.
> **Important**
>
> The examples provided in this repository are for experimental and educational purposes only. They demonstrate concepts and techniques but are not intended for direct use in production environments.
*(Image placeholders: Client App and Dashboard screenshots; Demo Video 1 and Demo Video 2.)*
This project was showcased at AWS AI x Industry Week 2025
A cloud-native, event-driven system that integrates Physical AI, Agentic AI, IoT and Robotics for autonomous industrial monitoring.
- LLM-driven Autonomy: Powers intelligent decision-making beyond pre-programmed logic
- Bedrock AgentCore Deployment: The AI agent and MCP server run in a unified environment, following an Agentic AI pattern
- Dynamic Planning: Interprets complex situations and plans patrol routes by integrating sensor data, video analysis, and user requests
- Context-aware Decisions: Maintains operational context for intelligent responses to unpredictable situations
- Conversational Control: Enables intuitive robot control (e.g., "Patrol the storage area")
- Intelligent Query: Performs intent recognition and converts user requests into structured commands
- MCP Protocol: Standardized, extensible interface between the AI agent and robot hardware, ensuring natural language maps directly to precise robotic actions (a minimal tool sketch follows this list)
- Autonomous Embodiment: The robot acts as the physical embodiment of the AI agent for real-world action
- Real-world Task Execution: Autonomous patrolling and hazard detection (fire, unsafe gestures, gas leaks, etc.)
- Edge-Cloud Hybrid Architecture: Distributed inference optimizes response time and computational efficiency
- Secure Device Communication: AWS IoT Core manages bidirectional data flow between robot fleet and cloud
- Distributed Processing: AWS IoT Greengrass enables low-latency edge inference; cloud performs deep analysis
- Real-time Video Streaming: Amazon Kinesis Video Streams delivers live footage for cloud-based analysis
- Centralized Data Integration: AWS IoT SiteWise aggregates robot telemetry, sensor metrics, and system status
- Unified Dashboard: Amazon Managed Grafana provides real-time operational visibility
- Seamless Feedback Loop: Continuous interaction between robot, AI agent, and human operators
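To make the natural-language-to-action path concrete, the sketch below shows how an MCP tool could expose a patrol command to the agent. It is a minimal sketch assuming the MCP Python SDK's `FastMCP` helper; the tool name, command schema, and zone values are illustrative, not the repo's actual interface.

```python
# Minimal sketch of an MCP tool that turns a natural-language patrol request
# into a structured robot command. Names and schema are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("robot-control")

@mcp.tool()
def start_patrol(zone: str, priority: str = "normal") -> dict:
    """Dispatch a patrol command for a named zone, e.g. 'storage-area'."""
    command = {"action": "patrol", "zone": zone, "priority": priority}
    # A real deployment would forward this command to the robot fleet
    # (e.g. over AWS IoT Core); here we just return the structured command.
    return command

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an agent runtime can call it
```

With such a tool registered, a request like "Patrol the storage area" resolves to `start_patrol(zone="storage-area")`.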
| Service Category | AWS Service | Role in Architecture |
|---|---|---|
| Agentic AI | Amazon Bedrock AgentCore | Agent runtime environment with MCP integration |
| | Amazon Bedrock | Foundation models for reasoning and vision |
| | AWS Lambda | MCP tool integration and robot control |
| Robotics & IoT | AWS IoT Core | Device connectivity and messaging |
| | AWS IoT Greengrass | Edge computing and local inference |
| | Amazon SQS | Event-driven robot feedback streaming |
| Data & Analytics | AWS IoT SiteWise | Industrial data modeling and analytics |
| | Amazon Managed Grafana | Real-time monitoring dashboards |
| | Amazon Kinesis Video Streams | Live video processing and analysis |
| Security | Amazon Cognito | User authentication and authorization |
| | AWS Secrets Manager | Secure credential management |
| Frontend | AWS Amplify | Full-stack web application hosting |
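To make the device-to-cloud path in the table above concrete, here is a minimal sketch that publishes robot telemetry to AWS IoT Core using boto3's data-plane client; the topic name and payload fields are assumptions for illustration, not the repo's actual message contract.

```python
# Minimal sketch: publish robot telemetry to AWS IoT Core.
# The topic name and payload fields are illustrative assumptions.
import json

import boto3

iot_data = boto3.client("iot-data")

telemetry = {
    "robot_id": "patrol-bot-01",
    "zone": "storage-area",
    "gas_ppm": 12.4,
    "battery_pct": 87,
}

iot_data.publish(
    topic="robots/patrol-bot-01/telemetry",  # illustrative topic
    qos=1,  # at-least-once delivery
    payload=json.dumps(telemetry),
)
```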
- AWS Account with Bedrock access enabled
- Python 3.11+ and pip
- Node.js 18+ and npm/yarn
- AWS CLI configured with appropriate permissions
- Basic understanding of AI agents and IoT concepts
- **Clone the repository**

  ```bash
  git clone https://github.com/aws-samples/sample-agentic-ai-robot.git
  cd sample-agentic-ai-robot
  ```
- **Environment Configuration**

  ```bash
  # Copy environment template
  cp .env.template .env

  # Edit with your AWS resource values
  nano .env

  # Generate all configuration files
  python scripts/generate_configs.py
  ```
  See CONFIGURATION.md for comprehensive environment setup instructions and configuration file management.
- **Deploy Backend Services**

  ```bash
  # Install backend dependencies
  cd agent-runtime
  pip install -r requirements.txt

  # Deploy AgentCore runtime
  ./scripts/deploy.sh
  ```
- **Set Up the Frontend Application**

  ```bash
  # Navigate to frontend directory
  cd ../amplify-frontend

  # Install dependencies
  npm install

  # Deploy Amplify backend
  npx ampx sandbox

  # Start development server
  npm start
  ```
- **Deploy IoT Components**

  ```bash
  # Return to root directory
  cd ..

  # Deploy feedback manager
  cd feedback-manager
  python create_feedback_manager.py

  # Deploy robot controller
  cd ../robo-controller
  python create_robo_controller.py

  # Deploy MCP gateway
  cd ../agent-gateway
  python mcp-interface/create_gateway_tool.py
  ```
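The feedback manager deployed above consumes robot events from Amazon SQS. For context, here is a minimal sketch of that consumption loop using boto3; the queue name and message fields are assumptions for illustration, not the repo's actual contract.

```python
# Minimal sketch: poll robot feedback events from an SQS queue with boto3.
# The queue name and message fields are illustrative assumptions.
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="robot-feedback-queue")["QueueUrl"]

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling keeps request volume low
    )
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        print(f"robot {event.get('robot_id')}: {event.get('status')}")
        # Delete only after successful processing
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```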
| Component | Purpose | Technology |
|---|---|---|
| agent-runtime | AI agent backend | Amazon Bedrock, Python |
| agent-gateway | MCP server for robot control | AWS Lambda, MCP |
| amplify-app | Web interface | React, AWS Amplify |
| lambda-iot-managers | IoT data processing | AWS Lambda, AWS IoT Core, SQS |
| lambda-robo-controller | Direct robot commands | AWS Lambda |
| polly-tts | Text-to-speech | AWS Polly |
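To illustrate the polly-tts component's role, here is a minimal sketch that synthesizes a spoken safety alert with Amazon Polly via boto3; the voice, output format, and alert text are illustrative choices, not necessarily what the component uses.

```python
# Minimal sketch: synthesize a spoken safety alert with Amazon Polly.
# Voice, output format, and message text are illustrative choices.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Warning: gas leak detected in the storage area.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# AudioStream is a streaming body; write it out as an mp3 file.
with open("alert.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```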
We would like to thank the following contributors for their valuable contributions to this project:
- Development - Jinseon Lee, Yoojung Lee, Kyoungsu Park, YeonKyung Park, Sejin Kim
- Support - Cheolmin Ki, Yongjin Lee, Hyewon Lee, Areum Lee
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.






