This project demonstrates how to instrument and trace an example Strands agent to LangSmith using OpenTelemetry, enabling you to monitor model and agent performance, latency, and token usage.
## 🛠 Setup

Clone the repo and enter the project directory:

```shell
git clone https://github.com/langchain-ai/strands-otel-tracing-example
cd strands-otel-tracing-example
```

Copy the `.env.example` file to `.env`:

```shell
cp .env.example .env
```

Then fill in fields such as the OTel endpoint, headers (project and API key), and AWS credentials.

Ensure you have a recent version of uv installed, then sync dependencies and run the example:

```shell
uv sync
uv run otel_strands_share.py
```

You can then see an example trace in the LangSmith project you specified!
Copy `langsmith_exporter.py` into your project, then call `setup_langsmith_telemetry()` before creating your agent:

```python
from langsmith_exporter import setup_langsmith_telemetry
from strands import Agent

setup_langsmith_telemetry()
```

This replaces the standard `StrandsTelemetry().setup_otlp_exporter()` call. It wraps the OTLP exporter with a transformation layer that:
- **Standardizes message attributes** — Strands emits messages as span events, but the GenAI semantic conventions specify them as `gen_ai.prompt`/`gen_ai.completion` span attributes. The exporter normalizes to the expected format.
- **Standardizes content blocks** — Converts Bedrock/Converse-shaped blocks (`{"text": "..."}`, `{"toolUse": {...}}`) into typed blocks (`{"type": "text", "text": "..."}`, `{"type": "tool_use", ...}`).
- **Maps run types** — Sets `langsmith.span.kind` based on `gen_ai.operation.name` so spans render as the correct type in LangSmith (`chain` for agent invocations, `llm` for model calls, `tool` for tool executions).
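To make the block conversion and run-type mapping concrete, here is a minimal standalone sketch of those two transformations. It is not the repo's actual `langsmith_exporter.py` code; the operation names in the mapping and the exact Bedrock field names are illustrative assumptions:

```python
# Illustrative sketch of two of the exporter's transformations.
from typing import Any

# Assumed gen_ai.operation.name values -> langsmith.span.kind
SPAN_KIND_MAP = {
    "invoke_agent": "chain",   # agent invocations render as chains
    "chat": "llm",             # model calls render as LLM runs
    "execute_tool": "tool",    # tool executions render as tool runs
}


def normalize_block(block: dict[str, Any]) -> dict[str, Any]:
    """Convert a Bedrock/Converse-shaped content block into a typed block."""
    if "text" in block:
        return {"type": "text", "text": block["text"]}
    if "toolUse" in block:
        tool_use = block["toolUse"]
        return {
            "type": "tool_use",
            "id": tool_use.get("toolUseId"),
            "name": tool_use.get("name"),
            "input": tool_use.get("input"),
        }
    return block  # pass unrecognized shapes through unchanged


print(normalize_block({"text": "hello"}))
# {'type': 'text', 'text': 'hello'}
```

The real exporter applies this kind of normalization to every span before handing it to the underlying OTLP exporter.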
The exporter reads its endpoint and auth configuration from the standard `OTEL_EXPORTER_OTLP_*` environment variables, which are listed in this repo's `.env.example` file.
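As a rough illustration, a filled-in `.env` might look like the following. The exact endpoint path and header names should be confirmed against `.env.example` and LangSmith's OpenTelemetry documentation; the values below are placeholders, not real credentials:

```shell
# Illustrative values only — confirm against this repo's .env.example
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-langsmith-api-key>,Langsmith-Project=<your-project>"
```

Because these are the standard OTLP environment variables, no code changes are needed to point the exporter at a different OTLP-compatible backend.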