Releases: DataDog/dd-trace-py

4.7.1

15 Apr 10:34
65fecc4


Estimated end-of-life date, accurate to within three months: 06-2027
See the support level definitions for more information.

Bug Fixes

  • CI Visibility: This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.
  • Fixed a race condition with internal periodic threads that could have caused a rare crash when forking.
  • Fixes an issue where internal background threads could cause crashes or instability in applications that fork (e.g. Gunicorn, uWSGI) or during Python shutdown. Affected applications could experience intermittent crashes or hangs on exit.
  • CI Visibility: This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
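A minimal sketch of the eager-flushing setup for pytest-xdist suites (the pytest invocation is a placeholder for your own):

```shell
# Flush every finished span immediately, so a crashed pytest-xdist
# worker loses at most the in-flight test event.
export DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1
pytest -n auto
```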

4.6.8

15 Apr 09:46
ad4587e


Estimated end-of-life date, accurate to within three months: 06-2027
See the support level definitions for more information.

Bug Fixes

  • CI Visibility: This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.
  • Fixed a race condition with internal periodic threads that could have caused a rare crash when forking.
  • Fixes an issue where internal background threads could cause crashes or instability in applications that fork (e.g. Gunicorn, uWSGI) or during Python shutdown. Affected applications could experience intermittent crashes or hangs on exit.
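The fork-related fixes above follow a well-known pattern for fork-safe background threads. The sketch below illustrates that general pattern only (it is not ddtrace's internal implementation): quiesce the worker before a fork and recreate it on both sides, so the child never inherits a thread that may hold a lock mid-operation.

```python
import os
import threading

class PeriodicWorker:
    """Generic fork-safe periodic thread (illustrative, not ddtrace internals)."""

    def __init__(self, interval=0.01):
        self.interval = interval
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        # Always create a fresh Thread: a Thread object cannot be restarted,
        # and a thread inherited across fork() no longer runs in the child.
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.wait(self.interval):
            pass  # periodic work (flushing buffers, heartbeats, ...) goes here

    def stop(self):
        self._stop.set()
        if self._thread is not None:
            self._thread.join()
            self._thread = None

worker = PeriodicWorker()
worker.start()

# Quiesce before fork, restart on both sides: the child never inherits
# a worker thread that might hold an internal lock mid-operation.
os.register_at_fork(
    before=worker.stop,
    after_in_parent=worker.start,
    after_in_child=worker.start,
)

worker.stop()
```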

4.5.10

14 Apr 15:03
64ee736


Estimated end-of-life date, accurate to within three months: 06-2027
See the support level definitions for more information.

Bug Fixes

  • CI Visibility: This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.

  • Fixed a race condition with internal periodic threads that could have caused a rare crash when forking.

4.8.0rc4

13 Apr 21:05
8511aad


Pre-release

Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

Upgrade Notes

  • ray
    • ray.job.submit spans are removed. Ray job submission outcome is now reported on the existing ray.job span through ray.job.submit_status.

Deprecation Notes

  • LLM Observability
    • Removes support for the RAGAS integration. As an alternative, if you have RAGAS evaluations, you can manually submit these evaluation results. See LLM Observability external evaluation documentation for more information.
  • tracing
    • The pin parameter in ddtrace.contrib.dbapi.TracedConnection, ddtrace.contrib.dbapi.TracedCursor, and ddtrace.contrib.dbapi_async.TracedAsyncConnection is deprecated and will be removed in version 5.0.0. To manage DB tracing configuration, use the integration configuration options and environment variables instead.
    • DD_TRACE_INFERRED_PROXY_SERVICES_ENABLED is deprecated and will be removed in 5.0.0. Use DD_TRACE_INFERRED_SPANS_ENABLED instead. The old environment variable continues to work but emits a DDTraceDeprecationWarning when set.
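    Migrating is a one-line environment change (sketch):

```shell
# Deprecated (still honored, but emits DDTraceDeprecationWarning):
# export DD_TRACE_INFERRED_PROXY_SERVICES_ENABLED=true

# Preferred replacement:
export DD_TRACE_INFERRED_SPANS_ENABLED=true
```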

New Features

  • profiling

    • Thread sub-sampling is now supported. This allows setting a maximum number of threads for which stacks are captured at each sampling interval, which can be used to reduce the CPU overhead of the Stack Profiler.
  • ASM

    • Adds a LiteLLM proxy guardrail integration for Datadog AI Guard. The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2 available since litellm>=1.46.1.
  • azure_cosmos

    • Add tracing support for Azure CosmosDB. This integration traces CRUD operations on CosmosDB databases, containers, and items.
  • CI Visibility

    • Adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.
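    A minimal sketch of the two setups described above:

```shell
# Agentless CI: submit test logs directly to Datadog.
export DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true

# Or, when running with a Datadog Agent, enable log injection instead:
# export DD_LOGS_INJECTION=true
```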
  • llama_index

    • Adds APM tracing and LLM Observability support for llama-index-core>=0.11.0. Traces LLM calls, query engines, retrievers, embeddings, and agents. See the llama_index documentation for more information.
  • tracing

    • Adds support for exporting traces in OTLP HTTP/JSON format via libdatadog. Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
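    A minimal sketch of enabling the OTLP exporter. The endpoint value below is a placeholder for your own collector; OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry variable, not ddtrace-specific:

```shell
# Send spans as OTLP HTTP/JSON instead of to the Datadog Agent.
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```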
  • mysql

    • This introduces tracing support for mysql.connector.aio.connect in the MySQL integration.
  • LLM Observability

    • Adds support for enabling and disabling LLMObs via Remote Configuration.
    • Introduces a decorator tag to LLM Observability spans that are traced by a function decorator.
    • Experiments accept a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in our documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide

    Example:

    from pydantic_evals.evaluators import ReportEvaluator
    from pydantic_evals.evaluators import ReportEvaluatorContext
    from pydantic_evals.reporting.analyses import ScalarResult
    
    from ddtrace.llmobs import LLMObs
    
    dataset = LLMObs.create_dataset(
        dataset_name="<DATASET_NAME>",
        description="<DATASET_DESCRIPTION>",
        records=[RECORD_1, RECORD_2, RECORD_3, ...]
    )
    
    class TotalCasesEvaluator(ReportEvaluator):
        def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
            return ScalarResult(
                title='Total Cases',
                value=len(ctx.report.cases),
                unit='cases',
            )
    
    def my_task(input_data, config):
        return input_data["output"]
    
    equals_expected = EqualsExpected()
    summary_evaluator = TotalCasesEvaluator()
    
    experiment = LLMObs.experiment(
        name="<EXPERIMENT_NAME>",
        task=my_task,
        dataset=dataset,
        evaluators=[equals_expected],
        summary_evaluators=[summary_evaluator],
        description="<EXPERIMENT_DESCRIPTION>."
    )
    
    result = experiment.run()
    

Bug Fixes

  • profiling
    • Fixes lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.
    • A rare crash that could occur post-fork in fork-based applications has been fixed.
    • A bug in Lock Profiling that could cause crashes when trying to access attributes of custom Lock subclasses (e.g. in Ray) has been fixed.
    • A rare crash occurring when profiling asyncio code with many tasks or deep call stacks has been fixed.
  • internal
    • Fix a potential internal thread leak in fork-heavy applications.
    • This fix resolves an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.
    • A crash that could occur post-fork in fork-heavy applications has been fixed.
    • A crash has been fixed.
  • LLM Observability
    • Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
    • Fixes multimodal OpenAI chat completion inputs being rendered as raw iterable objects in LLM Observability traces. Multimodal content parts (text, image, audio) are now properly materialized and formatted as readable text.
    • Fixes model_name and model_provider reported on AWS Bedrock LLM spans as the model_id full model identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock" respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.
    • Fixes an issue where deferred tools (defer_loading=True) in Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.
    • Fixes an issue where deeply nested tool schemas in Anthropic and OpenAI integrations were not yet supported. The Anthropic and OpenAI integrations now check each tool's schema depth at extraction time. If a tool's schema exceeds the maximum allowed depth, the schema is truncated.
  • CI Visibility
    • This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
    • This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.
  • Code Security (IAST)
    • This fix resolves a thread-safety issue in the IAST taint tracking context that could cause vulnerability detection to silently stop working under high concurrency in multi-threaded applications.
    • Fixes a missing return in the IAST taint tracking add_aspect native function that caused redundant work when only the right operand of a string concatenation was tainted.
  • celery
    • Removes an unnecessary warning log about a missing span when using Task.replace().
  • django
    • Fixes RuntimeError: coroutine ignored GeneratorExit that occurred under ASGI with async views and async middleware hooks on Python 3.13+. Async view methods and middleware hooks are now correctly detected and awaited instead of being wrapped with sync bytecode wrappers.
  • ray
    • This fix resolves an issue where Ray integration spans could use an incorrect service name when the Ray job name was set after instrumentation initialization.
  • Other
    • Fixed a race condition with internal periodic threads that could have caused a rare crash when forking.
    • Fixes an issue where internal background threads could cause crashes or instability in applications that fork (e.g. Gunicorn, uWSGI) or during Python shutdown. Affected applications could experience intermittent crashes or hangs on exit.
  • tracing
    • Fixes the svc.auto process tag attribution logic. The tag now correctly reflects the auto-detected service name derived from the script or module entrypoint, matching the service name the tracer would assign to spans.
    • This fix resolves an issue where applications started with python -m <module> could report entrypoint.name as -m in process tags.
    • Fixed an issue where network.client.ip and http.client_ip span tags were missing when client IP collection was enabled and request had no headers.
  • litellm
    • Fix missing LLMObs spans when routing requests through a litellm proxy. Proxy requests were incorrectly suppressed and resulted in empty or missing LLMObs spans. Proxy requests for OpenAI models are now always handled by the litellm integration....

4.7.0

10 Apr 20:47
7b6be9f


Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

Upgrade Notes

  • profiling
    • This compiles the lock profiler's hot path to C via Cython, reducing per-operation overhead. At the default 1% capture rate, lock operations are ~49% faster for both contended and uncontended workloads. At 100% capture, gains are ~15-19%. No configuration changes are required.
  • openfeature
    • The minimum required version of openfeature-sdk is now 0.8.0 (previously 0.6.0). This is required for the finally_after hook to receive evaluation details for metrics tracking.

API Changes

  • openfeature
    • Flag evaluations for non-existent flags now return Reason.ERROR with ErrorCode.FLAG_NOT_FOUND instead of Reason.DEFAULT when configuration is available but the flag is not found. The previous behavior (Reason.DEFAULT) is preserved when no configuration is loaded. This aligns Python with other Datadog SDK implementations.

New Features

  • mlflow

    • Adds a request header provider (auth plugin) for MLFlow. If the environment variables DD_API_KEY, DD_APP_KEY and DD_MODEL_LAB_ENABLED are set, HTTP requests to the MLFlow tracking server will include the DD-API-KEY and DD-APPLICATION-KEY headers. #16685
  • ai_guard

    • Calls to evaluate now block if blocking was enabled for the service in the AI Guard UI. This behavior can be disabled by passing block=False; the parameter defaults to block=True.
    • This updates the AI Guard API client to return Sensitive Data Scanner (SDS) results in the SDK response.
    • This introduces AI Guard support for Strands Agents. The Plugin API requires strands-agents>=1.29.0; the HookProvider works with any version that exposes the hooks system.
  • azure_durable_functions

    • Add tracing support for Azure Durable Functions. This integration traces durable activity and entity functions.
  • profiling

    • This adds process tags to profiler payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • runtime metrics

    • This adds process tags to runtime metrics tags. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • remote configuration

    • This adds process tags to remote configuration payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • dynamic instrumentation

    • This adds process tags to debugger payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • crashtracking

    • This adds process tags to crash tracking payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • data streams monitoring

    • This adds process tags to Data Streams Monitoring payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • database monitoring

    • This adds process tags to Database Monitoring SQL service hash propagation. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • Stats computation

    • This adds process tags to stats computation payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
  • LLM Observability

    • Adds support for capturing stop_reason and structured_output from the Claude Agent SDK integration.

    • Adds support for user-defined dataset record IDs. Users can now supply an optional id field when creating dataset records via Dataset.append(), Dataset.extend(), create_dataset(), or create_dataset_from_csv() (via the new id_column parameter). If no id is provided, the SDK generates one automatically.

    • Experiment tasks can now optionally receive dataset record metadata as a third metadata parameter. Tasks with the existing (input_data, config) signature continue to work unchanged.
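    One way a runner can support both task signatures is by inspecting the task's arity. The sketch below is a hypothetical illustration of that dispatch, not ddtrace's actual implementation:

```python
import inspect

def call_task(task, input_data, config, metadata):
    """Dispatch to a 2- or 3-argument task based on its signature
    (hypothetical runner logic, shown for illustration only)."""
    n_params = len(inspect.signature(task).parameters)
    if n_params >= 3:
        return task(input_data, config, metadata)
    return task(input_data, config)

def legacy_task(input_data, config):  # existing signature, unchanged
    return input_data["question"]

def metadata_task(input_data, config, metadata):  # new optional third parameter
    return (input_data["question"], metadata["source"])

assert call_task(legacy_task, {"question": "q"}, {}, {"source": "csv"}) == "q"
assert call_task(metadata_task, {"question": "q"}, {}, {"source": "csv"}) == ("q", "csv")
```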

    • This introduces RemoteEvaluator which allows users to reference LLM-as-Judge evaluations configured in the Datadog UI by name when running local experiments. For more information, see the documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide/#using-managed-evaluators

    • This adds cache creation breakdown metrics for the Anthropic integration. When making Anthropic calls with prompt caching, ephemeral_5m_input_tokens and ephemeral_1h_input_tokens metrics are now reported, distinguishing between 5 minute and 1 hour prompt caches.

    • Adds support for reasoning and extended thinking content in Anthropic, LiteLLM, and OpenAI-compatible integrations. Anthropic thinking blocks (type: "thinking") are now captured as role: "reasoning" messages in both streaming and non-streaming responses, as well as in input messages for tool use continuations. LiteLLM now extracts reasoning_output_tokens from completion_tokens_details and captures reasoning_content in output messages for OpenAI-compatible providers.

    • LLMJudge now forwards any extra client_options to the underlying provider client constructor. This allows passing provider-specific options such as base_url, timeout, organization, or max_retries directly through client_options.

    • Dataset records' tags can now be operated on with three new Dataset methods: dataset.add_tags, dataset.remove_tags, and dataset.replace_tags. Each method accepts an int indicating the zero-based index of the record to operate on and a list of strings in key:value format representing the tags. For example, if the tag "env:prod" exists on the first record of the dataset ds, calling ds.remove_tags(0, ["env:prod"]) will update the local state of that record so the "env:prod" tag is removed.
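    The sketch below is an illustrative stand-in for these tag semantics (plain dicts and lists, not the ddtrace Dataset class):

```python
# Each record carries a list of "key:value" tag strings.
records = [{"tags": ["env:prod", "team:ml"]}, {"tags": []}]

def add_tags(records, index, tags):
    # Append only tags not already present on the record.
    records[index]["tags"].extend(t for t in tags if t not in records[index]["tags"])

def remove_tags(records, index, tags):
    records[index]["tags"] = [t for t in records[index]["tags"] if t not in tags]

def replace_tags(records, index, tags):
    records[index]["tags"] = list(tags)

remove_tags(records, 0, ["env:prod"])
assert records[0]["tags"] == ["team:ml"]

add_tags(records, 1, ["stage:dev"])
replace_tags(records, 1, ["stage:prod"])
assert records[1]["tags"] == ["stage:prod"]
```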

    • Change experiment execution to run evaluators immediately after each record's task completes instead of batching all tasks first. Experiment spans and evaluation metrics are now posted incrementally as records complete rather than waiting until the end. This improves progress visibility and preserves partial results if a run fails midway.

    • Adds support for Pydantic AI evaluations in LLM Observability Experiments by allowing users to pass a pydantic_evals evaluator (which inherits from Evaluator) to an LLM Obs Experiment.

      Example:

      from pydantic_evals.evaluators import EqualsExpected

      from ddtrace.llmobs import LLMObs

      dataset = LLMObs.create_dataset(
          dataset_name="<DATASET_NAME>",
          description="<DATASET_DESCRIPTION>",
          records=[RECORD_1, RECORD_2, RECORD_3, ...]
      )

      def my_task(input_data, config):
          return input_data["output"]

      def my_summary_evaluator(inputs, outputs, expected_outputs, evaluators_results):
          return evaluators_results["Correctness"].count(True)

      equals_expected = EqualsExpected()

      experiment = LLMObs.experiment(
          name="<EXPERIMENT_NAME>",
          task=my_task,
          dataset=dataset,
          evaluators=[equals_expected],
          # summary_evaluators is optional, used to summarize the experiment results
          summary_evaluators=[my_summary_evaluator],
          description="<EXPERIMENT_DESCRIPTION>."
      )

      result = experiment.run()

  • tracer

    • This introduces API endpoint discovery support for Tornado applications. HTTP endpoints are now automatically collected at application startup and reported via telemetry, bringing Tornado in line with Flask, FastAPI, and Django.
    • This adds process tags to trace payloads. To deactivate this feature, set DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
    • Adds instrumentation support for mlflow>=2.11.0. See the mlflow documentation (https://ddtrace.readthedocs.io/en/stable/integrations.html#mlflow) for more information.
    • Adds process tags to the client-side stats payload.
  • aiohttp

    • Fixed an issue where spans captured an incomplete URL (e.g. /status/200) when aiohttp.ClientSession was initialized with a base_url. The span now records the fully-resolved URL (e.g. http://host:port/status/200), matching aiohttp's internal behaviour.
  • pymongo

    • Adds a new configuration option, DD_TRACE_MONGODB_OBFUSCATION, to control whether the mongodb.query tag is obfuscated. Resource names always remain normalized regardless of the value. To preserve raw mongodb.query values, pair it with DD_APM_OBFUSCATION_MONGODB_ENABLED=false on the Datadog Agent. See the Datadog trace obfuscation documentation for details.
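    A sketch of the pairing described above (boolean values assumed); note the second variable is set on the Datadog Agent, not in the application environment:

```shell
# Application / tracer side: keep raw mongodb.query values on spans.
export DD_TRACE_MONGODB_OBFUSCATION=false

# Datadog Agent side: stop the Agent from re-obfuscating them.
export DD_APM_OBFUSCATION_MONGODB_ENABLED=false
```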
  • google_cloud_pubsub

    • Add tracing support for the google-cloud-pubsub library. Instruments PublisherClient.publish() and SubscriberClient.subscribe() to generate spans for message publishing and consuming, with optional distributed trace context propagation via message attributes. Use DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_ENABLED to control context propagation (default: True) and DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_AS_SPAN_LINKS to attach propagated context as span links instead of re-parenting subscriber spans under the producer trace (default: False).
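    For example, to keep propagation enabled but represent the producer context as span links rather than re-parenting subscriber spans (a configuration sketch):

```shell
export DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_ENABLED=true
export DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_AS_SPAN_LINKS=true
```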

Bug Fixes

  • AAP
    • Fix multipart request body parsing to preserve all values when the same field name appears multiple times. Previously, only the last value was kept for duplicate keys in multipart/form-data bodies, which could allow an attacker to bypass WAF inspection by hiding a malicious value among safe ones.
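    The underlying rule is easy to illustrate with ordinary form decoding: a parser must retain every value of a repeated field name, as Python's standard parse_qs does, so no value can hide behind a later duplicate. (A sketch of the general principle, not the ddtrace parser itself.)

```python
from urllib.parse import parse_qs

# Every value of a repeated key must be kept for inspection; keeping
# only the last one would let "malicious" hide behind "safe".
body = "field=safe&field=malicious"
parsed = parse_qs(body)
assert parsed["field"] == ["safe", "malicious"]
```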
    • Fixes a minor issue where the ASGI middleware used the framew...

4.5.9

10 Apr 10:04
3bf7bdf


Estimated end-of-life date, accurate to within three months: 06-2027
See the support level definitions for more information.

Bug Fixes

  • Fixes an issue where internal background threads could cause crashes or instability in applications that fork (e.g. Gunicorn, uWSGI) or during Python shutdown. Affected applications could experience intermittent crashes or hangs on exit.
  • CI Visibility: This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.

4.8.0rc3

09 Apr 18:05
973b840


Pre-release

Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

Deprecation Notes

  • LLM Observability
    • Removes support for the RAGAS integration. As an alternative, if you have RAGAS evaluations, you can manually submit these evaluation results. See LLM Observability external evaluation documentation for more information.
  • tracing
    • The pin parameter in ddtrace.contrib.dbapi.TracedConnection, ddtrace.contrib.dbapi.TracedCursor, and ddtrace.contrib.dbapi_async.TracedAsyncConnection is deprecated and will be removed in version 5.0.0. To manage DB tracing configuration, use the integration configuration options and environment variables instead.

New Features

  • ASM

    • Adds a LiteLLM proxy guardrail integration for Datadog AI Guard. The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2 available since litellm>=1.46.1.
  • azure_cosmos

    • Add tracing support for Azure CosmosDB. This integration traces CRUD operations on CosmosDB databases, containers, and items.
  • CI Visibility

    • Adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.
  • llama_index

    • Adds APM tracing and LLM Observability support for llama-index-core>=0.11.0. Traces LLM calls, query engines, retrievers, embeddings, and agents. See the llama_index documentation for more information.
  • tracing

    • Adds support for exporting traces in OTLP HTTP/JSON format via libdatadog. Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
  • LLM Observability

    • Introduces a decorator tag to LLM Observability spans that are traced by a function decorator.
    • Experiments accept a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in our documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide

    Example:

    from pydantic_evals.evaluators import ReportEvaluator
    from pydantic_evals.evaluators import ReportEvaluatorContext
    from pydantic_evals.reporting.analyses import ScalarResult
    
    from ddtrace.llmobs import LLMObs
    
    dataset = LLMObs.create_dataset(
        dataset_name="<DATASET_NAME>",
        description="<DATASET_DESCRIPTION>",
        records=[RECORD_1, RECORD_2, RECORD_3, ...]
    )
    
    class TotalCasesEvaluator(ReportEvaluator):
        def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
            return ScalarResult(
                title='Total Cases',
                value=len(ctx.report.cases),
                unit='cases',
            )
    
    def my_task(input_data, config):
        return input_data["output"]
    
    equals_expected = EqualsExpected()
    summary_evaluator = TotalCasesEvaluator()
    
    experiment = LLMObs.experiment(
        name="<EXPERIMENT_NAME>",
        task=my_task,
        dataset=dataset,
        evaluators=[equals_expected],
        summary_evaluators=[summary_evaluator],
        description="<EXPERIMENT_DESCRIPTION>."
    )
    
    result = experiment.run()
    

Bug Fixes

  • profiling
    • Fixes lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.
    • A rare crash that could occur post-fork in fork-based applications has been fixed.
    • A bug in Lock Profiling that could cause crashes when trying to access attributes of custom Lock subclasses (e.g. in Ray) has been fixed.
  • internal
    • Fix a potential internal thread leak in fork-heavy applications.
    • This fix resolves an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.
    • A crash that could occur post-fork in fork-heavy applications has been fixed.
  • LLM Observability
    • Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
    • Fixes multimodal OpenAI chat completion inputs being rendered as raw iterable objects in LLM Observability traces. Multimodal content parts (text, image, audio) are now properly materialized and formatted as readable text.
    • Fixes model_name and model_provider reported on AWS Bedrock LLM spans as the model_id full model identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock" respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.
    • Fixes an issue where deferred tools (defer_loading=True) in Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.
  • CI Visibility
    • This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
    • This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.
  • Code Security (IAST)
    • This fix resolves a thread-safety issue in the IAST taint tracking context that could cause vulnerability detection to silently stop working under high concurrency in multi-threaded applications.

4.8.0rc2

08 Apr 23:37
fea6a8b


Pre-release

Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

New Features

  • ASM

    • Adds a LiteLLM proxy guardrail integration for Datadog AI Guard. The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2 available since litellm>=1.46.1.
  • azure_cosmos

    • Add tracing support for Azure CosmosDB. This integration traces CRUD operations on CosmosDB databases, containers, and items.
  • CI Visibility

    • Adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.
  • llama_index

    • Adds APM tracing and LLM Observability support for llama-index-core>=0.11.0. Traces LLM calls, query engines, retrievers, embeddings, and agents. See the llama_index documentation for more information.
  • tracing

    • Adds support for exporting traces in OTLP HTTP/JSON format via libdatadog. Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
  • LLM Observability

    • Introduces a decorator tag to LLM Observability spans that are traced by a function decorator.
    • Experiments accept a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in our documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide

    Example:

    from pydantic_evals.evaluators import ReportEvaluator
    from pydantic_evals.evaluators import ReportEvaluatorContext
    from pydantic_evals.reporting.analyses import ScalarResult
    
    from ddtrace.llmobs import LLMObs
    
    dataset = LLMObs.create_dataset(
        dataset_name="<DATASET_NAME>",
        description="<DATASET_DESCRIPTION>",
        records=[RECORD_1, RECORD_2, RECORD_3, ...]
    )
    
    class TotalCasesEvaluator(ReportEvaluator):
        def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
            return ScalarResult(
                title='Total Cases',
                value=len(ctx.report.cases),
                unit='cases',
            )
    
    def my_task(input_data, config):
        return input_data["output"]
    
    equals_expected = EqualsExpected()
    summary_evaluator = TotalCasesEvaluator()
    
    experiment = LLMObs.experiment(
        name="<EXPERIMENT_NAME>",
        task=my_task,
        dataset=dataset,
        evaluators=[equals_expected],
        summary_evaluators=[summary_evaluator],
        description="<EXPERIMENT_DESCRIPTION>."
    )
    
    result = experiment.run()
    
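The OTLP export feature above is configured entirely through environment variables. A minimal sketch, assuming the standard OpenTelemetry exporter variables are honored by ddtrace (only OTEL_TRACES_EXPORTER=otlp comes from the release note; the endpoint and protocol variables are assumptions based on the OTel specification):

```shell
# Switch documented in the release note: export spans over OTLP instead
# of sending them to the Datadog Agent.
export OTEL_TRACES_EXPORTER=otlp

# Assumed: standard OTel variables selecting the collector endpoint and
# the HTTP/JSON encoding mentioned in the note.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/json

# Run the instrumented application as usual (app.py is a placeholder).
ddtrace-run python app.py
```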

Bug Fixes

  • profiling
    • Fixes lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.
    • A rare crash that could occur post-fork in fork-based applications has been fixed.
    • A bug in Lock Profiling that could cause crashes when trying to access attributes of custom Lock subclasses (e.g. in Ray) has been fixed.
  • internal
    • Fixes a potential internal thread leak in fork-heavy applications.
    • This fix resolves an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.
    • A crash that could occur post-fork in fork-heavy applications has been fixed.
  • LLM Observability
    • Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
    • Fixes multimodal OpenAI chat completion inputs being rendered as raw iterable objects in LLM Observability traces. Multimodal content parts (text, image, audio) are now properly materialized and formatted as readable text.
    • Fixes model_name and model_provider on AWS Bedrock LLM spans, which were previously reported as the full model_id identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock" respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.
    • Fixes an issue where deferred tools (defer_loading=True) in Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.
  • CI Visibility
    • This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
    • This fix resolves an issue where a failure response from the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.

4.6.7

08 Apr 14:50
92f4aef


Estimated end-of-life date, accurate to within three months: 06-2027
See the support level definitions for more information.

Bug Fixes

  • LLM Observability: Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
  • CI Visibility: This fix resolves an issue where pytest-xdist worker crashes (os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
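The eager-flushing workaround above can be combined with a typical pytest-xdist run. A hedged sketch (the pytest flags are illustrative, not prescribed by the release note):

```shell
# Flush each finished span immediately, so a crashing xdist worker loses
# at most the spans still in flight rather than its whole buffer.
export DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1

# Illustrative invocation: ddtrace's pytest plugin plus parallel xdist workers.
pytest -p ddtrace -n auto
```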

4.8.0rc1

07 Apr 17:38
78a8fa3


4.8.0rc1 Pre-release

Estimated end-of-life date, accurate to within three months: 05-2027
See the support level definitions for more information.

Deprecation Notes

  • LLM Observability
    • Removes support for the RAGAS integration. As an alternative, if you have RAGAS evaluations, you can manually submit these evaluation results. See LLM Observability external evaluation documentation for more information.

New Features

  • ASM

    • Adds a LiteLLM proxy guardrail integration for Datadog AI Guard. The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2, available since litellm>=1.46.1.
  • azure_cosmos

    • Adds tracing support for Azure CosmosDB. This integration traces CRUD operations on CosmosDB databases, containers, and items.
  • CI Visibility

    • Adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.
  • tracing

    • Adds support for exporting traces in OTLP HTTP/JSON format via libdatadog. Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
  • LLM Observability

    • Introduces a decorator tag to LLM Observability spans that are traced by a function decorator.
    • Experiments accept a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in our documentation.

    Example:

    from pydantic_evals.evaluators import EqualsExpected
    from pydantic_evals.evaluators import ReportEvaluator
    from pydantic_evals.evaluators import ReportEvaluatorContext
    from pydantic_evals.reporting.analyses import ScalarResult
    
    from ddtrace.llmobs import LLMObs
    
    dataset = LLMObs.create_dataset(
        dataset_name="<DATASET_NAME>",
        description="<DATASET_DESCRIPTION>",
        records=[RECORD_1, RECORD_2, RECORD_3, ...]
    )
    
    class TotalCasesEvaluator(ReportEvaluator):
        def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
            return ScalarResult(
                title='Total Cases',
                value=len(ctx.report.cases),
                unit='cases',
            )
    
    def my_task(input_data, config):
        return input_data["output"]
    
    equals_expected = EqualsExpected()
    summary_evaluator = TotalCasesEvaluator()
    
    experiment = LLMObs.experiment(
        name="<EXPERIMENT_NAME>",
        task=my_task,
        dataset=dataset,
        evaluators=[equals_expected],
        summary_evaluators=[summary_evaluator],
        description="<EXPERIMENT_DESCRIPTION>."
    )
    
    result = experiment.run()
    
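Registering the AI Guard guardrail described above happens in the LiteLLM proxy's config.yaml. A minimal sketch, assuming LiteLLM's custom-guardrail configuration convention; the guardrail_name label and mode value are illustrative, only the class path comes from the release note:

```yaml
guardrails:
  - guardrail_name: "datadog-ai-guard"   # arbitrary label (illustrative)
    litellm_params:
      # Fully qualified class path from the release note.
      guardrail: ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail
      mode: "pre_call"                   # assumed hook point; may differ
```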

Bug Fixes

  • profiling
    • Fixes lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.
  • internal
    • Fixes a potential internal thread leak in fork-heavy applications.
    • This fix resolves an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.
    • A crash that could occur post-fork in fork-heavy applications has been fixed.
  • LLM Observability
    • Fixes incorrect span hierarchy in LLMObs traces when using the ddtrace SDK alongside OTel-based instrumentation (e.g. Strands Agents). OTel gen_ai spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
    • Fixes model_name and model_provider on AWS Bedrock LLM spans, which were previously reported as the full model_id identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock" respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.
    • Fixes an issue where deferred tools (defer_loading=True) in Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.