Conversation


@QuarkOS commented on Jan 5, 2026

Fixes #5146

Problem

ChatGenerationMetadata.getFinishReason() returns raw provider-specific strings that vary across providers:

| Provider | Completed | Truncated | Tool Call | Filtered |
|---|---|---|---|---|
| OpenAI | STOP | LENGTH | TOOL_CALLS | CONTENT_FILTER |
| Anthropic | end_turn | max_tokens | tool_use | - |
| Gemini | STOP | MAX_TOKENS | - | SAFETY |
| Azure OpenAI | stop | length | tool_calls | content_filter |
| Bedrock | end_turn | max_tokens | tool_use | - |

This forces every application to implement its own mapping logic in order to produce consistent audits, metrics, and alerts across providers.

Solution

Introduce a normalized FinishReasonCategory enum with a static categorize() method:

public enum FinishReasonCategory {
    COMPLETED,   // Normal stop
    TRUNCATED,   // Length/token limits
    TOOL_CALL,   // Tool invocation
    FILTERED,    // Content filtering
    OTHER,       // Known but uncategorized
    UNKNOWN      // Null/empty/unrecognized
}
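
For orientation, categorization is case-insensitive over the provider values listed in the table above. The following is only a sketch of what categorize() could look like; the exact mapping lives in FinishReasonCategory.java and may differ in detail:

public static FinishReasonCategory categorize(String finishReason) {
    // Null, empty, or blank finish reasons cannot be classified.
    if (finishReason == null || finishReason.isBlank()) {
        return UNKNOWN;
    }
    // Case-insensitive match against the provider-specific values from the table above.
    return switch (finishReason.toLowerCase()) {
        case "stop", "end_turn" -> COMPLETED;
        case "length", "max_tokens" -> TRUNCATED;
        case "tool_calls", "tool_use" -> TOOL_CALL;
        case "content_filter", "safety" -> FILTERED;
        // Reasons explicitly recognized but fitting no bucket would map to OTHER;
        // anything unrecognized falls through to UNKNOWN.
        default -> UNKNOWN;
    };
}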

Add a default method to ChatGenerationMetadata:

default FinishReasonCategory getFinishReasonCategory() {
    return FinishReasonCategory.categorize(getFinishReason());
}

Usage

ChatResponse response = chatModel.call(prompt);
FinishReasonCategory category = response.getResult()
    .getMetadata()
    .getFinishReasonCategory();

// Provider-agnostic handling
if (category == FinishReasonCategory.TRUNCATED) {
    log.warn("Output was truncated");
}
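
The raw string from getFinishReason() stays available, so both values can be recorded side by side, e.g. for audits and metrics (the log format below is illustrative):

// Keep the raw provider value for debugging and the category for provider-agnostic dashboards.
String rawReason = response.getResult().getMetadata().getFinishReason();
log.info("finishReason={} category={}", rawReason, category);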

Changes

  • New: FinishReasonCategory.java - Enum with categorize() method
  • Modified: ChatGenerationMetadata.java - Added getFinishReasonCategory() default method
  • New: FinishReasonCategoryTests.java - Parameterized tests for all providers (a sketch of this test shape follows below this list)
  • New: ChatGenerationMetadataTests.java - Integration tests
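
The parameterized tests roughly follow the shape sketched below (class and method names here are illustrative, not the PR's actual sources):

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FinishReasonCategorySketchTests {

    @ParameterizedTest
    @CsvSource({
        "STOP, COMPLETED",        // OpenAI, Gemini
        "end_turn, COMPLETED",    // Anthropic, Bedrock
        "LENGTH, TRUNCATED",      // OpenAI
        "max_tokens, TRUNCATED",  // Anthropic, Bedrock
        "tool_calls, TOOL_CALL",  // Azure OpenAI
        "SAFETY, FILTERED"        // Gemini
    })
    void categorizesProviderFinishReasons(String raw, FinishReasonCategory expected) {
        // JUnit converts the expected column to the enum constant by name.
        assertThat(FinishReasonCategory.categorize(raw)).isEqualTo(expected);
    }
}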

Checklist

  • Signed-off-by in commit (DCO)
  • Rebased on latest main
  • Unit tests added
  • All tests pass (./mvnw test -pl spring-ai-model)
  • Code formatted with spring-javaformat:apply

Copilot AI review requested due to automatic review settings January 5, 2026 22:49

Copilot AI left a comment


Pull request overview

This PR introduces provider-agnostic finish reason normalization to address the inconsistency in raw finish reason strings across different AI providers (OpenAI, Anthropic, Gemini, Azure OpenAI, Bedrock). It adds a new FinishReasonCategory enum with a static categorize() method that normalizes provider-specific strings into standard categories (COMPLETED, TRUNCATED, TOOL_CALL, FILTERED, OTHER, UNKNOWN).

  • Added FinishReasonCategory enum with case-insensitive categorization logic for all major AI providers
  • Extended ChatGenerationMetadata interface with a getFinishReasonCategory() default method
  • Included comprehensive parameterized tests covering all providers and edge cases

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
|---|---|
| spring-ai-model/src/main/java/org/springframework/ai/chat/metadata/FinishReasonCategory.java | New enum providing normalized finish reason categories with a static categorize() method supporting case-insensitive mapping for OpenAI, Anthropic, Gemini, Azure OpenAI, and Bedrock |
| spring-ai-model/src/main/java/org/springframework/ai/chat/metadata/ChatGenerationMetadata.java | Added default getFinishReasonCategory() method that delegates to FinishReasonCategory.categorize() |
| spring-ai-model/src/test/java/org/springframework/ai/chat/metadata/FinishReasonCategoryTests.java | Comprehensive parameterized tests validating categorization for all provider finish reasons and edge cases (null, blank, unrecognized) |
| spring-ai-model/src/test/java/org/springframework/ai/chat/metadata/ChatGenerationMetadataTests.java | Integration tests verifying the default method implementation and proper category resolution through the metadata interface |


This introduces a provider-agnostic way to categorize LLM finish reasons. Fixes gh-5146

Signed-off-by: Quark <quackingquark@gmail.com>
@QuarkOS force-pushed the feature/issue-5146-finish-reasons branch from 0bea17c to a060903 on January 5, 2026 23:04
@ilayaperumalg added the model client and enhancement (New feature or request) labels on Jan 7, 2026

Development

Successfully merging this pull request may close these issues.

Normalize / categorize LLM finish reasons across providers in ChatGenerationMetadata (keep raw reason)
