Hope to support Reasoning Outputs like vLLM does. #261

@zhanghw0354

Description

Currently, when running inference on the DeepSeek R1 model via xLLM, the model's thinking content and final answer are mixed together in the output, because xLLM lacks a Reasoning Outputs feature like vLLM's (https://docs.vllm.ai/en/stable/features/reasoning_outputs.html). Users have to extract the thinking content from this mixed output themselves. It would be great if xLLM could add Reasoning Outputs support like vLLM, so that users can obtain the thinking content directly from a reasoning_content field.
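For context, DeepSeek R1 wraps its chain of thought in `<think>...</think>` tags, and a reasoning parser splits the raw output on those tags into `reasoning_content` and `content`. A minimal sketch of that splitting logic is below; `split_reasoning` is a hypothetical helper name, not part of xLLM or vLLM, and the tag-handling assumptions are noted in the comments.

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw DeepSeek R1 output into (reasoning_content, content).

    Assumes the chain of thought is wrapped in <think>...</think> tags,
    as DeepSeek R1 emits. Some serving setups strip the opening <think>
    tag from the stream, so a bare closing tag is handled as a fallback.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = text[match.end():].strip()
        return reasoning, answer
    # Fallback: only the closing </think> tag is present.
    if "</think>" in text:
        reasoning, _, answer = text.partition("</think>")
        return reasoning.strip(), answer.strip()
    # No tags at all: treat everything as the final answer.
    return "", text.strip()

raw = "<think>Compute 2+2 step by step.</think>The answer is 4."
reasoning_content, content = split_reasoning(raw)
# reasoning_content -> "Compute 2+2 step by step."
# content           -> "The answer is 4."
```

With this kind of parser in the serving layer, the API response could expose the two pieces as separate fields instead of forcing clients to parse the tags themselves.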

(Screenshot of mixed reasoning and answer output attached in the original issue.)

Metadata

Assignees: No one assigned
Labels: help wanted (Extra attention is needed)
Type: No type
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests