Open
Labels: help wanted (Extra attention is needed)
Description
Currently, when running inference on the DeepSeek R1 model via xLLM, the model's thinking content and final answer are mixed together in the output, because xLLM lacks a Reasoning Outputs feature like vLLM's (https://docs.vllm.ai/en/stable/features/reasoning_outputs.html). Users must extract the thinking content from this mixed output themselves. It would be great if xLLM could add Reasoning Outputs support like vLLM, so that users can obtain the thinking content directly from the `reasoning_content` field.
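As an interim client-side workaround, the mixed output can be split manually. The sketch below assumes DeepSeek R1 wraps its chain of thought in `<think>...</think>` tags (the convention vLLM's `deepseek_r1` reasoning parser relies on); the function name `split_reasoning` is hypothetical, not part of xLLM or vLLM.

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split a mixed R1 completion into (reasoning_content, content).

    Assumption: the reasoning section is delimited by <think>...</think>,
    matching the tag convention vLLM's DeepSeek R1 parser expects.
    """
    start_tag, end_tag = "<think>", "</think>"
    end = text.find(end_tag)
    if end == -1:
        # No closing tag found: treat the whole output as the final answer.
        return "", text
    reasoning = text[:end]
    if reasoning.startswith(start_tag):
        reasoning = reasoning[len(start_tag):]
    content = text[end + len(end_tag):]
    return reasoning.strip(), content.strip()


mixed = "<think>First, compare the decimals digit by digit...</think>9.8 is larger."
reasoning_content, content = split_reasoning(mixed)
```

A native implementation in xLLM would instead do this split server-side and populate a `reasoning_content` field in the API response, as vLLM does.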
