
Refactor message extraction to support reasoning content #62


Closed

Conversation

aaudin90

• Introduce CompletionDeltaModel to encapsulate reasoning and standard content.
• Update OpenAIChatCompletionEventSourceListener to wrap reasoning text with <think> tags and properly transition to normal content (see the sketch after this list).
• Update JSON deserialization in OpenAIChatCompletionResponseChoiceDelta to extract reasoning_content.
• Bump version to 0.8.39.
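For context, reasoning-capable OpenAI-compatible backends (DeepSeek-style) stream the chain of thought in a separate reasoning_content field of each delta rather than in content. A minimal sketch of the new model, assuming Jackson annotations as used elsewhere in this client (class and field names follow the PR description; the actual implementation is not shown in this excerpt):

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

// Sketch of the delta model described above. A streamed chunk such as
//   {"choices":[{"delta":{"reasoning_content":"Let me think...","content":null}}]}
// maps its two payload fields onto the properties below.
@JsonIgnoreProperties(ignoreUnknown = true)
public class CompletionDeltaModel {

  @JsonProperty("reasoning_content")
  private String reasoningContent;

  @JsonProperty("content")
  private String content;

  public String getReasoningContent() {
    return reasoningContent;
  }

  public String getContent() {
    return content;
  }
}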
  }

  @Override
  protected ErrorDetails getErrorDetails(String data) throws JsonProcessingException {
    return OBJECT_MAPPER.readValue(data, ApiResponseError.class).getError();
  }

  private String getMessageFromDeltaModel(CompletionDeltaModel completionDeltaModel) {
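The diff view is truncated at this signature. A plausible body, reconstructed from the PR description rather than the actual diff (the state flag and exact tag placement are assumptions):

  private boolean insideThinkBlock;

  private String getMessageFromDeltaModel(CompletionDeltaModel completionDeltaModel) {
    // Wrap streamed reasoning in the <think> tags that ProxyAI's parser
    // expects, then close the tag once regular content starts arriving.
    StringBuilder message = new StringBuilder();
    String reasoning = completionDeltaModel.getReasoningContent();
    String content = completionDeltaModel.getContent();
    if (reasoning != null && !reasoning.isEmpty()) {
      if (!insideThinkBlock) {
        message.append("<think>"); // opened on the first reasoning chunk only
        insideThinkBlock = true;
      }
      message.append(reasoning);
    }
    if (content != null && !content.isEmpty()) {
      if (insideThinkBlock) {
        message.append("</think>"); // transition from reasoning to normal content
        insideThinkBlock = false;
      }
      message.append(content);
    }
    return message.toString();
  }

Concatenated over a whole stream, this yields a single string of the form <think>reasoning</think>answer, which is the shape the plugin-side parser splits on.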
Owner

This seems like an ugly hack to make it compliant with how ProxyAI parses the thinking output.

Perhaps we can change the logic of how and where the respective thinking output is extracted. For example, similar support (a new param) was recently added for Google Gemini requests as well, but as far as I know, it isn't handled in the UI logic yet.

<think> tags were DeepSeek's invention; it was the first model to display its thinking process, hence the logic in the plugin repo.

#61

Author

I tried to do this by replacing the return type from a String to a class with two fields (message, reasoningContent), but it turned out that a lot of changes would have to be made on the plugin side, because the OpenAI format is used not only by CustomOpenAI but also by many other providers. It didn't seem advisable without prior refactoring.
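For illustration, that alternative would have changed the provider-facing callback along these lines (a hypothetical sketch; the interface and method names here are illustrative, not the library's actual API):

// Hypothetical sketch of the alternative: every listener that today
// consumes plain String chunks would need to accept a structured delta,
// rippling through all providers that reuse the OpenAI format.
public interface CompletionEventListener {

  // Current shape: providers emit plain text chunks.
  void onMessage(String message);

  // Proposed shape: a structured delta carrying both fields.
  default void onMessage(CompletionDeltaModel delta) {
    onMessage(delta.getContent());
  }
}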

Author

Looking ahead, I think there will be a lot of changes once MCP integration begins, which will require rewriting the integration of the LLM client library anyway.

I'm waiting for a decision: leave it as it is, or go ahead with the transition to the class in the OpenAI response.

@@ -7,7 +7,7 @@ plugins {
 }

 group = "ee.carlrobert"
-version = "0.8.38"
+version = "0.8.39"
Owner

Can we remove this? I usually increment it when I modify the changelog.

Author

Sure

aaudin90 closed this on May 10, 2025