
Fix v1_chat_generate_request to allow for None content #5701


Open
wants to merge 2 commits into main
Conversation

Amadeus-Winarto
Contributor

Motivation

Related to #5452.
Allow the content field to be None, per the OpenAI type definition: https://github.com/openai/openai-python/blob/05810dd4088b6dbfc4194d7b0bea03eec236c83a/src/openai/types/chat/chat_completion_assistant_message_param.py#L46C67-L47C1

This fixes issues in agentic workflows built on SGLang, where the assistant's output may be fed back into /v1/chat/completions as is, without modification. For example, consider the following flow:

1. The LLM attempts a tool call, i.e. content is None and tool_calls is a valid list.
2. The LLM's response is fed back into messages in ChatCompletionRequest without modification.
3. The request currently fails at the request-generation step, because any non-string content is assumed to be a list of content parts, even though the OpenAI spec allows content to be None. This change fixes that (see the sketch below).
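To illustrate the failure mode, here is a hypothetical round-trip in which the assistant's tool-call turn (content set to None, tool_calls populated) is appended to messages verbatim, exactly the message shape the OpenAI spec allows. The guard at the end is only a sketch of the kind of check the request-generation step needs; the helper name flatten_content is made up for illustration and is not the actual code in v1_chat_generate_request.

```python
# Assistant turn produced by a tool call: per the OpenAI spec, "content" is None
# and "tool_calls" carries the call. Feeding this history back verbatim is the
# case that used to fail.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,  # allowed by the OpenAI spec when tool_calls is present
        "tool_calls": [
            {
                "id": "call_0",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_0", "content": '{"temp_c": 21}'},
]


def flatten_content(content):
    # Sketch only (hypothetical helper): handle None and str explicitly instead
    # of assuming every non-string content is a list of content parts.
    if content is None:
        return ""  # or skip this segment when building the prompt
    if isinstance(content, str):
        return content
    # otherwise assume a list of content parts, e.g. [{"type": "text", "text": ...}]
    return "".join(part.get("text", "") for part in content)
```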

Modifications

Checklist

@merrymercy
Contributor

please fix the lint

@Amadeus-Winarto
Contributor Author

please fix the lint

Oops thanks, it's fixed
