Checklist
- I have searched the existing issues for similar reports.
- I have added a very descriptive title to this issue.
- I have provided sufficient information below to help reproduce this issue.
Summary
When using NL2SQL, the following error is raised:
2025-Dec-03 13:19:42 (v1.3.2.dev24+g320905989) - INFO - (mcp.server.streamable_http): Terminating session: None
2025-Dec-03 13:19:42 (v1.3.2.dev24+g320905989) - INFO - (server.agents.chatbot): NL2SQL: Iteration 2
2025-Dec-03 13:19:42 (v1.3.2.dev24+g320905989) - INFO - (LiteLLM):
LiteLLM completion() model= gpt-4o-mini; provider = openai
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
2025-Dec-03 13:19:44 (v1.3.2.dev24+g320905989) - ERROR - (server.agents.chatbot): NL2SQL: Unexpected error: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - This model's maximum context length is 128000 tokens. However, your messages resulted in 260555 tokens (259233 in the messages, 1322 in the functions). Please reduce the length of the messages or functions.
Steps To Reproduce
No response
Expected Behavior
Catch the error, summarise the conversation history, and retry (see the sketch below).
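A minimal sketch of what catch/summarise/retry could look like around the `litellm.completion()` call. `summarise_history` and `completion_with_retry` are hypothetical names for illustration, not existing functions in this repo; the real agent would need its own way of condensing older messages.

```python
import litellm


def summarise_history(messages, keep_last=4):
    # Hypothetical helper: collapse all but the last few messages into a
    # single short summary message so the request fits the context window.
    if len(messages) <= keep_last + 1:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    summary = "Summary of earlier conversation: " + " ".join(
        m.get("content", "")[:200] for m in older if m.get("content")
    )
    return [{"role": "system", "content": summary}] + recent


def completion_with_retry(messages, model="gpt-4o-mini", max_retries=2):
    for attempt in range(max_retries + 1):
        try:
            return litellm.completion(model=model, messages=messages)
        except litellm.ContextWindowExceededError:
            if attempt == max_retries:
                raise
            # Shrink the history and retry instead of aborting the iteration.
            messages = summarise_history(messages)
```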
Current Behavior
The same error as shown in the Summary above: the NL2SQL request aborts with litellm.ContextWindowExceededError instead of being caught and retried.
Is this a regression?
- Yes, this used to work in a previous version.
Debug info
- Version: Alpha
- Python version:
- Operating System:
- Browser:
Additional Information
No response