Fix: Handle dict objects in Anthropic streaming response #11032
Conversation
Fixes an issue where dictionary objects in Anthropic streaming responses were not converted to SSE-format strings before being yielded, causing `AttributeError: 'dict' object has no attribute 'encode'`.
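The failing path, in essence: the proxy's streaming generator yielded provider chunks as-is, so a dict chunk reached the response layer, which expects strings it can `.encode()`. A minimal sketch of the fix, assuming an async generator over provider chunks (names here are illustrative, not the exact litellm internals):

```python
import json
from typing import AsyncIterator, Union


async def async_data_generator(
    response: AsyncIterator[Union[str, dict]],
) -> AsyncIterator[str]:
    """Proxy streaming chunks, normalizing dicts to SSE strings."""
    async for chunk in response:
        if isinstance(chunk, dict):
            # A dict chunk must be serialized into an SSE-framed string here;
            # otherwise the response layer later calls chunk.encode() and
            # raises AttributeError: 'dict' object has no attribute 'encode'.
            chunk = f"data: {json.dumps(chunk)}\n\n"
        yield chunk
```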
Thanks, reviewed. Please add a test in tests/litellm/proxy/anthropic_endpoints/test_endpoints.py.
- Added STREAM_SSE_DATA_PREFIX constant in constants.py
- Created return_anthropic_chunk helper function for better maintainability
- Using safe_dumps from safe_json_dumps.py for improved JSON serialization
- Added unit test for dictionary object handling in streaming response
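Taken together, the refactor plausibly looks like the sketch below. `STREAM_SSE_DATA_PREFIX` and `safe_dumps` are named in the commit notes above; the import paths and the helper body are assumptions, not the merged code:

```python
from typing import Union

from litellm.constants import STREAM_SSE_DATA_PREFIX  # assumed to be "data: "
from litellm.litellm_core_utils.safe_json_dumps import safe_dumps


def return_anthropic_chunk(chunk: Union[str, dict]) -> str:
    """Normalize an Anthropic streaming chunk to an SSE-formatted string."""
    if isinstance(chunk, dict):
        # safe_dumps guards against unserializable or circular values better
        # than a bare json.dumps, per the commit notes above.
        return f"{STREAM_SSE_DATA_PREFIX}{safe_dumps(chunk)}\n\n"
    return chunk  # already an SSE string; pass through unchanged
```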
@jgowdy-godaddy were you able to see the original issue? I'm on an older version and don't see it. Tried this curl command:
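The curl command itself did not survive extraction. A request along these lines would exercise the Anthropic streaming path on a local proxy; the URL, model name, and key are placeholders, not the commenter's actual command:

```bash
curl http://0.0.0.0:4000/v1/messages \
  -H 'content-type: application/json' \
  -H 'x-api-key: sk-1234' \
  -H 'anthropic-version: 2023-06-01' \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 100,
    "stream": true,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```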
* fix: handle dict objects in Anthropic streaming response
  Fixes an issue where dictionary objects in Anthropic streaming responses were not converted to SSE-format strings before being yielded, causing `AttributeError: 'dict' object has no attribute 'encode'`
* fix: refactor Anthropic streaming response handling
  - Added STREAM_SSE_DATA_PREFIX constant in constants.py
  - Created return_anthropic_chunk helper function for better maintainability
  - Using safe_dumps from safe_json_dumps.py for improved JSON serialization
  - Added unit test for dictionary object handling in streaming response
* fix: correct patch path in anthropic_endpoints test
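The requested unit test could look roughly like this sketch. The file path comes from the review thread; the import location and helper signature are assumptions based on the commit notes:

```python
# tests/litellm/proxy/anthropic_endpoints/test_endpoints.py (sketch)
import json

from litellm.proxy.anthropic_endpoints.endpoints import return_anthropic_chunk


def test_dict_chunk_converted_to_sse_string():
    chunk = {"type": "message_delta", "delta": {"stop_reason": "end_turn"}}
    result = return_anthropic_chunk(chunk)
    assert isinstance(result, str)  # must be encodable, not a dict
    assert result.startswith("data: ")  # SSE framing
    # the payload round-trips back to the original dict
    assert json.loads(result[len("data: "):]) == chunk
```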
…0933) Branch update: merge of upstream main into this PR branch (upstream commit history from #10933 through the 1.71.1 version bump, with Co-authored-by trailers).
…#11121) * Correctly delete team model alias when team only model is deleted (#10933) * fix(model_management_endpoints.py): if team model deleted - remove the alias routing Fixes issue where alias would cause incorrect routing to occur * fix(model_management_endpoints.py): remove model name from team on team model delete prevents misrouting * fix(vertex_llm_base.py): remove quota_project_id parameter from credential refresh request (#10915) Fixes #9863 * Enable structured JSON schema outputs in LM Studio (for validated responses) (#10929) - docs/my-website/docs/providers/lm_studio.md: add Structured Output section with JSON schema and Pydantic examples - litellm/llms/lm_studio/chat/transformation.py: extend map_openai_params to handle `response_format` mappings (`json_schema`, `json_object`) and move them to optional_params - litellm/utils.py: include `LM_STUDIO` in `supports_response_schema` list - tests/litellm/llms/lm_studio/test_lm_studio_chat_transformation.py: add tests for Pydantic model and dict-based JSON schema handling Co-authored-by: Earl St Sauver <[email protected]> * update sambanova models and parameters (#10900) * add sambanova to completion input params table * update sambanova supported args * update sambanova supported models * minor changes * fix sambanova model list * update sambanova models * update sambanova models * update sambanova docs * minor chnage sambanova url * update type to match OpenAIGPTConfig * minor change * fix cohere rerank provider (#10822) * add skip server startup flag to cli (#10665) * add flag * fix ruff linting * add unit test * fix ruff errors * fix lintings issues * fix linting errors * add global noqa for print * Allow passed in vertex_ai credentials to be authorized_user type (#10899) * fix: handle DB_USER, DB_PASSWORD, DB_HOST problem I faced, since this… (#10842) * fix: handle DB_USER, DB_PASSWORD, DB_HOST problem I faced, since this could be with special character, so better be url encoded, similar problem also happen in DATABASE_URL * test: added the test cases * add: test case of url with sepcial character * add keys and members (#10950) * Update github.md (#10946) Updated clarification in the use of the models form github. (Github uses the model name: <company>/<model-Name> while litellm wants github/<model-Name> Updated the example to a model that is actually supported / available on github right now * Add new documentation files for LiteLLM (#10961) - Created `llms-full.txt` with comprehensive details on LiteLLM features, usage, and supported models. - Added `llms.txt` for quick access to key links and resources related to LiteLLM, including guides, release notes, and integration documentation. 
* [Fix] Invitation Email does not include the invitation link (#10958) * fix: email invites should link to the invitation * fix: email invites should link to the invitation * fix: email invites should link to the invitation * bump: litellm proxy extras * refactor(vertex_llm_base.py): remove check on credential project id - allow for cross credential calling * test: update test due to cohere ssl issues * fix: fix model param mapping * fix: update test to reflect change * Enable key reassignment on UI + Show 'updated at' column for key in all keys table (#10960) * feat(key_edit_view.tsx): initial commit enabling reassigning keys to teams * style(key_edit_view.tsx): cleaner implementation with teams in dropdown * fix(all_keys_table.tsx): set max width to keys column * feat(all_keys_table.tsx): show last updated at column for key * test: update tests * Litellm dev 05 19 2025 p3 (#10965) * feat(model_info_view.tsx): enable updating model info for existing models on UI Fixes LIT-154 * fix(model_info_view.tsx): instantly show model info updates on UI * feat(proxy_server.py): enable flag on `/models` to include model access groups This enables admin to assign model access groups to keys/teams on UI * feat(ui/): add model access groups on ui dropdown when creating teams + keys * refactor(parallel_request_limiter_v2.py): Migrate multi instance rate limiting to OSS Closes #10052 * Validate migrating keys to teams + Fix mistral image url on async translation (#10966) * feat(key_management_endpoints.py): add validation checks for migrating key to team Ensures requests with migrated key can actually succeed Prevent migrated keys from failing in prod due to team missing required permissions * fix(mistral/): fix image url handling for mistral on async call * fix(key_management_endpoints.py): improve check for running team validation on key update * bump: version 1.70.1 → 1.70.2 * test: update test * build(ui/): new ui build * add cla to docs (#10963) * add cla to docs * cla docs clarity * build: ui/ new build * [Fix] List Guardrails - Show config.yaml guardrails on litellm ui (#10959) * fix: listing guardrails defined on litellm config * fix: list guardrails on litellm config * fix: list guardrails on litellm config * test: list guardrails on litellm config * fix: linting * Update litellm/proxy/guardrails/guardrail_endpoints.py Co-authored-by: Copilot <[email protected]> * fix: GuardrailInfoLiteLLMParamsResponse --------- Co-authored-by: Copilot <[email protected]> * fix: vertex show clear exception on failed refresh (#10969) * fix: vertex show clear exception on failed refresh * fix: show clear debug log * docs: cleanup * test: update tests * test: skip test - model EOL * [Feature] Add supports_computer_use to the model list (#10881) * Add support for supports_computer_use in model info * Corrected list of supports_computer_use models * Further fix computer use compatible claude models, fix existing test that predated supports_computer_use in the model list * Move computer use test case into existing test_utils file * Moved tests in to test_utils.py * [Feat] - Add Support for Showing Passthrough endpoint Error Logs on LiteLLM UI (#10990) * fix: add error logging for passthrough endpoints * feat: add error logging for passthrough endpoints * fix: post_call_failure_hook track errors on pt * fix: use constant for MAXIMUM_TRACEBACK_LINES_TO_LOG * docs MAXIMUM_TRACEBACK_LINES_TO_LOG * test: ensure failure callback triggered * fix: move _init_kwargs_for_pass_through_endpoint * added support to credential 
delete to support slashes in the curl (#10987) * added support to credential delete to support slashes in the curl * add support for get and update too * Add new gemini preview models + Fix cohere v2 embedding 'embedding_types' param mapping (#10991) * build(model_prices_and_context_window.json): add new gemini preview models Fixes #10985 * fix(cohere/embed): Fix encoding format <-> embedding types param mapping Fixes #10939 * fix(aim.py): fix syntax error * Litellm add new gemini models (#10998) * build(model_prices_and_context_window.json): add new gemini image gen model * build(model_prices_and_context_window.json): add more gemini models * [Feat] Prometheus - Track `route` on proxy_* metrics (#10992) * fix: trace route on prometheus metrics * fix: show route on prometheus metrics for total fails * test: trace route on metrics * fix: tests for route in prom metrics * test: fix test metrics * test: fix test_proxy_failure_metrics * fix: default role for JWT authentication (#10995) * fix: get_user_object * test: test_default_internal_user_params_with_get_user_object * Update litellm/proxy/auth/auth_checks.py Co-authored-by: Copilot <[email protected]> --------- Co-authored-by: Copilot <[email protected]> * fix(internal_user_endpoints.py): allow resetting spend/max budget on … (#10993) * fix(internal_user_endpoints.py): allow resetting spend/max budget on user update Fixes #10495 * fix(internal_user_endpoints.py): correctly return set spend for user on /user/new * fix(auth_checks.py): check redis for key object before checking in-memory allows for quicker updates * feat(internal_user_endpoints.py): update cache object when user is updated + check redis on user values being updated * fix(auth_checks.py): use redis cache when user updated * fix: set default value of 'expires' to None * bump: version 1.70.2 → 1.70.3 * Improve response_id propagation logic and add tests for valid/empty ID handling in streaming. (#11006) * support vertex_ai global endpoints for chat (#10658) * fix(internal_user_endpoints.py): fix check * Ollama wildcard support (#10982) * Add Ollama wildcard support * Add Ollama-chatas well. * Fix missing methods. * Improve logs a bit. 
* Add tests * Add tests * spend rounded to 4 (#11013) * fix: fix linting error * fix(streaming_handler.py): fix check when response id already set * put organization and team buttons at the top (#10948) * feat: add xai/grok-3 pricing (#11028) * [Feat] Add Image Edits Support to LiteLLM (#11020) * refactor: use 1 file for image methods * refactor: use 1 file for image methods * feat: add stubs for image edits * fix: types for image edits * feat: add async image edits * feat: add base config for image edits * feat: add basic structure for image edits * feat: add ImageEditRequestUtils * feat: complete instrumentation of image edits * tes: test_openai_image_edit_litellm_sdk * tets: test_openai_image_edit_litellm_sdk * feat: get_provider_image_edit_config * feat: add OpenAIImageEditConfig * feat: working image edits * fixes: working image edits * fix: code qa * fix: using image edits * fix: linting errors * Updating the available VoyageAI models in the docs (#11003) * Refresh VoyageAI models and prices and context * Refresh VoyageAI models and prices and context * Refresh VoyageAI models and prices and context * Updating the available VoyageAI models in the docs * Updating the available VoyageAI models in the docs * fix(ui): call tool when no arguments needed (#11012) Co-authored-by: wagnerjt <[email protected]> * Verbose error on admin add (#10978) * Spend rounded to 4 for Organizations and Users page (#11023) * spend rounded to 4 * fixed for organization and users table * Fix: Handle dict objects in Anthropic streaming response (#11032) * fix: handle dict objects in Anthropic streaming response Fix issue where dictionary objects in Anthropic streaming responses were not properly converted to SSE format strings before being yielded, causing AttributeError: 'dict' object has no attribute 'encode' * fix: refactor Anthropic streaming response handling - Added STREAM_SSE_DATA_PREFIX constant in constants.py - Created return_anthropic_chunk helper function for better maintainability - Using safe_dumps from safe_json_dumps.py for improved JSON serialization - Added unit test for dictionary object handling in streaming response * fix: correct patch path in anthropic_endpoints test * feat: add Databricks Llama 4 Maverick model cost (#11008) Co-authored-by: Tommy PLANEL <[email protected]> * test: mark flaky test * Litellm dev 05 21 2025 p2 (#11039) * feat: initial commit adding managed file support to fine tuning endpoints * feat(fine_tuning/endpoints.py): working call to openai finetuning route Uses litellm managed files for finetuning api support * feat(fine-tuning/main.py): refactor to use LiteLLMFineTuningJob pydantic object includes 'hidden_params' * fix: initial commit adding unified finetuning id support return a unified finetuning id we can use to understand which deployment to route the ft request to * test: fix test * feat(managed_files.py): return unified finetuning job id on create finetuning job enables retrieve, delete to work with litellm managed files * test: update test * fix: fix linting error * fix: fix ruff linting error * test: fix check * Fixes the InvitationLink Prisma find_many query (#11031) Related: 3b6c6d0#r157675103 We should use "order", according to the prisma python docs https://prisma-client-py.readthedocs.io/en/stable/reference/limitations/#order-argument Also we are using "order" in other files of the project: https://github.com/search?q=repo%3ABerriAI%2Flitellm%20order%3D%7B&type=code * fix: fix linting error * Support passing `prompt_label` to 
langfuse (#11018) * fix: add prompt label support to prompt management hook * feat: support 'prompt_label' parameter for langfuse prompt management Closes #9003 (reply in thread) * fix(litellm_logging.py): deep copy optional params to avoid mutation while logging * fix(log-consistent-optional-param-values-across-providers): ensures params can be used for finetuning from providers * fix: fix linting error * test: update test * test: update langfuse tests * fix(litellm_logging.py): avoid deepcopying optional params might contain thread object * added cloding tags for </TabGroup> </Col> </Grid> + indentation changes (#11046) * Feat: add MCP to Responses API and bump openai python sdk (#11029) * feat: add MCP to responses API * feat: bump openai version to 1.75.0 * docs MCP + responses API * fixes: type checking * fixes: type checking * build: use latest openai 1.81.0 * fix: linting error * fix: linting error * fix: test * fix: linting errors * fix: test * fix: test * fix: linting * Revert "fix: linting" This reverts commit ebb19ff. * fix: linting * Model filter on logs (#11048) * add model filter * remove calling all models * bump: version 1.70.3 → 1.70.4 * docs fix ad hoc recognizer * docs fix example * [Feat] Add claude-4 model family (#11060) * add new claude-sonnet-4-2025051 * feat: add bedrock claude-4 models * add bedrock claude-4 models * add vertx_ai/claude-sonnet-4 * fix provider=bedrock_converse * feat: ensure thinking is supported for claude-4 model family * docs add claude-4 models * Revert "Support passing `prompt_label` to langfuse (#11018)" This reverts commit 2b50b43. * Revert "Revert "Support passing `prompt_label` to langfuse (#11018)"" This reverts commit 0be7e7d. * fix: fix checking optional params from logging object for function call * test: update groq test - change on their end * Litellm managed file updates combined (#11040) * Add LiteLLM Managed file support for `retrieve`, `list` and `cancel` finetuning jobs (#11033) * feat: initial commit adding managed file support to fine tuning endpoints * feat(fine_tuning/endpoints.py): working call to openai finetuning route Uses litellm managed files for finetuning api support * feat(fine-tuning/main.py): refactor to use LiteLLMFineTuningJob pydantic object includes 'hidden_params' * fix: initial commit adding unified finetuning id support return a unified finetuning id we can use to understand which deployment to route the ft request to * test: fix test * feat(managed_files.py): return unified finetuning job id on create finetuning job enables retrieve, delete to work with litellm managed files * feat(managed_files.py): support managed files for cancel ft job endpoint * feat(managed_files.py): support managed files for cancel ft job endpoint * feat(fine_tuning_endpoints/endpoints.py): add managed files support to list finetuning jobs * feat(finetuning_endpoints/main): add managed files support for retrieving ft job Makes it easier to control permissions for ft endpoint * LiteLLM Managed Files - Enforce validation check if user can access finetuning job (#11034) * feat: initial commit adding managed file support to fine tuning endpoints * feat(fine_tuning/endpoints.py): working call to openai finetuning route Uses litellm managed files for finetuning api support * feat(fine-tuning/main.py): refactor to use LiteLLMFineTuningJob pydantic object includes 'hidden_params' * fix: initial commit adding unified finetuning id support return a unified finetuning id we can use to understand which deployment to route the ft request to * 
test: fix test * feat(managed_files.py): return unified finetuning job id on create finetuning job enables retrieve, delete to work with litellm managed files * feat(managed_files.py): support managed files for cancel ft job endpoint * feat(managed_files.py): support managed files for cancel ft job endpoint * feat(fine_tuning_endpoints/endpoints.py): add managed files support to list finetuning jobs * feat(finetuning_endpoints/main): add managed files support for retrieving ft job Makes it easier to control permissions for ft endpoint * feat(managed_files.py): store create fine-tune / batch response object in db storing this allows us to filter files returned on list based on what user created * feat(managed_files.py): Ensures users can't retrieve / modify each others jobs * fix: fix check * fix: fix ruff check errors * test: update to handle testing * fix: suppress linting warning - openai 'seed' is none on azure * test: update tests * test: update test * (build) fix context window for claude 4 model family * [Fix] Reliability Fix - Removing code that was creating threads on errors (#11066) * fix: only init langfuse if active * fix: only init langfuse if active * fix: add initialized_langfuse_clients count * fix: add MAX_LANGFUSE_INITIALIZED_CLIENTS * fix: use safe init langfuse * test: init langfuse clients * test: test_langfuse_not_initialized_returns_none_early * docs MAX_LANGFUSE_INITIALIZED_CLIENTS * fix: use correct langfuse callback * fix: code qa * [Feat] Add Azure AD certificate-based authentication (#11069) * feat: add cert based auth for Azure get_azure_ad_token_provider * test: tests azure cert auth * fix update poetry * fix: fix linting * Update feature_request.yml * Update feature_request.yml (#11078) * adds tzdata (#10796) (#11052) With tzdata installed, the environment variable `TZ` will be respected by Python's datetime module. This means that users can specify the timezone they want LiteLLM to use. Co-authored-by: Simon Stone <[email protected]> * Fix proxy_cli.py: avoid overriding DATABASE_URL when it’s already provided. (#11076) * feat(helm): Add loadBalancerClass support for LoadBalancer services (#11064) * feat(helm): Add loadBalancerClass support for LoadBalancer services Adds the ability to specify a loadBalancerClass when using LoadBalancer service type. This enables integration with custom load balancer implementations like Tailscale. * fixup! feat(helm): Add loadBalancerClass support for LoadBalancer services * Add Azure Mistral Medium 25.05 (#11063) * Add Azure Mistral Medium 25.05 * fix provider * fix:Databricks Claude 3.7 Sonnet output token cost: $17.85/M instead of (#11007) $178.5/M Co-authored-by: Tommy PLANEL <[email protected]> * Fix/openrouter stream usage id 8913 (#11004) * Add handling and verification for 'usage' field in OpenRouter chat transformations and streaming responses. * Ensure consistent response ID by using valid ID from any chunk. * Remove redundant comments from OpenRouter chat transformation tests and logic. 
* Remove this from here as I'm opening a new pr * Reverting space * Remove redundant assertions from OpenRouter chat transformation test * feat: add embeddings to CustomLLM (#10980) * feat: add embeddings to CustomLLM * feat: add aembedding to custom llm * Enable switching between custom auth and litellm api key auth + Fix `/customer/update` for max budgets (#11070) * feat(user_api_key_auth.py): (enterprise) allow user to enable custom auth + litellm api key auth makes it easy to migrate to proxy * fix(proxy/_types.py): allow setting 'spend' for new customer * fix(customer_endpoints.py): fix updating max budget on `/customer/update` Fixes #6920 * test(test_customer_endpoints.py): add unit tests for customer update endpoint * fix: fix linting error * fix(custom_auth_auto.py): fix ruff check * fix(customer_endpoints.py): fix documentation * Litellm add file validation (#11081) * fix: cleanup print statement * feat(managed_files.py): add auth check on managed files Implemented for file retrieve + delete calls * feat(files_endpoints.py): support returning files by model name enables managed file support * feat(managed_files/): filter list of files by the ones created by user prevents user from seeing another file * test: update test * fix(files_endpoints.py): list_files - always default to provider based routing * build: add new table to prisma schema * [feature] ConfidentAI logging enabled for proxy and sdk (#10649) * async success implemented * fail async event * sync events added * docs added * docs added * test added * style * test * . * lock file genrated due to tenacity change * mypy errors * resolved comments * resolved comments * resolved comments * resolved comments * style * style * resolved comments * build(pyproject.toml): add langfuse dev dependency for tests * Proper github images (#10927) * feat: add seperate image URLs to distinguish types of release * feat: remove new nightly/dev image URLs, only keep stable * test: fix failing deepeval test * Add devstral-small-2505 model to pricing and context window configuration (#11103) - Added mistral/devstral-small-2505 with 128K context window - Pricing: bash.1/M input tokens, bash.3/M output tokens (same as Mistral Small 3.1) - Supports function calling, assistant prefill, and tool choice - Source: https://mistral.ai/news/devstral Co-authored-by: openhands <[email protected]> * test: test_openai_image_edit_litellm_sdk * use n 4 for mapped tests (#11109) * Fix/background health check (#10887) * fix: improve health check logic by deep copying model list on each iteration * test: add async test for background health check reflecting model list changes * fix: validate health check interval before executing background health check * fix: specify type for health check results dictionary * fix(user_api_key_auth.py): handle user custom auth set with no custom settings * bump: version 0.1.21 → 0.2.0 * ci(config.yml): run enterprise and litellm tests separately * fix: fix linting error * docs: add missing docs * [Feat] Add content policy violation error mapping for image editd (#11113) * feat: add image edit mapping for content policy violations * test fix * Expose `/list` and `/info` endpoints for Audit Log events (#11102) * feat(audit_logging_endpoints.py): expose list endpoint to show all audit logs make it easier for user to retrieve individual endpoints * feat(enterprise/): add audit logging endpoint * feat(audit_logging_endpoints.py): expose new GET `/audit/{id}` endpoint make it easier to retrieve view individual audit logs * 
feat(key_management_event_hooks.py): correctly show the key of the user who initiated the change * fix(key_management_event_hooks.py): add key rotations as an audit log event ' * test(test_audit_logging_endpoints.py): add simple unit testing for audit log endpoint * fix: testing fixes * fix: fix ruff check * [Feat] Use aiohttp transport by default - 97% lower median latency (#11097) * fix: add flag for disabling use_aiohttp_transport * feat: add _create_async_transport * feat: fixes for transport * add httpx-aiohttp * feat: fixes for transport * refactor: fixes for transport * build: fix deps * fixes: test fixes * fix: ensure aiohttp does not auto set content type * test: test fixes * feat: add LiteLLMAiohttpTransport * fix: fixes for responses API handling * test: fixes for responses API handling * test: fixes for responses API handling * feat: fixes for transport * fix: base embedding handler * test: test_async_http_handler_force_ipv4 * test: fix failing deepeval test * fix: add YARL for bedrock urls * fix: issues with transport * fix: comment out linting issues * test fix * test: XAI is unstable * test: fixes for using respx * test: XAI fixes * test: XAI fixes * test: infinity testing fixes * docs(config_settings.md): document param * test: test_openai_image_edit_litellm_sdk * test: remove deprecated test * bump respx==0.22.0 * test: test_xai_message_name_filtering * test: fix anthropic test after bumping httpx * use n 4 for mapped tests (#11109) * fix: use 1 session per event loop * test: test_client_session_helper * fix: linting error * fix: resolving GET requests on httpx 0.28.1 * test fixes proxy unit tests * fix: add ssl verify settings * fix: proxy unit tests * fix: refactor * tests: basic unit tests for aiohttp transports * tests: fixes xai --------- Co-authored-by: Krrish Dholakia <[email protected]> * test: cleanup redundant test * ui new build * bump: 1.70.5 * bump: version 1.70.5 → 1.71.0 * fix(pyproject.toml): bump litellm version * Logfire - fix(opentelemetry.py): Fix otel proxy server initialization (#11091) * fix(opentelemetry.py): Fix otel proxy server initialization Fixes #10349 (comment) * feat(router.py): allow ignoring invalid deployments on model load Prevents invalid models from preventing loading other valid models Fixes issue where on instance spin up invalid models were blocking valid models from being used * test: add additional unit testing * fix(user_api_key_auth.py): return abbreviated key in exception - make it easy to debug which key is invalid for client * docs(config_settings.md): document param * fix(user_api_key_auth.py): fix error string to match previous one * feat(handle_jwt.py): map user to team when added via jwt auth (#11108) * feat(handle_jwt.py): map user to team when added via jwt auth makes it easy to ensure user belongs to team * test: test_openai_image_edit_litellm_sdk * use n 4 for mapped tests (#11109) * Fix/background health check (#10887) * fix: improve health check logic by deep copying model list on each iteration * test: add async test for background health check reflecting model list changes * fix: validate health check interval before executing background health check * fix: specify type for health check results dictionary * fix(user_api_key_auth.py): handle user custom auth set with no custom settings * bump: version 0.1.21 → 0.2.0 * ci(config.yml): run enterprise and litellm tests separately * fix: fix linting error * docs: add missing docs * [Feat] Add content policy violation error mapping for image editd (#11113) * feat: add 
  ---------
  Co-authored-by: Ishaan Jaff <[email protected]>
  Co-authored-by: JuHyun Bae <[email protected]>
* fix(ui_sso.py): maintain backwards compatibility for older user id variations (#11106)
  * fix(ui_sso.py): maintain backwards compatibility for older user id variations; fixes an issue in later SSO checks, which only checked the id from the result
  * fix(internal_user_endpoints.py): handle trailing whitespace in new user email
  * fix(internal_user_endpoints.py): apply default_internal_user_settings on all new user calls (even when role not set); allows users with an undefined role to be assigned the correct role on sign up
  * feat(proxy_server.py): load default user settings from db - update litellm; correctly updates the litellm module with default internal user settings, ensuring updated settings actually apply
  * test: add unit test
  * fix(internal_user_endpoints.py): fix internal user default param role
  * fix(ui_sso.py): fix linting error
* test fix xai
* bump: version 1.71.0 → 1.71.1
* test: update tests
* fix: fix linting error
---------
Co-authored-by: Earl St Sauver <[email protected]>
Co-authored-by: Earl St Sauver <[email protected]>
Co-authored-by: Jorge Piedrahita Ortiz <[email protected]>
Co-authored-by: Bryan Low <[email protected]>
Co-authored-by: mohittalele <[email protected]>
Co-authored-by: Paul Selden <[email protected]>
Co-authored-by: Caffeine Coder <[email protected]>
Co-authored-by: tanjiro <[email protected]>
Co-authored-by: Daniel Staiger <[email protected]>
Co-authored-by: Cole McIntosh <[email protected]>
Co-authored-by: Ishaan Jaff <[email protected]>
Co-authored-by: Jugal D. Bhatt <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: slytechnical <[email protected]>
Co-authored-by: daarko10 <[email protected]>
Co-authored-by: Søren Mathiasen <[email protected]>
Co-authored-by: Matthias Dittrich <[email protected]>
Co-authored-by: fzowl <[email protected]>
Co-authored-by: Tyler Wagner <[email protected]>
Co-authored-by: wagnerjt <[email protected]>
Co-authored-by: Jay Gowdy <[email protected]>
Co-authored-by: bepotp <[email protected]>
Co-authored-by: Tommy PLANEL <[email protected]>
Co-authored-by: jmorenoc-o <[email protected]>
Co-authored-by: Simon Stone <[email protected]>
Co-authored-by: Martin Liu <[email protected]>
Co-authored-by: Gunjan Solanki <[email protected]>
Co-authored-by: Emerson Gomes <[email protected]>
Co-authored-by: Tornike Gurgenidze <[email protected]>
Co-authored-by: Mayank <[email protected]>
Co-authored-by: Kreato <[email protected]>
Co-authored-by: Xingyao Wang <[email protected]>
Co-authored-by: openhands <[email protected]>
Co-authored-by: JuHyun Bae <[email protected]>
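For the #10980 entry above: a minimal usage sketch of the new CustomLLM embedding hooks. The `embedding`/`aembedding` method names come from the commit messages and litellm's documented `custom_provider_map` registration; the exact signatures are assumptions, not the PR's verbatim implementation.

```python
# Hedged sketch of CustomLLM embedding support (#10980); signatures assumed.
import litellm
from litellm import CustomLLM, EmbeddingResponse


class MyCustomLLM(CustomLLM):
    def embedding(self, *args, **kwargs) -> EmbeddingResponse:
        # Call your real provider here; a fixed vector keeps the sketch runnable.
        return EmbeddingResponse(
            model="my-model",
            data=[{"embedding": [0.1, 0.2, 0.3], "index": 0, "object": "embedding"}],
        )

    async def aembedding(self, *args, **kwargs) -> EmbeddingResponse:
        return self.embedding(*args, **kwargs)


# Register the handler, then route requests via the custom provider prefix.
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]
response = litellm.embedding(model="my-custom-llm/my-model", input=["hello"])
```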
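For the `/customer/update` fix in #11070: a hedged example of the call it repairs. The URL and API key are placeholders, and the body fields are assumptions modeled on the endpoint's `/customer/new` counterpart.

```python
# Hedged example for the /customer/update max-budget fix (#11070).
# URL, API key, and body fields are placeholders/assumptions.
import httpx

resp = httpx.post(
    "http://localhost:4000/customer/update",
    headers={"Authorization": "Bearer sk-1234"},
    json={"user_id": "end-user-1", "max_budget": 50.0},
)
resp.raise_for_status()
print(resp.json())  # the updated customer record should reflect max_budget=50.0
```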
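The #11103 entry translates to roughly the following cost-map registration, shown here through litellm's public `register_model` helper. The exact field set in the PR's JSON is an assumption; the per-token costs follow from $0.1/M input = 1e-7 and $0.3/M output = 3e-7.

```python
# Sketch of the devstral-small-2505 entry from #11103, expressed via
# litellm.register_model; treat the field set as an assumption, not
# the PR's verbatim JSON.
import litellm

litellm.register_model({
    "mistral/devstral-small-2505": {
        "max_input_tokens": 128000,    # 128K context window
        "input_cost_per_token": 1e-7,  # $0.1 per 1M input tokens
        "output_cost_per_token": 3e-7, # $0.3 per 1M output tokens
        "litellm_provider": "mistral",
        "mode": "chat",
        "supports_function_calling": True,
        "supports_assistant_prefill": True,
        "supports_tool_choice": True,
    }
})
```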
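The deep-copy fix in #10887 boils down to snapshotting the model list on every loop iteration, so concurrent edits are picked up and the probe can never mutate the router's copy. A self-contained sketch with hypothetical helper names:

```python
# Illustrative sketch of the #10887 background health check fix.
# `probe_models` and the result shape are hypothetical stand-ins.
import asyncio
import copy
from typing import Any, Dict, List

health_check_results: Dict[str, Any] = {}


async def probe_models(model_list: List[dict]) -> Dict[str, Any]:
    # Stand-in for the real per-model health probe.
    return {"healthy_endpoints": model_list, "unhealthy_endpoints": []}


async def run_background_health_check(llm_model_list: List[dict], interval: float) -> None:
    if not interval or interval <= 0:
        return  # validate the interval before starting the loop
    while True:
        # Deep-copy each iteration: edits to the live list are reflected next
        # round, and the probe never mutates the router's own copy.
        snapshot = copy.deepcopy(llm_model_list)
        health_check_results.update(await probe_models(snapshot))
        await asyncio.sleep(interval)
```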
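And for #11097: the commits mention a flag for disabling the new aiohttp transport. A hedged sketch of the opt-out, treating the flag name as an inference from the commit message ("add flag for disabling use_aiohttp_transport") rather than a stable API:

```python
# Hedged opt-out sketch for the aiohttp transport (#11097). The flag name
# is inferred from the commit message and may differ in the released API;
# the model name is a placeholder.
import litellm

litellm.disable_aiohttp_transport = True  # fall back to the plain httpx transport

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
```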
Fixes #10792
Summary
Test plan
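A minimal sketch of the regression test this plan implies: push a dict chunk through the streaming serializer and assert an encodable SSE string comes out. `serialize_chunk` is a hypothetical stand-in for the proxy's actual helper, not its real name.

```python
# Hypothetical regression-test sketch; `serialize_chunk` is a stand-in
# for the proxy's actual streaming helper.
import json


def serialize_chunk(chunk) -> str:
    # Dict chunks must become SSE "data: ..." strings before being yielded.
    if isinstance(chunk, dict):
        return f"data: {json.dumps(chunk)}\n\n"
    return chunk


def test_dict_chunk_becomes_sse_string():
    chunk = {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "hi"}}
    out = serialize_chunk(chunk)
    assert isinstance(out, str)
    out.encode()  # the original bug: dict objects have no .encode()
    assert out.startswith("data: ")
```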