v1.73.6.rc #12146

Open

krrishdholakia wants to merge 4,367 commits into litellm_stable_branch

Conversation

krrishdholakia
Contributor

Title

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR.

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details (a minimal test sketch follows this list)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible; it only solves 1 specific problem
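A minimal sketch of a unit test that would satisfy the first checklist item, assuming the repo's usual pytest layout under tests/litellm/; the file name and the behavior being asserted are illustrative, not taken from this PR. It can then be run locally with `make test-unit` per the third item.

```python
# tests/litellm/test_example_feature.py
# Illustrative only: the module path and asserted behavior are hypothetical.
import pytest

import litellm


def test_completion_rejects_empty_model():
    # Calling completion without a resolvable model should raise an error.
    with pytest.raises(Exception):
        litellm.completion(
            model="",
            messages=[{"role": "user", "content": "hi"}],
        )
```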

Type

🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test

Changes

krrishdholakia and others added 30 commits June 19, 2025 15:36
* add pass through ui

* fix accordion for route path

* working route path renderer

* fix use sections

* clean up add pass through form

* docs fix add pass through routing

* clean up route preview

* add route preview
… prop to make blue model ID buttons clickable - Fix checkbox selection logic to use model names consistently - Add stopPropagation to prevent unwanted sort triggers on checkbox clicks - Now clicking Model ID opens model details, and select all works properly (#11898)
…HealthCheckComponent - Change wrapper from card styling to simple div with mb-6 spacing - Remove padding around table container for cleaner layout - Add proper icon-based actions in health check table - Use PlayIcon for new checks and RefreshIcon for re-running checks - Add loading animation with dots during health checks - Include proper tooltips for action buttons (#11897)
…low clients to specify MCP headers (#11890) (#11891)

* initial mcp auth with special header (#11890)

Co-authored-by: wagnerjt <[email protected]>

* add mcp auth header

* fixes MCP client for litellm proxy

* fixes loc of MCP types

* fixes use MCP client for auth to MCPs

* fix organization

* fix mcp auth header

* add MCP auth header to litellm auth

* fixes for MCP auth

* Add MCP auth to list tools

* fix MCP call tool

* fixes for MCP auth header

* tests for MCP transport

* TestMCPClientUnitTests

* docs MCP auth

* fix types

* docs fix

* fix MCP auth import

* fix code qa check

* test fix mcp auth token check

---------

Co-authored-by: wagnerjt <[email protected]>
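A rough sketch of how a client might forward MCP auth through the proxy after this change, using the `mcp` Python SDK. The endpoint path and both header names (`x-litellm-api-key`, `x-mcp-auth`) are assumptions based on the commit titles, not confirmed by this log:

```python
# Sketch: connect to the LiteLLM proxy MCP gateway and forward an auth header
# to the downstream MCP server. Endpoint path and header names are assumed.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main():
    async with streamablehttp_client(
        url="http://localhost:4000/mcp",  # assumed proxy MCP endpoint
        headers={
            "x-litellm-api-key": "sk-1234",      # proxy virtual key (assumed header name)
            "x-mcp-auth": "Bearer <mcp-token>",  # auth forwarded to the MCP server (assumed header name)
        },
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])


asyncio.run(main())
```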
* fix _get_ssl_context

* fixes for using HTTP handler
* Add deployment annotations

* Correct the indent and simplify if 0 annotations
* Enhance Mistral API: Add support for parallel tool calls and refine name handling in tool messages. Plus, introduce a new test for parallel tool calls in the Mistral model.

* tests

* make mypy happy

* Refine name handling in Mistral chat transformation: clarify conditions for removing the 'name' field based on message role and content.
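A hedged sketch of what the parallel tool call support for Mistral might look like from the caller's side; the model alias, tool schema, and exact parameter name are placeholders:

```python
# Sketch: pass parallel_tool_calls when calling a Mistral model through litellm.
# Model name and tool schema are illustrative placeholders.
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="mistral/mistral-large-latest",
    messages=[{"role": "user", "content": "Weather in Paris and Berlin?"}],
    tools=tools,
    parallel_tool_calls=True,  # assumed to be forwarded to the Mistral API per this change
)
print(response.choices[0].message.tool_calls)
```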
* ui - fix 1

* fixes

* fix path prefix

* fix path
#11907)

* build(model_prices_and_context_window.json): mark all gemini-2.5 models as supporting pdf input

Closes #11881

* fix(anthropic_transformation.py): set custom llm provider custom property

Fixes #11861

* test: add unit test for checking supports_reasoning

* test: add test for vertex ai flow

* feat(bedrock/anthropic): ensure thinking param correctly passed for bedrock/invoke
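A hedged sketch of passing the `thinking` param over the bedrock/invoke route after this change; the model ID and budget values are illustrative, and the format follows Anthropic's extended-thinking convention, which may not match every deployment:

```python
# Sketch: pass the `thinking` param to an Anthropic model on Bedrock via the
# invoke route. Model ID and token budgets are illustrative.
import litellm

response = litellm.completion(
    model="bedrock/invoke/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": "Plan a 3-step migration."}],
    thinking={"type": "enabled", "budget_tokens": 1024},  # Anthropic extended-thinking format
    max_tokens=2048,  # must exceed the thinking budget
)
print(response.choices[0].message)
```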
…e) + Bedrock - handle `qs:..` in base64 file data + Tag Management - support adding public model names (#11908)

* fix(factory.py): handle qs:.. in mime type

Fixes #11839

* feat(litellm_proxy/): don't transform messages client-side

leave litellm proxy messages untouched - allow proxy to handle transformation

 prevents double transformation

* feat(tag_management_endpoints.py): support adding models to tag by adding model_name

Closes #11884

* test(test_tag_management_endpoints.py): add unit tests for adding new model by public model name

* test: update test
…-modal

Add success modal for health check responses
…naming patterns (#11914)

* fix(volcengine.py): add thinking param support

Closes #11879

* fix(gpt_transformation.py): handle azure custom names - e.g. `gpt-4-1`

Closes #11834
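A hedged sketch of the Azure custom-deployment-name case (e.g. `gpt-4-1`) this fix targets; the endpoint, key, and API version are placeholders:

```python
# Sketch: call an Azure deployment whose custom name uses dashes ("gpt-4-1"),
# which the transformation change aims to handle. Credentials are placeholders.
import litellm

response = litellm.completion(
    model="azure/gpt-4-1",  # custom deployment name
    api_base="https://my-endpoint.openai.azure.com",
    api_key="<azure-api-key>",
    api_version="2024-10-21",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```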
krrishdholakia and others added 20 commits June 28, 2025 13:24
* fix - refactor init to use a registry

* # noqa: PLR0915
* use 1 file for KeyManagementSystem

* move key management settings

* fix import locs

* test_proxy_types_not_imported

* test the import loc

* fix import item

* fix imports

* fix import loc

* fix imports
…able setting custom header tags (#12131)

* fix(anthropic/experimental_pass_through): use given model name when returning streaming chunks

don't hardcode the model name on streaming chunks

confusing for user

* fix(anthropic/streaming_iterator.py): remove scope of import

* feat(litellm_logging.py): allow admin to specify additional headers for using as spend tags

Closes #12129

* test(test_litellm_logging.py): add unit tests

* feat(openweb_ui.md): add custom tag tutorial to docs

* docs(cost_tracking.md): add tag based usage UI screenshot

* test: update test

* fix: fix import
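A hedged client-side sketch of the custom spend-tag header flow: the request sends an extra header through the proxy that an admin could configure as a spend tag. The header name, proxy URL, and key are illustrative, and the proxy-side setting name is not shown in this log:

```python
# Sketch: send a custom header through the LiteLLM proxy; if the admin has
# configured that header as a spend tag, it should appear in spend logs.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={"x-custom-team": "search-platform"},  # hypothetical spend-tag header
)
print(response.choices[0].message.content)
```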
…y models on /v2/model/info + render team member budget correctly (#12144)

* fix(team_endpoints.py): prevent overwriting current list of team models on new model add

* fix(networking.tsx): fix default proxy base url

* fix(proxy_server.py): include team only models when retrieving all deployments on `/v2/model/info` helper util

ensures team only models are shown to user

* fix(router.py): check model name by team public model name when team id given

Fixes issue where team member could not see team only models when clicking into that team on `Models + Endpoints`

* fix(team_member_view.tsx): fix rendering team member budget, when budget is set

* test: update tests

* test: update unit test
release note cleanup

vercel bot commented Jun 29, 2025

The latest updates on your projects:

litellm — ✅ Ready — updated Jul 2, 2025 7:14pm (UTC)


CLAassistant commented Jun 29, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
10 out of 11 committers have signed the CLA.

✅ amarrella
✅ ohmeow
✅ krrishdholakia
✅ ishaan-jaff
✅ bougou
✅ zhangyoufu
✅ colesmcintosh
✅ NANDINI-star
✅ codeugar
✅ Mte90
❌ glgh
You have signed the CLA already but the status is still pending? Let us recheck it.

…12188)

* fix(rebuild-usage-object---ensure-cache_tokens-is-set): ensure cached tokens are correctly set

Fixes #12149

* test(test_stream_chunk_builder_utils.py): add unit test to ensure cached tokens is part of stream chunk builder

Ensures standardized values are used
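A hedged sketch of checking cached tokens on a usage object rebuilt with `litellm.stream_chunk_builder`; the attribute names follow the OpenAI usage shape and may vary by provider:

```python
# Sketch: rebuild a usage object from streamed chunks and read the cached-token
# count. Model name is illustrative; cached_tokens may be absent for some providers.
import litellm

messages = [{"role": "user", "content": "Summarize the previous context."}]

chunks = []
for chunk in litellm.completion(
    model="gpt-4o",
    messages=messages,
    stream=True,
    stream_options={"include_usage": True},  # ask the provider to emit usage on the final chunk
):
    chunks.append(chunk)

rebuilt = litellm.stream_chunk_builder(chunks, messages=messages)
usage = rebuilt.usage
cached = (
    getattr(usage.prompt_tokens_details, "cached_tokens", None)
    if getattr(usage, "prompt_tokens_details", None)
    else None
)
print(usage.prompt_tokens, usage.completion_tokens, cached)
```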
* fix(proxy_server.py): only rewrite server_root_path if path set

Fixes UI rendering issue on non-root images

* docs(custom_root_ui.md): clarify custom root path doesn't work on non-root images
…s false - allows storing early gemini finish reasons (#12250)

Fixes #12249