Releases: BerriAI/litellm
v1.72.1.dev5
Full Changelog: v1.72.0.dev8...v1.72.1.dev5
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.1.dev5
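Once the container is up, you can sanity-check the proxy with a test request. A minimal sketch, assuming at least one model is configured on the proxy; the sk-1234 key and model name are placeholders:

curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'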
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 249.42 | 6.18 | 0.0 | 1849 | 0 | 210.09 | 4472.54 |
Aggregated | Passed ✅ | 230.0 | 249.42 | 6.18 | 0.0 | 1849 | 0 | 210.09 | 4472.54 |
v1.72.1-nightly
What's Changed
- [Feat]: Performance add DD profiler to monitor python profile of LiteLLM CPU% by @ishaan-jaff in #11375
- [Fix]: Performance - Don't run auth on /health/liveliness by @ishaan-jaff in #11378
- [Bug Fix] Create/Update team member API 500 error by @hagan in #10479
- add gemini-embeddings-001 model prices and context window by @marty-sullivan in #11332
- [Performance]: Add debugging endpoint to track active /asyncio-tasks by @ishaan-jaff in #11382
- Add Claude 4 Sonnet & Opus, DeepSeek R1, and fix Llama Vision model pricing configurations by @colesmcintosh in #11339
- [Feat] Performance - Don't create 1 task for every hanging request alert by @ishaan-jaff in #11385
- UI / SSO - Update proxy admin id role in DB + Handle SSO redirects with custom root path by @krrishdholakia in #11384
- Anthropic - pass file URLs as Document content type + Gemini - cache token tracking on streaming calls by @krrishdholakia in #11387
- Anthropic - Token tracking for Passthrough Batch API calls by @krrishdholakia in #11388
Full Changelog: v1.72.1.dev1...v1.72.1-nightly
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.1-nightly
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 200.0 | 221.13 | 6.24 | 0.0 | 1867 | 0 | 180.30 | 1429.33 |
Aggregated | Passed ✅ | 200.0 | 221.13 | 6.24 | 0.0 | 1867 | 0 | 180.30 | 1429.33 |
v1.72.0.dev8
Full Changelog: v1.72.0.dev6...v1.72.0.dev8
v1.72.0.dev6
Full Changelog: v1.72.0.rc1...v1.72.0.dev6
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.dev6
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 233.83 | 6.19 | 0.0033 | 1853 | 1 | 52.76 | 1123.65 |
Aggregated | Passed ✅ | 220.0 | 233.83 | 6.19 | 0.0033 | 1853 | 1 | 52.76 | 1123.65 |
v1.72.1.dev1
What's Changed
- Support returning virtual key in custom auth + Handle provider-specific optional params for embedding calls by @krrishdholakia in #11346
- Doc: Nvidia embedding models by @AnilAren in #11352
- feat: add cerebras/qwen-3-32b model pricing and context window by @colesmcintosh in #11373
- Fix Google/Vertex AI Gemini module linting errors - Remove unused imports by @colesmcintosh in #11374
Full Changelog: v1.72.0.dev3...v1.72.1.dev1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.1.dev1
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 240.22 | 6.13 | 0.0 | 1835 | 0 | 194.89 | 1476.12 |
Aggregated | Passed ✅ | 220.0 | 240.22 | 6.13 | 0.0 | 1835 | 0 | 194.89 | 1476.12 |
v1.72.0.rc1
Full Changelog: v1.72.0.rc...v1.72.0.rc1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.rc1
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 259.44 | 6.08 | 0.0 | 1818 | 0 | 213.84 | 1394.72 |
Aggregated | Passed ✅ | 240.0 | 259.44 | 6.08 | 0.0 | 1818 | 0 | 213.84 | 1394.72 |
v1.72.0.dev3
What's Changed
- Fix transcription model name mapping by @colesmcintosh in #11333
- [Feat] DD Trace - Add instrumentation for streaming chunks by @ishaan-jaff in #11338
- UI - Custom Server Root Path (Multiple Fixes) by @krrishdholakia in #11337
- [Perf] - Add Async + Batched S3 Logging by @ishaan-jaff in #11340
- fixes: expose flag to disable token counter by @ishaan-jaff in #11344
- Merge in - Gemini streaming - thinking content parsing - return in reasoning_content by @krrishdholakia in #11298 (see the sketch after this list)
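For context on the Gemini streaming change above, a hedged sketch of consuming that thinking content through the proxy. Assumptions: a Gemini thinking model is configured, and the model alias and key below are placeholders; with "stream": true, the parsed thinking tokens are surfaced in each chunk's delta as reasoning_content:

curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "What is 12 * 13?"}],
    "stream": true
  }'
# Each streamed chunk may carry choices[0].delta.reasoning_content
# alongside the usual content field.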
Full Changelog: v1.72.0.dev1...v1.72.0.dev3
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.dev3
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 236.14 | 6.22 | 0.0 | 1860 | 0 | 197.27 | 1261.35 |
Aggregated | Passed ✅ | 220.0 | 236.14 | 6.22 | 0.0 | 1860 | 0 | 197.27 | 1261.35 |
v1.72.0.dev2
What's Changed
- Litellm doc fixes 05 31 2025 by @krrishdholakia in #11305
Full Changelog: v1.72.0.rc...v1.72.0.dev2
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.dev2
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 180.0 | 205.60 | 6.32 | 0.0 | 1893 | 0 | 163.08 | 1519.45 |
Aggregated | Passed ✅ | 180.0 | 205.60 | 6.32 | 0.0 | 1893 | 0 | 163.08 | 1519.45 |
v1.72.0.dev1
What's Changed
- Litellm doc fixes 05 31 2025 by @krrishdholakia in #11305
- Converted action buttons to sticky footer action buttons by @NANDINI-star in #11293
- Add support for DataRobot as a provider in LiteLLM by @mjnitz02 in #10385
- fix: remove dupe server_id MCP config servers by @wagnerjt in #11327
- Add unit tests for Cohere Embed v4.0 model by @colesmcintosh in #11329
- Add presidio_language yaml configuration support for guardrails by @colesmcintosh in #11331
- [Fix] Fix SCIM running patch operation case sensitivity by @ishaan-jaff in #11335
Full Changelog: v1.72.0.rc...v1.72.0.dev1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.dev1
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 200.0 | 219.56 | 6.26 | 0.0 | 1873 | 0 | 180.63 | 1544.17 |
Aggregated | Passed ✅ | 200.0 | 219.56 | 6.26 | 0.0 | 1873 | 0 | 180.63 | 1544.17 |
v1.72.0.rc
What's Changed
- Rate Limiting: Check all slots on redis, Reduce number of cache writes by @krrishdholakia in #11299
Full Changelog: v1.72.0-nightly...v1.72.0.rc
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.72.0.rc
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 239.90858607890516 | 6.227705163442919 | 0.0 | 1863 | 0 | 193.717753000044 | 1627.433018000005 |
Aggregated | Passed ✅ | 220.0 | 239.90858607890516 | 6.227705163442919 | 0.0 | 1863 | 0 | 193.717753000044 | 1627.433018000005 |