Anthropic - pass file url's as Document content type + Gemini - cache token tracking on streaming calls #11387

Merged · 3 commits · Jun 4, 2025

Conversation

krrishdholakia (Contributor) commented on Jun 4, 2025

  • fix(anthropic/): fix regression when passing file URLs to the 'file_id' parameter

    Add a test and ensure Anthropic file URLs are correctly sent as 'document' blocks.

  • fix(vertex_and_google_ai_studio.py): use the same usage-calculation function as the non-streaming path

    Closes #10667

  • test(test_vertex_and_google_ai_studio_gemini.py): update test
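The first fix above concerns how an OpenAI-style file content part is translated for Anthropic when 'file_id' carries a URL instead of a Files API id. A minimal sketch of the intended mapping (the helper name and branching are illustrative, not litellm's actual code; Anthropic's Messages API does accept "document" blocks with a "url" source):

```python
# Sketch only: translate an OpenAI-style file content part into an Anthropic
# "document" content block. When 'file_id' holds an HTTP(S) URL, it should be
# sent as a url-source document block rather than a file reference.

def to_anthropic_document_block(file_part: dict) -> dict:
    file_id = file_part.get("file", {}).get("file_id", "")
    if file_id.startswith(("http://", "https://")):
        # URL -> url-source document block
        return {"type": "document", "source": {"type": "url", "url": file_id}}
    # otherwise treat it as an Anthropic Files API id
    return {"type": "document", "source": {"type": "file", "file_id": file_id}}


openai_part = {"type": "file", "file": {"file_id": "https://example.com/doc.pdf"}}
print(to_anthropic_document_block(openai_part))
# {'type': 'document', 'source': {'type': 'url', 'url': 'https://example.com/doc.pdf'}}
```

The regression the PR fixes was that URL-valued 'file_id' inputs were not ending up as url-source 'document' blocks on this path.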


vercel bot commented on Jun 4, 2025

The latest updates on your projects:

Name    | Status  | Updated (UTC)
litellm | ✅ Ready | Jun 4, 2025 4:29am

krrishdholakia changed the title from "Litellm dev 06 03 2025 p2" to "Anthropic - pass file url's as Document content type + Gemini - cache token tracking on streaming calls" on Jun 4, 2025
krrishdholakia merged commit 3bd1286 into main on Jun 4, 2025
30 of 46 checks passed
krrishdholakia deleted the litellm_dev_06_03_2025_p2 branch on June 4, 2025 04:37
stefan-- pushed a commit to stefan--/litellm that referenced this pull request Jun 12, 2025
… token tracking on streaming calls (BerriAI#11387)

Successfully merging this pull request may close these issues.

[Feature]: return cache token usage in gemini streaming response
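The linked issue above asks for cached-token counts on Gemini streaming responses; the second fix routes streaming through the same usage calculation as non-streaming. A sketch of the idea (field names follow Gemini's `usageMetadata`; the helper itself and the output key layout are illustrative, not litellm's exact implementation):

```python
# Sketch only: one shared usage-calculation helper for both streaming and
# non-streaming Gemini responses, so cachedContentTokenCount is not dropped
# when the final stream chunk carries usageMetadata.

def calculate_usage(usage_metadata: dict) -> dict:
    prompt = usage_metadata.get("promptTokenCount", 0)
    completion = usage_metadata.get("candidatesTokenCount", 0)
    cached = usage_metadata.get("cachedContentTokenCount", 0)
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": prompt + completion,
        "prompt_tokens_details": {"cached_tokens": cached},
    }


# The final streaming chunk carries the same usageMetadata shape as a
# non-streaming response, so both paths can call the same helper:
meta = {"promptTokenCount": 120, "candidatesTokenCount": 30,
        "cachedContentTokenCount": 100}
print(calculate_usage(meta))  # total_tokens == 150, cached_tokens == 100
```

Sharing one helper is the design point: the bug class fixed here is exactly the drift that occurs when streaming and non-streaming accounting are maintained separately.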