Add new model provider Novita AI #7582
Conversation
@ishaan-jaff Hi! I believe it’s ready for review and would be great to have it merged soon if everything looks good to you. Please let me know if there’s anything I should adjust.
model: str,
drop_params: bool,
) -> dict:
    mapped_openai_params = super().map_openai_params(
This looks like redundant code; just inherit and pass through to the parent.
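A minimal self-contained sketch of the point being made here (class names are illustrative, not litellm’s actual ones): an override that only forwards to `super()` behaves identically to not overriding at all.

```python
class BaseProviderConfig:
    def map_openai_params(self, non_default_params: dict, optional_params: dict) -> dict:
        optional_params.update(non_default_params)
        return optional_params


class RedundantNovitaConfig(BaseProviderConfig):
    # This override adds nothing over the parent; it can be deleted.
    def map_openai_params(self, non_default_params: dict, optional_params: dict) -> dict:
        return super().map_openai_params(non_default_params, optional_params)


class LeanNovitaConfig(BaseProviderConfig):
    # Simply inherits map_openai_params from the parent.
    pass
```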
Fixed
litellm/main.py
Outdated
@@ -2973,6 +2973,49 @@ def completion(  # type: ignore  # noqa: PLR0915
)
return response
response = model_response
elif custom_llm_provider == "novita":
There is no need for this if block. Just register the provider in litellm.openai_compatible_providers in __init__.py, and it will route into the OpenAI block.
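A sketch of the registration being suggested, assuming the `openai_compatible_providers` list in `litellm/__init__.py`; the neighbouring entries are examples, not the full list.

```python
# Illustrative excerpt of litellm/__init__.py; actual list contents vary by version.
openai_compatible_providers = [
    "openrouter",
    "hosted_vllm",
    "novita",  # once listed here, novita/* models route through the OpenAI block
]
```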
We need to add a custom header in this if block. I'm not sure where else to add it.
You should add it in validate_environment in the config; that's where headers can be set.
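A hedged sketch of what that could look like. The signature below is simplified relative to litellm's actual BaseConfig hook, and the `X-Novita-Source` header name is hypothetical.

```python
from typing import Optional


class NovitaConfigSketch:
    def validate_environment(
        self,
        headers: dict,
        model: str,
        messages: list,
        optional_params: dict,
        api_key: Optional[str] = None,
        api_base: Optional[str] = None,
    ) -> dict:
        # Headers set here are attached to every outbound request.
        headers["Authorization"] = f"Bearer {api_key}"
        headers["X-Novita-Source"] = "litellm"  # hypothetical custom header
        return headers
```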
In this case, it would be better to have it use base_llm_http_handler.py:

litellm/main.py, line 2286 in 4a6dcd6:
response = base_llm_http_handler.completion(

This is where we are moving all LLM calls to; that way, all your custom logic exists only within your transformation.py file.
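A toy illustration of that split (names here are illustrative, not litellm's actual API): a shared handler owns the HTTP call, and each provider contributes only a transformation class.

```python
import httpx


class ProviderTransform:
    """Per-provider logic: only request/response shaping lives here."""

    def transform_request(self, model: str, messages: list) -> dict:
        return {"model": model, "messages": messages}

    def transform_response(self, raw: dict) -> str:
        return raw["choices"][0]["message"]["content"]


def generic_completion(
    config: ProviderTransform, url: str, headers: dict, model: str, messages: list
) -> str:
    # The shared handler owns the HTTP request; providers never touch it.
    payload = config.transform_request(model, messages)
    resp = httpx.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    return config.transform_response(resp.json())
```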
Fixed
litellm/main.py
Outdated
@@ -3923,6 +3965,7 @@ async def atext_completion(
or custom_llm_provider == "huggingface"
or custom_llm_provider == "ollama"
or custom_llm_provider == "vertex_ai"
or custom_llm_provider == "novita"
No need for this; see the line below. Just register it as an openai_compatible_provider.
Fixed
litellm/utils.py
Outdated
@@ -2879,6 +2882,7 @@ def get_optional_params(  # noqa: PLR0915
and custom_llm_provider != "bedrock"
and custom_llm_provider != "ollama_chat"
and custom_llm_provider != "openrouter"
and custom_llm_provider != "novita"
No need for this; just register it as an OpenAI-compatible provider.
Fixed
@@ -4650,6 +4650,46 @@ def test_humanloop_completion(monkeypatch):
messages=[{"role": "user", "content": "Tell me a joke."}],
)

def test_completion_novita_ai():
This test will fail in prod. Please make this a mock test; see the hosted_vllm/ example.
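A minimal sketch of what such a mock test might look like, assuming pytest with the respx library to intercept httpx calls. The endpoint URL, model name, and test name are assumptions, and the actual hosted_vllm example in the repo may mock differently.

```python
import httpx
import respx

import litellm


@respx.mock
def test_completion_novita_ai_mock():
    # Intercept the outbound HTTP call so the test never hits the real API.
    # The URL below is an assumption about Novita's OpenAI-compatible endpoint.
    respx.post("https://api.novita.ai/v3/openai/chat/completions").mock(
        return_value=httpx.Response(
            200,
            json={
                "id": "chatcmpl-123",
                "object": "chat.completion",
                "created": 1700000000,
                "model": "deepseek/deepseek-r1",
                "choices": [
                    {
                        "index": 0,
                        "message": {"role": "assistant", "content": "Hi!"},
                        "finish_reason": "stop",
                    }
                ],
                "usage": {
                    "prompt_tokens": 5,
                    "completion_tokens": 2,
                    "total_tokens": 7,
                },
            },
        )
    )

    response = litellm.completion(
        model="novita/deepseek/deepseek-r1",
        messages=[{"role": "user", "content": "Tell me a joke."}],
        api_key="fake-key",
    )
    assert response.choices[0].message.content == "Hi!"
```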
Hi @jasonhp, thank you for your PR.
I would expect to see
@krrishdholakia Hello, thank you for reviewing. I have fixed the problems you mentioned, fixed all unit tests, and added e2e tests. I have run them locally: [screenshot of the passing test run]
Merge branch 'litellm_contributor_prs_03_24_2025_p1' into feat/novita
@krrishdholakia Hello! All conflicts have been resolved.
Merged commit 7a92a03 into BerriAI:litellm_contributor_prs_03_24_2025_p1
Yup, thank you for your work on this.
@krrishdholakia Hello! I wonder when this will be merged into the main branch.
@krrishdholakia Hi Krish, can you help merge these changes to the main branch?
* Add new model provider Novita AI (#7582)
  * feat: add new model provider Novita AI
  * feat: use deepseek r1 model for examples in Novita AI docs
  * fix: fix tests
  * fix: fix tests for novita
  * fix: fix novita transformation
  * ci: fix ci yaml
* fix: fix novita transformation and test (#10056)

---------

Co-authored-by: Jason <[email protected]>
Title
Add new model provider Novita AI
Type
🆕 New Feature
📖 Documentation
Changes
[REQUIRED] Testing - Attach a screenshot of any new tests passing locally
If UI changes, send a screenshot/GIF of working UI fixes