Logfire - fix(opentelemetry.py): Fix otel proxy server initialization (#11091)
* fix(opentelemetry.py): Fix otel proxy server initialization
Fixes #10349 (comment)
* feat(router.py): allow ignoring invalid deployments on model load
Prevents invalid models from blocking the loading of other valid models.

Fixes an issue where, on instance spin-up, invalid models blocked valid models from being used.
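The behavior described above can be sketched as follows. This is a hypothetical simplification (the helper names `load_deployments` and `validate` are illustrative, not litellm's actual internals); the real implementation lives in `litellm/router.py`:

```python
# Hypothetical sketch of the "ignore invalid deployments" behavior:
# when the flag is set, a deployment that fails validation is logged
# and skipped instead of aborting the whole model load.

def validate(deployment: dict) -> None:
    """Hypothetical check: every deployment needs a model name."""
    if not deployment.get("model_name"):
        raise ValueError(f"missing model_name in {deployment!r}")

def load_deployments(model_list: list, ignore_invalid_deployments: bool = False) -> list:
    """Validate each deployment; skip invalid ones or raise, per the flag."""
    loaded = []
    for deployment in model_list:
        try:
            validate(deployment)
        except ValueError as e:
            if ignore_invalid_deployments:
                print(f"Skipping invalid deployment: {e}")
                continue
            raise  # default behavior: one bad deployment fails the load
        loaded.append(deployment)
    return loaded
```

With the flag enabled, a mixed list of valid and invalid deployments loads the valid ones; with it disabled (the default), the first invalid deployment raises.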
* test: add additional unit testing
* fix(user_api_key_auth.py): return abbreviated key in exception - make it easy to debug which key is invalid for client
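Abbreviating a key for error output can be sketched like this (hypothetical helper for illustration; the actual abbreviation logic in `user_api_key_auth.py` may differ):

```python
def abbreviate_api_key(api_key: str, keep: int = 6) -> str:
    """Return a safe-to-log prefix of an API key.

    Hypothetical helper: keeps only the first `keep` characters so a
    client can identify which key was rejected without the error
    message exposing the full secret.
    """
    if len(api_key) <= keep:
        return api_key
    return api_key[:keep] + "..."
```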
* docs(config_settings.md): document param
* fix(user_api_key_auth.py): fix error string to match previous one
```diff
+        e.message = "Authentication Error, Invalid proxy server token passed. Received API Key = {}, Key Hash (Token) = {}. Unable to find token in cache or `LiteLLM_VerificationTokenTable`".format(
+            abbreviated_api_key, api_key
+        )
+        raise e
         # update end-user params on valid token
         # These can change per request - it's important to update them here
```
litellm/router.py (82 additions, 59 deletions)
```diff
@@ -255,6 +255,7 @@ def __init__( # noqa: PLR0915
         router_general_settings: Optional[
             RouterGeneralSettings
         ] = RouterGeneralSettings(),
+        ignore_invalid_deployments: bool = False,
     ) -> None:
         """
         Initialize the Router class with the given parameters for caching, reliability, and routing strategy.
```
```diff
@@ -287,6 +288,7 @@ def __init__( # noqa: PLR0915
             routing_strategy_args (dict): Additional args for latency-based routing. Defaults to {}.
             alerting_config (AlertingConfig): Slack alerting configuration. Defaults to None.
             provider_budget_config (ProviderBudgetConfig): Provider budget configuration. Use this to set llm_provider budget limits. example $100/day to OpenAI, $100/day to Azure, etc. Defaults to None.
+            ignore_invalid_deployments (bool): Ignores invalid deployments, and continues with other deployments. Default is to raise an error.
```