Replies: 1 comment
-
It should work, so could you check again? (Otherwise, we wouldn't have the documentation you mentioned.)
You misunderstood something: the "custom-endpoint-prefixes" config you mentioned is for client-to-Gateway communication and has nothing to do with any specific AI provider. The AIServiceBackend.Schema config, on the other hand, is per AIServiceBackend, which is where the differences among AI providers come into play. The latter is what you are interested in, and it can be configured so that multiple OpenAI-compatible backends use different prefixes.
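For illustration, a minimal sketch of what such a per-backend configuration might look like for OpenRouter, assuming the AIServiceBackend CRD shape shown in the Envoy AI Gateway docs; the resource names, namespace, and the referenced Backend (`openrouter-backend`) are placeholders, not something defined in this thread:

```yaml
# Hypothetical example: an OpenAI-compatible backend whose API lives under
# the api/v1 prefix (as with openrouter.ai/api/v1/chat/completions).
# Names and namespace are placeholders.
apiVersion: aigateway.envoyproxy.io/v1alpha1
kind: AIServiceBackend
metadata:
  name: openrouter
  namespace: default
spec:
  schema:
    name: OpenAI      # OpenRouter speaks the OpenAI-compatible schema
    version: api/v1   # per-backend path prefix instead of the default v1
  backendRef:
    # Assumed to point at an Envoy Gateway Backend resolving to
    # openrouter.ai:443 (defined elsewhere, not shown here).
    name: openrouter-backend
    kind: Backend
    group: gateway.envoyproxy.io
```

Another AIServiceBackend for a different OpenAI-compatible provider could set its own `schema.version`, so the prefix stays per backend rather than applying gateway-wide.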
-
Hello,
First of all, great work on the project and on maintaining such a great pace.
I would like to use OpenRouter as a provider. Since it’s OpenAI-compatible, I thought it might be possible, and it works to some extent, but I still have one issue: OpenRouter’s API path is api/v1, yet instead of going to openrouter.ai/api/v1/chat/completions, requests are sent to openrouter.ai/v1/chat/completions.
I tried setting AIServiceBackend schema.version to "api/v1", but nothing changed.
I also saw here (https://aigateway.envoyproxy.io/docs/capabilities/llm-integrations/supported-endpoints/#custom-endpoint-prefixes) that it’s possible to add custom prefixes, but as I understand it, this would apply a prefix to all OpenAI schema requests, not only OpenRouter.
Do you think there is a way I can make this work in the current state of things?
Thanks in advance.