Describe the bug
When a Foundry resource endpoint is configured in the obot server using the Azure OpenAI model provider with MS Entra authentication, the configuration fails even though valid values are provided for all required configuration parameters. The obot server reports a “No deployments configured” error during validation and fails to complete setup. The same Foundry resource endpoint can be configured successfully with API key authentication, in which case deployments must be specified explicitly as part of the configuration.
To Reproduce
Steps to reproduce the behavior:
- Have a Foundry resource created in the Azure portal with a few models deployed.
- Add a "Cognitive Services OpenAI User" role assignment for an existing MS Entra app on the resource.
- From the obot server, configure the above Foundry using the Azure OpenAI model provider with the MS Entra authentication method. Provide valid values for all the required fields: Endpoint, Client ID, Client Secret, Tenant ID, Subscription ID, and Resource Group.
- The Azure OpenAI provider fails to configure with the errors below.
The following errors are seen in the obot server logs:

```
2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>
time="2026-01-08T00:25:33Z" level=error msg="failed to run tool [validate] cmd [/bin/sh -c exec ${GPTSCRIPT_TOOL_DIR}/bin/gptscript-go-tool validate]: exit status 1" logger=-ai/[email protected]/pkg/engine
time="2026-01-08T00:25:33Z" level=info msg="Handled request: method POST, path /run" id=fa49f394-a2ae-460b-bf17-3d5878225622
time="2026-01-08T00:25:33Z" level=error msg="failed to save state: ERROR: {\"error\":\"No deployments configured\"}\n2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>\n: exit status 1" logger=/app/pkg/invoke/invoker.go
time="2026-01-08T00:25:33Z" level=error msg="run failed: failed to stream: ERROR: {\"error\":\"No deployments configured\"}\n2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>\n: exit status 1" logger=/app/pkg/invoke/invoker.go
```
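As a sanity check that the Foundry resource does have deployments visible to the Azure management plane (which is what the MS Entra path would need to discover them), the deployments can be listed with the Azure CLI. This is only a sketch; `<resource-name>` and `<resource-group>` are placeholders for your own values, and the signed-in identity needs read access to the resource:

```shell
# List model deployments on the Foundry (Cognitive Services) resource.
# Replace <resource-name> and <resource-group> with your values.
az cognitiveservices account deployment list \
  --name <resource-name> \
  --resource-group <resource-group> \
  --output table
```

In my case this command lists the deployed models, which suggests the deployments exist and the failure is in how the provider discovers them, not in the resource itself.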
Expected behavior
It should be possible to configure the Azure OpenAI model provider successfully using MS Entra authentication for a Foundry resource that has models deployed.
Note: I am able to configure the same Foundry resource using API key authentication, which requires providing the deployments as part of the provider configuration.