
Azure OpenAI Model Provider - Model Provider fails to get configured because it does not fetch any models from the Foundry endpoint when using MS Entra authentication. #5449

@sangee2004

Description


Describe the bug
When a Foundry resource endpoint is configured in the obot server using the Azure OpenAI model provider with MS Entra authentication, the configuration fails even when valid values are provided for all required configuration parameters. The obot server reports a "No deployments configured" error during validation and fails to complete setup. I am able to configure the same Foundry resource endpoint with API key authentication successfully, in which case we are required to specify deployments as part of the configuration.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Foundry resource in the Azure portal with a few models deployed.

Foundry entry:

(screenshot)

Models deployed in the Foundry resource:

(screenshot)

  2. Add the Cognitive Services OpenAI User role assignment to an existing MS Entra app.

(screenshot)

  3. From the obot server, configure the above Foundry resource using the Azure OpenAI model provider with the MS Entra authentication method. Provide valid values for all the required fields - Endpoint, Client ID, Client Secret, Tenant ID, Subscription ID, and Resource Group.

  4. The Azure OpenAI provider fails to get configured with the following errors:

(screenshot)

The following errors are seen in the obot server logs:

2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>
time="2026-01-08T00:25:33Z" level=error msg="failed to run tool [validate] cmd [/bin/sh -c exec ${GPTSCRIPT_TOOL_DIR}/bin/gptscript-go-tool validate]: exit status 1" logger=-ai/[email protected]/pkg/engine
time="2026-01-08T00:25:33Z" level=info msg="Handled request: method POST, path /run" id=fa49f394-a2ae-460b-bf17-3d5878225622
time="2026-01-08T00:25:33Z" level=error msg="failed to save state: ERROR: {\"error\":\"No deployments configured\"}\n2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>\n: exit status 1" logger=/app/pkg/invoke/invoker.go
time="2026-01-08T00:25:33Z" level=error msg="run failed: failed to stream: ERROR: {\"error\":\"No deployments configured\"}\n2026/01/08 00:25:33 ERROR No deployments configured logger=/tools/azure-openai-model-provider/validate error=<nil>\n: exit status 1" logger=/app/pkg/invoke/invoker.go
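For context on where the validate step likely goes wrong: with MS Entra authentication the provider has to discover deployments from Azure itself rather than take them from user configuration. A minimal sketch of building the ARM request that lists a Cognitive Services account's deployments - the helper name and the exact `api-version` are assumptions for illustration, not obot's actual code:

```python
# Sketch: build the Azure Resource Manager (ARM) URL that lists the model
# deployments of a Cognitive Services / Foundry account. Hypothetical helper,
# not obot's actual implementation.

def deployments_url(subscription_id: str, resource_group: str, account: str,
                    api_version: str = "2023-05-01") -> str:
    """Return the ARM endpoint that enumerates an account's deployments."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.CognitiveServices"
        f"/accounts/{account}/deployments"
        f"?api-version={api_version}"
    )

# The actual request would carry an Entra token acquired with the configured
# Client ID / Client Secret / Tenant ID:
#   GET <url>  with header  Authorization: Bearer <token>
# If this call returns an empty deployment list (or fails silently), a
# validator would surface exactly the "No deployments configured" error above.

print(deployments_url("sub-id", "my-rg", "my-foundry"))
```

This is a diagnostic aid only; whether the bug is in token acquisition, in the subscription/resource-group parameters reaching the request, or in parsing the response is what the maintainers would need to determine.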

Expected behavior
The Azure OpenAI Model Provider should configure successfully using MS Entra authentication for a Foundry resource that has models deployed.

Note - I am able to configure the same Foundry resource using API key authentication, which requires us to provide the deployments as part of the provider configuration.
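The asymmetry in that note can be sketched as follows; the auth-method and field names are illustrative placeholders, not obot's actual configuration keys:

```python
# Sketch of the two configuration paths (illustrative names only).

def resolve_deployments(auth_method: str, config: dict) -> list:
    """With API key auth, deployments come from the configuration itself;
    with MS Entra auth, they must be discovered from the Azure endpoint."""
    if auth_method == "api_key":
        # User supplies deployments explicitly, so validation can succeed
        # without querying Azure.
        return config.get("deployments", [])
    if auth_method == "entra":
        # Expected: deployments discovered via the management API.
        # The reported bug: discovery yields nothing, so validation fails
        # with "No deployments configured" despite valid credentials.
        return config.get("discovered_deployments", [])
    raise ValueError(f"unknown auth method: {auth_method}")

print(resolve_deployments("api_key", {"deployments": ["gpt-4o"]}))
```

The point of the sketch is that the Entra path has an extra discovery step the API-key path does not, and the failure reported here sits in that step.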
