    model_settings=ModelSettings(temperature=0.5),
)
```

## Using LiteLLM Provider

The SDK includes built-in support for [LiteLLM](https://docs.litellm.ai/), a unified interface for multiple LLM providers. LiteLLM provides a proxy server that exposes an OpenAI-compatible API for various LLM providers, including OpenAI, Anthropic, Azure, AWS Bedrock, Google, and more.

### Basic Usage

```python
import asyncio

from agents import Agent, Runner, LiteLLMProvider, RunConfig

# Create a LiteLLM provider
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",    # or set LITELLM_API_KEY env var
    base_url="http://localhost:8000",  # or set LITELLM_API_BASE env var
)

# Create an agent using a specific model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="claude-3",  # Will be routed to Anthropic by the provider
)

# Create a run configuration with the provider
run_config = RunConfig(model_provider=provider)

async def main():
    result = await Runner.run(
        agent,
        input="Hello!",
        run_config=run_config,  # Pass the provider through run_config
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

### Environment Variables

The LiteLLM provider supports configuration through environment variables:

```bash
# LiteLLM configuration
export LITELLM_API_KEY="your-litellm-api-key"
export LITELLM_API_BASE="http://localhost:8000"
export LITELLM_MODEL="gpt-4"  # Default model (optional)

# Provider-specific keys (examples)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_API_KEY="..."
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```
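
When these variables are set, the provider can be constructed without arguments. A minimal sketch, assuming `LiteLLMProvider` falls back to the environment as the comments in the Basic Usage example indicate:

```python
from agents import RunConfig, LiteLLMProvider

# Assumes LITELLM_API_KEY and LITELLM_API_BASE are exported as shown above
provider = LiteLLMProvider()
run_config = RunConfig(model_provider=provider)
```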

### Model Routing

The provider automatically routes model names to their appropriate providers:

```python
# Create the LiteLLM provider
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
)

# Create a run configuration with the provider
run_config = RunConfig(model_provider=provider)

# Models are automatically routed based on their names
openai_agent = Agent(
    name="OpenAI Agent",
    instructions="Using GPT-4",
    model="gpt-4",  # Will be routed to OpenAI
)

anthropic_agent = Agent(
    name="Anthropic Agent",
    instructions="Using Claude",
    model="claude-3",  # Will be routed to Anthropic
)

azure_agent = Agent(
    name="Azure Agent",
    instructions="Using Azure OpenAI",
    model="azure/gpt-4",  # Explicitly using Azure
)

# Run any of the agents with the provider (inside an async function)
result = await Runner.run(openai_agent, input="Hello!", run_config=run_config)
```

You can also explicitly specify providers using prefixes:

- `openai/` - OpenAI models
- `anthropic/` - Anthropic models
- `azure/` - Azure OpenAI models
- `aws/` - AWS Bedrock models
- `cohere/` - Cohere models
- `replicate/` - Replicate models
- `huggingface/` - Hugging Face models
- `mistral/` - Mistral AI models
- `gemini/` - Google Gemini models
- `groq/` - Groq models
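
For example, a prefixed model name pins the route explicitly, just like the `azure/gpt-4` agent above. A minimal sketch using the prefix table (the exact model string is illustrative):

```python
# The aws/ prefix forces routing to AWS Bedrock, overriding name-based detection
bedrock_agent = Agent(
    name="Bedrock Agent",
    instructions="Using Claude via AWS Bedrock",
    model="aws/claude-3",  # Explicit provider prefix from the list above
)
```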

### Advanced Configuration

The provider supports additional configuration options:

```python
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
    model_name="gpt-4",    # Default model
    use_responses=True,    # Use OpenAI Responses API format
    extra_headers={        # Additional headers
        "x-custom-header": "value"
    },
    drop_params=True,      # Drop unsupported params for specific models
)
```
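
If an agent omits `model`, requests can fall back to the provider's `model_name`. A minimal sketch, assuming the default-model fallback works as the `# Default model` comment above suggests:

```python
# No model set on the agent; the provider's model_name ("gpt-4") is assumed to apply
default_agent = Agent(
    name="Default Model Agent",
    instructions="You are a helpful assistant.",
)
run_config = RunConfig(model_provider=provider)
```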

### Using Multiple Providers

You can use different providers for different agents in your workflow:

```python
from agents import Agent, Runner, OpenAIProvider, LiteLLMProvider, RunConfig

# OpenAI provider for direct OpenAI API access
openai_provider = OpenAIProvider()

# LiteLLM provider for other models
litellm_provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
)

# Create agents with different model names
triage_agent = Agent(
    name="Triage",
    instructions="Route requests to appropriate agents",
    model="gpt-3.5-turbo",  # Will be routed by the provider
)

analysis_agent = Agent(
    name="Analysis",
    instructions="Perform detailed analysis",
    model="claude-3",  # Will be routed by the provider
)

# Run each agent with the provider that should serve it (inside an async function)
triage_result = await Runner.run(
    triage_agent,
    input="Summarize this report",
    run_config=RunConfig(model_provider=openai_provider),
)
analysis_result = await Runner.run(
    analysis_agent,
    input=triage_result.final_output,
    run_config=RunConfig(model_provider=litellm_provider),
)
```

The LiteLLM provider makes it easy to use multiple LLM providers while maintaining a consistent interface and the full feature set of the Agents SDK, including handoffs, tools, and tracing.