Commit 7be2a48 (parent: 656bcdb)

Added examples and documentation for LiteLLM model provider & exposed LiteLLMProvider
File tree: 5 files changed, +436 −1 lines
docs/models.md

Lines changed: 172 additions & 0 deletions
@@ -71,3 +71,175 @@ spanish_agent = Agent(
    model_settings=ModelSettings(temperature=0.5),
)
```

## Using the LiteLLM Provider

The SDK includes built-in support for [LiteLLM](https://docs.litellm.ai/), a unified interface for multiple LLM providers. LiteLLM provides a proxy server that exposes an OpenAI-compatible API for a wide range of providers, including OpenAI, Anthropic, Azure, AWS Bedrock, and Google.
### Basic Usage

```python
import asyncio

from agents import Agent, Runner, LiteLLMProvider, RunConfig

# Create a LiteLLM provider
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",    # or set LITELLM_API_KEY env var
    base_url="http://localhost:8000",  # or set LITELLM_API_BASE env var
)

# Create an agent using a specific model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="claude-3",  # Will be routed to Anthropic by the provider
)

# Create a run configuration with the provider
run_config = RunConfig(model_provider=provider)

async def main():
    result = await Runner.run(
        agent,
        input="Hello!",
        run_config=run_config,  # Pass the provider through run_config
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

### Environment Variables

The LiteLLM provider supports configuration through environment variables:

```bash
# LiteLLM configuration
export LITELLM_API_KEY="your-litellm-api-key"
export LITELLM_API_BASE="http://localhost:8000"
export LITELLM_MODEL="gpt-4"  # Default model (optional)

# Provider-specific keys (examples)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_API_KEY="..."
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```
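
Constructor arguments such as `api_key` and `base_url` would normally take precedence over these environment variables. That resolution order can be sketched with a small helper (`resolve_setting` is hypothetical and illustrative, not part of the SDK):

```python
import os
from typing import Optional

def resolve_setting(explicit: Optional[str], env_var: str, default: Optional[str] = None) -> Optional[str]:
    """Return the explicit value if given, otherwise the environment variable, otherwise the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

# An explicitly passed base_url wins over LITELLM_API_BASE
os.environ["LITELLM_API_BASE"] = "http://localhost:8000"
print(resolve_setting("http://proxy.internal:4000", "LITELLM_API_BASE"))  # http://proxy.internal:4000
print(resolve_setting(None, "LITELLM_API_BASE"))                          # http://localhost:8000
```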

### Model Routing

The provider automatically routes model names to their appropriate providers:
```python
# Create the LiteLLM provider
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
)

# Create a run configuration with the provider
run_config = RunConfig(model_provider=provider)

# Models are automatically routed based on their names
openai_agent = Agent(
    name="OpenAI Agent",
    instructions="Using GPT-4",
    model="gpt-4",  # Will be routed to OpenAI
)

anthropic_agent = Agent(
    name="Anthropic Agent",
    instructions="Using Claude",
    model="claude-3",  # Will be routed to Anthropic
)

azure_agent = Agent(
    name="Azure Agent",
    instructions="Using Azure OpenAI",
    model="azure/gpt-4",  # Explicitly using Azure
)

# Run any of the agents with the provider (inside an async function)
result = await Runner.run(openai_agent, input="Hello!", run_config=run_config)
```

You can also explicitly specify providers using prefixes:

- `openai/` - OpenAI models
- `anthropic/` - Anthropic models
- `azure/` - Azure OpenAI models
- `aws/` - AWS Bedrock models
- `cohere/` - Cohere models
- `replicate/` - Replicate models
- `huggingface/` - Hugging Face models
- `mistral/` - Mistral AI models
- `gemini/` - Google Gemini models
- `groq/` - Groq models
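
Prefix routing boils down to splitting the model string on its first `/`; bare names are matched against well-known naming patterns. A minimal sketch of that idea (`route_model` is hypothetical and illustrative, not LiteLLM's actual routing code):

```python
def route_model(model: str) -> tuple:
    """Split 'provider/model' strings; guess the provider for bare names."""
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    # Fallback: infer the provider from well-known name patterns
    if model.startswith("gpt-"):
        return "openai", model
    if model.startswith("claude"):
        return "anthropic", model
    if model.startswith("gemini"):
        return "gemini", model
    return "openai", model  # default assumption

print(route_model("azure/gpt-4"))  # ('azure', 'gpt-4')
print(route_model("claude-3"))     # ('anthropic', 'claude-3')
```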
### Advanced Configuration

The provider supports additional configuration options:

```python
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
    model_name="gpt-4",   # Default model
    use_responses=True,   # Use OpenAI Responses API format
    extra_headers={       # Additional headers
        "x-custom-header": "value",
    },
    drop_params=True,     # Drop unsupported params for specific models
)
```

### Using Multiple Providers

You can use different providers for different agents in your workflow:

```python
import asyncio

from agents import Agent, Runner, OpenAIProvider, LiteLLMProvider, RunConfig

# OpenAI provider for direct OpenAI API access
openai_provider = OpenAIProvider()

# LiteLLM provider for other models
litellm_provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
)

# Create agents with different model names
triage_agent = Agent(
    name="Triage",
    instructions="Route requests to appropriate agents",
    model="gpt-3.5-turbo",  # Will be routed by the provider
)

analysis_agent = Agent(
    name="Analysis",
    instructions="Perform detailed analysis",
    model="claude-3",  # Will be routed by the provider
)

# Run with the OpenAI provider (inside an async function)
openai_config = RunConfig(model_provider=openai_provider)
result_triage = await Runner.run(
    triage_agent,
    input="Analyze this data",
    run_config=openai_config,
)

# Run with the LiteLLM provider
litellm_config = RunConfig(model_provider=litellm_provider)
result_analysis = await Runner.run(
    analysis_agent,
    input="Perform detailed analysis of this data",
    run_config=litellm_config,
)
```

The LiteLLM provider makes it easy to use multiple LLM providers while keeping a consistent interface and the full feature set of the Agents SDK, including handoffs, tools, and tracing.

examples/litellm/README.md

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
# LiteLLM Provider Examples

This directory contains examples demonstrating how to use the LiteLLM provider with the Agents SDK.

## Prerequisites

1. Install and run the LiteLLM proxy server:

   ```bash
   pip install litellm
   litellm --model ollama/llama2 --port 8000
   ```

2. Set up environment variables:

   ```bash
   # LiteLLM configuration
   export LITELLM_API_KEY="your-litellm-api-key"  # If required by your proxy
   export LITELLM_API_BASE="http://localhost:8000"

   # Provider API keys (as needed)
   export OPENAI_API_KEY="sk-..."
   export ANTHROPIC_API_KEY="sk-ant-..."
   export GEMINI_API_KEY="..."
   ```

## Examples

### Multi-Provider Workflow (`multi_provider_workflow.py`)

This example demonstrates using multiple LLM providers in a workflow:

1. A triage agent (using OpenAI directly) determines the task type
2. Based on the task type, it routes to specialized agents:
   - Summarization tasks → Claude (via LiteLLM)
   - Coding tasks → GPT-4 (via LiteLLM)
   - Creative tasks → Gemini (via LiteLLM)

The example uses `RunConfig` to specify which provider to use for each agent:

```python
# For the OpenAI provider
openai_config = RunConfig(model_provider=openai_provider)
result = await Runner.run(triage_agent, input="...", run_config=openai_config)

# For the LiteLLM provider
litellm_config = RunConfig(model_provider=litellm_provider)
result = await Runner.run(target_agent, input="...", run_config=litellm_config)
```

To run:

```bash
python examples/litellm/multi_provider_workflow.py
```

The example processes three different types of requests to demonstrate the routing:

1. A summarization request about the French Revolution
2. A coding request to implement a Fibonacci sequence
3. A creative writing request about a time-traveling coffee cup

## Notes

- The LiteLLM provider automatically routes model names to their appropriate providers (e.g., `claude-3` → Anthropic, `gpt-4` → OpenAI)
- You can explicitly specify providers using prefixes (e.g., `anthropic/claude-3`, `openai/gpt-4`)
- The provider handles passing API keys and configuration through headers
- All Agents SDK features (handoffs, tools, tracing) work with the LiteLLM provider
- Use `RunConfig` to specify which provider to use when calling `Runner.run()`
examples/litellm/multi_provider_workflow.py

Lines changed: 140 additions & 0 deletions
@@ -0,0 +1,140 @@
"""
This example demonstrates using multiple LLM providers in a workflow using LiteLLM.

It creates a workflow where:
1. A triage agent (using OpenAI directly) determines the task type
2. Based on the task type, it routes to:
   - A summarization agent using Claude via LiteLLM
   - A coding agent using GPT-4 via LiteLLM
   - A creative agent using Gemini via LiteLLM
"""

import asyncio
import os
from typing import Literal

from pydantic import BaseModel

from agents import Agent, Runner, OpenAIProvider, LiteLLMProvider, RunConfig
from agents.agent_output import AgentOutputSchema


class TaskType(BaseModel):
    """The type of task to be performed."""

    task: Literal["summarize", "code", "creative"]
    explanation: str


class TaskOutput(BaseModel):
    """The output of the task."""

    result: str
    provider_used: str


# Set up providers
openai_provider = OpenAIProvider(
    api_key=os.getenv("OPENAI_API_KEY"),
)

litellm_provider = LiteLLMProvider(
    api_key=os.getenv("LITELLM_API_KEY"),
    base_url=os.getenv("LITELLM_API_BASE", "http://localhost:8000"),
)

# Create specialized agents for different tasks
triage_agent = Agent(
    name="Triage Agent",
    instructions="""
    You are a triage agent that determines the type of task needed.
    - For text analysis, summarization, or understanding tasks, choose 'summarize'
    - For programming, coding, or technical tasks, choose 'code'
    - For creative writing, storytelling, or artistic tasks, choose 'creative'
    """,
    model="gpt-3.5-turbo",
    output_schema=AgentOutputSchema(TaskType),
)

summarize_agent = Agent(
    name="Summarization Agent",
    instructions="""
    You are a summarization expert using Claude's advanced comprehension capabilities.
    Provide clear, concise summaries while preserving key information.
    Always include "Provider Used: Anthropic Claude" in your response.
    """,
    model="claude-3",  # Will be routed to Anthropic
    output_schema=AgentOutputSchema(TaskOutput),
)

code_agent = Agent(
    name="Coding Agent",
    instructions="""
    You are a coding expert using GPT-4's technical capabilities.
    Provide clean, well-documented code solutions.
    Always include "Provider Used: OpenAI GPT-4" in your response.
    """,
    model="gpt-4",  # Will be routed to OpenAI
    output_schema=AgentOutputSchema(TaskOutput),
)

creative_agent = Agent(
    name="Creative Agent",
    instructions="""
    You are a creative writer using Gemini's creative capabilities.
    Create engaging, imaginative content.
    Always include "Provider Used: Google Gemini" in your response.
    """,
    model="gemini-pro",  # Will be routed to Google
    output_schema=AgentOutputSchema(TaskOutput),
)


async def process_request(user_input: str) -> str:
    """Process a user request using the appropriate agent."""
    # First, triage the request with the OpenAI provider
    openai_config = RunConfig(model_provider=openai_provider)
    triage_result = await Runner.run(
        triage_agent,
        input=f"What type of task is this request? {user_input}",
        run_config=openai_config,
    )
    task_type = triage_result.final_output

    # Route to the appropriate specialized agent
    target_agent = {
        "summarize": summarize_agent,
        "code": code_agent,
        "creative": creative_agent,
    }[task_type.task]

    # Process with the specialized agent using the LiteLLM provider
    litellm_config = RunConfig(model_provider=litellm_provider)
    result = await Runner.run(
        target_agent,
        input=user_input,
        run_config=litellm_config,
    )

    return f"""
Task Type: {task_type.task}
Reason: {task_type.explanation}
Result: {result.final_output.result}
Provider Used: {result.final_output.provider_used}
"""


async def main():
    """Run example requests through the workflow."""
    requests = [
        "Can you summarize the key points of the French Revolution?",
        "Write a Python function to calculate the Fibonacci sequence.",
        "Write a short story about a time-traveling coffee cup.",
    ]

    for request in requests:
        print(f"\nProcessing request: {request}")
        print("-" * 80)
        result = await process_request(request)
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
