Comparing changes
base repository: openai/openai-agents-python
base: v0.0.12
head repository: openai/openai-agents-python
compare: main
Commits on Apr 23, 2025
- Adding extra_headers parameters to ModelSettings (#550)
  jonnyk20 authored Apr 23, 2025 · 111fc9e
- Examples: Fix financial_research_agent instructions (#573)
  seratch authored Apr 23, 2025 · 178020e
- Allow cancel out of the streaming result (#579)
  handrew authored Apr 23, 2025 · a113fea
  Fix for #574. @rm-openai I'm not sure how to add a test within the repo, but I have pasted a test script below that seems to work:

  ```python
  import asyncio

  from openai.types.responses import ResponseTextDeltaEvent

  from agents import Agent, Runner


  async def main():
      agent = Agent(
          name="Joker",
          instructions="You are a helpful assistant.",
      )
      result = Runner.run_streamed(agent, input="Please tell me 5 jokes.")
      num_visible_event = 0
      async for event in result.stream_events():
          if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
              print(event.data.delta, end="", flush=True)
              num_visible_event += 1
              print(num_visible_event)
              if num_visible_event == 3:
                  result.cancel()


  if __name__ == "__main__":
      asyncio.run(main())
  ```
Commits on Apr 24, 2025
- Create to_json_dict for ModelSettings (#582)
  rm-openai authored Apr 24, 2025 · 3755ea8
  Now that `ModelSettings` has `Reasoning`, a non-primitive object, `dataclasses.asdict()` won't work. It will raise an error when you try to serialize (e.g. for tracing). This ensures the object is actually serializable.
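To illustrate why a dedicated serialization method is needed, here is a minimal, self-contained sketch. The `Reasoning` and `ModelSettings` classes below are simplified stand-ins, not the SDK's actual definitions, and `to_dict` is a hypothetical helper used only for this illustration.

```python
import dataclasses
import json
from typing import Optional


class Reasoning:
    """Stand-in for a non-primitive settings object (not a dataclass)."""

    def __init__(self, effort: str) -> None:
        self.effort = effort

    def to_dict(self) -> dict:
        return {"effort": self.effort}


@dataclasses.dataclass
class ModelSettings:
    temperature: Optional[float] = None
    reasoning: Optional[Reasoning] = None

    def to_json_dict(self) -> dict:
        # Convert each field to a JSON-serializable value; plain
        # dataclasses.asdict() would leave Reasoning as an object that
        # json.dumps cannot handle.
        out = {}
        for f in dataclasses.fields(self):
            value = getattr(self, f.name)
            if hasattr(value, "to_dict"):
                value = value.to_dict()
            out[f.name] = value
        return out


settings = ModelSettings(temperature=0.2, reasoning=Reasoning(effort="high"))
serialized = json.dumps(settings.to_json_dict())
```

The key point is that serialization is pushed into a method that knows how to flatten every field, rather than relying on generic dataclass introspection.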
- Prevent MCP ClientSession hang (#580)
  njbrake authored Apr 24, 2025 · af80e3a
  Per https://modelcontextprotocol.io/specification/draft/basic/lifecycle#timeouts: "Implementations SHOULD establish timeouts for all sent requests, to prevent hung connections and resource exhaustion. When the request has not received a success or error response within the timeout period, the sender SHOULD issue a cancellation notification for that request and stop waiting for a response. SDKs and other middleware SHOULD allow these timeouts to be configured on a per-request basis." I picked 5 seconds since that's the default for SSE.
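The timeout idea from the lifecycle spec can be sketched generically with `asyncio.wait_for`. This is an illustrative stand-in, not the SDK's actual MCP code; `slow_request` and `send_with_timeout` are hypothetical names.

```python
import asyncio


async def slow_request() -> str:
    # Simulates a server that never responds in time.
    await asyncio.sleep(60)
    return "response"


async def send_with_timeout(timeout: float) -> str:
    # Stop waiting once the timeout elapses instead of hanging forever,
    # mirroring the lifecycle guidance quoted in the commit description.
    try:
        return await asyncio.wait_for(slow_request(), timeout=timeout)
    except asyncio.TimeoutError:
        return "timed out"


result = asyncio.run(send_with_timeout(timeout=0.05))
```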
- Fix stream error using LiteLLM (#589)
  Commit e11b822
  In response to issue #587, I implemented a solution that first checks whether the `refusal` and `usage` attributes exist on the `delta` object. I added a unit test similar to `test_openai_chatcompletions_stream.py`. Let me know if I should change something.
  Co-authored-by: Rohan Mehta <rm@openai.com>
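The defensive attribute check can be sketched as follows. `Delta` and `read_delta` are hypothetical stand-ins for the streamed chunk objects, used only to show the `getattr`-with-default pattern the fix describes.

```python
class Delta:
    """Stand-in for a streamed chunk delta; some providers omit fields."""

    def __init__(self, content=None):
        self.content = content  # note: no `refusal` or `usage` attribute


def read_delta(delta):
    # getattr with a default avoids AttributeError when a provider's
    # delta object lacks `refusal` or `usage`.
    return delta.content, getattr(delta, "refusal", None), getattr(delta, "usage", None)


content, refusal, usage = read_delta(Delta(content="Hi"))
```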
- More tests for cancelling streamed run (#590)
  rm-openai authored Apr 24, 2025 · 45eb41f
- rm-openai authored Apr 24, 2025 · 3bbc7c4
- Add usage to context in streaming (#595)
  rm-openai authored Apr 24, 2025 · 8fd7773
- Make the TTS voices type exportable (#577)
  Commit aa197e1
  When using the voice agent in typed code, it is suboptimal and error-prone to type the TTS voice variables in your code independently. With this commit we are making the type exportable so that developers can just use it and be future-proof. Example of usage in code:

  ```python
  DEFAULT_TTS_VOICE: TTSModelSettings.TTSVoice = "alloy"

  ...

  tts_voice: TTSModelSettings.TTSVoice = DEFAULT_TTS_VOICE

  ...

  output = await VoicePipeline(
      workflow=workflow,
      config=VoicePipelineConfig(
          tts_settings=TTSModelSettings(
              buffer_size=512,
              transform_data=transform_data,
              voice=tts_voice,
              instructions=tts_instructions,
          )
      ),
  ).run(audio_input)
  ```

  Co-authored-by: Rohan Mehta <rm@openai.com>
Commits on Apr 25, 2025
- docs: add FutureAGI to tracing documentation (#592)
  NVJKKartik authored Apr 25, 2025 · 4187fba
  Hi team! This PR adds FutureAGI to the tracing documentation as one of the automatic tracing processors for the OpenAI Agents SDK.
Commits on Apr 29, 2025
- Addresses #614
  pakrym-oai authored Apr 29, 2025 · db0ee9d
Commits on Apr 30, 2025
- pakrym-oai authored Apr 30, 2025 · f976349
Commits on May 14, 2025
- Fixed a bug for "detail" attribute in input image (#685)
  DanieleMorotti authored May 14, 2025 · 2c46dae
  When an input image is given as input, the code tries to access the 'detail' key, which may not be present, as noted in #159. With this pull request, it now tries to access the key and otherwise sets the value to `None`. @pakrym-oai or @rm-openai, let me know if you want any changes.
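The fix boils down to `dict.get` versus direct indexing. The sketch below is illustrative; `normalize_input_image` is a hypothetical helper, not the SDK's actual function.

```python
def normalize_input_image(image: dict) -> dict:
    # dict.get() returns None when "detail" is absent instead of raising
    # KeyError, matching the behavior the fix describes.
    return {
        "type": "input_image",
        "image_url": image["image_url"],
        "detail": image.get("detail"),
    }


with_detail = normalize_input_image({"image_url": "https://example.com/a.png", "detail": "low"})
without_detail = normalize_input_image({"image_url": "https://example.com/b.png"})
```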
- feat: pass extra_body through to LiteLLM acompletion (#638)
  AshokSaravanan222 authored May 14, 2025 · 1994f9d
  **Purpose**: allow arbitrary `extra_body` parameters (e.g. `cached_content`) to be forwarded into the LiteLLM call. Useful for context caching in Gemini models ([docs](https://ai.google.dev/gemini-api/docs/caching?lang=python)).

  **Example usage**:

  ```python
  import os

  from agents import Agent, ModelSettings
  from agents.extensions.models.litellm_model import LitellmModel

  cache_name = "cachedContents/34jopukfx5di"  # previously stored context

  gemini_model = LitellmModel(
      model="gemini/gemini-1.5-flash-002",
      api_key=os.getenv("GOOGLE_API_KEY"),
  )

  agent = Agent(
      name="Cached Gemini Agent",
      model=gemini_model,
      model_settings=ModelSettings(
          extra_body={"cached_content": cache_name},
      ),
  )
  ```
- Added missing word "be" in prompt instructions
  leohpark authored May 14, 2025 · 02b6e70
  This is unlikely to change the agent's functionality in most cases, but optimal clarity in prompt language is a best practice.
- feat: Streamable HTTP support (#643)
  Commit 1847008
  Co-authored-by: aagarwal25 <akshit_agarwal@intuit.com>
Commits on May 15, 2025
- rm-openai authored May 15, 2025 · 5fe096d
Commits on May 18, 2025
- Adding an AGENTS.md file for Codex use
  dkundel-openai authored May 18, 2025 · c282324
- Added mcp 'instructions' attribute to the server (#706)
  DanieleMorotti authored May 18, 2025 · 003cbfe
  Added the `instructions` attribute to the MCP servers to solve #704. Let me know if you want to add an example to the documentation.
Commits on May 19, 2025
- Add Galileo to external tracing processors list (#662)
  franz101 authored May 19, 2025 · 428c9a6
Commits on May 20, 2025
- Dev/add usage details to Usage class (#726)
  Commit 466b44d
  PR to enhance the `Usage` object and related logic to support more granular token accounting, matching the details available in the [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses). Specifically, it:
  - Adds `input_tokens_details` and `output_tokens_details` fields to the `Usage` dataclass, storing detailed token breakdowns (e.g., `cached_tokens`, `reasoning_tokens`)
  - Flows this change through
  - Updates and extends tests to match
  - Adds a test for the `Usage.add` method

  Motivation:
  - Aligns the SDK's usage with the latest OpenAI Responses API `Usage` object
  - Supports downstream use cases that require fine-grained token usage data (e.g., billing, analytics, optimization) requested by startups

  Co-authored-by: Wulfie Bain <wulfie@openai.com>
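The shape of the change can be sketched with simplified dataclasses. These are stand-ins for the SDK's actual `Usage` types, reduced to the fields named above, and the `add` logic shown is an assumption about how the breakdowns are summed.

```python
from dataclasses import dataclass, field


@dataclass
class InputTokensDetails:
    cached_tokens: int = 0


@dataclass
class OutputTokensDetails:
    reasoning_tokens: int = 0


@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    input_tokens_details: InputTokensDetails = field(default_factory=InputTokensDetails)
    output_tokens_details: OutputTokensDetails = field(default_factory=OutputTokensDetails)

    def add(self, other: "Usage") -> None:
        # Sum the top-level counts and the detailed breakdowns together,
        # so per-turn usage accumulates into a run total.
        self.input_tokens += other.input_tokens
        self.output_tokens += other.output_tokens
        self.input_tokens_details.cached_tokens += other.input_tokens_details.cached_tokens
        self.output_tokens_details.reasoning_tokens += other.output_tokens_details.reasoning_tokens


total = Usage()
total.add(Usage(input_tokens=100, input_tokens_details=InputTokensDetails(cached_tokens=40)))
total.add(Usage(output_tokens=50, output_tokens_details=OutputTokensDetails(reasoning_tokens=20)))
```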
Commits on May 21, 2025
-
Upgrade openAI sdk version (#730)
rm-openai authoredMay 21, 2025 Loading Loading status checks…Configuration menu - View commit details
-
Copy full SHA for ce2e2a4 - Browse repository at this point
Copy the full SHA ce2e2a4View commit details -
- rm-openai authored May 21, 2025 · 9fa5c39
- Add support for local shell, image generator, code interpreter tools (#732)
  rm-openai authored May 21, 2025 · 079764f
- rm-openai authored May 21, 2025 · 1992be3
- fix Gemini token validation issue with LiteLLM (#735)
  handrew authored May 21, 2025 · 1364f44
  Fix for #734.
Commits on May 23, 2025
- Fix visualization recursion with cycle detection (#737)
  rm-openai authored May 23, 2025 · db462e3
  Summary: avoid infinite recursion in visualization by tracking visited agents; test cycle detection in the graph utility. Testing: `make mypy`, `make tests`. Resolves #668.
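The visited-set technique behind this fix can be sketched in a few lines. `draw_graph` and `FakeAgent` here are illustrative stand-ins, not the SDK's actual visualization code.

```python
def draw_graph(agent, visited=None):
    # Track visited agents by identity so mutual handoffs (a -> b -> a)
    # terminate instead of recursing forever.
    if visited is None:
        visited = set()
    if id(agent) in visited:
        return []
    visited.add(id(agent))
    edges = []
    for target in agent.handoffs:
        edges.append((agent.name, target.name))
        edges.extend(draw_graph(target, visited))
    return edges


class FakeAgent:
    def __init__(self, name):
        self.name = name
        self.handoffs = []


a, b = FakeAgent("a"), FakeAgent("b")
a.handoffs.append(b)
b.handoffs.append(a)  # cycle: a -> b -> a
edges = draw_graph(a)
```

Without the `visited` set, the `a -> b -> a` cycle would raise `RecursionError`.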
- Update MCP and tool docs (#736)
  rm-openai authored May 23, 2025 · a96108e
  Summary: mention `MCPServerStreamableHttp` in the MCP server docs; document `CodeInterpreterTool`, `HostedMCPTool`, `ImageGenerationTool`, and `LocalShellTool`; update Japanese translations.
- Fix Gemini API content filter handling (#746)
  rm-openai authored May 23, 2025 · 6e078bf
  Summary: avoid AttributeError when the Gemini API returns `None` for a chat message; return empty output if the message is filtered; add a regression test. Testing: `make format`, `make lint`, `make mypy`, `make tests`. Towards #744.
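The None-guard pattern can be sketched as below. `extract_output` is a hypothetical helper and `SimpleNamespace` stands in for the real response objects; the point is checking for `None` before touching `.content`.

```python
from types import SimpleNamespace


def extract_output(response) -> list:
    # Gemini can return None for the message when its content filter
    # triggers; return empty output instead of reading .content on None.
    message = response.choices[0].message if response.choices else None
    if message is None or message.content is None:
        return []
    return [message.content]


filtered = SimpleNamespace(choices=[SimpleNamespace(message=None)])
normal = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content="hello"))])
```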
Commits on May 29, 2025
- Add Portkey AI as a tracing provider (#785)
  siddharthsambharia-portkey authored May 29, 2025 · d46e2ec
  This PR adds Portkey AI as a tracing provider. Portkey turns your experimental OpenAI Agents into production-ready systems by providing:
  - Complete observability of every agent step, tool use, and interaction
  - Built-in reliability with fallbacks, retries, and load balancing
  - Cost tracking and optimization to manage your AI spend
  - Access to 1600+ LLMs through a single integration
  - Guardrails to keep agent behavior safe and compliant
  - Version-controlled prompts for consistent agent performance

  Towards #786.
- Added RunErrorDetails object for MaxTurnsExceeded exception (#743)
  DanieleMorotti authored May 29, 2025 · 7196862
  Summary: introduced the `RunErrorDetails` object to get partial results from a run interrupted by a `MaxTurnsExceeded` exception. In this proposal the `RunErrorDetails` object contains all the fields from `RunResult`, with `final_output` set to `None` and `output_guardrail_results` set to an empty list. We can decide to return less information. @rm-openai At the moment the exception doesn't return the `RunErrorDetails` object for streaming mode (in the `_check_errors` function of `agents/result.py`). Do you have any suggestions on how to deal with it?
  Test plan: I have not implemented any tests currently, but if needed I can implement a basic test to retrieve partial data.
  Issue number: this PR is an attempt to solve issue #719.
  Checks:
  - [x] I've added new tests (if relevant)
  - [ ] I've added/updated the relevant documentation
  - [x] I've run `make lint` and `make format`
  - [x] I've made sure tests pass
- sarmadgulzar authored May 29, 2025 · 47fa8e8
Commits on May 30, 2025
- Small fix for litellm model (#789)
  robtinn authored May 30, 2025 · b699d9a
  Small fix: removing `import litellm.types`, since it sits outside the try/except block that guards the litellm import, so the friendly import-error message isn't displayed; the line actually isn't needed. I was reproducing a GitHub issue and came across this in the process.
- Fix typo in assertion message for handoff function (#780)
  Rehan-Ul-Haq authored May 30, 2025 · 16fb29c
  Overview: this PR fixes a typo in the assert statement within the `handoff` function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for accuracy and clarity. Clear and correct documentation improves code readability and reduces confusion for users and contributors. No functionality is affected.
- Fix typo: Replace 'two' with 'three' in /docs/mcp.md (#757)
  luochang212 authored May 30, 2025 · 0a28d71
  The documentation in `docs/mcp.md` listed three server types (stdio, HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of servers" in the heading. This PR fixes the numerical discrepancy in `docs/mcp.md` (line 11).
- Update input_guardrails.py (#774)
  venkatnaveen7 authored May 30, 2025 · ad80f78
  Changed the function comment, as input guardrails only deal with input messages.
- docs: fix typo in docstring for is_strict_json_schema method (#775)
  Rehan-Ul-Haq authored May 30, 2025 · 6438350
  This PR fixes a small typo in the docstring of the `is_strict_json_schema` abstract method of the `AgentOutputSchemaBase` class in `agent_output.py`, correcting "valis" to "valid". No functionality is affected.
- Add comment to handoff_occured misspelling (#792)
  rm-openai authored May 30, 2025 · cfe9099
  People keep trying to fix this, but it's a breaking change.
Commits on Jun 2, 2025
- Fix #777 by handling MCPCall events in RunImpl (#799)
  seratch authored Jun 2, 2025 · 3e7b286
  This pull request resolves #777. If you think we should introduce a new item type for MCP call output, please let me know. As other hosted tools use this event, I believe using the same should be fine.
- Ensure item.model_dump only contains JSON serializable types (#801)
  westhood authored Jun 2, 2025 · 775d3e2
  The `EmbeddedResource` from an MCP tool call contains a field of type `AnyUrl` that is not JSON-serializable. To avoid this exception, use `item.model_dump(mode="json")` to ensure a JSON-serializable return value.
- Don't cache agent tools during a run (#803)
  rm-openai authored Jun 2, 2025 · d4c7a23
  Summary: towards #767. We were caching the list of tools for an agent, so if you did `agent.tools.append(...)` from a tool call, the next call to the model wouldn't include the new tool. This is a bug. Test plan: unit tests. Note that MCP tools are now listed each time the agent runs (users can still cache `list_tools` themselves).
- Only start tracing worker thread on first span/trace (#804)
  rm-openai authored Jun 2, 2025 · 995af4d
  Closes #796. Shouldn't start a busy-waiting thread if there aren't any traces. Test plan:

  ```python
  import threading
  assert threading.active_count() == 1
  import agents
  assert threading.active_count() == 1
  ```
Commits on Jun 3, 2025
- Add is_enabled to FunctionTool (#808)
  rm-openai authored Jun 3, 2025 · 4046fcb
  Summary: allows a user to do `function_tool(is_enabled=<some_callable>)`; the callable is called when the agent runs. This allows you to dynamically enable/disable a tool based on the context/environment. The meta-goal is to allow `Agent` to be effectively immutable. That enables some nice things down the line, and this allows you to dynamically modify the tools list without mutating the agent. Test plan: unit tests.
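The run-time gating idea can be sketched with simplified types. `FunctionTool` and `enabled_tools` below are illustrative stand-ins (the real SDK's signatures differ); the point is resolving `is_enabled` when the agent runs rather than mutating the tools list.

```python
from dataclasses import dataclass
from typing import Callable, Union


@dataclass
class FunctionTool:
    name: str
    # Either a constant or a callable evaluated per run with the context.
    is_enabled: Union[bool, Callable[[dict], bool]] = True


def enabled_tools(tools, context):
    # Resolve is_enabled at run time, so the agent itself stays immutable
    # while the effective tool list varies with the context.
    result = []
    for tool in tools:
        enabled = tool.is_enabled(context) if callable(tool.is_enabled) else tool.is_enabled
        if enabled:
            result.append(tool)
    return result


tools = [
    FunctionTool("search"),
    FunctionTool("admin_reset", is_enabled=lambda ctx: ctx.get("role") == "admin"),
]
names = [t.name for t in enabled_tools(tools, {"role": "user"})]
```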
Commits on Jun 4, 2025
- bump version
  rm-openai authored Jun 4, 2025 · 204bec1
- Add REPL run_demo_loop helper (#811)
  rm-openai authored Jun 4, 2025 · 4a529e6
- rm-openai authored Jun 4, 2025 · 05db7a6
- Add release documentation (#814)
  rm-openai authored Jun 4, 2025 · 5c7c678
  Summary: describe semantic versioning and release steps; add the release page to the documentation nav. Testing: `make format`, `make lint`, `make mypy`, `make tests`, `make build-docs`.
  https://chatgpt.com/codex/tasks/task_i_68409d25afdc83218ad362d10c8a80a1
Commits on Jun 9, 2025
- Fix handoff transfer message JSON (#818)
  jhills20 authored Jun 9, 2025 · c98e234
  Summary: ensure `Handoff.get_transfer_message` emits valid JSON; test transfer message validity. Testing: `make format`, `make lint`, `make mypy`, `make tests`.
  https://chatgpt.com/codex/tasks/task_i_68432f925b048324a16878d28e850841
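The underlying technique is simple: build the message with `json.dumps` instead of hand-formatting a string. The payload shape below (`{"assistant": name}`) is an assumption for illustration, not necessarily the SDK's exact message.

```python
import json


def get_transfer_message(agent_name: str) -> str:
    # json.dumps guarantees correct quoting and escaping, so the transfer
    # message parses as valid JSON even for awkward agent names.
    return json.dumps({"assistant": agent_name})


message = get_transfer_message('Agent "B"')
parsed = json.loads(message)  # round-trips even with embedded quotes
```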
- docs: custom output extraction (#817)
  jleguina authored Jun 9, 2025 · dcb88e6
  In deep agent workflows, each sub-agent automatically performs an LLM step to summarize its tool calls before returning to its parent. This leads to:
  1. Excessive latency: every nested agent invokes the LLM, compounding delays.
  2. Loss of raw tool data: summaries may strip out details the top-level agent needs.

  We discovered that `Agent.as_tool(...)` already accepts an (undocumented) `custom_output_extractor` parameter. By providing a callback, a parent agent can override what the sub-agent returns, e.g. hand back raw tool outputs or a custom slice, so that only the final agent does summarization. This PR adds a "Custom output extraction" section to the Markdown docs under "Agents as tools," with a minimal code example.
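The extractor-callback idea can be sketched as follows. `RunResult` here is a simplified stand-in (the real SDK passes its own result type, and the extractor may be async); the item dictionaries and `raw_tool_output_extractor` are hypothetical, showing only the pattern of bypassing the sub-agent's summary.

```python
class RunResult:
    """Simplified stand-in for a sub-agent's run result."""

    def __init__(self, new_items, final_output):
        self.new_items = new_items
        self.final_output = final_output


def raw_tool_output_extractor(result: RunResult) -> str:
    # Return the last raw tool output instead of the sub-agent's
    # LLM-generated summary, skipping the extra summarization step.
    for item in reversed(result.new_items):
        if item["type"] == "tool_call_output":
            return item["output"]
    return result.final_output


result = RunResult(
    new_items=[
        {"type": "message", "content": "Summarizing..."},
        {"type": "tool_call_output", "output": '{"price": 42.5}'},
    ],
    final_output="The price is about 42 dollars.",
)
extracted = raw_tool_output_extractor(result)
```

With a callback like this passed as `custom_output_extractor`, the parent agent receives the raw tool payload and only the top-level agent summarizes.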