
[RL] add pause and continue generation for async rl training #7419


Merged: 9 commits, Jul 5, 2025

Conversation

zhuzilin
Collaborator

@zhuzilin zhuzilin commented Jun 21, 2025

Motivation

When doing asynchronous RL training with an external agentic environment, the environment will keep sending generation requests to the sglang server, which hinders weight updates. Therefore, we need to lock generation during weight updates.

This PR is trying to implement the following asynchronous agent training with slime:

  1. We pass the address of the sglang router to the agent environment as an OpenAI-compatible endpoint, and the agent env keeps sending requests to the RL framework:
[image]
  2. When updating weights from the training side to sglang, we need to pause the sglang servers, as the agent env has no idea of the RL training process (a minimal usage sketch follows after this list).
[image]
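As a rough usage sketch (not part of this PR), the training side could wrap its weight update with the two new endpoints. The server address and the `sync_weights_to_sglang` callback below are placeholders for whatever the RL framework (e.g. slime) actually uses:

```python
import requests

SGLANG_URL = "http://localhost:30000"  # placeholder; point this at the sglang server or router


def update_weights_with_pause(sync_weights_to_sglang):
    """Pause generation, push new weights, then resume generation.

    `sync_weights_to_sglang` stands in for whatever weight-update call
    the training framework performs.
    """
    # Stop accepting new generation requests (and abort in-flight ones).
    requests.post(f"{SGLANG_URL}/pause_generation").raise_for_status()
    try:
        sync_weights_to_sglang()
    finally:
        # Let queued agent-env requests proceed against the updated weights.
        requests.post(f"{SGLANG_URL}/continue_generation").raise_for_status()
```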

Thank you for your time reviewing this PR :)

Modifications

Checklist


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @zhuzilin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical control mechanism to the SGLang server, enabling the pausing and resuming of generation requests. This functionality is specifically designed to support asynchronous Reinforcement Learning (RL) training by preventing ongoing generation from interfering with model weight updates, thereby ensuring stability and correctness during the training process.

Highlights

  • New API Endpoints: Two new HTTP POST endpoints, /pause_generation and /continue_generation, have been added to the SGLang server's HTTP API. These endpoints allow external systems to programmatically control the generation process.
  • Generation Control Logic: The TokenizerManager now includes internal state (_updating flag) and an asyncio.Condition (_cond) to manage the pause/continue functionality, ensuring proper synchronization.
  • Blocking Generation Requests: The generate_request method in TokenizerManager has been modified to asynchronously wait on the _cond if the _updating flag is set to True, effectively pausing new generation requests until the flag is reset.
  • Pause/Continue Implementation: New pause_generation and continue_generation methods have been added to TokenizerManager. pause_generation sets _updating to True and aborts current requests, while continue_generation sets _updating to False and notifies all waiting generation requests (see the sketch below).
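As a minimal, self-contained illustration of the synchronization these highlights describe, the sketch below models the pause/continue pattern with an asyncio.Condition. The `PausableGenerator` class and its methods are stand-ins, not the actual TokenizerManager code from this PR:

```python
import asyncio


class PausableGenerator:
    """Toy model of the pause/continue logic described above (not the real TokenizerManager)."""

    def __init__(self):
        self._updating = False
        self._cond = asyncio.Condition()

    async def generate_request(self, prompt: str) -> str:
        # New requests block here while a weight update is in progress.
        async with self._cond:
            await self._cond.wait_for(lambda: not self._updating)
        return f"generated text for: {prompt}"  # stand-in for real generation

    async def pause_generation(self) -> None:
        async with self._cond:
            self._updating = True
        # The PR additionally aborts in-flight requests at this point.

    async def continue_generation(self) -> None:
        async with self._cond:
            self._updating = False
            self._cond.notify_all()  # wake every request waiting in generate_request


async def main():
    gen = PausableGenerator()
    await gen.pause_generation()
    pending = asyncio.create_task(gen.generate_request("hello"))  # blocks while paused
    await asyncio.sleep(0.1)
    await gen.continue_generation()
    print(await pending)


asyncio.run(main())
```

Running this prints the generated text only after continue_generation wakes the waiting request, which mirrors how paused generation requests should resume once a weight update finishes.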


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces new endpoints to pause and continue generation for asynchronous RL training. It also adds corresponding methods to the TokenizerManager to control the generation process. The code changes look good overall, but there are a few suggestions for improvement.

zhyncs and others added 4 commits June 22, 2025 18:04
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@zhaochenyang20
Collaborator

Fix lint. Also, what do pause and continue do? What's their relation to abort?

@app.post("/pause_generation")
async def pause_generation(request: Request):
    """Pause generation."""
    return ORJSONResponse(
        content={"message": "Generation paused successfully.", "status": "ok"},
        status_code=200,
    )
Collaborator


Did you miss some code here? It seems like this API does nothing.

Collaborator


If you are relying on writing logs and letting another process read them, please write more detailed docs; otherwise it will be confusing to future developers.
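For reference, a minimal sketch of how the handler could delegate to the TokenizerManager method described in the PR summary, so the pause actually takes effect. The `app`, `tokenizer_manager`, and `_DummyTokenizerManager` below are illustrative placeholders and may not match the merged code exactly:

```python
from fastapi import FastAPI, Request
from fastapi.responses import ORJSONResponse

app = FastAPI()  # stands in for the existing SGLang FastAPI app


class _DummyTokenizerManager:
    """Placeholder exposing the method name the PR summary describes."""

    async def pause_generation(self) -> None:
        ...  # the real manager sets _updating and aborts in-flight requests


tokenizer_manager = _DummyTokenizerManager()  # the real server holds its own instance


@app.post("/pause_generation")
async def pause_generation(request: Request):
    """Pause generation: block new requests until /continue_generation is called."""
    # Delegate to the TokenizerManager so the pause actually takes effect.
    await tokenizer_manager.pause_generation()
    return ORJSONResponse(
        content={"message": "Generation paused successfully.", "status": "ok"},
        status_code=200,
    )
```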

@hebiao064
Collaborator

Fix lint. Also, what do pause and continue do? What's their relation to abort?

A brief overview of how slime, sglang, and the agent env work together, shared for reference and based on my understanding:
[screenshot: slime / sglang / agent env workflow]

Note this is based on my own guess; I would suggest the slime team provide a clear workflow diagram to demonstrate why we need pause/continue.

@zhuzilin
Collaborator Author

zhuzilin commented Jul 4, 2025

This PR is trying to implement the following asynchronous agent training with slime:

  1. We pass the address of the sglang router to the agent environment as an OpenAI-compatible endpoint, and the agent env keeps sending requests to the RL framework:
[image]
  2. When updating weights from the training side to sglang, we need to pause the sglang servers, as the agent env has no idea of the RL training process.
[image]

Collaborator

@hebiao064 hebiao064 left a comment


LGTM!

@zhyncs zhyncs merged commit af46f29 into sgl-project:main Jul 5, 2025
95 of 106 checks passed
ping1jing2 pushed a commit to ping1jing2/sglang that referenced this pull request Jul 5, 2025