
Conversation

@nwangfw (Collaborator) commented on Aug 4, 2025

Pull Request Description

Added a prefill-decode (PD) disaggregation deployment example and a matching curl example to the quickstart doc.

  • Used the official vllm:v0.9.1 image and the PyNcclConnector from the vLLM PD example (see the sketch below for the server invocation this implies).
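
A minimal sketch of the per-role vLLM server invocation this describes, assuming the OpenAI API server entrypoint and a default port; the flags mirror the pd-model.yaml excerpt quoted later in this thread, everything else is an assumption rather than the committed manifest:

```bash
# Sketch only: entrypoint and port are assumptions; the flags follow the
# pd-model.yaml excerpt quoted in the review thread below.
python3 -m vllm.entrypoints.openai.api_server \
  --port 8000 \
  --uvicorn-log-level warning \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --served-model-name deepseek-r1-distill-llama-8b \
  --kv-transfer-config '{"kv_connector":"PyNcclConnector","kv_role":"kv_both"}'
```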

Related Issues

Resolves: #1375

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@gemini-code-assist (Contributor, bot) left a comment:

Summary of Changes

Hello @nwangfw, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've enhanced the quickstart documentation by providing a comprehensive example for deploying and testing a Prefill-Decode (PD) disaggregation model. This includes a new YAML configuration file for the model's deployment and updated documentation with clear instructions and a curl example to demonstrate its usage.

Highlights

  • New PD Model Deployment Example: I've added a new YAML configuration file, pd-model.yaml, which defines a StormService for deploying a vLLM Prefill-Decode (PD) disaggregation model. This file specifies both 'prefill' and 'decode' roles, each configured to run a specific vLLM OpenAI API server instance for the deepseek-ai/DeepSeek-R1-Distill-Llama-8B model.
  • Documentation for PD Model Deployment: I've updated the quickstart.rst documentation to include a new section that guides users on how to deploy the Prefill-Decode (PD) disaggregation model using the newly added pd-model.yaml. This makes it easier for users to get started with this specific deployment strategy.
  • Curl Example for Testing PD Disaggregation: To complement the new deployment instructions, I've also added a curl command example to the quickstart.rst documentation. This example demonstrates how to interact with and test the deployed PD disaggregation model by including the necessary routing-strategy: pd header in the API request.

@gemini-code-assist (Contributor, bot) left a comment:

Code Review

This pull request adds a valuable example for deploying a Prefill-Decode (PD) disaggregation model to the quickstart documentation. The changes are generally clear and provide a concrete example for users. My review includes a few suggestions to enhance the documentation's clarity and the example YAML's maintainability. Specifically, I recommend a minor wording adjustment in the quickstart guide for clarity and a significant refactoring of the pd-model.yaml to eliminate code duplication using YAML anchors and aliases. I've also highlighted a potential issue with the container image being from a private registry, which could be a blocker for users.
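
To illustrate the anchors-and-aliases suggestion, here is a minimal sketch under assumed field names; the actual pd-model.yaml and the StormService schema are not shown in this thread, so treat the structure as illustrative only.

```yaml
# Sketch only (not the exact pd-model.yaml): a YAML anchor plus a merge key
# removes the duplicated container definition between the prefill and decode
# roles. Surrounding StormService fields are omitted; the image and flags are
# taken loosely from this thread and are assumptions.
containers:
  prefill: &vllm-container
    name: vllm-prefill
    image: vllm/vllm-openai:v0.9.1        # assumed official vLLM image tag
    command:
      - python3
      - -m
      - vllm.entrypoints.openai.api_server
    args:
      - --model=deepseek-ai/DeepSeek-R1-Distill-Llama-8B
      - --served-model-name=deepseek-r1-distill-llama-8b
      - --uvicorn-log-level=warning
  decode:
    <<: *vllm-container                   # reuse the prefill definition...
    name: vllm-decode                     # ...overriding only the fields that differ
```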

.. note::

To test PD disaggregation, add the ``routing-strategy`` header to ``pd``. For example:
Contributor (gemini-code-assist bot) commented (severity: medium):

The phrasing "add the routing-strategy header to pd" is slightly ambiguous. It could be misinterpreted by users. For improved clarity, I suggest rephrasing to make it explicit that pd is the value for the routing-strategy header.

    To test PD disaggregation, set the ``routing-strategy`` header to ``pd``. For example:
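
For reference, a request of that shape might look like the sketch below; the gateway address, port, and payload fields are assumptions based on the PR description, not the exact example added to quickstart.rst.

```bash
# Hypothetical request shape: only the routing-strategy: pd header comes from
# the documentation change itself; endpoint and payload are assumptions.
curl -s http://localhost:8888/v1/completions \
  -H "Content-Type: application/json" \
  -H "routing-strategy: pd" \
  -d '{
        "model": "deepseek-r1-distill-llama-8b",
        "prompt": "Explain prefill-decode disaggregation in one sentence.",
        "max_tokens": 64
      }'
```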

--uvicorn-log-level warning \
--model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
--served-model-name deepseek-r1-distill-llama-8b \
--kv-transfer-config '{"kv_connector":"PyNcclConnector","kv_role":"kv_both"}'
Collaborator commented:

why do you use PyNcclConnector?

Collaborator (author) commented:

I tried to follow this PD example from vLLM: https://github.com/vllm-project/vllm/blob/main/examples/online_serving/disaggregated_prefill.sh. Do you suggest that we use mooncake with RDMA instead?

Collaborator commented:
There's no mooncake in vLLM. Why don't you follow our own guidance? https://github.com/vllm-project/aibrix/tree/main/samples/disaggregation/vllm

Collaborator (author) commented:
I thought we wanted to use the official vLLM image in the Quickstart section so that people can try it very quickly. Sure, we can use our images and the NIXL connector then.

@Jeffwan (Collaborator) commented on Aug 4, 2025:
If you are not sure, please ask in advance. You can give a reference for the image build. I do not understand why that's a concern on your side.

Collaborator (author) commented:
@Jeffwan Updated. Feel free to check it one more time.

Collaborator commented:
great

Signed-off-by: Ning Wang <[email protected]>
@Jeffwan merged commit f19399a into vllm-project:main on Aug 4, 2025. 3 checks passed.