
Conversation

@DwyaneShi (Collaborator) commented Aug 8, 2025

Pull Request Description

Fix the vLLM NIXL-based P/D (prefill/decode) samples: add the `PYTHONHASHSEED` and `VLLM_NIXL_SIDE_CHANNEL_HOST` environment variables to the sample YAML files, and update `disagg_proxy_server.py` to propagate a per-request `X-Request-Id` and to pass NIXL `kv_transfer_params` from the prefill response to the decode request.

Related Issues

Resolves: part of #1407

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@DwyaneShi DwyaneShi requested review from Jeffwan and nwangfw August 8, 2025 22:41
@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @DwyaneShi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses fixes for vLLM NIXL-based P/D (Prefill/Decode) samples. The core changes involve updating several YAML configuration files to include necessary environment variables for vLLM's NIXL side channel and Python hashing, and modifying the disagg_proxy_server.py script to properly handle request IDs and NIXL-specific KV transfer parameters for distributed inference.
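For context, the environment-variable additions described above take roughly the following shape in a sample manifest. This is an illustrative fragment, not the PR's exact diff: the seed value and the Downward API `fieldRef` are assumptions; the actual files set their own values.

```yaml
# Illustrative fragment (not the exact diff): env entries added to each
# vLLM container in the P/D sample manifests.
env:
  - name: PYTHONHASHSEED
    value: "0"                        # assumed value: fixed seed so prefiller and decoder hash consistently
  - name: VLLM_NIXL_SIDE_CHANNEL_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.podIP       # assumption: pod IP used for the NIXL side channel
```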

Highlights

  • vLLM Configuration Updates: I've updated various vLLM sample YAML files (1p1d.yaml, pool.yaml, replica.yaml, pd-model.yaml, and a regression test file) to include PYTHONHASHSEED and VLLM_NIXL_SIDE_CHANNEL_HOST environment variables. This ensures consistent behavior and proper side channel communication for NIXL-based setups.
  • Enhanced Request Handling in Proxy Server: I've modified disagg_proxy_server.py to generate a unique X-Request-Id for each incoming request. This ID is now propagated through the send_request_to_service and stream_service_response functions, which is crucial for tracing and debugging requests in a distributed environment.
  • NIXL KV Transfer Parameter Management: The send_request_to_service function now explicitly sets kv_transfer_params for prefillers, disabling streaming and ensuring remote decode. It also extracts and attaches kv_transfer_params from the prefiller's response to the original request data for subsequent decode clients, facilitating the state transfer between prefill and decode stages.
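The request-ID and KV-transfer flow in the highlights above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `kv_transfer_params` and `remote_host` appear in the diff, but the helper names, the `do_remote_decode`/`do_remote_prefill` flags, and `max_tokens=1` are assumptions made for the sketch.

```python
import uuid


def build_prefill_request(req_data: dict) -> tuple[str, dict]:
    """Tag a request with a unique request ID and shape it for the prefill
    stage: streaming off, remote decode requested.
    (Sketch only; flag names are assumptions, not the PR's exact keys.)"""
    req_id = str(uuid.uuid4())
    prefill_data = dict(req_data)
    prefill_data["stream"] = False      # prefiller must not stream tokens
    prefill_data["max_tokens"] = 1      # assumption: prefill only builds KV state
    prefill_data["kv_transfer_params"] = {
        "do_remote_decode": True,       # assumed flag name
        "do_remote_prefill": False,
    }
    return req_id, prefill_data


def attach_kv_transfer_params(req_data: dict, prefill_response: dict,
                              remote_host: str) -> dict:
    """Copy NIXL kv_transfer_params from the prefiller's response onto the
    request that will go to the decode client, recording where the KV cache
    lives (mirrors the remote_host assignment in the proxy's diff)."""
    params = prefill_response.get("kv_transfer_params", {})
    if params:
        params["remote_host"] = remote_host
        req_data["kv_transfer_params"] = params
    return req_data
```

In this sketch the `req_id` would be sent as an `X-Request-Id` header on both the prefill and decode calls, which is what makes a single logical request traceable across the two stages.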

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request fixes NIXL-based prefill/decode samples for vLLM by updating YAML configurations and the disaggregation proxy server. The YAML changes add necessary environment variables for reproducibility and networking. The proxy server is updated to handle NIXL-specific kv_transfer_params, passing them from the prefill response to the subsequent decode request. My main feedback is to refactor the proxy server code to make the data flow more explicit by avoiding side effects, which will improve maintainability.

Comment on lines +128 to 135
        # extract nixl-specific kv_transfer_params returned from prefillers and
        # attach to the req_data for decode clients
        response_json = response.json()
        kv_transfer_params = response_json.get('kv_transfer_params', {})
        if kv_transfer_params:
            req_data["kv_transfer_params"] = kv_transfer_params
            req_data["kv_transfer_params"]["remote_host"] = client.base_url.host
        return response
@gemini-code-assist bot commented (severity: medium)

This function modifies the req_data dictionary in-place, which is a side effect that can make the code harder to reason about. A cleaner approach is to return the kv_transfer_params and let the caller update req_data. This makes the data flow explicit. Since the response object is not used by the callers, the function's return value can be changed to facilitate this refactoring.

    # extract nixl-specific kv_transfer_params returned from prefillers
    response_json = response.json()
    kv_transfer_params = response_json.get('kv_transfer_params', {})
    if kv_transfer_params:
        kv_transfer_params["remote_host"] = client.base_url.host
    return kv_transfer_params

        prefill_client, decode_client = select_random_clients()

-       await send_request_to_service(prefill_client, "/completions", req_data)
+       await send_request_to_service(prefill_client, "/completions", req_id, req_data)
@gemini-code-assist bot commented (severity: medium)

To accompany the suggested change in send_request_to_service that removes side effects, this call should be updated to receive the kv_transfer_params and update req_data explicitly.

        kv_transfer_params = await send_request_to_service(prefill_client, "/completions", req_id, req_data)
        if kv_transfer_params:
            req_data["kv_transfer_params"] = kv_transfer_params

        prefill_client, decode_client = select_random_clients()

-       await send_request_to_service(prefill_client, "/chat/completions", req_data)
+       await send_request_to_service(prefill_client, "/chat/completions", req_id, req_data)
@gemini-code-assist bot commented (severity: medium)

To accompany the suggested change in send_request_to_service that removes side effects, this call should be updated to receive the kv_transfer_params and update req_data explicitly.

        kv_transfer_params = await send_request_to_service(prefill_client, "/chat/completions", req_id, req_data)
        if kv_transfer_params:
            req_data["kv_transfer_params"] = kv_transfer_params

@Jeffwan (Collaborator) left a comment

Great!

@Jeffwan Jeffwan merged commit 081152c into vllm-project:main Aug 8, 2025
14 checks passed
Jeffwan pushed a commit to Jeffwan/aibrix that referenced this pull request Aug 18, 2025
Jeffwan added a commit that referenced this pull request Aug 18, 2025
…se-0.4 branch (#1468)

* Select PD workers in same roleset (#1409)

* Select PD workers in same roleset
* nit
* update ut
---------

Signed-off-by: Varun Gupta <[email protected]>

* [Bug] fix webhook config output when using make manifests (#1412)

fix webhook config output when using make manifests

Signed-off-by: googs1025 <[email protected]>

* [Fix] Fix vLLM NIXL-based P/D samples (#1425)

Signed-off-by: Haiyang Shi <[email protected]>
Co-authored-by: Haiyang Shi <[email protected]>

* [Fix] Disable GGA in NIXL samples (#1436)

[Fix] Fix NIXL samples

Explicitly set UCX_TLS to let UCX not use GGA (GPU Direct) transport

Signed-off-by: Haiyang Shi <[email protected]>
Co-authored-by: Haiyang Shi <[email protected]>

* Fix P/D disaggregation router to follow Nixl kv_transfer_params (#1429)

- Add kv_transfer_params configuration to prefill requests and decode requests

Signed-off-by: Jiaxin Shan <[email protected]>

* [Bug] Corrected naming convention for AIBRIX_MODEL_GPU_PROFILE_CACHING_FLAG (#1427)

Corrected naming convention for AIBRIX_MODEL_GPU_PROFILE_CACHING_FLAG

Signed-off-by: Jonathon Shea <[email protected]>

* [Bug] stormservice's headless service not set ownerRef (#1442)

* fix: stormservice's headless service not set ownerRef

Signed-off-by: dajun.cui <[email protected]>

* fix: patch ut test for service sync

Signed-off-by: dajun.cui <[email protected]>

---------

Signed-off-by: dajun.cui <[email protected]>

* [Bug] stormservice's headless service need set PublishNotReadyAddresses (#1441)

* fix: stormservice's headless service need set PublishNotReadyAddresses

Signed-off-by: dajun.cui <[email protected]>

* fix: isServiceEqual check PublishNotReadyAddresses

Signed-off-by: dajun.cui <[email protected]>

---------

Signed-off-by: dajun.cui <[email protected]>

---------

Signed-off-by: Varun Gupta <[email protected]>
Signed-off-by: googs1025 <[email protected]>
Signed-off-by: Haiyang Shi <[email protected]>
Signed-off-by: Jiaxin Shan <[email protected]>
Signed-off-by: Jonathon Shea <[email protected]>
Signed-off-by: dajun.cui <[email protected]>
Co-authored-by: Varun Gupta <[email protected]>
Co-authored-by: CYJiang <[email protected]>
Co-authored-by: Haiyang Shi <[email protected]>
Co-authored-by: Haiyang Shi <[email protected]>
Co-authored-by: Jonathon Shea <[email protected]>
Co-authored-by: cuidajun <[email protected]>