
Xinchi/fuser #75

Open · wants to merge 9 commits into main

Conversation

@de1star (Collaborator) commented Jun 23, 2025

Contributing Guidelines

We have prepared a pre-commit hook to enforce consistent code formatting across the project. If your code complies with the standards, you should not see any errors. You can clean up your code by following the steps below:

  1. Install the required dependencies:
    pip install ruff pre-commit
  2. Then, run the following command before committing:
    pre-commit run --all-files
  3. Finally, please double-check your code to ensure it complies with the following additional specifications as much as possible:
  • Avoid hard-coding local paths: Make sure your submissions do not include hard-coded local paths, as these paths are specific to individual development environments and can cause compatibility issues. Use relative paths or configuration files instead.
  • Clear error handling: Implement clear error-handling mechanisms in your code so that error messages can accurately indicate the location of the problem, possible causes, and suggested solutions, facilitating quick debugging (see the sketch after this list).
  • Detailed comments and documentation: Add comments to complex code sections and provide comprehensive documentation to explain the functionality of the code, input-output requirements, and potential error scenarios.
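
As an illustration of the error-handling guideline, here is a hypothetical example (not code from this PR) of an error message that names the location, the cause, and a suggested fix:

from pathlib import Path

def load_model_config(config_path: str) -> str:
    """Read a model config file, failing with an actionable message."""
    path = Path(config_path)
    if not path.is_file():
        # Say what is missing, where we looked, and how to fix it.
        raise FileNotFoundError(
            f"Config not found at '{path}'. Pass a valid config path on the "
            "command line or set it in your run script."
        )
    return path.read_text()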

Thank you for your contributions!

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @de1star, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant architectural change by implementing a new 'PipeFusion' pipeline parallelism strategy for the WAN model. This involves creating a series of dedicated wrappers to manage distributed weights, inference, and KV cache, allowing the model to scale more effectively across multiple devices. The changes also include necessary updates to configuration files and execution scripts to support and utilize this new parallelism.
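
As background on the wrapper approach described above, here is a minimal sketch of the delegation pattern such stage wrappers typically use. The class and method names are illustrative assumptions, not the PR's actual code:

class PipelineParallelModelWrapper:
    """Illustrative wrapper owning one pipeline stage's slice of a base model."""

    def __init__(self, base_model, rank: int, world_size: int):
        self.base_model = base_model
        self.rank = rank
        self.world_size = world_size
        # Keep only the transformer blocks assigned to this stage
        # (assumes the block count divides evenly by the stage count).
        per_stage = len(base_model.blocks) // world_size
        self.blocks = base_model.blocks[rank * per_stage : (rank + 1) * per_stage]

    def infer(self, x):
        # Run this stage's blocks; sending the result to the next stage
        # would happen outside this sketch.
        for block in self.blocks:
            x = block(x)
        return x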

Highlights

  • New Parallelism Strategy: PipeFusion: Introduced a new pipeline parallelism strategy named 'PipeFusion' for the WAN model's attention mechanism. This strategy aims to distribute computation across multiple devices more efficiently.
  • Pipeline Parallel Wrappers: Implemented a set of dedicated wrappers (PipelineParallelWanTransformerInferWrapper, PipelineParallelWanTransformerWeightsWrapper, PipelineParallelWanModelWrapper, PipelineParallelWanRunnerWrapper) to encapsulate and manage the distributed components of the WAN model, including weight splitting and inference logic across pipeline stages.
  • Distributed KV Cache Management: Added a PipelineParallelKVCacheManager to handle Key-Value (KV) cache operations in a distributed pipeline parallel setting, enabling efficient caching and retrieval of attention keys and values across different patches and blocks (a hedged sketch follows this list).
  • Patch-based Rotary Embeddings: Included a new utility function compute_freqs_by_patch to correctly calculate rotary embeddings for individual patches within the distributed inference process, ensuring proper positional encoding.
  • Configuration and Script Updates: Updated the wan_t2v_dist.json configuration to enable the 'pipefusion' attention type and modified various run scripts (run_wan_i2v.sh, run_wan_i2v_dist.sh, run_wan_t2v.sh, run_wan_t2v_dist.sh) to reflect the new parallelism, including path adjustments, CUDA device settings, and profiling commands.
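
To make the KV cache highlight concrete, here is a minimal sketch of a per-(block, patch) cache along the lines the summary and the quoted snippets below describe. All names and shapes are illustrative assumptions; this is not the PR's actual PipelineParallelKVCacheManager:

import torch

class PatchKVCache:
    """Illustrative per-(block, patch) KV cache; not the PR's actual class."""

    def __init__(self, num_blocks: int, patch_num: int):
        self.key_cache_list = [[None] * patch_num for _ in range(num_blocks)]
        self.value_cache_list = [[None] * patch_num for _ in range(num_blocks)]

    def update_kv_cache(self, key, value, patch_index: int, block_index: int):
        self.key_cache_list[block_index][patch_index] = key
        self.value_cache_list[block_index][patch_index] = value

    def full_kv(self, block_index: int):
        # Reassemble full-sequence K/V for one block; assumes dim 0 is the
        # sequence axis (the exact cat dim is flagged in a review note below).
        k = torch.cat(self.key_cache_list[block_index], dim=0)
        v = torch.cat(self.value_cache_list[block_index], dim=0)
        return k, v
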
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces pipeline parallelism for the WAN model, specifically for inference, by wrapping existing components like the transformer inferencer, weights, model, and runner. The changes include new Python files for these wrappers and updates to existing files to integrate the new pipefusion parallel attention type. The core logic for splitting weights and managing the KV cache across pipeline stages appears sound. However, there are several hardcoded paths in the shell scripts that violate the project's stated guidelines, as well as a reference to a missing configuration file. Additionally, some minor issues related to code clarity and unused variables were identified.

Comment on lines +4 to +5
lightx2v_path='/home/huangxinchi/workspace/temp/lightx2v'
model_path='/data/nvme0/yongyang/models/x2v_models/wan/Wan2.1-I2V-14B-480P'

high

The pull request description explicitly states: "Avoid hard-coding local paths: Make sure your submissions do not include hard-coded local paths, as these paths are specific to individual development environments and can cause compatibility issues. Use relative paths or configuration files instead." These paths should be made configurable (e.g., via environment variables or command-line arguments) rather than hardcoded.

Suggested change
lightx2v_path='/home/huangxinchi/workspace/temp/lightx2v'
model_path='/data/nvme0/yongyang/models/x2v_models/wan/Wan2.1-I2V-14B-480P'
lightx2v_path=${LIGHTX2V_PATH:-}
model_path=${MODEL_PATH:-}


# check section
if [ -z "${CUDA_VISIBLE_DEVICES}" ]; then
-    cuda_devices=1,2,3,4
+    cuda_devices=3,4,6,7

high

Hardcoding cuda_devices directly in the script violates the guideline of avoiding hard-coded local paths and configurations. This should be made configurable, for example, by checking an environment variable or providing a default that can be easily overridden.

Suggested change
cuda_devices=3,4,6,7
cuda_devices=${CUDA_VISIBLE_DEVICES:-3,4,6,7}

# self.key_cache_list[block_index][patch_index] = key
# self.value_cache_list[block_index][patch_index] = value
self.update_kv_cache(key, value, patch_index, block_index)
# TODO certainfy cat dim!!!!
high

This TODO comment indicates an unverified critical part of the code. The torch.cat operation's dim parameter is crucial for correctness. Please verify the dimension and remove the TODO comment once confirmed.
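
For context, a minimal sketch of how the concatenation dimension could be verified, assuming K/V tensors shaped (seq_len, num_heads, head_dim); the actual tensor layout in this PR is an assumption here:

import torch

# Hypothetical shapes; the real tensors in this PR may differ.
seq_per_patch, num_heads, head_dim = 128, 12, 64
patches = [torch.randn(seq_per_patch, num_heads, head_dim) for _ in range(4)]

# If patches partition the sequence, the cache must be reassembled along
# the sequence axis, which is dim=0 for this layout.
full_key = torch.cat(patches, dim=0)
assert full_key.shape == (4 * seq_per_patch, num_heads, head_dim)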

Comment on lines +4 to +5
lightx2v_path='/home/huangxinchi/workspace/temp/lightx2v'
model_path='/data/nvme0/yongyang/models/x2v_models/wan/Wan2.1-T2V-1.3B'
high

The pull request description explicitly states: "Avoid hard-coding local paths: Make sure your submissions do not include hard-coded local paths, as these paths are specific to individual development environments and can cause compatibility issues. Use relative paths or configuration files instead." These paths should be made configurable (e.g., via environment variables or command-line arguments) rather than hardcoded.

Suggested change
lightx2v_path='/home/huangxinchi/workspace/temp/lightx2v'
model_path='/data/nvme0/yongyang/models/x2v_models/wan/Wan2.1-T2V-1.3B'
lightx2v_path=${LIGHTX2V_PATH:-}
model_path=${MODEL_PATH:-}

).reshape(seq_len, 1, -1)

freqs_i = pad_freqs(freqs_i, s * patch_num)
s_per_rank = s
medium

The variable name s_per_rank is misleading in compute_freqs_by_patch. This function operates on patches, not ranks, and s represents the sequence length per patch. A more appropriate name would be s_per_patch or simply seq_len_per_patch to avoid confusion with distributed ranks.

Suggested change
s_per_rank = s
s_per_patch = s
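
Illustrative only: a minimal sketch of the per-patch slicing idea described above, assuming precomputed full-sequence rotary freqs shaped (total_seq_len, 1, dim) and equal-sized patches. This simplifies away pad_freqs and is not the PR's actual compute_freqs_by_patch:

import torch

def freqs_for_patch(freqs: torch.Tensor, patch_index: int, s_per_patch: int) -> torch.Tensor:
    """Return the rotary frequencies belonging to one patch."""
    start = patch_index * s_per_patch
    return freqs[start : start + s_per_patch]

# Hypothetical usage: 4 patches of 128 tokens each, rotary dim 64.
full_freqs = torch.randn(4 * 128, 1, 64)
patch_freqs = freqs_for_patch(full_freqs, patch_index=2, s_per_patch=128)
assert patch_freqs.shape == (128, 1, 64)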

# if self.rank == 0:
# import pdb; pdb.set_trace()
for patch_index, x in enumerate(xs):
self.patch_index = patch_index
medium

Setting self.patch_index as a class attribute within a loop can be problematic if the infer method were to be called concurrently or if patch_index was expected to be stable across the entire infer call. While this might not be an issue in the current distributed setup where each rank processes its own infer call sequentially, it's generally safer to pass such context-specific variables as arguments to the methods that need them (e.g., _infer_self_attn_cached).
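
A minimal sketch of the refactor suggested above, with hypothetical class and method signatures (the real signature of _infer_self_attn_cached is not shown in this excerpt):

class TransformerInfer:
    def infer(self, xs):
        for patch_index, x in enumerate(xs):
            # Pass the patch index explicitly instead of storing it on self,
            # so concurrent or re-entrant infer() calls cannot clobber it.
            self._infer_self_attn_cached(x, patch_index)

    def _infer_self_attn_cached(self, x, patch_index):
        ...  # use patch_index locally when reading/writing the KV cache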

Comment on lines +54 to +55
# for rank_step in range(self.world_size):
# if rank_step == 0:
medium

Please remove commented-out code. If this code is no longer needed, it should be deleted. If it's temporarily disabled for debugging or future reference, consider using version control history or a more descriptive comment.

self.runner = runner
self.config = config

self.wramup_steps = self.config.get("wramup_steps", 40)
medium

There's a typo in the variable name wramup_steps. It should be warmup_steps for clarity and consistency.

Suggested change
self.wramup_steps = self.config.get("wramup_steps", 40)
self.warmup_steps = self.config.get("warmup_steps", 40)

self.patch_num = patch_num
self.key_cache_list = [[None for i in range(self.patch_num)] for j in range(self.kv_cache_len)]
self.value_cache_list = [[None for i in range(self.patch_num)] for j in range(self.kv_cache_len)]
self.cur_index = 0
medium

The self.cur_index attribute is initialized but never used in the current implementation. Please remove unused variables to keep the codebase clean and reduce potential confusion.
