Support tuning DeepEP configs #6742
Conversation
Hello @fzyzcjy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini here, providing a summary of this pull request. This PR introduces a new benchmarking script aimed at tuning the performance configurations for the DeepEP library's `Buffer` class, specifically focusing on the `dispatch` and `combine` operations. The script generates synthetic data, performs correctness checks for the data movement operations, and then iteratively tests different chunk sizes for NVLink and RDMA transfers to identify optimal settings for performance. The best-performing configurations are saved to a JSON file.
Highlights
- New Benchmarking Script: Adds a new Python script (`tuning_deepep.py`) dedicated to benchmarking and tuning DeepEP's distributed data movement kernels.
- DeepEP Configuration Tuning: Implements logic to test various `deep_ep.Config` parameters, particularly `num_max_nvl_chunked_send_tokens` and `num_max_rdma_chunked_send_tokens`, to find optimal values for performance.
- Correctness Checks: Includes assertions to verify the correctness of the data after `dispatch` and `combine` operations, ensuring the tuning process is based on functional kernels.
- Utility Functions: Introduces a utility file (`deepep_utils.py`) containing helper functions for distributed setup, tensor manipulation (including FP8 casting), and general benchmarking, copied from the DeepEP project's test suite.
- Configuration Output: Saves the identified best configurations for `dispatch` and `combine` operations to a JSON file for later use.
Changelog
- `benchmark/kernels/deepep/deepep_utils.py`
  - New file containing utility functions for distributed initialization, tensor comparison, FP8 casting, unique element handling, score grouping, and benchmarking/profiling.
  - Includes context managers for suppressing stdout/stderr during profiling.
- `benchmark/kernels/deepep/tuning_deepep.py`
  - New file implementing the main tuning script.
  - Sets up the distributed environment and DeepEP buffer.
  - Generates synthetic data simulating MoE token dispatch/combine scenarios.
  - Performs correctness tests for dispatch and combine under various settings (FP8/BF16, with/without top-k, async/sync).
  - Includes tuning loops that iterate over different NVLink and RDMA chunk sizes for dispatch and combine.
  - Measures and reports performance (GB/s) for each configuration.
  - Saves the best-performing configurations to a JSON file.

A rough sketch of the chunk-size sweep performed by the tuning script is shown below.
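The following is a minimal illustrative sketch, not code from this PR: `run_once` stands in for a call that builds a `deep_ep.Config` with the given chunk sizes and executes one dispatch (or combine) call, and wall-clock timing is used here for simplicity where the real script relies on CUDA-aware benchmarking helpers.

```python
import time
from itertools import product

def sweep_chunk_sizes(run_once, nvl_candidates, rdma_candidates, nbytes, iters=10):
    """Return (best_gbps, best_nvl_chunk, best_rdma_chunk).

    run_once(nvl_chunk, rdma_chunk) is expected to build a config with the given
    chunk sizes and execute one dispatch/combine call moving `nbytes` bytes.
    """
    best = (0.0, None, None)
    for nvl_chunk, rdma_chunk in product(nvl_candidates, rdma_candidates):
        run_once(nvl_chunk, rdma_chunk)  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            run_once(nvl_chunk, rdma_chunk)
        avg_s = (time.perf_counter() - start) / iters
        gbps = nbytes / 1e9 / avg_s
        if gbps > best[0]:
            best = (gbps, nvl_chunk, rdma_chunk)
    return best
```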
Tuning kernels tight,
Find the fastest chunk size,
Data flies just right.
Code Review
This pull request introduces two new Python scripts, `deepep_utils.py` (copied from an external source) and `tuning_deepep.py`, aimed at supporting the tuning of DeepEP configurations. The initiative to add tuning capabilities is valuable.
The `deepep_utils.py` file, being a copy, brings in utility functions for distributed setup, benchmarking, and tensor manipulation. The `tuning_deepep.py` script uses these utilities to perform comprehensive tests and tune performance parameters for the dispatch and combine operations in DeepEP under various configurations (FP8/BF16, top-k, async modes).
While the core benchmarking and tuning logic appears functional, there are several areas for improvement regarding robustness, clarity, and configurability, as detailed in the specific comments. Additionally, the pull request description is currently empty and the checklist is not filled out; both should be addressed to provide context and track progress.
Given the medium-severity issues identified, I recommend addressing them to enhance the quality and usability of these scripts before merging.
Summary of Findings
- Missing Type Hints: The `args` parameter in `deepep_utils.py:init_dist` and `tuning_deepep.py:test_main` lacks a type hint (e.g., `argparse.Namespace`). Adding one would improve code clarity and help with static analysis. (Severity: low, not commented due to settings)
- Magic Numbers: Several magic numbers are present (e.g., `1e-4` and `448.0` in `per_token_cast_to_fp8`, the L2 cache size in `bench`, buffer sizes in `tuning_deepep.py`). Defining these as named constants would enhance readability and maintainability. (Severity: low, not commented due to settings)
- Implicit FP8 Type Check: In `tuning_deepep.py`, `isinstance(current_x, tuple)` is used to determine whether `current_x` is FP8. This is rather implicit; a more explicit mechanism (e.g., a dedicated flag or type) would improve readability. (Severity: low, not commented due to settings)
- Unconventional No-op: The use of `else ()` as a no-op in conditional expressions (e.g., `event.current_stream_wait() if async_mode else ()`) is slightly unconventional. A plain `if async_mode: event.current_stream_wait()` is more common; see the sketch after this list. (Severity: low, not commented due to settings)
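The last two findings could be addressed roughly as follows. This is an illustrative sketch with made-up names (`DispatchInput`, `wait_if_async`), not code from this PR:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class DispatchInput:
    # In the real script this is either a torch.Tensor (BF16) or an (fp8_data, scales) tuple.
    tensor: Any
    is_fp8: bool = False  # explicit flag instead of inferring FP8 from isinstance(..., tuple)

def wait_if_async(event, async_mode: bool) -> None:
    # Clearer than: event.current_stream_wait() if async_mode else ()
    if async_mode:
        event.current_stream_wait()
```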
Merge Readiness
This pull request adds valuable tuning scripts for DeepEP. However, there are several medium-severity issues related to robustness, clarity, and configurability that should be addressed. Specifically, improvements to the parsing logic in `deepep_utils.py` and to the handling of configurations, loop structures, and hardcoded values in `tuning_deepep.py` would enhance the quality of these scripts.
Additionally, the PR description should be filled out and the checklist completed to provide better context for reviewers.
I recommend that these changes be made before merging. As an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members.
```python
class suppress_stdout_stderr:
    def __enter__(self):
        self.outnull_file = open(os.devnull, "w")
        self.errnull_file = open(os.devnull, "w")

        self.old_stdout_fileno_undup = sys.stdout.fileno()
        self.old_stderr_fileno_undup = sys.stderr.fileno()

        self.old_stdout_fileno = os.dup(sys.stdout.fileno())
        self.old_stderr_fileno = os.dup(sys.stderr.fileno())

        self.old_stdout = sys.stdout
        self.old_stderr = sys.stderr

        os.dup2(self.outnull_file.fileno(), self.old_stdout_fileno_undup)
        os.dup2(self.errnull_file.fileno(), self.old_stderr_fileno_undup)

        sys.stdout = self.outnull_file
        sys.stderr = self.errnull_file
        return self

    def __exit__(self, *_):
        sys.stdout = self.old_stdout
        sys.stderr = self.old_stderr

        os.dup2(self.old_stdout_fileno, self.old_stdout_fileno_undup)
        os.dup2(self.old_stderr_fileno, self.old_stderr_fileno_undup)

        os.close(self.old_stdout_fileno)
        os.close(self.old_stderr_fileno)

        self.outnull_file.close()
        self.errnull_file.close()
```
The `suppress_stdout_stderr` class uses low-level file descriptor manipulation (`os.dup`, `os.dup2`) to suppress output. While this is effective for C-level library output, have you considered whether `contextlib.redirect_stdout` and `contextlib.redirect_stderr` (available in Python 3.4+) could offer a simpler, standard-library solution if only Python-level output needs suppression?
If C-level output suppression is a firm requirement (e.g., from underlying CUDA libraries or C extensions), the current approach is understandable. However, if not, using `contextlib` could improve readability and reduce complexity. What are your thoughts on this trade-off?
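For reference, the `contextlib`-based alternative looks like this; note that it only redirects Python-level writes, so output emitted by C/CUDA code would still reach the terminal:

```python
import contextlib
import os

# Redirect Python-level stdout/stderr to /dev/null; C-level output is unaffected.
with open(os.devnull, "w") as devnull, \
        contextlib.redirect_stdout(devnull), \
        contextlib.redirect_stderr(devnull):
    print("this line is suppressed")
```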
```python
assert (
    sum([name in line for line in prof_lines]) == 1
), f"Errors of the kernel {name} in the profiling table"
```
The assertion `sum([name in line for line in prof_lines]) == 1` checks that each kernel name appears exactly once in the profiler output. This could be fragile if a kernel name is a substring of another kernel name or appears in descriptive text within the profiler output lines.
Could this lead to false positives or negatives? Perhaps a more robust check, such as ensuring the name matches a whole word or a more specific pattern in the line, would be safer?
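As an illustration (not code from the PR, with example values for `name` and `prof_lines`), a whole-word match avoids counting kernels whose names merely contain `name` as a substring:

```python
import re

name = "dispatch_kernel"  # example kernel name
prof_lines = [
    "dispatch_kernel       1.20ms",  # should count
    "dispatch_kernel_fp8   0.90ms",  # should not count
]

# Match `name` only when it is not embedded in a longer identifier.
pattern = re.compile(rf"(?<!\w){re.escape(name)}(?!\w)")
matches = sum(bool(pattern.search(line)) for line in prof_lines)
assert matches == 1, f"Kernel {name} appears {matches} times in the profiling table"
```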
```python
time_str = line.split()[-2]
for unit, scale in units.items():
    if unit in time_str:
        kernel_times.append(float(time_str.replace(unit, "")) / scale)
        break
```
Parsing `time_str` with `line.split()[-2]` assumes a fixed format for the profiler output table. If the Kineto profiler's table format changes in a future PyTorch version (e.g., more columns are added or the spacing differs), this parsing logic might break.
Would it be more resilient to parse based on column headers, or to use regular expressions if the format is somewhat stable but allows for minor variations?
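A regex-based variant could look roughly like the following sketch; the unit table and the value/scale convention are assumptions for illustration, not the exact values used in the PR:

```python
import re

# Assumed convention: dividing the value by the scale yields seconds.
UNITS = {"us": 1e6, "ms": 1e3, "s": 1.0}
TIME_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(us|ms|s)\b")

def parse_kernel_time(line: str):
    """Return the last duration found on a profiler row, in seconds, or None."""
    matches = TIME_RE.findall(line)
    if not matches:
        return None
    value, unit = matches[-1]
    return float(value) / UNITS[unit]
```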
```python
output_data = {}

# Tune dispatch performance
best_dispatch_results = None
```
The variable `best_dispatch_results` is initialized to `None` here and is later used to construct `dispatch_config` (around line 357). While the current loop structure (line 296, processing `x_e4m3`, which is a tuple) ensures `best_dispatch_results` is updated (around lines 344-356), this reliance on loop order and specific data types for initialization is a bit fragile.
If, for instance, the `isinstance(current_x, tuple)` check (line 342) did not behave as expected or the loop order changed, `best_dispatch_results` might remain `None` or not be the expected list of three integers, potentially leading to errors when `best_dispatch_results[0]`, `[1]`, and `[2]` are accessed.
Consider initializing `best_dispatch_results` to a default valid structure (e.g., a list of default SMs/chunk sizes) or adding an explicit check before its use to ensure it has been properly populated. This would make the logic more robust to future changes. What are your thoughts on this?
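A defensive variant could look roughly like this; the default values and the three-element layout are made up for illustration rather than taken from the PR:

```python
# Hypothetical defaults: [num_sms, nvl_chunk_size, rdma_chunk_size].
DEFAULT_DISPATCH_RESULTS = [24, 16, 16]

best_dispatch_results = None
# ... tuning loop that may or may not update best_dispatch_results ...

if best_dispatch_results is None:
    # Fall back to known-good defaults instead of failing on best_dispatch_results[0].
    best_dispatch_results = DEFAULT_DISPATCH_RESULTS
assert len(best_dispatch_results) == 3, "expected [num_sms, nvl_chunk, rdma_chunk]"
```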
```python
assert num_local_ranks == 8 and num_ranks > 8
torch.manual_seed(rank)

for i in (num_sms,):
```
The loop `for i in (num_sms,):` iterates only once, with `i` taking the value of `num_sms`.
If the intention is to run `test_main` for a single `num_sms` value, the loop is redundant and could be simplified to a direct call. If the plan is to iterate over multiple `num_sms` values, the tuple should contain those values (e.g., `for i in (num_sms_val1, num_sms_val2):`).
Could you clarify the intent here? Simplifying this would improve code readability.
```python
# If only one num_sms value is intended:
# test_main(
#     num_sms, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
# )
# if local_rank == 0:
#     print("", flush=True)

# Or, if multiple values were intended, define them in the iterable:
# for sms_value in [24, 48]:  # Example values
#     test_main(
#         sms_value, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
#     )
#     if local_rank == 0:
#         print("", flush=True)

# Current code:
for i in (num_sms,):
    test_main(
        i, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
    )
    if local_rank == 0:
        print("", flush=True)
```
```python
args = parser.parse_args()
print(f"Start system with {args=}")

num_processes = 8
```
The `num_processes` variable is hardcoded to `8`, which implies the script is primarily designed for execution on a single node with 8 GPUs.
For broader usability and testing on different hardware configurations, would it be beneficial to make `num_processes` configurable, perhaps via a command-line argument similar to the existing `--nnodes` or `--num-sms` parameters? This would make the tuning script more flexible; a possible argparse wiring is sketched after the suggestion below.
```diff
-num_processes = 8
+num_processes = args.num_gpus_per_node  # Example: if added to argparse
+# Or, if keeping it simple for now, ensure this assumption is documented.
+# num_processes = 8  # Assumes 8 GPUs per node
```
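For completeness, wiring that up could look roughly like the following; the `--num-gpus-per-node` flag name is an assumption, not something defined in this PR:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag; the PR currently hardcodes num_processes = 8.
parser.add_argument(
    "--num-gpus-per-node",
    type=int,
    default=8,
    help="Number of processes (GPUs) to spawn per node",
)
args = parser.parse_args()

num_processes = args.num_gpus_per_node
```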
Motivation
Copied and modified from the DeepEP unit tests.
Modifications
Checklist