add node_rank label #85
base: epic/furion-20250528
Conversation
Summary of Changes
Hello @Freddy9527, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request enhances the metrics collection system by incorporating the node rank. This change is crucial for accurately monitoring and debugging performance in distributed environments where the system spans multiple nodes. The scheduler now calculates the node rank and passes it to the metrics collector, which uses it to label and organize collected statistics.
Highlights
- Metrics Labeling: Added the `node_rank` label to the metrics collected by the scheduler. This allows for better monitoring and analysis of performance and resource usage on a per-node basis in distributed setups.
- Metrics Collection Structure: Modified the internal storage of labeled gauges in the `SchedulerMetricsCollector` to be keyed by a tuple `(node_rank, dp_rank)` instead of just `dp_rank`. This ensures metrics are correctly associated with the specific node and data parallel rank (a minimal sketch follows below).
- Stats Aggregation: Included the `node_rank` in the tensor used to aggregate scheduler statistics across processes, increasing the tensor size from 15 to 16 elements. This allows the receiving process (rank 0) to identify the node rank associated with each set of statistics.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder at the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces a `node_rank` label to the metrics system, primarily affecting how `SchedulerMetricsCollector` is initialized and how it labels metrics in multi-node environments. Key changes include calculating `node_rank` in the scheduler and passing it to the collector, and updating the collector to use this new rank information.
My review has identified a few points:
- A hardcoded value for `gpus_per_node` in `scheduler.py` could limit flexibility.
- There's a potentially critical issue in `metrics/collector.py` where the `gen_throughput` Gauge was removed but is still referenced, which will likely cause a runtime error.
- An outdated error message in `metrics/collector.py` needs updating.
- A minor redundancy in label handling was also noted.
Addressing these points will help ensure the robustness and clarity of the new metrics labeling.
```diff
@@ -553,12 +553,19 @@ def init_metrics(self):
         self.stats = SchedulerStats()
         if self.enable_metrics:
             engine_type = "unified"
+            gpus_per_node = 8
```
The number of GPUs per node (`gpus_per_node`) is hardcoded to 8. This value might not be suitable for all deployment environments, as the number of GPUs per node can vary significantly across different hardware setups.
To enhance flexibility and adaptability, it's recommended to make `gpus_per_node` configurable. This could be achieved by:
- Adding it as a server argument.
- Reading it from an environment variable.
This change would allow the metrics collection to accurately reflect `node_rank` in diverse infrastructures.
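A hedged sketch of this suggestion follows; the flag name `--gpus-per-node`, the `GPUS_PER_NODE` environment variable, and the rank arithmetic are assumptions for illustration, not the project's actual server-argument plumbing.

```python
# Hedged sketch: argument and variable names are assumptions and may differ
# from the project's actual ServerArgs / configuration handling.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument(
    "--gpus-per-node",
    type=int,
    # Fall back to an environment variable, then to the previous hardcoded 8.
    default=int(os.environ.get("GPUS_PER_NODE", "8")),
    help="Number of GPUs per node, used to derive node_rank for metrics labels.",
)
args = parser.parse_args([])  # empty list here just for illustration

global_rank = 12  # example value; normally obtained from the distributed runtime
node_rank = global_rank // args.gpus_per_node
print(node_rank)  # -> 1 with the default of 8
```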
if "node_rank" not in labels: | ||
labels["node_rank"] = str(node_rank) |
The conditional block:

```python
if "node_rank" not in labels:
    labels["node_rank"] = str(node_rank)
```

appears to be effectively dead code. The `labels` dictionary, when passed to the `SchedulerMetricsCollector` constructor from `scheduler.py` (line 568), already includes the `node_rank` key with its string value: `"node_rank": str(node_rank)`.
Consequently, the condition `"node_rank" not in labels` on line 278 will always evaluate to false, meaning line 279 will never be executed. This block can likely be removed for improved code clarity and conciseness.
```diff
 else:
     local_info = self._stats_to_tensor(stats)
-    if local_info.size(0) != 15:
+    if local_info.size(0) != 16:
         raise ValueError(f"local_info.size(0) != 15: {local_info.size(0)}")
```
The error message string in the `ValueError` still refers to an expected size of 15 for `local_info`: `f"local_info.size(0) != 15: {local_info.size(0)}"`. However, the actual check performed on line 343 is `if local_info.size(0) != 16:`. To avoid confusion and ensure the error message accurately reflects the condition being checked, it should be updated to refer to the new expected size of 16.
raise ValueError(f"local_info.size(0) != 15: {local_info.size(0)}") | |
raise ValueError(f"local_info.size(0) != 16: {local_info.size(0)}") |
Motivation
Modifications
Checklist