Conversation

@Jeffwan (Collaborator) commented Aug 20, 2025

Pull Request Description

  • Extract lightweight EngineMetricsFetcher with endpoint-based parameters
  • Clean up MetricFetcher interface to remove mixed abstraction levels
  • Separate autoscaler business logic from HTTP utility layer

This PR refactors the fetching abstraction:

  MetricClient (autoscaler business logic)
      ↓ uses
  MetricFetcher (autoscaler's metrics interface)
      ↓ delegates to
  EngineMetricsFetcher (pure HTTP utility)

EngineMetricsFetcher is also shared by the router, which fetches engine metrics, so both components now use the same library.
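
For illustration, here is a minimal Go sketch of this layering; the type and method names below are assumptions for readability, not the actual aibrix signatures:

    // Sketch only: interfaces and signatures are illustrative, not the
    // real aibrix definitions.
    package metrics

    import (
        "context"
        "fmt"
    )

    // EngineMetricsFetcher is a pure HTTP utility keyed by endpoint; it knows
    // nothing about autoscaler concepts.
    type EngineMetricsFetcher interface {
        FetchTypedMetric(ctx context.Context, endpoint, engineType, metricName string) (float64, error)
    }

    // MetricFetcher is the autoscaler-facing interface; it delegates the raw
    // HTTP work to EngineMetricsFetcher.
    type MetricFetcher interface {
        FetchPodMetric(ctx context.Context, podIP string, port int, engineType, metricName string) (float64, error)
    }

    type restMetricsFetcher struct {
        engineFetcher EngineMetricsFetcher
    }

    func (r *restMetricsFetcher) FetchPodMetric(ctx context.Context, podIP string, port int, engineType, metricName string) (float64, error) {
        // Build the "host:port" endpoint and hand off to the shared
        // engine-level fetcher.
        endpoint := fmt.Sprintf("%s:%d", podIP, port)
        return r.engineFetcher.FetchTypedMetric(ctx, endpoint, engineType, metricName)
    }

The point of the split is that the engine-level fetcher only deals in endpoints and metric names, so the router can reuse it without pulling in autoscaler concepts.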

Related Issues

Resolves: part of #1422


@gemini-code-assist bot left a comment

Summary of Changes

Hello @Jeffwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the metrics fetching and processing logic within the autoscaler. It introduces a new, centralized EngineMetricsFetcher to streamline the retrieval of typed metrics directly from inference engine pods. This change aims to improve the modularity and maintainability of the codebase by separating the autoscaler's business logic from the low-level HTTP utility layer, leading to more robust and reliable metric collection.

Highlights

  • Centralized Engine Metrics Fetching: A new EngineMetricsFetcher is implemented in pkg/metrics/engine_fetcher.go to provide a unified and typed way to fetch metrics from inference engine pods, replacing ad-hoc HTTP calls.
  • Improved Abstraction and Decoupling: The MetricFetcher interface and its implementations (especially RestMetricsFetcher) are refactored to delegate metric retrieval to the new EngineMetricsFetcher, cleaning up the autoscaler's core logic.
  • Enhanced Metric Processing in Cache: The pkg/cache/cache_metrics.go now leverages the EngineMetricsFetcher for updating pod metrics, deprecating older, less structured methods and enabling more comprehensive metric handling.
  • Robust Metric Collection and Validation: New validation checks for metric values (e.g., NaN, infinite, out-of-range) are added in pkg/controller/podautoscaler/metrics/utils.go, along with improved error reporting for partial metric collection failures, increasing the system's resilience.
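
As a rough illustration of the validation described in the last highlight (the function name, signature, and bounds here are assumptions, not the actual code in pkg/controller/podautoscaler/metrics/utils.go):

    package metrics

    import (
        "fmt"
        "math"
    )

    // validateMetricValue rejects values that would poison an autoscaling
    // decision: NaN, +/-Inf, or values outside a plausible range.
    func validateMetricValue(name string, value, min, max float64) error {
        if math.IsNaN(value) {
            return fmt.Errorf("metric %q is NaN", name)
        }
        if math.IsInf(value, 0) {
            return fmt.Errorf("metric %q is infinite", name)
        }
        if value < min || value > max {
            return fmt.Errorf("metric %q value %v is outside expected range [%v, %v]", name, value, min, max)
        }
        return nil
    }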

@gemini-code-assist bot left a comment

Code Review

This pull request refactors the metrics fetching mechanism for the autoscaler, introducing a centralized EngineMetricsFetcher. The changes are a significant improvement, separating concerns and creating a more robust foundation. My review focuses on further refining the implementation by leveraging standard library functions, improving correctness in metric fetching logic, and reducing code duplication. Key suggestions include using net.SplitHostPort for endpoint parsing, resolving an issue with hardcoded engine types, and centralizing helper functions.
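
For reference, the standard-library parsing the review suggests looks like this (the endpoint value is made up for illustration):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // net.SplitHostPort handles both "host:port" and "[ipv6]:port" forms,
        // which ad-hoc string splitting tends to get wrong.
        host, port, err := net.SplitHostPort("10.0.0.12:8000")
        if err != nil {
            fmt.Println("invalid endpoint:", err)
            return
        }
        fmt.Println(host, port) // 10.0.0.12 8000
    }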

@Jeffwan force-pushed the jiaxin/autoscaler-improvement branch from 58b2c4e to e758178 on August 20, 2025 08:52
@Jeffwan marked this pull request as draft on August 20, 2025 08:54
@Jeffwan force-pushed the jiaxin/autoscaler-improvement branch from e758178 to d331693 on August 20, 2025 11:36
@Jeffwan marked this pull request as ready for review on August 20, 2025 11:54
@Jeffwan force-pushed the jiaxin/autoscaler-improvement branch from d331693 to 8269389 on August 21, 2025 08:02
@Jeffwan (Collaborator, Author) commented Aug 21, 2025

/cc @googs1025 please help review this change. I plan to do more refactoring on the autoscaler to unblock #1260.

- Extract lightweight EngineMetricsFetcher with endpoint-based parameters
- Clean up MetricFetcher interface to remove mixed abstraction levels
- Separate autoscaler business logic from HTTP utility layer

Signed-off-by: Jiaxin Shan <[email protected]>

test: add unit tests for EngineMetricsFetcher

Signed-off-by: Jiaxin Shan <[email protected]>

- Centralize getEngineTypeFromPod function in pkg/metrics/utils.go to eliminate code duplication
- Replace custom endpoint parsing with Go standard library net.SplitHostPort()
- Fix fake pod creation architecture flaw by using real pod metadata in FetchPodMetrics
- Fix inconsistent error handling by returning nil instead of empty maps on failures
- Migrate APA and KPA scalers from deprecated UpdatePodListMetric to UpdateMetrics API

Signed-off-by: Jiaxin Shan <[email protected]>
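
A hedged sketch of what a centralized helper such as getEngineTypeFromPod could look like; the label key and the "vllm" fallback below are illustrative assumptions, not the actual constants in pkg/metrics/utils.go:

    package metrics

    import corev1 "k8s.io/api/core/v1"

    // engineLabelKey is a hypothetical label key used for illustration; the
    // real helper may read a different label or annotation.
    const engineLabelKey = "model.aibrix.ai/engine"

    // getEngineTypeFromPod returns the inference engine type declared on the
    // pod, falling back to "vllm" when the label is absent.
    func getEngineTypeFromPod(pod *corev1.Pod) string {
        if pod == nil {
            return "vllm"
        }
        if engine, ok := pod.Labels[engineLabelKey]; ok && engine != "" {
            return engine
        }
        return "vllm"
    }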
@Jeffwan force-pushed the jiaxin/autoscaler-improvement branch from 8269389 to d2e71d7 on August 22, 2025 07:46
@Jeffwan merged commit a2fa03b into vllm-project:main on August 22, 2025
14 checks passed
@Jeffwan deleted the jiaxin/autoscaler-improvement branch on August 22, 2025 11:17