
Conversation

@Jeffwan (Collaborator) commented Oct 1, 2025

Pull Request Description

  • Make ScalingContext the single source of truth for PA configuration
  • Remove the configurable stable window and introduce cool-down window semantics
  • Remove configuration duplication

Related Issues

Resolves: part of #1422

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Jeffwan, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the autoscaling logic by centralizing all PodAutoscaler configuration into a single ScalingContext. This change eliminates configuration duplication, simplifies the codebase, and introduces more robust scaling behaviors, including fluctuation tolerances and cooldown windows for both KPA/APA and HPA strategies. The primary goal is to provide a more consistent and controllable autoscaling experience.

Highlights

  • Configuration Centralization: The ScalingContext has been established as the single source of truth for all PodAutoscaler (PA) configuration, consolidating various scaling parameters into one interface and removing redundant configuration extraction logic.
  • Removal of Duplicated Configuration: The dedicated config package and its ConfigExtractor have been removed, streamlining the configuration process and eliminating duplication by moving configuration parsing directly into the ScalingContext.
  • Enhanced Scaling Behaviors: New scaling parameters such as UpFluctuationTolerance, DownFluctuationTolerance, ScaleUpCooldownWindow, ScaleDownCooldownWindow, and PanicThreshold have been introduced and integrated into the ScalingContext, allowing for more nuanced and stable scaling decisions.
  • Cooldown Window Implementation: A new mechanism for applying cooldown windows to KPA/APA scaling recommendations has been implemented, storing a history of recommendations to prevent rapid, erratic scaling, similar to Kubernetes HPA's stabilization logic.
  • HPA Behavior Customization: The HorizontalPodAutoscaler (HPA) behavior can now be customized directly from PodAutoscaler annotations, allowing users to define stabilization windows and scaling rates for HPA-managed autoscaling.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request is a significant and valuable refactoring that centralizes the PodAutoscaler configuration into ScalingContext, removes duplicated configuration, and introduces cooldown window semantics. The changes make the system more maintainable and align with the stated goals. However, I've identified a few issues, including a critical bug in the new KPA tolerance logic, a potential memory leak in the recommendation stabilization logic, and some code duplication that goes against the PR's intent. Addressing these points will greatly improve the quality and correctness of this contribution.

Signed-off-by: Jiaxin Shan <[email protected]>
@Jeffwan force-pushed the jiaxin/autoscaler-improvement-parameters branch from b5789e4 to 58ba8e3 on October 1, 2025 at 22:02
@Jeffwan (Collaborator, Author) commented Oct 1, 2025

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request is a significant and well-executed refactoring that centralizes PodAutoscaler configuration into ScalingContext, effectively removing duplication and simplifying the overall design. The introduction of cooldown window semantics for KPA/APA and configurable behavior for HPA are excellent feature enhancements that will improve stability and control. My review focuses on a few opportunities to further improve maintainability by reducing code duplication and enhancing observability for configuration fallbacks.
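The reviewer's suggestion about "enhancing observability for configuration fallbacks" can be sketched as a small parse helper. The function and annotation names below are hypothetical, not the aibrix API: the point is that a malformed annotation value falls back to the default and logs the fact, so silent misconfiguration becomes visible, while an absent annotation takes the default quietly as the normal case.

```go
package main

import (
	"fmt"
	"log"
	"strconv"
)

// parseFloatOrDefault is a hypothetical helper illustrating observable
// fallbacks: missing keys default silently, but malformed values are
// logged before defaulting so operators can spot broken annotations.
func parseFloatOrDefault(annotations map[string]string, key string, def float64) float64 {
	raw, ok := annotations[key]
	if !ok {
		return def // absent annotation: default silently, the common case
	}
	f, err := strconv.ParseFloat(raw, 64)
	if err != nil {
		// Malformed value: make the fallback visible in the logs.
		log.Printf("annotation %s=%q is not a valid float, falling back to %v", key, raw, def)
		return def
	}
	return f
}

func main() {
	ann := map[string]string{"autoscaling.example/scale-down-tolerance": "oops"}
	fmt.Println(parseFloatOrDefault(ann, "autoscaling.example/scale-down-tolerance", 0.1))
}
```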

Signed-off-by: Jiaxin Shan <[email protected]>
@Jeffwan Jeffwan merged commit 52419ec into vllm-project:main Oct 1, 2025
14 checks passed
@Jeffwan Jeffwan deleted the jiaxin/autoscaler-improvement-parameters branch October 1, 2025 22:30