Conversation

Collaborator

@dczhu dczhu commented Dec 15, 2025

Pull Request Description

  • Add SHFS (Shared File System) for PD+reuse test
  • kv_transfer/kv_connector: Add aibrix_pd_reuse_connector to support PD + reuse
  • Add AIBrixPDReuseConnector test yaml

Related Issues

Resolves: #[Insert issue number(s)]

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@dczhu dczhu requested a review from Jeffwan December 15, 2025 05:25
@gemini-code-assist
Contributor

Summary of Changes

Hello @dczhu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new AIBrixPDReuseConnector designed to enhance KV cache management within the aibrix_kvcache system. The connector integrates capabilities for both prefiller-decoder separation and KV cache reuse, leveraging a Shared File System (SHFS) as a transparent L2 storage layer. This allows for more flexible and efficient handling of KV cache blocks across different components of a distributed inference system, ultimately aiming to improve performance and resource utilization.

Highlights

  • New Connector: Introduced the aibrix_pd_reuse_connector to support advanced KV cache management strategies within the system.
  • Prefiller-Decoder (PD) Separation: Enables the separation of prefiller and decoder instances, allowing KV cache to be efficiently transferred between them.
  • KV Cache Reuse: Facilitates the reuse of KV cache blocks, which can significantly improve inference efficiency by avoiding redundant computations.
  • Shared File System (SHFS) Integration: Implements a SHFSConnector to utilize a Shared File System as a transparent L2 storage backend for KV cache blocks, making them accessible across different processes or machines.
  • New Environment Variable: Added AIBRIX_KV_CACHE_OL_SHFS_ROOT to configure the root path for the Shared File System (SHFS) cache.
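To make the SHFS idea in the highlights above concrete, here is a minimal, hypothetical sketch of a file-backed KV block store keyed by content hash. The class and method names (`FileBackedKVStore`, `put_block`, `get_block`) are illustrative assumptions, not the actual `SHFSConnector` API from this PR.

```python
import hashlib
import os


class FileBackedKVStore:
    """Illustrative sketch of a shared-filesystem L2 KV block store.

    Not the real SHFSConnector: names and layout are assumptions.
    """

    def __init__(self, root: str):
        self.root = root
        # Create the root directory itself (note: not just its parent).
        os.makedirs(self.root, exist_ok=True)

    def _path(self, key: bytes) -> str:
        # Hash the key so arbitrary bytes map to a safe filename.
        return os.path.join(self.root, hashlib.sha256(key).hexdigest())

    def put_block(self, key: bytes, data: bytes) -> None:
        # Write to a temp file then rename, so concurrent readers on the
        # shared filesystem never observe a partially written block.
        tmp = self._path(key) + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
        os.replace(tmp, self._path(key))

    def get_block(self, key: bytes) -> "bytes | None":
        try:
            with open(self._path(key), "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None
```

Because the store is just files on a shared mount, a prefiller process can `put_block` and a decoder process on another machine can `get_block` the same key, which is the transparency property the summary describes.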

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@dczhu dczhu requested a review from DwyaneShi December 15, 2025 05:25

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new SHFSConnector for a shared file system L2 cache and an AIBrixPDReuseConnector to support prefiller-decoder separation with KV cache reuse. The overall implementation is good, but there are several areas for improvement. I've identified a bug in directory creation within the SHFSConnector, significant performance issues in the mget/mput implementations, and several instances of redundant or unclear code. My review comments provide specific suggestions to address these points, aiming to improve correctness, performance, and maintainability.

    def open(self) -> Status:
        """Open a connection by ensuring the root directory exists."""
        try:
            ensure_dir_exist(str(self.root_path))

high

The ensure_dir_exist function creates the parent directory of the given path, not the path itself. Since self.root_path is the directory that needs to be created, you should use os.makedirs(self.root_path, exist_ok=True) instead. This is a bug as the method does not behave as its docstring suggests.

Suggested change:

-            ensure_dir_exist(str(self.root_path))
+            os.makedirs(self.root_path, exist_ok=True)

Comment on lines 147 to 157
        statuses = []

        for i, (key, mr) in enumerate(zip(keys, mrs)):
            status = await self.get(key, mr)
            statuses.append(status)
            if not status.is_ok() and not status.is_not_found():
                logger.error(f"SHFS mget[{i}] failed: {status}")

        return statuses

high

The current implementation of mget iterates and awaits get calls sequentially. This negates the benefit of an async mget method, which should perform operations in parallel. Using asyncio.gather will execute the file reads concurrently, significantly improving performance for multiple keys.

Suggested change:

-        statuses = []
-        for i, (key, mr) in enumerate(zip(keys, mrs)):
-            status = await self.get(key, mr)
-            statuses.append(status)
-            if not status.is_ok() and not status.is_not_found():
-                logger.error(f"SHFS mget[{i}] failed: {status}")
-        return statuses
+        import asyncio
+        tasks = [self.get(key, mr) for key, mr in zip(keys, mrs)]
+        statuses = await asyncio.gather(*tasks)
+        for i, status in enumerate(statuses):
+            if not status.is_ok() and not status.is_not_found():
+                logger.error(f"SHFS mget[{i}] failed: {status}")
+        return list(statuses)
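The gather pattern suggested above can be sketched in isolation. One practical refinement, shown here as an assumption rather than part of this PR, is bounding concurrency with a semaphore so a large batch does not open hundreds of files at once; `fetch_one` is a hypothetical stand-in for the per-key `get`.

```python
import asyncio


async def mget_concurrent(keys, fetch_one, max_inflight=16):
    """Fetch all keys concurrently, capping in-flight operations.

    Sketch of the asyncio.gather pattern; fetch_one is a placeholder
    for a real async per-key getter.
    """
    sem = asyncio.Semaphore(max_inflight)

    async def bounded(key):
        async with sem:
            return await fetch_one(key)

    # gather preserves input order, so result[i] corresponds to keys[i],
    # matching the per-index status contract of mget.
    return await asyncio.gather(*(bounded(k) for k in keys))
```

Note that if the underlying per-key operation does blocking file I/O, wrapping it in `asyncio.to_thread` (or an executor) is what actually yields parallelism; `gather` alone only interleaves coroutines that genuinely await.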

Comment on lines 164 to 174
        statuses = []

        for i, (key, mr) in enumerate(zip(keys, mrs)):
            status = await self.put(key, mr)
            statuses.append(status)
            if not status.is_ok():
                logger.error(f"SHFS mput[{i}] failed: {status}")

        return statuses

high

Similar to mget, the mput method is implemented as a sequential loop. This is inefficient. To leverage the asynchronicity and improve performance, you should use asyncio.gather to perform the file write operations concurrently.

Suggested change:

-        statuses = []
-        for i, (key, mr) in enumerate(zip(keys, mrs)):
-            status = await self.put(key, mr)
-            statuses.append(status)
-            if not status.is_ok():
-                logger.error(f"SHFS mput[{i}] failed: {status}")
-        return statuses
+        import asyncio
+        tasks = [self.put(key, mr) for key, mr in zip(keys, mrs)]
+        statuses = await asyncio.gather(*tasks)
+        for i, status in enumerate(statuses):
+            if not status.is_ok():
+                logger.error(f"SHFS mput[{i}] failed: {status}")
+        return list(statuses)

Comment on lines +150 to +152
    AIBRIX_KV_CACHE_OL_SHFS_ROOT: str = os.path.expanduser(
        os.path.join(os.path.expanduser("~"), ".kv_cache_ol", "shfs")
    )

medium

The outer os.path.expanduser is redundant here. os.path.join(os.path.expanduser("~"), ...) already produces an absolute path, so wrapping it in another os.path.expanduser has no effect. Removing it will make the code cleaner.

Suggested change:

-    AIBRIX_KV_CACHE_OL_SHFS_ROOT: str = os.path.expanduser(
-        os.path.join(os.path.expanduser("~"), ".kv_cache_ol", "shfs")
-    )
+    AIBRIX_KV_CACHE_OL_SHFS_ROOT: str = os.path.join(
+        os.path.expanduser("~"), ".kv_cache_ol", "shfs"
+    )

        root = envs.AIBRIX_KV_CACHE_OL_SHFS_ROOT

        # Create full path: root/conn_id
        full_path = os.path.join(os.path.expanduser(root), conn_id)

medium

The root variable, obtained from envs.AIBRIX_KV_CACHE_OL_SHFS_ROOT, is already expanded in envs.py. The os.path.expanduser call here is redundant and can be removed for clarity.

Suggested change:

-        full_path = os.path.join(os.path.expanduser(root), conn_id)
+        full_path = os.path.join(root, conn_id)

Comment on lines 117 to 140
    def get_batches(
        self,
        keys: Sequence[bytes],
        mrs: Sequence[MemoryRegion | Sequence[MemoryRegion]],
        batch_size: int,
    ) -> Sequence[
        Sequence[tuple[bytes, MemoryRegion | Sequence[MemoryRegion]]]
    ]:
        """Get batches for mput/mget operations."""
        batches = []
        current_batch = []

        for key, mr in zip(keys, mrs):
            current_batch.append((key, mr))
            if len(current_batch) >= batch_size:
                batches.append(current_batch)
                current_batch = []

        if current_batch:
            batches.append(current_batch)

        return batches

medium

The get_batches method appears to be unused within the SHFSConnector. If this method is not intended for future use, consider removing it to reduce dead code and improve maintainability.

Comment on lines 355 to 356
+        import os
+        from aibrix_kvcache import envs

medium

Imports should be at the top level of the module, not inside a function. This adheres to Python's style guide (PEP 8) and avoids the overhead of repeated imports in a potentially hot path. Please move import os and from aibrix_kvcache import envs to the top of the file.

Comment on lines 858 to 928
+        if is_prefiller_with_remote_decode:
+            # Prefiller: If entire KV cache exists in SHFS, skip loading (decoder will load it)
+            if exists_status.is_ok() and num_existing_tokens >= aligned_query_len:
+                return 0  # Skip loading, decoder will handle it
+        elif is_decoder_with_remote_prefill:
+            # Decoder: Always try to load from SHFS if exists (this is the main path for PD separation)
+            if exists_status.is_ok() and num_existing_tokens >= aligned_query_len:
+                # Continue to acquire (will load from SHFS)
+                pass
+        else:
+            # KV cache reuse only (no PD separation): Use threshold to avoid loading very small chunks
+            threshold = max(
+                OFFLOADING_CONNECTOR_SKIP_THRESHOLD * self.engine_block_ntokens,
+                self.cache_block_ntokens,
+            )
+            if aligned_query_len < threshold:
+                return 0
+            # For kvcache reuse, proceed to load if exists
+            if exists_status.is_ok() and num_existing_tokens >= aligned_query_len:
+                # KV cache reuse, entire cache exists
+                pass

medium

The conditional logic here can be simplified. The if statements that only contain a pass statement are redundant because the code would fall through to the loading logic anyway. Removing them and restructuring the conditions will make the code more readable and easier to maintain.

        if is_prefiller_with_remote_decode:
            # Prefiller: If entire KV cache exists in SHFS, skip loading (decoder will load it)
            if exists_status.is_ok() and num_existing_tokens >= aligned_query_len:
                return 0  # Skip loading, decoder will handle it
        elif not is_decoder_with_remote_prefill:
            # KV cache reuse only (no PD separation): Use threshold to avoid loading very small chunks
            threshold = max(
                OFFLOADING_CONNECTOR_SKIP_THRESHOLD * self.engine_block_ntokens,
                self.cache_block_ntokens,
            )
            if aligned_query_len < threshold:
                return 0

@dczhu dczhu force-pushed the dczhu/pd-reuse branch 2 times, most recently from ccdeb16 to e78cfae Compare December 18, 2025 04:11
@DwyaneShi
Collaborator

@dczhu there are some formatting issues; please check the errors in the failed workflow run and fix them.

+        self._scheduler_meta = AIBrixPDReuseConnectorMetadata({})
+
+        # Track requests that need PD transfer
+        self._reqs_need_send: dict[str, float] = {}  # req_id -> expiration_time

seems unused?

+        assert config.kv_transfer_config.engine_id is not None
+        self.engine_id = config.kv_transfer_config.engine_id
+
+        self.side_channel_host = getattr(vllm.envs, 'VLLM_NIXL_SIDE_CHANNEL_HOST', '127.0.0.1')

In this kvcache-oriented architecture, since P and D will not talk directly, do we still need the side channel as NIXL does?

+        if not params:
+            return
+
+        # Handle PD separation: update metadata if needed

PD separation -> PD disaggregation

+        from aibrix_kvcache import envs
+        l2_backend = os.getenv("AIBRIX_KV_CACHE_OL_L2_CACHE_BACKEND", "").strip().upper()
+
+        needs_async_load = l2_backend not in ["SHFS", ""]

Right now the decoder also relies on start_load_kv_before_update to load the KV cache, which is not async no matter which L2 backend we are using, so let's remove the L2 backend check here for now.

@dczhu dczhu force-pushed the dczhu/pd-reuse branch 2 times, most recently from 253804d to 5f24073 Compare December 20, 2025 02:09
@dczhu
Collaborator Author

dczhu commented Dec 20, 2025

Thanks @DwyaneShi for the detailed review! I updated the connector file and also the test image (which is reflected in the test yaml files).

@dczhu
Collaborator Author

dczhu commented Dec 20, 2025

Regarding prefiller's partial block at the tail, I'll need to add support for st/ld, probably in a separate feature enhancement PR. @DwyaneShi
