
[Bugfix] Revert max_prompt_len validation for decoder-only models. #16741


Merged

merged 2 commits into vllm-project:main on Apr 17, 2025

Conversation

davidheineman
Contributor

@davidheineman davidheineman commented Apr 16, 2025

A bugfix (#16156) improved input validation for multi-modal models.

However, for decoder-only models it changed:

if len(prompt_ids) > max_prompt_len:

to

if len(prompt_ids) >= max_prompt_len:

This change rejects inputs for decoder-only models whose prompt length is exactly equal to max_prompt_len, which is valid behavior. It breaks LLM eval frameworks such as OLMES and the LM Evaluation Harness. See:

FIX #16445

This PR reverts the check to the original behavior (len(prompt_ids) > max_prompt_len) and applies the same fix to the v1 engine.
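
As a minimal sketch of the restored boundary check (the helper name and error message below are hypothetical, not vLLM's actual code), the comparison should only reject prompts strictly longer than the limit:

def validate_prompt_length(prompt_ids: list[int], max_prompt_len: int) -> None:
    # A prompt of exactly max_prompt_len tokens is valid for decoder-only
    # models, so only strictly longer prompts are rejected.
    if len(prompt_ids) > max_prompt_len:
        raise ValueError(
            f"Prompt length {len(prompt_ids)} exceeds the maximum of "
            f"{max_prompt_len} tokens."
        )

# Boundary behavior: a prompt at exactly the limit is accepted,
# one token over raises ValueError.
validate_prompt_length(list(range(2048)), max_prompt_len=2048)  # OK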


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: David Heineman <[email protected]>
Signed-off-by: David Heineman <[email protected]>
Member

@DarkLight1337 DarkLight1337 left a comment


Nice catch!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) April 17, 2025 02:36
@github-actions github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label Apr 17, 2025
@vllm-bot vllm-bot merged commit 607029e into vllm-project:main Apr 17, 2025
63 of 65 checks passed
lionelvillard pushed a commit to lionelvillard/vllm that referenced this pull request Apr 17, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request Apr 30, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Labels
ready (ONLY add when PR is ready to merge/full CI is needed), v1
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Bug]: VLLM_USE_V1=0 is needed if prompt length equals max model length
3 participants