[Misc] Benchmarks for audio models #16505
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
PS: these datasets need to be granted access manually on HF. Posting here in case we decide to run performance checks on them:
benchmarks/benchmark_serving.py (Outdated)
"Multi-modal content is only supported on 'openai-chat' and " \ | ||
"'openai-audio' backend.") |
Can we abstract this to a class-level flag on the dataset class?
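For instance, a minimal sketch of what such a flag could look like (all names here — `BenchmarkDataset`, `ASRDataset`, `IS_MULTIMODAL` — are illustrative, not the actual classes in the benchmark code):

```python
# Hedged sketch: drive the backend check from a class-level flag on the
# dataset class instead of a hard-coded dataset check. Names are illustrative.
class BenchmarkDataset:
    IS_MULTIMODAL: bool = False  # text-only by default


class ASRDataset(BenchmarkDataset):
    IS_MULTIMODAL = True  # audio samples are multi-modal content


MULTIMODAL_BACKENDS = {"openai-chat", "openai-audio"}


def check_backend(dataset: BenchmarkDataset, backend: str) -> None:
    if dataset.IS_MULTIMODAL and backend not in MULTIMODAL_BACKENDS:
        raise ValueError(
            "Multi-modal content is only supported on 'openai-chat' and "
            "'openai-audio' backend.")
```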
Maybe we also need this for the backends
Actually that may take more effort, let's just merge this PR first
I can make a list like OPENAI_COMPATIBLE_BACKENDS, but I wouldn't tie it to the dataset.
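Roughly along these lines (a sketch; the exact contents and placement of the list are an assumption):

```python
# Hedged sketch: a module-level allow-list keyed on backend names,
# kept independent of the dataset classes. Contents are an assumption.
OPENAI_COMPATIBLE_BACKENDS = ["openai-chat", "openai-audio"]


def validate_multimodal_backend(backend: str) -> None:
    # Checks the backend key/name directly rather than the request
    # function it maps to.
    if backend not in OPENAI_COMPATIBLE_BACKENDS:
        raise ValueError(
            "Multi-modal content is only supported on 'openai-chat' and "
            "'openai-audio' backend.")
```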
Actually, maybe not even that, because the current check is on the specific backend key/name rather than on the function it uses.
> Actually that may take more effort, let's just merge this PR first

Right, I missed this comment; I was replying to an old version lol
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py
Can you merge from main to try to fix the CI errors?
Force-pushed from 8f7be52 to 9f13aff
Implements the feature requested in #16354.
Test with:
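For example, an invocation along these lines (the model, endpoint, and dataset arguments are placeholders, not the exact command from this PR):

```bash
# Hypothetical invocation: serve an ASR model, then benchmark it over
# the new openai-audio backend. Arguments are placeholders.
vllm serve openai/whisper-large-v3

python benchmarks/benchmark_serving.py \
    --backend openai-audio \
    --model openai/whisper-large-v3 \
    --endpoint /v1/audio/transcriptions \
    --dataset-name hf \
    --dataset-path <audio-dataset-on-hf>
```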
It's still a draft because I want to sweep through the datasets first, but reviews are welcome! cc @DarkLight1337
FIX #16354