A high-throughput and memory-efficient inference and serving engine for LLMs
Python · 65.9k stars · 12.1k forks
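As a quick orientation, here is a minimal sketch of vLLM's offline batch inference API, assuming `pip install vllm` and a GPU with enough memory for the chosen model; the model ID and sampling values are illustrative, not fixed.

```python
# Minimal offline batch inference with vLLM (illustrative model and settings).
from vllm import LLM, SamplingParams

# Load a small model; any compatible Hugging Face model ID works here.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters control decoding; these values are examples only.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is", "vLLM is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput holds the prompt and its generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```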
A Transformers-compatible library for applying compression algorithms (e.g., quantization and sparsity) to LLMs for optimized deployment with vLLM
Python · 2.4k stars · 331 forks
Common recipes to run vLLM
Jupyter Notebook · 291 stars · 108 forks
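A common recipe pattern is querying a running vLLM server through its OpenAI-compatible API. The sketch below assumes a server was started locally (e.g., with `vllm serve meta-llama/Llama-3.1-8B-Instruct`); the port and model name are assumptions, not fixed values.

```python
# Query a locally running vLLM server via the OpenAI-compatible endpoint.
from openai import OpenAI

# vLLM does not require a real API key; "EMPTY" is the conventional placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Give a one-line summary of vLLM."}],
)
print(response.choices[0].message.content)
```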
A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM
Python · 171 stars · 22 forks
Intelligent Router for Mixture-of-Models
Go · 2.5k stars · 350 forks
A framework for efficient inference with omni-modality models
Community-maintained hardware plugin for vLLM on Ascend
TPU inference for vLLM, with unified JAX and PyTorch support.
Community-maintained hardware plugin for vLLM on AWS Neuron
Code for vLLM's CI and performance benchmark infrastructure
Evaluate and enhance your LLM deployments for real-world inference needs
Community-maintained hardware plugin for vLLM on Apple Silicon