Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 43.8k stars · 6.7k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.2k stars · 112 forks

Repositories

Showing 10 of 16 repositories
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1,195 stars · Apache-2.0 · 112 forks · 35 issues (8 need help) · 45 pull requests · Updated Apr 9, 2025
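
    A hedged sketch of a one-shot weight quantization run with llm-compressor; the module paths, the GPTQModifier recipe, and the oneshot arguments follow the project's published examples, but exact names can shift between releases, so treat this as an illustration rather than the canonical API:

    ```python
    # Sketch: one-shot W4A16 (GPTQ) quantization for later vLLM deployment.
    # Module paths and arguments follow llm-compressor's examples and may
    # vary by release -- check the repo's README for the current API.
    from llmcompressor.transformers import oneshot
    from llmcompressor.modifiers.quantization import GPTQModifier

    # Quantize all Linear layers to 4-bit weights, keeping the lm_head intact.
    recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

    oneshot(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any HF-compatible model id
        dataset="open_platypus",                     # calibration dataset
        recipe=recipe,
        output_dir="TinyLlama-1.1B-W4A16",           # vLLM can load this directly
        max_seq_length=2048,
        num_calibration_samples=512,
    )
    ```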
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 43,832 stars · Apache-2.0 · 6,705 forks · 1,622 issues (18 need help) · 557 pull requests · Updated Apr 9, 2025
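
    A minimal offline-inference sketch using vLLM's documented Python API (the model id below is just an example):

    ```python
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Continuous batching and PagedAttention are handled internally by the engine.
    llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model id
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(output.outputs[0].text)
    ```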
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    Python · 436 stars · Apache-2.0 · 74 forks · 95 issues · 23 pull requests · Updated Apr 9, 2025
  • aibrix Public

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Jupyter Notebook · 3,409 stars · Apache-2.0 · 319 forks · 146 issues (11 need help) · 10 pull requests · Updated Apr 8, 2025
  • vllm-spyre Public

    Community maintained hardware plugin for vLLM on Spyre

    Python · 18 stars · Apache-2.0 · 11 forks · 20 issues (5 need help) · 6 pull requests · Updated Apr 8, 2025
  • production-stack Public

    vLLM’s reference system for Kubernetes-native, cluster-wide deployment, with community-driven performance optimization

    Python · 1,001 stars · Apache-2.0 · 136 forks · 42 issues (2 need help) · 20 pull requests · Updated Apr 9, 2025
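
    Once the stack is deployed, requests go through vLLM's OpenAI-compatible API, so any OpenAI client can talk to the router. A hedged client-side sketch; the base_url is a hypothetical placeholder for the router Service in your cluster, and the model name is whatever the stack is configured to serve:

    ```python
    from openai import OpenAI

    client = OpenAI(
        base_url="http://vllm-router-service:80/v1",  # hypothetical in-cluster URL
        api_key="EMPTY",  # vLLM's server does not require a real key by default
    )

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # example model name
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
    ```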
  • HTML · 6 stars · 14 forks · 0 issues · 1 pull request · Updated Apr 8, 2025
  • buildkite-ci Public
    HCL · 8 stars · 21 forks · 0 issues · 6 pull requests · Updated Apr 7, 2025
  • flash-attention Public Forked from Dao-AILab/flash-attention

    Fast and memory-efficient exact attention

    Python · 58 stars · BSD-3-Clause · 1,599 forks · 0 issues · 10 pull requests · Updated Apr 5, 2025
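
    A minimal sketch of the core flash_attn_func call from this package (requires a CUDA device and fp16/bf16 tensors):

    ```python
    import torch
    from flash_attn import flash_attn_func

    batch, seqlen, nheads, headdim = 2, 1024, 8, 64
    q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    # Exact attention, computed without materializing the full seqlen x seqlen
    # score matrix in GPU memory.
    out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
    print(out.shape)  # torch.Size([2, 1024, 8, 64])
    ```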
  • vllm-openvino Public
    2 stars · Apache-2.0 · 0 forks · 1 issue · 0 pull requests · Updated Mar 19, 2025