# All Kinds of Supported Inference Backends

If you want to integrate more backends into llmaz, please refer to this PR. Contributions are always welcome.

## llama.cpp

llama.cpp enables LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, both locally and in the cloud.
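
For a quick feel for the engine itself (outside of llmaz), here is a minimal sketch using the llama-cpp-python bindings; the GGUF path and generation settings are placeholder assumptions.

```python
# Minimal llama.cpp example via the llama-cpp-python bindings.
# The model path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")

result = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(result["choices"][0]["text"])
```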

## SGLang

SGLang is yet another fast serving framework for large language models and vision language models.
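
As a sketch, a running SGLang server can be queried through its OpenAI-compatible API; the launch command in the comment and port 30000 follow SGLang's documented defaults, while the model name passed to the client is an assumption.

```python
# Querying a running SGLang server through its OpenAI-compatible API.
# Assumes a server started roughly like:
#   python -m sglang.launch_server --model-path <model> --port 30000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")

resp = client.chat.completions.create(
    model="default",  # assumption: SGLang serves the loaded model as "default"
    messages=[{"role": "user", "content": "What is SGLang?"}],
)
print(resp.choices[0].message.content)
```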

## Text-Generation-Inference

text-generation-inference is a Rust, Python, and gRPC server for text generation inference. It is used in production at Hugging Face to power Hugging Chat, the Inference API, and Inference Endpoints.
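
As a minimal sketch, TGI exposes a REST `/generate` endpoint; the host and port below are assumptions for a locally running server.

```python
# Calling text-generation-inference's REST /generate endpoint.
# Host and port are assumptions for a local TGI server.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 64},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```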

## vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
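
vLLM also ships an offline Python API; here is a minimal sketch of it, with the model name as a placeholder (under llmaz, vLLM would typically run as a serving workload instead).

```python
# Minimal offline-inference sketch with vLLM's Python API.
# The model name is a placeholder; any Hugging Face causal LM works.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```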