TensorRT-LLM Release 0.16.0
Key Features and Enhancements

- RecurrentGemma enhancements; refer to `examples/recurrentgemma/README.md`.
- LLaMA enhancements; refer to `examples/llama/README.md`.
- Added a `max_num_tokens` dynamic tuning feature, which can be enabled by passing `--enable_max_num_tokens_tuning` to `gptManagerBenchmark`.
- Added `max_num_tokens` and `max_batch_size` arguments to control the runtime parameters.
- Added `extended_runtime_perf_knob_config` to enable various performance configurations.
- Added AutoAWQ checkpoints support for Qwen. Refer to the “INT4-AWQ” section in `examples/qwen/README.md`.
- Added AutoAWQ and AutoGPTQ Hugging Face checkpoints support for LLaMA. (#2458)
- Added `allottedTimeMs` to the C++ `Request` class to support per-request timeout.

API Changes

- Removed the `enable_xqa` argument from `trtllm-build`.
- Removed `--use_embedding_sharing` from the checkpoint conversion scripts.
- The `if __name__ == "__main__"` entry point is required for both single-GPU and multi-GPU cases when using the `LLM` API (see the sketch after this list).
- Added the `enable_chunked_prefill` flag to the `LlmArgs` of the `LLM` API.
- Changes to the `trtllm-build` command.
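
The following is a minimal, hedged sketch of the two `LLM` API items above: the required `if __name__ == "__main__"` entry point and the new `enable_chunked_prefill` flag. The model path is a placeholder, `enable_chunked_prefill` is assumed to be accepted as an `LLM(...)` keyword argument that is forwarded to `LlmArgs`, and the output fields follow the quick-start examples; check the 0.16.0 API reference before relying on the exact spelling.

```python
from tensorrt_llm import LLM, SamplingParams


def main():
    llm = LLM(
        model="/path/to/model",        # placeholder checkpoint or Hugging Face model directory
        enable_chunked_prefill=True,   # assumed keyword, forwarded to LlmArgs
    )
    sampling = SamplingParams(temperature=0.8, top_p=0.95)
    for output in llm.generate(["Hello, my name is"], sampling):
        print(output.outputs[0].text)  # field layout assumed from the quick-start examples


if __name__ == "__main__":
    # Required for both single-GPU and multi-GPU runs, because the LLM API may
    # spawn additional processes that re-import this module.
    main()
```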

Model Updates

- Multimodal model updates; refer to `examples/multimodal/README.md`.
- See also `examples/multimodal`.
- Added Stable Diffusion XL support. Refer to `examples/sdxl/README.md`. Thanks for the contribution from @Zars19 in #1514.

Fixed Issues

- Fixed `sampling_params` to only be set up if `end_id` is None and `tokenizer` is not None in the `LLM` API. Thanks to the contribution from @mfuntowicz in #2573.
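
As a hedged illustration of the fix above, `end_id` can also be pinned explicitly on `SamplingParams` so that it does not depend on tokenizer-derived defaults. The keyword name is assumed from the `sampling_params.end_id` attribute referenced in #2573, and the id value is a placeholder.

```python
from tensorrt_llm import SamplingParams

# Placeholder EOS id; use the actual end-of-sequence token id of your model.
params = SamplingParams(end_id=2)
```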

Infrastructure Changes

- The base Docker image for TensorRT-LLM is updated to `nvcr.io/nvidia/pytorch:24.11-py3`.
- The base Docker image for the TensorRT-LLM Backend is updated to `nvcr.io/nvidia/tritonserver:24.11-py3`.

Known Issues

- A known NCCL-related issue in multi-GPU runs can be worked around by setting `export NCCL_P2P_LEVEL=SYS`.
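
When launching from a Python script, the same workaround only needs the variable to be present in the environment before NCCL communicators are created. A minimal sketch, assuming it is set at the top of the launching script:

```python
import os

# Equivalent of `export NCCL_P2P_LEVEL=SYS`; set it early so NCCL sees it once
# communicators are initialized later in the run.
os.environ.setdefault("NCCL_P2P_LEVEL", "SYS")
```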