
AutoRound

Advanced Weight-Only Quantization Algorithm for LLMs


AutoRound is an advanced weight-only quantization algorithm for low-bit LLM inference. It is tailored for a wide range of models and consistently delivers noticeable improvements, often significantly outperforming SignRound, at the cost of more tuning time for quantization.

Our method adopts signed gradient descent to fine-tune the rounding values and min-max values of weights in just 200 steps, and it competes impressively against recent methods without introducing any additional inference overhead. The image below presents an overview of AutoRound.
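
To make the idea concrete, here is a minimal conceptual sketch, not the library's implementation: a per-weight rounding offset and two min/max scaling factors are tuned with signed gradient descent through a straight-through estimator, against a toy weight-reconstruction loss (AutoRound itself tunes block by block against the original block outputs). The names, shapes, and loss below are illustrative assumptions.

import torch
import torch.nn.functional as F

def ste_round(x):
    # Straight-through estimator: round on the forward pass, identity gradient on the backward pass.
    return (x.round() - x).detach() + x

def fake_quant(w, v, alpha, beta, bits=4):
    # alpha and beta rescale the weight range: these are the tunable "min-max" values.
    wmax, wmin = w.max() * alpha, w.min() * beta
    scale = (wmax - wmin) / (2 ** bits - 1)
    zp = ste_round(-wmin / scale)
    # v is the per-weight rounding offset that nudges weights across their rounding thresholds.
    q = torch.clamp(ste_round(w / scale + v) + zp, 0, 2 ** bits - 1)
    return (q - zp) * scale

w = torch.randn(16, 16)                      # stand-in weight tensor
v = torch.zeros_like(w, requires_grad=True)  # rounding offsets
alpha = torch.ones((), requires_grad=True)   # max-range scale
beta = torch.ones((), requires_grad=True)    # min-range scale
lr = 1.0 / 200                               # matches the default lr = 1/iters

for _ in range(200):
    loss = F.mse_loss(fake_quant(w, v, alpha, beta), w)  # toy reconstruction loss
    loss.backward()
    with torch.no_grad():
        for p in (v, alpha, beta):
            p -= lr * p.grad.sign()  # signed gradient descent: step by the gradient's sign only
            p.grad = None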

What's New

Prerequisites

  • Python 3.9 or higher

Installation

Build from Source

pip install -r requirements.txt
python setup.py install

Install from PyPI

pip install auto-round

Model quantization

Gaudi2 / CPU / GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

from auto_round import AutoRound

bits, group_size, sym = 4, 128, False
# device: one of "auto", None, "hpu", "cpu", "cuda"
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, sym=sym, device=None)
autoround.quantize()
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir)

Detailed Hyperparameters

  • model: The PyTorch model to be quantized.

  • tokenizer: An optional tokenizer for processing input data. If none, a dataset must be provided.

  • bits (int): Number of bits for quantization (default is 4).

  • group_size (int): Size of the quantization group (default is 128).

  • sym (bool): Whether to use symmetric quantization (default is False).

  • enable_quanted_input (bool): Whether to use the output of the previous quantized block as the input for the current block for tuning (default is True).

  • enable_minmax_tuning (bool): Whether to enable weight min-max tuning (default is True).

  • iters (int): Number of tuning iterations (default is 200).

  • lr (float): The learning rate for the rounding values (default is None; it is set to 1.0/iters automatically).

  • minmax_lr (float): The learning rate for min-max tuning (default is None; it is set to lr automatically).

  • n_samples (int): Number of samples for tuning (default is 512).

  • seqlen (int): Data length of the sequence for tuning (default is 2048).

  • batch_size (int): Batch size for training (default is 8).

  • scale_dtype (str): The data type of quantization scale to be used (default is "float16"), different kernels have different choices.

  • amp (bool): Whether to use automatic mixed precision (default is True).

  • n_blocks (int): Number of blocks packed together and tuned as one unit (default is 1).

  • gradient_accumulate_steps (int): Number of gradient accumulation steps (default is 1).

  • low_gpu_mem_usage (bool): Whether to save GPU memory at the cost of ~20% more tuning time (default is True).

  • dataset (Union[str, list, tuple, torch.utils.data.DataLoader]): The dataset for tuning (default is "NeelNanda/pile-10k"). Local JSON files and combinations of datasets are supported, e.g. "./tmp.json,NeelNanda/pile-10k:train, mbpp:train+validation+test".

  • weight_config (dict): Configuration for weight quantization (default is an empty dictionary), mainly for mixed bits or mixed precision; see the sketch after this list.

  • device: The device to be used for tuning. The default is set to 'auto', allowing for automatic detection.
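
The snippet below is a hedged sketch of how several of these hyperparameters fit together. The layer name and the keys inside weight_config are illustrative assumptions (the exact schema depends on the model and the AutoRound version), and the combined dataset string reuses the example from the dataset entry above.

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-2.7b"  # any supported model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical mixed-precision config: keep one projection layer at 8 bits.
weight_config = {
    "model.decoder.layers.0.self_attn.q_proj": {"bits": 8, "group_size": 128},
}

autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=False,
    iters=200,        # tuning steps
    n_samples=512,    # calibration samples
    seqlen=2048,      # calibration sequence length
    batch_size=8,
    dataset="./tmp.json,NeelNanda/pile-10k:train",  # combined local + hub datasets
    weight_config=weight_config,
    device="auto",
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround_mixed")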

Model inference

Please run the quantization code first.

CPU

# Install the latest intel-extension-for-transformers from source first: https://github.com/intel/intel-extension-for-transformers
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

Support List

Model | Supported
Intel/neural-chat-7b-v3-3 | HF-int4-model, accuracy, recipe, example
Intel/neural-chat-7b-v3-1 | HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-v0.1 | HF-int4-model, accuracy, recipe, example
microsoft/phi-2 | HF-int4-model, accuracy, recipe, example
tiiuae/falcon-7b | HF-int4-model, accuracy, recipe, example
google/gemma-2b | HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-Instruct-v0.2 | HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b | HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b-it | HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-Instruct-v0.1 | HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-v0.1 | HF-int4-model (under review), accuracy, recipe, example
meta-llama/Meta-Llama-3-8B-Instruct | accuracy, recipe, example
meta-llama/Llama-2-7b-chat-hf | accuracy, recipe, example
Qwen/Qwen1.5-7B-Chat | accuracy, sym recipe, asym recipe, example
baichuan-inc/Baichuan2-7B-Chat | accuracy, recipe, example
01-ai/Yi-6B-Chat | accuracy, recipe, example
facebook/opt-2.7b | accuracy, recipe, example
bigscience/bloom-3b | accuracy, recipe, example
EleutherAI/gpt-j-6b | accuracy, recipe, example
Salesforce/codegen25-7b-multi | example
huggyllama/llama-7b | example
mosaicml/mpt-7b | example
THUDM/chatglm3-6b | example
MBZUAI/LaMini-GPT-124M | example
EleutherAI/gpt-neo-125m | example
databricks/dolly-v2-3b | example
stabilityai/stablelm-base-alpha-3b | example

Comparison with other methods

We provide a comprehensive comparison with other methods in our accuracy data section. In summary, based on the average accuracies of 11 zero-shot tasks across llamav1/llamav2/mistral-7b at W4G-1, W4G128, W3G128, and W2G128, our approach outperformed GPTQ in 30 of 32 settings, AWQ in 27 of 32, HQQ in 15 of 16, and OmniQuant in 16 of 16.

Tips

1. Consider increasing the number of tuning steps (iters) to achieve better results, albeit with increased tuning time.

2. Setting 'enable_quanted_input' to False has been observed to occasionally yield improved results.

3. Setting 'minmax_lr' to 2.0/iters has been observed to occasionally yield improved results, as shown in the sketch below.
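
A minimal sketch applying these tips, assuming model and tokenizer are already loaded as in the quantization example above; the values are illustrative, not recommendations:

from auto_round import AutoRound

iters = 1000  # more tuning steps than the default 200 (tip 1)
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=False,
    iters=iters,
    enable_quanted_input=False,  # tip 2
    minmax_lr=2.0 / iters,       # tip 3
)
autoround.quantize()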

Reference

If you find SignRound useful for your research, please cite our paper:

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
