Brevitas is AMD's library for neural network quantization. 🤗 Optimum-AMD integrates with Brevitas to make it easier to quantize Transformers models through Brevitas.
This integration also allows models quantized through Brevitas to be exported to ONNX.
For a refresher on quantization, check out this documentation.
See `~BrevitasQuantizer` and `~BrevitasQuantizationConfig` for all available options.
## Supported models

Currently, only the following architectures are tested and supported:

- Llama
- OPT
## Dynamic quantization

```python
from optimum.amd import BrevitasQuantizationConfig, BrevitasQuantizer

# Prepare the quantizer, specifying its configuration and loading the model.
qconfig = BrevitasQuantizationConfig(
    is_static=False,
    apply_gptq=False,
    apply_weight_equalization=False,
    activations_equalization=False,
    weights_symmetric=True,
    activations_symmetric=False,
)
quantizer = BrevitasQuantizer.from_pretrained("facebook/opt-125m")
model = quantizer.quantize(qconfig)
```
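The `weights_symmetric=True` / `activations_symmetric=False` options above correspond to two affine quantization schemes: symmetric quantization fixes the zero-point at 0, while asymmetric quantization adds a zero-point so the integer grid can cover an arbitrary `[min, max]` range (useful for non-negative activations). The following is a minimal pure-Python sketch of that arithmetic, for illustration only; it is not the Brevitas implementation:

```python
def quantize_symmetric(xs, n_bits=8):
    # Symmetric: zero-point fixed at 0; range is [-max|x|, +max|x|].
    qmax = 2 ** (n_bits - 1) - 1  # 127 for 8-bit
    scale = max(abs(x) for x in xs) / qmax
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in xs]
    return q, scale

def quantize_asymmetric(xs, n_bits=8):
    # Asymmetric: a zero-point shifts the integer grid onto [min, max].
    qmin, qmax = 0, 2 ** n_bits - 1  # 0..255 for 8-bit
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

weights = [-1.27, -0.2, 0.0, 0.64, 1.27]
acts = [0.0, 0.51, 1.02, 2.55]  # e.g. non-negative post-activation values

qw, w_scale = quantize_symmetric(weights)
qa, a_scale, a_zp = quantize_asymmetric(acts)

# Dequantized values round-trip to within half a quantization step.
w_hat = [v * w_scale for v in qw]
a_hat = [(v - a_zp) * a_scale for v in qa]
```

Since weights are roughly zero-centered, the symmetric scheme loses little range by pinning the zero-point, which is why it is the common default for weights.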
## Static quantization

```python
from optimum.amd import BrevitasQuantizationConfig, BrevitasQuantizer
from optimum.amd.brevitas.data_utils import get_dataset_for_model
from transformers import AutoTokenizer

# Prepare the quantizer, specifying its configuration and loading the model.
qconfig = BrevitasQuantizationConfig(
    is_static=True,
    apply_gptq=False,
    apply_weight_equalization=True,
    activations_equalization=False,
    weights_symmetric=True,
    activations_symmetric=False,
)
quantizer = BrevitasQuantizer.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Load the data for calibration and evaluation.
calibration_dataset = get_dataset_for_model(
    "facebook/opt-125m",
    qconfig=qconfig,
    dataset_name="wikitext2",
    tokenizer=tokenizer,
    nsamples=128,
    seqlen=512,
    split="train",
)

model = quantizer.quantize(qconfig, calibration_dataset)
```
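The key difference from the dynamic case is that `is_static=True` fixes the activation quantization parameters ahead of time using the calibration dataset, rather than recomputing them for every input at inference. A toy pure-Python sketch of that idea (an illustration of the concept, not the Brevitas calibration logic):

```python
def calibrate_scale(batches, n_bits=8):
    # Static quantization: observe activation magnitudes once over the
    # calibration set, then freeze a single scale for all future inputs.
    qmax = 2 ** (n_bits - 1) - 1  # 127 for 8-bit
    max_abs = max(abs(x) for batch in batches for x in batch)
    return max_abs / qmax

def quantize(xs, scale, n_bits=8):
    qmax = 2 ** (n_bits - 1) - 1
    return [max(-qmax - 1, min(qmax, round(x / scale))) for x in xs]

calibration_batches = [[-1.27, 0.4], [0.9, -0.3]]
scale = calibrate_scale(calibration_batches)  # frozen after calibration

# At inference time the same scale is reused; values outside the
# calibrated range simply clip to the integer limits.
q = quantize([0.5, -1.0, 2.0], scale)
```

This is why a representative calibration set matters: activations far outside the observed range are clipped, so the quality of the frozen scales depends directly on the calibration data.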
## Exporting Brevitas models to ONNX

Brevitas models can be exported to ONNX using Optimum:

```python
from optimum.amd.brevitas.export import onnx_export_from_quantized_model

# Export to ONNX through optimum.exporters.
onnx_export_from_quantized_model(model, "llm_quantized_onnx")
```
## Full example

A full example is available at https://github.com/huggingface/optimum-amd/tree/main/examples/quantization/brevitas.