Optimization
Optimum Intel can be used to apply popular compression techniques such as quantization, pruning and knowledge distillation.
Post-training optimization
Post-training compression techniques such as dynamic and static quantization can be easily applied to your model using our INCQuantizer. Note that quantization is currently only supported for CPUs (only CPU backends are available), so we will not be utilizing GPUs / CUDA in the following examples.
Dynamic quantization
You can easily apply dynamic quantization to your model using the following command line:
optimum-cli inc quantize --model distilbert-base-cased-distilled-squad --output quantized_distilbert
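The same can be done programmatically. Here is a minimal sketch of dynamic quantization without accuracy-aware tuning, using only the INCQuantizer APIs shown in this guide (the save directory name is arbitrary):

from transformers import AutoModelForQuestionAnswering
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
# Dynamic quantization needs no calibration dataset and no evaluation function
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(quantization_config=quantization_config, save_directory="quantized_distilbert")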
When applying post-training quantization, an accuracy tolerance can also be specified, along with an adapted evaluation function, in order to find a quantized model meeting the specified constraint. This can be done for both dynamic and static quantization.
import evaluate
from optimum.intel import INCQuantizer
from datasets import load_dataset
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
from neural_compressor.config import AccuracyCriterion, TuningCriterion, PostTrainingQuantConfig
model_name = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
eval_dataset = load_dataset("squad", split="validation").select(range(64))
task_evaluator = evaluate.evaluator("question-answering")
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)
def eval_fn(model):
    qa_pipeline.model = model
    metrics = task_evaluator.compute(model_or_pipeline=qa_pipeline, data=eval_dataset, metric="squad")
    return metrics["f1"]
# Set the accepted accuracy loss to 5%
accuracy_criterion = AccuracyCriterion(tolerable_loss=0.05)
# Set the maximum number of trials to 10
tuning_criterion = TuningCriterion(max_trials=10)
quantization_config = PostTrainingQuantConfig(
approach="dynamic", accuracy_criterion=accuracy_criterion, tuning_criterion=tuning_criterion
)
quantizer = INCQuantizer.from_pretrained(model, eval_fn=eval_fn)
quantizer.quantize(quantization_config=quantization_config, save_directory="dynamic_quantization")
Static quantization
In the same way, we can apply static quantization, for which we also need to generate a calibration dataset in order to perform the calibration step.
from functools import partial
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The directory where the quantized model will be saved
save_dir = "static_quantization"
def preprocess_function(examples, tokenizer):
    return tokenizer(examples["sentence"], padding="max_length", max_length=128, truncation=True)
# Load the quantization configuration detailing the quantization we wish to apply
quantization_config = PostTrainingQuantConfig(approach="static")
quantizer = INCQuantizer.from_pretrained(model)
# Generate the calibration dataset needed for the calibration step
calibration_dataset = quantizer.get_calibration_dataset(
"glue",
dataset_config_name="sst2",
preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
num_samples=100,
dataset_split="train",
)
# Apply static quantization and save the resulting model
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory=save_dir,
)
Specifying a quantization recipe
For post-training quantization, the SmoothQuant methodology is available. Compared to other post-training static quantization methods, it usually improves model accuracy. It does so by migrating the quantization difficulty from activations to weights through a mathematically equivalent transformation.
- quantization_config = PostTrainingQuantConfig(approach="static")
+ recipes={"smooth_quant": True, "smooth_quant_args": {"alpha": 0.5, "folding": True}}
+ quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", recipes=recipes)
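Putting it together, a sketch of the static quantization flow above with the SmoothQuant recipe enabled might look as follows; model and calibration_dataset are assumed to be defined as in the previous example, and the alpha value comes from the diff above and typically needs tuning per model:

from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

# Enable the SmoothQuant recipe on top of static quantization (IPEX backend)
recipes = {"smooth_quant": True, "smooth_quant_args": {"alpha": 0.5, "folding": True}}
quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", recipes=recipes)
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,  # as generated in the static quantization example
    save_directory="smooth_quantized_model",
)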
For more details, please refer to the INC documentation and the list of models quantized with this method.
Distributed accuracy-aware tuning
One challenge of model quantization is finding the configuration that best balances accuracy and performance. Distributed tuning speeds up this time-consuming search by parallelizing it across multiple nodes, accelerating the tuning process with near-linear scaling.
To use distributed tuning, set quant_level to 1 and run it with mpirun.
- quantization_config = PostTrainingQuantConfig(approach="static")
+ quantization_config = PostTrainingQuantConfig(approach="static", quant_level=1)
mpirun -np <number_of_processes> <RUN_CMD>
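For illustration, assuming a hypothetical quantization script named run_quantization.py, a launch on 4 processes could look like:

mpirun -np 4 python run_quantization.py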
For more details, please refer to the INC documentation and the text classification example.
During training optimization
The INCTrainer class provides an API to combine different compression techniques, such as knowledge distillation, pruning and quantization, while training a model. The INCTrainer is very similar to the 🤗 Transformers Trainer and can replace it with minimal changes to your code.
Quantization
To apply quantization during training, you only need to create the appropriate configuration and pass it to the INCTrainer.
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, default_data_collator
- from transformers import Trainer
+ from optimum.intel import INCModelForSequenceClassification, INCTrainer
+ from neural_compressor import QuantizationAwareTrainingConfig
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(lambda examples: tokenizer(examples["sentence"], padding=True, max_length=128), batched=True)
metric = evaluate.load("glue", "sst2")
compute_metrics = lambda p: metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)
# The directory where the quantized model will be saved
save_dir = "quantized_model"
# The configuration detailing the quantization process
+ quantization_config = QuantizationAwareTrainingConfig()
- trainer = Trainer(
+ trainer = INCTrainer(
model=model,
+ quantization_config=quantization_config,
args=TrainingArguments(save_dir, num_train_epochs=1.0, do_train=True, do_eval=False),
train_dataset=dataset["train"].select(range(300)),
eval_dataset=dataset["validation"],
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
train_result = trainer.train()
metrics = trainer.evaluate()
trainer.save_model()
- model = AutoModelForSequenceClassification.from_pretrained(save_dir)
+ model = INCModelForSequenceClassification.from_pretrained(save_dir)
Pruning
In the same way, pruning can be applied by specifying a pruning configuration detailing the desired pruning process. To learn more about the different supported methodologies, you can refer to the Neural Compressor documentation. Currently, pruning is applied to linear and convolutional layers only, and not to other layers such as embeddings. It is important to mention that the pruning sparsity defined in the configuration is applied to these layers only, and therefore does not correspond to the global model sparsity. A sketch for verifying the resulting per-layer sparsity is shown after the example below.
- from transformers import Trainer
+ from optimum.intel import INCTrainer
+ from neural_compressor import WeightPruningConfig
# The configuration detailing the pruning process
+ pruning_config = WeightPruningConfig(
+ pruning_type="magnitude",
+ start_step=0,
+ end_step=15,
+ target_sparsity=0.2,
+ pruning_scope="local",
+ )
- trainer = Trainer(
+ trainer = INCTrainer(
model=model,
+ pruning_config=pruning_config,
args=TrainingArguments(save_dir, num_train_epochs=1.0, do_train=True, do_eval=False),
train_dataset=dataset["train"].select(range(300)),
eval_dataset=dataset["validation"],
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
train_result = trainer.train()
metrics = trainer.evaluate()
trainer.save_model()
model = AutoModelForSequenceClassification.from_pretrained(save_dir)
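Since the configured sparsity is per-layer rather than global, it can be useful to check what was actually reached. The following small sketch (not part of the original example) reports the fraction of zeroed weights in each linear layer of the pruned model:

import torch

# Report the per-layer sparsity of the pruned model's linear layers
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        weight = module.weight.detach()
        sparsity = (weight == 0).sum().item() / weight.numel()
        print(f"{name}: {sparsity:.2%} zeros")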
Knowledge distillation
Knowledge distillation can also be applied in the same manner. To learn more about the different supported methodologies, you can refer to the Neural Compressor documentation.
- from transformers import Trainer
+ from optimum.intel import INCTrainer
+ from neural_compressor import DistillationConfig
+ teacher_model_id = "textattack/bert-base-uncased-SST-2"
+ teacher_model = AutoModelForSequenceClassification.from_pretrained(teacher_model_id)
+ distillation_config = DistillationConfig(teacher_model=teacher_model)
- trainer = Trainer(
+ trainer = INCTrainer(
model=model,
+ distillation_config=distillation_config,
args=TrainingArguments(save_dir, num_train_epochs=1.0, do_train=True, do_eval=False),
train_dataset=dataset["train"].select(range(300)),
eval_dataset=dataset["validation"],
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
train_result = trainer.train()
metrics = trainer.evaluate()
trainer.save_model()
model = AutoModelForSequenceClassification.from_pretrained(save_dir)
Loading a quantized model
To load a quantized model hosted locally or on the 🤗 hub, you must instantiate your model using our INCModelForXxx classes.
from optimum.intel import INCModelForSequenceClassification
model_name = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
model = INCModelForSequenceClassification.from_pretrained(model_name)
You can find many more quantized models hosted by the Intel organization on the hub here.
Inference with Transformers pipelines
The quantized model can then easily be used to run inference with Transformers pipelines.
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe_cls = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = pipe_cls(text)
[{'label': 'NEGATIVE', 'score': 0.9880216121673584}]
Check out the examples directory for more sophisticated usage.