Generation strategies
A decoding strategy informs how a model should select the next generated token. There are many kinds of decoding strategies, and choosing an appropriate one has a significant impact on the quality of the generated text.
This guide will help you understand the different decoding strategies available in Transformers, and how and when to use them.
Greedy search
Greedy search is the default decoding strategy. It selects the most likely token at each step. Unless specified in GenerationConfig, this strategy generates a maximum of 20 tokens.
Greedy search works well for tasks with relatively short outputs. However, it breaks down when generating longer sequences because it begins to repeat itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("I look forward to", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to default length because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=20)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company that provides a suite of tools and services for building, deploying, and maintaining natural language processing'
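The 20-token limit comes from the default GenerationConfig. A minimal sketch of overriding it, reusing the model, tokenizer, and inputs from the example above (the value 50 is only illustrative):
from transformers import GenerationConfig

# Pass an explicit GenerationConfig to change the default generation length.
generation_config = GenerationConfig(max_new_tokens=50)
outputs = model.generate(**inputs, generation_config=generation_config)
tokenizer.batch_decode(outputs, skip_special_tokens=True)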
Contrastive search
Contrastive search is a decoding strategy that aims to reduce repetition even when generating longer sequences. This strategy compares how similar a generated token is to previous tokens and applies a penalty if it is more similar.
Enable contrastive search with the penalty_alpha and top_k parameters. penalty_alpha controls the penalty that is applied, and top_k is the number of most likely tokens to return.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to 100 because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=100, penalty_alpha=0.6, top_k=4)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company that provides a platform for building and deploying AI models.\nHugging Face is an open-source company that provides a platform for building and deploying AI models. The platform allows developers to build and deploy AI models, as well as collaborate with other developers.\nHugging Face was founded in 2019 by Thibault Wittemberg and Clément Delangue. The company is based in Paris, France.\nHugging Face has'
Beam search
Beam search keeps track of several generated sequences (beams) at each time step. After a certain number of steps, it selects the sequence with the highest overall probability. Unlike greedy search, this strategy can "look ahead" and pick a sequence with a higher overall probability even if its initial tokens have lower probabilities.
Check out the beam search visualizer to see how beam search works.
Enable beam search with the num_beams parameter (it should be greater than 1, otherwise it is equivalent to greedy search).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to 50 because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=2)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
"['Hugging Face is an open-source company that develops and maintains the Hugging Face platform, which is a collection of tools and libraries for building and deploying natural language processing (NLP) models. Hugging Face was founded in 2018 by Thomas Wolf']"
Diverse beam search
Diverse beam search is a variant of beam search that produces more diverse output candidates to choose from. This strategy measures the dissimilarity of sequences and applies a penalty if the sequences are too similar. To avoid high computation costs, the number of beams is divided into groups.
Enable diverse beam search with the num_beams, num_beam_groups, and diversity_penalty parameters (num_beams should be divisible by num_beam_groups).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to 50 because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=6, num_beam_groups=3, diversity_penalty=1.0, do_sample=False)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company 🤗\nWe are an open-source company. Our mission is to democratize AI and make it accessible to everyone. We believe that AI should be used for the benefit of humanity, not for the benefit of a'
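To actually look at several of the diverse candidates instead of only the best one, you can also request multiple sequences with num_return_sequences (it must not exceed num_beams). A minimal sketch reusing the model, tokenizer, and inputs from the example above; the value 3 is only illustrative:
# Return one candidate per beam group to make the diversity visible.
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=6, num_beam_groups=3, num_return_sequences=3, diversity_penalty=1.0, do_sample=False)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)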
Multinomial sampling
Search methods select the most likely token. Sampling, or multinomial sampling, randomly selects a token based on the probability distribution over the model's entire vocabulary. This means every token with a non-zero probability has a chance of being selected. Sampling strategies reduce repetition and can generate more creative and diverse outputs.
Enable multinomial sampling with do_sample=True and num_beams=1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to 50 because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, num_beams=1)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company 🤗\nWe are open-source and believe that open-source is the best way to build technology. Our mission is to make AI accessible to everyone, and we believe that open-source is the best way to achieve that.'
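Because the token choice is random, the output changes from run to run. If you need a reproducible result, you can seed the random number generators first, for example with set_seed (a minimal sketch reusing the objects from the example above):
from transformers import set_seed

set_seed(0)  # fix the random seed so the sampled output is reproducible
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, num_beams=1)
tokenizer.batch_decode(outputs, skip_special_tokens=True)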
Beam search multinomial sampling
This decoding strategy is a combination of beam search and multinomial sampling. It generates multiple beams and uses a sampling strategy for each of them.
Enable beam search multinomial sampling by setting num_beams to a value greater than 1 and do_sample=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
# explicitly set to 50 because Llama2 generation length is 4096
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, num_beams=4)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company 100% dedicated to making AI more accessible. We believe that AI should be available to everyone, and we’re working hard to make that a reality.\nWe’re a team of passionate engineers, designers,'
Speculative decoding
Speculative decoding, or assisted decoding, is not a search or sampling strategy. Instead, it adds a second, smaller model that generates candidate tokens. The main model verifies the candidate tokens in a single forward pass, which speeds up the decoding process overall. This method is especially useful for LLMs, where generating tokens can be costlier and slower. Refer to the speculative decoding guide to learn more.
Currently, only greedy search and multinomial sampling are supported with speculative decoding. Batched inputs are not supported either.
Enable speculative decoding with the assistant_model parameter. You will notice the biggest speed-up with an assistant model that is much smaller than the main model. Add do_sample=True to enable token validation with resampling.
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
assistant_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt")
outputs = model.generate(**inputs, assistant_model=assistant_model)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company that provides a platform for developers to build and deploy machine'
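As noted above, adding do_sample=True switches the candidate validation to resampling. A minimal sketch reusing the model, assistant model, tokenizer, and inputs from the example above (the temperature value is only illustrative):
# Validate the assistant's candidate tokens with sampling instead of greedy matching.
outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5)
tokenizer.batch_decode(outputs, skip_special_tokens=True)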
Pipeline also supports speculative decoding with the assistant_model parameter.
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="meta-llama/Llama-3.1-8B",
assistant_model="meta-llama/Llama-3.2-1B",
torch_dtype=torch.bfloat16
)
pipe_output = pipe("Once upon a time, ", max_new_tokens=50, do_sample=False)
pipe_output[0]["generated_text"]
Prompt lookup decoding
Prompt lookup decoding is a variant of speculative decoding that uses overlapping n-grams as the candidate tokens. It works well for input-grounded tasks such as summarization. Refer to the prompt lookup decoding guide to learn more.
Enable prompt lookup decoding with the prompt_lookup_num_tokens parameter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B", torch_dtype=torch.float16).to("cuda")
assistant_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M", torch_dtype=torch.float16).to("cuda")
inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, assistant_model=assistant_model, max_new_tokens=20, prompt_lookup_num_tokens=5)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
'Hugging Face is an open-source company that provides a platform for developers to build and deploy machine learning models. It offers a variety of tools'
Self-speculative decoding
Early exiting uses the earlier hidden states from the language modeling head as inputs, effectively skipping layers to yield a lower-quality output. The lower-quality output is used as the assistant output, and self-speculation is applied to fix the output using the remaining layers. The final result generated by this self-speculative method is the same as (or has the same distribution as) the original model's generation.
The assistant model is part of the target model as well, so the caches and weights can be shared, resulting in lower memory requirements.
For a model trained with early exit, pass assistant_early_exit to generate().
from transformers import AutoModelForCausalLM, AutoTokenizer
prompt = "Alice and Bob"
checkpoint = "facebook/layerskip-llama3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(checkpoint)
outputs = model.generate(**inputs, assistant_early_exit=4, do_sample=False, max_new_tokens=20)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
Universal assisted decoding
Universal assisted decoding (UAD) enables the main and assistant models to use different tokenizers. The main model's input tokens are re-encoded into assistant model tokens. Candidate tokens are generated in the assistant encoding and then re-encoded into candidate tokens for the main model. The candidate tokens are verified as explained in speculative decoding.
Re-encoding involves decoding the token ids into text and encoding the text with the other tokenizer. To prevent tokenization discrepancies during re-encoding, UAD finds the longest common subsequence between the source and target encodings to ensure the new tokens include the correct prompt suffix.
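The snippet below is only a rough sketch of that re-encoding idea (decode ids to text with one tokenizer, then encode the text with the other); the actual logic inside generate() additionally aligns the two encodings with the longest common subsequence, which is omitted here. The checkpoints match the example that follows:
from transformers import AutoTokenizer

main_tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
assistant_tokenizer = AutoTokenizer.from_pretrained("double7/vicuna-68m")

# Token ids from the main model's tokenizer...
main_ids = main_tokenizer("Alice and Bob are sitting in a bar", return_tensors="pt").input_ids
# ...are decoded back to text and re-encoded with the assistant's tokenizer.
text = main_tokenizer.batch_decode(main_ids, skip_special_tokens=True)[0]
assistant_ids = assistant_tokenizer(text, return_tensors="pt").input_ids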
Add the tokenizer and assistant_tokenizer parameters to generate() to enable UAD.
from transformers import AutoModelForCausalLM, AutoTokenizer
prompt = "Alice and Bob"
assistant_tokenizer = AutoTokenizer.from_pretrained("double7/vicuna-68m")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
inputs = tokenizer(prompt, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")
assistant_model = AutoModelForCausalLM.from_pretrained("double7/vicuna-68m")
outputs = model.generate(**inputs, assistant_model=assistant_model, tokenizer=tokenizer, assistant_tokenizer=assistant_tokenizer)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
DoLa
Decoding by Contrasting Layers (DoLa) is a contrastive decoding strategy for improving factuality and reducing hallucination. This strategy works by contrasting the logit differences between the final layer and earlier layers, so that factual knowledge localized to particular layers is amplified. DoLa is not recommended for smaller models such as GPT-2.
Enable DoLa with the following parameters.
dola_layers are the candidate layers to be contrasted with the final layer. It can be a string (low or high) to contrast the lower or higher parts of the layers. high is recommended for short-answer tasks like TruthfulQA. low is recommended for long-answer reasoning tasks like GSM8K, StrategyQA, FACTOR, and VicunaQA. When a model has tied word embeddings, layer 0 is skipped and contrasting begins from layer 2.
It can also be a list of integers representing layer indices between 0 and the total number of layers. Layer 0 is the word embedding, 1 is the first transformer layer, and so on. Refer to the table below for the layer index ranges depending on the number of model layers.
Number of layers | low | high
> 40 | (0, 20, 2) | (N - 20, N, 2)
<= 40 | range(0, N // 2, 2) | range(N // 2, N, 2)
repetition_penalty reduces repetition, and it is recommended to set it to 1.2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B", torch_dtype=torch.float16).to("cuda")
inputs = tokenizer("What is the highest peak in the world??", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50, dola_layers="high", do_sample=False)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
" Mount EverestMount Everest, called Himalaya in Nepali, is the world's highest peak, lying almost 9.5 kilometers above the sea level and the tallest mountain from 19,036.91 ft. The mountain was"
Resources
Read the How to generate text: using different decoding methods for language generation with Transformers blog post to learn how the common decoding strategies work.