Putting it all together


In the previous few sections, we've been trying our best to do most of the work by hand. We explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks.

However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we'll dive into here. When you call your tokenizer directly on a sentence, you get back inputs that are ready to pass through your model:

from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

Here, the model_inputs variable contains everything that's necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those output by the tokenizer object.
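As a quick sanity check, you can inspect the keys of the returned object (a minimal sketch reusing the tokenizer defined above):

print(list(model_inputs.keys()))
# ['input_ids', 'attention_mask']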

As we'll see in some examples below, this method is very powerful. First, it can tokenize a single sequence:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

It also handles multiple sequences at a time, with no change in the API:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

model_inputs = tokenizer(sequences)
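In this case model_inputs["input_ids"] is a list of lists, one per sequence. A quick check (reusing the objects from above):

print(len(model_inputs["input_ids"]))
# 2: one list of token IDs per input sequence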

It can pad according to several objectives:

# Will pad the sequences up to the maximum sequence length
model_inputs = tokenizer(sequences, padding="longest")

# Will pad the sequences up to the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")

# Will pad the sequences up to the specified max length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
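To see what each strategy does, one option is to compare the resulting lengths. This is a minimal sketch reusing the tokenizer and sequences from above; the exact numbers depend on the tokenizer:

longest = tokenizer(sequences, padding="longest")
print([len(ids) for ids in longest["input_ids"]])
# [16, 16]: the shorter sequence is padded to match the longer one

fixed = tokenizer(sequences, padding="max_length", max_length=8)
print([len(ids) for ids in fixed["input_ids"]])
# [16, 8]: padding alone never shortens a sequence longer than max_length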

It can also truncate sequences:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Will truncate the sequences that are longer than the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, truncation=True)

# Will truncate the sequences that are longer than the specified max length
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
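A quick check of the resulting lengths (again a sketch reusing the tokenizer and sequences from above):

trunc = tokenizer(sequences, max_length=8, truncation=True)
print([len(ids) for ids in trunc["input_ids"]])
# [8, 6]: only the sequence longer than max_length is cut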

The tokenizer object can handle the conversion to specific framework tensors, which can then be sent directly to the model. For example, in the following code sample we prompt the tokenizer to return tensors from the different frameworks: "pt" returns PyTorch tensors, "tf" returns TensorFlow tensors, and "np" returns NumPy arrays:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Returns PyTorch tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")

# Returns TensorFlow tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="tf")

# Returns NumPy arrays
model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
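Note that padding=True matters here: tensors must be rectangular, so sequences of different lengths cannot be converted to a tensor without padding or truncation. The returned tensors are batched; as a quick shape check on the PyTorch version (a sketch assuming the setup above):

pt_inputs = tokenizer(sequences, padding=True, return_tensors="pt")
print(pt_inputs["input_ids"].shape)
# torch.Size([2, 16]): 2 sequences, padded to the longest one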

Special tokens

If we take a look at the input IDs returned by the tokenizer, we'll see they are slightly different from what we had earlier:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)
print(model_inputs["input_ids"])

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]

One token ID was added at the beginning, and one at the end. Let's decode the two sequences of IDs above to see what this is about:

print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]"
"i've been waiting for a huggingface course my whole life."

The tokenizer added the special word [CLS] at the beginning and the special word [SEP] at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don't add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.
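Should you need to inspect them, every tokenizer exposes its special tokens as attributes. A minimal sketch for this DistilBERT checkpoint:

print(tokenizer.cls_token, tokenizer.sep_token)
# [CLS] [SEP]
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# 101 102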

Wrapping up: From tokenizer to model

Now that we've seen all the individual steps the tokenizer object goes through when applied to texts, let's see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors with its main API:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
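From here, output.logits holds the model's raw scores. As a follow-up sketch (reusing the model loaded above, with the label mapping read from its config rather than hard-coded), you can turn them into probabilities with a softmax:

predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)
print(model.config.id2label)
# {0: 'NEGATIVE', 1: 'POSITIVE'}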