Putting it all together
In the previous sections, we've been doing most of the work by hand: we explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks.
However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we'll dive into here. When you call your `tokenizer` directly on a sentence, you get back inputs that are ready to pass through your model:
from transformers import AutoTokenizer
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
Here, the `model_inputs` variable contains everything that's necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those output by the `tokenizer` object.
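A quick way to see this is simply to print the result; the values below are what this checkpoint produces for the sentence above (the same IDs reappear in the section on special tokens below):
print(model_inputs)
# {'input_ids': [101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102],
#  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}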
As we'll see in the examples below, this method is very powerful. First, it can tokenize a single sequence:
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
It can also handle multiple sequences at a time, with no change in the API:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
model_inputs = tokenizer(sequences)
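Without any padding arguments, each sequence keeps its own length, so `input_ids` is a plain list of two lists of different sizes. A minimal sanity check (the lengths shown are what we'd expect for these two sentences with this checkpoint):
print([len(ids) for ids in model_inputs["input_ids"]])
# e.g. [16, 6]: the lists have different lengths, so they can't be
# turned into a rectangular tensor until we pad them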
It can pad according to several objectives:
# Will pad the sequences up to the maximum sequence length
model_inputs = tokenizer(sequences, padding="longest")
# Will pad the sequences up to the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")
# Will pad the sequences up to the specified max length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
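To see what padding actually does, we can inspect the attention mask: the pad token (ID 0 for this checkpoint) is masked out with 0s so the model ignores it. Note also that padding to a `max_length` shorter than a sequence does not cut that sequence down; truncation is a separate option, covered next. A minimal sketch (mask values shown are the expected ones for the two example sequences):
model_inputs = tokenizer(sequences, padding="longest")
print(model_inputs["attention_mask"])
# [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
#  [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
# The shorter sequence is filled with the pad token and masked out.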
It can also truncate sequences:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
# Will truncate the sequences that are longer than the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, truncation=True)
# Will truncate the sequences that are longer than the specified max length
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
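Checking the resulting lengths confirms the behavior: only sequences longer than the limit are cut, and the special tokens count toward it (the exact numbers below assume the two example sequences above):
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
print([len(ids) for ids in model_inputs["input_ids"]])
# [8, 6]: the first sequence is truncated to 8 tokens (special tokens
# included); the second was already shorter than the limit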
The `tokenizer` object can handle the conversion to specific framework tensors, which can then be directly sent to the model. For example, in the following code sample we ask the tokenizer to return tensors from different frameworks: `"pt"` returns PyTorch tensors, `"tf"` returns TensorFlow tensors, and `"np"` returns NumPy arrays:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
# Returns PyTorch tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")
# Returns TensorFlow tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="tf")
# Returns NumPy arrays
model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
Special tokens
If we take a look at the input IDs returned by the tokenizer, we'll see they are a tiny bit different from what we had earlier:
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
print(model_inputs["input_ids"])
tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]
One token ID was added at the beginning, and one at the end. Let's decode the two sequences of IDs above to see what this is all about:
print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]"
"i've been waiting for a huggingface course my whole life."
The tokenizer added the special word `[CLS]` at the beginning and the special word `[SEP]` at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don't add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.
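You don't have to memorize which special tokens a given checkpoint uses; the tokenizer exposes them as attributes. A quick way to inspect them (the mapping shown is what this checkpoint typically reports; key order may differ):
print(tokenizer.cls_token, tokenizer.sep_token)
# [CLS] [SEP]
print(tokenizer.special_tokens_map)
# {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]',
#  'cls_token': '[CLS]', 'mask_token': '[MASK]'}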
Wrapping up: From tokenizer to model
Now that we've seen all the individual steps the `tokenizer` object goes through when applied to texts, let's see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors with its main API:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
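As in section 2, the model returns one pair of logits per input sequence, which we can turn into probabilities with a softmax to verify that everything went through (this checkpoint is fine-tuned on SST-2, so the two columns correspond to the negative and positive classes):
print(output.logits.shape)
# torch.Size([2, 2]): one (negative, positive) score pair per sequence
predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)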