This model was released on 2020-10-21 and added to Hugging Face Transformers on 2021-03-06.
M2M100
Overview
The M2M100 model was proposed in Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.
The abstract from the paper is the following:
Existing work in translation demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-centric, training only on data that was translated from or into English. While this is supported by large sources of training data, it does not reflect translation needs worldwide. In this work, we create a true many-to-many multilingual translation model that can translate directly between any pair of 100 languages. We build and open-source a supervised training dataset covering thousands of language directions, created through large-scale mining. We then explore how to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters to create high-quality models. Our focus on non-English-centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems at WMT. We open-source our scripts so that others may reproduce the data, evaluation, and the final M2M-100 model.
This model was contributed by valhalla.
Usage tips and examples
M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. Since the model is multilingual, it expects the input sequences in a certain format: a special language id token is used as a prefix in both the source and target text. The text format is [lang_code] X [eos], where lang_code is the source language id for the source text and the target language id for the target text, and X is the source or target text.
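The prefix-and-eos layout above can be sketched in plain Python. This is illustrative only: in practice M2M100Tokenizer builds these sequences for you, and the `__en__`-style language tokens used here are an assumption for illustration, not guaranteed tokenizer internals.

```python
# Illustrative sketch of the [lang_code] X [eos] format described above.
# The "__en__"-style language token is a hypothetical placeholder; use
# M2M100Tokenizer for real preprocessing.
def format_sequence(lang_code_token, text_tokens, eos="</s>"):
    # The language id token prefixes the text, and eos closes the sequence.
    return [lang_code_token, *text_tokens, eos]

print(format_sequence("__en__", ["Life", "is", "like", "chocolate"]))
```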
M2M100Tokenizer depends on sentencepiece, so be sure to install it before running the examples. To install sentencepiece, run pip install sentencepiece.
Supervised training
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss  # forward pass

Generation
M2M100 uses the eos_token_id as the decoder_start_token_id for generation, with the target language id forced as the first generated token. To force the target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method. The following example shows how to translate between Hindi and French, and between Chinese and English, using the facebook/m2m100_418M checkpoint.
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
>>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
>>> chinese_text = "生活就像一盒巧克力。"
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
>>> # translate Hindi to French
>>> tokenizer.src_lang = "hi"
>>> encoded_hi = tokenizer(hi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."
>>> # translate Chinese to English
>>> tokenizer.src_lang = "zh"
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Life is like a box of chocolate."

Resources
M2M100Config
class transformers.M2M100Config
< source >( vocab_size = 128112 max_position_embeddings = 1024 encoder_layers = 12 encoder_ffn_dim = 4096 encoder_attention_heads = 16 decoder_layers = 12 decoder_ffn_dim = 4096 decoder_attention_heads = 16 encoder_layerdrop = 0.05 decoder_layerdrop = 0.05 use_cache = True is_encoder_decoder = True activation_function = 'relu' d_model = 1024 dropout = 0.1 attention_dropout = 0.1 activation_dropout = 0.0 init_std = 0.02 decoder_start_token_id = 2 scale_embedding = True pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 tie_word_embeddings = True **kwargs )
Parameters

- vocab_size (`int`, *optional*, defaults to 128112) — Vocabulary size of the M2M100 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling M2M100Model.
- d_model (`int`, *optional*, defaults to 1024) — Dimensionality of the layers and the pooler layer.
- encoder_layers (`int`, *optional*, defaults to 12) — Number of encoder layers.
- decoder_layers (`int`, *optional*, defaults to 12) — Number of decoder layers.
- encoder_attention_heads (`int`, *optional*, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
- decoder_attention_heads (`int`, *optional*, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
- decoder_ffn_dim (`int`, *optional*, defaults to 4096) — Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder.
- encoder_ffn_dim (`int`, *optional*, defaults to 4096) — Dimensionality of the "intermediate" (often named feed-forward) layer in the encoder.
- activation_function (`str` or `function`, *optional*, defaults to `"relu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- dropout (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- activation_dropout (`float`, *optional*, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
- classifier_dropout (`float`, *optional*, defaults to 0.0) — The dropout ratio for classifiers.
- max_position_embeddings (`int`, *optional*, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048).
- init_std (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- encoder_layerdrop (`float`, *optional*, defaults to 0.05) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more details.
- decoder_layerdrop (`float`, *optional*, defaults to 0.05) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more details.
- use_cache (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a M2M100Model. It is used to instantiate an M2M100 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the M2M100 facebook/m2m100_418M architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example
>>> from transformers import M2M100Config, M2M100Model
>>> # Initializing a M2M100 facebook/m2m100_418M style configuration
>>> configuration = M2M100Config()
>>> # Initializing a model (with random weights) from the facebook/m2m100_418M style configuration
>>> model = M2M100Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config

M2M100Tokenizer
class transformers.M2M100Tokenizer
< source >( vocab_file spm_file src_lang = None tgt_lang = None bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' pad_token = '<pad>' unk_token = '<unk>' language_codes = 'm2m100' sp_model_kwargs: dict[str, typing.Any] | None = None num_madeup_words = 8 **kwargs )
Parameters

- vocab_file (`str`) — Path to the vocabulary file.
- spm_file (`str`) — Path to the SentencePiece file (generally has a .spm extension) that contains the vocabulary.
- src_lang (`str`, *optional*) — A string representing the source language.
- tgt_lang (`str`, *optional*) — A string representing the target language.
- eos_token (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token.
- sep_token (`str`, *optional*, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- unk_token (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary will be converted to this token.
- pad_token (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.
- language_codes (`str`, *optional*, defaults to `"m2m100"`) — What language codes to use. Should be one of `"m2m100"` or `"wmt21"`.
- sp_model_kwargs (`dict`, *optional*) — Will be passed to the `SentencePieceProcessor.__init__()` method. The Python wrapper for SentencePiece can be used, among other things, to set:
  - `enable_sampling`: Enable subword regularization.
  - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. `nbest_size = {0,1}`: No sampling is performed. `nbest_size > 1`: samples from the nbest_size results. `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
  - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
Construct an M2M100 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
Example
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro")
>>> src_text = " UN Chief Says There Is No Military Solution in Syria"
>>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
>>> outputs = model(**model_inputs)  # should work

build_inputs_with_special_tokens
< source >( token_ids_0: list token_ids_1: list[int] | None = None ) → list[int]
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An MBART sequence has the following format, where X represents the sequence:

- input_ids (for encoder): X [eos, src_lang_code]
- decoder_input_ids (for decoder): X [eos, tgt_lang_code]

BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.
get_special_tokens_mask
< source >( token_ids_0: list token_ids_1: list[int] | None = None already_has_special_tokens: bool = False ) → list[int]
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.
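As a rough illustration of what such a mask looks like for a single M2M100 sequence (a hypothetical helper, not the library method): with one language-id token at one end and one eos at the other, the two special positions get 1 and the text tokens get 0:

```python
# Hypothetical sketch: a special-tokens mask for one sequence with a
# language-id token and an eos token. 1 marks a special token, 0 a text token.
def sketch_special_tokens_mask(num_text_tokens):
    return [1] + [0] * num_text_tokens + [1]

print(sketch_special_tokens_mask(3))  # [1, 0, 0, 0, 1]
```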
create_token_type_ids_from_sequences
< source >( token_ids_0: list token_ids_1: list[int] | None = None ) → list[int]
Create a mask from the two sequences passed to be used in a sequence-pair classification task.
This method dynamically builds token type IDs based on the tokenizer's configuration attributes:

- token_type_ids_pattern: the pattern to use ("all_zeros" or "bert_style")
- token_type_ids_include_special_tokens: whether to count special tokens in the length calculation

Example
# All zeros pattern (default, used by RoBERTa, BART, etc.)
tokenizer.token_type_ids_pattern = "all_zeros"
# Returns: [0, 0, 0, ...] for both sequences
# BERT-style pattern (first sequence gets 0s, second gets 1s)
tokenizer.token_type_ids_pattern = "bert_style"
# Returns: [0, 0, 0, ..., 1, 1, 1, ...] for sequence pairs

M2M100Model
class transformers.M2M100Model
< source >( config: M2M100Config )
Parameters

- config (M2M100Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare M2M100 Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None decoder_input_ids: torch.LongTensor | None = None decoder_attention_mask: torch.LongTensor | None = None encoder_outputs: tuple[tuple[torch.FloatTensor]] | None = None past_key_values: transformers.cache_utils.Cache | None = None inputs_embeds: torch.FloatTensor | None = None decoder_inputs_embeds: torch.FloatTensor | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None cache_position: torch.Tensor | None = None **kwargs ) → transformers.modeling_outputs.Seq2SeqModelOutput 或 tuple(torch.FloatTensor)
Parameters

- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. M2M100 uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- encoder_outputs (`tuple`, *optional*) — Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- past_key_values (`~cache_utils.Cache`, *optional*) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values` returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. Only a Cache instance is allowed as input, see our kv cache guide. If no `past_key_values` are passed, DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
- use_cache (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- output_attentions (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- output_hidden_states (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- return_dict (`bool`, *optional*) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (`torch.Tensor` of shape `(sequence_length)`, *optional*) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
Returns

transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (M2M100Config) and inputs.

- last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- past_key_values (`EncoderDecoderCache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — It is an EncoderDecoderCache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
- decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
- encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The M2M100Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.
M2M100ForConditionalGeneration
class transformers.M2M100ForConditionalGeneration
< source >( config: M2M100Config )
Parameters
- config (M2M100Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The M2M100 Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None decoder_input_ids: torch.LongTensor | None = None decoder_attention_mask: torch.LongTensor | None = None encoder_outputs: tuple[tuple[torch.FloatTensor]] | None = None past_key_values: transformers.cache_utils.Cache | None = None inputs_embeds: torch.FloatTensor | None = None decoder_inputs_embeds: torch.FloatTensor | None = None labels: torch.LongTensor | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None cache_position: torch.Tensor | None = None **kwargs ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters

- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. M2M100 uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- encoder_outputs (`tuple`, *optional*) — Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- past_key_values (`~cache_utils.Cache`, *optional*) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values` returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. Only a Cache instance is allowed as input, see our kv cache guide. If no `past_key_values` are passed, DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- use_cache (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- output_attentions (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- output_hidden_states (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- return_dict (`bool`, *optional*) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (`torch.Tensor` of shape `(sequence_length)`, *optional*) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
Returns

transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (M2M100Config) and inputs.

- loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss.
- logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (`EncoderDecoderCache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — It is an EncoderDecoderCache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The M2M100ForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Translation example
>>> from transformers import AutoTokenizer, M2M100ForConditionalGeneration
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_418M")
>>> text_to_translate = "Life is like a box of chocolates"
>>> model_inputs = tokenizer(text_to_translate, return_tensors="pt")
>>> # translate to French
>>> gen_tokens = model.generate(**model_inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))

Using Flash Attention 2
Flash Attention 2 is a faster, optimized version of the attention score computation which relies on cuda kernels.
Installation
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation.
Next, install the latest version of Flash Attention 2:
pip install -U flash-attn --no-build-isolation
Usage
To load a model using Flash Attention 2, pass the argument attn_implementation="flash_attention_2" to .from_pretrained. You can use either torch.float16 or torch.bfloat16 precision.
>>> import torch
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M", dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto").eval()
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
>>> # translate Hindi to French
>>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
>>> tokenizer.src_lang = "hi"
>>> encoded_hi = tokenizer(hi_text, return_tensors="pt").to(model.device)
>>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."

Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation and Flash Attention 2.

Using Scaled Dot Product Attention (SDPA)
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of torch.nn.functional. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the official documentation or the GPU Inference page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
import torch
from transformers import M2M100ForConditionalGeneration
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M", dtype=torch.float16, attn_implementation="sdpa")
...

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).