
This model was released on 2020-06-05 and added to Hugging Face Transformers on 2021-02-19.

PyTorch

DeBERTa-v2

DeBERTa-v2 improves on the original DeBERTa architecture with a SentencePiece-based tokenizer and a new vocabulary size of 128K. It also adds an extra convolution layer within the first transformer layer to better learn the local dependencies of input tokens. Finally, the position projection and content projection matrices are shared in the attention layer to reduce the number of parameters.
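
These changes are reflected directly in the model configuration. The snippet below is a minimal sketch that loads the configuration of a released checkpoint and prints a few of the values mentioned above; the exact numbers depend on the checkpoint you choose.

from transformers import DebertaV2Config

# Load the configuration shipped with a released DeBERTa-v2 checkpoint.
config = DebertaV2Config.from_pretrained("microsoft/deberta-v2-xlarge")

print(config.vocab_size)         # 128100 entries in the SentencePiece vocabulary
print(config.hidden_size)        # 1536
print(config.num_hidden_layers)  # 24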

You can find all the original DeBERTa-v2 checkpoints under the Microsoft organization.

This model was contributed by Pengcheng He.

Click on the DeBERTa-v2 models in the right sidebar for more examples of how to apply DeBERTa-v2 to different language tasks.

The example below demonstrates how to classify text with Pipeline or the AutoModel class.

import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-classification",
    model="microsoft/deberta-v2-xlarge-mnli",
    device=0,
    dtype=torch.float16
)
result = pipeline("DeBERTa-v2 is great at understanding context!")
print(result)

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes quantization to quantize the weights to 4-bits only.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/deberta-v2-xlarge-mnli"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    dtype="float16"
)

inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")

DebertaV2Config

class transformers.DebertaV2Config


( vocab_size = 128100 hidden_size = 1536 num_hidden_layers = 24 num_attention_heads = 24 intermediate_size = 6144 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 0 initializer_range = 0.02 layer_norm_eps = 1e-07 relative_attention = False max_relative_positions = -1 pad_token_id = 0 bos_token_id = None eos_token_id = None position_biased_input = True pos_att_type = None pooler_dropout = 0 pooler_hidden_act = 'gelu' legacy = True tie_word_embeddings = True **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 128100) — Vocabulary size of the DeBERTa-v2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling DebertaV2Model.
  • hidden_size (int, optional, defaults to 1536) — Dimensionality of the encoder layers and the pooler layer.
  • num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 24) — Number of attention heads for each attention layer in the Transformer encoder.
  • intermediate_size (int, optional, defaults to 6144) — Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
  • hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If a string, "gelu", "relu", "silu", "tanh", "gelu_fast", "mish", "linear", "sigmoid" and "gelu_new" are supported.
  • hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
  • max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048).
  • type_vocab_size (int, optional, defaults to 0) — The vocabulary size of the token_type_ids passed when calling DebertaModel.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated normal initializer for initializing all weight matrices.
  • layer_norm_eps (float, optional, defaults to 1e-7) — The epsilon used by the layer normalization layers.
  • relative_attention (bool, optional, defaults to False) — Whether to use relative position encoding.
  • max_relative_positions (int, optional, defaults to -1) — The range of relative positions [-max_position_embeddings, max_position_embeddings]. Use the same value as max_position_embeddings.
  • pad_token_id (int, optional, defaults to 0) — The value used to pad input_ids.
  • position_biased_input (bool, optional, defaults to True) — Whether to add absolute position embeddings to the content embeddings.
  • pos_att_type (list[str], optional) — The type of relative position attention. It can be a combination of ["p2c", "c2p"], e.g. ["p2c"] or ["p2c", "c2p"].
  • legacy (bool, optional, defaults to True) — Whether or not the model should use the legacy LegacyDebertaOnlyMLMHead, which does not work properly for mask infilling tasks.

This is the configuration class to store the configuration of a DebertaV2Model. It is used to instantiate a DeBERTa-v2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa microsoft/deberta-v2-xlarge architecture.

The configuration object inherits from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Example

>>> from transformers import DebertaV2Config, DebertaV2Model

>>> # Initializing a DeBERTa-v2 microsoft/deberta-v2-xlarge style configuration
>>> configuration = DebertaV2Config()

>>> # Initializing a model (with random weights) from the microsoft/deberta-v2-xlarge style configuration
>>> model = DebertaV2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

DebertaV2Tokenizer

class transformers.DebertaV2Tokenizer


( vocab: str | dict | list | None = None do_lower_case = False split_by_punct = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' add_prefix_space = True unk_id = 1 **kwargs )

Parameters

  • vocab_file (str, optional) — Path to the vocabulary file (SentencePiece model file). Not used directly but kept for compatibility.
  • vocab (str, dict or list, optional) — List of tuples (piece, score) for the vocabulary.
  • precompiled_charsmap (bytes, optional) — Precompiled character map for normalization.
  • do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing.
  • split_by_punct (bool, optional, defaults to False) — Whether to split by punctuation.
  • bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token.
  • eos_token (str, optional, defaults to "[SEP]") — The end of sequence token.
  • unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
  • pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths.
  • cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
  • mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
  • add_prefix_space (bool, optional, defaults to True) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word.
  • unk_id (int, optional, defaults to index of unk_token in vocab) — The ID of the unknown token in the vocabulary.

Construct a DeBERTa-v2 tokenizer (backed by HuggingFace’s tokenizers library). Based on Unigram tokenization.

This tokenizer inherits from TokenizersBackend, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
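
As a quick orientation, the snippet below is a minimal sketch of typical usage: it loads the tokenizer from a released checkpoint and encodes a sentence pair. The exact token IDs depend on the checkpoint's SentencePiece model.

>>> from transformers import DebertaV2Tokenizer

>>> tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> encoding = tokenizer("What is DeBERTa?", "DeBERTa is a BERT variant.")

>>> # The encoding typically contains input_ids, token_type_ids and attention_mask
>>> tokenizer.decode(encoding["input_ids"])
...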

get_special_tokens_mask


( token_ids_0: list[int] token_ids_1: list[int] | None = None already_has_special_tokens: bool = False ) A list of integers in the range [0, 1]

Parameters

  • token_ids_0 — List of IDs for the (possibly already formatted) sequence.
  • token_ids_1 — Unused when already_has_special_tokens=True. Must be None in that case.
  • already_has_special_tokens — Whether the sequence is already formatted with special tokens.

Returns

A list of integers in the range [0, 1]

1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added.

For fast tokenizers, data collators call this with already_has_special_tokens=True to build a mask over an already-formatted sequence. In that case, we compute the mask by checking membership in all_special_ids.
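
Continuing with the tokenizer loaded above, the sketch below illustrates that call pattern; the exact mask depends on the input.

>>> ids = tokenizer("Hello world")["input_ids"]
>>> # The sequence already contains [CLS] and [SEP], so request the mask over the formatted ids
>>> tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
...
>>> # 1 marks special tokens such as [CLS]/[SEP], 0 marks ordinary sequence tokens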

save_vocabulary


( save_directory: str filename_prefix: str | None = None )

DebertaV2TokenizerFast

class transformers.DebertaV2TokenizerFast


( vocab: str | dict | list | None = None do_lower_case = False split_by_punct = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' add_prefix_space = True unk_id = 1 **kwargs )

Parameters

  • vocab_file (str, optional) — Path to the vocabulary file (SentencePiece model file). Not used directly but kept for compatibility.
  • vocab (str, dict or list, optional) — List of tuples (piece, score) for the vocabulary.
  • precompiled_charsmap (bytes, optional) — Precompiled character map for normalization.
  • do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing.
  • split_by_punct (bool, optional, defaults to False) — Whether to split by punctuation.
  • bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token.
  • eos_token (str, optional, defaults to "[SEP]") — The end of sequence token.
  • unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
  • pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths.
  • cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
  • mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
  • add_prefix_space (bool, optional, defaults to True) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word.
  • unk_id (int, optional, defaults to index of unk_token in vocab) — The ID of the unknown token in the vocabulary.

Construct a DeBERTa-v2 tokenizer (backed by HuggingFace’s tokenizers library). Based on Unigram tokenization.

This tokenizer inherits from TokenizersBackend, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
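
A minimal sketch of loading the fast tokenizer, assuming the same checkpoint as above; AutoTokenizer resolves to the fast implementation when one is available.

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge", use_fast=True)
>>> tokenizer("DeBERTa-v2 uses a SentencePiece-based tokenizer.")["input_ids"]
...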

DebertaV2Model

class transformers.DebertaV2Model


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Deberta V2 Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2Model forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
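
The bare model returns hidden states rather than task-specific predictions, so a typical use is feature extraction. The snippet below is a minimal sketch under that assumption: it encodes a sentence and inspects the shape of the final hidden states.

>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2Model

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
...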

DebertaV2PreTrainedModel

class transformers.DebertaV2PreTrainedModel


( config: PreTrainedConfig *inputs **kwargs )

Parameters

  • config (PreTrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

_forward_unimplemented


( *input: typing.Any )

Define the computation performed at every call.

Should be overridden by all subclasses.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

DebertaV2ForMaskedLM

class transformers.DebertaV2ForMaskedLM


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The Deberta V2 Model with a language modeling head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None labels: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2ForMaskedLM forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example

>>> from transformers import AutoTokenizer, DebertaV2ForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]

>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_token_id)
...

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
...

DebertaV2ForSequenceClassification

class transformers.DebertaV2ForSequenceClassification


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None labels: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (mean-squared error loss); if config.num_labels > 1 a classification loss is computed (cross-entropy loss).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2ForSequenceClassification forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of single-label classification

>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2ForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
...

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...

Example of multi-label classification

>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2ForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = DebertaV2ForSequenceClassification.from_pretrained(
...     "microsoft/deberta-v2-xlarge", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss

DebertaV2ForTokenClassification

class transformers.DebertaV2ForTokenClassification


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Deberta V2 transformer with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None labels: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2ForTokenClassification forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example

>>> from transformers import AutoTokenizer, DebertaV2ForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForTokenClassification.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
...

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...

DebertaV2ForQuestionAnswering

class transformers.DebertaV2ForQuestionAnswering


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Deberta V2 transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None start_positions: torch.Tensor | None = None end_positions: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • start_positions (torch.Tensor of shape (batch_size,), optional) — Labels for the position (index) of the start of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
  • end_positions (torch.Tensor of shape (batch_size,), optional) — Labels for the position (index) of the end of the labelled span, used to compute the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a cross-entropy for the start and end positions.

  • start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).

  • end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2ForQuestionAnswering forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example

>>> from transformers import AutoTokenizer, DebertaV2ForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForQuestionAnswering.from_pretrained("microsoft/deberta-v2-xlarge")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
...

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
...

DebertaV2ForMultipleChoice

class transformers.DebertaV2ForMultipleChoice


( config model_args: ~utils.generic.ModelArgs | None = None adapter_args: ~utils.generic.AdapterArgs | None = None lora_args: ~utils.generic.LoRAArgs | None = None tokenizer_args: ~utils.generic.TokenizerArgs | None = None dataset_args: ~utils.generic.DatasetArgs | None = None data_args: ~utils.generic.DataArgs | None = None training_args: ~utils.generic.TrainingArgs | None = None generation_args: ~utils.generic.GenerationArgs | None = None vision_tower_args: ~utils.generic.VisionTowerArgs | None = None qlora_args: ~utils.generic.QLoRAArgs | None = None vision_tower_template_args: ~utils.generic.VisionTowerTemplateArgs | None = None video_tower_args: ~utils.generic.VideoTowerArgs | None = None vision_config: ~utils.generic.VisionConfig | None = None video_config: ~utils.generic.VideoConfig | None = None load_dataset: bool | None = None load_data_collator: bool | None = None load_processor: bool | None = None load_lora_adapter: bool | None = None load_adapter: bool | None = None load_qlora_adapter: bool | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_utils.PreTrainedModelKwargs] )

Parameters

  • config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Deberta V2 Model with a multiple-choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None labels: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,
    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple-choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.

  • logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (See input_ids above.)

    Classification scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The DebertaV2ForMultipleChoice forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example

>>> from transformers import AutoTokenizer, DebertaV2ForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForMultipleChoice.from_pretrained("microsoft/deberta-v2-xlarge")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits