Diffusers documentation

CogView3PlusTransformer2DModel

The Diffusion Transformer model for 2D data from CogView3Plus was introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
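A transformer loaded this way can be plugged into the text-to-image pipeline as a component override. The snippet below is a minimal sketch: CogView3PlusPipeline and the transformer= keyword follow the standard Diffusers component-override pattern, and the prompt is purely illustrative.

import torch
from diffusers import CogView3PlusPipeline, CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16)
# Pass the separately loaded transformer to the pipeline instead of letting the pipeline load its own copy.
pipe = CogView3PlusPipeline.from_pretrained("THUDM/CogView3Plus-3b", transformer=transformer, torch_dtype=torch.bfloat16).to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]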

CogView3PlusTransformer2DModel

class diffusers.CogView3PlusTransformer2DModel


( patch_size: int = 2 in_channels: int = 16 num_layers: int = 30 attention_head_dim: int = 40 num_attention_heads: int = 64 out_channels: int = 16 text_embed_dim: int = 4096 time_embed_dim: int = 512 condition_dim: int = 256 pos_embed_max_size: int = 128 sample_size: int = 128 )

Parameters

  • patch_size (int, defaults to 2) — The size of the patches to use in the patch embedding layer.
  • in_channels (int, defaults to 16) — The number of channels in the input.
  • num_layers (int, defaults to 30) — The number of layers of Transformer blocks to use.
  • attention_head_dim (int, defaults to 40) — The number of channels in each head.
  • num_attention_heads (int, defaults to 64) — The number of heads to use for multi-head attention.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • text_embed_dim (int, defaults to 4096) — Input dimension of the text embeddings from the text encoder.
  • time_embed_dim (int, defaults to 512) — Output dimension of the timestep embeddings.
  • condition_dim (int, defaults to 256) — The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
  • pos_embed_max_size (int, defaults to 128) — The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to the input patched latents, where H and W are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
  • sample_size (int, defaults to 128) — The base resolution of input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024.

The Transformer model introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion.
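
For quick experiments without downloading the pretrained weights, the model can also be randomly initialized from the constructor arguments above. The configuration below is a hypothetical, heavily reduced one (the released checkpoint uses the documented defaults); it only illustrates how the arguments fit together.

from diffusers import CogView3PlusTransformer2DModel

# Hypothetical toy configuration; the released model uses the defaults documented above.
model = CogView3PlusTransformer2DModel(
    patch_size=2,
    in_channels=16,
    num_layers=2,            # 30 in the released model
    attention_head_dim=40,
    num_attention_heads=4,   # 64 in the released model
    out_channels=16,
    text_embed_dim=64,       # 4096 in the released model
    time_embed_dim=512,
    condition_dim=256,
    pos_embed_max_size=128,
    sample_size=128,
)
print(sum(p.numel() for p in model.parameters()))  # parameter count of the toy model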

forward


( hidden_states: Tensor encoder_hidden_states: Tensor timestep: LongTensor original_size: Tensor target_size: Tensor crop_coords: Tensor return_dict: bool = True ) → torch.Tensor or ~models.transformer_2d.Transformer2DModelOutput

Parameters

  • hidden_states (torch.Tensor) — Input hidden_states of shape (batch size, channel, height, width).
  • encoder_hidden_states (torch.Tensor) — Conditional embeddings (embeddings computed from the input conditions such as prompts) of shape (batch_size, sequence_len, text_embed_dim).
  • timestep (torch.LongTensor) — Used to indicate the denoising step.
  • original_size (torch.Tensor) — CogView3 uses SDXL-like micro-conditioning for the original image size, as explained in section 2.2 of https://huggingface.ac.cn/papers/2307.01952.
  • target_size (torch.Tensor) — CogView3 uses SDXL-like micro-conditioning for the target image size, as explained in section 2.2 of https://huggingface.ac.cn/papers/2307.01952.
  • crop_coords (torch.Tensor) — CogView3 uses SDXL-like micro-conditioning for the crop coordinates, as explained in section 2.2 of https://huggingface.ac.cn/papers/2307.01952.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns

torch.Tensor or ~models.transformer_2d.Transformer2DModelOutput

The denoised latents using the provided inputs as conditioning.

The forward method of the CogView3PlusTransformer2DModel.
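
The sketch below calls forward directly on random tensors to make the expected input shapes concrete. It uses a hypothetical, heavily reduced configuration rather than the pretrained checkpoint, and the latent height/width and timestep values are arbitrary.

import torch
from diffusers import CogView3PlusTransformer2DModel

# Hypothetical tiny configuration, only to keep the example light.
model = CogView3PlusTransformer2DModel(num_layers=1, num_attention_heads=4, attention_head_dim=8, text_embed_dim=32)

batch_size, height, width = 1, 32, 32                       # latent-space height/width
hidden_states = torch.randn(batch_size, 16, height, width)  # (batch size, channel, height, width)
encoder_hidden_states = torch.randn(batch_size, 16, 32)     # (batch_size, sequence_len, text_embed_dim)
timestep = torch.tensor([500], dtype=torch.long)            # denoising step
original_size = torch.tensor([[1024.0, 1024.0]])            # SDXL-style micro-conditions
target_size = torch.tensor([[1024.0, 1024.0]])
crop_coords = torch.tensor([[0.0, 0.0]])

output = model(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    timestep=timestep,
    original_size=original_size,
    target_size=target_size,
    crop_coords=crop_coords,
    return_dict=True,
)
print(output.sample.shape)  # torch.Size([1, 16, 32, 32])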

set_attn_processor


( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
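
A minimal sketch of both calling conventions. It assumes CogVideoXAttnProcessor2_0 (one of the accepted classes in the signature above) as the processor being set, and that the model exposes the standard attn_processors mapping of module paths to processors.

from diffusers import CogView3PlusTransformer2DModel
from diffusers.models.attention_processor import CogVideoXAttnProcessor2_0

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer")

# A single instance is applied to every Attention layer.
transformer.set_attn_processor(CogVideoXAttnProcessor2_0())

# A dict keyed by the attention module path overrides layers individually.
processors = {name: CogVideoXAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)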

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
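
A short sketch of how the output wrapper behaves. The tensor shape here is arbitrary and only mirrors the (batch_size, num_channels, height, width) layout described above.

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

out = Transformer2DModelOutput(sample=torch.zeros(1, 16, 128, 128))
print(out.sample.shape)  # attribute access
print(out[0].shape)      # outputs also support tuple-style indexing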
