
CogVideoXTransformer3DModel

A Diffusion Transformer model for 3D data from CogVideoX, introduced in CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.float16).to("cuda")

CogVideoXTransformer3DModel

class diffusers.CogVideoXTransformer3DModel


( num_attention_heads: int = 30 attention_head_dim: int = 64 in_channels: int = 16 out_channels: typing.Optional[int] = 16 flip_sin_to_cos: bool = True freq_shift: int = 0 time_embed_dim: int = 512 ofs_embed_dim: typing.Optional[int] = None text_embed_dim: int = 4096 num_layers: int = 30 dropout: float = 0.0 attention_bias: bool = True sample_width: int = 90 sample_height: int = 60 sample_frames: int = 49 patch_size: int = 2 patch_size_t: typing.Optional[int] = None temporal_compression_ratio: int = 4 max_text_seq_length: int = 226 activation_fn: str = 'gelu-approximate' timestep_activation_fn: str = 'silu' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 spatial_interpolation_scale: float = 1.875 temporal_interpolation_scale: float = 1.0 use_rotary_positional_embeddings: bool = False use_learned_positional_embeddings: bool = False patch_bias: bool = True )

Parameters

  • num_attention_heads (int, defaults to 30) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 64) — The number of channels in each attention head.
  • in_channels (int, defaults to 16) — The number of channels in the input.
  • out_channels (int, optional, defaults to 16) — The number of channels in the output.
  • flip_sin_to_cos (bool, defaults to True) — Whether to flip the sin to cos in the time embedding.
  • time_embed_dim (int, defaults to 512) — Output dimension of the timestep embeddings.
  • ofs_embed_dim (int, optional, defaults to None) — Output dimension of the "ofs" embedding used in CogVideoX-5b-I2V version 1.5.
  • text_embed_dim (int, defaults to 4096) — Input dimension of the text embeddings from the text encoder.
  • num_layers (int, defaults to 30) — The number of Transformer block layers to use.
  • dropout (float, defaults to 0.0) — The dropout probability to use.
  • attention_bias (bool, defaults to True) — Whether to use bias in the attention projection layers.
  • sample_width (int, defaults to 90) — The width of the input latents.
  • sample_height (int, defaults to 60) — The height of the input latents.
  • sample_frames (int, defaults to 49) — The number of frames in the input latents. Note that this parameter was incorrectly initialized to 49 instead of 13 because CogVideoX processes 13 latent frames at once in its default and recommended settings, but it cannot be changed to the correct value without breaking backwards compatibility. To create a transformer with K latent frames, the correct value to pass here is ((K - 1) * temporal_compression_ratio + 1), as shown in the sketch after the class description below.
  • patch_size (int, defaults to 2) — The size of the patches used in the patch embedding layer.
  • temporal_compression_ratio (int, defaults to 4) — The compression ratio across the temporal dimension. See the documentation for sample_frames.
  • max_text_seq_length (int, defaults to 226) — The maximum sequence length of the input text embeddings.
  • activation_fn (str, defaults to "gelu-approximate") — The activation function to use in the feed-forward network.
  • timestep_activation_fn (str, defaults to "silu") — The activation function to use when generating the timestep embeddings.
  • norm_elementwise_affine (bool, defaults to True) — Whether to use elementwise affine in the normalization layers.
  • norm_eps (float, defaults to 1e-5) — The epsilon value used in the normalization layers.
  • spatial_interpolation_scale (float, defaults to 1.875) — Scaling factor applied to the spatial dimensions in the 3D positional embeddings.
  • temporal_interpolation_scale (float, defaults to 1.0) — Scaling factor applied to the temporal dimension in the 3D positional embeddings.

A Transformer model for video-like data used in CogVideoX.
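For illustration, a minimal sketch of applying the sample_frames formula from the parameter notes above when constructing a model from its config. K here is a hypothetical choice of latent frame count; all other arguments keep their defaults:

from diffusers import CogVideoXTransformer3DModel

K = 13  # desired number of latent frames (hypothetical choice)
temporal_compression_ratio = 4

# Per the sample_frames note: ((K - 1) * temporal_compression_ratio + 1)
sample_frames = (K - 1) * temporal_compression_ratio + 1  # 49 for K = 13

# Builds a randomly initialized model; use from_pretrained for real weights.
transformer = CogVideoXTransformer3DModel(
    sample_frames=sample_frames,
    temporal_compression_ratio=temporal_compression_ratio,
)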

fuse_qkv_projections


( )

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.
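A minimal usage sketch; the fused state persists until unfuse_qkv_projections() (documented below) is called:

import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")

# Fuse the query/key/value projection matrices before inference.
transformer.fuse_qkv_projections()

# ... run inference ...

# Revert to the original, unfused projections.
transformer.unfuse_qkv_projections()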

set_attn_processor


( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes, that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
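For illustration, a minimal sketch of both calling conventions, using CogVideoXAttnProcessor2_0; the keys of a processor dict mirror those of transformer.attn_processors:

from diffusers import CogVideoXTransformer3DModel
from diffusers.models.attention_processor import CogVideoXAttnProcessor2_0

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="transformer"
)

# Set a single processor instance for every Attention layer.
transformer.set_attn_processor(CogVideoXAttnProcessor2_0())

# Or pass a dict keyed by each layer's processor path.
processors = {name: CogVideoXAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)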

unfuse_qkv_projections


( )

Disables fused QKV projections if they are enabled.

This API is 🧪 experimental.

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
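For illustration, a hedged sketch of a forward pass that returns this output. The input shapes are assumptions derived from the defaults documented above: 13 latent frames (sample_frames=49 with temporal_compression_ratio=4), 16 latent channels, a 60x90 latent grid, and 226 text tokens of dimension 4096:

import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")

# Dummy inputs; shapes are assumptions based on the default config above.
hidden_states = torch.randn(1, 13, 16, 60, 90, dtype=torch.float16, device="cuda")
encoder_hidden_states = torch.randn(1, 226, 4096, dtype=torch.float16, device="cuda")
timestep = torch.tensor([999], device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
    )

print(output.sample.shape)  # the denoised latent prediction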
