
HunyuanVideoTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced by Tencent in HunyuanVideo: A Systematic Framework For Large Video Generative Models.

The model can be loaded with the following code snippet.

import torch
from diffusers import HunyuanVideoTransformer3DModel

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16
)
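
As an end-to-end sketch, the loaded transformer can then be passed to HunyuanVideoPipeline. The prompt, frame count, and step count below are illustrative values rather than tuned recommendations.

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)
pipe.vae.enable_tiling()  # reduce decoder memory use for video latents
pipe.to("cuda")

video = pipe(
    prompt="A cat walks on the grass, realistic style",
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "output.mp4", fps=15)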

HunyuanVideoTransformer3DModel

diffusers.HunyuanVideoTransformer3DModel


( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 24 attention_head_dim: int = 128 num_layers: int = 20 num_single_layers: int = 40 num_refiner_layers: int = 2 mlp_ratio: float = 4.0 patch_size: int = 2 patch_size_t: int = 1 qk_norm: str = 'rms_norm' guidance_embeds: bool = True text_embed_dim: int = 4096 pooled_projection_dim: int = 768 rope_theta: float = 256.0 rope_axes_dim: typing.Tuple[int] = (16, 56, 56) image_condition_type: typing.Optional[str] = None )

Parameters

  • in_channels (int, defaults to 16) — The number of channels in the input.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • num_attention_heads (int, defaults to 24) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 128) — The number of channels in each head.
  • num_layers (int, defaults to 20) — The number of layers of dual-stream blocks to use.
  • num_single_layers (int, defaults to 40) — The number of layers of single-stream blocks to use.
  • num_refiner_layers (int, defaults to 2) — The number of layers of refiner blocks to use.
  • mlp_ratio (float, defaults to 4.0) — The ratio of hidden layer size to input size in the feedforward network.
  • patch_size (int, defaults to 2) — The size of the spatial patches to use in the patch embedding layer.
  • patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
  • qk_norm (str, defaults to rms_norm) — The normalization to use for the query and key projections in the attention layers.
  • guidance_embeds (bool, defaults to True) — Whether to use guidance embeddings in the model.
  • text_embed_dim (int, defaults to 4096) — Input dimension of text embeddings from the text encoder.
  • pooled_projection_dim (int, defaults to 768) — The dimension of the pooled projection of the text embeddings.
  • rope_theta (float, defaults to 256.0) — The value of theta to use in the RoPE layer.
  • rope_axes_dim (Tuple[int], defaults to (16, 56, 56)) — The dimensions of the axes to use in the RoPE layer.
  • image_condition_type (str, optional, defaults to None) — The type of image conditioning to use. If None, no image conditioning is used. If latent_concat, the image is concatenated to the latent stream. If token_replace, the image tokens replace the tokens of the first frame in the latent stream and conditioning is applied.

A Transformer model for video-like data used in HunyuanVideo.
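
Based on the parameters above, a randomly initialized instance can also be built directly from the constructor. The tiny configuration below is hypothetical and only meant for quick structural or debugging checks; note that the rope_axes_dim entries are chosen to sum to attention_head_dim, mirroring the defaults (16 + 56 + 56 = 128).

from diffusers import HunyuanVideoTransformer3DModel

# Hypothetical, deliberately small configuration for fast CPU experiments;
# pretrained checkpoints use the documented defaults instead.
model = HunyuanVideoTransformer3DModel(
    num_attention_heads=4,
    attention_head_dim=8,
    num_layers=1,
    num_single_layers=1,
    num_refiner_layers=1,
    rope_axes_dim=(2, 3, 3),  # chosen to sum to attention_head_dim
)
print(sum(p.numel() for p in model.parameters()))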

set_attn_processor


( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
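
A minimal sketch of both call forms, reusing the model's default HunyuanVideoAttnProcessor2_0 (the import path below reflects the current diffusers source layout and is an assumption that may change between versions):

from diffusers import HunyuanVideoTransformer3DModel
from diffusers.models.transformers.transformer_hunyuan_video import HunyuanVideoAttnProcessor2_0

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="transformer"
)

# Form 1: a single processor instance applied to all Attention layers
transformer.set_attn_processor(HunyuanVideoAttnProcessor2_0())

# Form 2: a dict keyed by layer path, as reported by `attn_processors`
processors = {name: HunyuanVideoAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)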

Transformer2DModelOutput

diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
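
A minimal sketch of the output wrapper itself; the latent shape below is hypothetical:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# The transformer's forward pass wraps its result in this dataclass;
# the predicted latents are exposed via the `sample` attribute.
out = Transformer2DModelOutput(sample=torch.randn(1, 16, 9, 30, 52))
print(out.sample.shape)  # torch.Size([1, 16, 9, 30, 52])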
