HunyuanVideoTransformer3DModel
A Diffusion Transformer model for 3D video-like data was introduced by Tencent in HunyuanVideo: A Systematic Framework For Large Video Generative Models.
The model can be loaded with the following code snippet.
import torch
from diffusers import HunyuanVideoTransformer3DModel

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16
)
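Once loaded, the transformer is typically passed to a video pipeline. The sketch below is a minimal, hedged end-to-end example; it assumes the HunyuanVideoPipeline API and the hunyuanvideo-community/HunyuanVideo checkpoint, and the prompt, resolution, and frame count are illustrative values only.

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)
pipe.vae.enable_tiling()  # reduces VAE memory usage when decoding long videos
pipe.to("cuda")

output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(output, "output.mp4", fps=15)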
HunyuanVideoTransformer3DModel
class diffusers.HunyuanVideoTransformer3DModel
< source >( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 24 attention_head_dim: int = 128 num_layers: int = 20 num_single_layers: int = 40 num_refiner_layers: int = 2 mlp_ratio: float = 4.0 patch_size: int = 2 patch_size_t: int = 1 qk_norm: str = 'rms_norm' guidance_embeds: bool = True text_embed_dim: int = 4096 pooled_projection_dim: int = 768 rope_theta: float = 256.0 rope_axes_dim: typing.Tuple[int] = (16, 56, 56) image_condition_type: typing.Optional[str] = None )
Parameters

- in_channels (int, defaults to 16) — The number of channels in the input.
- out_channels (int, defaults to 16) — The number of channels in the output.
- num_attention_heads (int, defaults to 24) — The number of heads to use for multi-head attention.
- attention_head_dim (int, defaults to 128) — The number of channels in each head.
- num_layers (int, defaults to 20) — The number of layers of dual-stream blocks to use.
- num_single_layers (int, defaults to 40) — The number of layers of single-stream blocks to use.
- num_refiner_layers (int, defaults to 2) — The number of layers of refiner blocks to use.
- mlp_ratio (float, defaults to 4.0) — The ratio of hidden layer size to input size in the feedforward network.
- patch_size (int, defaults to 2) — The size of the spatial patches to use in the patch embedding layer.
- patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
- qk_norm (str, defaults to rms_norm) — The normalization to use for the query and key projections in the attention layers.
- guidance_embeds (bool, defaults to True) — Whether to use guidance embeddings in the model.
- text_embed_dim (int, defaults to 4096) — Input dimension of the text embeddings from the text encoder.
- pooled_projection_dim (int, defaults to 768) — The dimension of the pooled projection of the text embeddings.
- rope_theta (float, defaults to 256.0) — The value of theta to use in the RoPE layer.
- rope_axes_dim (Tuple[int], defaults to (16, 56, 56)) — The dimensions of the axes to use in the RoPE layer; how these relate to the patch sizes and attention_head_dim is sketched after this list.
- image_condition_type (str, optional, defaults to None) — The type of image conditioning to use. If None, no image conditioning is used. If latent_concat, the image is concatenated to the latent stream. If token_replace, the image replaces the first-frame tokens of the latent stream to apply conditioning.
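As a rough illustration of how patch_size, patch_size_t, and rope_axes_dim fit together, the sketch below computes the patch-token count from a hypothetical latent shape. The latent dimensions are illustrative assumptions, not values from this page.

# A minimal sketch (not library code) of how the patch sizes determine
# the number of tokens the patch embedding layer produces.
patch_size, patch_size_t = 2, 1
frames, height, width = 16, 60, 106  # hypothetical latent dimensions

tokens = (frames // patch_size_t) * (height // patch_size) * (width // patch_size)
print(tokens)  # 25440 tokens enter the transformer

# rope_axes_dim = (16, 56, 56) splits each head's channels across the
# (time, height, width) axes: 16 + 56 + 56 == attention_head_dim == 128.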
A Transformer model for video-like data used in HunyuanVideo.
set_attn_processor
< source >( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

The full annotation in the source enumerates every concrete attention processor class (AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor, HunyuanAttnProcessor2_0, the LoRA, IP-Adapter, and PAG variants, and so on); it is elided here for readability.
Sets the attention processor to use to compute attention. Accepts either a single instantiated processor, which is applied to all Attention layers, or a dict mapping each attention layer's path to the processor to set for it.
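A brief hedged example of both call styles follows; the checkpoint id and the choice of AttnProcessor2_0 are illustrative assumptions, not prescriptions.

import torch
from diffusers import HunyuanVideoTransformer3DModel
from diffusers.models.attention_processor import AttnProcessor2_0

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16
)

# attn_processors maps each attention layer's path to its current processor.
print(list(transformer.attn_processors)[:2])

# Style 1: one instance applied to every Attention layer.
transformer.set_attn_processor(AttnProcessor2_0())

# Style 2: a dict keyed by layer path, to target layers individually.
transformer.set_attn_processor({name: AttnProcessor2_0() for name in transformer.attn_processors})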
Transformer2DModelOutput
class diffusers.models.modeling_outputs.Transformer2DModelOutput
< source >( sample: torch.Tensor )
Parameters

- sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.
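A small sketch of how this output class behaves; the tensor shape is an arbitrary example.

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# The transformer's forward pass returns this dataclass by default;
# pass return_dict=False to get a plain tuple instead.
out = Transformer2DModelOutput(sample=torch.randn(1, 16, 64, 64))
print(out.sample.shape)       # torch.Size([1, 16, 64, 64])
print(out[0] is out.sample)   # the output also supports tuple-style indexing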