Diffusers documentation

SanaTransformer2DModel

The SanaTransformer2DModel, a Diffusion Transformer model for 2D data from SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers, was introduced by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, and Song Han from NVIDIA and MIT HAN Lab.

The abstract from the paper is:

We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with a modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.
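
To make point (1) concrete, here is a quick back-of-the-envelope count of latent tokens at 1024×1024 resolution (a sketch assuming one token per latent position, i.e. patch_size=1 as in the default configuration documented below):

# Number of latent tokens for a 1024x1024 image at different
# autoencoder compression ratios (one token per latent position).
resolution = 1024
for compression in (8, 32):
    side = resolution // compression
    print(f"{compression}x AE -> {side}x{side} latent = {side * side} tokens")
# 8x AE -> 128x128 latent = 16384 tokens
# 32x AE -> 32x32 latent = 1024 tokens (16x fewer)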

The model can be loaded with the following code snippet.

import torch
from diffusers import SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

SanaTransformer2DModel

class diffusers.SanaTransformer2DModel

( in_channels: int = 32 out_channels: typing.Optional[int] = 32 num_attention_heads: int = 70 attention_head_dim: int = 32 num_layers: int = 20 num_cross_attention_heads: typing.Optional[int] = 20 cross_attention_head_dim: typing.Optional[int] = 112 cross_attention_dim: typing.Optional[int] = 2240 caption_channels: int = 2304 mlp_ratio: float = 2.5 dropout: float = 0.0 attention_bias: bool = False sample_size: int = 32 patch_size: int = 1 norm_elementwise_affine: bool = False norm_eps: float = 1e-06 interpolation_scale: typing.Optional[int] = None guidance_embeds: bool = False guidance_embeds_scale: float = 0.1 qk_norm: typing.Optional[str] = None timestep_scale: float = 1.0 )

Parameters

  • in_channels (int, defaults to 32) — The number of channels in the input.
  • out_channels (int, optional, defaults to 32) — The number of channels in the output.
  • num_attention_heads (int, defaults to 70) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 32) — The number of channels in each attention head.
  • num_layers (int, defaults to 20) — The number of layers of Transformer blocks to use.
  • num_cross_attention_heads (int, optional, defaults to 20) — The number of heads to use for cross-attention.
  • cross_attention_head_dim (int, optional, defaults to 112) — The number of channels in each cross-attention head.
  • cross_attention_dim (int, optional, defaults to 2240) — The number of channels in the cross-attention output.
  • caption_channels (int, defaults to 2304) — The number of channels in the caption embeddings.
  • mlp_ratio (float, defaults to 2.5) — The expansion ratio to use in the GLUMBConv layer.
  • dropout (float, defaults to 0.0) — The dropout probability.
  • attention_bias (bool, defaults to False) — Whether to use bias in the attention layers.
  • sample_size (int, defaults to 32) — The base size of the input latent.
  • patch_size (int, defaults to 1) — The size of the patches to use in the patch embedding layer.
  • norm_elementwise_affine (bool, defaults to False) — Whether to use elementwise affinity in the normalization layer.
  • norm_eps (float, defaults to 1e-6) — The epsilon value for the normalization layer.
  • qk_norm (str, optional, defaults to None) — The normalization to use for the query and key projections.
  • timestep_scale (float, defaults to 1.0) — The scale to use for the timesteps.

A 2D Transformer model introduced in the Sana family of models.
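
For illustration, a minimal sketch that builds an untrained model from this configuration; the keyword arguments are simply the defaults listed above (which appear to correspond to the 1.6B Sana checkpoints), so this is equivalent to SanaTransformer2DModel():

from diffusers import SanaTransformer2DModel

# Untrained model from the default configuration; the inner dimension is
# num_attention_heads * attention_head_dim = 70 * 32 = 2240.
model = SanaTransformer2DModel(
    in_channels=32,
    num_attention_heads=70,
    attention_head_dim=32,
    num_layers=20,
    cross_attention_dim=2240,
    caption_channels=2304,
    sample_size=32,
    patch_size=1,
)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")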

set_attn_processor

( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
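
For example, a small sketch that points every attention layer of the transformer loaded earlier at SanaLinearAttnProcessor2_0 (one of the Sana processors from the signature above), first with a single shared instance and then with the per-layer dict form:

from diffusers.models.attention_processor import SanaLinearAttnProcessor2_0

# Option 1: one instance applied to all Attention layers.
transformer.set_attn_processor(SanaLinearAttnProcessor2_0())

# Option 2: a dict keyed by the module paths in transformer.attn_processors,
# useful when different layers need different (e.g. trainable) processors.
processors = {name: SanaLinearAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)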

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
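
Putting the pieces together, a hedged sketch of one forward pass through the transformer loaded above; the random tensors are stand-ins for real VAE latents and text-encoder embeddings, with shapes taken from the default config (32 latent channels, 2304 caption channels; the 300-token caption length is an assumption):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
transformer = transformer.to(device)

# Dummy inputs: 32-channel 32x32 latents and caption embeddings.
hidden_states = torch.randn(1, 32, 32, 32, device=device, dtype=torch.bfloat16)
encoder_hidden_states = torch.randn(1, 300, 2304, device=device, dtype=torch.bfloat16)
timestep = torch.tensor([999.0], device=device, dtype=torch.bfloat16)

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
    )
print(output.sample.shape)  # torch.Size([1, 32, 32, 32])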
