SanaTransformer2DModel
The SanaTransformer2DModel is a Diffusion Transformer model for 2D data from SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers, introduced by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, and Song Han from NVIDIA and MIT HAN Lab.
The abstract from the paper is:
We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on a laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with a modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.
The model can be loaded with the following code snippet.
import torch
from diffusers import SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
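The loaded transformer can then be dropped into the corresponding text-to-image pipeline. A minimal sketch, assuming the SanaPipeline class and reusing the same checkpoint (the prompt is illustrative):

import torch
from diffusers import SanaPipeline, SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Reuse the transformer inside the full pipeline.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(prompt="a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana.png")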
SanaTransformer2DModel
class diffusers.SanaTransformer2DModel
( in_channels: int = 32 out_channels: typing.Optional[int] = 32 num_attention_heads: int = 70 attention_head_dim: int = 32 num_layers: int = 20 num_cross_attention_heads: typing.Optional[int] = 20 cross_attention_head_dim: typing.Optional[int] = 112 cross_attention_dim: typing.Optional[int] = 2240 caption_channels: int = 2304 mlp_ratio: float = 2.5 dropout: float = 0.0 attention_bias: bool = False sample_size: int = 32 patch_size: int = 1 norm_elementwise_affine: bool = False norm_eps: float = 1e-06 interpolation_scale: typing.Optional[int] = None )
Parameters
- in_channels (`int`, defaults to `32`) — The number of channels in the input.
- out_channels (`int`, *optional*, defaults to `32`) — The number of channels in the output.
- num_attention_heads (`int`, defaults to `70`) — The number of heads to use for multi-head attention.
- attention_head_dim (`int`, defaults to `32`) — The number of channels in each attention head.
- num_layers (`int`, defaults to `20`) — The number of Transformer blocks to use.
- num_cross_attention_heads (`int`, *optional*, defaults to `20`) — The number of heads to use for cross-attention.
- cross_attention_head_dim (`int`, *optional*, defaults to `112`) — The number of channels in each cross-attention head.
- cross_attention_dim (`int`, *optional*, defaults to `2240`) — The number of channels in the cross-attention output.
- caption_channels (`int`, defaults to `2304`) — The number of channels in the caption embeddings.
- mlp_ratio (`float`, defaults to `2.5`) — The expansion ratio to use in the GLUMBConv layer.
- dropout (`float`, defaults to `0.0`) — The dropout probability.
- attention_bias (`bool`, defaults to `False`) — Whether to use bias in the attention layers.
- sample_size (`int`, defaults to `32`) — The base size of the input latents.
- patch_size (`int`, defaults to `1`) — The patch size to use in the patch embedding layer.
- norm_elementwise_affine (`bool`, defaults to `False`) — Whether to use elementwise affine parameters in the normalization layers.
- norm_eps (`float`, defaults to `1e-6`) — The epsilon value for the normalization layers.
A 2D Transformer model introduced in the Sana family of models.
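As a rough illustration of how these configuration values fit together, the sketch below runs a dummy latent through a small, randomly initialized instance. It assumes the forward pass accepts hidden_states, encoder_hidden_states, and timestep; the scaled-down sizes are hypothetical, chosen so that num_attention_heads * attention_head_dim matches cross_attention_dim, mirroring the defaults (70 * 32 = 2240):

import torch
from diffusers import SanaTransformer2DModel

# Hypothetical toy config; the real defaults are documented in the parameter list above.
transformer = SanaTransformer2DModel(
    in_channels=32,
    out_channels=32,
    num_attention_heads=2,    # inner dim = 2 * 32 = 64
    attention_head_dim=32,
    num_layers=2,
    num_cross_attention_heads=2,
    cross_attention_head_dim=32,
    cross_attention_dim=64,   # matches the inner dim, as 20 * 112 = 2240 does by default
    caption_channels=16,
    sample_size=8,
)

hidden_states = torch.randn(1, 32, 8, 8)       # (batch, in_channels, height, width) latents
encoder_hidden_states = torch.randn(1, 4, 16)  # (batch, seq_len, caption_channels) caption embeddings
timestep = torch.tensor([10])                  # one timestep per batch element

output = transformer(hidden_states, encoder_hidden_states=encoder_hidden_states, timestep=timestep)
print(output.sample.shape)  # torch.Size([1, 32, 8, 8])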
set_attn_processor
( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

where AttentionProcessor stands for any of the attention processor classes defined in diffusers.models.attention_processor (AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor, SanaLinearAttnProcessor2_0, and so on).

Sets the attention processor to use to compute attention. A single processor instance is applied to all Attention layers; alternatively, a dictionary can map each layer's path to its own processor, which is recommended when setting trainable processors.
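For example, to apply a single processor instance to every Attention layer (a minimal sketch using SanaLinearAttnProcessor2_0, one of the processor classes listed above):

import torch
from diffusers import SanaTransformer2DModel
from diffusers.models.attention_processor import SanaLinearAttnProcessor2_0

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# A single instance is applied to all Attention layers; pass a dict keyed by
# layer path instead to set processors per layer.
transformer.set_attn_processor(SanaLinearAttnProcessor2_0())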
Transformer2DModelOutput
class diffusers.models.modeling_outputs.Transformer2DModelOutput
( sample: torch.Tensor )
Parameters
- sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`, or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if Transformer2DModel is discrete) — The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.
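As a quick illustration, the output is a small dataclass wrapping the sample tensor; model forward passes return it by default (it is constructed directly here only to show the shape contract):

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# The dataclass simply carries the output latent sample.
out = Transformer2DModelOutput(sample=torch.randn(1, 32, 8, 8))
print(out.sample.shape)  # torch.Size([1, 32, 8, 8])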