UNetMotionModel
The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.
The abstract from the paper is:
There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
UNetMotionModel
class diffusers.UNetMotionModel
< source >( sample_size: typing.Optional[int] = None in_channels: int = 4 out_channels: int = 4 down_block_types: typing.Tuple[str, ...] = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: typing.Tuple[str, ...] = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: typing.Tuple[int, ...] = (320, 640, 1280, 1280) layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple]] = 1 reverse_transformer_layers_per_block: typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple], NoneType] = None temporal_transformer_layers_per_block: typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple]] = 1 reverse_temporal_transformer_layers_per_block: typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple], NoneType] = None transformer_layers_per_mid_block: typing.Union[int, typing.Tuple[int], NoneType] = None temporal_transformer_layers_per_mid_block: typing.Union[int, typing.Tuple[int], NoneType] = 1 use_linear_projection: bool = False num_attention_heads: typing.Union[int, typing.Tuple[int, ...]] = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: typing.Union[int, typing.Tuple[int, ...]] = 8 reverse_motion_num_attention_heads: typing.Union[int, typing.Tuple[int, ...], typing.Tuple[typing.Tuple[int, ...], ...], NoneType] = None use_motion_mid_block: bool = True mid_block_layers: int = 1 encoder_hid_dim: typing.Optional[int] = None encoder_hid_dim_type: typing.Optional[str] = None addition_embed_type: typing.Optional[str] = None addition_time_embed_dim: typing.Optional[int] = None projection_class_embeddings_input_dim: typing.Optional[int] = None time_cond_proj_dim: typing.Optional[int] = None )
A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample-shaped output.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods implemented for all models (such as downloading or saving).
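As a minimal sketch of how such a model is typically assembled, the snippet below builds a UNetMotionModel from a pretrained 2D UNet and a motion adapter via from_unet2d. The two Hub checkpoint names are illustrative assumptions, not requirements.

```python
# A minimal sketch, assuming these Hub checkpoints are reachable.
from diffusers import MotionAdapter, UNet2DConditionModel, UNetMotionModel

unet2d = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
motion_adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")

# Build a UNetMotionModel from the 2D UNet and insert the motion modules.
unet = UNetMotionModel.from_unet2d(unet2d, motion_adapter=motion_adapter)
```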
disable_freeu
Disables the FreeU mechanism.
enable_freeu
< source >( s1: float s2: float b1: float b2: float )
Enables the FreeU mechanism from https://huggingface.ac.cn/papers/2309.11497.
The suffixes after the scaling factors represent the stage blocks where they are being applied.
Please refer to the official repository for combinations of values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
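For example, reusing the unet from the earlier sketch. The values below are commonly cited Stable Diffusion v1.5 settings and should be treated as a starting point rather than tuned values for motion UNets.

```python
# Hedged sketch: apply FreeU to the motion UNet, then turn it back off.
unet.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

# ... run inference with FreeU active ...

unet.disable_freeu()  # restore the unscaled skip/backbone features
```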
forward
< source >( sample: Tensor timestep: typing.Union[torch.Tensor, float, int] encoder_hidden_states: Tensor timestep_cond: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None added_cond_kwargs: typing.Optional[typing.Dict[str, torch.Tensor]] = None down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None mid_block_additional_residual: typing.Optional[torch.Tensor] = None return_dict: bool = True ) → UNetMotionOutput or tuple
Parameters
- sample (torch.Tensor) — The noisy input tensor with the following shape (batch, num_frames, channel, height, width).
- timestep (torch.Tensor or float or int) — The number of timesteps to denoise an input.
- encoder_hidden_states (torch.Tensor) — The encoder hidden states with shape (batch, sequence_length, feature_dim).
- timestep_cond (torch.Tensor, optional, defaults to None) — Conditional embeddings for timestep. If provided, the embeddings are summed with the samples passed through the self.time_embedding layer to obtain the timestep embeddings.
- attention_mask (torch.Tensor, optional, defaults to None) — An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask is kept, otherwise if 0 it is discarded. The mask is converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- down_block_additional_residuals (tuple of torch.Tensor, optional) — A tuple of tensors that, if specified, are added to the residuals of the down unet blocks.
- mid_block_additional_residual (torch.Tensor, optional) — A tensor that, if specified, is added to the residual of the middle unet block.
- return_dict (bool, optional, defaults to True) — Whether or not to return a UNetMotionOutput instead of a plain tuple.
Returns
UNetMotionOutput or tuple
If return_dict is True, a UNetMotionOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
The UNetMotionModel forward method.
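A minimal sketch of a single forward pass is shown below; the tiny configuration and tensor shapes are illustrative assumptions rather than the defaults of any released checkpoint.

```python
import torch
from diffusers import UNetMotionModel

# Small illustrative configuration (assumed values, kept tiny so it runs quickly).
unet = UNetMotionModel(
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlockMotion", "DownBlockMotion"),
    up_block_types=("UpBlockMotion", "CrossAttnUpBlockMotion"),
    cross_attention_dim=32,
    num_attention_heads=4,
    layers_per_block=1,
)

batch, num_frames, channels, height, width = 1, 8, 4, 32, 32
sample = torch.randn(batch, num_frames, channels, height, width)  # noisy video latents
encoder_hidden_states = torch.randn(batch, 77, 32)  # (batch, sequence_length, feature_dim)

with torch.no_grad():
    out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # same (batch, num_frames, channel, height, width) shape as the input
```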
freeze_unet2d_params
Freeze the weights of just the UNet2DConditionModel, and leave the motion modules unfrozen for fine-tuning.
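For instance, a hedged fine-tuning setup might look like this (the optimizer and learning rate are illustrative choices, not recommendations):

```python
import torch

# Assumes `unet` is a UNetMotionModel, e.g. built with from_unet2d as sketched earlier.
unet.freeze_unet2d_params()  # spatial 2D-UNet weights are frozen; motion modules stay trainable

trainable_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)  # illustrative hyperparameters
```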
set_attn_processor
< source >( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )
Sets the attention processor to use to compute attention.
set_default_attn_processor
Disables custom attention processors and sets the default attention implementation.
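A short sketch of both calls, assuming unet is the UNetMotionModel from the earlier examples:

```python
from diffusers.models.attention_processor import AttnProcessor2_0

# Use one processor instance for every attention layer of the model...
unet.set_attn_processor(AttnProcessor2_0())

# ...or pass a dict keyed by layer name to choose processors per layer.
processors = {name: AttnProcessor2_0() for name in unet.attn_processors.keys()}
unet.set_attn_processor(processors)

# Revert to the default attention implementation.
unet.set_default_attn_processor()
```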
UNet3DConditionOutput
class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput
< source >( sample: Tensor )
The output of UNet3DConditionModel.
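For illustration, the output dataclasses used here expose the result through a single sample field, while return_dict=False yields a plain tuple instead. Reusing the unet, sample, and encoder_hidden_states names from the forward sketch above:

```python
# Dataclass-style access vs. the plain tuple returned with return_dict=False.
out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)
latents = out.sample  # torch.Tensor of shape (batch, num_frames, channel, height, width)

out_tuple = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states, return_dict=False)
latents = out_tuple[0]  # the same tensor, as the first element of a tuple
```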