Video generation
Video generation models extend image generation (which can be viewed as a single-frame video) to data that varies over both space and time. Keeping all of this data (text, spatial, temporal) consistent and aligned from frame to frame is a major challenge when generating long, high-resolution videos.
Modern video models address this challenge with the diffusion transformer (DiT) architecture, which lowers computational cost and scales more effectively to larger, higher-quality image and video data.
Check out some of what these video models can do below.
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel
text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
apply_group_offloading(
    text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)
pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")
prompt = """
The camera rushes from far to near in a low-angle shot,
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""
output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
This guide covers the basics of video generation, such as which parameters to configure and how to reduce memory usage.
If you're interested in learning how to use a specific model, refer to its pipeline API reference and model card.
Pipeline parameters
There are several parameters to configure in a pipeline that affect the quality or speed of video generation. Experimenting with different parameter values is important for finding a good balance between quality and speed.
num_frames
A frame is a still image that is played in sequence with other frames to create motion, or a video. Control the total number of generated frames with num_frames. Increasing num_frames improves the perceived smoothness of motion and visual coherence, which is especially important for videos with dynamic content, and it also lengthens the video at a given frame rate.
Some video models expect more specific num_frames values at inference. For example, HunyuanVideoPipeline expects num_frames of the form (4 * k) + 1 for some integer k. Always check the pipeline's API reference to see if there is a recommended value.
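As a quick illustrative sketch (the helper below is hypothetical, not part of Diffusers), you can snap a requested frame count to the nearest value of the form (4 * k) + 1:
# hypothetical helper: round a requested frame count to the nearest (4 * k) + 1 value
def nearest_valid_num_frames(requested_frames: int) -> int:
    k = round((requested_frames - 1) / 4)
    return 4 * k + 1

print(nearest_valid_num_frames(128))  # 129
print(nearest_valid_num_frames(50))   # 49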
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video
pipeline = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
prompt = """
A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman
with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The
camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and
natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be
real-life footage
"""
# negative prompt commonly used with LTX-Video
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=512,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
guidance_scale
The guidance scale (or "cfg") controls how closely the generated frames adhere to the input conditions (text, image, or both). Increasing guidance_scale produces frames that follow the input conditions more closely and contain finer detail, but it risks introducing artifacts and reduces output diversity. Lower guidance_scale values encourage looser prompt adherence and more diverse outputs, but details may suffer, and if the value is too low the model may ignore your prompt entirely and generate random noise.
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from diffusers.utils import export_to_video
pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
).to("cuda")
prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over
a plush, blue carpet that mimics the waves of the sea. The ship's hull is painted a rich brown,
with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an
oceanic expanse. Surrounding the ship are various other toys and children's items, hinting at
a playful environment. The scene captures the innocence and imagination of childhood,
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""
video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
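To find a balance for your prompt, you can sweep a few guidance_scale values with a fixed seed and compare the results. This is a minimal sketch that reuses the pipeline and prompt from the example above; the specific values and output file names are arbitrary:
# generate the same prompt with several guidance_scale values and a fixed seed
for cfg in [3.0, 6.0, 9.0]:
    video = pipeline(
        prompt=prompt,
        guidance_scale=cfg,
        num_inference_steps=50,
        generator=torch.Generator("cuda").manual_seed(0),
    ).frames[0]
    export_to_video(video, f"output_cfg_{cfg}.mp4", fps=8)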
negative_prompt
A negative prompt is useful for excluding things you don't want to see in the generated video. It is commonly used to refine the quality and alignment of a generated video by steering the model away from undesirable elements such as "blurry, distorted, ugly". This produces cleaner, more focused videos.
# pip install ftfy
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
from diffusers.utils import export_to_video
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
)
pipeline.scheduler = UniPCMultistepScheduler.from_config(
    pipeline.scheduler.config, flow_shift=5.0
)
pipeline.load_lora_weights("benjamin-paine/steamboat-willie-14b", adapter_name="steamboat-willie")
pipeline.set_adapters("steamboat-willie")
# enable_model_cpu_offload handles device placement, so an explicit pipeline.to("cuda") is not needed
pipeline.enable_model_cpu_offload()
# use "steamboat willie style" to trigger the LoRA
prompt = """
steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot,
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts
dynamic shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
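To see what the negative prompt changes, you can rerun the example without it and compare the two videos (for an apples-to-apples comparison, pass the same generator seed to both calls). A minimal sketch reusing the pipeline and prompt above:
# rerun without the negative prompt to compare its effect
output_no_neg = pipeline(
    prompt=prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output_no_neg, "output_without_negative.mp4", fps=16)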
Reduce memory usage
Recent video models such as HunyuanVideoPipeline and WanPipeline have more than 10 billion parameters and require a lot of memory, often exceeding what is available on consumer hardware. Diffusers provides several techniques for reducing the memory requirements of these large models.
Refer to the Reduce memory usage guide for more details about other memory-saving techniques.
One of these techniques is group offloading, which moves groups of internal model layers (such as a torch.nn.Sequential) to the CPU when they aren't in use. The layers are only loaded when they are needed for computation, so not all model components have to be stored on the GPU. For a 14 billion parameter model like Wan 2.1, group offloading can lower the required memory to roughly 13GB of VRAM.
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel
text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
apply_group_offloading(
    text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)
pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")
prompt = """
The camera rushes from far to near in a low-angle shot,
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""
output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
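To verify the savings on your own hardware, you can check PyTorch's peak memory counter after the run above. This is a minimal sketch; the reported number will vary with resolution and frame count:
# report the peak VRAM allocated during the generation above
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")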
Another option for reducing memory is to quantize the model, which stores the model weights in a lower precision data type. However, quantization can affect video quality depending on the specific video model. Refer to the Quantization overview to learn more about the supported quantization backends.
The example below quantizes the model with bitsandbytes.
# pip install ftfy
import torch
from diffusers import AutoModel, WanPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
from transformers import UMT5EncoderModel
from diffusers.utils import export_to_video
# quantize transformer and text encoder weights with bitsandbytes
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True},
    components_to_quantize=["transformer", "text_encoder"]
)
vae = AutoModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", vae=vae, quantization_config=pipeline_quant_config, torch_dtype=torch.bfloat16
)
pipeline.scheduler = UniPCMultistepScheduler.from_config(
    pipeline.scheduler.config, flow_shift=5.0
)
pipeline.load_lora_weights("benjamin-paine/steamboat-willie-14b", adapter_name="steamboat-willie")
pipeline.set_adapters("steamboat-willie")
# enable_model_cpu_offload handles device placement, so an explicit pipeline.to("cuda") is not needed
pipeline.enable_model_cpu_offload()
# use "steamboat willie style" to trigger the LoRA
prompt = """
steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot,
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts
dynamic shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
output = pipeline(
    prompt=prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
Inference speed
torch.compile can speed up inference by using optimized kernels. Compilation takes longer the first time, but once compiled, the pipeline runs much faster. It is best to compile the pipeline once and then reuse it without changing anything; changes such as the image size trigger recompilation.
The example below compiles the transformer in the pipeline and uses the "max-autotune" mode to maximize performance.
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from diffusers.utils import export_to_video
pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
).to("cuda")
# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)
prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea.
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse.
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood,
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""
video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
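Compilation pays off over repeated calls. Below is a minimal sketch of timing a subsequent run (the first call above already triggered compilation); the timing approach is an assumption added for illustration, not part of the original example:
import time

# time a second run now that the transformer is compiled
torch.cuda.synchronize()
start = time.perf_counter()
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
torch.cuda.synchronize()
print(f"compiled inference time: {time.perf_counter() - start:.1f}s")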