
Commit 09abe15

Add community video inpainting pipeline with temporal reuse and tests
1 parent 23ebbb4 commit 09abe15

File tree: 3 files changed (+812 -0 lines)

examples/community/README.md

Lines changed: 34 additions & 0 deletions
@@ -146,6 +146,40 @@ frames = pipe(
export_to_video(frames, "output.mp4", fps=30)
```

### Video Inpaint Pipeline

**Akilesh KR**

`VideoInpaintPipeline` extends the classic Stable Diffusion inpainting pipeline to full videos. It adds temporal reuse of diffusion noise and optional optical-flow–guided warping (RAFT) so that successive frames stay coherent while still running on lightweight image-model weights. This is aimed at creators who cannot fit fully video-native diffusion models on their GPUs but still need flicker-free edits.

#### Usage example

```python
from diffusers import VideoInpaintPipeline

pipe = VideoInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype="auto",
)
pipe.enable_model_cpu_offload()

result = pipe(
    prompt="replace the background with a snowy mountain",
    video_path="input.mp4",
    mask_path="mask.mp4",
    num_inference_steps=12,
    use_optical_flow=True,  # requires torchvision>=0.15
    flow_strength=0.85,
    noise_blend=0.7,
    output_video_path="output.mp4",
)

print(f"Generated {len(result.frames)} frames")
print("Saved video:", result.video_path)
```
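The `noise_blend` argument controls how much of the initial latent noise is shared between neighbouring frames. The snippet below is only a minimal sketch of that idea; the helper name `make_frame_noise`, the tensor shapes, and the exact blending formula are illustrative assumptions, not the pipeline's internals.

```python
import torch

def make_frame_noise(num_frames, latent_shape, noise_blend=0.7, generator=None):
    """Illustrative only: blend one shared base noise tensor with per-frame noise.

    A high noise_blend keeps the starting latents of neighbouring frames similar,
    which is what suppresses flicker; noise_blend=0 would give fully independent
    per-frame noise again.
    """
    base = torch.randn(latent_shape, generator=generator)  # shared across all frames
    frames = []
    for _ in range(num_frames):
        fresh = torch.randn(latent_shape, generator=generator)  # per-frame variation
        # This blend keeps the mixture at unit variance for independent Gaussians.
        frames.append(noise_blend * base + (1.0 - noise_blend**2) ** 0.5 * fresh)
    return torch.stack(frames)

noise = make_frame_noise(num_frames=16, latent_shape=(4, 64, 64), noise_blend=0.7)
print(noise.shape)  # torch.Size([16, 4, 64, 64])
```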
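When `use_optical_flow=True`, the general technique is to estimate motion between consecutive frames, backward-warp the previous frame's result onto the current frame, and blend the two by `flow_strength`. The sketch below shows that idea with torchvision's RAFT model; the `warp` helper and the final blend are illustrative assumptions rather than the pipeline's exact implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_small, Raft_Small_Weights

def warp(frame, flow):
    """Backward-warp `frame` (N, C, H, W) by an optical flow field (N, 2, H, W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().unsqueeze(0) + flow  # absolute pixel coords
    # Normalize to [-1, 1] in (x, y) order, shape (N, H, W, 2), for grid_sample.
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, grid.permute(0, 2, 3, 1), align_corners=True)

weights = Raft_Small_Weights.DEFAULT
raft = raft_small(weights=weights).eval()

prev_edited = torch.rand(1, 3, 128, 128)  # previous, already inpainted frame in [0, 1]
curr = torch.rand(1, 3, 128, 128)         # current source frame in [0, 1]

with torch.no_grad():
    # Flow from the current frame back to the previous one, so the previous
    # result can be pulled onto the current frame's geometry.
    img1, img2 = weights.transforms()(curr, prev_edited)
    flow = raft(img1, img2)[-1]  # RAFT returns a list of refinements; take the last

flow_strength = 0.85
guidance = flow_strength * warp(prev_edited, flow) + (1.0 - flow_strength) * curr
print(guidance.shape)  # torch.Size([1, 3, 128, 128])
```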
> **Tip:** Install `torchvision>=0.15` to enable RAFT optical flow (`use_optical_flow=True`). Without it the pipeline still works but falls back to latent reuse only.
### Adaptive Mask Inpainting

**Hyeonwoo Kim\*, Sookwan Han\*, Patrick Kwon, Hanbyul Joo**
