examples/community/README.md

export_to_video(frames, "output.mp4", fps=30)
```
### Video Inpaint Pipeline
**Akilesh KR**
`VideoInpaintPipeline` extends the classic Stable Diffusion inpainting pipeline to full videos. It adds temporal reuse of diffusion noise and optional optical-flow–guided warping (RAFT) so that successive frames stay coherent while still running on lightweight image-model weights. This is aimed at creators who cannot fit fully video-native diffusion models on their GPUs but still need flicker-free edits.
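The flow-guided step can be pictured as backward-warping the previous frame's latent along the estimated motion before reusing it. The helper below is a hypothetical sketch of that warping step; the function name and shapes are illustrative, not the pipeline's actual implementation:

```python
import torch
import torch.nn.functional as F


def warp_with_flow(latent: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp `latent` (B, C, H, W) along `flow` (B, 2, H, W), in pixels."""
    b, _, h, w = latent.shape
    # Base sampling grid in pixel coordinates: channel 0 = x, channel 1 = y.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)  # (1, 2, H, W)
    coords = base + flow  # shift every pixel by its flow vector
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(latent, coords.permute(0, 2, 3, 1), align_corners=True)
```

With zero flow this reduces to an identity resample; in the pipeline the flow would come from RAFT between consecutive frames.
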
#### Usage example
```python
from diffusers import VideoInpaintPipeline
pipe = VideoInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype="auto",
)
pipe.enable_model_cpu_offload()
result = pipe(
    prompt="replace the background with a snowy mountain",
    # ... (remaining arguments elided)
)
```

> **Tip:** Install `torchvision>=0.15` to enable RAFT optical flow (`use_optical_flow=True`). Without it the pipeline still works but falls back to latent reuse only.
### Adaptive Mask Inpainting
**Hyeonwoo Kim\*, Sookwan Han\*, Patrick Kwon, Hanbyul Joo**