Changes from all commits — 3067 commits
f16219e
Add cheap latent preview for flux 2. (#10907)
comfyanonymous Nov 26, 2025
8938aa3
add Veo3 First-Last-Frame node (#10878)
bigcat88 Nov 26, 2025
1105e0d
improve UX for batch uploads in upload_images_to_comfyapi (#10913)
bigcat88 Nov 26, 2025
8908ee2
fix(gemini): use first 10 images as fileData (URLs) and remaining ima…
bigcat88 Nov 26, 2025
234c3dc
Bump frontend to 1.32.9 (#10867)
christian-byrne Nov 26, 2025
58c6ed5
Merge 3d animation node (#10025)
jtydhr88 Nov 26, 2025
55f654d
Fix the CSP offline feature. (#10923)
comfyanonymous Nov 26, 2025
dd41b74
Add Z Image to readme. (#10924)
comfyanonymous Nov 26, 2025
d8433c6
chore(api-nodes): remove chat widgets from OpenAI/Gemini nodes (#10861)
bigcat88 Nov 26, 2025
a2d60aa
convert nodes_customer_sampler.py to V3 schema (#10206)
bigcat88 Nov 26, 2025
cc6a8dc
Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi r…
KohakuBlueleaf Nov 27, 2025
eaf68c9
Make lora training work on Z Image and remove some redundant nodes. (…
comfyanonymous Nov 27, 2025
c38e7d6
block info (#10841)
Haoming02 Nov 27, 2025
f17251b
Account for the VRAM cost of weight offloading (#10733)
rattus128 Nov 27, 2025
3f382a4
quant ops: Dequantize weight in-place (#10935)
rattus128 Nov 27, 2025
b59750a
Update template to 0.7.23 (#10949)
comfyui-wiki Nov 27, 2025
9d8a817
Enable async offloading by default on Nvidia. (#10953)
comfyanonymous Nov 27, 2025
52e778f
feat(Kling-API-Nodes): add v2-5-turbo model to FirstLastFrame node (#…
bigcat88 Nov 28, 2025
ca7808f
fix(user_manager): fix typo in move_userdata dest validation (#10967)
ltdrdata Nov 28, 2025
f55c98a
Disable offload stream when torch compile. (#10961)
comfyanonymous Nov 28, 2025
6484ac8
fix QuantizedTensor.is_contiguous (#10956) (#10959)
urlesistiana Nov 28, 2025
0ff0457
mm: wrap the raw stream in context manager (#10958)
rattus128 Nov 28, 2025
065a2fb
Update driver link in AMD portable README (#10974)
comfyanonymous Nov 29, 2025
b907085
Support video tiny VAEs (#10884)
kijai Nov 29, 2025
52a32e2
Support some z image lora formats. (#10978)
comfyanonymous Nov 29, 2025
af96d98
feat(security): add System User protection with `__` prefix (#10966)
ltdrdata Nov 29, 2025
5151cff
Add some missing z image lora layers. (#10980)
comfyanonymous Nov 29, 2025
0a67468
Make the ScaleRope node work on Z Image and Lumina. (#10994)
comfyanonymous Nov 29, 2025
4967f81
update template to 0.7.25 (#10996)
comfyui-wiki Nov 30, 2025
f8b981a
Next AMD portable will have pytorch with ROCm 7.1.1 (#11002)
comfyanonymous Nov 30, 2025
7dbd5df
bump comfyui-frontend-package to 1.32.10 (#11018)
christian-byrne Dec 1, 2025
2640acb
Update qwen tokenizer to add qwen 3 tokens. (#11029)
comfyanonymous Dec 1, 2025
1cb7e22
[API Nodes] add Kling O1 model support (#11025)
bigcat88 Dec 2, 2025
30c259c
ComfyUI version v0.3.76
comfyanonymous Dec 2, 2025
878db3a
Implement the Ovis image model. (#11030)
comfyanonymous Dec 2, 2025
c55dc85
bump comfyui-frontend-package to 1.33.10 (#11028)
christian-byrne Dec 2, 2025
b4a20ac
feat: Support ComfyUI-Manager for pip version (#7555)
ltdrdata Dec 2, 2025
a17cf1c
Add @guill as a code owner (#11031)
yoland68 Dec 2, 2025
44baa0b
Fix CODEOWNERS formatting to have all on the same line, otherwise onl…
Kosinkadink Dec 2, 2025
33d6aec
add check for the format arg type in VideoFromComponents.save_to func…
bigcat88 Dec 2, 2025
daaceac
Hack to make zimage work in fp16. (#11057)
comfyanonymous Dec 2, 2025
277237c
attention: use flag based OOM fallback (#11038)
rattus128 Dec 2, 2025
b94d394
Support Z Image alibaba pai fun controlnets. (#11062)
comfyanonymous Dec 3, 2025
3f512f5
Added PATCH method to CORS headers (#11066)
jheising Dec 3, 2025
73f5649
Implement temporal rolling VAE (Major VRAM reductions in Hunyuan and …
rattus128 Dec 3, 2025
c120eee
Add MatchType, DynamicCombo, and Autogrow support to V3 Schema (#10832)
Kosinkadink Dec 3, 2025
861817d
Fix issue with portable updater. (#11070)
comfyanonymous Dec 3, 2025
519c941
Prs/lora reservations (reduce massive Lora reservations especially on…
rattus128 Dec 3, 2025
19f2192
fix(V3-Schema): use empty list defaults for Schema.inputs/outputs/hid…
bigcat88 Dec 3, 2025
87c104b
add support for "@image" reference format in Kling Omni API nodes (#1…
bigcat88 Dec 3, 2025
440268d
convert nodes_load_3d.py to V3 schema (#10990)
bigcat88 Dec 3, 2025
dce518c
convert nodes_audio.py to V3 schema (#10798)
bigcat88 Dec 4, 2025
ecdc869
Qwen Image Lora training fix from #11090 (#11094)
comfyanonymous Dec 4, 2025
ea17add
Fix case where text encoders where running on the CPU instead of GPU.…
comfyanonymous Dec 4, 2025
6be85c7
mp: use look-ahead actuals for stream offload VRAM calculation (#11096)
rattus128 Dec 4, 2025
f4bdf5f
sd: revise hy VAE VRAM (#11105)
rattus128 Dec 4, 2025
9bc893c
sd: bump HY1.5 VAE estimate (#11107)
rattus128 Dec 4, 2025
3c84562
[API Nodes]: fixes and refactor (#11104)
bigcat88 Dec 4, 2025
35fa091
Forgot to put this in README. (#11112)
comfyanonymous Dec 5, 2025
0ec05b1
Remove line made unnecessary (and wrong) after transformer_options wa…
Kosinkadink Dec 5, 2025
43071e3
Make old scaled fp8 format use the new mixed quant ops system. (#11000)
comfyanonymous Dec 5, 2025
6fd463a
Fix regression when text encoder loaded directly on GPU. (#11129)
comfyanonymous Dec 5, 2025
79d17ba
Context windows fixes and features (#10975)
kijai Dec 5, 2025
092ee8a
Fix some custom nodes. (#11134)
comfyanonymous Dec 5, 2025
bed1267
docs: add ComfyUI-Manager documentation and update to v4.0.3b4 (#11133)
ltdrdata Dec 5, 2025
fd10932
Kandinsky5 model support (#10988)
kijai Dec 6, 2025
ae676ed
Fix regression. (#11137)
comfyanonymous Dec 6, 2025
117bf3f
convert nodes_freelunch.py to the V3 schema (#10904)
bigcat88 Dec 6, 2025
913f86b
[V3] convert nodes_mask.py to V3 schema (#10669)
bigcat88 Dec 6, 2025
d7a0aef
Set OCL_SET_SVM_SIZE on AMD. (#11139)
comfyanonymous Dec 6, 2025
76f18e9
marked all Pika API nodes a deprecated (#11146)
bigcat88 Dec 6, 2025
7ac7d69
Fix EmptyAudio node input types (#11149)
kijai Dec 6, 2025
50ca97e
Speed up lora compute and lower memory usage by doing it in fp16. (#1…
comfyanonymous Dec 6, 2025
4086acf
Fix on-load VRAM OOM (#11144)
rattus128 Dec 6, 2025
329480d
Fix qwen scaled fp8 not working with kandinsky. Make basic t2i wf wor…
comfyanonymous Dec 7, 2025
56fa7db
Properly load the newbie diffusion model. (#11172)
comfyanonymous Dec 7, 2025
ec7f651
chore(comfy_api): replace absolute imports with relative (#11145)
bigcat88 Dec 8, 2025
058f084
Update workflow templates to v0.7.51 (#11150)
comfyui-wiki Dec 8, 2025
85c4b4a
chore: replace imports of deprecated V1 classes (#11127)
bigcat88 Dec 8, 2025
c3c6313
Added "system_prompt" input to Gemini nodes (#11177)
bigcat88 Dec 8, 2025
fd271de
[API Nodes] add support for seedance-1-0-pro-fast model (#10947)
bigcat88 Dec 8, 2025
8e889c5
Support "transformer." LoRA prefix for Z-Image (#11135)
dxqb Dec 8, 2025
60ee574
retune lowVramPatch VRAM accounting (#11173)
rattus128 Dec 8, 2025
935493f
chore: update workflow templates to v0.7.54 (#11192)
comfyui-wiki Dec 8, 2025
3b0368a
Fix regression. (#11194)
comfyanonymous Dec 8, 2025
d50f342
Fix potential issue. (#11201)
comfyanonymous Dec 9, 2025
e136b6d
dequantization offload accounting (fixes Flux2 OOMs - incl TEs) (#11171)
rattus128 Dec 9, 2025
cabc4d3
bump comfyui-frontend-package to 1.33.13 (patch) (#11200)
christian-byrne Dec 9, 2025
b9fb542
add chroma-radiance-x0 mode (#11197)
lodestone-rock Dec 9, 2025
9d252f3
ops: delete dead code (#11204)
rattus128 Dec 9, 2025
e2a800e
Fix for HunyuanVideo1.5 meanflow distil (#11212)
kijai Dec 9, 2025
791e30f
Fix nan issue when quantizing fp16 tensor. (#11213)
comfyanonymous Dec 9, 2025
fc657f4
ComfyUI version v0.4.0
comfyanonymous Dec 9, 2025
f668c2e
bump comfyui-frontend-package to 1.34.8 (#11220)
benceruleanlu Dec 10, 2025
36357bb
process the NodeV1 dict results correctly (#11237)
bigcat88 Dec 10, 2025
17c92a9
Tweak Z Image memory estimation. (#11254)
comfyanonymous Dec 11, 2025
57ddb7f
Fix: filter hidden files from /internal/files endpoint (#11191)
Myestery Dec 11, 2025
e711aaf
Lower VAE loading requirements:Create a new branch for GPU memory cal…
WenXIN-AI Dec 11, 2025
93948e3
feat(api-nodes): enable Kling Omni O1 node (#11229)
bigcat88 Dec 11, 2025
f8321eb
Adjust memory usage factor. (#11257)
comfyanonymous Dec 11, 2025
fdebe18
Fix regular chroma radiance (#11276)
comfyanonymous Dec 11, 2025
ae65433
This only works on radiance. (#11277)
comfyanonymous Dec 11, 2025
eeb020b
Better chroma radiance and other models vram estimation. (#11278)
comfyanonymous Dec 11, 2025
338d9ae
Make portable updater work with repos in unmerged state. (#11281)
comfyanonymous Dec 11, 2025
982876d
WanMove support (#11247)
kijai Dec 12, 2025
5495589
Respect the dtype the op was initialized in for non quant mixed op. (…
comfyanonymous Dec 12, 2025
908fd7d
feat(api-nodes): new TextToVideoWithAudio and ImageToVideoWithAudio n…
bigcat88 Dec 12, 2025
c5a47a1
Fix bias dtype issue in mixed ops. (#11293)
comfyanonymous Dec 12, 2025
da2bfb5
Basic implementation of z image fun control union 2.0 (#11304)
comfyanonymous Dec 13, 2025
971cefe
Fix pytorch warnings. (#11314)
comfyanonymous Dec 13, 2025
6592bff
seeds_2: add phi_2 variant and sampler node (#11309)
chaObserv Dec 14, 2025
5ac3b26
Update warning for old pytorch version. (#11319)
comfyanonymous Dec 14, 2025
a5e8501
bump manager requirments to the 4.0.3b5 (#11324)
ltdrdata Dec 15, 2025
51347f9
chore: update workflow templates to v0.7.59 (#11337)
comfyui-wiki Dec 15, 2025
5cb1e0c
Disable guards on transformer_options when torch.compile (#11317)
comfyanonymous Dec 15, 2025
af91eb6
api-nodes: drop Kling v1 model (#11307)
bigcat88 Dec 15, 2025
33c7f11
drop Pika API nodes (#11306)
bigcat88 Dec 15, 2025
dbd3304
feat(preview): add per-queue live preview method override (#11261)
ltdrdata Dec 15, 2025
43e0d4e
comfy_api: remove usage of "Type","List" and "Dict" types (#11238)
bigcat88 Dec 16, 2025
77b2f7c
Add context windows callback for custom cond handling (#11208)
drozbay Dec 16, 2025
70541d4
Support the new qwen edit 2511 reference method. (#11340)
comfyanonymous Dec 16, 2025
d02d0e5
[add] tripo3.0 (#10663)
seed93 Dec 16, 2025
41bcf06
Add code to detect if a z image fun controlnet is broken or not. (#11…
comfyanonymous Dec 16, 2025
fc4af86
[BlockInfo] Lumina (#11227)
Haoming02 Dec 16, 2025
ea2c117
[BlockInfo] Wan (#10845)
Haoming02 Dec 16, 2025
683569d
Only enable fp16 on ZImage on newer pytorch. (#11344)
comfyanonymous Dec 16, 2025
3d082c3
bump comfyui-frontend-package to 1.34.9 (patch) (#11342)
christian-byrne Dec 16, 2025
645ee18
Inpainting for z image fun control. Use the ZImageFunControlnet node.…
comfyanonymous Dec 16, 2025
bc606d7
Add a way to set the default ref method in the qwen image code. (#11349)
comfyanonymous Dec 16, 2025
9304e47
Update workflows for new release process (#11064)
benceruleanlu Dec 16, 2025
65e2103
feat(api-nodes): add Wan2.6 model to video nodes (#11357)
bigcat88 Dec 16, 2025
ffdd53b
Check state dict key to auto enable the index_timestep_zero ref metho…
comfyanonymous Dec 16, 2025
827bb15
Add exp_heun_2_x0 sampler series (#11360)
chaObserv Dec 17, 2025
3a5f239
ComfyUI v0.5.0
comfyanonymous Dec 17, 2025
8871438
feat(api-nodes): add GPT-Image-1.5 (#11368)
bigcat88 Dec 17, 2025
c08f97f
fix regression in V3 nodes processing (#11375)
bigcat88 Dec 17, 2025
5d9ad0c
Fix the last step with non-zero sigma in sa_solver (#11380)
chaObserv Dec 17, 2025
16d85ea
Better handle torch being imported by prestartup nodes. (#11383)
comfyanonymous Dec 18, 2025
ba6080b
ComfyUI v0.5.1
comfyanonymous Dec 18, 2025
86dbb89
Resolution bucketing and Trainer implementation refactoring (#11117)
KohakuBlueleaf Dec 18, 2025
bf7dc63
skip_load_model -> force_full_load (#11390)
comfyanonymous Dec 18, 2025
1ca89b8
Add unified jobs API with /api/jobs endpoints (#11054)
ric-yu Dec 18, 2025
e8ebbe6
chore: update workflow templates to v0.7.60 (#11403)
comfyui-wiki Dec 18, 2025
e4fb3a3
Support loading Wan/Qwen VAEs with different in/out channels. (#11405)
comfyanonymous Dec 18, 2025
6a2678a
Trim/pad channels in VAE code. (#11406)
comfyanonymous Dec 18, 2025
28eaab6
Diffusion model part of Qwen Image Layered. (#11408)
comfyanonymous Dec 19, 2025
894802b
Add LatentCutToBatch node. (#11411)
comfyanonymous Dec 19, 2025
5b4d066
add Flux2MaxImage API Node (#11420)
bigcat88 Dec 19, 2025
8376ff6
bump comfyui_manager version to the 4.0.3b7 (#11422)
ltdrdata Dec 19, 2025
cc4ddba
Allow enabling use of MIOpen by setting COMFYUI_ENABLE_MIOPEN=1 as an…
BradPepersAMD Dec 19, 2025
809ce68
Support nested tensor denoise masks. (#11431)
comfyanonymous Dec 20, 2025
514c24d
Fix error from logging line (#11423)
drozbay Dec 20, 2025
0aa7fa4
Implement sliding attention in Gemma3 (#11409)
woct0rdho Dec 20, 2025
3ab9748
Disable prompt weights on newbie te. (#11434)
comfyanonymous Dec 20, 2025
767ee30
ZImageFunControlNet: Fix mask concatenation in --gpu-only (#11421)
rattus128 Dec 20, 2025
31e9617
Fix issue with batches and newbie. (#11435)
comfyanonymous Dec 20, 2025
4c432c1
Implement Jina CLIP v2 and NewBie dual CLIP (#11415)
woct0rdho Dec 20, 2025
fb478f6
Only apply gemma quant config to gemma model for newbie. (#11436)
comfyanonymous Dec 20, 2025
0899012
chore(api-nodes): by default set Watermark generation to False (#11437)
bigcat88 Dec 20, 2025
bbb11e2
fix(api-nodes): Topaz 4k video upscaling (#11438)
bigcat88 Dec 20, 2025
807538f
Core release process. (#11447)
comfyanonymous Dec 21, 2025
91bf6b6
Add node to create empty latents for qwen image layered model. (#11460)
comfyanonymous Dec 22, 2025
c176b21
extend possible duration range for Kling O1 StartEndFrame node (#11451)
bigcat88 Dec 22, 2025
eb0e10a
Update workflow templates to v0.7.62 (#11467)
comfyui-wiki Dec 22, 2025
33aa808
Make denoised output on custom sampler nodes work with nested tensors…
comfyanonymous Dec 22, 2025
f4f44bb
api-nodes: use new custom endpoint for Nano Banana (#11311)
bigcat88 Dec 23, 2025
22ff1bb
chore: update workflow templates to v0.7.63 (#11482)
comfyui-wiki Dec 24, 2025
e4c61d7
ComfyUI v0.6.0
comfyanonymous Dec 24, 2025
650e716
Bump comfyui-frontend-package to 1.35.9 (#11470)
comfy-pr-bot Dec 24, 2025
4f067b0
chore: update workflow templates to v0.7.64 (#11496)
comfyui-wiki Dec 24, 2025
532e285
Add a ManualSigmas node. (#11499)
comfyanonymous Dec 25, 2025
d9a76cf
Specify in readme that we only support pytorch 2.4 and up. (#11512)
comfyanonymous Dec 26, 2025
16fb684
bump comfyui_manager version to the 4.0.4 (#11521)
ltdrdata Dec 26, 2025
1e4e342
Fix noise with ancestral samplers when inferencing on cpu. (#11528)
comfyanonymous Dec 27, 2025
865568b
feat(api-nodes): add Kling Motion Control node (#11493)
bigcat88 Dec 27, 2025
eff4ea0
[V3] converted nodes_images.py to V3 schema (#11206)
bigcat88 Dec 27, 2025
0d2e4bd
fix(api-nodes-gemini): always force enhance_prompt to be True (#11503)
bigcat88 Dec 27, 2025
36deef2
chore(api-nodes): switch to credits instead of $ (#11489)
bigcat88 Dec 27, 2025
2943093
Enable async offload by default for AMD. (#11534)
comfyanonymous Dec 27, 2025
8fd0717
Comment out unused norm_final in lumina/z image model. (#11545)
comfyanonymous Dec 29, 2025
9ca7e14
mm: discard async errors from pinning failures (#10738)
rattus128 Dec 29, 2025
0e6221c
Add some warnings for pin and unpin errors. (#11561)
comfyanonymous Dec 29, 2025
d7111e4
ResizeByLongerSide: support video (#11555)
tavihalperin Dec 30, 2025
25a1bfa
chore(api-nodes-bytedance): mark "seededit" as deprecated, adjust dis…
bigcat88 Dec 30, 2025
178bdc5
Add handling for vace_context in context windows (#11386)
drozbay Dec 30, 2025
f59f71c
ComfyUI version v0.7.0
comfyanonymous Dec 31, 2025
0357ed7
Add support for sage attention 3 in comfyui, enable via new cli arg (…
mengqin Dec 31, 2025
0be8a76
V3 Improvements + DynamicCombo + Autogrow exposed in public API (#11345)
Kosinkadink Dec 31, 2025
6ca3d5c
fix(api-nodes-vidu): preserve percent-encoding for signed URLs (#11564)
bigcat88 Dec 31, 2025
236b9e2
chore: update workflow templates to v0.7.65 (#11579)
comfyui-wiki Dec 31, 2025
d622a61
Refactor: move clip_preprocess to comfy.clip_model (#11586)
comfyanonymous Dec 31, 2025
1bdc9a9
Remove duplicate import of model_management (#11587)
comfyanonymous Jan 1, 2026
65cfcf5
New Year ruff cleanup. (#11595)
comfyanonymous Jan 2, 2026
9e5f677
Ignore all frames except the first one for MPO format. (#11569)
bigcat88 Jan 2, 2026
303b173
Give Mahiro CFG a more appropriate display name (#11580)
throttlekitty Jan 2, 2026
f2fda02
Tripo3D: pass face_limit parameter only when it differs from default …
bigcat88 Jan 2, 2026
9a552df
Remove leftover scaled_fp8 key. (#11603)
comfyanonymous Jan 3, 2026
53e762a
Print memory summary on OOM to help with debugging. (#11613)
comfyanonymous Jan 4, 2026
acbf08c
feat(api-nodes): add support for 720p resolution for Kling Omni nodes…
bigcat88 Jan 4, 2026
38d0493
Fix case where upscale model wouldn't be moved to cpu. (#11633)
comfyanonymous Jan 5, 2026
f2b0023
Support the LTXV 2 model. (#11632)
comfyanonymous Jan 5, 2026
d1b9822
Add LTXAVTextEncoderLoader node. (#11634)
comfyanonymous Jan 5, 2026
d157c32
Refactor module_size function. (#11637)
comfyanonymous Jan 5, 2026
4f3f9e7
Fix name. (#11638)
comfyanonymous Jan 5, 2026
6da00dd
Initial ops changes to use comfy_kitchen: Initial nvfp4 checkpoint su…
comfyanonymous Jan 6, 2026
6ef85c4
Use rope functions from comfy kitchen. (#11647)
comfyanonymous Jan 6, 2026
1618002
Revert "Use rope functions from comfy kitchen. (#11647)" (#11648)
comfyanonymous Jan 6, 2026
e14f3b6
chore: update workflow templates to v0.7.66 (#11652)
comfyui-wiki Jan 6, 2026
96e0d09
Add helpful message to portable. (#11671)
comfyanonymous Jan 6, 2026
6ffc159
Update comfy-kitchen version to 0.2.1 (#11672)
comfyanonymous Jan 6, 2026
c3c3e93
Use rope functions from comfy kitchen. (#11674)
comfyanonymous Jan 6, 2026
c3566c0
chore: update workflow templates to v0.7.67 (#11667)
comfyui-wiki Jan 6, 2026
023cf13
Fix lowvram issue with ltxv2 text encoder. (#11675)
comfyanonymous Jan 6, 2026
6e9ee55
Disable ltxav previews. (#11676)
comfyanonymous Jan 6, 2026
2c03884
Skip fp4 matrix mult on devices that don't support it. (#11677)
comfyanonymous Jan 6, 2026
edee33f
Disable comfy kitchen cuda if pytorch cuda less than 13 (#11681)
comfyanonymous Jan 7, 2026
c5cfb34
Update comfy-kitchen version to 0.2.3 (#11685)
comfyanonymous Jan 7, 2026
ce0000c
Force sequential execution in CI test jobs (#11687)
yoland68 Jan 7, 2026
79e9454
feat(api-nodes): add WAN2.6 ReferenceToVideo (#11644)
bigcat88 Jan 7, 2026
b7d7cc1
Fix fp8 fast issue. (#11688)
comfyanonymous Jan 7, 2026
fc0cb10
ComfyUI v0.8.0
comfyanonymous Jan 7, 2026
c0c9720
Fix stable release workflow not pulling latest comfy kitchen. (#11695)
comfyanonymous Jan 7, 2026
3cd7b32
Support gemma 12B with quant weights. (#11696)
comfyanonymous Jan 7, 2026
48e5ea1
model_patcher: Remove confusing load stat (#11710)
rattus128 Jan 7, 2026
1c705f7
Add device selection for LTXAVTextEncoderLoader (#11700)
kijai Jan 7, 2026
34751fe
Lower ltxv text encoder vram use. (#11713)
comfyanonymous Jan 8, 2026
007b87e
Bump required comfy-kitchen version. (#11714)
comfyanonymous Jan 8, 2026
3cd19e9
Increase ltxav mem estimation by a bit. (#11715)
comfyanonymous Jan 8, 2026
25bc1b5
Add memory estimation function to ltxav text encoder. (#11716)
comfyanonymous Jan 8, 2026
b6c79a6
ops: Fix offloading with FP8MM performance (#11697)
rattus128 Jan 8, 2026
21e8425
Add warning for old pytorch. (#11718)
comfyanonymous Jan 8, 2026
fcd9a23
Update template to 0.7.69 (#11719)
comfyui-wiki Jan 8, 2026
ac12f77
ComfyUI version v0.8.1
comfyanonymous Jan 8, 2026
50d6e1c
Tweak ltxv vae mem estimation. (#11722)
comfyanonymous Jan 8, 2026
2e9d516
ComfyUI version v0.8.2
comfyanonymous Jan 8, 2026
a60b7b8
Revert "Force sequential execution in CI test jobs (#11687)" (#11725)
yoland68 Jan 8, 2026
5943fbf
bump comfyui_manager version to the 4.0.5 (#11732)
ltdrdata Jan 8, 2026
0f11869
Better detection if AMD torch compiled with efficient attention. (#11…
comfyanonymous Jan 8, 2026
1a20656
Fix import issue. (#11746)
comfyanonymous Jan 8, 2026
027042d
Add node: JoinAudioChannels (#11728)
kijai Jan 9, 2026
b48d6a8
Fix csp error in frontend when forcing offline. (#11749)
comfyanonymous Jan 9, 2026
114fc73
Bump comfyui-frontend-package to 1.36.13 (#11645)
comfy-pr-bot Jan 9, 2026
1dc3da6
Add most basic Asset support for models (#11315)
Kosinkadink Jan 9, 2026
6207f86
Fix VAEEncodeForInpaint to support WAN VAE tuple downscale_ratio (#11…
rattus128 Jan 9, 2026
4609fcd
add node - image compare (#11343)
jtydhr88 Jan 9, 2026
04c49a2
feat: add cancelled filter to /jobs (#11680)
ric-yu Jan 9, 2026
ec0a832
Add workaround for hacky nodepack(s) that edit folder_names_and_paths…
Kosinkadink Jan 9, 2026
bd0e682
Be less strict when loading mixed ops weights. (#11769)
comfyanonymous Jan 9, 2026
4484b93
fix(api-nodes): do not downscale the input image for Topaz Enhance (#…
bigcat88 Jan 9, 2026
393d288
feat(api-nodes): added nodes for Vidu2 (#11760)
bigcat88 Jan 9, 2026
133 changes: 116 additions & 17 deletions .ci/update_windows/update.py
@@ -1,6 +1,9 @@
 import pygit2
 from datetime import datetime
 import sys
+import os
+import shutil
+import filecmp

 def pull(repo, remote_name='origin', branch='master'):
     for remote in repo.remotes:
@@ -25,41 +28,137 @@ def pull(repo, remote_name='origin', branch='master'):

                 if repo.index.conflicts is not None:
                     for conflict in repo.index.conflicts:
-                        print('Conflicts found in:', conflict[0].path)
+                        print('Conflicts found in:', conflict[0].path) # noqa: T201
                     raise AssertionError('Conflicts, ahhhhh!!')

                 user = repo.default_signature
                 tree = repo.index.write_tree()
-                commit = repo.create_commit('HEAD',
-                                            user,
-                                            user,
-                                            'Merge!',
-                                            tree,
-                                            [repo.head.target, remote_master_id])
+                repo.create_commit('HEAD',
+                                   user,
+                                   user,
+                                   'Merge!',
+                                   tree,
+                                   [repo.head.target, remote_master_id])
                 # We need to do this or git CLI will think we are still merging.
                 repo.state_cleanup()
             else:
                 raise AssertionError('Unknown merge analysis result')

 pygit2.option(pygit2.GIT_OPT_SET_OWNER_VALIDATION, 0)
-repo = pygit2.Repository(str(sys.argv[1]))
+repo_path = str(sys.argv[1])
+repo = pygit2.Repository(repo_path)
 ident = pygit2.Signature('comfyui', 'comfy@ui')
 try:
-    print("stashing current changes")
+    print("stashing current changes") # noqa: T201
     repo.stash(ident)
 except KeyError:
-    print("nothing to stash")
+    print("nothing to stash") # noqa: T201
+except:
+    print("Could not stash, cleaning index and trying again.") # noqa: T201
+    repo.state_cleanup()
+    repo.index.read_tree(repo.head.peel().tree)
+    repo.index.write()
+    try:
+        repo.stash(ident)
+    except KeyError:
+        print("nothing to stash.") # noqa: T201

 backup_branch_name = 'backup_branch_{}'.format(datetime.today().strftime('%Y-%m-%d_%H_%M_%S'))
-print("creating backup branch: {}".format(backup_branch_name))
-repo.branches.local.create(backup_branch_name, repo.head.peel())
+print("creating backup branch: {}".format(backup_branch_name)) # noqa: T201
+try:
+    repo.branches.local.create(backup_branch_name, repo.head.peel())
+except:
+    pass

-print("checking out master branch")
+print("checking out master branch") # noqa: T201
 branch = repo.lookup_branch('master')
-ref = repo.lookup_reference(branch.name)
-repo.checkout(ref)
+if branch is None:
+    try:
+        ref = repo.lookup_reference('refs/remotes/origin/master')
+    except:
+        print("fetching.") # noqa: T201
+        for remote in repo.remotes:
+            if remote.name == "origin":
+                remote.fetch()
+        ref = repo.lookup_reference('refs/remotes/origin/master')
+    repo.checkout(ref)
+    branch = repo.lookup_branch('master')
+    if branch is None:
+        repo.create_branch('master', repo.get(ref.target))
+else:
+    ref = repo.lookup_reference(branch.name)
+    repo.checkout(ref)

-print("pulling latest changes")
+print("pulling latest changes") # noqa: T201
 pull(repo)

-print("Done!")
+if "--stable" in sys.argv:
+    def latest_tag(repo):
+        versions = []
+        for k in repo.references:
+            try:
+                prefix = "refs/tags/v"
+                if k.startswith(prefix):
+                    version = list(map(int, k[len(prefix):].split(".")))
+                    versions.append((version[0] * 10000000000 + version[1] * 100000 + version[2], k))
+            except:
+                pass
+        versions.sort()
+        if len(versions) > 0:
+            return versions[-1][1]
+        return None
+    latest_tag = latest_tag(repo)
+    if latest_tag is not None:
+        repo.checkout(latest_tag)
+
+print("Done!") # noqa: T201
+
+self_update = True
+if len(sys.argv) > 2:
+    self_update = '--skip_self_update' not in sys.argv
+
+update_py_path = os.path.realpath(__file__)
+repo_update_py_path = os.path.join(repo_path, ".ci/update_windows/update.py")
+
+cur_path = os.path.dirname(update_py_path)
+
+
+req_path = os.path.join(cur_path, "current_requirements.txt")
+repo_req_path = os.path.join(repo_path, "requirements.txt")
+
+
+def files_equal(file1, file2):
+    try:
+        return filecmp.cmp(file1, file2, shallow=False)
+    except:
+        return False
+
+def file_size(f):
+    try:
+        return os.path.getsize(f)
+    except:
+        return 0
+
+
+if self_update and not files_equal(update_py_path, repo_update_py_path) and file_size(repo_update_py_path) > 10:
+    shutil.copy(repo_update_py_path, os.path.join(cur_path, "update_new.py"))
+    exit()
+
+if not os.path.exists(req_path) or not files_equal(repo_req_path, req_path):
+    import subprocess
+    try:
+        subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', '-r', repo_req_path])
+        shutil.copy(repo_req_path, req_path)
+    except:
+        pass
+
+
+stable_update_script = os.path.join(repo_path, ".ci/update_windows/update_comfyui_stable.bat")
+stable_update_script_to = os.path.join(cur_path, "update_comfyui_stable.bat")
+
+try:
+    if not file_size(stable_update_script_to) > 10:
+        shutil.copy(stable_update_script, stable_update_script_to)
+except:
+    pass
8 changes: 7 additions & 1 deletion .ci/update_windows/update_comfyui.bat
@@ -1,2 +1,8 @@
+@echo off
 ..\python_embeded\python.exe .\update.py ..\ComfyUI\
-pause
+if exist update_new.py (
+    move /y update_new.py update.py
+    echo Running updater again since it got updated.
+    ..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update
+)
+if "%~1"=="" pause
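The updater cannot safely overwrite its own file while running, so update.py stages the new copy as `update_new.py` and exits; the batch script above then swaps it in and re-runs with `--skip_self_update`. A minimal sketch of the staging gate, using temporary files in place of the real updater paths:

```python
# Sketch of update.py's self-update gate. If the running updater differs
# from the copy in the freshly pulled repo, the repo copy is staged as
# update_new.py for the batch script to swap in on the next run.
import filecmp
import os
import shutil
import tempfile

def files_equal(file1, file2):
    try:
        return filecmp.cmp(file1, file2, shallow=False)
    except OSError:
        return False

def file_size(f):
    try:
        return os.path.getsize(f)
    except OSError:
        return 0

with tempfile.TemporaryDirectory() as d:
    current = os.path.join(d, "update.py")        # updater next to the portable
    in_repo = os.path.join(d, "repo_update.py")   # updater inside the pulled repo
    staged = os.path.join(d, "update_new.py")
    with open(current, "w") as f:
        f.write("print('old updater')\n")
    with open(in_repo, "w") as f:
        f.write("print('new updater with more logic')\n")
    # The > 10 bytes check guards against staging a truncated or empty file.
    if not files_equal(current, in_repo) and file_size(in_repo) > 10:
        shutil.copy(in_repo, staged)
    print(os.path.exists(staged))  # prints True
```

In the real script this is followed by `exit()`, which hands control back to the batch wrapper.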
3 changes: 0 additions & 3 deletions .ci/update_windows/update_comfyui_and_python_dependencies.bat

This file was deleted.

8 changes: 8 additions & 0 deletions .ci/update_windows/update_comfyui_stable.bat
@@ -0,0 +1,8 @@
@echo off
..\python_embeded\python.exe .\update.py ..\ComfyUI\ --stable
if exist update_new.py (
move /y update_new.py update.py
echo Running updater again since it got updated.
..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update --stable
)
if "%~1"=="" pause


28 changes: 28 additions & 0 deletions .ci/windows_amd_base_files/README_VERY_IMPORTANT.txt
@@ -0,0 +1,28 @@
As of the time of writing this you need this driver for best results:
https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-WINDOWS-PYTORCH-7-1-1.html

HOW TO RUN:

If you have a AMD gpu:

run_amd_gpu.bat

If you have memory issues you can try disabling the smart memory management by running comfyui with:

run_amd_gpu_disable_smart_memory.bat

IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

You can download the stable diffusion XL one from: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors


RECOMMENDED WAY TO UPDATE:
To update the ComfyUI code: update\update_comfyui.bat


TO SHARE MODELS BETWEEN COMFYUI AND ANOTHER UI:
In the ComfyUI directory you will find a file: extra_model_paths.yaml.example
Rename this file to: extra_model_paths.yaml and edit it with your favorite text editor.



@@ -1,2 +1,2 @@
-.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-pytorch-cross-attention
+.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-smart-memory
 pause
2 changes: 2 additions & 0 deletions .ci/windows_nightly_base_files/run_nvidia_gpu_fast.bat
@@ -0,0 +1,2 @@
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast
pause
@@ -4,6 +4,9 @@ if you have a NVIDIA gpu:

run_nvidia_gpu.bat

if you want to enable the fast fp16 accumulation (faster for fp16 models with slightly less quality):

run_nvidia_gpu_fast_fp16_accumulation.bat


To run it in slow CPU mode:
@@ -14,7 +17,7 @@ run_cpu.bat

IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

-You can download the stable diffusion 1.5 one from: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt
+You can download the stable diffusion 1.5 one from: https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/blob/main/v1-5-pruned-emaonly-fp16.safetensors


RECOMMENDED WAY TO UPDATE:
@@ -0,0 +1,3 @@
..\python_embeded\python.exe -s ..\ComfyUI\main.py --windows-standalone-build --disable-api-nodes
echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest. If you get a c10.dll error you need to install vc redist that you can find: https://aka.ms/vc14/vc_redist.x64.exe
pause
3 changes: 3 additions & 0 deletions .ci/windows_nvidia_base_files/run_nvidia_gpu.bat
@@ -0,0 +1,3 @@
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest. If you get a c10.dll error you need to install vc redist that you can find: https://aka.ms/vc14/vc_redist.x64.exe
pause
@@ -0,0 +1,3 @@
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast fp16_accumulation
echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest. If you get a c10.dll error you need to install vc redist that you can find: https://aka.ms/vc14/vc_redist.x64.exe
pause
3 changes: 3 additions & 0 deletions .gitattributes
@@ -0,0 +1,3 @@
/web/assets/** linguist-generated
/web/** linguist-vendored
comfy_api_nodes/apis/__init__.py linguist-generated
58 changes: 58 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -0,0 +1,58 @@
name: Bug Report
description: "Something is broken inside of ComfyUI. (Do not use this if you're just having issues and need help, or if the issue relates to a custom node)"
labels: ["Potential Bug"]
body:
- type: markdown
attributes:
value: |
Before submitting a **Bug Report**, please ensure the following:

- **1:** You are running the latest version of ComfyUI.
- **2:** You have your ComfyUI logs and relevant workflow on hand and will post them in this bug report.
- **3:** You confirmed that the bug is not caused by a custom node. You can disable all custom nodes by passing
`--disable-all-custom-nodes` command line argument. If you have custom node try updating them to the latest version.
- **4:** This is an actual bug in ComfyUI, not just a support question. A bug is when you can specify exact
steps to replicate what went wrong and others will be able to repeat your steps and see the same issue happen.

## Very Important

Please make sure that you post ALL your ComfyUI logs in the bug report. A bug report without logs will likely be ignored.
- type: checkboxes
id: custom-nodes-test
attributes:
label: Custom Node Testing
description: Please confirm you have tried to reproduce the issue with all custom nodes disabled.
options:
- label: I have tried disabling custom nodes and the issue persists (see [how to disable custom nodes](https://docs.comfy.org/troubleshooting/custom-node-issues#step-1%3A-test-with-all-custom-nodes-disabled) if you need help)
required: false
- type: textarea
attributes:
label: Expected Behavior
description: "What you expected to happen."
validations:
required: true
- type: textarea
attributes:
label: Actual Behavior
description: "What actually happened. Please include a screenshot of the issue if possible."
validations:
required: true
- type: textarea
attributes:
label: Steps to Reproduce
description: "Describe how to reproduce the issue. Please be sure to attach a workflow JSON or PNG, ideally one that doesn't require custom nodes to test. If the bug open happens when certain custom nodes are used, most likely that custom node is what has the bug rather than ComfyUI, in which case it should be reported to the node's author."
validations:
required: true
- type: textarea
attributes:
label: Debug Logs
description: "Please copy the output from your terminal logs here."
render: powershell
validations:
required: true
- type: textarea
attributes:
label: Other
description: "Any other additional information you think might be helpful."
validations:
required: false
11 changes: 11 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,11 @@
blank_issues_enabled: true
contact_links:
- name: ComfyUI Frontend Issues
url: https://github.com/Comfy-Org/ComfyUI_frontend/issues
about: Issues related to the ComfyUI frontend (display issues, user interaction bugs), please go to the frontend repo to file the issue
- name: ComfyUI Matrix Space
url: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
about: The ComfyUI Matrix Space is available for support and general discussion related to ComfyUI (Matrix is like Discord but open source).
- name: Comfy Org Discord
url: https://discord.gg/comfyorg
about: The Comfy Org Discord is available for support and general discussion related to ComfyUI.
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.yml
@@ -0,0 +1,32 @@
name: Feature Request
description: "You have an idea for something new you would like to see added to ComfyUI's core."
labels: [ "Feature" ]
body:
- type: markdown
attributes:
value: |
Before submitting a **Feature Request**, please ensure the following:

**1:** You are running the latest version of ComfyUI.
**2:** You have looked to make sure there is not already a feature that does what you need, and there is not already a Feature Request listed for the same idea.
**3:** This is something that makes sense to add to ComfyUI Core, and wouldn't make more sense as a custom node.

If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
- type: textarea
attributes:
label: Feature Idea
description: "Describe the feature you want to see."
validations:
required: true
- type: textarea
attributes:
label: Existing Solutions
description: "Please search through available custom nodes / extensions to see if there are existing custom solutions for this. If so, please link the options you found here as a reference."
validations:
required: false
- type: textarea
attributes:
label: Other
description: "Any other additional information you think might be helpful."
validations:
required: false