Video Generation failing #80

@copypastegod

Description

Updated to the newest Nvidia Studio Drivers 595.79. Video generation starts and the denoising loop runs to completion (8/8 steps), but the Python backend then crashes and exits with code 3221225477. Full session log:

2026-04-03 06:24:45,189 - INFO - [Electron] Session log file: C:\Users\Admin\AppData\Local\LTXDesktop\logs\session_2026-04-02_17-24-45_unknown.log
2026-04-03 06:24:45,227 - INFO - [Electron] [icon] Loading app icon from: C:\AI\LTX\LTX Desktop\resources\icon.ico | exists: false
2026-04-03 06:24:45,323 - INFO - [Renderer] Projects saved: 1
2026-04-03 06:24:45,324 - INFO - [Renderer] Starting Python backend...
2026-04-03 06:24:45,325 - INFO - [Electron] Using bundled Python: C:\Users\Admin\AppData\Local\LTXDesktop\python\python.exe
2026-04-03 06:24:45,325 - INFO - [Electron] Starting Python backend: C:\Users\Admin\AppData\Local\LTXDesktop\python\python.exe C:\AI\LTX\LTX Desktop\resources\backend\ltx2_server.py
2026-04-03 06:24:45,786 - ERROR - [Backend] C:\Users\Admin\AppData\Local\LTXDesktop\python\Lib\site-packages\torch\cuda\__init__.py:65: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2026-04-03 06:24:45,786 - ERROR - [Backend]   import pynvml  # type: ignore[import]
2026-04-03 06:24:46,656 - INFO - [Backend] INFO:__main__:SageAttention enabled - attention operations will be faster
2026-04-03 06:24:46,716 - INFO - [Backend] INFO:__main__:Models directory: C:\Users\Admin\AppData\Local\LTXDesktop\models
2026-04-03 06:24:46,745 - INFO - [Backend] INFO:__main__:Runtime policy force_api_generations=False (system=Windows cuda_available=True vram_gb=15)
2026-04-03 06:24:49,169 - INFO - [Backend] INFO:handlers.settings_handler:Settings loaded from C:\Users\Admin\AppData\Local\LTXDesktop\settings.json
2026-04-03 06:24:49,189 - INFO - [Backend] INFO:__main__:============================================================
2026-04-03 06:24:49,190 - INFO - [Backend] INFO:__main__:LTX-2 Video Generation Server (FastAPI + Uvicorn)
2026-04-03 06:24:49,211 - INFO - [Backend] INFO:__main__:Platform: Windows (AMD64)
2026-04-03 06:24:49,211 - INFO - [Backend] INFO:__main__:Device: cuda  |  Dtype: torch.bfloat16
2026-04-03 06:24:49,211 - INFO - [Backend] INFO:__main__:GPU: NVIDIA GeForce RTX 5080  |  VRAM: 15 GB
2026-04-03 06:24:49,211 - INFO - [Backend] INFO:__main__:SageAttention: enabled
2026-04-03 06:24:49,211 - INFO - [Backend] INFO:__main__:Python: 3.13.12  |  Torch: 2.10.0+cu128
2026-04-03 06:24:49,212 - INFO - [Backend] INFO:__main__:============================================================
2026-04-03 06:24:49,231 - INFO - [Backend] Started server process [17040]
2026-04-03 06:24:49,231 - INFO - [Backend] Waiting for application startup.
2026-04-03 06:24:49,231 - INFO - [Backend] Application startup complete.
2026-04-03 06:24:49,232 - INFO - [Backend] Server running on http://127.0.0.1:14300
2026-04-03 06:24:49,232 - INFO - [Renderer] Checking backend health...
2026-04-03 06:24:49,233 - INFO - [Renderer] Python backend started successfully
2026-04-03 06:24:49,258 - INFO - [Renderer] Backend health: {"status":"ok","models_loaded":false,"active_model":null,"gpu_info":{"name":"NVIDIA GeForce RTX 5080","vram":16303,"vramUsed":726},"sage_attention":true,"models_status":[{"id":"fast","name":"LTX-2 Fast (Distilled)","loaded":false,"downloaded":true}]}
2026-04-03 06:24:49,400 - INFO - [Backend] INFO:_routes.settings:Applied settings patch (changed=none)
2026-04-03 06:24:50,255 - INFO - [Electron] Checking for update...
2026-04-03 06:24:52,320 - INFO - [Backend] INFO:handlers.video_generation_handler:Resolution 540p - using fast pipeline
2026-04-03 06:24:52,376 - INFO - [Backend] INFO:handlers.video_generation_handler:Image: C:\Users\Admin\Downloads\Ltx Desktop Assets\project-1775148378860-pjzh4gnil\zit_image_20260403_061057_9a2049cc.png -> 960x544
2026-04-03 06:24:52,377 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder.__init__ patch for None gemma_root
2026-04-03 06:24:52,377 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder API embeddings patch
2026-04-03 06:24:52,377 - INFO - [Backend] WARNING:services.text_encoder.ltx_text_encoder:Failed to patch cleanup_memory for module ltx_pipelines.retake_pipeline
2026-04-03 06:24:52,378 - INFO - [Backend] Traceback (most recent call last):
2026-04-03 06:24:52,378 - INFO - [Backend]   File "C:\AI\LTX\LTX Desktop\resources\backend\services\text_encoder\ltx_text_encoder.py", line 156, in _install_cleanup_memory_patch
2026-04-03 06:24:52,378 - INFO - [Backend]     module = __import__(module_name, fromlist=["cleanup_memory"])
2026-04-03 06:24:52,378 - INFO - [Backend] ModuleNotFoundError: No module named 'ltx_pipelines.retake_pipeline'
2026-04-03 06:24:52,378 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed cleanup_memory patch
2026-04-03 06:24:52,379 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Generation started (model=fast, 960x544, 121 frames, 24 fps)
2026-04-03 06:24:52,379 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder.__init__ patch for None gemma_root
2026-04-03 06:24:52,379 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Pipeline load: 0.00s
2026-04-03 06:24:52,422 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Text encoding (local): 0.00s
2026-04-03 06:24:53,352 - ERROR - [Backend] Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
2026-04-03 06:25:50,109 - ERROR - [Backend] 
  0%|          | 0/8 [00:00<?, ?it/s]
2026-04-03 06:26:04,768 - ERROR - [Backend] 
 12%|#2        | 1/8 [00:14<01:42, 14.67s/it]
2026-04-03 06:26:06,095 - ERROR - [Backend] 
 25%|##5       | 2/8 [00:16<00:40,  6.82s/it]
2026-04-03 06:26:07,422 - ERROR - [Backend] 
 38%|###7      | 3/8 [00:17<00:21,  4.31s/it]
2026-04-03 06:26:08,772 - ERROR - [Backend] 
 50%|#####     | 4/8 [00:18<00:12,  3.14s/it]
2026-04-03 06:26:10,096 - ERROR - [Backend] 
 62%|######2   | 5/8 [00:20<00:07,  2.49s/it]
2026-04-03 06:26:11,420 - ERROR - [Backend] 
 75%|#######5  | 6/8 [00:21<00:04,  2.09s/it]
2026-04-03 06:26:12,746 - ERROR - [Backend] 
 88%|########7 | 7/8 [00:22<00:01,  1.84s/it]
2026-04-03 06:26:14,069 - ERROR - [Backend] 
100%|##########| 8/8 [00:23<00:00,  1.68s/it]
2026-04-03 06:26:14,070 - ERROR - [Backend] 
100%|##########| 8/8 [00:23<00:00,  3.00s/it]
2026-04-03 06:26:17,964 - INFO - [Electron] Python backend exited with code 3221225477
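For triage: the exit code 3221225477 in the last log line is the unsigned form of the Windows NTSTATUS value 0xC0000005 (STATUS_ACCESS_VIOLATION, i.e. a native segfault), so the crash happens in native code after denoising finishes, not as a Python exception. A quick sketch to decode it (STATUS_ACCESS_VIOLATION is the standard Windows constant name, not anything from the LTX codebase):

```python
# Decode the backend's Windows exit status from the log above.
code = 3221225477

# Interpreted as an unsigned 32-bit NTSTATUS value:
print(hex(code))  # 0xc0000005 -> STATUS_ACCESS_VIOLATION (access violation)

# The same value as a signed 32-bit integer, which is how some tools report it:
print(code - 2**32)  # -1073741819
```

Since the faulting step comes right after the 8/8 denoising progress bar, the crash likely occurs during VAE decode or video export rather than in the sampler itself.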
