Could not connect to the generation server error for I2V and T2V #83

@Tylervp

Description

When trying to run I2V or T2V, I get the error: "Generation failed. Could not connect to the generation server. Make sure the backend is running."

Immediately after, it shows: "Reconnecting. The backend process stopped unexpectedly. Attempting to restart..."

Things I have tried:

  • Switching encoders to use the API
  • Running the app as Admin
  • Changing image resolutions
  • Turning on/off prompt enhancement

Text-to-image works fine, as does generating videos through the API.

Full log:

2026-04-02 21:17:00,675 - INFO - [Electron] Session log file: C:\Users\tyler\AppData\Local\LTXDesktop\logs\session_2026-04-03_02-17-00_unknown.log
2026-04-02 21:17:00,722 - INFO - [Electron] [icon] Loading app icon from: C:\Users\tyler\AppData\Local\Programs\LTX Desktop\resources\icon.ico | exists: false
2026-04-02 21:17:00,843 - INFO - [Renderer] Projects saved: 1
2026-04-02 21:17:00,844 - INFO - [Renderer] Starting Python backend...
2026-04-02 21:17:00,845 - INFO - [Electron] Using bundled Python: C:\Users\tyler\AppData\Local\LTXDesktop\python\python.exe
2026-04-02 21:17:00,845 - INFO - [Electron] Starting Python backend: C:\Users\tyler\AppData\Local\LTXDesktop\python\python.exe C:\Users\tyler\AppData\Local\Programs\LTX Desktop\resources\backend\ltx2_server.py
2026-04-02 21:17:01,428 - ERROR - [Backend] C:\Users\tyler\AppData\Local\LTXDesktop\python\Lib\site-packages\torch\cuda\__init__.py:65: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2026-04-02 21:17:01,429 - ERROR - [Backend] import pynvml # type: ignore[import]
2026-04-02 21:17:02,593 - INFO - [Backend] INFO:main:SageAttention enabled - attention operations will be faster
2026-04-02 21:17:02,606 - INFO - [Backend] INFO:main:Models directory: C:\Users\tyler\AppData\Local\LTXDesktop\models
2026-04-02 21:17:02,641 - INFO - [Backend] INFO:main:Runtime policy force_api_generations=False (system=Windows cuda_available=True vram_gb=23)
2026-04-02 21:17:05,740 - INFO - [Backend] INFO:handlers.settings_handler:Settings loaded from C:\Users\tyler\AppData\Local\LTXDesktop\settings.json
2026-04-02 21:17:05,759 - INFO - [Electron] Checking for update...
2026-04-02 21:17:05,769 - INFO - [Backend] INFO:main:============================================================
2026-04-02 21:17:05,769 - INFO - [Backend] INFO:main:LTX-2 Video Generation Server (FastAPI + Uvicorn)
2026-04-02 21:17:05,790 - INFO - [Backend] INFO:main:Platform: Windows (AMD64)
2026-04-02 21:17:05,791 - INFO - [Backend] INFO:main:Device: cuda | Dtype: torch.bfloat16
2026-04-02 21:17:05,791 - INFO - [Backend] INFO:main:GPU: NVIDIA GeForce RTX 4090 | VRAM: 23 GB
2026-04-02 21:17:05,791 - INFO - [Backend] INFO:main:SageAttention: enabled
2026-04-02 21:17:05,791 - INFO - [Backend] INFO:main:Python: 3.13.12 | Torch: 2.10.0+cu128
2026-04-02 21:17:05,791 - INFO - [Backend] INFO:main:============================================================
2026-04-02 21:17:05,816 - INFO - [Backend] Started server process [33244]
2026-04-02 21:17:05,816 - INFO - [Backend] Waiting for application startup.
2026-04-02 21:17:05,816 - INFO - [Backend] Application startup complete.
2026-04-02 21:17:05,816 - INFO - [Backend] Server running on http://127.0.0.1:55353
2026-04-02 21:17:05,817 - INFO - [Renderer] Checking backend health...
2026-04-02 21:17:05,817 - INFO - [Renderer] Python backend started successfully
2026-04-02 21:17:05,853 - INFO - [Renderer] Backend health: {"status":"ok","models_loaded":false,"active_model":null,"gpu_info":{"name":"NVIDIA GeForce RTX 4090","vram":24564,"vramUsed":2601},"sage_attention":true,"models_status":[{"id":"fast","name":"LTX-2 Fast (Distilled)","loaded":false,"downloaded":true}]}
2026-04-02 21:17:05,994 - INFO - [Backend] INFO:_routes.settings:Applied settings patch (changed=none)
2026-04-02 21:17:22,497 - INFO - [Backend] INFO:handlers.video_generation_handler:Resolution 540p - using fast pipeline
2026-04-02 21:17:22,515 - INFO - [Backend] INFO:handlers.video_generation_handler:Image: C:\Users\tyler\Desktop\TestLTX4.jpg -> 544x960
2026-04-02 21:17:22,516 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder.__init__ patch for None gemma_root
2026-04-02 21:17:22,516 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder API embeddings patch
2026-04-02 21:17:22,516 - INFO - [Backend] WARNING:services.text_encoder.ltx_text_encoder:Failed to patch cleanup_memory for module ltx_pipelines.retake_pipeline
2026-04-02 21:17:22,516 - INFO - [Backend] Traceback (most recent call last):
2026-04-02 21:17:22,517 - INFO - [Backend] File "C:\Users\tyler\AppData\Local\Programs\LTX Desktop\resources\backend\services\text_encoder\ltx_text_encoder.py", line 156, in install_cleanup_memory_patch
2026-04-02 21:17:22,517 - INFO - [Backend] module = __import__(module_name, fromlist=["cleanup_memory"])
2026-04-02 21:17:22,517 - INFO - [Backend] ModuleNotFoundError: No module named 'ltx_pipelines.retake_pipeline'
2026-04-02 21:17:22,517 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed cleanup_memory patch
2026-04-02 21:17:22,517 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Generation started (model=fast, 544x960, 121 frames, 24 fps)
2026-04-02 21:17:22,517 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Installed PromptEncoder.__init__ patch for None gemma_root
2026-04-02 21:17:22,517 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Pipeline load: 0.00s
2026-04-02 21:17:28,209 - INFO - [Backend] INFO:services.text_encoder.ltx_text_encoder:Text encoded via API in 5.6s
2026-04-02 21:17:28,210 - INFO - [Backend] INFO:handlers.video_generation_handler:[i2v] Text encoding (api): 5.65s
2026-04-02 21:17:28,329 - INFO - [Electron] Python backend exited with code 3221225477
2026-04-02 21:17:28,329 - INFO - [Electron] Using bundled Python: C:\Users\tyler\AppData\Local\LTXDesktop\python\python.exe
2026-04-02 21:17:28,329 - INFO - [Electron] Starting Python backend: C:\Users\tyler\AppData\Local\LTXDesktop\python\python.exe C:\Users\tyler\AppData\Local\Programs\LTX Desktop\resources\backend\ltx2_server.py
2026-04-02 21:17:28,869 - ERROR - [Backend] C:\Users\tyler\AppData\Local\LTXDesktop\python\Lib\site-packages\torch\cuda\__init__.py:65: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2026-04-02 21:17:28,869 - ERROR - [Backend] import pynvml # type: ignore[import]
2026-04-02 21:17:29,974 - INFO - [Backend] INFO:main:SageAttention enabled - attention operations will be faster
2026-04-02 21:17:29,985 - INFO - [Backend] INFO:main:Models directory: C:\Users\tyler\AppData\Local\LTXDesktop\models
2026-04-02 21:17:30,019 - INFO - [Backend] INFO:main:Runtime policy force_api_generations=False (system=Windows cuda_available=True vram_gb=23)
2026-04-02 21:17:33,103 - INFO - [Backend] INFO:handlers.settings_handler:Settings loaded from C:\Users\tyler\AppData\Local\LTXDesktop\settings.json
2026-04-02 21:17:33,130 - INFO - [Backend] INFO:main:============================================================
2026-04-02 21:17:33,130 - INFO - [Backend] INFO:main:LTX-2 Video Generation Server (FastAPI + Uvicorn)
2026-04-02 21:17:33,151 - INFO - [Backend] INFO:main:Platform: Windows (AMD64)
2026-04-02 21:17:33,151 - INFO - [Backend] INFO:main:Device: cuda | Dtype: torch.bfloat16
2026-04-02 21:17:33,151 - INFO - [Backend] INFO:main:GPU: NVIDIA GeForce RTX 4090 | VRAM: 23 GB
2026-04-02 21:17:33,151 - INFO - [Backend] INFO:main:SageAttention: enabled
2026-04-02 21:17:33,152 - INFO - [Backend] INFO:main:Python: 3.13.12 | Torch: 2.10.0+cu128
2026-04-02 21:17:33,152 - INFO - [Backend] INFO:main:============================================================
2026-04-02 21:17:33,176 - INFO - [Backend] Started server process [37352]
2026-04-02 21:17:33,176 - INFO - [Backend] Waiting for application startup.
2026-04-02 21:17:33,177 - INFO - [Backend] Application startup complete.
2026-04-02 21:17:33,177 - INFO - [Backend] Server running on http://127.0.0.1:60119
2026-04-02 21:17:33,178 - INFO - [Renderer] Checking backend health...
2026-04-02 21:17:33,198 - INFO - [Renderer] Backend health: {"status":"ok","models_loaded":false,"active_model":null,"gpu_info":{"name":"NVIDIA GeForce RTX 4090","vram":24564,"vramUsed":2708},"sage_attention":true,"models_status":[{"id":"fast","name":"LTX-2 Fast (Distilled)","loaded":false,"downloaded":true}]}
2026-04-02 21:17:33,340 - INFO - [Backend] INFO:_routes.settings:Applied settings patch (changed=none)
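For what it's worth: the exit code in "Python backend exited with code 3221225477" decodes to the Windows NTSTATUS value 0xC0000005 (STATUS_ACCESS_VIOLATION), i.e. the backend process died from a native access violation right after the "Text encoding (api): 5.65s" step, not from a Python exception. That would suggest the crash is in native code (CUDA, the attention kernels, or similar), though the log alone can't pin down which module. A quick sketch to confirm the decoding:

```python
# Decode the Windows process exit code reported in the Electron log line
# "Python backend exited with code 3221225477".
exit_code = 3221225477

# Windows returns fatal native errors as unsigned NTSTATUS values;
# viewed as hex, this one is the well-known access-violation status.
print(hex(exit_code))  # 0xc0000005

assert exit_code == 0xC0000005  # STATUS_ACCESS_VIOLATION
```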
