Vision model detection fails for qwen2.5vl:32b despite capabilities field reporting vision #4
Bug Description
A-Eye's "Load Models" feature does not detect qwen2.5vl:32b as a vision model, even though Ollama's /api/show endpoint correctly reports "vision" in the capabilities array for this model.
The qwen2.5vl:72b variant IS detected as a vision model. Both models are identical in structure — same family (qwen25vl), same template, same vision components.
Steps to Reproduce
- Pull `qwen2.5vl:32b` via Ollama (`ollama pull qwen2.5vl:32b`)
- Open A-Eye Settings
- Click "Load Models"
- `qwen2.5vl:32b` does not appear in the Vision Model dropdown
Evidence
Both models report identical capabilities from Ollama:
```json
// qwen2.5vl:32b
{"capabilities": ["completion", "vision"]}

// qwen2.5vl:72b
{"capabilities": ["completion", "vision"]}
```

Both share the same family and vision metadata:

```json
{"family": "qwen25vl", "families": ["qwen25vl"]}
```

Both have identical `qwen25vl.vision.*` keys in `model_info`.
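As a sanity check on the data itself, the capability payloads shown above pass a plain membership test for `"vision"`. This is a minimal sketch using the JSON copied verbatim from the `/api/show` responses, not a live query:

```python
import json

# Capability payloads copied from the /api/show responses quoted above.
show_32b = json.loads('{"capabilities": ["completion", "vision"]}')
show_72b = json.loads('{"capabilities": ["completion", "vision"]}')

def has_vision(show_response: dict) -> bool:
    """The membership check a vision-model filter would be expected to do."""
    return "vision" in show_response.get("capabilities", [])

print(has_vision(show_32b), has_vision(show_72b))  # True True
```

So whatever is failing, it is not the shape of the 32b model's capability data.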
Code Reference
The detection logic in `backend/ollama_client.py` (`list_models_by_capability`) correctly checks for `"vision"` in `capabilities`, so this should work. The issue may be upstream in what specific Ollama versions return for the `capabilities` field, or a caching issue in A-Eye.
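For illustration, here is a minimal sketch of what `list_models_by_capability` presumably does (function name taken from the report; the `/api/show` responses are mocked in a dict rather than fetched over HTTP, and `llama3:8b` is a hypothetical non-vision model added for contrast):

```python
from typing import Callable

# Mocked /api/show responses keyed by model name. Shapes match the
# evidence above; the real client would fetch these from Ollama.
SHOW_RESPONSES = {
    "qwen2.5vl:32b": {"capabilities": ["completion", "vision"]},
    "qwen2.5vl:72b": {"capabilities": ["completion", "vision"]},
    "llama3:8b": {"capabilities": ["completion"]},  # hypothetical, no vision
}

def list_models_by_capability(
    capability: str,
    show: Callable[[str], dict] = SHOW_RESPONSES.get,
) -> list[str]:
    """Return model names whose show-response capabilities include `capability`."""
    return [
        name
        for name in SHOW_RESPONSES
        if capability in (show(name) or {}).get("capabilities", [])
    ]

print(list_models_by_capability("vision"))
```

With both mocked responses reporting `"vision"`, the 32b model is returned alongside the 72b one, which is why the report suspects the discrepancy lies in what the live endpoint returns or in a stale cache rather than in this filter.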
Environment
- A-Eye: latest (Community Applications, Unraid)
- Ollama: 0.19.0
- Tested on both M2 Max (96GB) and M4 Max (128GB) Mac Studios
- Both machines show the same behavior — 32b not detected, 72b detected
Workaround
Currently none — the Vision Model field is dropdown-only with no manual text entry, so there's no way to select the 32b model.
Suggestion
Consider allowing manual text entry in the Vision Model field as a fallback, in addition to the dropdown. This would let users specify any Ollama model regardless of detection issues.