I have compiled llama.cpp with the LLAMA_CUDA option, but running an edge model does not use the GPU at all. Is there something I should look for in my config?
Also, would it be possible to download models other than the LIBERTY - EDGE models? I assume I could get more inference earnings if I served a more popular model, too.
(Running on Ubuntu Linux with the proprietary NVIDIA drivers.)
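For reference, this is roughly the build-and-check sequence I would expect to need for GPU offload. The model path, `-ngl` value, and binary name are placeholders, and the CMake flag differs by llama.cpp version (recent trees use `GGML_CUDA`, older ones `LLAMA_CUDA`/`LLAMA_CUBLAS`):

```sh
# Build with CUDA support (flag name depends on the llama.cpp version:
# -DGGML_CUDA=ON on recent trees, -DLLAMA_CUDA=ON or -DLLAMA_CUBLAS=ON on older ones)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Run with layers offloaded to the GPU; the model path and -ngl value are examples.
# Older builds name the binary "main" instead of "llama-cli".
./build/bin/llama-cli -m ./models/model.gguf -ngl 99 -p "hello"

# In another terminal, confirm the process actually shows up on the GPU
watch -n 1 nvidia-smi
```

When offload works, the startup log should mention the CUDA device(s) and the number of layers offloaded to the GPU. If it never does, my guess is that the edge runner is launching llama.cpp without an `--n-gpu-layers` / `-ngl` setting, which is what I am trying to confirm.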