Llama cuda support? #138

@jojo2357

Description

I have compiled llama.cpp with the LLAMA_CUDA option, but I notice that running an edge model does not use the GPU at all. Is there something I should look for in my config?

Also, would it be possible to download models other than the LIBERTY - EDGE models? I assume I could earn more from inference if I served a more popular model, too.

(Running on Ubuntu Linux with proprietary nvidia drivers)
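For reference, a build-and-run sequence that typically enables CUDA offload in llama.cpp looks roughly like the sketch below. Note this is an assumption-laden example, not the project's documented setup: the CMake flag name varies across llama.cpp versions (`LLAMA_CUBLAS` in older trees, `LLAMA_CUDA` and later `GGML_CUDA` in newer ones), the binary name differs between releases (`main` vs. `llama-cli`), and `model.gguf` is a placeholder path.

```shell
# Configure llama.cpp with the CUDA backend enabled.
# Flag name depends on the llama.cpp version:
#   older:  -DLLAMA_CUBLAS=ON
#   mid:    -DLLAMA_CUDA=ON   (used here)
#   newer:  -DGGML_CUDA=ON
cmake -B build -DLLAMA_CUDA=ON
cmake --build build --config Release

# GPU offload is opt-in at run time: even in a CUDA build, the model stays
# on the CPU unless a nonzero --n-gpu-layers (-ngl) is passed. A large value
# like 99 offloads as many layers as will fit in VRAM.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "hello"

# While inference runs, confirm the process and VRAM usage appear here.
nvidia-smi
```

If the wrapper that launches llama.cpp does not pass `--n-gpu-layers` (or its API equivalent, `n_gpu_layers`), that alone would explain a CUDA-enabled build that never touches the GPU.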
