Labels
documentation (Improvements or additions to documentation), enhancement (New feature or request)
Description
Local inferencing may boost the end user's sense of security and privacy.
Down-sides include greater power consumption, potentially higher latency while awaiting replies, and less reliable interactions.
Additionally, we would have to invest time supporting various models.
However, we can experiment with the Ollama framework and validate several models, offering them as options for the user.
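As a starting point for the Ollama experiment, here is a minimal sketch of talking to Ollama's local REST API (`/api/generate` on its default port 11434). The URL and model name are placeholders; only the payload construction runs without a live Ollama instance.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint (placeholder)

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Say hello in one word.")
# response = urllib.request.urlopen(req)  # uncomment with a running Ollama instance
print(req.full_url)
```

Swapping the `model` field is all that is needed to validate different Ollama models against the same prompt set.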
Ideally, the user can configure their use-case, including:
- specify a locally inferenced LLM (LAN, API)
- specify a remotely inferenced LLM (WAN, API)
- choose which LLM to use
- get feedback and info on which LLM is in use at any time
- switch models reliably at run-time
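The configuration points above could be sketched as a small selector that tracks the active LLM, reports it on demand, and rejects switches to unknown models so run-time switching stays reliable. All endpoint names, URLs, and model identifiers below are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMEndpoint:
    name: str      # label shown to the user, e.g. "llama3 (local)"
    base_url: str  # LAN or WAN API endpoint (placeholder values below)
    model: str     # model identifier passed to the API

class LLMSelector:
    """Tracks which LLM is active and allows reliable run-time switching."""

    def __init__(self, endpoints: list[LLMEndpoint]) -> None:
        self._endpoints = {e.name: e for e in endpoints}
        self._active = next(iter(self._endpoints))  # default to the first entry

    def active(self) -> LLMEndpoint:
        """Feedback: which LLM is in use right now."""
        return self._endpoints[self._active]

    def switch(self, name: str) -> LLMEndpoint:
        """Switch models at run-time; unknown names fail loudly."""
        if name not in self._endpoints:
            raise KeyError(f"unknown LLM: {name}")
        self._active = name
        return self._endpoints[name]

local = LLMEndpoint("llama3 (local)", "http://192.168.1.10:11434", "llama3")
remote = LLMEndpoint("gpt-4o (remote)", "https://api.example.com/v1", "gpt-4o")
sel = LLMSelector([local, remote])
print(sel.active().name)       # → llama3 (local)
sel.switch("gpt-4o (remote)")
print(sel.active().name)       # → gpt-4o (remote)
```

Keeping the switch operation explicit (and failing on unknown names) gives the UI a single place to hook in the "which LLM is in use" indicator.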