
custom local LLM #5


Description

@mbz4

local inference may improve the end user's sense of security and privacy

downsides include greater power consumption, potentially higher latency while waiting for replies, and less reliable interactions

additionally, we have to invest time in supporting various models

however, we can experiment with the Ollama framework, validate several models, and offer them as options for the user (a minimal sketch follows)
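
to get a feel for validating models, here is a minimal sketch against the Ollama REST API; the model name "llama3" and the default port 11434 are assumptions for illustration, not decisions:

```python
# minimal sketch: talk to a local Ollama server over its REST API
# assumes Ollama is running on the default port 11434 and that a model
# such as "llama3" has already been pulled (e.g. `ollama pull llama3`)
import requests

OLLAMA_URL = "http://localhost:11434"

def list_local_models() -> list[str]:
    """Return the names of models currently available on the local Ollama server."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming prompt to the chosen model and return its reply."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print("available models:", list_local_models())
    print(generate("llama3", "Reply with one short sentence: are you running locally?"))
```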

ideally, the user can configure their use-case, including (a rough sketch follows this list):

  • specifying a locally inferenced LLM (LAN, API)
  • specifying a remotely inferenced LLM (WAN, API)
  • choosing which LLM they want to use
  • getting feedback/info on which LLM is in use at any time
  • switching models reliably at run-time with minimal effort
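
as a starting point for discussion, a hedged sketch of what that configuration and run-time switching could look like; every name, field and endpoint below is a hypothetical example, not a proposed API:

```python
# hypothetical sketch of the user-facing configuration described above:
# the endpoint names, config fields and switch_model() helper are
# illustrative assumptions, not an agreed interface
from dataclasses import dataclass, field

@dataclass
class LLMEndpoint:
    name: str        # label shown to the user, e.g. "local-llama3"
    base_url: str    # LAN address for local inference, WAN address for a remote API
    model: str       # model identifier understood by that endpoint
    local: bool      # True = LAN/local inference, False = WAN/remote API

@dataclass
class LLMConfig:
    endpoints: dict[str, LLMEndpoint] = field(default_factory=dict)
    active: str | None = None   # name of the endpoint currently in use

    def add(self, ep: LLMEndpoint) -> None:
        self.endpoints[ep.name] = ep

    def switch_model(self, name: str) -> LLMEndpoint:
        """Switch the active LLM at run-time; raises if the name is unknown."""
        if name not in self.endpoints:
            raise KeyError(f"unknown LLM endpoint: {name}")
        self.active = name
        return self.endpoints[name]

    def status(self) -> str:
        """Feedback for the user: which LLM is in use right now."""
        if self.active is None:
            return "no LLM selected"
        ep = self.endpoints[self.active]
        where = "local (LAN)" if ep.local else "remote (WAN)"
        return f"using {ep.model} via {where} endpoint {ep.base_url}"

if __name__ == "__main__":
    cfg = LLMConfig()
    cfg.add(LLMEndpoint("local-llama3", "http://192.168.1.20:11434", "llama3", local=True))
    cfg.add(LLMEndpoint("remote-gpt", "https://api.example.com/v1", "gpt-4o", local=False))
    cfg.switch_model("local-llama3")
    print(cfg.status())
    cfg.switch_model("remote-gpt")
    print(cfg.status())
```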

Metadata

Labels

documentation (Improvements or additions to documentation), enhancement (New feature or request)
