Support Ollama, LM Studio, Llama.cpp, and other model providers/servers #426
              
  
Closed

wbste started this conversation in Feature Requests
  
Currently, the endpoint for OpenAI sends a request to /responses, which is an OpenAI-only endpoint, AFAIK. Most "OpenAI-compatible" servers rely on /chat/completions, with the client app maintaining message state. See https://github.com/ollama/ollama/blob/main/docs/openai.md or https://lmstudio.ai/docs/app/api/endpoints/openai.
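For context, a minimal sketch of that stateless /chat/completions flow; the base URL and model name are assumptions (Ollama's OpenAI-compatible endpoint defaults to http://localhost:11434/v1):

```ts
// Most OpenAI-compatible servers expose POST {baseURL}/chat/completions.
// Base URL and model name are assumptions for a local Ollama default.
const baseURL = "http://localhost:11434/v1";

// The client keeps the full conversation; the server is stateless.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Why is the sky blue?" },
];

const res = await fetch(`${baseURL}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "llama3.2", messages }),
});

const data = await res.json();
// Append the assistant reply so the next turn carries the whole history.
messages.push(data.choices[0].message);
```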
It looks like the easiest fix would be to add a few providers:

- https://ai-sdk.dev/providers/community-providers/ollama (although there is a warning at the top about embeddings)
- https://ai-sdk.dev/providers/openai-compatible-providers/lmstudio
I think simply adding @ai-sdk/openai-compatible would take care of Ollama, LM Studio, Llama.cpp, and many other model providers.
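A rough sketch of what that could look like with the AI SDK's createOpenAICompatible; the base URLs below are the usual local defaults, and the model ID is a placeholder for whatever the local server exposes:

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// One provider factory covers any server that speaks /chat/completions.
// Assumed local defaults: Ollama on 11434, LM Studio on 1234;
// llama.cpp's llama-server (typically port 8080) works the same way.
const ollama = createOpenAICompatible({
  name: "ollama",
  baseURL: "http://localhost:11434/v1",
});

const lmstudio = createOpenAICompatible({
  name: "lmstudio",
  baseURL: "http://localhost:1234/v1",
});

// Model ID is hypothetical; swap in lmstudio("...") the same way.
const { text } = await generateText({
  model: ollama("llama3.2"),
  prompt: "Why is the sky blue?",
});
console.log(text);
```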
Replies: 1 comment

Seems I had the wrong keyword when I searched. Found this: #419