From 3d9d84c4f4cbf4163dcfb36b7785fc59a243ac57 Mon Sep 17 00:00:00 2001
From: aditya-xq <32733783+aditya-xq@users.noreply.github.com>
Date: Sun, 16 Nov 2025 16:02:55 +0530
Subject: [PATCH] Update LMStudio documentation link for local server

Fixing the broken link to LM Studio local server docs.
---
 weave/guides/integrations/local_models.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/weave/guides/integrations/local_models.mdx b/weave/guides/integrations/local_models.mdx
index 8029ae60dd..efba0d5141 100644
--- a/weave/guides/integrations/local_models.mdx
+++ b/weave/guides/integrations/local_models.mdx
@@ -25,7 +25,7 @@ In the case of local models, the `api_key` can be any string but it should be ov
 
 Here's a list of apps that allows you to download and run models from Hugging Face on your computer, that support OpenAI SDK compatibility.
 1. Nomic [GPT4All](https://www.nomic.ai/gpt4all) - support via Local Server in settings ([FAQ](https://docs.gpt4all.io/gpt4all_help/faq.html))
-1. [LMStudio](https://lmstudio.ai/) - Local Server OpenAI SDK support [docs](https://lmstudio.ai/docs/local-server)
+1. [LMStudio](https://lmstudio.ai/) - Local Server OpenAI SDK support [docs](https://lmstudio.ai/docs/developer/core/server)
 1. [Ollama](https://ollama.com/) - [Experimental Support](https://github.com/ollama/ollama/blob/main/docs/openai.mdx) for OpenAI SDK
 1. llama.cpp via [llama-cpp-python](https://llama-cpp-python.readthedocs.io/en/latest/server/) python package
 1. [llamafile](https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles) - `http://localhost:8080/v1` automatically supports OpenAI SDK on Llamafile run