
feat: Ollama local LLM support#94

Open
f1rep0wr wants to merge 1 commit into THU-MAIC:main from f1rep0wr:feat/ollama-integration

Conversation

@f1rep0wr

Summary

  • Add Ollama as a built-in provider (OpenAI-compatible, no API key needed)
  • 8 default models: llama3.3, llama3.2, qwen2.5, qwen2.5:32b, mistral, gemma3, deepseek-r1, phi4
  • Keyless provider activation: providers with only a baseUrl (no API key) now activate correctly
  • isProviderKeyRequired() helper for consistent key validation across routes
  • i18n strings for Ollama (zh-CN + en-US) and missing Doubao strings
  • SSRF guard documentation clarifying server-configured URLs bypass safely
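The keyless-activation and `isProviderKeyRequired()` bullets above can be sketched roughly as follows. The registry shape and `isProviderActive` are illustrative assumptions, not the PR's actual `lib/types/provider.ts` definitions; only `isProviderKeyRequired` is named in the PR.

```typescript
// Illustrative provider registry entry (field names are assumptions).
interface ProviderEntry {
  id: string;
  baseUrl?: string;
  requiresApiKey: boolean;
}

const registry: Record<string, ProviderEntry> = {
  ollama: { id: "ollama", baseUrl: "http://localhost:11434/v1", requiresApiKey: false },
  openai: { id: "openai", requiresApiKey: true },
};

// Consistent key validation across routes: the answer comes from the
// server's own registry only. Unknown providers default to requiring a key.
function isProviderKeyRequired(providerId: string): boolean {
  return registry[providerId]?.requiresApiKey ?? true;
}

// Keyless activation: a provider with only a baseUrl (no API key) still
// counts as active; key-requiring providers need an actual key.
function isProviderActive(entry: ProviderEntry, apiKey?: string): boolean {
  return entry.requiresApiKey ? Boolean(apiKey) : Boolean(entry.baseUrl);
}
```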

Security: requiresApiKey no longer accepted from client

Previously, clients could send x-requires-api-key: false (header) or requiresApiKey: false (body) to skip API key validation for ANY provider, including ones like OpenAI that require keys. This meant a client could consume server-configured credentials without authorization.

Fix: the server now derives requiresApiKey exclusively from its own provider registry. The client header and body field are removed from all server-side parsing. For Ollama, the registry says requiresApiKey: false so it works keyless. For OpenAI, the registry says true so the check is enforced regardless of what a client sends.
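A minimal sketch of the fix, assuming a helper along these lines (function and variable names are illustrative, not the PR's actual route code):

```typescript
// Server-side registry is the single source of truth for key requirements.
const SERVER_REGISTRY: Record<string, { requiresApiKey: boolean }> = {
  ollama: { requiresApiKey: false },
  openai: { requiresApiKey: true },
};

// The client's x-requires-api-key header (and any body field) is
// deliberately never read; unknown providers default to requiring a key.
function validateKey(
  providerId: string,
  apiKey: string | undefined,
  _clientHeaders: Record<string, string>,
): void {
  const requiresApiKey = SERVER_REGISTRY[providerId]?.requiresApiKey ?? true;
  if (requiresApiKey && !apiKey) {
    throw new Error(`API key required for provider "${providerId}"`);
  }
}
```

With this shape, `validateKey("openai", undefined, { "x-requires-api-key": "false" })` throws despite the spoofed header, while `validateKey("ollama", undefined, {})` passes keyless.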

Test plan

  • Configure OLLAMA_BASE_URL=http://localhost:11434/v1 in .env.local, select Ollama model, verify chat works
  • Verify existing providers (OpenAI, Anthropic, etc.) still require API keys
  • Verify sending x-requires-api-key: false header does NOT bypass key check for OpenAI
  • Verify classroom generation works with Ollama as DEFAULT_MODEL
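The bypass check in the test plan can be exercised manually with something like the following. The `/api/chat` path comes from the PR's touched files; the app port and request body shape are assumptions about the local setup.

```shell
# Enable the keyless Ollama provider.
echo 'OLLAMA_BASE_URL=http://localhost:11434/v1' >> .env.local

# The old bypass must now fail: the header is ignored server-side and
# OpenAI still requires a key. Expect a 4xx status, not a completion.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/api/chat \
  -H 'Content-Type: application/json' \
  -H 'x-requires-api-key: false' \
  -d '{"model":"openai/gpt-4o","messages":[{"role":"user","content":"hi"}]}'
```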

@wyuc
Contributor

wyuc commented Apr 2, 2026

This needs a rebase onto current main. A few things landed after this PR was opened that cause conflicts (Grok provider, MiniMax expansion, model selection refactoring). The main spots are lib/types/provider.ts, lib/ai/providers.ts, lib/server/provider-config.ts, and app/api/chat/route.ts.

Let me know if you need a hand with the rebase.

@f1rep0wr f1rep0wr force-pushed the feat/ollama-integration branch 2 times, most recently from 503c3aa to 3df6684 Compare April 2, 2026 17:57
@f1rep0wr f1rep0wr force-pushed the feat/ollama-integration branch from 3df6684 to 8f92df6 Compare April 2, 2026 18:05
@f1rep0wr
Author

f1rep0wr commented Apr 2, 2026

@wyuc Rebased onto main (95b5c2b); all conflicts in the files you listed are resolved.

  • resolveModel now returns providerId on ResolvedModel, so chat/route.ts and classroom-generation.ts use it directly instead of re-parsing the model string
  • Removed dead requiresApiKey destructure from verify-model/route.ts to match security fix
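The `ResolvedModel`/`providerId` change above might look roughly like this sketch. The exact fields of the real `ResolvedModel` and the `provider/model` string convention are assumptions for illustration.

```typescript
// ResolvedModel now carries providerId, so chat/route.ts and
// classroom-generation.ts can use it directly instead of re-parsing
// the model string themselves.
interface ResolvedModel {
  providerId: string;
  modelId: string;
}

// Illustrative resolver: the "provider/model" string is parsed exactly
// once, centrally, and callers consume the structured result.
function resolveModel(model: string): ResolvedModel {
  const slash = model.indexOf("/");
  if (slash === -1) throw new Error(`Invalid model string: "${model}"`);
  return { providerId: model.slice(0, slash), modelId: model.slice(slash + 1) };
}
```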
