
AI Now in remote mode can ignore selected remote provider and use stale local config #121

@HamsteRider-m

Description


Summary

When the desktop app is in remote mode and the UI shows the AI Now model list from the remote server, AI Now can still launch with a stale provider configuration read from local JSON/TOML files instead of the provider currently shown in the UI.

In my case, the UI showed a remote kimi-k2.5 provider on DashScope and marked it as connected/default, but AI Now still read a leftover local localhost:8082 provider first. After manual edits, the client switched to openai_responses for the same DashScope endpoint and failed with a 404, while the remote server was correctly using chat-completions / legacy mode.

Expected behavior

When remote mode is enabled and the frontend is already showing the remote provider/model list, AI Now should resolve and use the same active remote provider configuration automatically.

It should not silently prefer stale local files under ~/Library/Application Support/co.nowledge.mem.desktop/... if those conflict with the provider currently selected in the UI.

Actual behavior

The UI showed a valid remote provider/model, but AI Now launched with a different provider configuration read from residual local files.

This produced two failure modes:

  1. First failure mode
  • UI showed remote kimi-k2.5
  • AI Now runtime loaded local http://localhost:8082/v1
  • Result: connection failure
  2. Second failure mode (after partial local sync)
  • AI Now runtime switched to the DashScope endpoint but chose openai_responses
  • The same endpoint/provider combination should have used chat-completions / legacy mode
  • Result: 404 from the provider
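
The 404 in the second failure mode follows from how OpenAI-style clients map the protocol mode to a request path. The sketch below is purely illustrative (not the product's code); it assumes the usual OpenAI-compatible path conventions:

```python
# Illustrative sketch (assumption: standard OpenAI-compatible paths, not
# the product's actual code): the client builds a different request path
# depending on which API mode the provider config selects.
def request_path(api_format: str) -> str:
    """Map a provider's api_format to the path the client will call."""
    paths = {
        "chat_completions": "/v1/chat/completions",  # legacy / chat-completions mode
        "responses": "/v1/responses",                # Responses API mode
    }
    return paths[api_format]

# An endpoint that only serves chat completions returns 404 when the
# client is configured for the Responses API.
print(request_path("chat_completions"))  # /v1/chat/completions
print(request_path("responses"))         # /v1/responses -> 404 on a chat-only endpoint
```

This is why aligning the protocol mode with the provider metadata (last bullet of the suggested fix) matters even once the base URL is correct.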

Minimal reproduction

  1. Enable remote mode in the desktop app.
  2. Make sure the frontend is showing remote provider/model entries.
  3. Set a remote custom provider as connected/default in the UI.
  4. Keep stale local AI Now config files present from an older provider setup.
  5. Start a new AI Now session.

Observed evidence (redacted)

Local failing runtime loaded stale local config first:

```
Using LLM provider: type='openai_legacy' base_url='http://localhost:8082/v1'
```

After local manual edits, AI Now still chose the wrong protocol mode for the endpoint:

```
Using LLM provider: type='openai_responses' base_url='https://coding.dashscope.aliyuncs.com/v1'
openai.NotFoundError: Error code: 404
```

Manual alignment to the remote server's working config fixed it. The remote server was using the equivalent of:

```
remote_llm.json: openai_compatible + api_format=chat_completions
kimi runtime: openai_legacy
model: kimi-k2.5
base_url: https://coding.dashscope.aliyuncs.com/v1
```
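
For reference, an aligned local provider entry might look something like the following. The file name and field names here are assumptions for illustration only; the product's actual schema is not documented in this report:

```json
{
  "provider": {
    "type": "openai_legacy",
    "api_format": "chat_completions",
    "base_url": "https://coding.dashscope.aliyuncs.com/v1",
    "model": "kimi-k2.5"
  }
}
```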

Why this looks like a bug

The frontend and AI Now runtime appear to resolve provider state from different sources.

The UI had already switched to the remote provider list and displayed the remote model as connected/default, but AI Now still preferred stale local config on disk. That mismatch is very confusing because the visible source of truth in the product says one thing while the runtime uses something else.

Workaround

Manual edits to local files made AI Now work again by forcing the local runtime to match the remote server config. That should not be necessary.

Environment

  • Product: Nowledge Mem desktop app / AI Now
  • Mode: remote mode enabled
  • MCP server connected: nowledge-mem
  • Remote provider shown in UI: custom DashScope-backed kimi-k2.5

Suggested fix direction

  • Make AI Now resolve the provider/model from the same source of truth as the frontend when remote mode is enabled.
  • If local fallback is still needed, detect conflicts and surface a warning instead of silently using stale local files.
  • Keep the protocol mode (chat_completions vs responses) consistent with the selected remote provider metadata.
