Releases: LettuceAI/app

Desktop Dev Build #155.1 (fd337f7)

02 Apr 20:59

Pre-release

Automated desktop build artifacts for commit fd337f715b248e962d038afe529dce4cd20fafc6.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23921714345

Changes since previous Desktop build desktop-dev-154-1-6a28ef5

Compare: desktop-dev-154-1-6a28ef5...fd337f7

  • fd337f7 fix(tauri): repair lorebook schema and unicode thinking parsing

Desktop Dev Build #154.1 (6a28ef5)

02 Apr 19:11

Pre-release

Automated desktop build artifacts for commit 6a28ef536f3f7fea22e5d653ba2f512fc434ac96.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23917485227

Changes since previous Desktop build desktop-dev-151-1-a85d2e8

Compare: desktop-dev-151-1-a85d2e8...6a28ef5

  • 6a28ef5 fix(windows): use DXGI video memory out param
  • 7eb9e51 fix(windows): repair DXGI adapter checks
  • d808a70 fix: fix typescript check error
  • 93dedcc feat: add chat settings drawer and redesign session advanced settings
  • becf3d2 refactor: migrate discovery pages to TopNav, add window controls to chat sub-pages
  • 757b7b9 fix: add window controls and drag regions to all non-TopNav pages
  • ac29e13 feat: custom window decorations with frameless titlebar
  • 3170af1 refactor: redesign model editor layout for better horizontal space usage
  • 6b9ccb2 refactor: replace embedded GGUF template modal with inline collapsible panel
  • 31b1115 fix: expose llama.cpp reasoning settings
  • a14a6de fix: share llama backend across runtime and context info
  • 5491b01 feat: add local rp default prompt template
  • 5c13103 fix: support additional thinking tag variants
  • 8ca7834 fix: normalize thinking tags across api and local responses
  • cd2c897 fix(llama-cpp): harden local tool-calling flow and structured-output failures
  • ab00be4 fix: stabilize llama tool calling
  • c16ed7d feat: add lorebook keyword detection modes
  • ceab247 fix: clamp windows vulkan vram estimates
  • 0776b51 fix: correct llama cpu fallback sizing
  • 1ada467 fix: prevent settings reset after llama cpu fallback
  • 0fa482e Merge pull request #40 from GentleBurr/feature/prompt-caching
  • b7d8807 fix: track cache-aware usage and pricing
  • 3f83920 fix: complete prompt caching support
  • d02f523 refactor: remove legacy personas page
  • 336a226 refactor: route persona flows through library
  • 9ae7c66 fix: expand stacked sonner toasts
  • 0294c62 fix: reuse llama backend for context info
  • 045e302 feat: unload local chat model during dynamic memory
  • 394af41 chore: fix memory state test error
  • 46cdf36 feat: add llama model load progress toast
  • 3db3c17 feat(memory): add progress bar to dynamic memory cycle
  • 102d979 feat(chat): add prompt cache TTL and sticky routing
  • 8156954 feat(models): add runability score to edit model page for llama.cpp models
  • 5865d54 feat(ios): add onnxruntime installer workflow
  • 459bfee feat(ollama): add native ollama integration
  • fba299a Update README.md
  • 833b133 fix(backup): preserve dynamic memory settings on restore
  • 010c4a3 Update Cargo.lock

Android Dev Build #181.1 (fd337f7)

02 Apr 20:59

Pre-release

Automated Android build artifacts for commit fd337f715b248e962d038afe529dce4cd20fafc6.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23921704128

Changes since previous Android build android-dev-180-1-d808a70

Compare: android-dev-180-1-d808a70...fd337f7

  • fd337f7 fix(tauri): repair lorebook schema and unicode thinking parsing
  • 6a28ef5 fix(windows): use DXGI video memory out param
  • 7eb9e51 fix(windows): repair DXGI adapter checks

Android Dev Build #180.1 (d808a70)

02 Apr 17:02

Pre-release

Automated Android build artifacts for commit d808a7063195578606b16ef4d2a52ece698211d0.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23912158641

Changes since previous Android build android-dev-178-1-bbb19e0

Compare: android-dev-178-1-bbb19e0...d808a70

  • d808a70 fix: fix typescript check error
  • 93dedcc feat: add chat settings drawer and redesign session advanced settings
  • becf3d2 refactor: migrate discovery pages to TopNav, add window controls to chat sub-pages
  • 757b7b9 fix: add window controls and drag regions to all non-TopNav pages
  • ac29e13 feat: custom window decorations with frameless titlebar
  • 3170af1 refactor: redesign model editor layout for better horizontal space usage
  • 6b9ccb2 refactor: replace embedded GGUF template modal with inline collapsible panel
  • 31b1115 fix: expose llama.cpp reasoning settings
  • a14a6de fix: share llama backend across runtime and context info
  • 5491b01 feat: add local rp default prompt template
  • 5c13103 fix: support additional thinking tag variants
  • 8ca7834 fix: normalize thinking tags across api and local responses
  • cd2c897 fix(llama-cpp): harden local tool-calling flow and structured-output failures
  • ab00be4 fix: stabilize llama tool calling
  • c16ed7d feat: add lorebook keyword detection modes
  • ceab247 fix: clamp windows vulkan vram estimates
  • 0776b51 fix: correct llama cpu fallback sizing
  • 1ada467 fix: prevent settings reset after llama cpu fallback
  • 0fa482e Merge pull request #40 from GentleBurr/feature/prompt-caching
  • b7d8807 fix: track cache-aware usage and pricing
  • 3f83920 fix: complete prompt caching support
  • d02f523 refactor: remove legacy personas page
  • 336a226 refactor: route persona flows through library
  • 9ae7c66 fix: expand stacked sonner toasts
  • 0294c62 fix: reuse llama backend for context info
  • 045e302 feat: unload local chat model during dynamic memory
  • 394af41 chore: fix memory state test error
  • 46cdf36 feat: add llama model load progress toast
  • 3db3c17 feat(memory): add progress bar to dynamic memory cycle
  • 102d979 feat(chat): add prompt cache TTL and sticky routing
  • 8156954 feat(models): add runability score to edit model page for llama.cpp models
  • 5865d54 feat(ios): add onnxruntime installer workflow
  • 459bfee feat(ollama): add native ollama integration
  • fba299a Update README.md
  • 833b133 fix(backup): preserve dynamic memory settings on restore
  • 010c4a3 Update Cargo.lock
  • a85d2e8 fix: update llama.cpp to latest version

Desktop Dev Build #151.1 (a85d2e8)

30 Mar 07:09

Pre-release

Automated desktop build artifacts for commit a85d2e83e1fa4b85e08b0f1f2be80b5a462cd980.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23732399442

Changes since previous Desktop build desktop-dev-150-1-bbb19e0

Compare: desktop-dev-150-1-bbb19e0...a85d2e8

  • a85d2e8 fix: update llama.cpp to latest version

Desktop Dev Build #150.1 (bbb19e0)

29 Mar 23:00

Pre-release

Automated desktop build artifacts for commit bbb19e053e1219e7abea0f0c8e11e89c8fccc5d2.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23721151583

Changes since previous Desktop build desktop-dev-148-1-4a37683

Compare: desktop-dev-148-1-4a37683...bbb19e0

  • bbb19e0 fix(dynamic-memory): reconcile stale processing state on load
  • 3f23e5f fix(dynamic-memory): allow cancelling stale non-idle runs
  • f3d19e0 fix(llama-cpp): reuse loaded models across concurrent requests
  • f828e91 feat(llama-cpp): surface runtime fallback reports
  • eeee3fc fix(llama-cpp): clamp CPU fallback context and batch
  • dfd92cf fix(ollama): support native tool call payloads
  • a326db4 fix(macos): accept extracted ONNX dylibs without Mach-O header check
  • e18471c fix(android): exclude desktop ORT dylib validation on mobile
  • fdfa245 feat(settings): add provider-level streaming controls
  • 382092c fix(api): normalize request timeouts to 30 minutes
  • 1ac100d fix(sync): reset stale local change state
  • 4a192f7 fix(onnxruntime): ignore macos dSYM dylib artifacts

Android Dev Build #178.1 (bbb19e0)

29 Mar 23:00

Pre-release

Automated Android build artifacts for commit bbb19e053e1219e7abea0f0c8e11e89c8fccc5d2.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23721161194

Changes since previous Android build android-dev-177-1-e18471c

Compare: android-dev-177-1-e18471c...bbb19e0

  • bbb19e0 fix(dynamic-memory): reconcile stale processing state on load
  • 3f23e5f fix(dynamic-memory): allow cancelling stale non-idle runs
  • f3d19e0 fix(llama-cpp): reuse loaded models across concurrent requests
  • f828e91 feat(llama-cpp): surface runtime fallback reports
  • eeee3fc fix(llama-cpp): clamp CPU fallback context and batch
  • dfd92cf fix(ollama): support native tool call payloads
  • a326db4 fix(macos): accept extracted ONNX dylibs without Mach-O header check

Desktop Release 1.0.3

28 Mar 22:24

Automated desktop release build artifacts for commit a326db4681cf86c6e9e41a2c747c0598f997adf9.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23695665252

Changes

  • No previous successful Desktop release build tag found.

Android Release 1.3.3

28 Mar 21:46

Automated Android release build artifacts for commit e18471ce639dc1d18907eea4cc97719161300aa3.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23695015549

Changes

  • No previous successful Android release build tag found.

Desktop Dev Build #148.1 (4a37683)

28 Mar 18:52

Pre-release

Automated desktop build artifacts for commit 4a37683f4eef8d5a59916939960e8c3a969d6b88.
Workflow run: https://github.com/LettuceAI/app/actions/runs/23691988314

Changes since previous Desktop build desktop-dev-146-1-b6c0b64

Compare: desktop-dev-146-1-b6c0b64...4a37683

  • 4a37683 fix(onnxruntime): support macos x86_64 downloads