Fix issues encountered in the Android build #370
Conversation
Leuconoe commented on Nov 21, 2025
- Allow model downloads at runtime when includeInBuild is false.
- Enable loading grammar data in Android environments.
- Change file transfer paths during the build (to avoid an antivirus issue).
- Improve exception handling of results from llama.cpp.
…etection when using the temporaryCachePath.
Thank you for the PR 🥇 !!
amakropoulos left a comment
  foreach (ModelEntry modelEntry in modelEntries)
  {
-     if (!modelEntry.includeInBuild) continue;
+     if (!modelEntry.includeInBuild && string.IsNullOrEmpty(modelEntry.url)) continue;
if the model is not included in the build we shouldn't download it.
if you want to achieve the above behavior you can include the model in the build but select the "Download on Build" option in the LLM manager
@amakropoulos So, does that mean it's not a feature that downloads at runtime?
yes, it downloads at runtime.
what would you like to achieve?
@amakropoulos Hello. I apologize in advance as I'm using a translator, and there may be some misunderstandings.
I needed a feature to download models at runtime via URL without including them in the build.
After reviewing my setup, I confirmed that the current version works correctly with the following configuration:
- includeInBuild: set to false
- downloadOnStart: set to true
- LLM.WaitUntilModelSetup() to wait for the download to complete
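As a minimal sketch of that configuration (assuming the LLMUnity API: LLM.WaitUntilModelSetup() is the call named in this thread, while the helper component, the exact signature, and the await usage are my assumptions):

```csharp
// Sketch only: the model entry is registered in the LLM manager with
// includeInBuild = false and downloadOnStart = true, so it is fetched from
// its URL at runtime instead of being packaged with the build.
using UnityEngine;
using LLMUnity;

public class RuntimeModelWaiter : MonoBehaviour  // hypothetical helper component
{
    async void Start()
    {
        // Wait until the runtime download/setup of the model has finished
        // before making any chat calls (exact signature assumed).
        await LLM.WaitUntilModelSetup();
        Debug.Log("Model setup complete, LLM is ready.");
    }
}
```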
I believe the reason I proposed code modifications earlier was due to incorrect settings on my end.
Thank you for the review.
Additional question: I couldn't find the "Download on Build" option you mentioned. Did you perhaps mean the "Download On Start" option?
yes, correct, I meant "Download On Start"
  protected SemaphoreSlim chatLock = new SemaphoreSlim(1, 1);
  protected string chatTemplate;
  protected ChatTemplate template = null;
  protected Task grammarTask;
Great implementation for grammar! This is not needed anymore in release v3.0.0 because I load the grammar from the file directly and just keep it as a serialized string
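For context, a rough sketch of the approach described above (field and method names here are illustrative assumptions, not the actual v3.0.0 code): the grammar file is read once and its contents kept as a serialized string, so no separate loading task is needed at runtime.

```csharp
using System.IO;
using UnityEngine;

public class GrammarHolder : MonoBehaviour
{
    // Serialized so the grammar text survives into the build (including on
    // Android), without re-reading the original file or awaiting a loading task.
    [SerializeField] string grammarString;

    // Hypothetical editor-time helper: read the grammar file directly.
    public void SetGrammar(string grammarPath)
    {
        grammarString = File.ReadAllText(grammarPath);
    }
}
```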
| response = $"{{\"data\": [{responseArray}]}}"; | ||
| } | ||
| return getContent(JsonUtility.FromJson<Res>(response)); | ||
| try |
good error catching - this is now handled from LlamaLib internally.
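A hedged sketch of the pattern being discussed: wrapping the JSON parse of the llama.cpp response so a malformed result is logged instead of throwing unhandled. Res, getContent, and JsonUtility.FromJson come from the snippet above; the wrapper name and everything else are assumptions, and per the comment this is now handled inside LlamaLib.

```csharp
using UnityEngine;

static class ResponseParsing
{
    public static T Parse<Res, T>(string response, System.Func<Res, T> getContent)
    {
        try
        {
            // Parse the raw llama.cpp JSON response and extract its content.
            return getContent(JsonUtility.FromJson<Res>(response));
        }
        catch (System.Exception e)
        {
            // Log malformed or partial responses instead of crashing the caller.
            Debug.LogError($"Failed to parse response: {e.Message}\n{response}");
            return default;
        }
    }
}
```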
Yes. I'll work on a new branch and submit a pull request.