
Conversation

@Leuconoe

  1. Allow model downloads at runtime when includeInBuild is false.
  2. Enable loading grammar data in Android environments.
  3. Change file transfer paths during build (to work around an antivirus issue).
  4. Improve exception handling of results from llama.cpp.

@Leuconoe
Author

Related to issues #368 and #369.

@Leuconoe Leuconoe changed the title Fixed issues encountered in the Android build Fix issues encountered in the Android build Nov 21, 2025
@amakropoulos
Collaborator

Thank you for the PR 🥇 !!
I'll look into it soon. I'm working on releasing v3.0.0, which will affect this PR but will help with the restructuring.

Collaborator

@amakropoulos left a comment

Thanks again for the PR!
Most changes are not needed anymore after the LlamaLib reimplementation.
Could you create a new branch off release/v3.0.0 and cherry-pick only the f2827d9 commit?
I have also done it myself here:
#373
but I think it will not show you as the author after merging.

 foreach (ModelEntry modelEntry in modelEntries)
 {
-    if (!modelEntry.includeInBuild) continue;
+    if (!modelEntry.includeInBuild && string.IsNullOrEmpty(modelEntry.url)) continue;
Collaborator

If the model is not included in the build, we shouldn't download it.
If you want to achieve the above behavior, you can include the model in the build but select the "Download on Build" option in the LLM manager.
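
A minimal sketch of the guard as described above (field names are taken from the diff; the surrounding build step is assumed, as it is not shown in this thread):

// Build-time model gathering (sketch): a model excluded from the build is
// skipped outright here; runtime downloading is configured separately in
// the LLM manager rather than via a url check.
foreach (ModelEntry modelEntry in modelEntries)
{
    if (!modelEntry.includeInBuild) continue;
    // ... copy the model into the build ...
}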

Author

@amakropoulos So, does that mean it's not a feature that downloads at runtime?

Collaborator

Yes, it downloads at runtime.
What would you like to achieve?

Author

@amakropoulos Hello. I apologize in advance as I'm using a translator, and there may be some misunderstandings.

I needed a feature to download models at runtime via URL without including them in the build.

After reviewing my setup, I confirmed that the current version works correctly with the following configuration:

  • includeInBuild: set to false
  • downloadOnStart: set to true
  • LLM.WaitUntilModelSetup() to wait for the download to complete

I believe I proposed the earlier code modifications because of incorrect settings on my end.

Thank you for the review.
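
For reference, a minimal sketch of that configuration in a startup script (the wrapping MonoBehaviour is hypothetical; LLM.WaitUntilModelSetup is the call named above):

using UnityEngine;
using LLMUnity;

// Hypothetical bootstrap script: with includeInBuild = false and
// downloadOnStart = true set on the model, wait for the runtime
// download to finish before sending any prompts.
public class ModelBootstrap : MonoBehaviour
{
    async void Start()
    {
        // Completes once the model download/setup triggered on start is done.
        await LLM.WaitUntilModelSetup();
        Debug.Log("Model ready.");
    }
}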

Additional question: I couldn't find the "Download on Build" option you mentioned. Did you perhaps mean the "Download On Start" option?

[screenshot]

Collaborator

Yes, correct, I meant "Download On Start".

 protected SemaphoreSlim chatLock = new SemaphoreSlim(1, 1);
 protected string chatTemplate;
 protected ChatTemplate template = null;
+protected Task grammarTask;

Collaborator

Great implementation for grammar! This is not needed anymore in release v3.0.0, because I load the grammar from the file directly and just keep it as a serialized string.
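
Roughly, the approach described above amounts to something like this (a sketch only; the actual field and method names in v3.0.0 may differ):

[SerializeField] private string grammarString;

// Read the grammar file once (e.g. in the editor) and keep its contents
// as a serialized string, so no file access is needed at runtime on Android.
public void SetGrammarFile(string path)
{
    grammarString = System.IO.File.ReadAllText(path);
}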

     response = $"{{\"data\": [{responseArray}]}}";
 }
-return getContent(JsonUtility.FromJson<Res>(response));
+try

Collaborator

Good error catching - this is now handled by LlamaLib internally.
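
The guard the PR added looked roughly like this (reconstructed from the truncated diff above; the catch body is an assumption):

try
{
    return getContent(JsonUtility.FromJson<Res>(response));
}
catch (System.Exception e)
{
    // Guard against malformed JSON in the llama.cpp response;
    // in v3.0.0 LlamaLib handles this internally instead.
    Debug.LogError($"Failed to parse response: {e.Message}");
    return default;
}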

@Leuconoe
Author

Leuconoe commented Dec 8, 2025

Yes. I'll work on a new branch and submit a pull request.
