Need to consider how to specify the combination of candidate port forwarders for each protocol. Likely to need a YAML structure -- @jandubois
The team discussed the release of several software components, including Lima 2.0 and containerd 2.2, with plans to address various integration issues and bugs before the scheduled release dates. They reviewed multiple open issues and agreed to merge some while postponing others to a later version, including discussions about auto-start functionality and port forwarding configuration. The team also covered GPU acceleration support in QEMU and related settings, with plans to release the Lima 2.0 RC (with the containerd 2.2 RC) before November 5th and the final release on November 8th.
Next Steps
Jan: Review the auto start PR, focusing on macOS side testing
Jan: Implement additional functionality for the github: URL scheme
Akihiro: Merge the auto start PR after confirming tests are passing
Akihiro: Release the Lima 2.0 RC with the containerd 2.2 RC before November 5th
Akihiro: Decide on Lima v1.2 EOL - maintain it for 3 months after the 2.0 release
Akihiro: Postpone direct IP port forwarding PR to version 2.1
Akihiro: Add a GPU YAML field and CLI flag for GPU support
Ansuman: Work on adding GPU YAML field and flag
Ansuman: Compare GPU inferencing against CPU-only inferencing to verify GPU acceleration is working properly
Summary
Lima 2.0 and containerd 2.2 Release Updates:
Akihiro discussed the Lima 2.0 release and its integration with krunkit, which now includes GPU acceleration. He noted some issues with the output from llama.cpp, which he believes may be a bug, and mentioned that the ramalama project is aware of the problem. Akihiro also outlined plans for the containerd 2.2 release, scheduled for November 5th, along with Lima 2.0 and a minimal version. He emphasized the need to resolve the remaining issues before the 2.0 release and considered maintaining Lima 1.2 for three more months after the 2.0 release.
Auto-Start Merge and Issue Review:
Jan and Akihiro discussed the status of several open issues and agreed to merge some that were ready, while postponing others to version 2.1. They reviewed the auto-start functionality, noting that while some tests were failing, they could proceed with merging as it was a low-risk change. Jan agreed to review the auto-start feature, particularly the macOS side, but noted he might not have time until the weekend.
Direct IP Port Forwarding Implementation:
The team discussed direct IP port forwarding functionality, particularly for the VM's IP address with vzNAT. Akihiro explained that this feature would bypass gRPC and SSH tunneling when direct IP access is available, using a new dial function that connects directly to the guest's IP over the host network. Jan expressed confusion about how this fits with the existing SSH and gRPC port forwarder settings in Lima.yaml, and Balaji clarified that this implementation is intended to provide an alternative to gRPC tunnels when vzNAT or similar solutions provide direct IP access.
Port Forwarding Configuration Simplification:
Jan and Akihiro discussed the configuration of port forwarding settings in Lima.yaml, focusing on the need to make them less confusing for users. They agreed that different variables should be used for TCP and UDP, and that the settings should be moved from environment variables to Lima.yaml. Jan suggested postponing the implementation to a later release, as they were making decisions on the fly and wanted to mark it as experimental.
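A purely illustrative sketch of the kind of YAML structure being considered follows; none of these keys exist in Lima today, and every name and value below is invented for illustration only:

```yaml
# Hypothetical lima.yaml fragment: choose port forwarder candidates per
# protocol, in order of preference. All keys and values here are invented;
# the actual design was deferred and would initially be marked experimental.
portForwarding:
  tcp: ["direct", "grpc", "ssh"]  # "direct" = dial the VM's IP when vzNAT or similar exposes it
  udp: ["direct", "grpc"]         # SSH forwarding cannot carry UDP, so it is omitted here
```

A structure like this would also cover the per-protocol candidate combinations raised in @jandubois's note at the top.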
CI and Miscellaneous Updates:
Akihiro discussed a pending update and a strategy for fixing CI failures, which Jan confirmed was safe to merge. Jan mentioned he needed to implement part of a project but could not promise a timeline, indicating it might be moved to version 2.1 if not completed. The conversation ended with Jan adding Anders to the attendee list.
QEMU GPU Support Implementation:
The team discussed implementing GPU support in QEMU, with Akihiro proposing to add a boolean option for GPU acceleration in the generic options. Jan suggested using "GPU pass-through" as it might be more self-explanatory, though "GPU accel" was also considered. The team agreed to use a boolean option for now, with the possibility of adding more specific GPU settings in the future.
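To make the shape of the proposal concrete, a hypothetical lima.yaml fragment follows; the field name, its placement, and the matching limactl flag were all still open at this point, so everything shown is an assumption rather than the final API:

```yaml
# Hypothetical sketch of the boolean GPU option discussed above. The field
# name ("gpu") and its top-level placement are assumptions; the meeting only
# settled on "a boolean for now", leaving room for more specific per-driver
# GPU settings later.
vmType: "qemu"
gpu: true
```

On the CLI side, the agenda's #4257 entry mentions a --gpu flag, so the corresponding override would presumably look like limactl start --gpu, though that name was equally subject to change.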
GPU Settings and Release Planning:
The team discussed GPU-related settings and documentation, agreeing to use "gpu" as the CLI option name unless future use cases require a different name. Jan committed to reviewing the auto-start PR and implementing additional github: URL scheme functionality. Akihiro confirmed that GPU acceleration is working, though there are some output issues with the current implementation. The team agreed to release the 2.0 RC before November 5th, with the final release scheduled for November 8th, ahead of Akihiro's departure for KubeCon.
How to join
The meetings are open to anyone, not just maintainers and contributors.
Attendees
@AkihiroSuda @jandubois @abiosoft @unsuman @balajiv113 @afbjorklund
Agenda
Note taker: @AkihiroSuda
Lima v2.0
krunkit
Merged in v2.0.0-beta.0 https://lima-vm.io/docs/config/vmtype/krunkit/
llama.cpp (via ramalama) detects the GPU and produces some output, but the output seems broken. A bug in llama.cpp, libkrun, or one of their dependencies?
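For reference, a minimal lima.yaml sketch of the krunkit setup being discussed, assuming the backend is selected through the existing vmType field as described at the docs link above:

```yaml
# Minimal sketch: select the krunkit driver for an instance (macOS hosts),
# per https://lima-vm.io/docs/config/vmtype/krunkit/. This is the setup in
# which the GPU is detected and the llama.cpp output issue above appears.
vmType: "krunkit"
```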
v2.0 ETA
Remaining issues and PRs
Nothing seems a blocker for releasing v2.0 GA -- @AkihiroSuda
launchd or systemctl #4139
  limactl start CLI flags -- @AkihiroSuda
github:.../*.yaml references to be symlinks or redirects #4222
LIMA_BATS_ALL_TESTS_RETRIES variable #4224
GPU acceleration: YAML field and a CLI flag #4257
  --gpu; there is still non-accelerated GPU -- @AkihiroSuda
v1.2 EOL
Tasks
github:.../*.yaml references to be symlinks or redirects #4222
GPU acceleration: YAML field and a CLI flag #4257
Next meeting
2025-11-13 (Thursday) 10:15-14:00 @ KubeCon North America (Atlanta) - Building B | Level 1 | Exhibit Hall B3-B5 | Solutions Showcase
2025-11-20 (Thursday)