
refine: Refine API of mm #88

Merged
junyu0312 merged 1 commit into main from refine on Mar 1, 2026

Conversation

@junyu0312 (Owner) commented Mar 1, 2026

Summary by CodeRabbit

Release Notes

  • Refactor
    • Optimized memory access patterns throughout boot loaders, device management, and memory management layers by removing unnecessary synchronization mechanisms.
    • Improved performance and concurrent access efficiency for memory operations across the system.
    • Streamlined borrowing patterns in device communication to enhance code clarity and maintainability.


coderabbitai bot commented Mar 1, 2026

📝 Walkthrough


This pull request removes Mutex wrappers from memory management across the codebase and converts method signatures from mutable references to immutable references for MemoryAddressSpace and related types. Changes span bootloader implementations, kernel loaders, VirtIO devices, memory managers, and VM initialization components.

Changes

  • Bootloader Trait & Implementations (crates/vm-bootloader/src/boot_loader.rs, crates/vm-bootloader/src/boot_loader/arch/aarch64.rs, crates/vm-bootloader/src/boot_loader/arch/x86_64.rs): Updated the load method and helper methods (load_image, load_initrd, load_dtb) to accept immutable references to MemoryAddressSpace instead of mutable references.
  • Kernel & Initrd Loaders (crates/vm-bootloader/src/kernel_loader.rs, crates/vm-bootloader/src/kernel_loader/linux/bzimage.rs, crates/vm-bootloader/src/kernel_loader/linux/image.rs, crates/vm-bootloader/src/initrd_loader.rs): Changed load method signatures to accept immutable MemoryAddressSpace references while preserving return types and error handling.
  • Memory Management (mm module) (crates/vm-mm/src/allocator.rs, crates/vm-mm/src/allocator/mmap_allocator.rs, crates/vm-mm/src/manager.rs, crates/vm-mm/src/region.rs): Changed the to_hva() receiver from &mut self to &self; changed public methods (gpa_to_hva, memset, copy_from_slice) to take immutable self; refactored internal helpers (get_by_gpa, try_get_region_by_gpa) to use immutable accessors instead of their _mut variants.
  • VirtIO Queue & Device (crates/vm-virtio/src/virt_queue.rs, crates/vm-virtio/src/virt_queue/virtq_desc_table.rs): Changed methods (desc_table_ref, avail_ring, used_ring, addr) to accept immutable MemoryAddressSpace references instead of mutable ones.
  • VirtIO Block Device (crates/vm-device/src/device/virtio/virtio_blk.rs): Replaced the Arc<Mutex<MemoryAddressSpace<C>>> field with Arc<MemoryAddressSpace<C>>; removed all mutex locking/unlocking operations; updated the constructor signature and all memory access patterns to use immutable borrows.
  • VM & Device Management (crates/vm-machine/src/device.rs, crates/vm-machine/src/vm.rs, crates/vm-machine/src/vm_builder.rs): Updated the InitDevice trait and implementations to accept Arc<MemoryAddressSpace<C>> instead of Arc<Mutex<MemoryAddressSpace<C>>>; changed the Vm memory field from a mutex-wrapped to an unwrapped Arc; removed memory locking in the run() method.
  • Host Virtual Address (HVP) Layer (crates/vm-core/src/virt/hvp/mm.rs): Updated the MemoryContainer implementation for MemoryWrapper to use to_hva(&self) instead of to_hva(&mut self).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • PR #74: Related through shared VirtIO block device changes involving removal of Arc<Mutex<MemoryAddressSpace>> and conversion of memory access patterns.
  • PR #75: Related through simultaneous updates to BootLoader::load method signature in crates/vm-bootloader/src/boot_loader.rs.

Poem

🐰 The mutex locks dissolve, references turn immutable,
Memory flows freely now, shared without the mutable—
Simpler paths, cleaner code, no more locking strife,
The bootloader hops along its easier new life! 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 6.45%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'Refine API of mm' accurately describes the main change: refining memory management APIs by changing mutability requirements across multiple memory-related modules and traits.



@coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/vm-mm/src/manager.rs (2)

66-77: ⚠️ Potential issue | 🔴 Critical

Critical: missing len <= buf.len() guard before unsafe copy.

copy_from_slice validates region bounds but not source-slice bounds. If len > buf.len(), the subsequent copy_from call reads beyond the slice's valid memory, causing undefined behavior.

Suggested fix
     pub fn copy_from_slice(&self, gpa: u64, buf: &[u8], len: usize) -> Result<(), Error> {
         let region = self.try_get_region_by_gpa(gpa)?;
         let hva = region.to_hva();
         let offset = gpa - region.gpa;
 
+        if len > buf.len() {
+            return Err(Error::MemoryOverflow);
+        }
+
         if offset + len as u64 > region.len as u64 {
             return Err(Error::MemoryOverflow);
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm-mm/src/manager.rs` around lines 66 - 77, In copy_from_slice, add a
guard that ensures the requested len does not exceed the source slice length
before performing the unsafe copy: check if len <= buf.len() (or else return an
appropriate Err, e.g., Err(Error::InvalidInput)) prior to the unsafe block in fn
copy_from_slice so the subsequent call to hva.add(...).copy_from(buf.as_ptr(),
len) cannot read past buf; keep this check alongside the existing region bounds
check (use the function name copy_from_slice and the Error enum for the return).

43-79: ⚠️ Potential issue | 🔴 Critical

Critical: Unsafe memory mutation from shared reference violates explicit thread-safety bounds + missing buffer validation.

Methods gpa_to_hva, memset, and copy_from_slice use &self but perform unsafe writes via raw pointers. The MemoryContainer trait explicitly requires Send + Sync + 'static, proving concurrent shared access was intended—yet these mutating methods break those safety guarantees when MemoryAddressSpace is wrapped in Arc (as seen in vm.rs and device.rs).

Additionally, copy_from_slice accepts len as an independent parameter without validating len <= buf.len(), allowing out-of-bounds reads from the source buffer despite current call sites using correct patterns.

Restore &mut self for all mutating methods or introduce Mutex<MemoryAddressSpace> consistently throughout the codebase to preserve the explicit Send + Sync contract.

🔧 Suggested direction (restore exclusivity for mutating paths)
-    pub fn gpa_to_hva(&self, gpa: u64) -> Result<*mut u8, Error> {
+    pub fn gpa_to_hva(&mut self, gpa: u64) -> Result<*mut u8, Error> {
@@
-    pub fn memset(&self, gpa: u64, val: u8, len: usize) -> Result<(), Error> {
+    pub fn memset(&mut self, gpa: u64, val: u8, len: usize) -> Result<(), Error> {
@@
-    pub fn copy_from_slice(&self, gpa: u64, buf: &[u8], len: usize) -> Result<(), Error> {
+    pub fn copy_from_slice(&mut self, gpa: u64, buf: &[u8], len: usize) -> Result<(), Error> {
+        if len > buf.len() {
+            return Err(Error::InvalidLength);
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm-mm/src/manager.rs` around lines 43 - 79, The mutating methods
gpa_to_hva (if it can produce writable pointers), memset, and copy_from_slice
currently take &self and perform unsafe writes, violating the Send+Sync contract
of the MemoryContainer/MemoryAddressSpace; change the signatures to take &mut
self (or alternatively require/accept a Mutex<MemoryAddressSpace> and lock it at
call sites) for all methods that perform writes (memset, copy_from_slice and any
gpa_to_hva variants used for mutation), update callers to pass a mutable
reference or lock the mutex, and in copy_from_slice additionally validate that
len <= buf.len() before performing the copy (use
try_get_region_by_gpa/region.len checks already present to keep bounds checks
consistent).
🧹 Nitpick comments (3)
crates/vm-bootloader/src/kernel_loader/linux/bzimage.rs (1)

188-192: Consider removing or updating the commented-out code.

The commented-out install method still references &mut MemoryAddressSpace, which is inconsistent with the new API. If this code is intended for future use, consider updating it to use &MemoryAddressSpace. If it's no longer needed, consider removing it to reduce maintenance burden.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm-bootloader/src/kernel_loader/linux/bzimage.rs` around lines 188 -
192, The commented-out install method in bzimage.rs references an outdated
signature using &mut MemoryAddressSpace<V::Memory>; either remove the commented
block or update it to the new API signature (use &MemoryAddressSpace and the
current type parameters) so the comment no longer misleads—specifically adjust
or delete the commented fn install(...) that mentions MemoryAddressSpace and
V::Memory to match the current MemoryAddressSpace usage in this module.
crates/vm-mm/src/allocator.rs (1)

5-7: Consider documenting the safety contract for to_hva.

The method returns *mut u8 while taking &self, which is an interior mutability pattern. While this is valid (similar to UnsafeCell::get()), the safety requirements for callers using this raw pointer should be documented—particularly regarding concurrent access and lifetime guarantees.

📝 Suggested documentation
 pub trait MemoryContainer: Send + Sync + 'static {
+    /// Returns a raw mutable pointer to the host virtual address of the memory region.
+    ///
+    /// # Safety
+    /// Callers must ensure that:
+    /// - The pointer is not used after the `MemoryContainer` is dropped.
+    /// - Concurrent writes to overlapping regions are properly synchronized.
     fn to_hva(&self) -> *mut u8;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm-mm/src/allocator.rs` around lines 5 - 7, Document the safety
contract for MemoryContainer::to_hva: add a doc comment on the MemoryContainer
trait / to_hva method that specifies the returned *mut u8 is an interior-mutable
pointer valid for as long as the container is not dropped, callers must ensure
they do not use the pointer after the container is freed, describe concurrency
guarantees (whether concurrent mutable/immutable access is allowed or requires
external synchronization despite Send+Sync on the trait), and clarify
aliasing/ownership rules (e.g., callers must avoid undefined behavior from
simultaneous mutable accesses and must uphold any alignment/size expectations).
Reference the MemoryContainer trait and its to_hva method when adding the
documentation.
crates/vm-mm/src/manager.rs (1)

93-97: Recommended: use BTreeMap::range for region lookup instead of full scan.

values().find(...) is O(n). Since keys are sorted by gpa, a range lookup gives a cleaner O(log n) candidate selection.

♻️ Proposed refactor
     fn get_by_gpa(&self, gpa: u64) -> Option<&MemoryRegion<C>> {
-        self.regions
-            .values()
-            .find(|region| gpa >= region.gpa && gpa < region.gpa + region.len as u64)
-            .map(|v| v as _)
+        self.regions
+            .range(..=gpa)
+            .next_back()
+            .and_then(|(_, region)| {
+                (gpa < region.gpa + region.len as u64).then_some(region)
+            })
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm-mm/src/manager.rs` around lines 93 - 97, get_by_gpa currently scans
all regions with values().find(...) which is O(n); change it to use the BTreeMap
range iterator to locate the candidate region in O(log n). In get_by_gpa, query
self.regions.range(..=gpa).next_back() (or equivalent) to get the region with
the largest gpa <= requested gpa, then verify the requested gpa < region.gpa +
region.len as u64 before returning a reference to MemoryRegion<C>; update the
method to return None if no such range entry exists or the check fails.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53bdb1f and 3358d30.

📒 Files selected for processing (18)
  • crates/vm-bootloader/src/boot_loader.rs
  • crates/vm-bootloader/src/boot_loader/arch/aarch64.rs
  • crates/vm-bootloader/src/boot_loader/arch/x86_64.rs
  • crates/vm-bootloader/src/initrd_loader.rs
  • crates/vm-bootloader/src/kernel_loader.rs
  • crates/vm-bootloader/src/kernel_loader/linux/bzimage.rs
  • crates/vm-bootloader/src/kernel_loader/linux/image.rs
  • crates/vm-core/src/virt/hvp/mm.rs
  • crates/vm-device/src/device/virtio/virtio_blk.rs
  • crates/vm-machine/src/device.rs
  • crates/vm-machine/src/vm.rs
  • crates/vm-machine/src/vm_builder.rs
  • crates/vm-mm/src/allocator.rs
  • crates/vm-mm/src/allocator/mmap_allocator.rs
  • crates/vm-mm/src/manager.rs
  • crates/vm-mm/src/region.rs
  • crates/vm-virtio/src/virt_queue.rs
  • crates/vm-virtio/src/virt_queue/virtq_desc_table.rs

@junyu0312 junyu0312 merged commit 206c33c into main Mar 1, 2026
8 checks passed

Labels: none yet

Projects: none yet

1 participant