
Conversation

@TasmiyaNalatwad (Contributor) commented Dec 1, 2025

This fix accounts for the maxmemory value change that occurs after performing a memory hotplug operation when hugepages are configured.

When you add a memory device (model="dimm"), libvirt ensures the max memory is aligned to the memory block size (usually 64 MiB or 128 MiB). This is normal behavior for hotplug support.
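
For illustration, here is a minimal sketch of the hotplug operation under discussion, using the libvirt-python bindings; the guest name and DIMM size are hypothetical, not taken from the test:

import libvirt

# Hypothetical 512 MiB DIMM device; size and NUMA node are illustrative only.
DIMM_XML = """
<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
</memory>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('rhel8-guest')  # hypothetical guest name
# Live hotplug of the DIMM; as a side effect libvirt may realign the domain's
# <maxMemory> to the memory block size (64/128 MiB).
dom.attachDeviceFlags(DIMM_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)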

I have hugepages assigned with 2048 KiB in the test, and the test fails when validating the max_memory value before and after the hotplug operation. The max_memory value had changed after hotplug, so the test was failing. On analysis I found that the max_memory value was increased by 64 MiB to align with the memory block size. This is because hotplugging a memory DIMM changes the VM’s runtime maximum memory, and libvirt automatically adjusts <maxMemory> internally.
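
To make the alignment arithmetic concrete, a small sketch (the 64 MiB block size is an assumption based on the behavior observed on this guest; the starting value is illustrative):

BLOCK_KIB = 64 * 1024  # assumed memory block size: 64 MiB, expressed in KiB

def align_up(kib, block=BLOCK_KIB):
    # Round up to the next multiple of the memory block size.
    return -(-kib // block) * block

max_mem_before = 10485760 + 2048       # hypothetical maxMemory in KiB, not block-aligned
max_mem_after = align_up(max_mem_before)
print(max_mem_after - max_mem_before)  # 63488 KiB: the value grows by up to one 64 MiB block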

Before Fix:

python3 avocado-setup.py --run-suite guest_memory-bck --guest-os RHEL.8.devel.ppc64le --only-filter 'virtio_scsi virtio_net qcow2' --no-download
01:53:40 INFO : Env Config: /home/tests/config/wrapper/env.conf
01:53:40 INFO : No Run Config: /home/tests/config/wrapper/no_run_tests.conf
01:53:40 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
01:53:40 INFO : Check for environment
01:53:42 INFO : Creating temporary mux dir
01:53:42 INFO :
01:53:42 INFO : Running Guest Tests Suite memory-bck
01:53:42 INFO : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/tests/data/avocado-vt/backends/libvirt/cfg/memory-bck.cfg --force-job-id 063573451849b1d2cb4411af32b516449891b8ce --job-results-dir /home/tests/results --vt-only-filter "virtio_scsi virtio_net qcow2 RHEL.8.devel.ppc64le"
JOB ID : 063573451849b1d2cb4411af32b516449891b8ce
JOB LOG : /home/tests/results/job-2025-12-01T01.53-0635734/job.log
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (41.19 s)
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: STARTED
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: FAIL: Found unmatched memory setting from domain xml (30.50 s)
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: STARTED
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: FAIL: Found unmatched memory setting from domain xml (30.78 s)
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: STARTED
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: FAIL: Found unmatched memory setting from domain xml (53.92 s)
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: PASS (4.12 s)
RESULTS : PASS 2 | ERROR 0 | FAIL 3 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/tests/results/job-2025-12-01T01.53-0635734/results.html
JOB TIME : 180.80 s

After Fix:

python3 avocado-setup.py --run-suite guest_memory-bck --guest-os RHEL.8.devel.ppc64le --only-filter 'virtio_scsi virtio_net qcow2' --no-download
00:55:01 INFO : Env Config: /home/tests/config/wrapper/env.conf
00:55:01 INFO : No Run Config: /home/tests/config/wrapper/no_run_tests.conf
00:55:01 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
00:55:01 INFO : Check for environment
00:55:02 INFO : Creating temporary mux dir
00:55:02 INFO :
00:55:02 INFO : Running Guest Tests Suite memory-bck
00:55:02 INFO : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/tests/data/avocado-vt/backends/libvirt/cfg/memory-bck.cfg --force-job-id 3481891a8148cd06f953d1fddabc678182d7979f --job-results-dir /home/tests/results --vt-only-filter "virtio_scsi virtio_net qcow2 RHEL.8.devel.ppc64le"
JOB ID : 3481891a8148cd06f953d1fddabc678182d7979f
JOB LOG : /home/tests/results/job-2025-12-01T00.55-3481891/job.log
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (39.91 s)
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: STARTED
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: PASS (69.05 s)
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: STARTED
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: PASS (69.66 s)
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: STARTED
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: PASS (93.14 s)
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: PASS (4.20 s)
RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/tests/results/job-2025-12-01T00.55-3481891/results.html
JOB TIME : 295.56 s
01:00:00 INFO :
01:00:00 INFO : Summary of test results can be found below:
TestSuite TestRun Summary

guest_memory-bck Run Successfully executed
/home/tests/results/job-2025-12-01T00.55-3481891/job.log
| PASS 5 || CANCEL 0 || ERRORS 0 || FAILURES 0 || SKIP 0 || WARN 0 || INTERRUPT 0 |

Summary by CodeRabbit

  • Tests
    • Adjusted memory validation tests to allow runtime memory growth on ppc64 architectures (e.g., with hugepages) while keeping strict equality checks for other architectures.



coderabbitai bot commented Dec 1, 2025

Walkthrough

A conditional check is added to an assertion in the check_dom_xml function in libvirt/tests/src/libvirt_mem.py. The assertion that compared max_mem_rt to xml_max_mem_rt now allows max_mem_rt <= xml_max_mem_rt for architectures containing "ppc64", while preserving max_mem_rt == xml_max_mem_rt for other architectures. A comment explaining the PPC64-specific rationale was added.
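
A condensed, illustrative sketch of the described check (the real check_dom_xml takes different arguments and performs more validation; only the assertion of interest is shown):

import platform

arch = platform.machine()  # e.g. 'ppc64le' on the failing host

def check_max_mem(max_mem_rt, xml_max_mem_rt):
    if 'ppc64' in arch:
        # On ppc64 the runtime max memory can only grow (block-size alignment
        # on hotplug), so the XML value may exceed the configured one.
        assert int(max_mem_rt) <= xml_max_mem_rt
    else:
        assert int(max_mem_rt) == xml_max_mem_rt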

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Single file (libvirt/tests/src/libvirt_mem.py) and single function changed
  • Small conditional logic addition; low cognitive load
  • Areas to check:
    • Correct detection of PPC64 architectures (string containment)
    • Comment accuracy and whether additional test coverage is needed for PPC64 behavior

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: Passed. The title describes max memory value changes after hotplug, which directly relates to the primary change: adjusting test assertions for architecture-specific max memory behavior.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b6edca4 and 4e0327f.

📒 Files selected for processing (1)
  • libvirt/tests/src/libvirt_mem.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. (4)
  • GitHub Check: Python 3.11
  • GitHub Check: Python 3.9
  • GitHub Check: Python 3.8
  • GitHub Check: Python 3.12


@TasmiyaNalatwad TasmiyaNalatwad changed the title The fix is provided to settle with maxmemory value changes after perf… The fix is provided to settle with maxmemory value changes after perorming hotplug operation when hugepages are included. Dec 1, 2025
@TasmiyaNalatwad (Contributor, Author)

@chloerh Would you please have a look at this PR and help me with review comments? Thanks for your time and support.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
libvirt/tests/src/libvirt_mem.py (1)

243-245: Clarify comment: Issue is not hugepage-specific.

The comment associates this behavior with hugepages, but according to the PR description, libvirt aligns max memory to the memory block size (64 MiB or 128 MiB) during any memory hotplug operation, not just when hugepages are used. Consider rephrasing for accuracy.

-            # for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
-            # libvirt rounded max memory boundaries to align with hugepage requirements
+            # On ppc64, libvirt aligns max memory to memory block size (typically 64/128 MiB)
+            # during memory hotplug operations, which may increase the runtime max memory value.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3288923 and b6edca4.

📒 Files selected for processing (1)
  • libvirt/tests/src/libvirt_mem.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. (4)
  • GitHub Check: Python 3.8
  • GitHub Check: Python 3.12
  • GitHub Check: Python 3.11
  • GitHub Check: Python 3.9

Comment on lines 242 to 247
            if 'ppc64' in arch:
                # for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
                # libvirt rounded max memory boundaries to align with hugepage requirements
                assert int(max_mem_rt) <= xml_max_mem_rt
            else:
                assert int(max_mem_rt) == xml_max_mem_rt

⚠️ Potential issue | 🔴 Critical

Critical: NameError due to undefined variable arch.

The variable arch is only defined at line 518 within a conditional block (if 'align_256m' in params.get('name', '')). When check_dom_xml is called in test scenarios where the test name doesn't contain 'align_256m', the reference to arch at line 242 will raise a NameError, causing the test to crash.

Apply one of these fixes:

Option 1 (recommended): Define arch unconditionally early in the run function

+    arch = platform.machine()
+
     pre_vm_state = params.get("pre_vm_state", "running")

Then remove the redundant assignment at line 518:

     if 'align_256m' in params.get('name', ''):
-        arch = platform.machine()
         if arch.lower() != 'ppc64le':

Option 2: Call platform.machine() directly in the conditional

-            if 'ppc64' in arch:
+            if 'ppc64' in platform.machine():

Option 3: Add a fallback default

+            arch = globals().get('arch') or platform.machine()
             if 'ppc64' in arch:

Option 1 is cleanest and aligns with the existing pattern of defining configuration variables at the start of run().
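
For reference, a condensed sketch of how the pieces fit once Option 1 is applied (signatures and surrounding code are simplified, not the actual file contents):

import platform

def run(test, params, env):
    # Defined once with the other configuration variables, so the nested
    # check_dom_xml always sees it through the closure and no NameError occurs.
    arch = platform.machine()
    pre_vm_state = params.get("pre_vm_state", "running")

    def check_dom_xml(max_mem_rt, xml_max_mem_rt):
        if 'ppc64' in arch:
            assert int(max_mem_rt) <= xml_max_mem_rt
        else:
            assert int(max_mem_rt) == xml_max_mem_rt

    # ... rest of run(); the 'align_256m' branch no longer needs its own
    # arch = platform.machine() assignment.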

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-            if 'ppc64' in arch:
+            if 'ppc64' in platform.machine():
                 # for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
                 # libvirt rounded max memory boundaries to align with hugepage requirements
                 assert int(max_mem_rt) <= xml_max_mem_rt
             else:
                 assert int(max_mem_rt) == xml_max_mem_rt
🤖 Prompt for AI Agents
In libvirt/tests/src/libvirt_mem.py around lines 242 to 247, the code references
`arch` which is only defined much later (line ~518) causing a NameError in many
test paths; define `arch = platform.machine()` (or equivalent) unconditionally
near the start of the run() function alongside other config variables, then
remove the duplicate assignment at line 518 so `arch` is always available when
check_dom_xml runs.

@TasmiyaNalatwad TasmiyaNalatwad force-pushed the fix_libvirt_maxmem_support_ppc64 branch from b6edca4 to 4e0327f on December 1, 2025 07:17
…orming hotplug operation when hugepages are included.

When you add a memory device (model="dimm"), libvirt ensures the max memory is aligned to the memory block size (usually 64 MiB or 128 MiB). This is normal behavior for hotplug support.

I have hugepages assigned with 2048 KiB in the test, and the test fails when validating the max_memory value before and after the hotplug operation. The max_memory value had changed after hotplug, so the test was failing. On analysis I found that the max_memory value was increased by 64 MiB to align with the memory block size. This is because hotplugging a memory DIMM changes the VM’s runtime maximum memory, and libvirt automatically adjusts <maxMemory> internally.

Signed-off-by: Tasmiya Nalatwad <Tasmiya.Nalatwad@ibm.com>
@TasmiyaNalatwad TasmiyaNalatwad changed the title The fix is provided to settle with maxmemory value changes after perorming hotplug operation when hugepages are included. The fix is provided to settle with maxmemory value changes after perorming hotplug operation. Dec 1, 2025
