The fix is provided to settle with maxmemory value changes after performing hotplug operation. #6695
base: master
Conversation
Walkthrough
A conditional check is added to an assertion in the check_dom_xml logic of libvirt/tests/src/libvirt_mem.py.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
@chloerh Would you please have a look at this PR and help me with review comments? Thanks for your time and support.
Actionable comments posted: 1
🧹 Nitpick comments (1)
libvirt/tests/src/libvirt_mem.py (1)
243-245: Clarify comment: Issue is not hugepage-specific.

The comment associates this behavior with hugepages, but according to the PR description, libvirt aligns max memory to the memory block size (64 MiB or 128 MiB) during any memory hotplug operation, not just when hugepages are used. Consider rephrasing for accuracy.

```diff
-# for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
-# libvirt rounded max memory boundaries to align with hugepage requirements
+# On ppc64, libvirt aligns max memory to memory block size (typically 64/128 MiB)
+# during memory hotplug operations, which may increase the runtime max memory value.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
libvirt/tests/src/libvirt_mem.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Python 3.8
- GitHub Check: Python 3.12
- GitHub Check: Python 3.11
- GitHub Check: Python 3.9
```python
if 'ppc64' in arch:
    # for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
    # libvirt rounded max memory boundaries to align with hugepage requirements
    assert int(max_mem_rt) <= xml_max_mem_rt
else:
    assert int(max_mem_rt) == xml_max_mem_rt
```
Critical: NameError due to undefined variable arch.
The variable arch is only defined at line 518 within a conditional block (if 'align_256m' in params.get('name', '')). When check_dom_xml is called in test scenarios where the test name doesn't contain 'align_256m', the reference to arch at line 242 will raise a NameError, causing the test to crash.
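To make the failure mode concrete, here is a minimal, self-contained repro; the function names follow the review comment, while the bodies and parameters are simplified stand-ins, not the real test code:

```python
def run(test, params, env):
    def check_dom_xml():
        # `arch` is a free variable resolved from run()'s scope; if the
        # conditional below never executed, the lookup fails at call time.
        if 'ppc64' in arch:
            pass

    if 'align_256m' in params.get('name', ''):
        arch = 'ppc64le'

    check_dom_xml()


# Hypothetical test name without 'align_256m', so `arch` is never bound:
run(None, {'name': 'positive_test.hugepages.no_numatune.cold_plug'}, None)
# NameError: free variable 'arch' referenced before assignment
# (exact message varies by Python version)
```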
Apply one of these fixes:
Option 1 (recommended): Define arch unconditionally early in the run function

```diff
+    arch = platform.machine()
+
     pre_vm_state = params.get("pre_vm_state", "running")
```

Then remove the redundant assignment at line 518:

```diff
     if 'align_256m' in params.get('name', ''):
-        arch = platform.machine()
         if arch.lower() != 'ppc64le':
```

Option 2: Call platform.machine() directly in the conditional

```diff
-    if 'ppc64' in arch:
+    if 'ppc64' in platform.machine():
```

Option 3: Add a fallback default

```diff
+    arch = globals().get('arch') or platform.machine()
     if 'ppc64' in arch:
```

Option 1 is cleanest and aligns with the existing pattern of defining configuration variables at the start of run().
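A compact sketch of how Option 1 could look once applied (the structure mirrors the pattern the review describes; the function bodies are simplified stand-ins):

```python
import platform


def run(test, params, env):
    # Bound unconditionally alongside the other configuration variables,
    # so every test path sees a defined value.
    arch = platform.machine()
    pre_vm_state = params.get("pre_vm_state", "running")

    def check_dom_xml(max_mem_rt, xml_max_mem_rt):
        # As a closure over run()'s locals, this sees `arch` regardless of
        # which test variant invoked it.
        if 'ppc64' in arch:
            assert int(max_mem_rt) <= xml_max_mem_rt
        else:
            assert int(max_mem_rt) == xml_max_mem_rt
```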
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if 'ppc64' in arch:
+if 'ppc64' in platform.machine():
     # for arch ppc64 and hugepages, the run time max memory grows and never shrinks.
     # libvirt rounded max memory boundaries to align with hugepage requirements
     assert int(max_mem_rt) <= xml_max_mem_rt
 else:
     assert int(max_mem_rt) == xml_max_mem_rt
```
🤖 Prompt for AI Agents
In libvirt/tests/src/libvirt_mem.py around lines 242 to 247, the code references
`arch` which is only defined much later (line ~518) causing a NameError in many
test paths; define `arch = platform.machine()` (or equivalent) unconditionally
near the start of the run() function alongside other config variables, then
remove the duplicate assignment at line 518 so `arch` is always available when
check_dom_xml runs.
Force-pushed from b6edca4 to 4e0327f
…orming hotplug operation when hugepages are included.
Signed-off-by: Tasmiya Nalatwad <Tasmiya.Nalatwad@ibm.com>
The fix is provided to settle with maxmemory value changes after performing a hotplug operation when hugepages are included.

When you add a memory device (model="dimm"), libvirt ensures the max memory is aligned to the memory block size (usually 64 MiB or 128 MiB). This is normal behavior for hotplug support.

I have hugepages assigned with 2048 KiB in the test, and the test fails when validating the max_memory value before and after the hotplug operation is performed. The max_memory value had changed after hotplug and the test was failing. After analysis I found that the max_memory value was increased by 64 MiB to align with the memory block size. This is because hotplugging a memory DIMM changes the VM's runtime maximum memory, and libvirt automatically adjusts <maxMemory> internally.
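To illustrate the rounding behavior described above, here is a small sketch of the alignment arithmetic; the 64 MiB block size and the sample values are assumptions for illustration, not numbers taken from the test run:

```python
def align_up_kib(value_kib, block_mib=64):
    """Round a KiB value up to the next multiple of the memory block size."""
    block_kib = block_mib * 1024
    return ((value_kib + block_kib - 1) // block_kib) * block_kib


# Hypothetical values: an aligned max memory is left unchanged, while a
# misaligned one is rounded up to the next 64 MiB boundary after hotplug.
print(align_up_kib(4194304))  # 4194304 (4 GiB, already aligned)
print(align_up_kib(4196352))  # 4259840 (rounded up by 62 MiB)
```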
Before Fix:

```
python3 avocado-setup.py --run-suite guest_memory-bck --guest-os RHEL.8.devel.ppc64le --only-filter 'virtio_scsi virtio_net qcow2' --no-download
01:53:40 INFO : Env Config: /home/tests/config/wrapper/env.conf
01:53:40 INFO : No Run Config: /home/tests/config/wrapper/no_run_tests.conf
01:53:40 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
01:53:40 INFO : Check for environment
01:53:42 INFO : Creating temporary mux dir
01:53:42 INFO :
01:53:42 INFO : Running Guest Tests Suite memory-bck
01:53:42 INFO : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/tests/data/avocado-vt/backends/libvirt/cfg/memory-bck.cfg --force-job-id 063573451849b1d2cb4411af32b516449891b8ce --job-results-dir /home/tests/results --vt-only-filter "virtio_scsi virtio_net qcow2 RHEL.8.devel.ppc64le"
JOB ID : 063573451849b1d2cb4411af32b516449891b8ce
JOB LOG : /home/tests/results/job-2025-12-01T01.53-0635734/job.log
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (41.19 s)
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: STARTED
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: FAIL: Found unmatched memory setting from domain xml (30.50 s)
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: STARTED
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: FAIL: Found unmatched memory setting from domain xml (30.78 s)
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: STARTED
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: FAIL: Found unmatched memory setting from domain xml (53.92 s)
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: PASS (4.12 s)
RESULTS : PASS 2 | ERROR 0 | FAIL 3 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/tests/results/job-2025-12-01T01.53-0635734/results.html
JOB TIME : 180.80 s
```

After Fix:
```
python3 avocado-setup.py --run-suite guest_memory-bck --guest-os RHEL.8.devel.ppc64le --only-filter 'virtio_scsi virtio_net qcow2' --no-download
00:55:01 INFO : Env Config: /home/tests/config/wrapper/env.conf
00:55:01 INFO : No Run Config: /home/tests/config/wrapper/no_run_tests.conf
00:55:01 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
00:55:01 INFO : Check for environment
00:55:02 INFO : Creating temporary mux dir
00:55:02 INFO :
00:55:02 INFO : Running Guest Tests Suite memory-bck
00:55:02 INFO : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/tests/data/avocado-vt/backends/libvirt/cfg/memory-bck.cfg --force-job-id 3481891a8148cd06f953d1fddabc678182d7979f --job-results-dir /home/tests/results --vt-only-filter "virtio_scsi virtio_net qcow2 RHEL.8.devel.ppc64le"
JOB ID : 3481891a8148cd06f953d1fddabc678182d7979f
JOB LOG : /home/tests/results/job-2025-12-01T00.55-3481891/job.log
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
(1/5) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (39.91 s)
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: STARTED
(2/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug: PASS (69.05 s)
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: STARTED
(3/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.cold_plug_discard: PASS (69.66 s)
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: STARTED
(4/5) type_specific.io-github-autotest-libvirt.libvirt_mem.positive_test.hugepages.no_numatune.discard: PASS (93.14 s)
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
(5/5) io-github-autotest-libvirt.remove_guest.without_disk: PASS (4.20 s)
RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/tests/results/job-2025-12-01T00.55-3481891/results.html
JOB TIME : 295.56 s
01:00:00 INFO :
01:00:00 INFO : Summary of test results can be found below:
TestSuite TestRun Summary
guest_memory-bck Run Successfully executed
/home/tests/results/job-2025-12-01T00.55-3481891/job.log
| PASS 5 || CANCEL 0 || ERRORS 0 || FAILURES 0 || SKIP 0 || WARN 0 || INTERRUPT 0 |
```