[lldb][test] Refactor and expand TestMemoryRegionDirtyPages.py #156035
This started as me being annoyed that I got loads of this when inspecting memory regions on Mac: Modified memory (dirty) page list provided, 0 entries. So I thought I should test the existing behaviour, which led me to refactor the existing test to run the same checks on all regions. In the process I realised that the output is not wrong. There is a difference between knowing that no pages are dirty and not knowing anything about dirty pages, so saying that 0 are currently dirty is in fact correct. The test now checks "memory region" command output as well as API use. Some checks, such as the page size check, previously ran only on certain regions; they now run on all of them.
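The distinction the description draws (zero dirty pages vs. no dirty-page information at all) comes down to whether the `dirty-pages` key appears in the stub's `qMemoryRegionInfo` reply. A minimal sketch of that tri-state interpretation follows; the `parse_dirty_pages` helper is hypothetical, for illustration only, and is not lldb's actual parser:

```python
def parse_dirty_pages(packet):
    """Return None if the reply said nothing about dirty pages,
    or a (possibly empty) list of page addresses if it did."""
    for field in packet.rstrip(";").split(";"):
        if field.startswith("dirty-pages:"):
            value = field[len("dirty-pages:"):]
            # "dirty-pages:;" means zero pages are dirty -- and we know it.
            if not value:
                return []
            # Addresses are unprefixed hex, comma-separated.
            return [int(a, 16) for a in value.split(",")]
    # No dirty-pages key at all: the stub has no information.
    return None

# Region with no information about dirty pages.
assert parse_dirty_pages("start:0;size:100000000;") is None
# Region known to have zero dirty pages.
assert parse_dirty_pages("start:100000000;size:4000;permissions:rx;dirty-pages:;") == []
# Region with one dirty page.
assert parse_dirty_pages(
    "start:100004000;size:4000;permissions:r;dirty-pages:100004000;"
) == [0x100004000]
```

`None` and `[]` are deliberately different results here, which is why printing "0 entries" for the second case is correct rather than noise.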
@llvm/pr-subscribers-lldb Author: David Spickett (DavidSpickett). Full diff: https://github.com/llvm/llvm-project/pull/156035.diff (1 file affected)
diff --git a/lldb/test/API/functionalities/gdb_remote_client/TestMemoryRegionDirtyPages.py b/lldb/test/API/functionalities/gdb_remote_client/TestMemoryRegionDirtyPages.py
index 9d7e0c0f7af6c..695faf896ef5d 100644
--- a/lldb/test/API/functionalities/gdb_remote_client/TestMemoryRegionDirtyPages.py
+++ b/lldb/test/API/functionalities/gdb_remote_client/TestMemoryRegionDirtyPages.py
@@ -5,60 +5,102 @@
from lldbsuite.test.lldbgdbclient import GDBRemoteTestBase
+class TestRegion(object):
+ def __init__(self, start_addr, size, dirty_pages):
+ self.start_addr = start_addr
+ self.size = size
+ self.dirty_pages = dirty_pages
+
+ def as_packet(self):
+ dirty_pages = ""
+ if self.dirty_pages is not None:
+ dirty_pages = (
+ "dirty-pages:"
+ + ",".join([format(a, "x") for a in self.dirty_pages])
+ + ";"
+ )
+ return f"start:{self.start_addr:x};size:{self.size};permissions:r;{dirty_pages}"
+
+ def expected_command_output(self):
+ if self.dirty_pages is None:
+ return [
+ "Modified memory (dirty) page list provided",
+ "Dirty pages:",
+ ], False
+
+ expected = [
+ f"Modified memory (dirty) page list provided, {len(self.dirty_pages)} entries."
+ ]
+ if self.dirty_pages:
+ expected.append(
+ "Dirty pages: "
+ + ", ".join([format(a, "#x") for a in self.dirty_pages])
+ + "."
+ )
+ return expected, True
+
+
class TestMemoryRegionDirtyPages(GDBRemoteTestBase):
@skipIfXmlSupportMissing
def test(self):
+ test_regions = [
+ # A memory region where we don't know anything about dirty pages
+ TestRegion(0, 0x100000000, None),
+ # A memory region with dirty page information -- and zero dirty pages
+ TestRegion(0x100000000, 4000, []),
+ # A memory region with one dirty page
+ TestRegion(0x100004000, 4000, [0x100004000]),
+        # A memory region with multiple dirty pages
+ TestRegion(
+ 0x1000A2000,
+ 5000,
+ [0x1000A2000, 0x1000A3000, 0x1000A4000, 0x1000A5000, 0x1000A6000],
+ ),
+ ]
+
class MyResponder(MockGDBServerResponder):
def qHostInfo(self):
return "ptrsize:8;endian:little;vm-page-size:4096;"
def qMemoryRegionInfo(self, addr):
- if addr == 0:
- return "start:0;size:100000000;"
- if addr == 0x100000000:
- return "start:100000000;size:4000;permissions:rx;dirty-pages:;"
- if addr == 0x100004000:
- return (
- "start:100004000;size:4000;permissions:r;dirty-pages:100004000;"
- )
- if addr == 0x1000A2000:
- return "start:1000a2000;size:5000;permissions:r;dirty-pages:1000a2000,1000a3000,1000a4000,1000a5000,1000a6000;"
+ for region in test_regions:
+ if region.start_addr == addr:
+ return region.as_packet()
self.server.responder = MyResponder()
target = self.dbg.CreateTarget("")
if self.TraceOn():
self.runCmd("log enable gdb-remote packets")
self.addTearDownHook(lambda: self.runCmd("log disable gdb-remote packets"))
+
process = self.connect(target)
+ lldbutil.expect_state_changes(
+ self, self.dbg.GetListener(), process, [lldb.eStateStopped]
+ )
- # A memory region where we don't know anything about dirty pages
- region = lldb.SBMemoryRegionInfo()
- err = process.GetMemoryRegionInfo(0, region)
- self.assertSuccess(err)
- self.assertFalse(region.HasDirtyMemoryPageList())
- self.assertEqual(region.GetNumDirtyPages(), 0)
- region.Clear()
+ for test_region in test_regions:
+ region = lldb.SBMemoryRegionInfo()
+ err = process.GetMemoryRegionInfo(test_region.start_addr, region)
+ self.assertSuccess(err)
+ self.assertEqual(region.GetPageSize(), 4096)
- # A memory region with dirty page information -- and zero dirty pages
- err = process.GetMemoryRegionInfo(0x100000000, region)
- self.assertSuccess(err)
- self.assertTrue(region.HasDirtyMemoryPageList())
- self.assertEqual(region.GetNumDirtyPages(), 0)
- self.assertEqual(region.GetPageSize(), 4096)
- region.Clear()
+ if test_region.dirty_pages is None:
+ self.assertFalse(region.HasDirtyMemoryPageList())
+ self.assertEqual(0, region.GetNumDirtyPages())
+ else:
+ self.assertTrue(region.HasDirtyMemoryPageList())
+ self.assertEqual(
+ len(test_region.dirty_pages), region.GetNumDirtyPages()
+ )
- # A memory region with one dirty page
- err = process.GetMemoryRegionInfo(0x100004000, region)
- self.assertSuccess(err)
- self.assertTrue(region.HasDirtyMemoryPageList())
- self.assertEqual(region.GetNumDirtyPages(), 1)
- self.assertEqual(region.GetDirtyPageAddressAtIndex(0), 0x100004000)
- region.Clear()
+ for i, expected_dirty_page in enumerate(test_region.dirty_pages):
+ self.assertEqual(
+ expected_dirty_page, region.GetDirtyPageAddressAtIndex(i)
+ )
- # A memory region with multple dirty pages
- err = process.GetMemoryRegionInfo(0x1000A2000, region)
- self.assertSuccess(err)
- self.assertTrue(region.HasDirtyMemoryPageList())
- self.assertEqual(region.GetNumDirtyPages(), 5)
- self.assertEqual(region.GetDirtyPageAddressAtIndex(4), 0x1000A6000)
- region.Clear()
+ substrs, matching = test_region.expected_command_output()
+ self.expect(
+ f"memory region 0x{test_region.start_addr:x}",
+ substrs=substrs,
+ matching=matching,
+ )
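The `TestRegion` helper in the diff above is self-contained Python, so it can be exercised outside the test to see the exact packets the mock GDB server returns. This reproduction is copied from the diff; the `print` calls at the end are added for illustration:

```python
# Reproduced from the diff above: builds qMemoryRegionInfo reply packets.
class TestRegion(object):
    def __init__(self, start_addr, size, dirty_pages):
        self.start_addr = start_addr
        self.size = size
        self.dirty_pages = dirty_pages

    def as_packet(self):
        dirty_pages = ""
        if self.dirty_pages is not None:
            dirty_pages = (
                "dirty-pages:"
                + ",".join([format(a, "x") for a in self.dirty_pages])
                + ";"
            )
        return f"start:{self.start_addr:x};size:{self.size};permissions:r;{dirty_pages}"

# Zero dirty pages keeps an empty dirty-pages field in the packet.
print(TestRegion(0x100000000, 4000, []).as_packet())
# prints: start:100000000;size:4000;permissions:r;dirty-pages:;

# Dirty pages are listed as unprefixed, comma-separated hex addresses.
print(TestRegion(0x100004000, 4000, [0x100004000]).as_packet())
# prints: start:100004000;size:4000;permissions:r;dirty-pages:100004000;
```

Note that when `dirty_pages` is `None`, `as_packet` omits the `dirty-pages` key entirely, which is how the mock server models a stub that reports nothing about dirty pages.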
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/163/builds/27102