runtime: MADV_HUGEPAGE causes stalls when allocating memory #61718
Yeah, this is unfortunate. The immediate workaround is to set the system's transparent hugepage defrag setting to something that doesn't make madvise(MADV_HUGEPAGE) block on direct reclaim.

Here's one idea to resolve this: only use MADV_HUGEPAGE in situations where it can't cause these stalls. One other idea is to use the new MADV_COLLAPSE instead.

Linux defaults are working against us here. They're really not great! The best, quickest fix would be a way to clear the MADV_NOHUGEPAGE hint without also setting MADV_HUGEPAGE, but the kernel doesn't provide one.
@randall77 pointed out to me that we might be able to mitigate the issue by eagerly accessing the memory region we just set to MADV_HUGEPAGE, so that any direct reclaim happens up front rather than on a later allocation.
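A minimal sketch of that mitigation idea, using golang.org/x/sys/unix (illustrative only; the runtime itself calls madvise through its own syscall wrappers):

```go
// Mark a region MADV_HUGEPAGE and touch it immediately, so any direct reclaim
// triggered by huge page allocation happens here rather than on a later
// allocation path. Illustrative sketch, not the runtime's actual code.
package main

import "golang.org/x/sys/unix"

const pageSize = 4 << 10

func markAndPrefault(mem []byte) error {
	// Ask the kernel to back this region with transparent huge pages.
	if err := unix.Madvise(mem, unix.MADV_HUGEPAGE); err != nil {
		return err
	}
	// Touch one byte per page so the page faults (and any reclaim/compaction
	// they trigger) happen now, off the allocation fast path.
	for i := 0; i < len(mem); i += pageSize {
		mem[i] = 0
	}
	return nil
}

func main() {
	mem, err := unix.Mmap(-1, 0, 64<<20,
		unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(mem)
	if err := markAndPrefault(mem); err != nil {
		panic(err)
	}
}
```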
As an update here, we've been discussing this on the Gophers Slack. @dominikh sent me a smaller reproducer (https://go.dev/play/p/4d11Jc5nNDi.go) that produces some fairly large outliers on his system, but I've been unable to reproduce in two different VMs so far, after setting the hugepage sysfs parameters to align. @dominikh noted that if he shuts down a whole bunch of applications then it seemingly goes away.

He reported that he can reproduce while stress testing with the tool https://github.com/resurrecting-open-source-projects/stress. I've been trying with that, but also with a homebrew GC stress test. We've also been making sure our kernel and THP configurations match.

In the end, I can produce high latencies, but the distribution is approximately the same between Go 1.20 and tip-of-tree. I'm not yet sure what the difference is. I've been trying on Linux 5.15 and 6.3; @dominikh is using 6.2.
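For reference, a stress test in this spirit can be as simple as timing individual allocations while the heap grows. This is an illustrative sketch (sizes and threshold invented), not @dominikh's reproducer or the homebrew stress test mentioned above:

```go
// Allocate in a loop and record how long individual allocations take,
// printing any large outliers as the heap grows.
package main

import (
	"fmt"
	"time"
)

var sink [][]byte // keep allocations live so the heap keeps growing

func main() {
	const (
		allocSize = 1 << 20 // 1 MiB per allocation
		total     = 1 << 30 // grow the live heap to ~1 GiB
		threshold = 10 * time.Millisecond
	)
	for grown := 0; grown < total; grown += allocSize {
		start := time.Now()
		sink = append(sink, make([]byte, allocSize))
		if d := time.Since(start); d > threshold {
			fmt.Printf("slow allocation: %v at %d MiB live heap\n", d, grown>>20)
		}
	}
}
```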
Meanwhile I can reliably reproduce the issue with Go at master, and not at all with Go 1.20. In fact, seeing such high latencies with Go 1.20 seems particularly weird, as it's not actually making use of huge pages much at all.

The problem is very sensitive to the amount of memory fragmentation, which can make it difficult to trigger. There have to be few enough (or no) free huge pages available for allocation, and "direct reclaim" must not be able to quickly merge pages, either. I did end up having more luck with Michael's stress test than with stress.

It's also worth noting that the stress test acts as an easier way of reproducing the problem, not a requirement. I originally reproduced the problem just by having a lot of typical, heavy desktop software open (Firefox with a significant number of tabs, Discord, Slack), which left memory quite fragmented. The stress test is a much easier way of using up large allocations.
There are actually a few times Go 1.20 might call madvise(MADV_HUGEPAGE) as well. Though, FWIW, I'm willing to believe that I'm just doing something wrong here in my attempt to reproduce this.

The upshot is we have a way to reproduce it somewhere. The trouble is there still doesn't seem to be a clear path forward. I neglected to mention that the earlier suggestion of trying to eagerly force the "direct reclaim" didn't really change anything, unfortunately.
And I forgot to say, thank you for your time and effort in looking into this @dominikh!
I don't think there's a hands-off solution that will make everyone happy. If Go makes explicit use of transparent huge pages, then it opens itself up to all of the common issues with THP, such as compaction stalls, khugepaged going crazy, being sensitive to other programs running on the system, and so on¹. It effectively requires some users to 1) be aware of THP and 2) tweak their system configuration, either tuning or disabling THP.

On the other hand, not making use of transparent huge pages wastes performance for some workloads, and there would currently be no other way for users to make use of THP that doesn't involve avoiding Go's allocator altogether.

I don't think the problem described in this issue is unique to Go; it also haunts other allocators that make use of THP, and there doesn't seem to be a way to control THP precisely enough — we'll always have to deal with the fact that different Linux distributions ship different defaults, some of which work worse for us than others. Specifically, the following two approaches seem to be impossible using the current APIs offered by the kernel:
Personally, I don't think that the Go runtime knows enough about the workload, the system, or the requirements to decide whether stalling on allocations is acceptable, worth it, or detrimental.

On the other hand, Go's focus is on server software, and maybe it's okay to assume that server environments have enough memory for our process, or are configured appropriately with regard to THP. However, Go is used for all kinds of applications, and run in environments where the user cannot change THP settings globally, so there should probably be a way to disable the use of THP when it's known to work poorly for the application, e.g. via a GODEBUG variable. This would be somewhat similar to the existing GODEBUG settings that change how the runtime calls madvise.

¹: I've likely only encountered this issue because most of system memory was already in use by the ZFS ARC — which does get dropped when needed, but apparently not to allow for pages to be merged into huge pages. There are probably other unique combinations leading to issues, and Go cannot predict all of them.
I agree with your assessment of the situation. One additional question I have for you is: when in your reproducer (small or big) does the stall happen? Is it close to application start? Does it happen many times while the application is running?

My hypothesis is that you're fairly likely to see this at process start, and then no more (once the huge pages are installed). The reason is that the runtime calls madvise(MADV_HUGEPAGE) up front on new memory it maps from the OS.

Furthermore, I've been poking around kernel mailing list messages and I'm even more convinced that the blocking behavior of MADV_HUGEPAGE is working as intended and isn't going to change. However, I do think MADV_COLLAPSE could work for us here.
Regarding the issue of defaults: Linux seems to take the stance that everyone should just tune the huge page settings to their application. Unfortunately I think that forces our hand for the most part.

I still believe the foundation 8fa9e3b is based on is sound: specifically, that most of the time the Go heap really should just be backed by huge pages. We've got a first-fit allocator, and many objects are small. They pack densely into pages reaching up to the heap goal, at which point there may be some fragmentation, which the scavenger picks at.

Taking all of that into account, here's what I think we should do:
I admit the third point seems awfully specific, but there are way too many people out there trying to figure out what is going on with memory on their Linux systems when it comes to transparent huge pages. If I can help that just a little bit, I think it's worthwhile.

These are small changes. I'll prototype this, benchmark it, and see what the effect is. I'll also share the patch with you, @dominikh, if you're up for trying your reproducer again.
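For illustration, the MADV_COLLAPSE direction discussed above could look roughly like the sketch below. The constant is defined locally because MADV_COLLAPSE only exists on Linux 6.1+ kernels; this is not the runtime's actual implementation:

```go
// Try MADV_COLLAPSE where MADV_HUGEPAGE would have been used, and fall back
// silently when the kernel doesn't support it.
package main

import (
	"unsafe"

	"golang.org/x/sys/unix"
)

// madvCollapse is MADV_COLLAPSE from the Linux kernel headers (Linux 6.1+).
// Newer versions of golang.org/x/sys/unix also export this constant.
const madvCollapse = 25

func collapse(mem []byte) error {
	_, _, errno := unix.Syscall(
		unix.SYS_MADVISE,
		uintptr(unsafe.Pointer(&mem[0])),
		uintptr(len(mem)),
		uintptr(madvCollapse),
	)
	switch errno {
	case 0:
		return nil
	case unix.EINVAL, unix.ENOSYS:
		// Old kernel or THP unavailable: let default paging behavior apply.
		return nil
	default:
		return errno
	}
}

func main() {
	mem, err := unix.Mmap(-1, 0, 8<<20,
		unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(mem)
	_ = collapse(mem)
}
```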
The stalls happen many times. Presumably every time the runtime has to allocate more memory from the OS.
But few Go programs allocate all their memory at process start? Heaps grow over time. Or they shrink, return the memory to the OS, and grow again.
Happily.
It's true that they don't allocate all their memory at process start; I failed to say what I meant. What I meant was that eventually, in some steady state, you'll stop seeing new stalls because the heap will have stretched to its peak size in terms of mapped memory. Because we don't unmap heap memory, shrinking and regrowing the heap shouldn't cause new stalls in the current implementation, except if there's a substantial amount of time between the shrink and the regrowth.

I still think this is not very good, but I mainly wanted to confirm that what you were seeing was the former case (stalls on up-front heap growth) vs. the latter case (stalls from the scavenger setting MADV_NOHUGEPAGE on returned memory, which is later marked MADV_HUGEPAGE again when it's reused).
I was concretely seeing stalls in a graphical application that (unfortunately) allocates when rendering frames, and the occurrence of stalls lasted long enough for me to debug a single process for several minutes. The application allocates a significant amount of memory upfront when loading traces, but then allocates much smaller amounts of memory as it renders frames. This causes the heap to grow slowly, with no GC cycles because of the high GC target, but frequent stalls.
Can you define "substantial amount of time" for me? I was under the impression that we returned memory to the OS every ~3 minutes via the background scavenger, and actively during allocations. That would mean that stalls affect bursty workloads, too, if the bursts are far enough apart, though this wouldn't apply to my concrete reproducers. I can see how a steady workload can reach a steady state where we no longer need to ask the OS for new pages, but I agree that that's "still not very good."
The conditions for the scavenger returning memory (and marking it MADV_NOHUGEPAGE) are a bit more involved than that.
If you're not GCing frequently, the maximum time the cycle can take is 2 minutes to let the scavenger see the free memory, plus whatever time it takes for the scavenger to find the available memory, plus the time until enough allocations happen in that chunk to set it as MADV_HUGEPAGE again.

Thanks for the detail on the application, though; I think that narrows down the issue to heap growth (which you already knew). It also makes sense that since the heap grows relatively slowly, you just end up seeing this over and over.

I started working on a patch, but it occurs to me that MADV_COLLAPSE may be the better tool here.
Here's the CL: https://go.dev/cl/516795

If it works, I won't be surprised, given everything I now know about your application. Still, it would be good to confirm.
The CL seems to fix the stalls in the minimal reproducer.

[Latency distributions: Go master latency, Go 1.20 latency, Go MADV_COLLAPSE latency]
I've also collected some key statistics about timing and memory usage.

[Statistics: Go master statistics, Go 1.20 statistics, Go collapse statistics]
However, looking at the number of huge pages used by the process, the MADV_COLLAPSE build ends up with far fewer huge pages than Go 1.20.

I've also tested it with my actual application: the CL manages to eliminate the stalls there as well. After ~4 minutes, the process allocates exactly 2048 KiB worth of huge pages. I've left the process running for several more minutes, after which huge page usage remained small.

Usage of huge pages was measured via the AnonHugePages field in /proc/<pid>/smaps¹.

¹: For anyone trying to reproduce the stutter, make sure not to be reading from smaps periodically while doing so. Accessing smaps itself can introduce stutter.
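For anyone wanting to reproduce this measurement, here is a small sketch that sums the AnonHugePages fields from /proc/<pid>/smaps (subject to the caveat in the footnote above about reading smaps too often):

```go
// Sum the AnonHugePages fields in /proc/<pid>/smaps to estimate how much of
// a process's anonymous memory is backed by transparent huge pages.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func anonHugePagesKiB(pid int) (int64, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/smaps", pid))
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var total int64
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Lines look like: "AnonHugePages:      2048 kB"
		if !strings.HasPrefix(line, "AnonHugePages:") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) >= 2 {
			if kb, err := strconv.ParseInt(fields[1], 10, 64); err == nil {
				total += kb
			}
		}
	}
	return total, sc.Err()
}

func main() {
	kb, err := anonHugePagesKiB(os.Getpid())
	if err != nil {
		panic(err)
	}
	fmt.Printf("AnonHugePages: %d kB\n", kb)
}
```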
Given your THP settings, this is roughly what I would expect (huge pages are only allocated when we explicitly ask for them). The 1.37 GiB of huge pages from Go 1.20 is likely a result of a combination of the old madvise behavior and khugepaged working in the background.

I'm benchmarking with a different THP configuration, which probably explains why I see different behavior.

One thing I'm curious about is whether the stall problem still exists with a slight variation of this approach, so I'll send another patch, if you're still on board for trying it out!
Here it is: https://go.dev/cl/516995, which is meant to be patched on top of https://go.dev/cl/516795.
Actual application: 2048 KiB of huge pages right away, 4096 KiB after several minutes. No stalls during normal use, but it's impossible for me to time my manual testing with the scavenger running.

Output of strace on the minimal reproducer, for calls to madvise:

I was able to reproduce that and I figured out the issue.

Change https://go.dev/cl/516795 mentions this issue:
We've been discussing this on the Gophers Slack. Summarizing (and skipping some of the wild goose chasing):
I think https://go.dev/cl/516795 might be the fix. I've asked @dominikh to run my homebrew GC stress test with the patch applied.
Currently the runtime marks all new memory as MADV_HUGEPAGE on Linux and manages its hugepage eligibility status. Unfortunately, the default THP behavior on most Linux distros is that MADV_HUGEPAGE blocks while the kernel eagerly reclaims and compacts memory to allocate a hugepage. This direct reclaim and compaction is unbounded, and may result in significant application thread stalls. In really bad cases, this can exceed 100s of ms or even seconds.

Really all we want is to undo MADV_NOHUGEPAGE marks and let the default Linux paging behavior take over, but the only way to unmark a region as MADV_NOHUGEPAGE is to also mark it MADV_HUGEPAGE. The overall strategy of trying to keep hugepages for the heap unbroken however is sound. So instead let's use the new shiny MADV_COLLAPSE if it exists. MADV_COLLAPSE makes a best-effort synchronous attempt at collapsing the physical memory backing a memory region into a hugepage. We'll use MADV_COLLAPSE where we would've used MADV_HUGEPAGE, and stop using MADV_NOHUGEPAGE altogether.

Because MADV_COLLAPSE is synchronous, it's also important to not re-collapse huge pages if the huge pages are likely part of some large allocation. Although in many cases it's advantageous to back these allocations with hugepages because they're contiguous, eagerly collapsing every hugepage means having to page in at least part of the large allocation.

However, because we won't use MADV_NOHUGEPAGE anymore, we'll no longer handle the fact that khugepaged might come in and back some memory we returned to the OS with a hugepage. I've come to the conclusion that this is basically unavoidable without a new madvise flag and that it's just not a good default. If this change lands, advice about Linux huge page settings will be added to the GC guide.

Verified that this change doesn't regress Sweet, at least not on my machine with:

/sys/kernel/mm/transparent_hugepage/enabled [always or madvise]
/sys/kernel/mm/transparent_hugepage/defrag [madvise]
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none [0 or 511]

Unfortunately, this workaround means that we only get forced hugepages on Linux 6.1+.

Fixes golang#61718.

Change-Id: I7f4a7ba397847de29f800a99f9cb66cb2720a533
Reviewed-on: https://go-review.googlesource.com/c/go/+/516795
Reviewed-by: Austin Clements <[email protected]>
TryBot-Result: Gopher Robot <[email protected]>
Run-TryBot: Michael Knyszek <[email protected]>
Auto-Submit: Michael Knyszek <[email protected]>
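As an aside, the "don't re-collapse huge pages that are likely part of a large allocation" idea in the commit message above boils down to a small policy check. Here is a purely hypothetical sketch (chunk size, threshold, and all names are invented for illustration; this is not the runtime's code):

```go
// Only issue a synchronous MADV_COLLAPSE for heap chunks that are densely
// packed with small objects; skip chunks belonging to a single large
// allocation and chunks that were already collapsed.
package main

import "fmt"

const (
	chunkBytes     = 4 << 20 // assumed chunk size, for illustration only
	denseThreshold = 0.96    // fraction of the chunk that must be in use
)

type chunk struct {
	inUseBytes int  // bytes currently allocated in this chunk
	largeAlloc bool // chunk is part of one large, multi-chunk allocation
	collapsed  bool // MADV_COLLAPSE was already issued for this chunk
}

// shouldCollapse reports whether it's worth paying for a synchronous
// MADV_COLLAPSE on this chunk right now.
func shouldCollapse(c chunk) bool {
	if c.collapsed || c.largeAlloc {
		return false
	}
	return float64(c.inUseBytes)/float64(chunkBytes) >= denseThreshold
}

func main() {
	fmt.Println(shouldCollapse(chunk{inUseBytes: chunkBytes}))                    // true: densely packed
	fmt.Println(shouldCollapse(chunk{inUseBytes: chunkBytes, largeAlloc: true})) // false: large allocation
}
```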
@gopherbot Please open a backport issue for Go 1.21. This can cause unbounded stalls on Linux in some cases with no workaround.

Backport issue(s) opened: #62329 (for 1.21). Remember to create the cherry-pick CL(s) as soon as the patch is submitted to master, according to https://go.dev/wiki/MinorReleases.

Change https://go.dev/cl/523655 mentions this issue:
This is the same change as above, cherry-picked for the Go 1.21 backport.

For #61718.
Fixes #62329.

Change-Id: I7f4a7ba397847de29f800a99f9cb66cb2720a533
Reviewed-on: https://go-review.googlesource.com/c/go/+/516795
Reviewed-by: Austin Clements <[email protected]>
TryBot-Result: Gopher Robot <[email protected]>
Run-TryBot: Michael Knyszek <[email protected]>
Auto-Submit: Michael Knyszek <[email protected]>
(cherry picked from commit 9f9bb26)
Reviewed-on: https://go-review.googlesource.com/c/go/+/523655
LUCI-TryBot-Result: Go LUCI <[email protected]>
Auto-Submit: Dmitri Shuralyov <[email protected]>
@kevinconaway It certainly could be. Go 1.21.1, which went out today, fixes this issue for Go 1.21. You can give that a try.

If you want another way to check whether you're affected, look at the transparent hugepage settings under /sys/kernel/mm/transparent_hugepage/ on the machine where you're running your code. If the defrag setting is one where MADV_HUGEPAGE blocks (such as always or madvise), you may be affected.
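A small sketch of that check, reading the currently selected (bracketed) values of the THP enabled and defrag knobs:

```go
// Print the active value of the transparent_hugepage "enabled" and "defrag"
// sysfs knobs. Files look like "always defer defer+madvise [madvise] never".
package main

import (
	"fmt"
	"os"
	"strings"
)

// selected extracts the bracketed value from a THP sysfs file.
func selected(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	for _, f := range strings.Fields(string(b)) {
		if strings.HasPrefix(f, "[") && strings.HasSuffix(f, "]") {
			return strings.Trim(f, "[]"), nil
		}
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, name := range []string{"enabled", "defrag"} {
		v, err := selected("/sys/kernel/mm/transparent_hugepage/" + name)
		if err != nil {
			fmt.Println(name+": unavailable:", err)
			continue
		}
		fmt.Println(name+":", v)
	}
}
```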
Go 1.21.1 appears to resolve the issue for us. Below is the output:
Are there any metrics that could have helped us track this down further, or similar issues in the future? All we were able to tell was that the memory was being retained somewhere "off heap", but we had little visibility into what it was.
Glad to hear!
Unfortunately Linux doesn't provide a very good way to observe how much memory is backed by huge pages. You can occasionally dump /proc/<pid>/smaps and look at the AnonHugePages fields, but that's about it.

Fortunately, I don't think you'll have to worry about this being a problem from Go in the future. Now that we have a better understanding of the landscape of hugepage-related kernel behavior, we're in a much better position to avoid this kind of issue.
Change https://go.dev/cl/526615 mentions this issue:
For golang/go#8832.
For golang/go#55328.
For golang/go#61718.

Change-Id: I1ee51424dc2591a84f09ca8687c113f0af3550d1
Reviewed-on: https://go-review.googlesource.com/c/website/+/526615
Auto-Submit: Michael Knyszek <[email protected]>
Reviewed-by: Michael Pratt <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Change https://go.dev/cl/531816 mentions this issue:

Change https://go.dev/cl/532117 mentions this issue:
This has caused performance issues in production environments. Disable it until further notice.

Fixes #63334.
Related to #61718 and #59960.

Change-Id: If84c5a8685825d43c912a71418f2597e44e867e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/531816
Reviewed-by: Michael Pratt <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Auto-Submit: Michael Knyszek <[email protected]>
After the previous CL, this is now all dead code. This change is separated out to make the previous one easy to backport.

For #63334.
Related to #61718 and #59960.

Change-Id: I109673ed97c62c472bbe2717dfeeb5aa4fc883ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/532117
Reviewed-by: Michael Pratt <[email protected]>
Auto-Submit: Michael Knyszek <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Change https://go.dev/cl/532255 mentions this issue:
This has caused performance issues in production environments. MADV_COLLAPSE can go into direct reclaim, but we call it with the heap lock held. This means that the process could end up stalled fairly quickly if just one allocating goroutine ends up in the madvise call, at least until the madvise(MADV_COLLAPSE) call returns. A similar issue occurred with madvise(MADV_HUGEPAGE), because that could go into direct reclaim on any page fault for MADV_HUGEPAGE-marked memory.

My understanding was that the calls to madvise(MADV_COLLAPSE) were fairly rare, and its "best-effort" nature prevented it from going into direct reclaim often, but this was wrong. It tends to be fairly heavyweight even when it doesn't end up in direct reclaim, and it's almost certainly not worth it. Disable it until further notice and let the kernel fully dictate hugepage policy. The updated scavenger policy is still more hugepage friendly by delaying scavenging until hugepages are no longer densely packed, so we don't lose all that much.

The Sweet benchmarks show a minimal difference. A couple less realistic benchmarks seem to slow down a bit; they might just be getting unlucky with what the kernel decides to back with a huge page. Some benchmarks on the other hand improve. Overall, it's a wash.

name old time/op new time/op delta
BiogoIgor 13.1s ± 1% 13.2s ± 2% ~ (p=0.182 n=9+10)
BiogoKrishna 12.0s ± 1% 12.1s ± 1% +1.23% (p=0.002 n=9+10)
BleveIndexBatch100 4.51s ± 4% 4.56s ± 3% ~ (p=0.393 n=10+10)
EtcdPut 20.2ms ± 4% 19.8ms ± 2% ~ (p=0.079 n=10+9)
EtcdSTM 109ms ± 3% 111ms ± 3% +1.63% (p=0.035 n=10+10)
GoBuildKubelet 31.2s ± 1% 31.3s ± 1% ~ (p=0.780 n=9+10)
GoBuildKubeletLink 7.77s ± 0% 7.81s ± 2% ~ (p=0.237 n=8+10)
GoBuildIstioctl 31.8s ± 1% 31.7s ± 0% ~ (p=0.136 n=9+9)
GoBuildIstioctlLink 7.88s ± 1% 7.89s ± 1% ~ (p=0.720 n=9+10)
GoBuildFrontend 11.7s ± 1% 11.8s ± 1% ~ (p=0.278 n=10+9)
GoBuildFrontendLink 1.15s ± 4% 1.15s ± 5% ~ (p=0.387 n=9+9)
GopherLuaKNucleotide 19.7s ± 1% 20.6s ± 0% +4.48% (p=0.000 n=10+10)
MarkdownRenderXHTML 194ms ± 3% 196ms ± 3% ~ (p=0.356 n=9+10)
Tile38QueryLoad 633µs ± 2% 629µs ± 2% ~ (p=0.075 n=10+10)

name old average-RSS-bytes new average-RSS-bytes delta
BiogoIgor 69.2MB ± 3% 68.4MB ± 1% ~ (p=0.190 n=10+10)
BiogoKrishna 4.40GB ± 0% 4.40GB ± 0% ~ (p=0.605 n=9+9)
BleveIndexBatch100 195MB ± 3% 195MB ± 2% ~ (p=0.853 n=10+10)
EtcdPut 107MB ± 4% 108MB ± 3% ~ (p=0.190 n=10+10)
EtcdSTM 91.6MB ± 5% 92.6MB ± 4% ~ (p=0.481 n=10+10)
GoBuildKubelet 2.26GB ± 1% 2.28GB ± 1% +1.22% (p=0.000 n=10+10)
GoBuildIstioctl 1.53GB ± 0% 1.53GB ± 0% +0.21% (p=0.017 n=9+10)
GoBuildFrontend 556MB ± 1% 554MB ± 2% ~ (p=0.497 n=9+10)
GopherLuaKNucleotide 39.0MB ± 3% 39.0MB ± 1% ~ (p=1.000 n=10+8)
MarkdownRenderXHTML 21.2MB ± 2% 21.4MB ± 3% ~ (p=0.190 n=10+10)
Tile38QueryLoad 5.99GB ± 2% 6.02GB ± 0% ~ (p=0.243 n=10+9)

name old peak-RSS-bytes new peak-RSS-bytes delta
BiogoIgor 90.2MB ± 4% 89.2MB ± 2% ~ (p=0.143 n=10+10)
BiogoKrishna 4.49GB ± 0% 4.49GB ± 0% ~ (p=0.190 n=10+10)
BleveIndexBatch100 283MB ± 8% 274MB ± 6% ~ (p=0.075 n=10+10)
EtcdPut 147MB ± 4% 149MB ± 2% +1.55% (p=0.034 n=10+8)
EtcdSTM 117MB ± 5% 117MB ± 4% ~ (p=0.905 n=9+10)
GopherLuaKNucleotide 44.9MB ± 1% 44.6MB ± 1% ~ (p=0.083 n=8+8)
MarkdownRenderXHTML 22.0MB ± 8% 22.1MB ± 9% ~ (p=0.436 n=10+10)
Tile38QueryLoad 6.24GB ± 2% 6.29GB ± 2% ~ (p=0.218 n=10+10)

name old peak-VM-bytes new peak-VM-bytes delta
BiogoIgor 1.33GB ± 0% 1.33GB ± 0% ~ (p=0.504 n=10+9)
BiogoKrishna 5.77GB ± 0% 5.77GB ± 0% ~ (p=1.000 n=10+9)
BleveIndexBatch100 3.53GB ± 0% 3.53GB ± 0% ~ (p=0.642 n=10+10)
EtcdPut 12.1GB ± 0% 12.1GB ± 0% ~ (p=0.564 n=10+10)
EtcdSTM 12.1GB ± 0% 12.1GB ± 0% ~ (p=0.633 n=10+10)
GopherLuaKNucleotide 1.26GB ± 0% 1.26GB ± 0% ~ (p=0.297 n=9+10)
MarkdownRenderXHTML 1.26GB ± 0% 1.26GB ± 0% ~ (p=0.069 n=10+10)
Tile38QueryLoad 7.47GB ± 2% 7.53GB ± 2% ~ (p=0.280 n=10+10)

name old p50-latency-ns new p50-latency-ns delta
EtcdPut 19.8M ± 5% 19.3M ± 3% -2.74% (p=0.043 n=10+9)
EtcdSTM 81.4M ± 4% 83.4M ± 4% +2.46% (p=0.029 n=10+10)
Tile38QueryLoad 241k ± 1% 240k ± 1% ~ (p=0.393 n=10+10)

name old p90-latency-ns new p90-latency-ns delta
EtcdPut 30.4M ± 5% 30.6M ± 5% ~ (p=0.971 n=10+10)
EtcdSTM 222M ± 3% 226M ± 4% ~ (p=0.063 n=10+10)
Tile38QueryLoad 687k ± 2% 691k ± 1% ~ (p=0.173 n=10+8)

name old p99-latency-ns new p99-latency-ns delta
EtcdPut 42.3M ±10% 41.4M ± 7% ~ (p=0.353 n=10+10)
EtcdSTM 486M ± 7% 487M ± 4% ~ (p=0.579 n=10+10)
Tile38QueryLoad 6.43M ± 2% 6.37M ± 3% ~ (p=0.280 n=10+10)

name old ops/s new ops/s delta
EtcdPut 48.6k ± 3% 49.5k ± 2% ~ (p=0.065 n=10+9)
EtcdSTM 9.09k ± 2% 8.95k ± 3% -1.56% (p=0.045 n=10+10)
Tile38QueryLoad 28.4k ± 1% 28.6k ± 1% +0.87% (p=0.016 n=9+10)

Fixes #63335.
For #63334.
Related to #61718 and #59960.

Change-Id: If84c5a8685825d43c912a71418f2597e44e867e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/531816
Reviewed-by: Michael Pratt <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Auto-Submit: Michael Knyszek <[email protected]>
(cherry picked from commit 595deec)
Reviewed-on: https://go-review.googlesource.com/c/go/+/532255
Auto-Submit: Dmitri Shuralyov <[email protected]>
Environment: linux/amd64
I've bisected stalls in one of my applications to 8fa9e3b — after discussion with @mknyszek, the stalls seem to be caused by Linux directly reclaiming pages, and taking significant time to do so (100+ ms in my case.)
The direct reclaiming is caused by the combination of Go setting memory as MADV_HUGEPAGE and Transparent Huge Pages being configured accordingly on my system (which AFAICT is a NixOS default; I don't recall changing this):

In particular, the madvise setting for defrag has the following effect: the kernel stalls and directly reclaims/compacts memory, but only for regions that have been marked with madvise(MADV_HUGEPAGE), with always meaning it does so for every THP-eligible region.

It seems to me that one of the reasons for setting MADV_HUGEPAGE is to undo setting MADV_NOHUGEPAGE, and that there is no other way to do that.