Commit 7cb2ef5

hikerockies authored and torvalds committed

mm: fix aio performance regression for database caused by THP
I am working with a tool that simulates an Oracle database I/O workload. This tool (orion, to be specific: <http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#autoId24>) allocates hugetlbfs pages using shmget() with the SHM_HUGETLB flag. It then does aio into these pages from flash disks using various common block sizes used by databases. I am looking at performance with two of the most common block sizes - 1M and 64K. aio performance with these two block sizes plunged after Transparent HugePages was introduced in the kernel. Here are the performance numbers:

                pre-THP         2.6.39          3.11-rc5
1M read         8384 MB/s       5629 MB/s       6501 MB/s
64K read        7867 MB/s       4576 MB/s       4251 MB/s

I have narrowed the performance impact down to the overheads introduced by THP in the __get_page_tail() and put_compound_page() routines. perf top shows >40% of cycles being spent in these two routines. Every time direct I/O to hugetlbfs pages starts, the kernel calls get_page() to grab a reference to the pages and calls put_page() when the I/O completes to put the reference away. THP introduced a significant amount of locking overhead to get_page() and put_page() when dealing with compound pages, because hugepages can be split underneath get_page() and put_page(). It added this overhead irrespective of whether it is dealing with hugetlbfs pages or transparent hugepages. This resulted in a 20%-45% drop in aio performance when using hugetlbfs pages.

Since hugetlbfs pages cannot be split, there is no reason to go through all the locking overhead for these pages from what I can see. I added code to __get_page_tail() and put_compound_page() to bypass all the locking code when working with hugetlbfs pages. This improved performance significantly. Performance numbers with this patch:

                pre-THP         3.11-rc5        3.11-rc5 + Patch
1M read         8384 MB/s       6501 MB/s       8371 MB/s
64K read        7867 MB/s       4251 MB/s       6510 MB/s

Performance with 64K reads is still lower than it was before THP, but this is still a 53% improvement. It does mean there is more work to be done, but I will take a 53% improvement for now. Please take a look at the following patch and let me know if it looks reasonable.

[[email protected]: tweak comments]
Signed-off-by: Khalid Aziz <[email protected]>
Cc: Pravin B Shelar <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
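For readers who want to reproduce the hot path, the access pattern is simple: back a buffer with hugetlbfs pages via shmget(SHM_HUGETLB) and issue O_DIRECT aio reads into it, which forces the kernel through get_page()/put_page() on every request. Below is a minimal sketch of that pattern; it is not orion, and the device path, segment size, and single in-flight request are illustrative placeholders.

#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>                     /* link with -laio */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SEG_SIZE   (16UL << 20)         /* placeholder; multiple of the huge page size */
#define BLOCK_SIZE (64UL << 10)         /* 64K, one of the block sizes measured above */

int main(void)
{
        /* hugetlbfs-backed SHM segment, allocated the way orion does */
        int shmid = shmget(IPC_PRIVATE, SEG_SIZE,
                           SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
        if (shmid < 0) {
                perror("shmget(SHM_HUGETLB)");
                return 1;
        }
        void *buf = shmat(shmid, NULL, 0);
        if (buf == (void *)-1) {
                perror("shmat");
                return 1;
        }

        /* O_DIRECT makes the kernel pin these pages with get_page()
         * for each request and release them with put_page() on
         * completion; that is the path this patch optimizes. */
        int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);  /* placeholder device */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        io_context_t ctx = 0;
        if (io_setup(1, &ctx) != 0) {
                fprintf(stderr, "io_setup failed\n");
                return 1;
        }

        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;

        io_prep_pread(&cb, fd, buf, BLOCK_SIZE, 0);
        if (io_submit(ctx, 1, cbs) != 1) {       /* pages pinned here */
                fprintf(stderr, "io_submit failed\n");
                return 1;
        }
        io_getevents(ctx, 1, 1, &ev, NULL);      /* pages released on completion */

        io_destroy(ctx);
        shmdt(buf);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
}

With many such requests in flight on an unpatched kernel, perf top shows __get_page_tail() and put_compound_page() at the top of the profile, which is where the >40% figure above comes from.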
1 parent: 3a7200a · commit: 7cb2ef5


1 file changed: mm/swap.c (52 additions, 25 deletions)

@@ -31,6 +31,7 @@
 #include <linux/memcontrol.h>
 #include <linux/gfp.h>
 #include <linux/uio.h>
+#include <linux/hugetlb.h>

 #include "internal.h"

@@ -81,6 +82,19 @@ static void __put_compound_page(struct page *page)

 static void put_compound_page(struct page *page)
 {
+        /*
+         * hugetlbfs pages cannot be split from under us. If this is a
+         * hugetlbfs page, check refcount on head page and release the page if
+         * the refcount becomes zero.
+         */
+        if (PageHuge(page)) {
+                page = compound_head(page);
+                if (put_page_testzero(page))
+                        __put_compound_page(page);
+
+                return;
+        }
+
         if (unlikely(PageTail(page))) {
                 /* __split_huge_page_refcount can run under us */
                 struct page *page_head = compound_trans_head(page);
@@ -184,38 +198,51 @@ bool __get_page_tail(struct page *page)
          * proper PT lock that already serializes against
          * split_huge_page().
          */
-        unsigned long flags;
         bool got = false;
-        struct page *page_head = compound_trans_head(page);
+        struct page *page_head;

-        if (likely(page != page_head && get_page_unless_zero(page_head))) {
+        /*
+         * If this is a hugetlbfs page it cannot be split under us. Simply
+         * increment refcount for the head page.
+         */
+        if (PageHuge(page)) {
+                page_head = compound_head(page);
+                atomic_inc(&page_head->_count);
+                got = true;
+        } else {
+                unsigned long flags;
+
+                page_head = compound_trans_head(page);
+                if (likely(page != page_head &&
+                           get_page_unless_zero(page_head))) {
+
+                        /* Ref to put_compound_page() comment. */
+                        if (PageSlab(page_head)) {
+                                if (likely(PageTail(page))) {
+                                        __get_page_tail_foll(page, false);
+                                        return true;
+                                } else {
+                                        put_page(page_head);
+                                        return false;
+                                }
+                        }

-                /* Ref to put_compound_page() comment. */
-                if (PageSlab(page_head)) {
+                        /*
+                         * page_head wasn't a dangling pointer but it
+                         * may not be a head page anymore by the time
+                         * we obtain the lock. That is ok as long as it
+                         * can't be freed from under us.
+                         */
+                        flags = compound_lock_irqsave(page_head);
+                        /* here __split_huge_page_refcount won't run anymore */
                         if (likely(PageTail(page))) {
                                 __get_page_tail_foll(page, false);
-                                return true;
-                        } else {
-                                put_page(page_head);
-                                return false;
+                                got = true;
                         }
+                        compound_unlock_irqrestore(page_head, flags);
+                        if (unlikely(!got))
+                                put_page(page_head);
                 }
-
-                /*
-                 * page_head wasn't a dangling pointer but it
-                 * may not be a head page anymore by the time
-                 * we obtain the lock. That is ok as long as it
-                 * can't be freed from under us.
-                 */
-                flags = compound_lock_irqsave(page_head);
-                /* here __split_huge_page_refcount won't run anymore */
-                if (likely(PageTail(page))) {
-                        __get_page_tail_foll(page, false);
-                        got = true;
-                }
-                compound_unlock_irqrestore(page_head, flags);
-                if (unlikely(!got))
-                        put_page(page_head);
         }
         return got;
 }
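The safety argument in both hunks rests on PageHuge() returning true only for hugetlbfs pages and never for transparent hugepages, so any compound page that __split_huge_page_refcount() could actually split still takes the locked slow path. It is also why the __get_page_tail() fast path can use a bare atomic_inc() on the head page's _count: callers of __get_page_tail() already hold guarantees that the page cannot be freed under them (see the PT-lock comment at the top of the function), and a hugetlbfs compound page is only ever freed as a whole. For reference, a sketch of PageHuge() as it looked in kernels of this vintage, reproduced approximately from mm/hugetlb.c and not part of this patch:

/*
 * A compound page is a hugetlbfs page if and only if its destructor
 * is free_huge_page(); THP pages use a different destructor and so
 * return 0 here, keeping the compound-lock slow path for anything
 * that can actually be split.
 */
int PageHuge(struct page *page)
{
        compound_page_dtor *dtor;

        if (!PageCompound(page))
                return 0;

        page = compound_head(page);
        dtor = get_compound_page_dtor(page);

        return dtor == free_huge_page;
}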
