
Commit b42aa3f

MIPS: tlbex: Fix build_restore_pagemask KScratch restore
build_restore_pagemask() will restore the value of register $1/$at when its restore_scratch argument is non-zero, and aims to do so by filling a branch delay slot.

Commit 0b24cae ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.") added an EHB instruction (Execution Hazard Barrier) prior to restoring $1 from a KScratch register, in order to resolve a hazard that can result in stale values of the KScratch register being observed. In particular, P-class CPUs from MIPS with out-of-order execution pipelines such as the P5600 & P6600 are affected.

Unfortunately this EHB instruction was inserted in the branch delay slot, causing the MFC0 instruction which performs the restoration to no longer execute along with the branch. The result is that the $1 register isn't actually restored, ie. the TLB refill exception handler clobbers it - which is exactly the problem the EHB is meant to avoid for the P-class CPUs.

Similarly build_get_pgd_vmalloc() will restore the value of $1/$at when its mode argument equals refill_scratch, and suffers from the same problem.

Fix this in both cases by moving the EHB earlier in the emitted code. There's no reason it needs to immediately precede the MFC0 - it simply needs to be between the MTC0 & MFC0. This bug only affects Cavium Octeon systems which use build_fast_tlb_refill_handler().

Signed-off-by: Paul Burton <[email protected]>
Fixes: 0b24cae ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.")
Cc: Dmitry Korotin <[email protected]>
Cc: [email protected] # v3.15+
Cc: [email protected]
Cc: [email protected]
Parent: e4f5cb1

File tree

1 file changed: +15 −8 lines

arch/mips/mm/tlbex.c

Lines changed: 15 additions & 8 deletions
@@ -653,6 +653,13 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
 				   int restore_scratch)
 {
 	if (restore_scratch) {
+		/*
+		 * Ensure the MFC0 below observes the value written to the
+		 * KScratch register by the prior MTC0.
+		 */
+		if (scratch_reg >= 0)
+			uasm_i_ehb(p);
+
 		/* Reset default page size */
 		if (PM_DEFAULT_MASK >> 16) {
 			uasm_i_lui(p, tmp, PM_DEFAULT_MASK >> 16);
@@ -667,12 +674,10 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
 			uasm_i_mtc0(p, 0, C0_PAGEMASK);
 			uasm_il_b(p, r, lid);
 		}
-		if (scratch_reg >= 0) {
-			uasm_i_ehb(p);
+		if (scratch_reg >= 0)
 			UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
-		} else {
+		else
 			UASM_i_LW(p, 1, scratchpad_offset(0), 0);
-		}
 	} else {
 		/* Reset default page size */
 		if (PM_DEFAULT_MASK >> 16) {
@@ -921,6 +926,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
 	}
 	if (mode != not_refill && check_for_high_segbits) {
 		uasm_l_large_segbits_fault(l, *p);
+
+		if (mode == refill_scratch && scratch_reg >= 0)
+			uasm_i_ehb(p);
+
 		/*
 		 * We get here if we are an xsseg address, or if we are
 		 * an xuseg address above (PGDIR_SHIFT+PGDIR_BITS) boundary.
@@ -939,12 +948,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
 		uasm_i_jr(p, ptr);

 		if (mode == refill_scratch) {
-			if (scratch_reg >= 0) {
-				uasm_i_ehb(p);
+			if (scratch_reg >= 0)
 				UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
-			} else {
+			else
 				UASM_i_LW(p, 1, scratchpad_offset(0), 0);
-			}
 		} else {
 			uasm_i_nop(p);
 		}
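To make the delay-slot hazard concrete, here is a simplified sketch of the instruction stream build_restore_pagemask() emits before and after the fix. This is illustrative pseudo-assembly, not the exact generated code: the label name is invented, and on 64-bit kernels UASM_i_MFC0 actually emits DMFC0.

```
# Before the fix: the EHB occupies the branch delay slot, so the
# MFC0 that restores $1 falls after the branch and never executes
# when the branch is taken.
	mtc0	$zero, C0_PageMask	# reset default page size
	b	leave_label
	 ehb				# delay slot - executes with the branch
	mfc0	$1, KScratch		# skipped: sits beyond the taken branch

# After the fix: the EHB is emitted earlier. It only needs to sit
# somewhere between the MTC0 that saved $1 to KScratch and this
# MFC0, which frees the delay slot for the restore itself.
	ehb
	mtc0	$zero, C0_PageMask
	b	leave_label
	 mfc0	$1, KScratch		# delay slot - executes with the branch
```

This is why the fix works without adding any instructions: the hazard barrier's placement requirement (after MTC0, before MFC0) is looser than the delay slot's requirement (the restore must execute with the branch), so the two can simply swap roles.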
