Commit 56cf375

Daniel Sneddon authored and gregkh committed
x86/speculation: Add RSB VM Exit protections
commit 2b12993 upstream.

tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.

== Background ==

Indirect Branch Restricted Speculation (IBRS) was designed to help
mitigate Branch Target Injection and Speculative Store Bypass, i.e.
Spectre, attacks. IBRS prevents software run in less privileged modes
from affecting branch prediction in more privileged modes. IBRS requires
the MSR to be written on every privilege level change.

To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn it
on once instead of writing the MSR on every privilege level change. When
eIBRS is enabled, more privileged modes should be protected from less
privileged modes, including protecting VMMs from guests.

== Problem ==

Here's a simplification of how guests are run on Linux' KVM:

void run_kvm_guest(void)
{
        // Prepare to run guest
        VMRESUME();
        // Clean up after guest runs
}

The execution flow for that would look something like this to the
processor:

1. Host-side: call run_kvm_guest()
2. Host-side: VMRESUME
3. Guest runs, does "CALL guest_function"
4. VM exit, host runs again
5. Host might make some "cleanup" function calls
6. Host-side: RET from run_kvm_guest()

Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:

* on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
  touched and Linux has to do a 32-entry stuffing.

* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
  IBRS=1 shortly after VM exit, has a documented side effect of flushing
  the RSB except in this PBRSB situation where the software needs to
  stuff the last RSB entry "by hand".

IOW, with eIBRS supported, host RET instructions should no longer be
influenced by guest behavior after the host retires a single CALL
instruction.

However, if the RET instructions are "unbalanced" with CALLs after a VM
exit as is the RET in #6, it might speculatively use the address for the
instruction after the CALL in #3 as an RSB prediction. This is a problem
since the (untrusted) guest controls this address.

Balanced CALL/RET instruction pairs such as in step #5 are not affected.

== Solution ==

The PBRSB issue affects a wide variety of Intel processors which support
eIBRS. But not all of them need mitigation. Today,
X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates
PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e.,
eIBRS systems which enable legacy IBRS explicitly.

However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT
and most of them need a new mitigation.

Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE
which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT.

The lighter-weight mitigation performs a CALL instruction which is
immediately followed by a speculative execution barrier (INT3). This
steers speculative execution to the barrier -- just like a retpoline --
which ensures that speculation can never reach an unbalanced RET. Then,
ensure this CALL is retired before continuing execution with an LFENCE.

In other words, the window of exposure is opened at VM exit where RET
behavior is troublesome. While the window is open, force RSB predictions
sampling for RET targets to a dead end at the INT3. Close the window
with the LFENCE.

There is a subset of eIBRS systems which are not vulnerable to PBRSB.
Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future
systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO.

  [ bp: Massage, incorporate review comments from Andy Cooper. ]

Signed-off-by: Daniel Sneddon <[email protected]>
Co-developed-by: Pawan Gupta <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
[ bp: Adjust patch to account for kvm entry being in c ]
Signed-off-by: Suraj Jitindar Singh <[email protected]>
Signed-off-by: Suleiman Souhlal <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
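
For readers skimming the diffs below, the one-entry stuffing described in the
Solution section boils down to a CALL whose speculative return target is a dead
end (INT3), a stack fixup for the pushed return address, and an LFENCE that
guarantees the CALL has retired. The following is a minimal, self-contained
sketch of that sequence, not the kernel's actual code: it assumes x86-64 (8-byte
return address) and uses an illustrative function name; the real sequence is the
ISSUE_UNBALANCED_RET_GUARD() macro added to nospec-branch.h in this commit.

/*
 * Illustrative sketch of the "lite" PBRSB guard, mirroring the
 * ISSUE_UNBALANCED_RET_GUARD() macro added below. Assumes x86-64.
 * The kernel builds with -mno-red-zone; a userspace build of this
 * snippet would also want that, since the CALL writes below RSP.
 */
static inline void pbrsb_one_entry_stuff(void)
{
        asm volatile("call 1f\n\t"       /* push one benign RSB entry */
                     "int3\n"            /* speculation dead-ends here */
                     "1:\n\t"
                     "add $8, %%rsp\n\t" /* drop the pushed return address */
                     "lfence"            /* ensure the CALL above has retired */
                     ::: "memory");
}
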
1 parent 7eb3e2a commit 56cf375

8 files changed: +103 -30 lines

Documentation/admin-guide/hw-vuln/spectre.rst

Lines changed: 8 additions & 0 deletions
@@ -422,6 +422,14 @@ The possible values in this file are:
   'RSB filling'   Protection of RSB on context switch enabled
   ============= ===========================================

+  - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status:
+
+  =========================== =======================================================
+  'PBRSB-eIBRS: SW sequence'  CPU is affected and protection of RSB on VMEXIT enabled
+  'PBRSB-eIBRS: Vulnerable'   CPU is vulnerable
+  'PBRSB-eIBRS: Not affected' CPU is not affected by PBRSB
+  =========================== =======================================================
+
 Full mitigation might require a microcode update from the CPU
 vendor. When the necessary microcode is not available, the kernel will
 report vulnerability.

arch/x86/include/asm/cpufeatures.h

Lines changed: 2 additions & 0 deletions
@@ -291,6 +291,7 @@
 #define X86_FEATURE_RRSBA_CTRL       (11*32+11) /* "" RET prediction control */
 #define X86_FEATURE_RETPOLINE        (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
 #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
+#define X86_FEATURE_RSB_VMEXIT_LITE  (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */

 /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
 #define X86_FEATURE_CLZERO           (13*32+ 0) /* CLZERO instruction */
@@ -406,5 +407,6 @@
 #define X86_BUG_MMIO_STALE_DATA      X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
 #define X86_BUG_MMIO_UNKNOWN         X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
 #define X86_BUG_RETBLEED             X86_BUG(27) /* CPU is affected by RETBleed */
+#define X86_BUG_EIBRS_PBRSB          X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */

 #endif /* _ASM_X86_CPUFEATURES_H */

arch/x86/include/asm/msr-index.h

Lines changed: 4 additions & 0 deletions
@@ -130,6 +130,10 @@
                                              * are restricted to targets in
                                              * kernel.
                                              */
+#define ARCH_CAP_PBRSB_NO        BIT(24)    /*
+                                             * Not susceptible to Post-Barrier
+                                             * Return Stack Buffer Predictions.
+                                             */

 #define MSR_IA32_FLUSH_CMD       0x0000010b
 #define L1D_FLUSH                BIT(0)     /*

arch/x86/include/asm/nospec-branch.h

Lines changed: 12 additions & 3 deletions
@@ -73,6 +73,13 @@
         add     $(BITS_PER_LONG/8) * nr, sp;
 #endif

+#define ISSUE_UNBALANCED_RET_GUARD(sp)  \
+        call 992f;                      \
+        int3;                           \
+992:                                    \
+        add $(BITS_PER_LONG/8), sp;     \
+        lfence;
+
 #ifdef __ASSEMBLY__

 /*
@@ -278,9 +285,11 @@ static __always_inline void vmexit_fill_RSB(void)
         unsigned long loops;

         asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
-                      ALTERNATIVE("jmp 910f",
-                                  __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
-                                  X86_FEATURE_RSB_VMEXIT)
+                      ALTERNATIVE_2("jmp 910f", "", X86_FEATURE_RSB_VMEXIT,
+                                    "jmp 911f", X86_FEATURE_RSB_VMEXIT_LITE)
+                      __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1))
+                      "911:"
+                      __stringify(ISSUE_UNBALANCED_RET_GUARD(%1))
                       "910:"
                       : "=r" (loops), ASM_CALL_CONSTRAINT
                       : : "memory" );
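
Reading the ALTERNATIVE_2 above: with neither feature flag set, the whole block
is skipped ("jmp 910f"); with X86_FEATURE_RSB_VMEXIT, the empty alternative
falls through into the full 32-entry fill and then into the 911: guard; with
X86_FEATURE_RSB_VMEXIT_LITE, execution jumps straight to 911: so only the
one-entry guard runs. A rough C model of the patched result follows; the helper
functions are empty stand-ins with illustrative names, not kernel APIs, and the
real dispatch happens via runtime instruction patching rather than branches.

#include <stdbool.h>

#define RSB_CLEAR_LOOPS 32      /* as used by the kernel's RSB stuffing */

/* Empty stand-ins for the two assembly sequences patched in above. */
static inline void fill_return_buffer(int loops)    { (void)loops; /* 32-entry stuffing */ }
static inline void issue_unbalanced_ret_guard(void) { /* CALL; INT3; stack fixup; LFENCE */ }

/* Rough model of the patched vmexit_fill_RSB(); not the real kernel code. */
static inline void vmexit_fill_rsb_model(bool rsb_vmexit, bool rsb_vmexit_lite)
{
        if (rsb_vmexit_lite) {
                /* "jmp 911f": only the one-entry guard runs */
                issue_unbalanced_ret_guard();
        } else if (rsb_vmexit) {
                /* empty alternative: full fill, then fall into the 911: guard */
                fill_return_buffer(RSB_CLEAR_LOOPS);
                issue_unbalanced_ret_guard();
        }
        /* neither flag set: "jmp 910f" skips both sequences */
}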

arch/x86/kernel/cpu/bugs.c

Lines changed: 64 additions & 23 deletions
@@ -1198,6 +1198,54 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
         }
 }

+static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
+{
+        /*
+         * Similar to context switches, there are two types of RSB attacks
+         * after VM exit:
+         *
+         * 1) RSB underflow
+         *
+         * 2) Poisoned RSB entry
+         *
+         * When retpoline is enabled, both are mitigated by filling/clearing
+         * the RSB.
+         *
+         * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
+         * prediction isolation protections, RSB still needs to be cleared
+         * because of #2. Note that SMEP provides no protection here, unlike
+         * user-space-poisoned RSB entries.
+         *
+         * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
+         * bug is present then a LITE version of RSB protection is required,
+         * just a single call needs to retire before a RET is executed.
+         */
+        switch (mode) {
+        case SPECTRE_V2_NONE:
+                return;
+
+        case SPECTRE_V2_EIBRS_LFENCE:
+        case SPECTRE_V2_EIBRS:
+                if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB) &&
+                    (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) {
+                        setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+                        pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+                }
+                return;
+
+        case SPECTRE_V2_EIBRS_RETPOLINE:
+        case SPECTRE_V2_RETPOLINE:
+        case SPECTRE_V2_LFENCE:
+        case SPECTRE_V2_IBRS:
+                setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+                pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+                return;
+        }
+
+        pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
+        dump_stack();
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
         enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -1347,28 +1395,7 @@ static void __init spectre_v2_select_mitigation(void)
         setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
         pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");

-        /*
-         * Similar to context switches, there are two types of RSB attacks
-         * after vmexit:
-         *
-         * 1) RSB underflow
-         *
-         * 2) Poisoned RSB entry
-         *
-         * When retpoline is enabled, both are mitigated by filling/clearing
-         * the RSB.
-         *
-         * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
-         * prediction isolation protections, RSB still needs to be cleared
-         * because of #2. Note that SMEP provides no protection here, unlike
-         * user-space-poisoned RSB entries.
-         *
-         * eIBRS, on the other hand, has RSB-poisoning protections, so it
-         * doesn't need RSB clearing after vmexit.
-         */
-        if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
-            boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
-                setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+        spectre_v2_determine_rsb_fill_type_at_vmexit(mode);

         /*
          * Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -2096,6 +2123,19 @@ static char *ibpb_state(void)
         return "";
 }

+static char *pbrsb_eibrs_state(void)
+{
+        if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+                if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
+                    boot_cpu_has(X86_FEATURE_RSB_VMEXIT))
+                        return ", PBRSB-eIBRS: SW sequence";
+                else
+                        return ", PBRSB-eIBRS: Vulnerable";
+        } else {
+                return ", PBRSB-eIBRS: Not affected";
+        }
+}
+
 static ssize_t spectre_v2_show_state(char *buf)
 {
         if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
@@ -2108,12 +2148,13 @@ static ssize_t spectre_v2_show_state(char *buf)
             spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
                 return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");

-        return sprintf(buf, "%s%s%s%s%s%s\n",
+        return sprintf(buf, "%s%s%s%s%s%s%s\n",
                        spectre_v2_strings[spectre_v2_enabled],
                        ibpb_state(),
                        boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
                        stibp_state(),
                        boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+                       pbrsb_eibrs_state(),
                        spectre_v2_module_string());
 }

arch/x86/kernel/cpu/common.c

Lines changed: 10 additions & 2 deletions
@@ -955,6 +955,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
 #define NO_SWAPGS               BIT(6)
 #define NO_ITLB_MULTIHIT        BIT(7)
 #define NO_MMIO                 BIT(8)
+#define NO_EIBRS_PBRSB          BIT(9)

 #define VULNWL(_vendor, _family, _model, _whitelist)    \
         { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
@@ -996,7 +997,7 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {

         VULNWL_INTEL(ATOM_GOLDMONT,          NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
         VULNWL_INTEL(ATOM_GOLDMONT_X,        NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
-        VULNWL_INTEL(ATOM_GOLDMONT_PLUS,     NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+        VULNWL_INTEL(ATOM_GOLDMONT_PLUS,     NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),

         /*
          * Technically, swapgs isn't serializing on AMD (despite it previously
@@ -1006,7 +1007,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
          * good enough for our purposes.
          */

-        VULNWL_INTEL(ATOM_TREMONT_X,         NO_ITLB_MULTIHIT),
+        VULNWL_INTEL(ATOM_TREMONT,           NO_EIBRS_PBRSB),
+        VULNWL_INTEL(ATOM_TREMONT_L,         NO_EIBRS_PBRSB),
+        VULNWL_INTEL(ATOM_TREMONT_X,         NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),

         /* AMD Family 0xf - 0x12 */
         VULNWL_AMD(0x0f,        NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
@@ -1178,6 +1181,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
                 setup_force_cpu_bug(X86_BUG_RETBLEED);
         }

+        if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
+            !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+            !(ia32_cap & ARCH_CAP_PBRSB_NO))
+                setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+
         if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
                 return;
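
Putting the detection logic above into one place: the bug bit is forced only
when the CPU advertises enhanced IBRS, is not on the NO_EIBRS_PBRSB whitelist,
and does not report ARCH_CAP_PBRSB_NO in IA32_ARCH_CAPABILITIES. A standalone
restatement of that predicate follows, using plain booleans instead of the
kernel's cpu_has()/cpu_matches() helpers; the function name is illustrative
only.

#include <stdbool.h>
#include <stdint.h>

#define ARCH_CAP_PBRSB_NO (1ULL << 24)  /* bit 24, as defined in the msr-index.h hunk above */

/* Illustrative restatement of the cpu_set_bug_bits() check added above. */
static inline bool affected_by_eibrs_pbrsb(bool has_enhanced_ibrs,
                                           bool whitelisted_no_eibrs_pbrsb,
                                           uint64_t ia32_arch_capabilities)
{
        return has_enhanced_ibrs &&
               !whitelisted_no_eibrs_pbrsb &&
               !(ia32_arch_capabilities & ARCH_CAP_PBRSB_NO);
}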

arch/x86/kvm/vmx.c

Lines changed: 2 additions & 2 deletions
@@ -11022,8 +11022,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
          * entries and (in some cases) RSB underflow.
          *
          * eIBRS has its own protection against poisoned RSB, so it doesn't
-         * need the RSB filling sequence. But it does need to be enabled
-         * before the first unbalanced RET.
+         * need the RSB filling sequence. But it does need to be enabled, and a
+         * single call to retire, before the first unbalanced RET.
          *
          * So no RETs before vmx_spec_ctrl_restore_host() below.
          */

tools/arch/x86/include/asm/cpufeatures.h

Lines changed: 1 addition & 0 deletions
@@ -271,6 +271,7 @@

 /* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:0 (EDX), word 11 */
 #define X86_FEATURE_CQM_LLC          (11*32+ 1) /* LLC QoS if 1 */
+#define X86_FEATURE_RSB_VMEXIT_LITE  (11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */

 /* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:1 (EDX), word 12 */
 #define X86_FEATURE_CQM_OCCUP_LLC    (12*32+ 0) /* LLC occupancy monitoring */
