selftests/bpf: Fix alignment of .BTF_ids #136
Conversation
Master branch: 963ec27
Master branch: ea7da1d
Force-pushed from e17687c to ca5cb1e
Master branch: f4d385e
Force-pushed from ca5cb1e to fab5e0f
Fix a build failure on arm64, due to missing alignment information for the .BTF_ids section:

  resolve_btfids.test.o: in function `test_resolve_btfids':
  tools/testing/selftests/bpf/prog_tests/resolve_btfids.c:140:(.text+0x29c): relocation truncated to fit: R_AARCH64_LDST32_ABS_LO12_NC against `.BTF_ids'
  ld: tools/testing/selftests/bpf/prog_tests/resolve_btfids.c:140: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined

In vmlinux, the .BTF_ids section is aligned to 4 bytes by vmlinux.lds.h. In test_progs however, .BTF_ids doesn't have alignment constraints. The arm64 linker expects the btf_id_set.cnt symbol, a u32, to be naturally aligned, but finds it misaligned and cannot apply the relocation. Enforce alignment of .BTF_ids to 4 bytes.

Fixes: cd04b04 ("selftests/bpf: Add set test to resolve_btfids")
Signed-off-by: Jean-Philippe Brucker <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
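For reference, a minimal sketch of the technique the fix relies on: padding the assembler-emitted .BTF_ids section to a 4-byte boundary so a u32 placed at its start is naturally aligned. The symbol name and layout below are illustrative only, not the exact selftests patch.

  /* Illustrative sketch: align the inline-asm emitted section so the
   * first u32 in it can take an LDST32 relocation on arm64.
   */
  #include <linux/types.h>

  asm(
  ".pushsection .BTF_ids,\"a\";  \n"
  ".balign 4, 0;                 \n"   /* the alignment constraint the section was missing */
  "example_btf_id_set:           \n"
  ".zero 4;                      \n"   /* room for the u32 element count */
  ".popsection;                  \n");

  extern __u32 example_btf_id_set[];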
Master branch: f4d385e
Force-pushed from fab5e0f to 4f01a1b
At least one diff in series https://patchwork.kernel.org/project/bpf/list/?series=357679 is irrelevant now. Closing PR.
The tailcall_3 test program uses bpf_tail_call_static() where the JIT would patch a direct jump. Add a new tailcall_6 test program replicating exactly the same test just ensuring that bpf_tail_call() uses a map index where the verifier cannot make assumptions this time. In other words, this will now cover both on x86-64 JIT, meaning, JIT images with emit_bpf_tail_call_direct() emission as well as JIT images with emit_bpf_tail_call_indirect() emission. # echo 1 > /proc/sys/net/core/bpf_jit_enable # ./test_progs -t tailcalls #136/1 tailcalls/tailcall_1:OK #136/2 tailcalls/tailcall_2:OK #136/3 tailcalls/tailcall_3:OK #136/4 tailcalls/tailcall_4:OK #136/5 tailcalls/tailcall_5:OK #136/6 tailcalls/tailcall_6:OK #136/7 tailcalls/tailcall_bpf2bpf_1:OK #136/8 tailcalls/tailcall_bpf2bpf_2:OK #136/9 tailcalls/tailcall_bpf2bpf_3:OK #136/10 tailcalls/tailcall_bpf2bpf_4:OK #136/11 tailcalls/tailcall_bpf2bpf_5:OK #136 tailcalls:OK Summary: 1/11 PASSED, 0 SKIPPED, 0 FAILED # echo 0 > /proc/sys/net/core/bpf_jit_enable # ./test_progs -t tailcalls #136/1 tailcalls/tailcall_1:OK #136/2 tailcalls/tailcall_2:OK #136/3 tailcalls/tailcall_3:OK #136/4 tailcalls/tailcall_4:OK #136/5 tailcalls/tailcall_5:OK #136/6 tailcalls/tailcall_6:OK [...] For interpreter, the tailcall_1-6 tests are passing as well. The later tailcall_bpf2bpf_* are failing due lack of bpf2bpf + tailcall support in interpreter, so this is expected. Also, manual inspection shows that both loaded programs from tailcall_3 and tailcall_6 test case emit the expected opcodes: * tailcall_3 disasm, emit_bpf_tail_call_direct(): [...] b: push %rax c: push %rbx d: push %r13 f: mov %rdi,%rbx 12: movabs $0xffff8d3f5afb0200,%r13 1c: mov %rbx,%rdi 1f: mov %r13,%rsi 22: xor %edx,%edx _ 24: mov -0x4(%rbp),%eax | limit check 2a: cmp $0x20,%eax | 2d: ja 0x0000000000000046 | 2f: add $0x1,%eax | 32: mov %eax,-0x4(%rbp) |_ 38: nopl 0x0(%rax,%rax,1) 3d: pop %r13 3f: pop %rbx 40: pop %rax 41: jmpq 0xffffffffffffe377 [...] * tailcall_6 disasm, emit_bpf_tail_call_indirect(): [...] 47: movabs $0xffff8d3f59143a00,%rsi 51: mov %edx,%edx 53: cmp %edx,0x24(%rsi) 56: jbe 0x0000000000000093 _ 58: mov -0x4(%rbp),%eax | limit check 5e: cmp $0x20,%eax | 61: ja 0x0000000000000093 | 63: add $0x1,%eax | 66: mov %eax,-0x4(%rbp) |_ 6c: mov 0x110(%rsi,%rdx,8),%rcx 74: test %rcx,%rcx 77: je 0x0000000000000093 79: pop %rax 7a: mov 0x30(%rcx),%rcx 7e: add $0xb,%rcx 82: callq 0x000000000000008e 87: pause 89: lfence 8c: jmp 0x0000000000000087 8e: mov %rcx,(%rsp) 92: retq [...] Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Tested-by: Tiezhu Yang <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Johan Almbladh <[email protected]> Acked-by: Paul Chaignon <[email protected]> Link: https://lore.kernel.org/bpf/CAM1=_QRyRVCODcXo_Y6qOm1iT163HoiSj8U2pZ8Rj3hzMTT=HQ@mail.gmail.com Link: https://lore.kernel.org/bpf/[email protected]
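To make the two emission paths concrete from the C side, here is a hedged sketch contrasting a constant-index tail call with one whose index is only known at runtime; the map and program names are illustrative, not the actual tailcall_3/tailcall_6 selftest sources.

  /* Illustrative sketch: a compile-time constant slot lets the x86-64 JIT
   * patch a direct jump (emit_bpf_tail_call_direct), while an index read
   * from a map at runtime forces the indirect path.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
          __uint(max_entries, 1);
          __uint(key_size, sizeof(__u32));
          __uint(value_size, sizeof(__u32));
  } jmp_table SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, __u32);
  } idx_map SEC(".maps");

  SEC("tc")
  int entry_static(struct __sk_buff *skb)
  {
          /* verifier sees a constant index: direct-jump emission possible */
          bpf_tail_call_static(skb, &jmp_table, 0);
          return 0;
  }

  SEC("tc")
  int entry_runtime(struct __sk_buff *skb)
  {
          __u32 key = 0, *idx;

          idx = bpf_map_lookup_elem(&idx_map, &key);
          if (!idx)
                  return 0;
          /* index only known at runtime: indirect emission */
          bpf_tail_call(skb, &jmp_table, *idx);
          return 0;
  }

  char _license[] SEC("license") = "GPL";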
In commit 9a69e2b ("bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide") ->remote_port field changed from __u32 to __be16. However, narrow load tests which exercise 1-byte sized loads from offsetof(struct bpf_sk_lookup, remote_port) were not adopted to reflect the change. As a result, on little-endian we continue testing loads from addresses: - (__u8 *)&ctx->remote_port + 3 - (__u8 *)&ctx->remote_port + 4 which map to the zero padding following the remote_port field, and don't break the tests because there is no observable change. While on big-endian, we observe breakage because tests expect to see zeros for values loaded from: - (__u8 *)&ctx->remote_port - 1 - (__u8 *)&ctx->remote_port - 2 Above addresses map to ->remote_ip6 field, which precedes ->remote_port, and are populated during the bpf_sk_lookup IPv6 tests. Unsurprisingly, on s390x we observe: #136/38 sk_lookup/narrow access to ctx v4:OK #136/39 sk_lookup/narrow access to ctx v6:FAIL Fix it by removing the checks for 1-byte loads from offsets outside of the ->remote_port field. Fixes: 9a69e2b ("bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide") Suggested-by: Ilya Leoshkevich <[email protected]> Signed-off-by: Jakub Sitnicki <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
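To illustrate the narrow-load pattern the test exercises once remote_port is 16 bits wide, here is a hedged sketch; the program name and the pass/drop logic are illustrative, not the verbatim test_sk_lookup code.

  /* Illustrative sketch: only byte offsets 0 and 1 still fall inside the
   * __be16 remote_port field. The removed checks read bytes outside it
   * (the following padding on little-endian, the preceding remote_ip6
   * bytes on big-endian).
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("sk_lookup")
  int narrow_remote_port(struct bpf_sk_lookup *ctx)
  {
          __u16 port = ctx->remote_port;              /* full 16-bit load */
          __u8 b0 = ((__u8 *)&ctx->remote_port)[0];   /* narrow load, byte 0 */
          __u8 b1 = ((__u8 *)&ctx->remote_port)[1];   /* narrow load, byte 1 */

          /* The narrow loads must agree with the full-width load. */
          if (((__u8 *)&port)[0] != b0 || ((__u8 *)&port)[1] != b1)
                  return SK_DROP;
          return SK_PASS;
  }

  char _license[] SEC("license") = "GPL";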
Fix our pointer offset usage in error_state_read when there is no i915_gpu_coredump but buf offset is non-zero. This fixes a kernel page fault can happen when multiple tests are running concurrently in a loop and one is producing engine resets and consuming the i915 error_state dump while the other is forcing full GT resets. (takes a while to trigger). The dmesg call trace: [ 5590.803000] BUG: unable to handle page fault for address: ffffffffa0b0e000 [ 5590.803009] #PF: supervisor read access in kernel mode [ 5590.803013] #PF: error_code(0x0000) - not-present page [ 5590.803016] PGD 5814067 P4D 5814067 PUD 5815063 PMD 109de4067 PTE 0 [ 5590.803022] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 5590.803026] CPU: 5 PID: 13656 Comm: i915_hangman Tainted: G U 5.17.0-rc5-ups69-guc-err-capt-rev6+ #136 [ 5590.803033] Hardware name: Intel Corporation Alder Lake Client Platform/AlderLake-M LP4x RVP, BIOS ADLPFWI1.R00. 3031.A02.2201171222 01/17/2022 [ 5590.803039] RIP: 0010:memcpy_erms+0x6/0x10 [ 5590.803045] Code: fe ff ff cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe [ 5590.803054] RSP: 0018:ffffc90003a8fdf0 EFLAGS: 00010282 [ 5590.803057] RAX: ffff888107ee9000 RBX: ffff888108cb1a00 RCX: 0000000000000f8f [ 5590.803061] RDX: 0000000000001000 RSI: ffffffffa0b0e000 RDI: ffff888107ee9071 [ 5590.803065] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000001 [ 5590.803069] R10: 0000000000000001 R11: 0000000000000002 R12: 0000000000000019 [ 5590.803073] R13: 0000000000174fff R14: 0000000000001000 R15: ffff888107ee9000 [ 5590.803077] FS: 00007f62a99bee80(0000) GS:ffff88849f880000(0000) knlGS:0000000000000000 [ 5590.803082] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5590.803085] CR2: ffffffffa0b0e000 CR3: 000000010a1a8004 CR4: 0000000000770ee0 [ 5590.803089] PKRU: 55555554 [ 5590.803091] Call Trace: [ 5590.803093] <TASK> [ 5590.803096] error_state_read+0xa1/0xd0 [i915] [ 5590.803175] kernfs_fop_read_iter+0xb2/0x1b0 [ 5590.803180] new_sync_read+0x116/0x1a0 [ 5590.803185] vfs_read+0x114/0x1b0 [ 5590.803189] ksys_read+0x63/0xe0 [ 5590.803193] do_syscall_64+0x38/0xc0 [ 5590.803197] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 5590.803201] RIP: 0033:0x7f62aaea5912 [ 5590.803204] Code: c0 e9 b2 fe ff ff 50 48 8d 3d 5a b9 0c 00 e8 05 19 02 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24 [ 5590.803213] RSP: 002b:00007fff5b659ae8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [ 5590.803218] RAX: ffffffffffffffda RBX: 0000000000100000 RCX: 00007f62aaea5912 [ 5590.803221] RDX: 000000000008b000 RSI: 00007f62a8c4000f RDI: 0000000000000006 [ 5590.803225] RBP: 00007f62a8bcb00f R08: 0000000000200010 R09: 0000000000101000 [ 5590.803229] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000006 [ 5590.803233] R13: 0000000000075000 R14: 00007f62a8acb010 R15: 0000000000200000 [ 5590.803238] </TASK> [ 5590.803240] Modules linked in: i915 ttm drm_buddy drm_dp_helper drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops prime_numbers nfnetlink br_netfilter overlay mei_pxp mei_hdcp x86_pkg_temp_thermal coretemp kvm_intel snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hwdep snd_hda_core snd_pcm mei_me mei fuse ip_tables x_tables crct10dif_pclmul e1000e crc32_pclmul ptp i2c_i801 ghash_clmulni_intel i2c_smbus pps_core [last unloa ded: ttm] [ 5590.803277] CR2: 
ffffffffa0b0e000 [ 5590.803280] ---[ end trace 0000000000000000 ]--- Fixes: 0e39037 ("drm/i915: Cache the error string") Signed-off-by: Alan Previn <[email protected]> Reviewed-by: John Harrison <[email protected]> Signed-off-by: John Harrison <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] (cherry picked from commit 3304033) Signed-off-by: Jani Nikula <[email protected]>
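For illustration, a hedged sketch of the offset handling such a sysfs binary read needs when only a short fallback string is available; the function name and fallback text are made up for the example and are not the i915 implementation.

  /* Illustrative sketch: bound the copy by the fallback string's length and
   * apply the reader's offset to the source buffer, so a read() with a
   * non-zero offset returns EOF instead of copying past the data.
   */
  #include <linux/fs.h>
  #include <linux/minmax.h>
  #include <linux/string.h>
  #include <linux/sysfs.h>

  static ssize_t example_error_state_read(struct file *filp, struct kobject *kobj,
                                          struct bin_attribute *attr, char *buf,
                                          loff_t off, size_t count)
  {
          static const char fallback[] = "No error state collected\n";
          size_t len = sizeof(fallback) - 1;

          if (off >= len)
                  return 0;                       /* past the end: report EOF */
          count = min_t(size_t, count, len - off);
          memcpy(buf, fallback + off, count);     /* offset applied to the source */
          return count;
  }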
LE Create CIS command shall not be sent before all CIS Established events from its previous invocation have been processed. Currently it is sent via hci_sync but that only waits for the first event, but there can be multiple. Make it wait for all events, and simplify the CIS creation as follows: Add new flag HCI_CONN_CREATE_CIS, which is set if Create CIS has been sent for the connection but it is not yet completed. Make BT_CONNECT state to mean the connection wants Create CIS. On events after which new Create CIS may need to be sent, send it if possible and some connections need it. These events are: hci_connect_cis, iso_connect_cfm, hci_cs_le_create_cis, hci_le_cis_estabilished_evt. The Create CIS status/completion events shall queue new Create CIS only if at least one of the connections transitions away from BT_CONNECT, so that we don't loop if controller is sending bogus events. This fixes sending multiple CIS Create for the same CIS in the "ISO AC 6(i) - Success" BlueZ test case: < HCI Command: LE Create Co.. (0x08|0x0064) plen 9 #129 [hci0] Number of CIS: 2 CIS Handle: 257 ACL Handle: 42 CIS Handle: 258 ACL Handle: 42 > HCI Event: Command Status (0x0f) plen 4 #130 [hci0] LE Create Connected Isochronous Stream (0x08|0x0064) ncmd 1 Status: Success (0x00) > HCI Event: LE Meta Event (0x3e) plen 29 #131 [hci0] LE Connected Isochronous Stream Established (0x19) Status: Success (0x00) Connection Handle: 257 ... < HCI Command: LE Setup Is.. (0x08|0x006e) plen 13 #132 [hci0] ... > HCI Event: Command Complete (0x0e) plen 6 #133 [hci0] LE Setup Isochronous Data Path (0x08|0x006e) ncmd 1 ... < HCI Command: LE Create Co.. (0x08|0x0064) plen 5 #134 [hci0] Number of CIS: 1 CIS Handle: 258 ACL Handle: 42 > HCI Event: Command Status (0x0f) plen 4 #135 [hci0] LE Create Connected Isochronous Stream (0x08|0x0064) ncmd 1 Status: ACL Connection Already Exists (0x0b) > HCI Event: LE Meta Event (0x3e) plen 29 #136 [hci0] LE Connected Isochronous Stream Established (0x19) Status: Success (0x00) Connection Handle: 258 ... Fixes: c09b80b ("Bluetooth: hci_conn: Fix not waiting for HCI_EVT_LE_CIS_ESTABLISHED") Signed-off-by: Pauli Virtanen <[email protected]> Signed-off-by: Luiz Augusto von Dentz <[email protected]>
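As a rough illustration of the gating described above, a hedged sketch in plain C follows; the structures and helpers (struct cis_conn, send_create_cis()) are stand-ins, not the real hci_conn API, and the real LE Create CIS command batches several CIS handles into one HCI command rather than issuing one per connection.

  /* Illustrative sketch: Create CIS is only (re)issued once no previously
   * sent CIS is still waiting for its CIS Established event, and only for
   * connections that want it (BT_CONNECT without the "sent" flag).
   */
  #include <stdbool.h>
  #include <stddef.h>

  enum conn_state { BT_CONNECT, BT_CONNECTED, BT_CLOSED };

  struct cis_conn {
          enum conn_state state;
          bool create_cis_sent;   /* stand-in for the HCI_CONN_CREATE_CIS flag */
  };

  static bool create_cis_pending(const struct cis_conn *conns, size_t n)
  {
          for (size_t i = 0; i < n; i++)
                  if (conns[i].create_cis_sent)
                          return true;    /* wait for all Established events */
          return false;
  }

  /* Called from the events listed in the commit message: hci_connect_cis,
   * iso_connect_cfm, hci_cs_le_create_cis, hci_le_cis_estabilished_evt. */
  static void try_create_cis(struct cis_conn *conns, size_t n,
                             void (*send_create_cis)(struct cis_conn *))
  {
          if (create_cis_pending(conns, n))
                  return;
          for (size_t i = 0; i < n; i++) {
                  if (conns[i].state == BT_CONNECT && !conns[i].create_cis_sent) {
                          conns[i].create_cis_sent = true;
                          send_create_cis(&conns[i]);
                  }
          }
  }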
Inject fault while probing kunit-example-test.ko, if kstrdup() fails in mod_sysfs_setup() in load_module(), the mod->state will switch from MODULE_STATE_COMING to MODULE_STATE_GOING instead of from MODULE_STATE_LIVE to MODULE_STATE_GOING, so only kunit_module_exit() will be called without kunit_module_init(), and the mod->kunit_suites is no set correctly and the free in kunit_free_suite_set() will cause below wild-memory-access bug. The mod->state state machine when load_module() succeeds: MODULE_STATE_UNFORMED ---> MODULE_STATE_COMING ---> MODULE_STATE_LIVE ^ | | | delete_module +---------------- MODULE_STATE_GOING <---------+ The mod->state state machine when load_module() fails at mod_sysfs_setup(): MODULE_STATE_UNFORMED ---> MODULE_STATE_COMING ---> MODULE_STATE_GOING ^ | | | +-----------------------------------------------+ Call kunit_module_init() at MODULE_STATE_COMING state to fix the issue because MODULE_STATE_LIVE is transformed from it. Unable to handle kernel paging request at virtual address ffffff341e942a88 KASAN: maybe wild-memory-access in range [0x0003f9a0f4a15440-0x0003f9a0f4a15447] Mem abort info: ESR = 0x0000000096000004 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 FSC = 0x04: level 0 translation fault Data abort info: ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000 CM = 0, WnR = 0, TnD = 0, TagAccess = 0 GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0 swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000000441ea000 [ffffff341e942a88] pgd=0000000000000000, p4d=0000000000000000 Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP Modules linked in: kunit_example_test(-) cfg80211 rfkill 8021q garp mrp stp llc ipv6 [last unloaded: kunit_example_test] CPU: 3 PID: 2035 Comm: modprobe Tainted: G W N 6.5.0-next-20230828+ #136 Hardware name: linux,dummy-virt (DT) pstate: a0000005 (NzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : kfree+0x2c/0x70 lr : kunit_free_suite_set+0xcc/0x13c sp : ffff8000829b75b0 x29: ffff8000829b75b0 x28: ffff8000829b7b90 x27: 0000000000000000 x26: dfff800000000000 x25: ffffcd07c82a7280 x24: ffffcd07a50ab300 x23: ffffcd07a50ab2e8 x22: 1ffff00010536ec0 x21: dfff800000000000 x20: ffffcd07a50ab2f0 x19: ffffcd07a50ab2f0 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: ffffcd07c24b6764 x14: ffffcd07c24b63c0 x13: ffffcd07c4cebb94 x12: ffff700010536ec7 x11: 1ffff00010536ec6 x10: ffff700010536ec6 x9 : dfff800000000000 x8 : 00008fffefac913a x7 : 0000000041b58ab3 x6 : 0000000000000000 x5 : 1ffff00010536ec5 x4 : ffff8000829b7628 x3 : dfff800000000000 x2 : ffffff341e942a80 x1 : ffffcd07a50aa000 x0 : fffffc0000000000 Call trace: kfree+0x2c/0x70 kunit_free_suite_set+0xcc/0x13c kunit_module_notify+0xd8/0x360 blocking_notifier_call_chain+0xc4/0x128 load_module+0x382c/0x44a4 init_module_from_file+0xd4/0x128 idempotent_init_module+0x2c8/0x524 __arm64_sys_finit_module+0xac/0x100 invoke_syscall+0x6c/0x258 el0_svc_common.constprop.0+0x160/0x22c do_el0_svc+0x44/0x5c el0_svc+0x38/0x78 el0t_64_sync_handler+0x13c/0x158 el0t_64_sync+0x190/0x194 Code: aa0003e1 b25657e0 d34cfc42 8b021802 (f9400440) ---[ end trace 0000000000000000 ]--- Kernel panic - not syncing: Oops: Fatal exception SMP: stopping secondary CPUs Kernel Offset: 0x4d0742200000 from 0xffff800080000000 PHYS_OFFSET: 0xffffee43c0000000 CPU features: 0x88000203,3c020000,1000421b Memory Limit: none Rebooting in 1 seconds.. 
Fixes: 3d6e446 ("kunit: unify module and builtin suite definitions") Signed-off-by: Jinjie Ruan <[email protected]> Reviewed-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
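A hedged sketch of the notifier shape this relies on follows; only the MODULE_STATE_* constants are real, the example_* functions are placeholders for the per-module suite bookkeeping.

  /* Illustrative sketch: run the per-module init at MODULE_STATE_COMING,
   * the state every module passes through even when load_module() later
   * fails, so the GOING handler always has a matching init to undo.
   */
  #include <linux/module.h>
  #include <linux/notifier.h>

  static void example_suites_init(struct module *mod) { /* attach test data */ }
  static void example_suites_exit(struct module *mod) { /* free it again    */ }

  static int example_module_notify(struct notifier_block *nb,
                                   unsigned long val, void *data)
  {
          struct module *mod = data;

          switch (val) {
          case MODULE_STATE_COMING:         /* was MODULE_STATE_LIVE before the fix */
                  example_suites_init(mod);
                  break;
          case MODULE_STATE_GOING:
                  example_suites_exit(mod); /* now always paired with an init */
                  break;
          }
          return NOTIFY_OK;
  }

  static struct notifier_block example_nb = {
          .notifier_call = example_module_notify,
  };

Registering example_nb with register_module_notifier() would wire the sketch up; kunit does the equivalent with its own notifier block.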
Pull request for series with
subject: selftests/bpf: Fix alignment of .BTF_ids
version: 1
url: https://patchwork.kernel.org/project/bpf/list/?series=357679