
[msan] Improve packed multiply-add instrumentation #152941

Merged
merged 26 commits into llvm:main on Aug 13, 2025

Conversation

thurstond
Contributor

@thurstond thurstond commented Aug 10, 2025

The current instrumentation has false positives: if there is a single uninitialized bit in any of the operands, the entire output is poisoned. This does not take into account that multiplying an uninitialized value by zero results in an initialized zero value.

This patch allows zero elements to clear the corresponding shadow during the multiplication step. The horizontal-add step and the accumulation step (if any) are modeled using bitwise OR.

Future work can apply this improved handler to the AVX512 equivalent intrinsics (x86_avx512_pmaddw_d_512, x86_avx512_pmaddubs_w_512) and to the AVX-VNNI intrinsics.
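
For reference, a minimal scalar sketch (illustrative only, not part of the patch) of the llvm.x86.sse2.pmadd.wd semantics being modeled: corresponding signed 16-bit elements are multiplied, and each adjacent pair of products is added into one 32-bit output lane.

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative scalar reference for llvm.x86.sse2.pmadd.wd: multiply
// corresponding signed 16-bit elements, then horizontally add each adjacent
// pair of products into a 32-bit result lane.
void pmaddwd_ref(const int16_t a[8], const int16_t b[8], int32_t out[4]) {
  for (size_t i = 0; i < 4; ++i) {
    int64_t lo = (int64_t)a[2 * i] * b[2 * i];
    int64_t hi = (int64_t)a[2 * i + 1] * b[2 * i + 1];
    // The hardware wraps in the single overflowing corner case; ignored here.
    out[i] = (int32_t)(lo + hi);
  }
}
```

A product with a zero factor is zero no matter what the other factor holds, which is why zero elements can safely clear the corresponding shadow.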

The current instrumentation has false positives: if there is a single
uninitialized bit in any of the operands, the entire output is poisoned.
This does not take into account that multiplying an uninitialized bit
by zero results in an initialized zero value.

This patch improves the instrumentation by under-approximating the
multiplication as bitwise AND, with correct handling of ANDing with
zero. This is an under-approximation (no false positives) because it
ignores the carries produced by multiplication.

The horizontal add step is modeled precisely.

It also applies the handler to the AVX512 equivalents:
llvm.avx512.pmaddw.d.512, llvm.avx512.pmaddubs.w.512
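
For illustration, a scalar sketch of the shadow the handler computes for one pmadd.wd output lane (names here are illustrative, not from the patch): the multiply shadow uses the visitAnd() rule, and the horizontal add OR's the pair of shadows before widening.

```cpp
#include <cstdint>

// Illustrative scalar model of the emitted shadow computation for one
// pmadd.wd output lane. Sa*/Sb* are operand shadows (set bits = uninitialized),
// Va*/Vb* are the concrete operand values.
uint32_t pmadd_shadow_lane(uint16_t Va0, uint16_t Sa0, uint16_t Vb0, uint16_t Sb0,
                           uint16_t Va1, uint16_t Sa1, uint16_t Vb1, uint16_t Sb1) {
  // Multiplication is under-approximated as AND, so its shadow follows the
  // visitAnd() rule: shadow(x & y) = (Sx & Sy) | (Vx & Sy) | (Sx & Vy).
  uint16_t Sab0 = (Sa0 & Sb0) | (Va0 & Sb0) | (Sa0 & Vb0);
  uint16_t Sab1 = (Sa1 & Sb1) | (Va1 & Sb1) | (Sa1 & Vb1);
  // Horizontal add of the pair: OR the two shadows, then widen (zext) to the
  // 32-bit result lane.
  return (uint32_t)(Sab0 | Sab1);
}
```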
@llvmbot
Member

llvmbot commented Aug 10, 2025

@llvm/pr-subscribers-compiler-rt-sanitizer

@llvm/pr-subscribers-llvm-transforms

Author: Thurston Dang (thurstond)

Changes

The current instrumentation has false positives: if there is a single uninitialized bit in any of the operands, the entire output is poisoned. This does not take into account that multiplying an uninitialized value by zero results in an initialized zero value.

This patch improves the instrumentation by under-approximating the multiplication as bitwise AND, with correct handling of ANDing with zero. This has no false positives but does have significant false negatives.

The horizontal add step is modeled precisely.

It also applies the handler to the AVX512 equivalents, which are currently strictly handled: llvm.avx512.pmaddw.d.512, llvm.avx512.pmaddubs.w.512
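
As a concrete example of the false positive that goes away (a hypothetical test, not from the patch, written with the SSE2 intrinsic _mm_madd_epi16, which lowers to llvm.x86.sse2.pmadd.wd): if one operand is all zeros and the other is entirely uninitialized, every product is zero and the result is an initialized zero vector, yet the old OR-of-shadows instrumentation would still flag any use of it.

```cpp
#include <immintrin.h>

// Hypothetical MSan scenario: a0 is a known all-zero vector and a1 is
// deliberately left uninitialized. Every 16-bit product is 0 * <uninit> = 0,
// so the pmaddwd result is a fully initialized zero vector. The old
// instrumentation (OR of whole shadows) would still report a use of the
// result as uninitialized; the AND-based under-approximation does not.
__m128i zero_times_uninit() {
  __m128i a0 = _mm_setzero_si128();
  __m128i a1; // intentionally uninitialized for the MSan scenario
  return _mm_madd_epi16(a0, a1); // lowers to llvm.x86.sse2.pmadd.wd
}
```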


Patch is 56.79 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/152941.diff

10 Files Affected:

  • (modified) llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp (+64-10)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll (+30-15)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll (+46-52)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics.ll (+46-51)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll (+27-13)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/sse2-intrinsics-x86.ll (+10-5)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/i386/avx2-intrinsics-i386.ll (+31-16)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/i386/mmx-intrinsics.ll (+27-13)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/i386/sse2-intrinsics-i386.ll (+10-5)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/vector_arith.ll (+21-9)
diff --git a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
index 6e8138725375a..f938b0acaa0be 100644
--- a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
@@ -2865,6 +2865,9 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
   }
 
   void visitMul(BinaryOperator &I) {
+    // TODO: this can only handle zero bits that are part of statically-known
+    //       constants. Consider under-approximating the multiplication as AND
+    //       (which ignores the carry), and using the visitAnd() logic.
     Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
     Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
     if (constOp0 && !constOp1)
@@ -3827,19 +3830,68 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
   }
 
   // Instrument multiply-add intrinsic.
+  //
+  // e.g., <4 x i32> @llvm.x86.sse2.pmadd.wd(<8 x i16> %a, <8 x i16> %b)
+  //       <1 x i64> @llvm.x86.mmx.pmadd.wd(<1 x i64> %a, <1 x i64> %b)
   void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                   unsigned MMXEltSizeInBits = 0) {
-    Type *ResTy =
-        MMXEltSizeInBits ? getMMXVectorTy(MMXEltSizeInBits * 2) : I.getType();
     IRBuilder<> IRB(&I);
-    auto *Shadow0 = getShadow(&I, 0);
-    auto *Shadow1 = getShadow(&I, 1);
-    Value *S = IRB.CreateOr(Shadow0, Shadow1);
-    S = IRB.CreateBitCast(S, ResTy);
-    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
-                       ResTy);
-    S = IRB.CreateBitCast(S, getShadowTy(&I));
-    setShadow(&I, S);
+
+    Type *ReturnType =
+        MMXEltSizeInBits ? getMMXVectorTy(MMXEltSizeInBits * 2) : I.getType();
+    assert(isa<FixedVectorType>(ReturnType));
+
+    assert(I.arg_size() == 2);
+    [[maybe_unused]] FixedVectorType *ParamType =
+        cast<FixedVectorType>(I.getArgOperand(0)->getType());
+    assert(ParamType == I.getArgOperand(1)->getType());
+
+    if (!MMXEltSizeInBits)
+      assert(ParamType->getNumElements() ==
+             2 * cast<FixedVectorType>(ReturnType)->getNumElements());
+
+    assert(ParamType->getPrimitiveSizeInBits() ==
+           ReturnType->getPrimitiveSizeInBits());
+
+    // Step 1: multiplication of corresponding vector elements
+    // We want to take into account the fact that multiplying zero by an
+    // uninitialized bit results in an initialized value of zero.
+    // We under-approximate multiplication by treating it as bitwise AND; this
+    // has no false positives but substantial false negatives. We then
+    // compute the shadow using the same logic as visitAnd().
+    Value *S1 = getShadow(&I, 0);
+    Value *S2 = getShadow(&I, 1);
+    Value *V1 = I.getOperand(0);
+    Value *V2 = I.getOperand(1);
+
+    Value *S1S2 = IRB.CreateAnd(S1, S2);
+    Value *V1S2 = IRB.CreateAnd(V1, S2);
+    Value *S1V2 = IRB.CreateAnd(S1, V2);
+
+    // After multiplying e.g., <8 x i16> %a, <8 x i16> %b, we have
+    // <8 x i16> %ab.
+    Value *ShadowAB = IRB.CreateOr({S1S2, V1S2, S1V2});
+    // For MMX, %ab has a misleading type e.g., <1 x i64>.
+    if (MMXEltSizeInBits)
+      ShadowAB = IRB.CreateBitCast(ShadowAB, getMMXVectorTy(MMXEltSizeInBits));
+
+    // Step 2: pairwise/horizontal add
+    // Handle it similarly to handlePairwiseShadowOrIntrinsic().
+    unsigned TotalNumElems =
+        cast<FixedVectorType>(ReturnType)->getNumElements() * 2;
+    SmallVector<int, 8> EvenMask;
+    SmallVector<int, 8> OddMask;
+    for (unsigned X = 0; X < TotalNumElems - 1; X += 2) {
+      EvenMask.push_back(X);
+      OddMask.push_back(X + 1);
+    }
+    Value *EvenShadow = IRB.CreateShuffleVector(ShadowAB, EvenMask);
+    Value *OddShadow = IRB.CreateShuffleVector(ShadowAB, OddMask);
+
+    Value *OrShadow = IRB.CreateOr(EvenShadow, OddShadow);
+    OrShadow = CreateShadowCast(IRB, OrShadow, getShadowTy(&I));
+
+    setShadow(&I, OrShadow);
     setOriginForNaryOp(I);
   }
 
@@ -5378,6 +5430,8 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
     case Intrinsic::x86_avx2_pmadd_wd:
     case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
     case Intrinsic::x86_avx2_pmadd_ub_sw:
+    case Intrinsic::x86_avx512_pmaddw_d_512:
+    case Intrinsic::x86_avx512_pmaddubs_w_512:
       handleVectorPmaddIntrinsic(I);
       break;
 
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
index f916130fe53e5..3b38fcba35f98 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
@@ -140,11 +140,16 @@ define <8 x i32> @test_x86_avx2_pmadd_wd(<16 x i16> %a0, <16 x i16> %a1) #0 {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <16 x i16>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <16 x i16>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 32) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP3:%.*]] = or <16 x i16> [[TMP1]], [[TMP2]]
-; CHECK-NEXT:    [[TMP4:%.*]] = bitcast <16 x i16> [[TMP3]] to <8 x i32>
-; CHECK-NEXT:    [[TMP5:%.*]] = icmp ne <8 x i32> [[TMP4]], zeroinitializer
-; CHECK-NEXT:    [[TMP6:%.*]] = sext <8 x i1> [[TMP5]] to <8 x i32>
-; CHECK-NEXT:    [[RES:%.*]] = call <8 x i32> @llvm.x86.avx2.pmadd.wd(<16 x i16> [[A0:%.*]], <16 x i16> [[A1:%.*]])
+; CHECK-NEXT:    [[TMP3:%.*]] = and <16 x i16> [[TMP1]], [[TMP2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = and <16 x i16> [[A0:%.*]], [[TMP2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and <16 x i16> [[TMP1]], [[A1:%.*]]
+; CHECK-NEXT:    [[TMP11:%.*]] = or <16 x i16> [[TMP3]], [[TMP4]]
+; CHECK-NEXT:    [[TMP7:%.*]] = or <16 x i16> [[TMP11]], [[TMP5]]
+; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <16 x i16> [[TMP7]], <16 x i16> poison, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
+; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <16 x i16> [[TMP7]], <16 x i16> poison, <8 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15>
+; CHECK-NEXT:    [[TMP10:%.*]] = or <8 x i16> [[TMP8]], [[TMP9]]
+; CHECK-NEXT:    [[TMP6:%.*]] = zext <8 x i16> [[TMP10]] to <8 x i32>
+; CHECK-NEXT:    [[RES:%.*]] = call <8 x i32> @llvm.x86.avx2.pmadd.wd(<16 x i16> [[A0]], <16 x i16> [[A1]])
 ; CHECK-NEXT:    store <8 x i32> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i32> [[RES]]
 ;
@@ -677,11 +682,16 @@ define <16 x i16> @test_x86_avx2_pmadd_ub_sw(<32 x i8> %a0, <32 x i8> %a1) #0 {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <32 x i8>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <32 x i8>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 32) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP3:%.*]] = or <32 x i8> [[TMP1]], [[TMP2]]
-; CHECK-NEXT:    [[TMP4:%.*]] = bitcast <32 x i8> [[TMP3]] to <16 x i16>
-; CHECK-NEXT:    [[TMP5:%.*]] = icmp ne <16 x i16> [[TMP4]], zeroinitializer
-; CHECK-NEXT:    [[TMP6:%.*]] = sext <16 x i1> [[TMP5]] to <16 x i16>
-; CHECK-NEXT:    [[RES:%.*]] = call <16 x i16> @llvm.x86.avx2.pmadd.ub.sw(<32 x i8> [[A0:%.*]], <32 x i8> [[A1:%.*]])
+; CHECK-NEXT:    [[TMP3:%.*]] = and <32 x i8> [[TMP1]], [[TMP2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = and <32 x i8> [[A0:%.*]], [[TMP2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and <32 x i8> [[TMP1]], [[A1:%.*]]
+; CHECK-NEXT:    [[TMP11:%.*]] = or <32 x i8> [[TMP3]], [[TMP4]]
+; CHECK-NEXT:    [[TMP7:%.*]] = or <32 x i8> [[TMP11]], [[TMP5]]
+; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <32 x i8> [[TMP7]], <32 x i8> poison, <16 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <32 x i8> [[TMP7]], <32 x i8> poison, <16 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31>
+; CHECK-NEXT:    [[TMP10:%.*]] = or <16 x i8> [[TMP8]], [[TMP9]]
+; CHECK-NEXT:    [[TMP6:%.*]] = zext <16 x i8> [[TMP10]] to <16 x i16>
+; CHECK-NEXT:    [[RES:%.*]] = call <16 x i16> @llvm.x86.avx2.pmadd.ub.sw(<32 x i8> [[A0]], <32 x i8> [[A1]])
 ; CHECK-NEXT:    store <16 x i16> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i16> [[RES]]
 ;
@@ -706,11 +716,16 @@ define <16 x i16> @test_x86_avx2_pmadd_ub_sw_load_op0(ptr %ptr, <32 x i8> %a1) #
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
 ; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP7]], align 32
-; CHECK-NEXT:    [[TMP8:%.*]] = or <32 x i8> [[_MSLD]], [[TMP2]]
-; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <32 x i8> [[TMP8]] to <16 x i16>
-; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <16 x i16> [[TMP9]], zeroinitializer
-; CHECK-NEXT:    [[TMP11:%.*]] = sext <16 x i1> [[TMP10]] to <16 x i16>
-; CHECK-NEXT:    [[RES:%.*]] = call <16 x i16> @llvm.x86.avx2.pmadd.ub.sw(<32 x i8> [[A0]], <32 x i8> [[A1:%.*]])
+; CHECK-NEXT:    [[TMP8:%.*]] = and <32 x i8> [[_MSLD]], [[TMP2]]
+; CHECK-NEXT:    [[TMP9:%.*]] = and <32 x i8> [[A0]], [[TMP2]]
+; CHECK-NEXT:    [[TMP10:%.*]] = and <32 x i8> [[_MSLD]], [[A1:%.*]]
+; CHECK-NEXT:    [[TMP16:%.*]] = or <32 x i8> [[TMP8]], [[TMP9]]
+; CHECK-NEXT:    [[TMP12:%.*]] = or <32 x i8> [[TMP16]], [[TMP10]]
+; CHECK-NEXT:    [[TMP13:%.*]] = shufflevector <32 x i8> [[TMP12]], <32 x i8> poison, <16 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT:    [[TMP14:%.*]] = shufflevector <32 x i8> [[TMP12]], <32 x i8> poison, <16 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31>
+; CHECK-NEXT:    [[TMP15:%.*]] = or <16 x i8> [[TMP13]], [[TMP14]]
+; CHECK-NEXT:    [[TMP11:%.*]] = zext <16 x i8> [[TMP15]] to <16 x i16>
+; CHECK-NEXT:    [[RES:%.*]] = call <16 x i16> @llvm.x86.avx2.pmadd.ub.sw(<32 x i8> [[A0]], <32 x i8> [[A1]])
 ; CHECK-NEXT:    store <16 x i16> [[TMP11]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i16> [[RES]]
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
index 02df9c49a010b..54e9939ace7c3 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
@@ -7,8 +7,6 @@
 ; - llvm.x86.avx512.dbpsadbw.512
 ; - llvm.x86.avx512.packssdw.512, llvm.x86.avx512.packsswb.512
 ; - llvm.x86.avx512.packusdw.512, llvm.x86.avx512.packuswb.512
-; - llvm.x86.avx512.pmaddubs.w.512
-; - llvm.x86.avx512.pmaddw.d.512
 ;
 ; Heuristically handled:
 ; - llvm.sadd.sat.v32i16, llvm.sadd.sat.v64i8
@@ -4930,18 +4928,17 @@ define <32 x i16> @test_int_x86_avx512_pmaddubs_w_512(<64 x i8> %x0, <64 x i8> %
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <64 x i8>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <64 x i8>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 64) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <64 x i8> [[TMP1]] to i512
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP3]], 0
-; CHECK-NEXT:    [[TMP4:%.*]] = bitcast <64 x i8> [[TMP2]] to i512
-; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP4]], 0
-; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
-; CHECK:       6:
-; CHECK-NEXT:    [[TMP7:%.*]] = call <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8> [[X0:%.*]], <64 x i8> [[X1:%.*]])
-; CHECK-NEXT:    store <32 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT:    [[TMP3:%.*]] = and <64 x i8> [[TMP1]], [[TMP2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = and <64 x i8> [[X0:%.*]], [[TMP2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and <64 x i8> [[TMP1]], [[X1:%.*]]
+; CHECK-NEXT:    [[TMP6:%.*]] = or <64 x i8> [[TMP3]], [[TMP4]]
+; CHECK-NEXT:    [[TMP12:%.*]] = or <64 x i8> [[TMP6]], [[TMP5]]
+; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <64 x i8> [[TMP12]], <64 x i8> poison, <32 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30, i32 32, i32 34, i32 36, i32 38, i32 40, i32 42, i32 44, i32 46, i32 48, i32 50, i32 52, i32 54, i32 56, i32 58, i32 60, i32 62>
+; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <64 x i8> [[TMP12]], <64 x i8> poison, <32 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31, i32 33, i32 35, i32 37, i32 39, i32 41, i32 43, i32 45, i32 47, i32 49, i32 51, i32 53, i32 55, i32 57, i32 59, i32 61, i32 63>
+; CHECK-NEXT:    [[TMP10:%.*]] = or <32 x i8> [[TMP8]], [[TMP9]]
+; CHECK-NEXT:    [[TMP11:%.*]] = zext <32 x i8> [[TMP10]] to <32 x i16>
+; CHECK-NEXT:    [[TMP7:%.*]] = call <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8> [[X0]], <64 x i8> [[X1]])
+; CHECK-NEXT:    store <32 x i16> [[TMP11]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <32 x i16> [[TMP7]]
 ;
   %res = call <32 x i16> @llvm.x86.avx512.mask.pmaddubs.w.512(<64 x i8> %x0, <64 x i8> %x1, <32 x i16> %x2, i32 -1)
@@ -4955,22 +4952,21 @@ define <32 x i16> @test_int_x86_avx512_mask_pmaddubs_w_512(<64 x i8> %x0, <64 x
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i32, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 192) to ptr), align 8
 ; CHECK-NEXT:    [[TMP4:%.*]] = load <32 x i16>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 128) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <64 x i8> [[TMP1]] to i512
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP5]], 0
-; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <64 x i8> [[TMP2]] to i512
-; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP6]], 0
-; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
-; CHECK:       8:
-; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8> [[X0:%.*]], <64 x i8> [[X1:%.*]])
+; CHECK-NEXT:    [[TMP5:%.*]] = and <64 x i8> [[TMP1]], [[TMP2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = and <64 x i8> [[X0:%.*]], [[TMP2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = and <64 x i8> [[TMP1]], [[X1:%.*]]
+; CHECK-NEXT:    [[TMP8:%.*]] = or <64 x i8> [[TMP5]], [[TMP6]]
+; CHECK-NEXT:    [[TMP17:%.*]] = or <64 x i8> [[TMP8]], [[TMP7]]
+; CHECK-NEXT:    [[TMP18:%.*]] = shufflevector <64 x i8> [[TMP17]], <64 x i8> poison, <32 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30, i32 32, i32 34, i32 36, i32 38, i32 40, i32 42, i32 44, i32 46, i32 48, i32 50, i32 52, i32 54, i32 56, i32 58, i32 60, i32 62>
+; CHECK-NEXT:    [[TMP19:%.*]] = shufflevector <64 x i8> [[TMP17]], <64 x i8> poison, <32 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31, i32 33, i32 35, i32 37, i32 39, i32 41, i32 43, i32 45, i32 47, i32 49, i32 51, i32 53, i32 55, i32 57, i32 59, i32 61, i32 63>
+; CHECK-NEXT:    [[TMP20:%.*]] = or <32 x i8> [[TMP18]], [[TMP19]]
+; CHECK-NEXT:    [[TMP21:%.*]] = zext <32 x i8> [[TMP20]] to <32 x i16>
+; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8> [[X0]], <64 x i8> [[X1]])
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i32 [[TMP3]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i32 [[X3:%.*]] to <32 x i1>
-; CHECK-NEXT:    [[TMP12:%.*]] = select <32 x i1> [[TMP11]], <32 x i16> zeroinitializer, <32 x i16> [[TMP4]]
+; CHECK-NEXT:    [[TMP12:%.*]] = select <32 x i1> [[TMP11]], <32 x i16> [[TMP21]], <32 x i16> [[TMP4]]
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor <32 x i16> [[TMP9]], [[X2:%.*]]
-; CHECK-NEXT:    [[TMP14:%.*]] = or <32 x i16> [[TMP13]], zeroinitializer
+; CHECK-NEXT:    [[TMP14:%.*]] = or <32 x i16> [[TMP13]], [[TMP21]]
 ; CHECK-NEXT:    [[TMP15:%.*]] = or <32 x i16> [[TMP14]], [[TMP4]]
 ; CHECK-NEXT:    [[_MSPROP_SELECT:%.*]] = select <32 x i1> [[TMP10]], <32 x i16> [[TMP15]], <32 x i16> [[TMP12]]
 ; CHECK-NEXT:    [[TMP16:%.*]] = select <32 x i1> [[TMP11]], <32 x i16> [[TMP9]], <32 x i16> [[X2]]
@@ -4988,18 +4984,17 @@ define <16 x i32> @test_int_x86_avx512_pmaddw_d_512(<32 x i16> %x0, <32 x i16> %
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <32 x i16>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <32 x i16>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 64) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <32 x i16> [[TMP1]] to i512
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP3]], 0
-; CHECK-NEXT:    [[TMP4:%.*]] = bitcast <32 x i16> [[TMP2]] to i512
-; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP4]], 0
-; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
-; CHECK:       6:
-; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x i32> @llvm.x86.avx512.pmaddw.d.512(<32 x i16> [[X0:%.*]], <32 x i16> [[X1:%.*]])
-; CHECK-NEXT:    store <16 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT:    [[TMP3:%.*]] = and <32 x i16> [[TMP1]], [[TMP2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = and <32 x i16> [[X0:%.*]], [[TMP2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and <32 x i16> [[TMP1]], [[X1:%.*]]
+; CHECK-NEXT:    [[TMP6:%.*]] = or <32 x i16> [[TMP3]], [[TMP4]]
+; CHECK-NEXT:    [[TMP12:%.*]] = or <32 x i16> [[TMP6]], [[TMP5]]
+; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <32 x i16> [[TMP12]], <32 x i16> poison, <16 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <32 x i16> [[TMP12]], <32 x i16> poison, <16 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31>
+; CHECK-NEXT:    [[TMP10:%.*]] = or <16 x i16> [[TMP8]], [[TMP9]]
+; CHECK-NEXT:    [[TMP11:%.*]] = zext <16 x i16> [[TMP10]] to <16 x i32>
+; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x i32> @llvm.x86.avx512.pmaddw.d.512(<32 x i16> [[X0]], <32 x i16> [[X1]])
+; CHECK-NEXT:    store <16 x i32> [[TMP11]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP7]]
 ;
   %res = call <16 x i32> @llvm.x86.avx512.mask.pmaddw.d.512(<32 x i16> %x0, <32 x i16> %x1, <16 x i32> %x2, i16 -1)
@@ -5013,22 +5008,21 @@ define <16 x i32> @test_int_x86_avx512_mask_pmaddw_d_512(<32 x i16> %x0, <32 x i
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i16, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 192) to ptr), align 8
 ; CHECK-NEXT:    [[TMP4:%.*]] = load <16 x i32>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 128) to ptr), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <32 x i16> [[TMP1]] to i512
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP5]], 0
-; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <32 x i16> [[TMP2]] to i512
-; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP6]], 0
-; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
-; CHECK:       8:
-; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i32> @llvm.x86.avx512.pmaddw.d.512(<32 x i16> [[X0:%.*]], <32 x i16> [[X1:%.*...
[truncated]

@thurstond thurstond closed this Aug 10, 2025
@thurstond
Contributor Author

The instrumentation does have false positives. Closing for now.

@thurstond thurstond reopened this Aug 11, 2025

github-actions bot commented Aug 11, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

thurstond added a commit to thurstond/llvm-project that referenced this pull request Aug 11, 2025
The functionality is used by two helper functions, and will be used
even more in the future (e.g., llvm#152941).
thurstond added a commit that referenced this pull request Aug 11, 2025
The functionality is used by two helper functions, and will be used even more in the future (e.g.,
#152941).
@thurstond thurstond marked this pull request as draft August 12, 2025 00:08
@thurstond thurstond changed the title [msan] Improve packed multiply-add instrumentation [msan] Improve packed multiply-add instrumentation, and apply to AVX512 Aug 12, 2025
@thurstond thurstond marked this pull request as ready for review August 12, 2025 00:20
@thurstond thurstond changed the title [msan] Improve packed multiply-add instrumentation, and apply to AVX512 [msan] Improve packed multiply-add instrumentation Aug 12, 2025
thurstond added a commit to thurstond/llvm-project that referenced this pull request Aug 12, 2025
The tests largely cover AVX-VNNI (Vector Neural Network Instructions):
- vpdpbusd, vpdpbusds
- vpdpwssd, vpdpwssds

AVX-VNNI-INT8:
- vpdpbssd, vpdpbssds
- vpdpbsud, vpdpbsuds
- vpdpbuud, vpdpbuuds

AVX-VNNI-INT16:
- vpdpwsud, vpdpwsuds
- vpdpwusd, vpdpwusds
- vpdpwuud, vpdpwuuds

These instructions are currently heuristically handled (by OR'ing together
the vectors). This is incorrect because:
1) multiplication by a zero should result in an initialized value
2) the addition is horizontal (within vectors, not "vertically" between vectors).

Future work can improve the instrumentation by applying the updated
handleVectorPmaddIntrinsic() from llvm#152941
thurstond added a commit that referenced this pull request Aug 12, 2025
The tests largely cover AVX-VNNI (Vector Neural Network Instructions):
- vpdpbusd, vpdpbusds
- vpdpwssd, vpdpwssds

AVX-VNNI-INT8:
- vpdpbssd, vpdpbssds
- vpdpbsud, vpdpbsuds
- vpdpbuud, vpdpbuuds

AVX-VNNI-INT16:
- vpdpwsud, vpdpwsuds
- vpdpwusd, vpdpwusds
- vpdpwuud, vpdpwuuds

These instructions are currently heuristically handled (by OR'ing
together the vectors). This is incorrect because:
1) multiplication by a zero should result in an initialized value
2) the addition is horizontal (within vectors, not "vertically" between vectors).

Future work can improve the instrumentation by applying the updated
handleVectorPmaddIntrinsic() from
#152941
llvm-sync bot pushed a commit to arm/arm-toolchain that referenced this pull request Aug 12, 2025
The tests largely cover AVX-VNNI (Vector Neural Network Instructions):
- vpdpbusd, vpdpbusds
- vpdpwssd, vpdpwssds

AVX-VNNI-INT8:
- vpdpbssd, vpdpbssds
- vpdpbsud, vpdpbsuds
- vpdpbuud, vpdpbuuds

AVX-VNNI-INT16:
- vpdpwsud, vpdpwsuds
- vpdpwusd, vpdpwusds
- vpdpwuud, vpdpwuuds

These instructions are currently heuristically handled (by OR'ing
together the vectors). This is incorrect because:
1) multiplication by a zero should result in an initialized value
2) the addition is horizontal (within vectors, not "vertically" between vectors).

Future work can improve the instrumentation by applying the updated
handleVectorPmaddIntrinsic() from
llvm/llvm-project#152941
ReturnType->getPrimitiveSizeInBits());

// Step 1: instrument multiplication of corresponding vector elements
Value *S1 = getShadow(&I, 0);
Contributor

So for I.arg_size() == 3, S1 is not the shadow of V1. Is that intentional? If it is, that is quite confusing.

Contributor Author

Good catch, that was a bug.

I've removed the 3-variable case and will put it in a separate pull request, since the code as-is didn't have test coverage.

I also renamed the variables to Va/Vb/Sa/Sb and assigned them closer together, to avoid repeating the bug in the future pull request.


if (EltSizeInBits) {
  if (I.arg_size() != 3)
    ReturnType = cast<FixedVectorType>(
Contributor

It's kind of confusing to reassign the ReturnType, as that seems an unchangeable fact about a function.

Contributor Author

Removed

@thurstond thurstond requested a review from fmayer August 13, 2025 01:19
@fmayer
Contributor

fmayer commented Aug 13, 2025

Please fix formatting

@thurstond
Contributor Author

Failing CI test appears to be unrelated:

  Unresolved Tests (1):
    lldb-api :: functionalities/statusline/TestStatusline.py

@thurstond thurstond merged commit ba603b5 into llvm:main Aug 13, 2025
8 of 9 checks passed
thurstond added a commit to thurstond/llvm-project that referenced this pull request Aug 13, 2025
This applies the pmadd handler (recently improved in
llvm#152941) to the AVX512
equivalents of the pmaddw and pmaddubs intrinsics:
  <16 x i32> @llvm.x86.avx512.pmaddw.d.512(<32 x i16>, <32 x i16>)
  <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8>, <64 x i8>)
thurstond added a commit that referenced this pull request Aug 13, 2025
llvm-sync bot pushed a commit to arm/arm-toolchain that referenced this pull request Aug 13, 2025
thurstond added a commit to thurstond/llvm-project that referenced this pull request Aug 16, 2025
This applies the pmadd handler (recently improved in
llvm#152941) to the AVX512
equivalents of the pmaddw and pmaddubs intrinsics:
  <16 x i32> @llvm.x86.avx512.pmaddw.d.512(<32 x i16>, <32 x i16>)
  <32 x i16> @llvm.x86.avx512.pmaddubs.w.512(<64 x i8>, <64 x i8>)