
[AMDGPU][SIPreEmitPeephole] mustRetainExeczBranch: estimate ThenBlock cost using MachineTraceMetrics #111117

Open: wants to merge 1 commit into main

Conversation

jmmartinez
Contributor

This is a draft to compare what would happen if SIPreEmitPeephole::mustRetainExeczBranch used MachineTraceMetrics to compute the ThenBlock cost.

Ignore all but the last commit. The main PR is #109818.

I still have to fix BranchCost to account for the branch-taken vs. not-taken cost; that should make the transformation less aggressive.

The transformation becomes more aggressive when handling memory accesses. However, I'm concerned about blocks containing instructions I'm not familiar with, such as global_atomic_add_f32 or image_sample.
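
For reference, the check added in mustRetainExeczBranch (see the diff below) boils down to the following comparison, with p = N/D the branch probability queried on the edge from the head block to the 'then' block, ThenCost the trace-based cost of the 'then' block, and BranchCost the latency of s_cbranch_execz:

% Expected cost per execution of the head block, as modelled in the patch:
%   branch removed: the 'then' block always executes;
%   branch kept:    s_cbranch_execz always executes and the 'then' block
%                   executes with probability p = N/D.
\[
\mathrm{Cost_{removed}} = \mathrm{ThenCost}, \qquad
\mathrm{Cost_{kept}} = \mathrm{BranchCost} + p \cdot \mathrm{ThenCost}
\]
% Removing the branch is profitable when:
\[
\mathrm{ThenCost} \le \mathrm{BranchCost} + p \cdot \mathrm{ThenCost}
\iff (1-p)\,\mathrm{ThenCost} \le \mathrm{BranchCost}
\iff (D-N) \cdot \mathrm{ThenCost} \le D \cdot \mathrm{BranchCost}
\]

The last form is the integer comparison the code evaluates, `(Denominator - Numerator) * ThenCyclesCost <= Denominator * BranchCost`, which avoids the division by D.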

@llvmbot
Member

llvmbot commented Oct 4, 2024

@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-llvm-globalisel

Author: Juan Manuel Martinez Caamaño (jmmartinez)


Patch is 111.58 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/111117.diff

35 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIPreEmitPeephole.cpp (+65-25)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.ll (+4-8)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/vni8-across-blocks.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-demote-scc-branches.ll (+39-38)
  • (modified) llvm/test/CodeGen/AMDGPU/atomic_optimizations_pixelshader.ll (+11-22)
  • (modified) llvm/test/CodeGen/AMDGPU/atomicrmw-expand.ll (+3-6)
  • (modified) llvm/test/CodeGen/AMDGPU/branch-condition-and.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/cgp-addressing-modes-flat.ll (+24-48)
  • (modified) llvm/test/CodeGen/AMDGPU/collapse-endcf.ll (+4-8)
  • (modified) llvm/test/CodeGen/AMDGPU/else.ll (-1)
  • (modified) llvm/test/CodeGen/AMDGPU/fptoi.i128.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/i1-copy-from-loop.ll (+2-1)
  • (renamed) llvm/test/CodeGen/AMDGPU/insert-handle-flat-vmem-ds.mir (+7-13)
  • (modified) llvm/test/CodeGen/AMDGPU/insert-skips-gfx10.mir (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/insert-skips-gfx12.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/insert-skips-gws.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/insert-skips-ignored-insts.mir (+26-4)
  • (modified) llvm/test/CodeGen/AMDGPU/llc-pipeline.ll (+8)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.reduce.umax.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.reduce.umin.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.softwqm.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/local-atomicrmw-fadd.ll (+21-44)
  • (modified) llvm/test/CodeGen/AMDGPU/machine-sink-temporal-divergence-swdev407790.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/remove-short-exec-branches-gpr-idx-mode.mir (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/remove-short-exec-branches-special-instructions.mir (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/ret_jump.ll (-1)
  • (modified) llvm/test/CodeGen/AMDGPU/scheduler-rp-calc-one-successor-two-predecessors-bug.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/set-inactive-wwm-overwrite.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/si-unify-exit-return-unreachable.ll (+9-4)
  • (modified) llvm/test/CodeGen/AMDGPU/skip-branch-taildup-ret.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/uniform-phi-with-undef.ll (+1-2)
  • (modified) llvm/test/CodeGen/AMDGPU/vni8-across-blocks.ll (+6-12)
  • (modified) llvm/test/CodeGen/AMDGPU/wqm.ll (+2-4)
diff --git a/llvm/lib/Target/AMDGPU/SIPreEmitPeephole.cpp b/llvm/lib/Target/AMDGPU/SIPreEmitPeephole.cpp
index 1334029544f999..2052b11ea1486e 100644
--- a/llvm/lib/Target/AMDGPU/SIPreEmitPeephole.cpp
+++ b/llvm/lib/Target/AMDGPU/SIPreEmitPeephole.cpp
@@ -15,25 +15,23 @@
 #include "GCNSubtarget.h"
 #include "MCTargetDesc/AMDGPUMCTargetDesc.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineTraceMetrics.h"
+#include "llvm/CodeGen/TargetSchedule.h"
+#include "llvm/InitializePasses.h"
+#include "llvm/Support/BranchProbability.h"
 
 using namespace llvm;
 
 #define DEBUG_TYPE "si-pre-emit-peephole"
 
-static unsigned SkipThreshold;
-
-static cl::opt<unsigned, true> SkipThresholdFlag(
-    "amdgpu-skip-threshold", cl::Hidden,
-    cl::desc(
-        "Number of instructions before jumping over divergent control flow"),
-    cl::location(SkipThreshold), cl::init(12));
-
 namespace {
 
 class SIPreEmitPeephole : public MachineFunctionPass {
 private:
   const SIInstrInfo *TII = nullptr;
   const SIRegisterInfo *TRI = nullptr;
+  MachineTraceMetrics *Traces = nullptr;
+  MachineTraceMetrics::Ensemble *MinInstr;
 
   bool optimizeVccBranch(MachineInstr &MI) const;
   bool optimizeSetGPR(MachineInstr &First, MachineInstr &MI) const;
@@ -41,10 +39,16 @@ class SIPreEmitPeephole : public MachineFunctionPass {
                             MachineBasicBlock *&TrueMBB,
                             MachineBasicBlock *&FalseMBB,
                             SmallVectorImpl<MachineOperand> &Cond);
-  bool mustRetainExeczBranch(const MachineBasicBlock &From,
-                             const MachineBasicBlock &To) const;
+  bool mustRetainExeczBranch(const MachineBasicBlock &Head,
+                             const MachineBasicBlock &From,
+                             const MachineBasicBlock &To);
   bool removeExeczBranch(MachineInstr &MI, MachineBasicBlock &SrcMBB);
 
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<MachineTraceMetrics>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
+
 public:
   static char ID;
 
@@ -57,8 +61,11 @@ class SIPreEmitPeephole : public MachineFunctionPass {
 
 } // End anonymous namespace.
 
-INITIALIZE_PASS(SIPreEmitPeephole, DEBUG_TYPE,
-                "SI peephole optimizations", false, false)
+INITIALIZE_PASS_BEGIN(SIPreEmitPeephole, DEBUG_TYPE,
+                      "SI peephole optimizations", false, false)
+INITIALIZE_PASS_DEPENDENCY(MachineTraceMetrics)
+INITIALIZE_PASS_END(SIPreEmitPeephole, DEBUG_TYPE, "SI peephole optimizations",
+                    false, false)
 
 char SIPreEmitPeephole::ID = 0;
 
@@ -304,11 +311,20 @@ bool SIPreEmitPeephole::getBlockDestinations(
   return true;
 }
 
-bool SIPreEmitPeephole::mustRetainExeczBranch(
-    const MachineBasicBlock &From, const MachineBasicBlock &To) const {
-  unsigned NumInstr = 0;
-  const MachineFunction *MF = From.getParent();
+bool SIPreEmitPeephole::mustRetainExeczBranch(const MachineBasicBlock &Head,
+                                              const MachineBasicBlock &From,
+                                              const MachineBasicBlock &To) {
 
+  const auto *FromIt = find(Head.successors(), &From);
+  assert(FromIt != Head.succ_end());
+
+  auto BranchProb = Head.getSuccProbability(FromIt);
+  assert(!BranchProb.isUnknown());
+  unsigned BranchCost = TII->getSchedModel().computeInstrLatency(
+      &*Head.getFirstTerminator(), false);
+
+  unsigned ThenCyclesCost = 0;
+  const MachineFunction *MF = From.getParent();
   for (MachineFunction::const_iterator MBBI(&From), ToI(&To), End = MF->end();
        MBBI != End && MBBI != ToI; ++MBBI) {
     const MachineBasicBlock &MBB = *MBBI;
@@ -326,15 +342,30 @@ bool SIPreEmitPeephole::mustRetainExeczBranch(
       if (TII->hasUnwantedEffectsWhenEXECEmpty(MI))
         return true;
 
-      // These instructions are potentially expensive even if EXEC = 0.
-      if (TII->isSMRD(MI) || TII->isVMEM(MI) || TII->isFLAT(MI) ||
-          TII->isDS(MI) || TII->isWaitcnt(MI.getOpcode()))
-        return true;
-
-      ++NumInstr;
-      if (NumInstr >= SkipThreshold)
+      if (TII->isWaitcnt(MI.getOpcode()))
         return true;
     }
+
+    if (!MinInstr)
+      MinInstr = Traces->getEnsemble(MachineTraceStrategy::TS_Local);
+
+    auto Trace = MinInstr->getTrace(&From);
+    ThenCyclesCost +=
+        std::max(Trace.getCriticalPath(), Trace.getResourceDepth(true));
+
+    // Consider `P = N/D` to be the probability of execz being true
+    // The transformation is profitable if always executing the 'then' block
+    // is cheaper than executing sometimes 'then' and always
+    // executing s_cbranch_execz:
+    // * ThenCost <= P*ThenCost + BranchCost
+    // * (1-P) * ThenCost <= BranchCost
+    // * (D-N)/D * ThenCost <= BranchCost
+    uint64_t Numerator = BranchProb.getNumerator();
+    uint64_t Denominator = BranchProb.getDenominator();
+    bool IsProfitable =
+        (Denominator - Numerator) * ThenCyclesCost <= Denominator * BranchCost;
+    if (!IsProfitable)
+      return true;
   }
 
   return false;
@@ -343,6 +374,10 @@ bool SIPreEmitPeephole::mustRetainExeczBranch(
 // Returns true if the skip branch instruction is removed.
 bool SIPreEmitPeephole::removeExeczBranch(MachineInstr &MI,
                                           MachineBasicBlock &SrcMBB) {
+
+  if (!TII->getSchedModel().hasInstrSchedModelOrItineraries())
+    return false;
+
   MachineBasicBlock *TrueMBB = nullptr;
   MachineBasicBlock *FalseMBB = nullptr;
   SmallVector<MachineOperand, 1> Cond;
@@ -351,8 +386,11 @@ bool SIPreEmitPeephole::removeExeczBranch(MachineInstr &MI,
     return false;
 
   // Consider only the forward branches.
-  if ((SrcMBB.getNumber() >= TrueMBB->getNumber()) ||
-      mustRetainExeczBranch(*FalseMBB, *TrueMBB))
+  if (SrcMBB.getNumber() >= TrueMBB->getNumber())
+    return false;
+
+  // Consider only when it is legal and profitable
+  if (mustRetainExeczBranch(SrcMBB, *FalseMBB, *TrueMBB))
     return false;
 
   LLVM_DEBUG(dbgs() << "Removing the execz branch: " << MI);
@@ -366,6 +404,8 @@ bool SIPreEmitPeephole::runOnMachineFunction(MachineFunction &MF) {
   const GCNSubtarget &ST = MF.getSubtarget<GCNSubtarget>();
   TII = ST.getInstrInfo();
   TRI = &TII->getRegisterInfo();
+  Traces = &getAnalysis<MachineTraceMetrics>();
+  MinInstr = nullptr;
   bool Changed = false;
 
   MF.RenumberBlocks();
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.ll
index b27d8fdc24ff73..e08d4f0da86b88 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.ll
@@ -249,11 +249,10 @@ define void @divergent_i1_xor_used_outside_loop_larger_loop_body(i32 %num.elts,
 ; GFX10-NEXT:  .LBB3_6: ; %Flow1
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX10-NEXT:    s_and_saveexec_b32 s4, s6
-; GFX10-NEXT:    s_cbranch_execz .LBB3_8
 ; GFX10-NEXT:  ; %bb.7: ; %block.after.loop
 ; GFX10-NEXT:    v_mov_b32_e32 v0, 5
 ; GFX10-NEXT:    flat_store_dword v[3:4], v0
-; GFX10-NEXT:  .LBB3_8: ; %exit
+; GFX10-NEXT:  ; %bb.8: ; %exit
 ; GFX10-NEXT:    s_waitcnt_depctr 0xffe3
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX10-NEXT:    s_waitcnt lgkmcnt(0)
@@ -315,7 +314,6 @@ define void @divergent_i1_icmp_used_outside_loop(i32 %v0, i32 %v1, ptr addrspace
 ; GFX10-NEXT:    v_mov_b32_e32 v4, v5
 ; GFX10-NEXT:    v_cmp_eq_u32_e32 vcc_lo, v0, v4
 ; GFX10-NEXT:    s_and_saveexec_b32 s7, vcc_lo
-; GFX10-NEXT:    s_cbranch_execz .LBB4_4
 ; GFX10-NEXT:  ; %bb.3: ; %if.block.0
 ; GFX10-NEXT:    ; in Loop: Header=BB4_2 Depth=1
 ; GFX10-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
@@ -323,7 +321,7 @@ define void @divergent_i1_icmp_used_outside_loop(i32 %v0, i32 %v1, ptr addrspace
 ; GFX10-NEXT:    v_add_co_u32 v8, s4, v2, v8
 ; GFX10-NEXT:    v_add_co_ci_u32_e64 v9, s4, v3, v9, s4
 ; GFX10-NEXT:    global_store_dword v[8:9], v4, off
-; GFX10-NEXT:  .LBB4_4: ; %loop.break.block
+; GFX10-NEXT:  ; %bb.4: ; %loop.break.block
 ; GFX10-NEXT:    ; in Loop: Header=BB4_2 Depth=1
 ; GFX10-NEXT:    s_waitcnt_depctr 0xffe3
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s7
@@ -342,10 +340,9 @@ define void @divergent_i1_icmp_used_outside_loop(i32 %v0, i32 %v1, ptr addrspace
 ; GFX10-NEXT:  .LBB4_6: ; %cond.block.1
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s5
 ; GFX10-NEXT:    s_and_saveexec_b32 s4, s6
-; GFX10-NEXT:    s_cbranch_execz .LBB4_8
 ; GFX10-NEXT:  ; %bb.7: ; %if.block.1
 ; GFX10-NEXT:    global_store_dword v[6:7], v4, off
-; GFX10-NEXT:  .LBB4_8: ; %exit
+; GFX10-NEXT:  ; %bb.8: ; %exit
 ; GFX10-NEXT:    s_waitcnt_depctr 0xffe3
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
@@ -536,11 +533,10 @@ define amdgpu_cs void @loop_with_1break(ptr addrspace(1) %x, ptr addrspace(1) %a
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s0
 ; GFX10-NEXT:    s_and_saveexec_b32 s0, s1
 ; GFX10-NEXT:    s_xor_b32 s0, exec_lo, s0
-; GFX10-NEXT:    s_cbranch_execz .LBB6_6
 ; GFX10-NEXT:  ; %bb.5: ; %break.body
 ; GFX10-NEXT:    v_mov_b32_e32 v0, 10
 ; GFX10-NEXT:    global_store_dword v[4:5], v0, off
-; GFX10-NEXT:  .LBB6_6: ; %exit
+; GFX10-NEXT:  ; %bb.6: ; %exit
 ; GFX10-NEXT:    s_endpgm
 entry:
   br label %A
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.ll
index 1698f84eea5185..50af8cb739e48c 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.ll
@@ -437,11 +437,10 @@ define amdgpu_cs void @loop_with_div_break_with_body(ptr addrspace(1) %x, ptr ad
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s0
 ; GFX10-NEXT:    s_and_saveexec_b32 s0, s1
 ; GFX10-NEXT:    s_xor_b32 s0, exec_lo, s0
-; GFX10-NEXT:    s_cbranch_execz .LBB5_6
 ; GFX10-NEXT:  ; %bb.5: ; %break.body
 ; GFX10-NEXT:    v_mov_b32_e32 v0, 10
 ; GFX10-NEXT:    global_store_dword v[4:5], v0, off
-; GFX10-NEXT:  .LBB5_6: ; %exit
+; GFX10-NEXT:  ; %bb.6: ; %exit
 ; GFX10-NEXT:    s_endpgm
 entry:
   br label %A
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.ll
index 1855ede0483def..ea671a7adc7384 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.ll
@@ -152,12 +152,11 @@ define amdgpu_cs void @loop_with_1break(ptr addrspace(1) %x, i32 %x.size, ptr ad
 ; GFX10-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX10-NEXT:    s_and_saveexec_b32 s1, s0
 ; GFX10-NEXT:    s_xor_b32 s1, exec_lo, s1
-; GFX10-NEXT:    s_cbranch_execz .LBB2_7
 ; GFX10-NEXT:  ; %bb.6: ; %break.body
 ; GFX10-NEXT:    v_mov_b32_e32 v0, 10
 ; GFX10-NEXT:    v_mov_b32_e32 v1, 0
 ; GFX10-NEXT:    global_store_dword v1, v0, s[2:3]
-; GFX10-NEXT:  .LBB2_7: ; %exit
+; GFX10-NEXT:  ; %bb.7: ; %exit
 ; GFX10-NEXT:    s_endpgm
 entry:
   br label %A
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/vni8-across-blocks.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/vni8-across-blocks.ll
index 386e34f72ab734..98d064fcd3c7aa 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/vni8-across-blocks.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/vni8-across-blocks.ll
@@ -68,10 +68,9 @@ define amdgpu_kernel void @v4i8_liveout(ptr addrspace(1) %src1, ptr addrspace(1)
 ; GFX906-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX906-NEXT:    global_load_dword v1, v2, s[4:5]
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB1_2
 ; GFX906-NEXT:  ; %bb.1: ; %bb.1
 ; GFX906-NEXT:    global_load_dword v1, v2, s[6:7]
-; GFX906-NEXT:  .LBB1_2: ; %bb.2
+; GFX906-NEXT:  ; %bb.2: ; %bb.2
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:    v_mov_b32_e32 v0, 0
 ; GFX906-NEXT:    s_waitcnt vmcnt(0)
@@ -149,10 +148,9 @@ define amdgpu_kernel void @v8i8_liveout(ptr addrspace(1) %src1, ptr addrspace(1)
 ; GFX906-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX906-NEXT:    global_load_dwordx2 v[1:2], v3, s[4:5]
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB3_2
 ; GFX906-NEXT:  ; %bb.1: ; %bb.1
 ; GFX906-NEXT:    global_load_dwordx2 v[1:2], v3, s[6:7]
-; GFX906-NEXT:  .LBB3_2: ; %bb.2
+; GFX906-NEXT:  ; %bb.2: ; %bb.2
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:    v_mov_b32_e32 v0, 0
 ; GFX906-NEXT:    s_waitcnt vmcnt(0)
@@ -185,10 +183,9 @@ define amdgpu_kernel void @v16i8_liveout(ptr addrspace(1) %src1, ptr addrspace(1
 ; GFX906-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX906-NEXT:    global_load_dwordx4 v[1:4], v5, s[4:5]
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB4_2
 ; GFX906-NEXT:  ; %bb.1: ; %bb.1
 ; GFX906-NEXT:    global_load_dwordx4 v[1:4], v5, s[6:7]
-; GFX906-NEXT:  .LBB4_2: ; %bb.2
+; GFX906-NEXT:  ; %bb.2: ; %bb.2
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:    v_mov_b32_e32 v0, 0
 ; GFX906-NEXT:    s_waitcnt vmcnt(0)
@@ -222,11 +219,10 @@ define amdgpu_kernel void @v32i8_liveout(ptr addrspace(1) %src1, ptr addrspace(1
 ; GFX906-NEXT:    global_load_dwordx4 v[1:4], v9, s[4:5]
 ; GFX906-NEXT:    global_load_dwordx4 v[5:8], v9, s[4:5] offset:16
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB5_2
 ; GFX906-NEXT:  ; %bb.1: ; %bb.1
 ; GFX906-NEXT:    global_load_dwordx4 v[1:4], v9, s[6:7]
 ; GFX906-NEXT:    global_load_dwordx4 v[5:8], v9, s[6:7] offset:16
-; GFX906-NEXT:  .LBB5_2: ; %bb.2
+; GFX906-NEXT:  ; %bb.2: ; %bb.2
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:    v_mov_b32_e32 v0, 0
 ; GFX906-NEXT:    s_waitcnt vmcnt(1)
@@ -486,14 +482,13 @@ define amdgpu_kernel void @v8i8_phi_chain(ptr addrspace(1) %src1, ptr addrspace(
 ; GFX906-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX906-NEXT:    global_load_dwordx2 v[1:2], v3, s[4:5]
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB8_2
 ; GFX906-NEXT:  ; %bb.1: ; %bb.1
 ; GFX906-NEXT:    global_load_dwordx2 v[1:2], v3, s[6:7]
 ; GFX906-NEXT:    v_cmp_gt_u32_e32 vcc, 7, v0
 ; GFX906-NEXT:    s_andn2_b64 s[0:1], s[0:1], exec
 ; GFX906-NEXT:    s_and_b64 s[4:5], exec, vcc
 ; GFX906-NEXT:    s_or_b64 s[0:1], s[0:1], s[4:5]
-; GFX906-NEXT:  .LBB8_2: ; %Flow
+; GFX906-NEXT:  ; %bb.2: ; %Flow
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], s[0:1]
 ; GFX906-NEXT:    s_cbranch_execz .LBB8_4
@@ -547,11 +542,10 @@ define amdgpu_kernel void @v8i8_multi_block(ptr addrspace(1) %src1, ptr addrspac
 ; GFX906-NEXT:    global_load_dwordx2 v[1:2], v5, s[6:7]
 ; GFX906-NEXT:    v_cmp_gt_u32_e32 vcc, 7, v0
 ; GFX906-NEXT:    s_and_saveexec_b64 s[2:3], vcc
-; GFX906-NEXT:    s_cbranch_execz .LBB9_3
 ; GFX906-NEXT:  ; %bb.2: ; %bb.2
 ; GFX906-NEXT:    v_mov_b32_e32 v0, 0
 ; GFX906-NEXT:    global_store_dwordx2 v0, v[3:4], s[8:9]
-; GFX906-NEXT:  .LBB9_3: ; %Flow
+; GFX906-NEXT:  ; %bb.3: ; %Flow
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[2:3]
 ; GFX906-NEXT:  .LBB9_4: ; %bb.3
 ; GFX906-NEXT:    s_or_b64 exec, exec, s[0:1]
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-demote-scc-branches.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-demote-scc-branches.ll
index c293891140008d..3bee004bf17c37 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-demote-scc-branches.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-demote-scc-branches.ll
@@ -55,8 +55,11 @@ define void @uniform_br_unprofitable(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX9:       ; %bb.0: ; %entry
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    s_cmp_lt_i32 s21, 1
-; GFX9-NEXT:    s_cbranch_scc1 .LBB1_2
-; GFX9-NEXT:  ; %bb.1: ; %if.then
+; GFX9-NEXT:    s_cbranch_scc0 .LBB1_2
+; GFX9-NEXT:  ; %bb.1: ; %if.end
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+; GFX9-NEXT:  .LBB1_2: ; %if.then
 ; GFX9-NEXT:    s_mov_b32 s11, s18
 ; GFX9-NEXT:    s_mov_b32 s10, s17
 ; GFX9-NEXT:    s_mov_b32 s9, s16
@@ -64,7 +67,6 @@ define void @uniform_br_unprofitable(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, s6
 ; GFX9-NEXT:    v_mov_b32_e32 v1, s19
 ; GFX9-NEXT:    buffer_store_dword v0, v1, s[8:11], 0 offen
-; GFX9-NEXT:  .LBB1_2: ; %if.end
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
@@ -72,8 +74,11 @@ define void @uniform_br_unprofitable(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX10:       ; %bb.0: ; %entry
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    s_cmp_lt_i32 s21, 1
-; GFX10-NEXT:    s_cbranch_scc1 .LBB1_2
-; GFX10-NEXT:  ; %bb.1: ; %if.then
+; GFX10-NEXT:    s_cbranch_scc0 .LBB1_2
+; GFX10-NEXT:  ; %bb.1: ; %if.end
+; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT:    s_setpc_b64 s[30:31]
+; GFX10-NEXT:  .LBB1_2: ; %if.then
 ; GFX10-NEXT:    v_mov_b32_e32 v0, s6
 ; GFX10-NEXT:    v_mov_b32_e32 v1, s19
 ; GFX10-NEXT:    s_mov_b32 s11, s18
@@ -81,7 +86,6 @@ define void @uniform_br_unprofitable(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX10-NEXT:    s_mov_b32 s9, s16
 ; GFX10-NEXT:    s_mov_b32 s8, s7
 ; GFX10-NEXT:    buffer_store_dword v0, v1, s[8:11], 0 offen
-; GFX10-NEXT:  .LBB1_2: ; %if.end
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 entry:
@@ -150,7 +154,6 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    v_cmp_lt_i32_e32 vcc, 0, v0
 ; GFX9-NEXT:    s_and_saveexec_b64 s[4:5], vcc
-; GFX9-NEXT:    s_cbranch_execz .LBB3_2
 ; GFX9-NEXT:  ; %bb.1: ; %if.then
 ; GFX9-NEXT:    s_mov_b32 s11, s18
 ; GFX9-NEXT:    s_mov_b32 s10, s17
@@ -159,7 +162,7 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, s6
 ; GFX9-NEXT:    v_mov_b32_e32 v1, s19
 ; GFX9-NEXT:    buffer_store_dword v0, v1, s[8:11], 0 offen
-; GFX9-NEXT:  .LBB3_2: ; %if.end
+; GFX9-NEXT:  ; %bb.2: ; %if.end
 ; GFX9-NEXT:    s_or_b64 exec, exec, s[4:5]
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
@@ -169,7 +172,6 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX1010-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX1010-NEXT:    v_cmp_lt_i32_e32 vcc_lo, 0, v0
 ; GFX1010-NEXT:    s_and_saveexec_b32 s4, vcc_lo
-; GFX1010-NEXT:    s_cbranch_execz .LBB3_2
 ; GFX1010-NEXT:  ; %bb.1: ; %if.then
 ; GFX1010-NEXT:    v_mov_b32_e32 v0, s6
 ; GFX1010-NEXT:    v_mov_b32_e32 v1, s19
@@ -178,7 +180,7 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX1010-NEXT:    s_mov_b32 s9, s16
 ; GFX1010-NEXT:    s_mov_b32 s8, s7
 ; GFX1010-NEXT:    buffer_store_dword v0, v1, s[8:11], 0 offen
-; GFX1010-NEXT:  .LBB3_2: ; %if.end
+; GFX1010-NEXT:  ; %bb.2: ; %if.end
 ; GFX1010-NEXT:    s_waitcnt_depctr 0xffe3
 ; GFX1010-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX1010-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
@@ -189,7 +191,6 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX1030-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX1030-NEXT:    s_mov_b32 s4, exec_lo
 ; GFX1030-NEXT:    v_cmpx_lt_i32_e32 0, v0
-; GFX1030-NEXT:    s_cbranch_execz .LBB3_2
 ; GFX1030-NEXT:  ; %bb.1: ; %if.then
 ; GFX1030-NEXT:    v_mov_b32_e32 v0, s6
 ; GFX1030-NEXT:    v_mov_b32_e32 v1, s19
@@ -198,7 +199,7 @@ define void @divergent_br_no_metadata(i32 noundef inreg %value, ptr addrspace(8)
 ; GFX1030-NEXT:    s_mov_b32 s9, s16
 ; GFX1030-NEXT:    s_mov_b32 s8, s7
 ; GFX1030-NEXT:    buffer_store_dword v0, v1, s[8:11], 0 offen
-; GFX1030-NEXT:  .LBB3_2: ; %if.end
+; GFX1030-NEXT:  ; %bb.2: ; %if.end
 ; GFX1030-NEXT:    s_or_b32 exec_lo, exec_lo, s4
 ; GFX1030-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX1030-NEXT:    s_setpc_b64 s[30:31]
@@ -221,8 +222,12 @@ define void @divergent_br_unprofitable(i32 noundef inreg %value, ptr addrspace(8
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    v_cmp_lt_i32_e32 vcc, 0, v0
 ; GFX9-NEXT:    s_and_saveexec_b64 s[4:5], vcc
-; GFX9-NEXT:    s_cbranch_ex...
[truncated]

@jmmartinez jmmartinez force-pushed the amd/if-cvt-v2-machine-trace branch 2 times, most recently from 10640a2 to eacad95 on October 4, 2024 at 11:22
Comment on lines 34 to 35
MachineTraceMetrics *Traces = nullptr;
MachineTraceMetrics::Ensemble *MinInstr;
Contributor


Comment these?
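
One possibility, purely as a sketch (the wording just describes how the draft currently uses these members; the exact comments are up to the author):

// Trace metrics analysis for the current function, fetched in
// runOnMachineFunction; used to price the 'then' block of an execz branch.
MachineTraceMetrics *Traces = nullptr;
// Trace ensemble built lazily (TS_Local strategy) in mustRetainExeczBranch;
// reset to nullptr for each function in runOnMachineFunction.
MachineTraceMetrics::Ensemble *MinInstr;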

@jmmartinez jmmartinez force-pushed the amd/if-cvt-v2-machine-trace branch from eacad95 to dccd9fe on October 10, 2024 at 11:49
@@ -13,7 +13,6 @@ define amdgpu_cs void @if_then(ptr addrspace(8) inreg %input, ptr addrspace(8) i
; GCN-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GCN-NEXT: v_cmp_lt_u32_e32 vcc_lo, 3, v0
; GCN-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GCN-NEXT: s_cbranch_execz .LBB0_4
Contributor


I'm surprised this branch is now deleted. I thought the cutoff for profitability would be somewhere around 6 instructions, but this block has 9 plus the branch.
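
For intuition only (the numbers below are made up, not read from any real scheduling model): with the profitability condition quoted earlier, an even 50/50 branch probability and a taken-branch latency of 4 cycles already admit a 'then' cost of up to 8 cycles, and the trace-based ThenCost is a critical-path/resource estimate rather than a straight instruction count, so nine short independent instructions need not exceed it:

% Illustrative numbers only:
\[
p = \tfrac{1}{2},\quad \mathrm{BranchCost} = 4
\;\Rightarrow\;
(1-p)\,\mathrm{ThenCost} \le \mathrm{BranchCost}
\iff \mathrm{ThenCost} \le 8\ \text{cycles}
\]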

Contributor

@arsenm arsenm left a comment


Is this helping any benchmarks?

@jmmartinez
Contributor Author

I'm not 100% confident about this patch at the moment. The transformation gets very aggressive.

I've run a benchmark pipeline (this patch vs. amd-staging; I haven't checked the parent change alone) and got mixed results: several big speedups, but also several big slowdowns.

I didn't have time to investigate further.


uint64_t BranchTakenCost =
    TII->getSchedModel().computeInstrLatency(&Branch, false);
constexpr uint64_t BranchNotTakenCost = 1;
Contributor


I think this needs to be bumped up: it should be at least 4 for < gfx10. IIRC the not-taken branch cost was higher, but I don't remember the exact value.
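
A possible shape for that, as a sketch only: the value 4 for pre-gfx10 comes from this comment, the gfx10+ value of 1 is just carried over from the current draft, and the exact numbers and generation cutoff would need to be checked against the hardware docs.

// Sketch: make the not-taken branch cost subtarget-dependent rather than a
// constexpr 1.
const GCNSubtarget &ST = Branch.getMF()->getSubtarget<GCNSubtarget>();
uint64_t BranchTakenCost =
    TII->getSchedModel().computeInstrLatency(&Branch, false);
uint64_t BranchNotTakenCost =
    ST.getGeneration() < AMDGPUSubtarget::GFX10 ? 4 : 1;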

@jmmartinez
Contributor Author

Tested some variations of this proposal with hecbench and didn't see any significant improvement.

@jmmartinez jmmartinez closed this Feb 11, 2025
@arsenm
Contributor

arsenm commented Feb 11, 2025

> Tested some variations of this proposal with hecbench and didn't see any significant improvement.

We should still do this. A single benchmark isn't everything.

@arsenm arsenm reopened this Feb 11, 2025
@jmmartinez jmmartinez force-pushed the amd/if-cvt-v2-machine-trace branch from dccd9fe to 54796a9 on February 11, 2025 at 15:42