MachineScheduler: Improve instruction clustering #137784
Clustered nodes were previously managed by adding weak edges between neighbouring cluster nodes, forming a kind of ordered queue that is later recorded as `NextClusterPred` or `NextClusterSucc` in `ScheduleDAGMI`.

However, instructions are not necessarily picked in the exact order of that queue. For example, given a queue of cluster nodes A B C, the scheduler might pick node B first; for top-down scheduling it is then very likely that only B and C get clustered, leaving A alone.

Another issue is that

```
if (!ReorderWhileClustering && SUa->NodeNum > SUb->NodeNum)
  std::swap(SUa, SUb);
if (!DAG->addEdge(SUb, SDep(SUa, SDep::Cluster)))
```

may break the cluster queue. Suppose we want to cluster nodes 1 3 2 (in `MemOpRecords` order). Normally 1 (SUa) becomes a pred of 3 (SUb). But for the pair (3, 2), since 3 (SUa) > 2 (SUb), the two nodes are swapped and 2 becomes a pred of 3. Both 1 and 2 are now preds of 3, yet there is no edge between 1 and 2, so the cluster chain is broken.

To fix both issues, this change tracks each cluster as an unordered set (see the sketch below). This also helps clustering in some hard cases.

There are two major reasons for the large number of test check changes:

1. The existing implementation has buggy behavior: the scheduler does not reset the pointer to the next cluster candidate. For example, if we want to cluster A and B but pick node C right after A, we should reset the next cluster candidate, because we have effectively decided not to cluster A and B during scheduling; still picking B later because of `Cluster` is not logical.
2. Because the cluster candidates are no longer ordered, they may be picked in a different order than before.

The most affected targets are AMDGPU, AArch64, and RISCV. For RISCV, most changes look like minor instruction reordering with no obvious regression. For AArch64, some combining of `ldr` into `ldp` is affected, with two cases regressed and two improved; the deeper reason is that the machine scheduler cannot cluster these well either before or after the change, and the later load-combine algorithm is also not smart enough. For AMDGPU, some cases use more `v_dual` instructions while others regress, which seems less critical; the test `v_vselect_v32bf16` gets more `buffer_load` instructions claused.
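As a rough, self-contained illustration (not the patch code: the actual change builds `ClusterInfo` sets on the DAG via `EquivalenceClasses`, while the `Node` struct and `joinCluster` helper below are invented for this example), the following sketch shows how grouping by cluster set keeps nodes 1, 2 and 3 in one cluster regardless of the order in which the pairs (1,3) and (3,2) are recorded:

```
// Standalone sketch only; it mirrors the idea of SUnit::ParentClusterIdx plus
// a per-DAG cluster table, not the real LLVM data structures.
#include <cstdio>
#include <vector>

struct Node {
  unsigned Num;
  unsigned ParentClusterIdx = ~0u; // stands in for InvalidClusterId
};

// Merge the cluster memberships of A and B. With weak Cluster edges alone,
// recording (1,3) and then (3,2) leaves 1 and 2 unrelated; with a set, all
// three nodes end up in one group.
static void joinCluster(std::vector<std::vector<Node *>> &Clusters, Node &A,
                        Node &B) {
  unsigned CA = A.ParentClusterIdx, CB = B.ParentClusterIdx;
  if (CA == ~0u && CB == ~0u) {
    Clusters.push_back({&A, &B});
    A.ParentClusterIdx = B.ParentClusterIdx = Clusters.size() - 1;
  } else if (CA == ~0u) {
    Clusters[CB].push_back(&A);
    A.ParentClusterIdx = CB;
  } else if (CB == ~0u) {
    Clusters[CA].push_back(&B);
    B.ParentClusterIdx = CA;
  } else if (CA != CB) {
    for (Node *N : Clusters[CB]) {
      N->ParentClusterIdx = CA;
      Clusters[CA].push_back(N);
    }
    Clusters[CB].clear();
  }
}

int main() {
  Node N1{1}, N2{2}, N3{3};
  std::vector<std::vector<Node *>> Clusters;
  joinCluster(Clusters, N1, N3); // pair (1,3)
  joinCluster(Clusters, N3, N2); // pair (3,2)
  for (Node *N : Clusters[N1.ParentClusterIdx])
    std::printf("SU(%u) in cluster %u\n", N->Num, N->ParentClusterIdx);
  return 0;
}
```

A scheduling strategy can then ask "is this candidate in the same cluster as the node just scheduled?" with a simple set lookup, which is what the `TopCluster`/`BotCluster` checks in the patch do.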
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-backend-powerpc

Author: Ruiling, Song (ruiling)

Patch is 5.52 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/137784.diff

176 Files Affected:
diff --git a/llvm/include/llvm/CodeGen/MachineScheduler.h b/llvm/include/llvm/CodeGen/MachineScheduler.h
index bc00d0b4ff852..14f3fda90ef6d 100644
--- a/llvm/include/llvm/CodeGen/MachineScheduler.h
+++ b/llvm/include/llvm/CodeGen/MachineScheduler.h
@@ -303,10 +303,6 @@ class ScheduleDAGMI : public ScheduleDAGInstrs {
/// The bottom of the unscheduled zone.
MachineBasicBlock::iterator CurrentBottom;
- /// Record the next node in a scheduled cluster.
- const SUnit *NextClusterPred = nullptr;
- const SUnit *NextClusterSucc = nullptr;
-
#if LLVM_ENABLE_ABI_BREAKING_CHECKS
/// The number of instructions scheduled so far. Used to cut off the
/// scheduler at the point determined by misched-cutoff.
@@ -367,10 +363,6 @@ class ScheduleDAGMI : public ScheduleDAGInstrs {
/// live ranges and region boundary iterators.
void moveInstruction(MachineInstr *MI, MachineBasicBlock::iterator InsertPos);
- const SUnit *getNextClusterPred() const { return NextClusterPred; }
-
- const SUnit *getNextClusterSucc() const { return NextClusterSucc; }
-
void viewGraph(const Twine &Name, const Twine &Title) override;
void viewGraph() override;
@@ -1292,6 +1284,9 @@ class GenericScheduler : public GenericSchedulerBase {
SchedBoundary Top;
SchedBoundary Bot;
+ ClusterInfo *TopCluster;
+ ClusterInfo *BotCluster;
+
/// Candidate last picked from Top boundary.
SchedCandidate TopCand;
/// Candidate last picked from Bot boundary.
@@ -1332,6 +1327,9 @@ class PostGenericScheduler : public GenericSchedulerBase {
/// Candidate last picked from Bot boundary.
SchedCandidate BotCand;
+ ClusterInfo *TopCluster;
+ ClusterInfo *BotCluster;
+
public:
PostGenericScheduler(const MachineSchedContext *C)
: GenericSchedulerBase(C), Top(SchedBoundary::TopQID, "TopQ"),
diff --git a/llvm/include/llvm/CodeGen/ScheduleDAG.h b/llvm/include/llvm/CodeGen/ScheduleDAG.h
index 1c8d92d149adc..a4301d11a4454 100644
--- a/llvm/include/llvm/CodeGen/ScheduleDAG.h
+++ b/llvm/include/llvm/CodeGen/ScheduleDAG.h
@@ -17,6 +17,7 @@
#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/PointerIntPair.h"
+#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/iterator.h"
#include "llvm/CodeGen/MachineInstr.h"
@@ -234,6 +235,10 @@ class TargetRegisterInfo;
void dump(const TargetRegisterInfo *TRI = nullptr) const;
};
+ /// Keep record of which SUnit are in the same cluster group.
+ typedef SmallSet<SUnit *, 8> ClusterInfo;
+ constexpr unsigned InvalidClusterId = ~0u;
+
/// Scheduling unit. This is a node in the scheduling DAG.
class SUnit {
private:
@@ -274,6 +279,8 @@ class TargetRegisterInfo;
unsigned TopReadyCycle = 0; ///< Cycle relative to start when node is ready.
unsigned BotReadyCycle = 0; ///< Cycle relative to end when node is ready.
+ unsigned ParentClusterIdx = InvalidClusterId; ///< The parent cluster id.
+
private:
unsigned Depth = 0; ///< Node depth.
unsigned Height = 0; ///< Node height.
diff --git a/llvm/include/llvm/CodeGen/ScheduleDAGInstrs.h b/llvm/include/llvm/CodeGen/ScheduleDAGInstrs.h
index e79b03c57a1e8..6c6bd8015ee69 100644
--- a/llvm/include/llvm/CodeGen/ScheduleDAGInstrs.h
+++ b/llvm/include/llvm/CodeGen/ScheduleDAGInstrs.h
@@ -180,6 +180,8 @@ namespace llvm {
/// case of a huge region that gets reduced).
SUnit *BarrierChain = nullptr;
+ SmallVector<ClusterInfo> Clusters;
+
public:
/// A list of SUnits, used in Value2SUsMap, during DAG construction.
/// Note: to gain speed it might be worth investigating an optimized
@@ -383,6 +385,14 @@ namespace llvm {
/// equivalent edge already existed (false indicates failure).
bool addEdge(SUnit *SuccSU, const SDep &PredDep);
+ /// Returns the array of the clusters.
+ SmallVector<ClusterInfo> &getClusters() { return Clusters; }
+
+ /// Get the specific cluster, return nullptr for InvalidClusterId.
+ ClusterInfo *getCluster(unsigned Idx) {
+ return Idx != InvalidClusterId ? &Clusters[Idx] : nullptr;
+ }
+
protected:
void initSUnits();
void addPhysRegDataDeps(SUnit *SU, unsigned OperIdx);
diff --git a/llvm/lib/CodeGen/MachineScheduler.cpp b/llvm/lib/CodeGen/MachineScheduler.cpp
index 0c3ffb1bbaa6f..91da22612eac6 100644
--- a/llvm/lib/CodeGen/MachineScheduler.cpp
+++ b/llvm/lib/CodeGen/MachineScheduler.cpp
@@ -15,6 +15,7 @@
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/EquivalenceClasses.h"
#include "llvm/ADT/PriorityQueue.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
@@ -844,8 +845,6 @@ void ScheduleDAGMI::releaseSucc(SUnit *SU, SDep *SuccEdge) {
if (SuccEdge->isWeak()) {
--SuccSU->WeakPredsLeft;
- if (SuccEdge->isCluster())
- NextClusterSucc = SuccSU;
return;
}
#ifndef NDEBUG
@@ -881,8 +880,6 @@ void ScheduleDAGMI::releasePred(SUnit *SU, SDep *PredEdge) {
if (PredEdge->isWeak()) {
--PredSU->WeakSuccsLeft;
- if (PredEdge->isCluster())
- NextClusterPred = PredSU;
return;
}
#ifndef NDEBUG
@@ -1077,11 +1074,8 @@ findRootsAndBiasEdges(SmallVectorImpl<SUnit*> &TopRoots,
}
/// Identify DAG roots and setup scheduler queues.
-void ScheduleDAGMI::initQueues(ArrayRef<SUnit*> TopRoots,
- ArrayRef<SUnit*> BotRoots) {
- NextClusterSucc = nullptr;
- NextClusterPred = nullptr;
-
+void ScheduleDAGMI::initQueues(ArrayRef<SUnit *> TopRoots,
+ ArrayRef<SUnit *> BotRoots) {
// Release all DAG roots for scheduling, not including EntrySU/ExitSU.
//
// Nodes with unreleased weak edges can still be roots.
@@ -2008,6 +2002,7 @@ void BaseMemOpClusterMutation::clusterNeighboringMemOps(
ScheduleDAGInstrs *DAG) {
// Keep track of the current cluster length and bytes for each SUnit.
DenseMap<unsigned, std::pair<unsigned, unsigned>> SUnit2ClusterInfo;
+ EquivalenceClasses<SUnit *> Clusters;
// At this point, `MemOpRecords` array must hold atleast two mem ops. Try to
// cluster mem ops collected within `MemOpRecords` array.
@@ -2047,6 +2042,7 @@ void BaseMemOpClusterMutation::clusterNeighboringMemOps(
SUnit *SUa = MemOpa.SU;
SUnit *SUb = MemOpb.SU;
+
if (!ReorderWhileClustering && SUa->NodeNum > SUb->NodeNum)
std::swap(SUa, SUb);
@@ -2054,6 +2050,7 @@ void BaseMemOpClusterMutation::clusterNeighboringMemOps(
if (!DAG->addEdge(SUb, SDep(SUa, SDep::Cluster)))
continue;
+ Clusters.unionSets(SUa, SUb);
LLVM_DEBUG(dbgs() << "Cluster ld/st SU(" << SUa->NodeNum << ") - SU("
<< SUb->NodeNum << ")\n");
++NumClustered;
@@ -2093,6 +2090,21 @@ void BaseMemOpClusterMutation::clusterNeighboringMemOps(
<< ", Curr cluster bytes: " << CurrentClusterBytes
<< "\n");
}
+
+ // Add cluster group information.
+ // Iterate over all of the equivalence sets.
+ auto &AllClusters = DAG->getClusters();
+ for (auto &I : Clusters) {
+ if (!I->isLeader())
+ continue;
+ ClusterInfo Group;
+ unsigned ClusterIdx = AllClusters.size();
+ for (auto *MemberI : Clusters.members(*I)) {
+ MemberI->ParentClusterIdx = ClusterIdx;
+ Group.insert(MemberI);
+ }
+ AllClusters.push_back(Group);
+ }
}
void BaseMemOpClusterMutation::collectMemOpRecords(
@@ -3456,6 +3468,9 @@ void GenericScheduler::initialize(ScheduleDAGMI *dag) {
}
TopCand.SU = nullptr;
BotCand.SU = nullptr;
+
+ TopCluster = nullptr;
+ BotCluster = nullptr;
}
/// Initialize the per-region scheduling policy.
@@ -3762,13 +3777,11 @@ bool GenericScheduler::tryCandidate(SchedCandidate &Cand,
// This is a best effort to set things up for a post-RA pass. Optimizations
// like generating loads of multiple registers should ideally be done within
// the scheduler pass by combining the loads during DAG postprocessing.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU,
- TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
if (SameBoundary) {
@@ -4015,11 +4028,25 @@ void GenericScheduler::reschedulePhysReg(SUnit *SU, bool isTop) {
void GenericScheduler::schedNode(SUnit *SU, bool IsTopNode) {
if (IsTopNode) {
SU->TopReadyCycle = std::max(SU->TopReadyCycle, Top.getCurrCycle());
+ TopCluster = DAG->getCluster(SU->ParentClusterIdx);
+ LLVM_DEBUG(if (TopCluster) {
+ dbgs() << " Top Cluster: ";
+ for (auto *N : *TopCluster)
+ dbgs() << N->NodeNum << '\t';
+ dbgs() << "\n";
+ });
Top.bumpNode(SU);
if (SU->hasPhysRegUses)
reschedulePhysReg(SU, true);
} else {
SU->BotReadyCycle = std::max(SU->BotReadyCycle, Bot.getCurrCycle());
+ BotCluster = DAG->getCluster(SU->ParentClusterIdx);
+ LLVM_DEBUG(if (BotCluster) {
+ dbgs() << " Bot Cluster: ";
+ for (auto *N : *BotCluster)
+ dbgs() << N->NodeNum << '\t';
+ dbgs() << "\n";
+ });
Bot.bumpNode(SU);
if (SU->hasPhysRegDefs)
reschedulePhysReg(SU, false);
@@ -4076,6 +4103,8 @@ void PostGenericScheduler::initialize(ScheduleDAGMI *Dag) {
if (!Bot.HazardRec) {
Bot.HazardRec = DAG->TII->CreateTargetMIHazardRecognizer(Itin, DAG);
}
+ TopCluster = nullptr;
+ BotCluster = nullptr;
}
void PostGenericScheduler::initPolicy(MachineBasicBlock::iterator Begin,
@@ -4137,14 +4166,12 @@ bool PostGenericScheduler::tryCandidate(SchedCandidate &Cand,
return TryCand.Reason != NoCand;
// Keep clustered nodes together.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
-
// Avoid critical resource consumption and balance the schedule.
if (tryLess(TryCand.ResDelta.CritResources, Cand.ResDelta.CritResources,
TryCand, Cand, ResourceReduce))
@@ -4329,9 +4356,11 @@ SUnit *PostGenericScheduler::pickNode(bool &IsTopNode) {
void PostGenericScheduler::schedNode(SUnit *SU, bool IsTopNode) {
if (IsTopNode) {
SU->TopReadyCycle = std::max(SU->TopReadyCycle, Top.getCurrCycle());
+ TopCluster = DAG->getCluster(SU->ParentClusterIdx);
Top.bumpNode(SU);
} else {
SU->BotReadyCycle = std::max(SU->BotReadyCycle, Bot.getCurrCycle());
+ BotCluster = DAG->getCluster(SU->ParentClusterIdx);
Bot.bumpNode(SU);
}
}
diff --git a/llvm/lib/CodeGen/MacroFusion.cpp b/llvm/lib/CodeGen/MacroFusion.cpp
index 5bd6ca0978a4b..c614e477a9d8f 100644
--- a/llvm/lib/CodeGen/MacroFusion.cpp
+++ b/llvm/lib/CodeGen/MacroFusion.cpp
@@ -61,6 +61,11 @@ bool llvm::fuseInstructionPair(ScheduleDAGInstrs &DAG, SUnit &FirstSU,
for (SDep &SI : SecondSU.Preds)
if (SI.isCluster())
return false;
+
+ unsigned FirstCluster = FirstSU.ParentClusterIdx;
+ unsigned SecondCluster = SecondSU.ParentClusterIdx;
+ assert(FirstCluster == InvalidClusterId && SecondCluster == InvalidClusterId);
+
// Though the reachability checks above could be made more generic,
// perhaps as part of ScheduleDAGInstrs::addEdge(), since such edges are valid,
// the extra computation cost makes it less interesting in general cases.
@@ -70,6 +75,14 @@ bool llvm::fuseInstructionPair(ScheduleDAGInstrs &DAG, SUnit &FirstSU,
if (!DAG.addEdge(&SecondSU, SDep(&FirstSU, SDep::Cluster)))
return false;
+ auto &Clusters = DAG.getClusters();
+
+ FirstSU.ParentClusterIdx = Clusters.size();
+ SecondSU.ParentClusterIdx = Clusters.size();
+
+ SmallSet<SUnit *, 8> Cluster{{&FirstSU, &SecondSU}};
+ Clusters.emplace_back(Cluster);
+
// TODO - If we want to chain more than two instructions, we need to create
// artifical edges to make dependencies from the FirstSU also dependent
// on other chained instructions, and other chained instructions also
diff --git a/llvm/lib/CodeGen/ScheduleDAG.cpp b/llvm/lib/CodeGen/ScheduleDAG.cpp
index 26857edd871e2..e630b80e33ab4 100644
--- a/llvm/lib/CodeGen/ScheduleDAG.cpp
+++ b/llvm/lib/CodeGen/ScheduleDAG.cpp
@@ -365,6 +365,9 @@ LLVM_DUMP_METHOD void ScheduleDAG::dumpNodeName(const SUnit &SU) const {
LLVM_DUMP_METHOD void ScheduleDAG::dumpNodeAll(const SUnit &SU) const {
dumpNode(SU);
SU.dumpAttributes();
+ if (SU.ParentClusterIdx != InvalidClusterId)
+ dbgs() << " Parent Cluster Index: " << SU.ParentClusterIdx << '\n';
+
if (SU.Preds.size() > 0) {
dbgs() << " Predecessors:\n";
for (const SDep &Dep : SU.Preds) {
diff --git a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
index 5678512748569..6c6c81ab2b4cc 100644
--- a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
+++ b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
@@ -584,12 +584,11 @@ bool GCNMaxILPSchedStrategy::tryCandidate(SchedCandidate &Cand,
// This is a best effort to set things up for a post-RA pass. Optimizations
// like generating loads of multiple registers should ideally be done within
// the scheduler pass by combining the loads during DAG postprocessing.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
// Avoid increasing the max critical pressure in the scheduled region.
@@ -659,12 +658,11 @@ bool GCNMaxMemoryClauseSchedStrategy::tryCandidate(SchedCandidate &Cand,
// MaxMemoryClause-specific: We prioritize clustered instructions as we would
// get more benefit from clausing these memory instructions.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
// We only compare a subset of features when comparing nodes between
diff --git a/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp b/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
index 03712879f7c49..5eb1f0128643d 100644
--- a/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
+++ b/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
@@ -100,12 +100,11 @@ bool PPCPreRASchedStrategy::tryCandidate(SchedCandidate &Cand,
// This is a best effort to set things up for a post-RA pass. Optimizations
// like generating loads of multiple registers should ideally be done within
// the scheduler pass by combining the loads during DAG postprocessing.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
if (SameBoundary) {
@@ -190,8 +189,11 @@ bool PPCPostRASchedStrategy::tryCandidate(SchedCandidate &Cand,
return TryCand.Reason != NoCand;
// Keep clustered nodes together.
- if (tryGreater(TryCand.SU == DAG->getNextClusterSucc(),
- Cand.SU == DAG->getNextClusterSucc(), TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
// Avoid critical resource consumption and balance the schedule.
diff --git a/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll b/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
index b944194dae8fc..f9176bc9d3fa5 100644
--- a/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
+++ b/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
@@ -477,9 +477,8 @@ define void @callee_in_memory(%T_IN_MEMORY %a) {
; CHECK-NEXT: add x8, x8, :lo12:in_memory_store
; CHECK-NEXT: ldr d0, [sp, #64]
; CHECK-NEXT: str d0, [x8, #64]
-; CHECK-NEXT: ldr q0, [sp, #16]
; CHECK-NEXT: str q2, [x8, #48]
-; CHECK-NEXT: ldr q2, [sp]
+; CHECK-NEXT: ldp q2, q0, [sp]
; CHECK-NEXT: stp q0, q1, [x8, #16]
; CHECK-NEXT: str q2, [x8]
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll b/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
index 7e72e8de01f4f..3bada9d5b3bb4 100644
--- a/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
@@ -7,8 +7,8 @@
; CHECK-LABEL: @test
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #3
-; CHECK: ldp [[CPLX1_I:s[0-9]+]], [[CPLX1_R:s[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:s[0-9]+]], [[CPLX2_R:s[0-9]+]], [[[BASE]], #64]
+; CHECK-DAG: ldp [[CPLX1_I:s[0-9]+]], [[CPLX1_R:s[0-9]+]], [[[BASE]]]
+; CHECK-DAG: ldp [[CPLX2_I:s[0-9]+]], [[CPLX2_R:s[0-9]+]], [[[BASE]], #64]
; CHECK: fadd {{s[0-9]+}}, [[CPLX2_I]], [[CPLX1_I]]
; CHECK: fadd {{s[0-9]+}}, [[CPLX2_R]], [[CPLX1_R]]
; CHECK: ret
@@ -36,8 +36,8 @@ entry:
; CHECK-LABEL: @test_int
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #3
-; CHECK: ldp [[CPLX1_I:w[0-9]+]], [[CPLX1_R:w[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:w[0-9]+]], [[CPLX2_R:w[0-9]+]], [[[BASE]], #64]
+; CHECK-DAG: ldp [[CPLX1_I:w[0-9]+]], [[CPLX1_R:w[0-9]+]], [[[BASE]]]
+; CHECK-DAG: ldp [[CPLX2_I:w[0-9]+]], [[CPLX2_R:w[0-9]+]], [[[BASE]], #64]
; CHECK: add {{w[0-9]+}}, [[CPLX2_I]], [[CPLX1_I]]
; CHECK: add {{w[0-9]+}}, [[CPLX2_R]], [[CPLX1_R]]
; CHECK: ret
@@ -65,8 +65,8 @@ entry:
; CHECK-LABEL: @test_long
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #4
-; CHECK: ldp [[CPLX1_I:x[0-9]+]], [[CPLX1_R:x[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:x[0-9]+]], [[CPLX2_R:x[0-9]+]], [[[BASE]], #128]
+; CHEC...
[truncated]
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
// We only compare a subset of features when comparing nodes between
diff --git a/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp b/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
index 03712879f7c49..5eb1f0128643d 100644
--- a/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
+++ b/llvm/lib/Target/PowerPC/PPCMachineScheduler.cpp
@@ -100,12 +100,11 @@ bool PPCPreRASchedStrategy::tryCandidate(SchedCandidate &Cand,
// This is a best effort to set things up for a post-RA pass. Optimizations
// like generating loads of multiple registers should ideally be done within
// the scheduler pass by combining the loads during DAG postprocessing.
- const SUnit *CandNextClusterSU =
- Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- const SUnit *TryCandNextClusterSU =
- TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
- if (tryGreater(TryCand.SU == TryCandNextClusterSU,
- Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
if (SameBoundary) {
@@ -190,8 +189,11 @@ bool PPCPostRASchedStrategy::tryCandidate(SchedCandidate &Cand,
return TryCand.Reason != NoCand;
// Keep clustered nodes together.
- if (tryGreater(TryCand.SU == DAG->getNextClusterSucc(),
- Cand.SU == DAG->getNextClusterSucc(), TryCand, Cand, Cluster))
+ const ClusterInfo *CandCluster = Cand.AtTop ? TopCluster : BotCluster;
+ const ClusterInfo *TryCandCluster = TryCand.AtTop ? TopCluster : BotCluster;
+ if (tryGreater(TryCandCluster && TryCandCluster->contains(TryCand.SU),
+ CandCluster && CandCluster->contains(Cand.SU), TryCand, Cand,
+ Cluster))
return TryCand.Reason != NoCand;
// Avoid critical resource consumption and balance the schedule.
diff --git a/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll b/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
index b944194dae8fc..f9176bc9d3fa5 100644
--- a/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
+++ b/llvm/test/CodeGen/AArch64/argument-blocks-array-of-struct.ll
@@ -477,9 +477,8 @@ define void @callee_in_memory(%T_IN_MEMORY %a) {
; CHECK-NEXT: add x8, x8, :lo12:in_memory_store
; CHECK-NEXT: ldr d0, [sp, #64]
; CHECK-NEXT: str d0, [x8, #64]
-; CHECK-NEXT: ldr q0, [sp, #16]
; CHECK-NEXT: str q2, [x8, #48]
-; CHECK-NEXT: ldr q2, [sp]
+; CHECK-NEXT: ldp q2, q0, [sp]
; CHECK-NEXT: stp q0, q1, [x8, #16]
; CHECK-NEXT: str q2, [x8]
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll b/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
index 7e72e8de01f4f..3bada9d5b3bb4 100644
--- a/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
@@ -7,8 +7,8 @@
; CHECK-LABEL: @test
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #3
-; CHECK: ldp [[CPLX1_I:s[0-9]+]], [[CPLX1_R:s[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:s[0-9]+]], [[CPLX2_R:s[0-9]+]], [[[BASE]], #64]
+; CHECK-DAG: ldp [[CPLX1_I:s[0-9]+]], [[CPLX1_R:s[0-9]+]], [[[BASE]]]
+; CHECK-DAG: ldp [[CPLX2_I:s[0-9]+]], [[CPLX2_R:s[0-9]+]], [[[BASE]], #64]
; CHECK: fadd {{s[0-9]+}}, [[CPLX2_I]], [[CPLX1_I]]
; CHECK: fadd {{s[0-9]+}}, [[CPLX2_R]], [[CPLX1_R]]
; CHECK: ret
@@ -36,8 +36,8 @@ entry:
; CHECK-LABEL: @test_int
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #3
-; CHECK: ldp [[CPLX1_I:w[0-9]+]], [[CPLX1_R:w[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:w[0-9]+]], [[CPLX2_R:w[0-9]+]], [[[BASE]], #64]
+; CHECK-DAG: ldp [[CPLX1_I:w[0-9]+]], [[CPLX1_R:w[0-9]+]], [[[BASE]]]
+; CHECK-DAG: ldp [[CPLX2_I:w[0-9]+]], [[CPLX2_R:w[0-9]+]], [[[BASE]], #64]
; CHECK: add {{w[0-9]+}}, [[CPLX2_I]], [[CPLX1_I]]
; CHECK: add {{w[0-9]+}}, [[CPLX2_R]], [[CPLX1_R]]
; CHECK: ret
@@ -65,8 +65,8 @@ entry:
; CHECK-LABEL: @test_long
; CHECK: add [[BASE:x[0-9]+]], x0, x1, lsl #4
-; CHECK: ldp [[CPLX1_I:x[0-9]+]], [[CPLX1_R:x[0-9]+]], [[[BASE]]]
-; CHECK: ldp [[CPLX2_I:x[0-9]+]], [[CPLX2_R:x[0-9]+]], [[[BASE]], #128]
+; CHEC...
[truncated]
Could you fix that bug first in a separate PR? Then this PR should have fewer test check changes.
Agreed, this is probably just due to good luck or bad luck in register allocation. We could put
@@ -61,6 +61,11 @@ bool llvm::fuseInstructionPair(ScheduleDAGInstrs &DAG, SUnit &FirstSU,
  for (SDep &SI : SecondSU.Preds)
    if (SI.isCluster())
      return false;
I think the MacroFusion changes have no effect, do I understand it correctly?
It is a little bit confusing here. The "cluster" weak edge no longer functions as it did before. I thought about removing it at first, but then it is hard to detect cases like:
// Check that neither instr is already paired with another along the edge
// between them.
We only keep the set of SUnits in a cluster group, with no ordering anymore, so we cannot detect "the edge between them". I am not sure whether it is important to check for this pattern. Maybe we just need to check that they are not being clustered with anyone else? I don't have a clear answer yet. So I still keep the "cluster" weak edge, but the way nodes are clustered during scheduling is now determined only by the Clusters in the DAG. Maybe we can clean this up later if people have a clear idea of how best to handle this.
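For example, the "not clustered with anyone else" check could be expressed through the new ParentClusterIdx field instead of scanning for a Cluster edge. A minimal standalone sketch of that membership test, using stand-in types rather than the actual LLVM classes (only ParentClusterIdx and InvalidClusterId are taken from this patch, and the concrete value used for InvalidClusterId below is an assumption of the sketch):
```
// Stand-in types; ParentClusterIdx / InvalidClusterId mirror the fields this
// patch adds to SUnit, but the value chosen here is only for the sketch.
#include <limits>

constexpr unsigned InvalidClusterId = std::numeric_limits<unsigned>::max();

struct SUnitStub {
  unsigned ParentClusterIdx = InvalidClusterId; // not yet in any cluster group
};

// With an unordered cluster set there is no "edge between them" to inspect;
// the pair is fusable only if neither node already belongs to a cluster group.
bool neitherAlreadyClustered(const SUnitStub &First, const SUnitStub &Second) {
  return First.ParentClusterIdx == InvalidClusterId &&
         Second.ParentClusterIdx == InvalidClusterId;
}
```
A check along these lines would keep the refusal symmetric for both nodes, which is also what the assert added to fuseInstructionPair in this patch expects.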
See #139513. After that change, the diff here would drop from "176 files changed, 31670 insertions(+), 31540 deletions(-)" to "101 files changed, 25804 insertions(+), 25670 deletions(-)".