Commit 03d4332
[RISCV] Pack build_vectors into largest available element type (llvm#97351)
Our worst-case build_vector lowering is a chain of vslide1down.vx operations, which creates a serial dependency chain through a relatively high-latency instruction. We can instead pack elements together into ELEN-sized chunks and move each chunk from the integer to the vector domain in a single operation. This reduces the length of the serial chain on the vector side and costs at most three scalar instructions per element. It is a win for all cores when the combined latency of the scalar instructions is less than that of the vslide1down.vx being replaced, and it is particularly profitable for out-of-order cores, which can overlap the scalar computation.

This patch is restricted to configurations with zba and zbb. Without both, the zero extend might require two instructions, which would bring the total to four scalar instructions per element. zba and zbb are both present in the rva22u64 baseline, which looks set to be quite common for hardware in practice; we could extend this to systems without bitmanip with a bit of extra effort.
1 parent 2dadf8d commit 03d4332
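
As a rough illustration of the per-pair packing described in the commit message above, here is a minimal host-side sketch. The function name and parameters are invented for the sketch and do not appear in the patch; it only mirrors the mask / shift / OR arithmetic the lowering emits per element pair.

```cpp
#include <cstdint>

// Illustrative sketch only: pack two narrow elements A and B into one chunk
// of twice their width, the way the new lowering does per element pair.
// ElemSizeInBits is assumed to be strictly less than 64, mirroring the
// ElemSizeInBits < min(ELEN, XLEN) guard in the patch.
uint64_t packPair(uint64_t A, uint64_t B, unsigned ElemSizeInBits) {
  // Mask each value to its element width. With zba/zbb this zero extension
  // is a single instruction (e.g. zext.h or zext.w), which is what keeps the
  // cost to at most three scalar ops per element.
  uint64_t Mask = (uint64_t(1) << ElemSizeInBits) - 1;
  A &= Mask;
  B &= Mask;
  // Shift the second element into the high half and merge (shift + or).
  return A | (B << ElemSizeInBits);
}
```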

File tree

2 files changed: +958 -327 lines changed


llvm/lib/Target/RISCV/RISCVISelLowering.cpp

Lines changed: 66 additions & 0 deletions
```diff
@@ -3905,6 +3905,65 @@ static SDValue lowerBuildVectorOfConstants(SDValue Op, SelectionDAG &DAG,
   return SDValue();
 }
 
+/// Double the element size of the build vector to reduce the number
+/// of vslide1down in the build vector chain. In the worst case, this
+/// trades three scalar operations for 1 vector operation. Scalar
+/// operations are generally lower latency, and for out-of-order cores
+/// we also benefit from additional parallelism.
+static SDValue lowerBuildVectorViaPacking(SDValue Op, SelectionDAG &DAG,
+                                          const RISCVSubtarget &Subtarget) {
+  SDLoc DL(Op);
+  MVT VT = Op.getSimpleValueType();
+  assert(VT.isFixedLengthVector() && "Unexpected vector!");
+  MVT ElemVT = VT.getVectorElementType();
+  if (!ElemVT.isInteger())
+    return SDValue();
+
+  // TODO: Relax these architectural restrictions, possibly with costing
+  // of the actual instructions required.
+  if (!Subtarget.hasStdExtZbb() || !Subtarget.hasStdExtZba())
+    return SDValue();
+
+  unsigned NumElts = VT.getVectorNumElements();
+  unsigned ElemSizeInBits = ElemVT.getSizeInBits();
+  if (ElemSizeInBits >= std::min(Subtarget.getELen(), Subtarget.getXLen()) ||
+      NumElts % 2 != 0)
+    return SDValue();
+
+  // Produce [B,A] packed into a type twice as wide. Note that all
+  // scalars are XLenVT, possibly masked (see below).
+  MVT XLenVT = Subtarget.getXLenVT();
+  auto pack = [&](SDValue A, SDValue B) {
+    // Bias the scheduling of the inserted operations to near the
+    // definition of the element - this tends to reduce register
+    // pressure overall.
+    SDLoc ElemDL(B);
+    SDValue ShtAmt = DAG.getConstant(ElemSizeInBits, ElemDL, XLenVT);
+    return DAG.getNode(ISD::OR, ElemDL, XLenVT, A,
+                       DAG.getNode(ISD::SHL, ElemDL, XLenVT, B, ShtAmt));
+  };
+
+  SDValue Mask = DAG.getConstant(
+      APInt::getLowBitsSet(XLenVT.getSizeInBits(), ElemSizeInBits), DL, XLenVT);
+  SmallVector<SDValue> NewOperands;
+  NewOperands.reserve(NumElts / 2);
+  for (unsigned i = 0; i < VT.getVectorNumElements(); i += 2) {
+    SDValue A = Op.getOperand(i);
+    SDValue B = Op.getOperand(i + 1);
+    // Bias the scheduling of the inserted operations to near the
+    // definition of the element - this tends to reduce register
+    // pressure overall.
+    A = DAG.getNode(ISD::AND, SDLoc(A), XLenVT, A, Mask);
+    B = DAG.getNode(ISD::AND, SDLoc(B), XLenVT, B, Mask);
+    NewOperands.push_back(pack(A, B));
+  }
+  assert(NumElts == NewOperands.size() * 2);
+  MVT WideVT = MVT::getIntegerVT(ElemSizeInBits * 2);
+  MVT WideVecVT = MVT::getVectorVT(WideVT, NumElts / 2);
+  return DAG.getNode(ISD::BITCAST, DL, VT,
+                     DAG.getBuildVector(WideVecVT, DL, NewOperands));
+}
+
 // Convert to an vXf16 build_vector to vXi16 with bitcasts.
 static SDValue lowerBUILD_VECTORvXf16(SDValue Op, SelectionDAG &DAG) {
   MVT VT = Op.getSimpleValueType();
@@ -4006,6 +4065,13 @@ static SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG,
     return convertFromScalableVector(VT, Vec, DAG, Subtarget);
   }
 
+  // If we're about to resort to vslide1down (or stack usage), pack our
+  // elements into the widest scalar type we can. This will force a VL/VTYPE
+  // toggle, but reduces the critical path, the number of vslide1down ops
+  // required, and possibly enables scalar folds of the values.
+  if (SDValue Res = lowerBuildVectorViaPacking(Op, DAG, Subtarget))
+    return Res;
+
   // For m1 vectors, if we have non-undef values in both halves of our vector,
   // split the vector into low and high halves, build them separately, then
   // use a vselect to combine them. For long vectors, this cuts the critical
```