8322174: RISC-V: C2 VectorizedHashCode RVV Version #17413
base: master
Conversation
👋 Welcome back ygaevsky! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request.
NB: I have no access to RVV v1.0.0 hardware, so performance improvements were estimated as described in [1]. The results are below; hopefully they will be similar on real RVV v1.0.0 hardware. Legend: UseVHI ==> UseVectorizedHashCodeIntrinsic.
[1] https://mail.openjdk.org/pipermail/riscv-port-dev/2024-January/001220.html
Webrevs
Some initial comments from a brief look.
// 31^^(MaxVectorSize-1)...31^^0 ==> vector registers
la(pows31, ExternalAddress(adr_pows31));
mv(t1, num_8b_elems_in_vec);
vsetvli(t0, t1, Assembler::e32, Assembler::m4);
I wonder if the scalar code for handling WIDE_TAIL could be eliminated with RVV's stripmining design [1]? It looks like the current code doesn't take advantage of this design, as the new vl returned by vsetvli is not checked and used.
[1] https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#sec-vector-config
One of the common approaches to handling a large number of elements is "stripmining" where each iteration of a loop handles some number of elements, and the iterations continue until all elements have been processed. The RISC-V vector specification provides direct, portable support for this approach. The application specifies the total number of elements to be processed (the application vector length or AVL) as a candidate value for vl, and the hardware responds via a general-purpose register with the (frequently smaller) number of elements that the hardware will handle per iteration (stored in vl), based on the microarchitectural implementation and the vtype setting. A straightforward loop structure, shown in [Example of stripmining and changes to SEW](https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#example-stripmine-sew), depicts the ease with which the code keeps track of the remaining number of elements and the amount per iteration handled by hardware.
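For illustration, a minimal stripmine-style skeleton in C2_MacroAssembler terms could look like the sketch below. This is hypothetical, not code from this PR; the scaling of the running result by 31^vl, which may vary per iteration, is elided, and that vl-dependent scaling is precisely what makes stripmining awkward for an order-dependent polynomial hash.

  Label STRIP_LOOP;
  bind(STRIP_LOOP);
  vsetvli(t0, cnt, Assembler::e32, Assembler::m4); // t0 = vl <= cnt, chosen by hardware
  vle32_v(v_src, ary);                             // loads exactly vl elements
  // ... multiply by coefficients and reduce into the running result;
  // the result must also be scaled by 31^vl here, looked up per iteration ...
  slli(t1, t0, 2);                                 // vl * sizeof(jint)
  add(ary, ary, t1);                               // advance past the processed elements
  subw(cnt, cnt, t0);                              // cnt -= vl
  bnez(cnt, STRIP_LOOP);                           // iterate until all elements are done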
Thank you for your comments, @RealFYang. I have tried to use vector instructions (m4 ==> m2) for the tail calculations, but that only makes the performance numbers worse. :-(
I've made additional measurements with more granularity:
Benchmark                (size) Mode Cnt [ -XX:-UseRVV ]  [ -XX:+UseRVV ]
ArraysHashCode.multiints 10 avgt 30 12.460 ± 0.155 13.836 ± 0.054 ns/op
ArraysHashCode.multiints 11 avgt 30 14.541 ± 0.140 14.613 ± 0.084 ns/op
ArraysHashCode.multiints 12 avgt 30 15.097 ± 0.052 15.517 ± 0.097 ns/op
ArraysHashCode.multiints 13 avgt 30 13.632 ± 0.137 14.486 ± 0.181 ns/op
ArraysHashCode.multiints 14 avgt 30 15.771 ± 0.108 16.153 ± 0.092 ns/op
ArraysHashCode.multiints 15 avgt 30 14.726 ± 0.088 15.930 ± 0.077 ns/op
ArraysHashCode.multiints 16 avgt 30 15.533 ± 0.067 15.496 ± 0.083 ns/op
ArraysHashCode.multiints 17 avgt 30 15.875 ± 0.173 16.878 ± 0.172 ns/op
ArraysHashCode.multiints 18 avgt 30 15.740 ± 0.114 16.465 ± 0.089 ns/op
ArraysHashCode.multiints 19 avgt 30 17.252 ± 0.051 17.628 ± 0.155 ns/op
ArraysHashCode.multiints 20 avgt 30 20.193 ± 0.282 19.039 ± 0.441 ns/op
ArraysHashCode.multiints 25 avgt 30 20.209 ± 0.070 20.513 ± 0.071 ns/op
ArraysHashCode.multiints 30 avgt 30 23.157 ± 0.068 23.290 ± 0.165 ns/op
ArraysHashCode.multiints 35 avgt 30 28.671 ± 0.116 26.198 ± 0.127 ns/op <---
ArraysHashCode.multiints 40 avgt 30 30.992 ± 0.068 27.342 ± 0.072 ns/op
ArraysHashCode.multiints 45 avgt 30 39.408 ± 1.428 32.170 ± 0.230 ns/op
ArraysHashCode.multiints 50 avgt 30 41.976 ± 0.442 33.103 ± 0.090 ns/op
ArraysHashCode.multiints 55 avgt 30 45.379 ± 0.236 35.899 ± 0.692 ns/op
ArraysHashCode.multiints 60 avgt 30 48.615 ± 0.249 35.709 ± 0.477 ns/op
ArraysHashCode.multiints 65 avgt 30 51.455 ± 0.213 38.275 ± 0.266 ns/op
ArraysHashCode.multiints 70 avgt 30 54.032 ± 0.324 37.985 ± 0.264 ns/op
ArraysHashCode.multiints 75 avgt 30 56.759 ± 0.164 39.446 ± 0.425 ns/op
ArraysHashCode.multiints 80 avgt 30 61.334 ± 0.267 41.521 ± 0.310 ns/op
ArraysHashCode.multiints 85 avgt 30 66.177 ± 0.299 44.136 ± 0.407 ns/op
ArraysHashCode.multiints 90 avgt 30 67.444 ± 0.282 42.909 ± 0.275 ns/op
ArraysHashCode.multiints 95 avgt 30 77.312 ± 0.303 49.078 ± 1.166 ns/op
ArraysHashCode.multiints 100 avgt 30 78.405 ± 0.220 47.499 ± 0.553 ns/op
ArraysHashCode.multiints 105 avgt 30 75.706 ± 0.265 46.029 ± 0.579 ns/op
As you can see, the numbers become better with +UseRVV only for lengths >= 30, and perhaps that can explain why my attempt to improve the tail with RVV instructions was unsuccessful: the cost of setting up the vector unit for small lengths is too high. :-(
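If the setup cost dominates for short arrays, one mitigation (a sketch under an assumed cutoff, not something measured here) would be to guard the vector path behind a minimum-length check and keep the scalar loop for everything below it:

  // Hypothetical guard; the cutoff of 32 is only suggested by the numbers above.
  const int rvv_min_len = 32;
  mv(t1, rvv_min_len);
  bltu(cnt, t1, TAIL);   // short arrays fall through to the scalar tail loop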
Hi, I don't quite understand why there is a need to change LMUL from m4 to m2 if we are switching to the stripmining approach. The tail calculation should normally share the code for VEC_LOOP, which also means we need some vector mask instructions to filter out the active elements for each loop iteration, especially the iteration handling the tail elements. And the vl returned by vsetvli tells us the number of elements that can be processed in parallel in one particular iteration ([1] is one example). I am not sure if you are trying it this way. Do you have more details or code changes to share? Thanks.
[1] https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#example-stripmine-sew
I used the m4->m2 change to process 8 elements in the tail with vector instructions after the main vector loop. IIUC, changing LMUL from m4 to m2 at runtime is very costly, so I've created another patch with the same goal but without the m4->m2 change:
void C2_MacroAssembler::arrays_hashcode_v(Register ary, Register cnt, Register result,
                                          Register tmp1, Register tmp2, Register tmp3,
                                          Register tmp4, Register tmp5, Register tmp6,
                                          BasicType eltype)
{
  ...
  const int nof_vec_elems = MaxVectorSize;
  const int hof_vec_elems = nof_vec_elems >> 1;
  const int elsize_bytes = arrays_hashcode_elsize(eltype);
  const int elsize_shift = exact_log2(elsize_bytes);
  const int vec_step_bytes = nof_vec_elems << elsize_shift;
  const int half_vec_step_bytes = vec_step_bytes >> 1;
  const address adr_pows31 = StubRoutines::riscv::arrays_hashcode_powers_of_31()
                             + sizeof(jint);
  ...
  const Register chunks = tmp1;
  const Register chunks_end = chunks;
  const Register pows31 = tmp2;
  const Register powmax = tmp3;

  const VectorRegister v_coeffs = v4;
  const VectorRegister v_src = v8;
  const VectorRegister v_sum = v12;
  const VectorRegister v_powmax = v16;
  const VectorRegister v_result = v20;
  const VectorRegister v_tmp = v24;
  const VectorRegister v_zred = v28;

  Label DONE, TAIL, TAIL_LOOP, PRE_TAIL, SAVE_VRESULT, WIDE_TAIL, VEC_LOOP;

  // result has a value initially
  beqz(cnt, DONE);

  andi(chunks, cnt, ~(hof_vec_elems-1));
  beqz(chunks, TAIL);

  // load pre-calculated powers of 31
  la(pows31, ExternalAddress(adr_pows31));
  mv(t1, nof_vec_elems);
  vsetvli(t0, t1, Assembler::e32, Assembler::m4);
  vle32_v(v_coeffs, pows31);
  // clear vector registers used in intermediate calculations
  vmv_v_i(v_sum, 0);
  vmv_v_i(v_powmax, 0);
  vmv_v_i(v_result, 0);
  // set initial values
  vmv_s_x(v_result, result);
  vmv_s_x(v_zred, x0);

  andi(chunks, cnt, ~(nof_vec_elems-1));
  beqz(chunks, WIDE_TAIL);
  subw(cnt, cnt, chunks);
  slli(chunks_end, chunks, elsize_shift);
  add(chunks_end, ary, chunks_end);
  // get value of 31^^nof_vec_elems
  lw(powmax, Address(pows31, -1 * sizeof(jint)));
  vmv_s_x(v_powmax, powmax);

  bind(VEC_LOOP);
  // result = result * 31^^(nof_vec_elems) + v_src[0] * 31^^(nof_vec_elems-1)
  //          + ... + v_src[nof_vec_elems-1] * 31^^(0)
  vmul_vv(v_result, v_result, v_powmax);
  arrays_hashcode_vec_elload(v_src, v_tmp, ary, eltype);
  vmul_vv(v_src, v_src, v_coeffs);
  vredsum_vs(v_sum, v_src, v_zred);
  vadd_vv(v_result, v_result, v_sum);
  addi(ary, ary, vec_step_bytes);   // bump array pointer
  bne(ary, chunks_end, VEC_LOOP);   // reached the end of chunks?
  beqz(cnt, SAVE_VRESULT);

  bind(WIDE_TAIL);
  andi(chunks, cnt, ~(hof_vec_elems-1));
  beqz(chunks, PRE_TAIL);
  mv(t1, hof_vec_elems);
  subw(cnt, cnt, t1);
  vslidedown_vx(v_coeffs, v_coeffs, t1);
  // get value of 31^^hof_vec_elems
  lw(powmax, Address(pows31, sizeof(jint)*(hof_vec_elems - 1)));
  vmv_s_x(v_powmax, powmax);
  vsetvli(t0, t1, Assembler::e32, Assembler::m4);
  // result = result * 31^^(hof_vec_elems) + v_src[0] * 31^^(hof_vec_elems-1)
  //          + ... + v_src[hof_vec_elems-1] * 31^^(0)
  vmul_vv(v_result, v_result, v_powmax);
  arrays_hashcode_vec_elload(v_src, v_tmp, ary, eltype);
  vmul_vv(v_src, v_src, v_coeffs);
  vredsum_vs(v_sum, v_src, v_zred);
  vadd_vv(v_result, v_result, v_sum);
  beqz(cnt, SAVE_VRESULT);
  addi(ary, ary, half_vec_step_bytes);  // bump array pointer

  bind(PRE_TAIL);
  vmv_x_s(result, v_result);

  bind(TAIL);
  slli(chunks_end, cnt, elsize_shift);
  add(chunks_end, ary, chunks_end);

  bind(TAIL_LOOP);
  arrays_hashcode_elload(t0, Address(ary), eltype);
  slli(t1, result, 5);        // optimize 31 * result
  subw(result, t1, result);   // with result<<5 - result
  addw(result, result, t0);
  addi(ary, ary, elsize_bytes);
  bne(ary, chunks_end, TAIL_LOOP);
  j(DONE);

  bind(SAVE_VRESULT);
  vmv_x_s(result, v_result);

  bind(DONE);
  ...
}
and got the following numbers:
[ -XX:+UseVectorizedHashCodeIntrinsic -XX:-UseRVV ]
Benchmark (size) Mode Cnt Score Error Units
ArraysHashCode.multibytes 8 avgt 10 11.020 ± 0.225 ns/op
ArraysHashCode.multibytes 9 avgt 10 12.578 ± 0.117 ns/op
ArraysHashCode.multibytes 16 avgt 10 15.505 ± 0.273 ns/op
ArraysHashCode.multibytes 17 avgt 10 16.603 ± 0.164 ns/op
ArraysHashCode.multibytes 24 avgt 10 21.005 ± 0.271 ns/op
ArraysHashCode.multibytes 25 avgt 10 21.428 ± 0.227 ns/op
ArraysHashCode.multibytes 32 avgt 10 27.985 ± 0.356 ns/op
ArraysHashCode.multibytes 33 avgt 10 29.669 ± 0.145 ns/op
ArraysHashCode.multibytes 48 avgt 10 37.575 ± 0.318 ns/op
ArraysHashCode.multibytes 49 avgt 10 40.121 ± 0.229 ns/op
ArraysHashCode.multibytes 56 avgt 10 48.637 ± 0.274 ns/op
ArraysHashCode.multibytes 57 avgt 10 45.931 ± 0.305 ns/op
ArraysHashCode.multibytes 64 avgt 10 48.362 ± 0.315 ns/op
ArraysHashCode.multibytes 65 avgt 10 52.228 ± 0.320 ns/op
ArraysHashCode.multibytes 72 avgt 10 49.523 ± 0.287 ns/op
ArraysHashCode.multibytes 73 avgt 10 54.788 ± 0.437 ns/op
ArraysHashCode.multibytes 80 avgt 10 62.087 ± 0.289 ns/op
ArraysHashCode.multibytes 81 avgt 10 62.570 ± 0.211 ns/op
[ -XX:+UseVectorizedHashCodeIntrinsic -XX:+UseRVV ]
Benchmark (size) Mode Cnt Score Error Units
ArraysHashCode.multibytes 8 avgt 10 15.700 ± 0.181 ns/op
ArraysHashCode.multibytes 9 avgt 10 20.743 ± 0.419 ns/op
ArraysHashCode.multibytes 16 avgt 10 30.189 ± 0.301 ns/op
ArraysHashCode.multibytes 17 avgt 10 32.639 ± 0.601 ns/op
ArraysHashCode.multibytes 24 avgt 10 36.358 ± 0.628 ns/op
ArraysHashCode.multibytes 25 avgt 10 34.486 ± 0.563 ns/op
ArraysHashCode.multibytes 32 avgt 10 42.667 ± 0.473 ns/op
ArraysHashCode.multibytes 33 avgt 10 44.858 ± 0.413 ns/op
ArraysHashCode.multibytes 48 avgt 10 47.132 ± 0.443 ns/op
ArraysHashCode.multibytes 49 avgt 10 51.528 ± 0.519 ns/op
ArraysHashCode.multibytes 56 avgt 10 52.133 ± 0.225 ns/op
ArraysHashCode.multibytes 57 avgt 10 48.549 ± 0.411 ns/op
ArraysHashCode.multibytes 64 avgt 10 57.399 ± 0.546 ns/op
ArraysHashCode.multibytes 65 avgt 10 57.680 ± 0.158 ns/op
ArraysHashCode.multibytes 72 avgt 10 50.890 ± 0.327 ns/op
ArraysHashCode.multibytes 73 avgt 10 54.338 ± 0.378 ns/op
ArraysHashCode.multibytes 80 avgt 10 59.218 ± 0.301 ns/op
ArraysHashCode.multibytes 81 avgt 10 63.889 ± 0.344 ns/op
As you can see, the numbers are worse even in cases when the scalar code is not used at all, i.e. for lengths 16, 24, 32, 48, 56, 64, etc. It seems possible to change the code so that it contains no scalar code at all, e.g. use the vslidedown instruction to shift the pre-calculated powers of 31 in v_coeffs according to the count of remaining elements, and perform the calculation:
vmul_vv(v_result, v_result, v_powmax);
arrays_hashcode_vec_elload(v_src, v_tmp, ary, eltype);
vmul_vv(v_src, v_src, v_coeffs);
vredsum_vs(v_sum, v_src, v_zred);
vadd_vv(v_result, v_result, v_sum);
for them all at once. However, as I pointed out above in the notes about lengths 24/36/..., that is unlikely to change the performance numbers.
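A rough sketch of that slide-down idea (hypothetical; assume rem is a Register holding the count of remaining elements, and note that result would still have to be scaled by 31^rem via a powmax lookup, as in the main loop):

  mv(t1, nof_vec_elems);
  subw(t1, t1, rem);                       // number of leading coefficients to drop
  vslidedown_vx(v_coeffs, v_coeffs, t1);   // align the last `rem` powers of 31 to the front
  vsetvli(t0, rem, Assembler::e32, Assembler::m4);
  // ... then the same mul/redsum/add sequence as above ...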
Of course, any ideas for improving the code are very welcome.
> Of course, any ideas for improving the code are very welcome.
Hi, I am afraid that the local changes you posted are still not in a stripmining form. Normally I would expect a single loop with masked vector instructions to handle all cases, including the tail ones. See my previous comment [1]. Note that I am not saying that stripmining is the best approach here performance-wise, but we will need the numbers to evaluate the different approaches.
[1] #17413 (comment)
@ygaevsky This pull request has been inactive for more than 4 weeks and will be automatically closed if another 4 weeks passes without any activity. To avoid this, simply add a new comment to the pull request. Feel free to ask for assistance if you need help with progressing this pull request towards integration!
"Please keep me active" comment. |
❗ This change is not yet ready to be integrated.
@ygaevsky this pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request, you can run the following commands in a local clone of your personal fork:

git checkout JDK-8322174
git fetch https://git.openjdk.org/jdk.git master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push
The patch adds the possibility to use RVV instructions for faster vectorizedHashCode calculations on RVV v1.0.0 capable hardware.
Testing: hotspot/jtreg/compiler/ under QEMU-8.1 with RVV v1.0.0.
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/17413/head:pull/17413
$ git checkout pull/17413
Update a local copy of the PR:
$ git checkout pull/17413
$ git pull https://git.openjdk.org/jdk.git pull/17413/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 17413
View PR using the GUI difftool:
$ git pr show -t 17413
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/17413.diff
Webrev
Link to Webrev Comment