Performance regression traversing large arrays compared to other engines #1294
Hermes is an interpreter optimized for very fast startup and small binary size. You are comparing it against JSC, which is a type-specializing JIT. At a steady state, given enough time to warm up, a JIT will always have a performance advantage, so this is expected.

I extracted the benchmark from your code:

function testConvert1() {
  const startTime = Date.now();
  const byteArray = new Uint8Array(10000000);
  for (let n = 0; n < byteArray.length; n++) {
  }
  print("ExecutionTime = ", Date.now() - startTime);
}

This code, a tight array loop that does nothing, is particularly advantageous for a JIT. A JIT could legitimately optimize out the entire loop, bringing the time down to 0. By comparison, a JIT will have a much harder time optimizing multiple small routines with allocations, etc. If we modify the code to actually do something, and keep the typed array length in a variable to avoid fetching it on every iteration, we get this:

function testConvert2() {
  const startTime = Date.now();
  const byteArray = new Uint8Array(10000000);
  let sum = 0;
  for (let n = 0, len = byteArray.length; n < len; n++) {
    sum += byteArray[n];
  }
  print(sum, "ExecutionTime = ", Date.now() - startTime);
}

I ran this code with Hermes, V8, and JSC, with their JITs on and off. As an interpreter, Hermes is currently on par in performance and often has an advantage. With all that said, we realize that there are situations where more performance is needed and Hermes is unable to serve them well. That's why we are working on Static Hermes, which will be much faster. Running the same benchmark with Static Hermes currently takes 14 ms, so we are very close, and we keep improving.
Ok, understood, thanks for the answer. There are also some other performance differences I've noticed. I posted about them in the react-native project, but they said I should ask the library authors, although I think the issue is with some of the low-level APIs. I was looking at the performance of TextEncoder (from the 'text-encoding' lib) and cheerio (jQuery), and saw that they perform much worse on React Native compared to a webview/NodeJS, and run about twice as slow on Hermes compared to JSC. TextEncoder in my example (on a 100 KB input) takes ~170 ms, while the webview takes 1-2 ms (both Android and iPhone) and NodeJS takes 14 ms (I suppose it has something to do with Web API optimizations). Another case is cheerio (jQuery) performance: when I try to perform some HTML manipulation via cheerio, I also see a big hit compared to other platforms:
[screenshot: cheerio HTML-manipulation timing comparison on React Native vs other platforms]
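For reference, here is a minimal sketch of the kind of TextEncoder measurement described above, assuming the polyfill exported by the 'text-encoding' package; the input size and logging are illustrative, not the exact code from the app.

// Time TextEncoder.encode() on a roughly 100 KB ASCII string.
// Uses the 'text-encoding' polyfill; on runtimes with a built-in
// TextEncoder, the global can be used instead.
const { TextEncoder } = require('text-encoding');

const input = 'a'.repeat(100 * 1024); // ~100 KB input

const start = Date.now();
const bytes = new TextEncoder().encode(input);
console.log(bytes.length, 'ExecutionTime =', Date.now() - start, 'ms');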
I am now trying to work around the TextEncoder issue via JSI/Golang, and if that goes well I might attempt to do the same with cheerio, although that one might be bulky.
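A minimal sketch of what the JS side of such a JSI-based workaround could look like; global.nativeEncodeUtf8 is a hypothetical name for a natively installed encoder, not an existing API.

// Prefer a hypothetical native UTF-8 encoder installed via JSI, and fall
// back to the JS TextEncoder (built-in or polyfill) when it is missing.
function encodeUtf8(str) {
  if (typeof global.nativeEncodeUtf8 === 'function') {
    return global.nativeEncodeUtf8(str); // assumed to return a Uint8Array
  }
  return new TextEncoder().encode(str);
}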
We are currently working on a native implementation of TextEncoder, so that will be much faster soon.
TextEncoder works really fast now as of react-native 0.74.2. TextDecoder is not yet supported, I guess, but I have my JSI solution for now, which is sufficient for me.
Hi, I've noticed some parts of my application being slow, and while investigating I found that iterating over an array on Hermes is significantly slower than on other platforms.
Timing comparison for the same test:
NodeJS: 15 ms
Android WebView: 100 ms
Android RN JSC: 60 ms
Android RN Hermes: 1400 ms
The test simply runs a loop 10 million times and does something. An example is provided behind the 'Test Convert' button:
https://github.com/maksimlya/TestRNPerf