Measure perf for leveldown after error handling implementation. #55
@boingoing - any update on this?
The above executes the leveldown unit test suite with plain or instrumented Node/leveldown binaries built from the respective repositories. Elapsed time for the suite is measured as the performance metric. Tests were run on a Windows machine with this hardware:
Is there a perf test in leveldown, or is this just running the unit tests? If there is no perf test, I think we might want to write at least a simple one that does something typical and measure that as well.
Next steps:
leveldown does have its own perf benchmark suite. @boingoing is that included in your data? |
Thanks @ianwjhalliday for pointing to the leveldown benchmark. This looks like a pretty representative test of ordinary database operations. It writes about a million entries, representing a database of around 110MB. I ran the benchmark on the same machine and binaries listed in the above comment and found that N-API has about a 5% perf impact. Here are the results for a few runs:
This workload is not very chatty in terms of calls out to the N-API surface, so the measured perf impact may be representative generally. Reducing the overhead for N-API callback functions — which need to fetch the argument count, argument values, `this` value, etc. — by consolidating all of those into a single call may help minimize the perf impact here, as well as in other modules. This needs some more investigation.
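The consolidation idea — one boundary crossing that returns the argument count, the arguments, and the receiver, instead of one crossing per datum — can be sketched schematically. This is illustrative C++ with invented names (`CallState`, `get_argc`, `get_cb_info`), not the actual N-API internals; in the real bridge each separate call pays N-API entry/exit overhead.

```cpp
#include <cassert>
#include <cstddef>

// Invented stand-in for the engine-side callback state; real N-API keeps
// this behind the opaque napi_callback_info handle.
struct CallState {
  int argc;
  long argv[8];
  long recv;  // the `this` value, reduced to a long for illustration
};

// Chatty pattern: one crossing per datum. Each of these would pay the
// bridge's entry/exit cost separately.
int get_argc(const CallState* s) { return s->argc; }
long get_arg(const CallState* s, int i) { return s->argv[i]; }
long get_recv(const CallState* s) { return s->recv; }

// Consolidated pattern, in the spirit of napi_get_cb_info: a single call
// copies out the argument count, the arguments, and the receiver at once.
void get_cb_info(const CallState* s, int* argc, long* argv, long* recv) {
  *argc = s->argc;
  for (int i = 0; i < s->argc; ++i) argv[i] = s->argv[i];
  *recv = s->recv;
}
```

The consolidated form does the same copies, but amortizes the per-call bookkeeping over one crossing instead of N+2.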
@boingoing what is the unit of elapsed time?
@sampsongao all units are in seconds.
I am working on a cuckoo hash table. Just calling into JS to fetch the various arguments is costing half the run time. Being able to fetch all arguments in a single call would be terrific. |
@jorangreef I do not see a lot of opportunities to improve here. We have already implemented the change you quoted (whereby you retrieve all function arguments in one call). @kenny-y do you see a way we can further reduce the overhead of that API call?
Thanks @gabrielschulhof. Sorry, I was not clear. What I meant was that I have a napi method which just does this:

This alone takes a total time of 200ms for 1,000,000 calls. Each napi call here adds overhead: 100ms is in `napi_get_cb_info` and the rest is split across the others. Is that right?
@jorangreef I experimentally added a benchmark to https://github.com/nodejs/node/tree/master/benchmark/napi/function_args with your combination of parameters in order to examine how much faster V8 is than N-API.

The V8 benchmark:

```cpp
void CallWithCombo1(const FunctionCallbackInfo<Value>& args) {
  v8::Local<v8::Context> context = args.GetIsolate()->GetCurrentContext();
  void* data = args[0].As<External>()->Value();
  (void) data;
  size_t buf1_length = node::Buffer::Length(args[1]);
  (void) buf1_length;
  char* buf1_data = node::Buffer::Data(args[1]);
  (void) buf1_data;
  int64_t int1 = args[2]->IntegerValue(context).FromJust();
  (void) int1;
  size_t buf2_length = node::Buffer::Length(args[3]);
  (void) buf2_length;
  char* buf2_data = node::Buffer::Data(args[3]);
  (void) buf2_data;
  int64_t int2 = args[4]->IntegerValue(context).FromJust();
  (void) int2;
}
```

The N-API benchmark:

```c
static napi_value CallWithCombo1(napi_env env, napi_callback_info info) {
  size_t argc = 5;
  napi_value argv[5];
  napi_get_cb_info(env, info, &argc, argv, NULL, NULL);
  void* data;
  napi_get_value_external(env, argv[0], &data);
  size_t buf1_length;
  void* buf1_data;
  napi_get_buffer_info(env, argv[1], &buf1_data, &buf1_length);
  int64_t int1;
  napi_get_value_int64(env, argv[2], &int1);
  size_t buf2_length;
  void* buf2_data;
  napi_get_buffer_info(env, argv[3], &buf2_data, &buf2_length);
  int64_t int2;
  napi_get_value_int64(env, argv[4], &int2);
  return NULL;
}
```

The result: there's definitely a performance degradation there, but, of course, that is to be expected. The question is: does the benefit of ABI stability outweigh the performance cost, and can we improve the performance?
One potential improvement I've found is to have a version of `napi_get_value_int64` that skips the exception-handling machinery:

```cpp
napi_status napi_get_value_int64_fast(napi_env env,
                                      napi_value value,
                                      int64_t* result) {
  // Omit NAPI_PREAMBLE and GET_RETURN_STATUS because V8 calls here cannot
  // throw JS exceptions.
  CHECK_ENV(env);
  CHECK_ARG(env, value);
  CHECK_ARG(env, result);
  v8::Local<v8::Value> val = v8impl::V8LocalValueFromJsValue(value);
  // Empty context: https://github.com/nodejs/node/issues/14379
  v8::Local<v8::Context> context;
  *result = val->IntegerValue(context).FromJust();
  return napi_clear_last_error(env);
}
```

This doesn't seem to buy us that much, though 😕:
@gabrielschulhof Not yet, I'm still trying to figure out whether there is anything that I can squeeze out...
Thanks for the benchmark @gabrielschulhof, that's awesome! 😃
Just speaking for myself, I personally would not mind if methods like [...]. Are there any optimizations that can be done for [...]? Regarding [...]: if I understand things correctly, at present it's like this:
Is there theoretically any way to do this:
At present, the N-API version of my hash table is slower than the JS version. The JS version can set an element in the hash table faster than the N-API version can just fetch the arguments (let alone set an element). I understand a hash table is by nature more chatty than typical N-API use cases, but it would be great if argument passing were not such a bottleneck.
We don't really call back into JS when retrieving arguments. That is, no JS code gets executed as part of argument retrieval. We do have to copy the arguments out of C++, though, because, although they appear to be an array, with C++'s operator overloading we cannot assume that they map to a plain C array — in fact, they most likely don't.
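The operator-overloading point can be demonstrated with a self-contained sketch. `FakeCallbackInfo` here is an invented stand-in, not V8's actual `FunctionCallbackInfo`: `info[i]` looks like array indexing, but the values need not be laid out as a plain C array, so exposing them as one requires a copy.

```cpp
#include <cassert>
#include <cstdint>

// Invented stand-in: operator[] is overloaded, and the backing storage
// is deliberately in reverse order (mimicking a stack-like engine
// layout), so &info[0] would NOT be a usable C array of the arguments.
struct FakeCallbackInfo {
  int64_t slots[5] = {50, 40, 30, 20, 10};
  int64_t operator[](int i) const { return slots[4 - i]; }
};

// To hand the caller a plain C array (as napi_get_cb_info must), the
// bridge layer copies the arguments out one by one.
void copy_args(const FakeCallbackInfo& info, int64_t* out, int argc) {
  for (int i = 0; i < argc; ++i) out[i] = info[i];
}
```

The copy is exactly the per-argument cost discussed above; it is the price of presenting an engine-internal layout as a flat `napi_value argv[]`.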
@jorangreef one possible solution I can think of would be to use an [...]
@jorangreef I tested the [...]
@jorangreef OK, the external still has to go in a separate argument. |
I just realized that using a large [...]
Argh ... no Int64 |
Would it make sense to have this in a new open issue? |
@jorangreef I'm curious - are you using N-API directly, or via the node-addon-api package? If via the package, are you building on a version of Node.js that has an implementation of N-API? We've recently improved the performance of N-API function invocation. This improvement has not yet landed in node-addon-api's version of N-API, but I have a PR in the pipe which should bring the changes to node-addon-api. The performance gains are charted in the PR. |
@jorangreef [...]
Re: nodejs/abi-stable-node#55 (comment)
PR-URL: #21722
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Michael Dawson <michael_dawson@ca.ibm.com>