feat: add async HTTP context #18
Conversation
The CI fails on Node.js 12.0.0 because AsyncLocalStorage is only available since v12.17.0.
Here are some benchmark results to show the performance impact of activating the AsyncLocalStorage.

The first benchmark is a simple route.

Without the AsyncLocalStorage:

With the AsyncLocalStorage:

We can see that we are losing some req/sec. In a real application, we may have many async calls per request:

```js
Route.get('/', async () => {
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();
  await asyncFn();

  return { hello: AsyncHttpContext.getContext()?.request.id() || 'NOTHING' }
})

async function asyncFn() {
  await new Promise((resolve) => setImmediate(resolve));
}
```

Without the AsyncLocalStorage:

With the AsyncLocalStorage:

We can see that we are losing way more req/sec with multiple awaited calls.
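The comment does not say how the numbers were collected; a hedged sketch of a reproduction script using autocannon (the tool choice, port, and load settings are assumptions, not from the PR) could look like this:

```ts
import autocannon from 'autocannon'

// Fire load at a locally running AdonisJS server and report average req/sec.
// Run it twice: once with the AsyncLocalStorage wiring enabled, once without.
async function run() {
  const result = await autocannon({
    url: 'http://localhost:3333/', // assumed local server address
    connections: 100,              // concurrent connections
    duration: 10,                  // seconds
  })
  console.log(`${result.requests.average} req/sec on average`)
}

run()
```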
Hey @targos, thanks for the PR, this is going to be a helpful addition.

@RomainLanz @targos Yes, the performance hit is quite visible. If I am not wrong, the performance degrades roughly linearly with every async/await call? Also, I think the API is currently new and we will see performance improvements in the future, right? Or is it something that will never happen because of the design of async local storage?
I have not checked if it's linear, but it seems so, yeah. The reason is that for every `await`, a new promise is created and has to be tracked by the async hooks machinery while the storage is active.
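To illustrate that point (this snippet is an editorial illustration, not code from the PR), every awaited expression shows up as one or more PROMISE async resources once a hook is enabled:

```ts
import { createHook } from 'async_hooks'

// Count how many PROMISE resources async_hooks observes; each `await` adds some,
// which is where the per-await overhead comes from.
let promiseInits = 0
createHook({
  init(_asyncId, type) {
    if (type === 'PROMISE') promiseInits++
  },
}).enable()

async function handler() {
  await Promise.resolve() // each of these creates tracked promise resources
  await Promise.resolve()
  await Promise.resolve()
}

handler().then(() => {
  setImmediate(() => console.log(`PROMISE resources created: ${promiseInits}`))
})
```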
I have looked a bit at the implementation and it seems that performance improvements are possible in the future. /cc @vdeturckheim do you think I've implemented it correctly on the Adonis side?
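For readers unfamiliar with the approach under review, a minimal sketch of this kind of wiring, assuming the standard HttpContextContract binding (this is not the PR's actual code; names and the integration point are illustrative):

```ts
import { AsyncLocalStorage } from 'async_hooks'
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

// One storage instance for the whole process, entered once per HTTP request.
const storage = new AsyncLocalStorage<HttpContextContract>()

// The HTTP server would call this around the request pipeline, so that every
// await downstream inherits the store via async_hooks tracking.
export function runInContext<T>(ctx: HttpContextContract, handler: () => Promise<T>): Promise<T> {
  return storage.run(ctx, handler)
}

// Anywhere in user land: returns the context of the current request,
// or null when called outside of a request.
export function getContext(): HttpContextContract | null {
  return storage.getStore() ?? null
}
```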
From what I understand, the implementation LGTM.
It might also be interesting to add a handler at the process level for unhandled exceptions; with the context, you should be able to trigger the framework's error logic from there.
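A hedged sketch of that idea (the handler body and the logging calls are assumptions, not part of this PR):

```ts
import Logger from '@ioc:Adonis/Core/Logger'
import AsyncHttpContext from '@ioc:Adonis/Core/AsyncHttpContext'

// If the async context is still available when the rejection surfaces, the
// failure can be attributed to the request that triggered it.
process.on('unhandledRejection', (reason) => {
  const ctx = AsyncHttpContext.getContext()
  if (ctx) {
    Logger.error({ url: ctx.request.url(), err: reason }, 'Unhandled rejection inside a request')
  } else {
    Logger.error({ err: reason }, 'Unhandled rejection outside of a request')
  }
})
```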
So what do you suggest on how we should roll it out? For me, the biggest advantage is to use the …
The only improvement I can think of from the implementation perspective is wrappers around the request/response event emitters. Here is a simple example: https://github.com/puzpuzpuz/cls-rtracer/blob/master/src/util.js#L8
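A hedged sketch of that technique (this helper is illustrative, not the linked implementation; `AsyncResource.bind` requires Node.js 12.19.0+/14.8.0+, and a full helper would also wrap `once` and `addListener`):

```ts
import { AsyncResource } from 'async_hooks'
import type { EventEmitter } from 'events'

// Wrap `on` so that listeners registered on the emitter later still run inside
// the async context that was active when they were added.
function bindEmitterListeners(emitter: EventEmitter): void {
  const originalOn = emitter.on.bind(emitter)
  emitter.on = (event: string | symbol, listener: (...args: any[]) => void) =>
    originalOn(event, AsyncResource.bind(listener))
}
```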
I've checked the benchmark and don't see any problems with it or any potential improvements. Indeed, when the HTTP server does almost nothing, the ALS (or rather the underlying async_hooks) overhead is noticeable.
AFAIK all low-hanging fruit in terms of ALS-targeted optimizations in Node.js core has already been picked.
One of the main remaining problems related to the overhead (especially when async/await syntax is in use) is the per-promise cost of the promise hooks themselves.

What's written above is my vision, and other collaborators (who are certainly more knowledgeable on the topic) may have a different opinion.
@puzpuzpuz Thanks a ton for sharing all these insights, it's really helpful. I will start looking into getting this implemented and then try it on a real project to see the actual overhead.
@thetutlage no problem at all. I'll be waiting for your opinion once the experiments are done.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm going to update this eventually. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This is not abandoned. @thetutlage is there a way to prevent the stale bot from marking this PR? |
Updated the config to exempt the …
I followed this thread nodejs/node#34493, and especially the comment by mcollina nodejs/node#34493 (comment) that micro benchmarks may not be the best way to find the async hooks overhead. Should we give it a try once again? I am thinking of finalizing the feature and putting it behind a flag. I have a couple of apps handling real world traffic, so I will use it there and see the overhead.
I don't think you'll see significant improvements in terms of the overhead yet, but once nodejs/node#36394 lands, the improvement should be very significant. So, in future versions of Node.js the overhead should become much smaller.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Seems that didn't work |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
I know that there is some ongoing work to make this faster but I don't know exactly where that is. I will try to get some data next week. |
Thanks @vdeturckheim |
FYI, just saw that the underlying V8 changes for nodejs/node#36394 were just merged in. Thanks to all contributors for working on this!

@thetutlage I had a question around how async HttpContext might work with v5. Take Lucid for example. Would the async context work with the emitter, i.e. having access to the request context from inside event listeners? This would allow for much deeper and correlated logging within our applications, and, at least for our business, is the most promising use of this feature.
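For example, a hedged sketch of what that correlated logging could look like (the `db:query` payload shape and the final AsyncHttpContext API are assumptions at this point):

```ts
import Event from '@ioc:Adonis/Core/Event'
import Logger from '@ioc:Adonis/Core/Logger'
import AsyncHttpContext from '@ioc:Adonis/Core/AsyncHttpContext'

// If the async context survives into emitter listeners, every query log line
// can carry the id of the request that issued the query.
Event.on('db:query', (query) => {
  const requestId = AsyncHttpContext.getContext()?.request.id()
  Logger.info({ requestId, sql: query.sql }, 'db:query')
})
```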
That's great. I see that the PR does improve the performance a lot.
Yes, it will. In fact, AdonisJS uses a profiler under the hood that collects more metrics. However, we never exposed or documented it, because the real benefit of the profiler comes into the picture after using the AsyncHooks.
I'm going to update this PR soon. |
Seems that some updates regarding performance have been done in the latest Node.js release.
Yep, just saw the release PR (#38719) for it. Really excited for this one! 😄
I updated the code. Some questions before I make the PR ready and write tests:

Is the API OK, from a user's perspective?

It looks like this:

```ts
import AsyncHttpContext from '@ioc:Adonis/Core/AsyncHttpContext'
import Route from '@ioc:Adonis/Core/Route'
import { setTimeout } from 'timers/promises'

AsyncHttpContext.getContext() // Returns null (because it's called outside of a request)
AsyncHttpContext.getContextOrFail() // Throws (same reason)

Route.get('/test', async () => {
  await doSomething()
  return 'ok'
})

async function doSomething() {
  await setTimeout(1000)
  const ctx = AsyncHttpContext.getContextOrFail()
  console.log(ctx.request.url())
}
```

Should it be opt-in?

I will do some new testing with Node.js 16.2.0, as the overhead should be much smaller than before.
Thanks for updating the PR :)

So, there are a couple of ways to opt into it.

The first is to enable it using a flag; the internals of the framework will then rely on the flag to access the async context. This way, someone can opt into this just for tracing. It also opens up the possibility of having something like Laravel Debugbar for development.

The other is to just make this the default way to write our apps, meaning the user land code uses the async context for developing features. If the performance impact is not too big, then I will opt for this, because it makes everything simpler: the guides, tutorials and user land code will all use the same syntax/API.
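A hedged sketch of what the flag-based option might look like in config/app.ts (the flag name is hypothetical and not something this PR defines):

```ts
// config/app.ts (excerpt)
export const http = {
  // ...existing HTTP server settings would stay as they are

  // Hypothetical opt-in flag: when enabled, the server would wrap each request
  // in AsyncLocalStorage so AsyncHttpContext.getContext() works anywhere.
  useAsyncLocalStorage: true,
}
```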
Let's get it in :) |
@RomainLanz @thetutlage
This is very much WIP, but I'm opening it early to discuss the API and the approach taken.