perf: split up routers to separate lambdas #8041
Conversation
@juliusmarminge is attempting to deploy a commit to the cal Team on Vercel. A member of the Team first needs to authorize it.
📦 Next.js Bundle Analysis for @calcom/web
This analysis was generated by the Next.js Bundle Analysis action. 🤖
This PR introduced no changes to the JavaScript bundle! 🙌
E2Es are fixed now. There were route assertions which changed.
Documenting here for clarity. There was a separate test done in #8551 that verified the behavior when accessing the route. The effect this will have is that if you look at the network traffic for cold starts on pages in the app, you will now see multiple tRPC calls take upwards of 3 seconds to load. But since they run concurrently, the effect is not compounded, which is what previously resulted in such long load times.
LGTM. Tested some core features locally.
Great job!! 🚀
Left a comment about how we can make sure this runs on separate lambdas. We can do it in another PR or add it here before we merge if you agree; your choice, @keithwillcode.
/**
 * We deploy our tRPC router on multiple lambdas to keep number of imports as small as possible
 * TODO: Make this dynamic based on folders in trpc server?
 */
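For context, a minimal sketch of what one of the per-router API routes could look like. The file path, import paths, and createContext shown here are assumptions for illustration, not taken verbatim from this PR:

// pages/api/trpc/slots/[trpc].ts -- hypothetical file path
// Because this route only imports the slots router, the lambda built for it
// bundles just that router's dependencies instead of the whole appRouter.
import * as trpcNext from "@trpc/server/adapters/next";

// Assumed import paths; adjust to wherever the context and router actually live.
import { createContext } from "@calcom/trpc/server/createContext";
import { slotsRouter } from "@calcom/trpc/server/routers/viewer/slots";

export default trpcNext.createNextApiHandler({
  router: slotsRouter,
  createContext,
});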
The way it's written now, since the API routes are separated into different folders, Vercel already creates separate functions for each one. We don't have to include this config to ensure those are created distinctly. We can change the config if we want to, of course.
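Related note: once the routers live on separate endpoints, the client also has to send each call to the right URL. Below is a minimal sketch of one way to do that with tRPC's splitLink; the router path prefix, URLs, and client setup are assumptions, not necessarily how this repo wires it up:

// Hypothetical client wiring -- illustrative only, not this repo's actual setup.
import { createTRPCProxyClient, httpBatchLink, splitLink } from "@trpc/client";
import type { AppRouter } from "@calcom/trpc/server/routers/_app"; // assumed path

const client = createTRPCProxyClient<AppRouter>({
  links: [
    splitLink({
      // Send slots calls to the dedicated slots lambda, everything else to the catch-all route.
      condition: (op) => op.path.startsWith("viewer.slots."),
      true: httpBatchLink({ url: "/api/trpc/slots" }),
      false: httpBatchLink({ url: "/api/trpc" }),
    }),
  ],
});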
What does this PR do?
Splits up all the tRPC routers into their own API routes. (Edited by @keithwillcode)
New bundle sizes
The slots router is massive compared to all the rest of the endpoints combined...
Old bundle size
Seems like the analyzer doesn't really compare apples to apples here, as it states the previous bundle size is tiny. Not sure what's going on there; does it not take into account the size of the appRouter, since it's just joining in other routers?
But given that the entire appRouter is imported, which comes in at 334 kB, that should give an idea of the actual size:
Fixes # (issue)
Environment: Staging (main branch) / Production
Type of change
How should this be tested?