
Improve performance #75

Open
Sing-Li opened this issue Mar 21, 2021 · 8 comments

Comments

@Sing-Li
Member

Sing-Li commented Mar 21, 2021

This year, 2021, our GSoC participation has exploded compared with 2020 and prior years.

As a result, we already have close to 200 participants registered on our current leaderboard ( https://gsoc.rocket.chat )

Current architecture limitations are showing up as performance bottlenecks. The initial load, for example, is VERY SLOW for everyone.

We need to adopt some form of caching (or other techniques) to optimize the performance of this leaderboard for our own use -- and for other high-load users.

@yash-rajpal
Member

I have had an idea for this for quite some time. We don't back up any data, so when the server starts, every contributor starts from 0; it takes some time to load the actual contributions, and 0 is shown until then. I don't think we need a database for this -- we can get away with storing all the information in the config.json file.
As of now we store an array of contributor names in config.json. My idea is to store an array of objects instead, such that:

```json
"contributors": [
  { "username": "yash-rajpal", "merged": x, "open": y, "issues": z },
  { "username": "someUser", "merged": x, "open": y, "issues": z }
]
```

Then, every time we fetch details from the GitHub APIs, we update that contributor's data in config.json. After any server restart, we will have the contribution data in config.json and can show it directly instead of showing 0, which provides a better UX.

@Sing-Li please provide feedback on this approach. I will also look for other performance improvements and report anything I find. Thanks :)

@Sing-Li
Member Author

Sing-Li commented Mar 25, 2021

I don't think the delay we observe now with the list has anything to do with this (though I may be wrong).

Can you try to quantitatively determine where the observed delay comes from, using the Chrome or Firefox devtools to inspect request timing?

@Sing-Li
Member Author

Sing-Li commented Mar 25, 2021

If you can present your findings with a screen capture here, we can analyze them together and find the best way to optimize the app.

@umakantv

@Sing-Li
I would suggest a Redis-based caching solution for this. I am aware that Redis is only an in-memory database, but it can back up its data on a regular basis. I can go ahead and implement it if you approve.
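If this route were taken, a cache-aside pattern is the usual shape. A minimal sketch, assuming any async client exposing `get`/`set` (e.g. ioredis in a real deployment) and a hypothetical `fetchFromGitHub` helper:

```javascript
// Cache-aside sketch (assumed names; not the app's actual code).
// On a cache hit we skip GitHub entirely; on a miss we fetch and
// store the result with a TTL so the data stays reasonably fresh.
async function getLeaderboard(cache, fetchFromGitHub, ttlSeconds = 300) {
  const cached = await cache.get("leaderboard");
  if (cached) return JSON.parse(cached);
  const fresh = await fetchFromGitHub();
  await cache.set("leaderboard", JSON.stringify(fresh), "EX", ttlSeconds);
  return fresh;
}
```

The `"EX", ttlSeconds` arguments follow the ioredis `set` signature; a different client would spell the expiry differently.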

@Sing-Li
Member Author

Sing-Li commented Apr 16, 2021

@umakantv We do not need any more guesses at a solution at the moment, since the problem is not yet fully quantified (as stated earlier).

Can you try to quantitatively determine where the observed delay comes from, using the Chrome or Firefox devtools to inspect request timing?

If you can present your findings with screen capture here - we can analyze it together and find the best way to optimize the app.

Quantify the problem first and present it here.

Then we can trace it to the current architecture and figure out where the optimization / improvement must take place.

@hrahul2605

@Sing-Li
[screenshot: devtools request timing, ~2 seconds]

Okay, so I found out that this is one of the main causes: the data itself is taking around 2 seconds to load.
To resolve this, we can create a paginated endpoint and send only 10-15 users at a time. On scroll we can request the next set of users, or display the first set and fetch the rest in the background.

The time varies a lot; as you can see, it took around 6 seconds here.
[screenshot: devtools request timing, ~6 seconds]
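The pagination idea above could look something like this (the route name, page size, and response shape are assumptions, not the app's actual API):

```javascript
// Sketch of a paginated response over the full user array
// (hypothetical shape; the real endpoint/framework may differ).
function paginate(users, page = 1, pageSize = 15) {
  const start = (page - 1) * pageSize;
  return {
    page,
    total: users.length,
    users: users.slice(start, start + pageSize),
    // the client keeps requesting pages while this is true
    hasMore: start + pageSize < users.length,
  };
}

// e.g. with Express (assumed): app.get("/api/users", (req, res) =>
//   res.json(paginate(allUsers, Number(req.query.page) || 1)));
```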

@Sing-Li
Member Author

Sing-Li commented Apr 16, 2021

@hrahul2605 Cool. Any way to get an average of, say, 100 iterations, with the local browser cache cleared before each run? That might give us a better view of what is happening.

@Dnouv
Member

Dnouv commented May 11, 2022

Hello @Sing-Li @yash-rajpal,
It's a little late, but do we still need pagination for this performance fix? Or do we have plans to adopt the Next.js frontend? Please let me know.

Thank you!
