async get_info_lines #186

Merged
merged 3 commits into from
Aug 24, 2020

Conversation

o2sh
Owner

@o2sh o2sh commented Aug 23, 2020

#185
The get_info_lines was the obvious first choice to start playing with the async paradigm.
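A minimal sketch of the shape of the change (not the exact PR diff; the helper bodies are placeholders, and it assumes the `futures` crate is used to drive the future at the call site):

```rust
use futures::executor::block_on; // futures = "0.3"

// Placeholder for the real synchronous version lookup.
async fn get_version() -> String {
    String::from("2.3.0")
}

// Placeholder for walking the repository history.
async fn get_commits() -> usize {
    1024
}

// Each info-gathering call becomes an `async fn`, and get_info_lines awaits them.
async fn get_info_lines() -> Vec<String> {
    let version = get_version().await;
    let commits = get_commits().await;
    vec![
        format!("Version: {}", version),
        format!("Commits: {}", commits),
    ]
}

fn main() {
    for line in block_on(get_info_lines()) {
        println!("{}", line);
    }
}
```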

@o2sh o2sh requested a review from spenserblack August 23, 2020 13:58
@o2sh o2sh changed the title async get_info_lines #185 async get_info_lines Aug 23, 2020
Collaborator

@spenserblack spenserblack left a comment


Looks good to me 🙂 👍
Just left a few optional suggestions on code style.

Have you had a chance to compare the performance difference before and after this change?

@o2sh
Owner Author

o2sh commented Aug 23, 2020

Have you had a chance to compare the performance difference before and after this change?

I didn't see any noticeable difference when running the command on the Linux repo. On both the master branch and the feature branch I was getting a ~35s response time on average.

@spenserblack
Collaborator

spenserblack commented Aug 23, 2020

I didn't see any noticeable differences

🤔 I guess that the functions for gathering repo data were already fast enough that concurrency doesn't have a large effect.

We may want to test exactly where the most slowdown is (my instinct is that it's somewhere around the language analysis).

@o2sh
Owner Author

o2sh commented Aug 23, 2020

my instinct is that it's somewhere around the language analysis

This part is certainly responsible for a large share of the response time. That's why, in the parent issue, I chose to compare the performance of Onefetch with Tokei (which is responsible for the language stats). A 3x difference is quite significant. I am curious to see what the other bottlenecks are.

@spenserblack
Collaborator

spenserblack commented Aug 23, 2020

I am curious to see what are the other bottlenecks

As am I 😄

But I guess further discussion about optimization belongs in other issues. Anyway, your changes in this PR look great! 👍
If you're done making changes to this branch, it looks ready to merge to me 🙂

@o2sh o2sh merged commit b1d9c22 into master Aug 24, 2020
@spenserblack
Collaborator

spenserblack commented Aug 28, 2020

Looks like we made a bit of a mistake. See the behavior of this code.

While all the fns have been declared async, they are still calling synchronous functions, causing the thread to be blocked. Is that the behavior you've seen?

I'll have to read more into the documentation of libraries that help with async, like futures and tokio.
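To illustrate what's happening, here is a minimal sketch (assuming the `futures` crate; the function bodies are placeholders, not onefetch's real code): marking a function `async` does not make its body non-blocking, so joining the futures still runs the slow work back to back.

```rust
use futures::join; // futures = "0.3"
use std::{thread, time::Duration};

// Declared async, but the body is ordinary synchronous work that
// blocks the executor's thread while it runs.
async fn get_commits() -> usize {
    thread::sleep(Duration::from_secs(2)); // stand-in for slow git work
    1000
}

async fn get_authors() -> usize {
    thread::sleep(Duration::from_secs(2));
    5
}

fn main() {
    // join! polls both futures on one thread, but each poll runs its
    // blocking body to completion, so this takes ~4s rather than ~2s.
    let (commits, authors) =
        futures::executor::block_on(async { join!(get_commits(), get_authors()) });
    println!("{} commits, {} authors", commits, authors);
}
```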

@o2sh
Owner Author

o2sh commented Aug 29, 2020

Try this

This blog post explains it.

Coming from a C# background, I am kind of surprised/disappointed by this behavior...

And looking at the telemetry I gathered earlier:

get_language_stats --> 2.696831215s
get_configuration --> 24.528µs
get_current_commit_info --> 1.132701ms
get_authors --> 9.683836222s
get_git_info --> 1.874656ms
get_version --> 1.41074ms
get_commits --> 6.315069997s
get_pending_pending --> 128.824896ms
get_packed_size --> 20.631127ms
get_last_change --> 1.778009ms
get_creation_time --> 9.860741358s
get_project_license --> 88.195828ms

It does seem to run synchronously.
Maybe we should try multithreading?
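A rough sketch of the multithreading idea (hypothetical stand-ins for the slow collectors, using only std::thread; not onefetch's actual functions):

```rust
use std::thread;

// Stand-ins for the slow collectors.
fn get_commits() -> usize {
    1024
}

fn get_authors() -> Vec<String> {
    vec![String::from("o2sh")]
}

fn main() {
    // Spawn each collector on its own OS thread so the slow ones overlap
    // instead of running back to back on a single thread.
    let commits_handle = thread::spawn(get_commits);
    let authors_handle = thread::spawn(get_authors);

    let commits = commits_handle.join().unwrap();
    let authors = authors_handle.join().unwrap();
    println!("{} commits by {:?}", commits, authors);
}
```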

@spenserblack
Collaborator

spenserblack commented Aug 29, 2020

Yeah, it looks like Rust's async/await behaves more like Python's (I just had to switch from a sync library to an async one yesterday for this same reason).

So I think the best option is multithreading, as you suggested.
