
increase specialization of _totuple #41515

Merged · 1 commit · Jul 14, 2021
Conversation

@JeffBezanson (Member)

Helps #41512

After:

julia> @time doit(itr)
  0.000285 seconds (10.00 k allocations: 312.500 KiB)

That's almost a 100x speedup, but I'm not sure where the remaining allocation is coming from.
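For context, here is a minimal sketch of the kind of benchmark being timed. Note that `doit` and `itr` are not defined anywhere in this thread; the definitions below are assumptions, modeled on issue #41512's pattern of collecting a generator into a tuple in a hot loop:

```julia
# Hypothetical benchmark harness; `doit` and `itr` are assumed names,
# not the actual definitions from issue #41512.
itr = (i for i in 1:4)        # a small Base.Generator

function doit(itr)
    local t
    for _ in 1:10_000
        t = Tuple(itr)        # goes through Base._totuple internally
    end
    return t
end

doit(itr)        # compile first, so @time measures steady state
@time doit(itr)
```

One tuple allocation per loop iteration would be consistent with the "10.00 k allocations" figure in the timing above, though the exact harness used is not shown here.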

@JeffBezanson JeffBezanson added the performance Must go faster label Jul 8, 2021
@Sacha0 (Member) left a comment
These tough reviews Jeff.

@simeonschaub (Member) left a comment

Is the type annotation here still needed?

ts = _totuple(rT, itr, y[2])::rT
(Won't hurt to leave it in, just curious.)

Otherwise LGTM.

@JeffBezanson (Member, Author)

Yes, the type assert seems to still help. I would guess due to recursion widening of some sort.
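A minimal illustration of the pattern being discussed (this is a sketch, not the actual `Base._totuple` source, which has more methods and handles element-type conversion): in a recursive function, inference can widen the inferred type of the recursive call's result, and a typeassert like the `::rT` above pins it back down.

```julia
# Sketch only; shows the recursion + typeassert shape, not the real
# _totuple implementation.
function build(itr, state...)
    y = iterate(itr, state...)
    y === nothing && return ()
    rest = build(itr, y[2])::Tuple   # typeassert helps inference across the recursion
    return (y[1], rest...)
end

build((1, 2, 3))   # → (1, 2, 3)
```

Without the `::Tuple` assertion, the compiler may give up on inferring the recursive call precisely and widen it, which is consistent with the "recursion widening" guess above.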

@JeffBezanson JeffBezanson merged commit 468b157 into master Jul 14, 2021
@JeffBezanson JeffBezanson deleted the jb/specialize_totuple branch July 14, 2021 19:48
KristofferC pushed a commit that referenced this pull request Jul 19, 2021
Helps #41512

(cherry picked from commit 468b157)
KristofferC pushed a commit that referenced this pull request Jul 20, 2021
Helps #41512

(cherry picked from commit 468b157)
Labels: performance (Must go faster)
4 participants