
TokenStream manipulations are 1000x too slow #65080

Closed
@dtolnay


Context: illicitonion/num_enum#14
Switching a proc macro from being token-based to operating on strings, with only a final conversion from string to TokenStream, can yield a 100x improvement in compile time. If we care that people continue to write macros using tokens, the performance needs to be better.
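For illustration, here is a minimal sketch of the string-based approach, loosely modeled on the kind of derive num_enum generates. The function name, enum name, and variants are hypothetical; a real macro would call `.parse::<TokenStream>()` once on the final string instead of building the output token by token.

```rust
// Hypothetical sketch: generate the derive output as one String,
// deferring the only TokenStream conversion to a single final parse.
fn generate_try_from(name: &str, variants: &[(&str, i64)]) -> String {
    // Build one match arm per variant with cheap in-process string ops.
    let mut arms = String::new();
    for (variant, discriminant) in variants {
        arms.push_str(&format!("{} => Ok({}::{}),\n", discriminant, name, variant));
    }
    // In a real proc macro this string would then be parsed once:
    // `generated.parse::<proc_macro::TokenStream>()`.
    format!(
        "impl core::convert::TryFrom<i64> for {} {{\n\
         type Error = ();\n\
         fn try_from(value: i64) -> Result<Self, ()> {{\n\
         match value {{\n{}_ => Err(()),\n}}\n}}\n}}",
        name, arms
    )
}
```

The point is that every operation before the final parse touches only process-local memory, so no per-operation cost from the proc macro bridge is paid.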

I minimized the slow part of the num_enum macro to this benchmark:
https://github.com/alexcrichton/proc-macro2/tree/12bac84dd8d090d2987a57b747c7ae7bbeb8a3d0/benches/bench-libproc-macro
On my machine the string implementation takes 8ms and the token implementation takes 25721ms.

I know that there is a proc macro server that these calls end up talking to, but I wouldn't expect a factor this large from that alone. If the server calls are the only thing making this slow, is there maybe a way we could buffer operations in memory to defer and batch the server work?
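The buffering idea above could be sketched roughly like this. Everything here is hypothetical (the `Op`, `BufferedStream`, and `Server` types are invented to model the bridge), but it shows the shape: record operations locally, then pay the round-trip cost once per batch instead of once per operation.

```rust
// Hypothetical model of batching proc macro server calls:
// operations are queued in process-local memory and applied
// to the "server" in one round trip.
enum Op {
    Extend(String),
}

struct BufferedStream {
    pending: Vec<Op>,
}

impl BufferedStream {
    fn new() -> Self {
        Self { pending: Vec::new() }
    }
    // Cheap: only records the operation locally, no server traffic.
    fn extend(&mut self, tokens: &str) {
        self.pending.push(Op::Extend(tokens.to_string()));
    }
    // One round trip applies every queued operation at once.
    fn flush(&mut self, server: &mut Server) {
        server.apply_batch(std::mem::take(&mut self.pending));
    }
}

struct Server {
    round_trips: usize,
    stream: String,
}

impl Server {
    fn apply_batch(&mut self, ops: Vec<Op>) {
        self.round_trips += 1; // cost paid once per batch, not per op
        for Op::Extend(s) in ops {
            self.stream.push_str(&s);
        }
    }
}
```

With this shape, a macro doing thousands of small `extend` calls would still trigger only one round trip when it flushes at the end.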

I will file issues in proc-macro2 and quote as well to see if anything can be improved on their end.

FYI @eddyb @petrochenkov @alexcrichton


Labels

- A-macros: All kinds of macros (custom derive, macro_rules!, proc macros, ..)
- C-enhancement: An issue proposing an enhancement or a PR with one.
- I-compiletime: Problems and improvements with respect to compile times.
- T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
