
Optimize some inner cases for tokenFromKeyword #2116


Merged
MaxGraey merged 11 commits into AssemblyScript:main from improve-tokenizer on Nov 11, 2021

Conversation

@MaxGraey (Member) commented on Nov 4, 2021

Some nested cases in tokenFromKeyword that contain a lot of comparisons (>= 5) against long keywords with common prefixes could be optimized further. WDYT?

Benchmark: Parse AS.

main branch:

Parse      :   785.185 ms  n=171
Parse      :   768.248 ms  n=171
Parse      :   801.483 ms  n=171
Parse      :   776.191 ms  n=171

This PR:

Parse      :   727.563 ms  n=171
Parse      :   759.355 ms  n=171
Parse      :   760.821 ms  n=171
Parse      :   769.710 ms  n=171
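
As a rough illustration of the kind of keyword lookup being tuned (a hypothetical sketch, not the actual diff from this PR), the idea is to switch on the first character and then on length, so that a group of keywords sharing a long common prefix is distinguished with fewer per-candidate character comparisons. The Token names and the exact case structure below are assumptions for illustration only.

```ts
// Hypothetical sketch of a tokenFromKeyword-style lookup (names assumed, not the PR's code).
enum Token { IF, IMPLEMENTS, IMPORT, IN, INSTANCEOF, INTERFACE, IS, INVALID }

function tokenFromKeyword(text: string): Token {
  switch (text.charCodeAt(0)) {
    case 0x69: { // 'i' — many keywords share this prefix
      // Branching on length first avoids re-scanning the shared "i"/"in" prefix
      // for every candidate when the group has >= 5 keywords.
      switch (text.length) {
        case 2: {
          if (text === "if") return Token.IF;
          if (text === "in") return Token.IN;
          if (text === "is") return Token.IS;
          break;
        }
        case 6: return text === "import" ? Token.IMPORT : Token.INVALID;
        case 9: return text === "interface" ? Token.INTERFACE : Token.INVALID;
        case 10: {
          if (text === "implements") return Token.IMPLEMENTS;
          if (text === "instanceof") return Token.INSTANCEOF;
          break;
        }
      }
      break;
    }
    // ... other first characters elided
  }
  return Token.INVALID;
}
```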
  • I've read the contributing guidelines
  • I've added my name and email to the NOTICE file

@MaxGraey requested a review from dcodeIO on November 4, 2021 10:51
@MaxGraey merged commit 34f52c7 into AssemblyScript:main on November 11, 2021
@MaxGraey deleted the improve-tokenizer branch on November 11, 2021 10:42