Improving inference caching (unnecessary context clones?) #529

Open
@brycepg

Description

From investigating the code, I get the impression that the context's cache_generator isn't working very well for caching, and that there may be some low-hanging fruit for improving performance.

Possible improvements:

  • Allow partial caching. Currently, an inference value is cached only once the inference generator is completely exhausted. Since many call sites retrieve only the first inferred value, those inferences never populate the cache at all.
  • Have context.push attempt to find inference results in the cache. This could reduce the number of context.clone() calls required to keep inference from failing on already-traveled values in the context path.
  • Centralize caching. Currently caching is tied to the context. Since pylint runs many disparate inferences while visiting the AST, a shared cache could yield a significant speedup. One possible downside is a larger memory footprint (a memory-for-speed tradeoff).
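The first bullet could be addressed with a wrapper that caches values incrementally as they are yielded, so even a partially consumed inference populates the cache. A minimal sketch (the class name and structure are hypothetical, not astroid's actual API):

```python
class PartialCachingGenerator:
    """Wrap an inference generator so that each yielded value is cached
    as soon as it is produced. Later iterations replay cached values
    first and only resume the underlying generator for the remainder."""

    def __init__(self, generator):
        self._gen = generator
        self._cache = []
        self._exhausted = False

    def __iter__(self):
        i = 0
        while True:
            if i < len(self._cache):
                # Replay a value that was already computed.
                yield self._cache[i]
            elif self._exhausted:
                return
            else:
                try:
                    value = next(self._gen)
                except StopIteration:
                    self._exhausted = True
                    return
                self._cache.append(value)
                yield value
            i += 1


# Example: taking only the first value still caches it.
calls = []

def infer():
    for v in (1, 2, 3):
        calls.append(v)  # record how far the real inference ran
        yield v

cached = PartialCachingGenerator(infer())
first = next(iter(cached))   # consume one value only
# calls == [1]: the underlying inference ran just one step,
# yet that step is now cached for subsequent iterations.
full = list(cached)          # resumes from the cache, then finishes
# calls == [1, 2, 3]; a second list(cached) would not re-run infer().
```

Indexing into the cache (rather than iterating it directly) keeps interleaved consumers correct: each iterator replays exactly the prefix that has been computed so far before pulling new values.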
