indexing runs out of memory for large projects #1219

Closed
@martinlippert

Description

The indexing infrastructure is running out of memory when indexing projects with a large number of source files (as reported in #1212).

We need to improve the implementation to reduce overall memory consumption, and in particular to decouple memory consumption from the size of a project and from the number of projects being parsed.

Step 1: we need to split the set of source files into well-defined smaller chunks so that the garbage collector can free memory between chunks while indexing is still in progress.
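A minimal sketch of the chunking idea: batch the source files and index one batch at a time, so the ASTs for a finished batch become unreachable and collectable before the next batch is parsed. The class name, chunk size, and the `index` call are illustrative assumptions, not part of the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedIndexer {

    // Split the full file list into fixed-size chunks
    // (the chunk size here is an illustrative assumption).
    static <T> List<List<T>> chunk(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            // copy the sublist so each chunk is independent of the full list
            chunks.add(new ArrayList<>(
                    items.subList(i, Math.min(i + chunkSize, items.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> files = List.of("A.java", "B.java", "C.java", "D.java", "E.java");
        for (List<String> batch : chunk(files, 2)) {
            // hypothetical: parse and index this batch, then drop all references
            // to its ASTs so the garbage collector can reclaim them
            System.out.println("indexing batch: " + batch);
        }
    }
}
```

The key point is that nothing outside the loop body holds on to a batch's parse results, so peak memory is bounded by one chunk rather than the whole project.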

Step 2: we need to clean up the parser's lookup environment after each parsing attempt, to avoid leaking memory and to avoid keeping zip files open.
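The cleanup step can be sketched as a small holder for the parser's lookup state that is released in a `finally` block after every parsing attempt. `ParserEnvironment`, `open`, and `openCount` are hypothetical names for illustration; the real lookup environment belongs to the parser infrastructure.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipFile;

public class ParserEnvironment {

    // archives (e.g. dependency jars) opened for classpath lookups
    private final List<ZipFile> openArchives = new ArrayList<>();

    // hypothetical helper: open an archive and remember it for later cleanup
    public ZipFile open(String path) throws IOException {
        ZipFile zip = new ZipFile(path);
        openArchives.add(zip);
        return zip;
    }

    public int openCount() {
        return openArchives.size();
    }

    // Release the lookup environment: close every zip file so neither
    // file handles nor their in-memory structures are retained.
    public void cleanup() {
        for (ZipFile zip : openArchives) {
            try {
                zip.close();
            } catch (IOException ignored) {
                // best effort: keep closing the remaining archives
            }
        }
        openArchives.clear();
    }
}
```

Call sites would then follow the usual try/finally pattern, e.g. `try { parse(env, file); } finally { env.cleanup(); }`, so the environment is released even when a parsing attempt fails.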
