
NBT indexer for reading large NBT data efficiently #127

Open
@dktapps

Description


One of the annoying properties of NBT is that the entire structure has to be parsed just to know how long it is, because a length prefix usually isn't included where one would be useful.

It's also very annoying to have to decode an entire NBT file (sometimes many MBs of data) just to get at one specific key.

It would be nice to introduce an NbtIndexer that takes an input file and builds an index of where every tag can be found in it, instead of decoding all of those tags directly into objects. Then, when a specific subset of the data needs to be accessed, it would look up the byte offset in the index and deserialize only that specific part of the file.

This would mainly be useful for processing large amounts of data when only a small amount of it is actually wanted.

It's not currently clear to me how useful this would actually be in terms of performance, but it's an experiment I thought might be worth someone's time to investigate.
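To make the idea concrete, here is a minimal sketch of what such an indexer could look like, assuming uncompressed big-endian ("Java edition") NBT. Tag type IDs and payload layouts follow the public NBT spec; the `NbtIndexer` class and its method names are hypothetical and not part of any existing API in this library.

```python
# Sketch of an NBT index builder: walks the buffer once, recording the byte
# offset of every tag's payload instead of decoding values into objects.
import struct

# Fixed-size payloads: TAG_Byte, TAG_Short, TAG_Int, TAG_Long, TAG_Float, TAG_Double
FIXED_SIZES = {1: 1, 2: 2, 3: 4, 4: 8, 5: 4, 6: 8}

class NbtIndexer:
    def __init__(self, data: bytes):
        self.data = data
        self.index = {}  # maps "path/to/tag" -> (payload_offset, tag_type)

    def build(self):
        """Scan the whole buffer, skipping payloads rather than decoding them."""
        pos = 0
        tag_type = self.data[pos]; pos += 1          # root tag type
        name, pos = self._read_name(pos)             # root tag name
        self._skip_payload(tag_type, pos, name)
        return self.index

    def _read_name(self, pos):
        (length,) = struct.unpack_from(">H", self.data, pos)
        pos += 2
        return self.data[pos:pos + length].decode("utf-8"), pos + length

    def _skip_payload(self, tag_type, pos, path):
        """Record where this tag's payload starts, then skip past it."""
        self.index[path] = (pos, tag_type)
        if tag_type in FIXED_SIZES:
            return pos + FIXED_SIZES[tag_type]
        if tag_type == 7:   # TAG_Byte_Array: int length + raw bytes
            (n,) = struct.unpack_from(">i", self.data, pos)
            return pos + 4 + n
        if tag_type == 8:   # TAG_String: unsigned short length + UTF-8 bytes
            (n,) = struct.unpack_from(">H", self.data, pos)
            return pos + 2 + n
        if tag_type == 9:   # TAG_List: element type, int count, then payloads
            elem_type = self.data[pos]
            (count,) = struct.unpack_from(">i", self.data, pos + 1)
            pos += 5
            for i in range(count):
                pos = self._skip_payload(elem_type, pos, f"{path}[{i}]")
            return pos
        if tag_type == 10:  # TAG_Compound: named child tags until TAG_End
            while True:
                child_type = self.data[pos]; pos += 1
                if child_type == 0:  # TAG_End
                    return pos
                name, pos = self._read_name(pos)
                pos = self._skip_payload(child_type, pos, f"{path}/{name}")
        if tag_type == 11:  # TAG_Int_Array: int length + 4-byte ints
            (n,) = struct.unpack_from(">i", self.data, pos)
            return pos + 4 + 4 * n
        if tag_type == 12:  # TAG_Long_Array: int length + 8-byte longs
            (n,) = struct.unpack_from(">i", self.data, pos)
            return pos + 4 + 8 * n
        raise ValueError(f"unknown tag type {tag_type} at offset {pos}")
```

Accessing one key would then be a dictionary lookup followed by deserializing only the bytes at the recorded offset, rather than decoding the whole tree. Whether that actually wins (index build cost plus lookup vs. one full decode) is exactly the open question above.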
