- Support hashing the `folded_tensor.lengths` field (via a UserList), which is convenient for caching
- Improve error messaging when refolding with missing dims
- Fix a `data_dims` access issue
- Marginally improve the speed of handling FoldedTensors in standard torch operations
- Use default torch types (e.g. `torch.float32` or `torch.int64`)
- Handle empty inputs (e.g. `as_folded_tensor([[[], []], [[]]])`) by returning an empty tensor
- Correctly bubble up errors when converting inputs with inconsistent nesting depths (e.g. `as_folded_tensor([1, [2, 3]])`)
- Allow `as_folded_tensor` to be used with no extra arguments, as a simple padding function (see the sketch after this list)
- Enable sharing FoldedTensor instances in a multiprocessing + CUDA context by auto-cloning the indexer before fork-pickling an instance
- Distribute arm64 wheels for macOS
- Allow extra dims after the last foldable dim during list conversion (e.g. embeddings)
- GitHub release
- Fix backpropagation when refolding
- Improve refolding performance by computing only the new "padded to flattened" indexer, not the previous one
- Remove the C++ torch dependency in favor of NumPy, since the lack of torch ABI backward/forward compatibility made the pre-built wheels unusable in most cases
- Require `dtype` to be specified when creating a FoldedTensor from a nested list
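A minimal sketch illustrating some of the list-conversion behaviors above (no-argument padding, empty inputs, and depth errors); the printed values are indicative, not taken from the library's documentation:

```python
from foldedtensor import as_folded_tensor

# With no extra arguments, as_folded_tensor acts as a plain padding
# function and falls back to a default torch dtype (torch.int64 here).
padded = as_folded_tensor([[1, 2, 3], [4, 5]])
print(padded)
# FoldedTensor([[1, 2, 3],
#               [4, 5, 0]])

# Empty nested inputs return an empty tensor instead of raising.
empty = as_folded_tensor([[[], []], [[]]])
print(empty.shape)  # torch.Size([2, 2, 0])

# Inputs with inconsistent nesting depths raise a (bubbled-up) error.
try:
    as_folded_tensor([1, [2, 3]])
except Exception as err:
    print(err)
```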
Inception! 🎉
- Support for arbitrary numbers of nested dimensions
- No computational overhead when dealing with already padded tensors
- Dynamic re-padding (or refolding) of data based on stored inner lengths (see the sketch after this list)
- Automatic mask generation and updating whenever the tensor is refolded
- C++ optimized code for fast data loading from Python lists and refolding
- Flexibility in data representation, making it easy to switch between different layouts when needed
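A minimal usage sketch of these inception features, assuming the `as_folded_tensor` signature from the project README (`data_dims`, `full_names`, `dtype`) and the `refold` / `mask` accessors:

```python
import torch
from foldedtensor import as_folded_tensor

# Three nested levels (samples -> lines -> words); only samples x words
# are kept in the padded layout, flattening lines within each sample.
ft = as_folded_tensor(
    [
        [[1], [2, 3]],   # sample 0: 2 lines, 3 words in total
        [[4, 5, 6]],     # sample 1: 1 line, 3 words in total
    ],
    data_dims=("samples", "words"),
    full_names=("samples", "lines", "words"),
    dtype=torch.long,
)
print(ft.shape)  # torch.Size([2, 3])
print(ft.mask)   # boolean mask over real (non-padding) values

# Refold to a samples x lines x words layout using the stored lengths;
# the mask is recomputed automatically.
refolded = ft.refold(("samples", "lines", "words"))
print(refolded.shape)  # torch.Size([2, 2, 3])
```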