Closed
I see that v4 dropped the LRU cache. I thought caching was a good strategy for reducing the number of allocations in algorithms where tensor sizes are not known beforehand.
Is there a particular reason why this feature was removed?
PS: I really appreciate the work on TensorOperations, it is a lot of help in my work :)
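For context, here is a minimal sketch of the kind of size-keyed LRU buffer cache being described: scratch buffers are reused across calls instead of freshly allocated each time, with the least recently used size class evicted when the cache is full. This is an illustration in Python with hypothetical names (`BufferCache`, `acquire`, `release`); it is not the actual TensorOperations implementation, which is written in Julia.

```python
from collections import OrderedDict


class BufferCache:
    """Tiny LRU cache for scratch buffers, keyed by buffer size.

    Illustrative only: reuse previously allocated buffers rather than
    allocating a fresh one per call, evicting the least recently used
    size class once the cache holds more than `maxsize` entries.
    """

    def __init__(self, maxsize=4):
        self.maxsize = maxsize
        self._store = OrderedDict()  # size -> list of free buffers

    def acquire(self, size):
        bufs = self._store.get(size)
        if bufs:
            self._store.move_to_end(size)  # mark size class as recently used
            return bufs.pop()              # cache hit: reuse a buffer
        return bytearray(size)             # cache miss: allocate

    def release(self, size, buf):
        self._store.setdefault(size, []).append(buf)
        self._store.move_to_end(size)
        while len(self._store) > self.maxsize:
            self._store.popitem(last=False)  # evict least recently used size


cache = BufferCache(maxsize=2)
buf = cache.acquire(128)        # first call allocates
cache.release(128, buf)
reused = cache.acquire(128)     # second call reuses the same buffer
assert reused is buf
```

The point of keying by size is exactly the scenario in the question: when tensor sizes are not known ahead of time, a hit on a previously seen size avoids a new allocation entirely.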