Conversation

@clems4ever
Contributor

Realizing that the weights are also sparse, it might be interesting to exploit this fact to allocate only what is necessary in memory. This has the downside of being slower in terms of queries, but I think this is definitely optimizable. This was just a proof of concept.

I took the opportunity to fix the other implementation and the test related to DyadicMemory.
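A minimal sketch of the sparse-allocation idea, assuming a Python `defaultdict` keyed by weight coordinates. The class, method names, and top-p readout below are illustrative, not the actual implementation from this PR:

```python
# Sketch: instead of a dense n x n x n weight array, keep only the
# weights that have actually been incremented, in a defaultdict.
from collections import defaultdict

class SparseDyadicMemory:
    """Illustrative sketch: associates an SDR pair (x, y) with an SDR z,
    allocating memory only for nonzero weights."""

    def __init__(self, n=512, p=8):
        self.n = n                       # SDR size
        self.p = p                       # SDR population (active bits)
        self.weights = defaultdict(int)  # (i, j, k) -> counter

    def store(self, x, y, z):
        # x, y, z are lists of active bit indices
        for i in x:
            for j in y:
                for k in z:
                    self.weights[(i, j, k)] += 1

    def query(self, x, y):
        # Sum weights over the stored triples; slower than a dense
        # array lookup, but touches only allocated entries.
        sums = [0] * self.n
        for i in x:
            for j in y:
                for k in range(self.n):
                    # .get avoids inserting zeros into the defaultdict
                    sums[k] += self.weights.get((i, j, k), 0)
        # Return the p highest-scoring bit positions as the recalled SDR
        return sorted(sorted(range(self.n), key=lambda k: -sums[k])[: self.p])
```

The query loop is the cost of the tradeoff: it performs p × p × n dictionary lookups per recall, whereas a dense array can slice contiguous memory.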

@clems4ever
Contributor Author

I ran a small benchmark: storing all sequences of three consecutive uint8 values, encoded as SDRs of size 512 with a population of 8. Each uint8 value overlaps with the next one.
The memory gap between this implementation and the performance-optimized one is huge: ~22k versus 128M, measured in the number of actually stored values.

Obviously, the more values there are in the associative memory, the more the consumed memory tends toward N^3 and the more it may impact performance. Again, it is a tradeoff.
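The spirit of that comparison can be reproduced by counting how many weight cells a sparse store actually allocates versus the dense n^3 array. The encoder below is a toy stand-in (an assumption, not the repository's SDR encoder), so the resulting counts are illustrative only:

```python
# Sketch: count allocated weight cells for all sequences of three
# consecutive uint8 values, sparse vs. dense.
from collections import defaultdict
from itertools import product

n, p = 512, 8
weights = defaultdict(int)

def encode(v, n=n, p=p):
    # Toy SDR encoder (hypothetical): p consecutive active bits starting
    # at a position derived from v, so adjacent uint8 values produce
    # overlapping SDRs, as described in the benchmark.
    start = (v * (n - p)) // 255
    return list(range(start, start + p))

# Store every sequence of three consecutive uint8 values
for v in range(254):
    x, y, z = encode(v), encode(v + 1), encode(v + 2)
    for i, j, k in product(x, y, z):
        weights[(i, j, k)] += 1

print(f"sparse cells: {len(weights)}, dense cells: {n**3}")
```

Each stored triple touches at most p^3 = 512 cells, and consecutive triples overlap heavily, so the sparse count stays far below the dense n^3 = 512^3 allocation.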

@clems4ever
Contributor Author

I tried scipy sparse matrices (coo, csr, csc, dok), but they are way slower than a defaultdict.
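A micro-benchmark in the same spirit (hypothetical, not the one used in this thread): increment the same set of cells in a `defaultdict` and, if scipy is available, in a `scipy.sparse.dok_matrix`, then compare wall-clock time. DOK is the only scipy sparse format that supports efficient incremental item assignment, which is why it is the fairest of the four to compare here:

```python
import time
from collections import defaultdict

n, repeats = 512, 20
pairs = [(i, (i * 7) % n) for i in range(n)]  # arbitrary cell pattern

# Time incremental updates on a plain defaultdict
dd = defaultdict(int)
t0 = time.perf_counter()
for _ in range(repeats):
    for i, j in pairs:
        dd[(i, j)] += 1
t_dict = time.perf_counter() - t0

# Time the same updates on a scipy DOK matrix, if scipy is installed
try:
    from scipy.sparse import dok_matrix
    dok = dok_matrix((n, n))
    t0 = time.perf_counter()
    for _ in range(repeats):
        for i, j in pairs:
            dok[i, j] += 1
    t_dok = time.perf_counter() - t0
    print(f"defaultdict: {t_dict:.5f}s  dok_matrix: {t_dok:.5f}s")
except ImportError:
    print(f"defaultdict: {t_dict:.5f}s  (scipy not installed)")
```

The per-element overhead of `dok_matrix.__setitem__` (index validation, dtype handling) typically dominates for this access pattern, which is consistent with the observation above.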

@PeterOvermann PeterOvermann merged commit ae9b61c into PeterOvermann:main Oct 16, 2023
@PeterOvermann
Owner

Thanks for your contributions!
One remark: about one third (1/e, to be precise) of the bits will be set when the memory is filled to its capacity of (n/p)^3 -- this is not sparse.
