make neo compatible with ORC since shallowCopy has already been removed for ORC #45

Open

ringabout (Contributor) wants to merge 1 commit into master

Conversation

@andreaferretti (Owner) commented

Hi @ringabout, thank you for your contribution. Unfortunately, this fails the tests, namely the suite "trivial operations should share storage". In other words, it seems that we now perform a copy where none is necessary: transposition, reshaping, and so on. Is there a way to make this work with ORC?
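
A minimal sketch of the failure mode (type and field names are hypothetical, simplified from neo's dense matrix): under refc, shallowCopy let a transpose alias the source's buffer, while a plain assignment deep-copies the seq.

```nim
type
  Matrix[A] = ref object   # hypothetical, simplified
    M, N: int
    data: seq[A]

proc t[A](m: Matrix[A]): Matrix[A] =
  ## Transpose as a view. Under refc this used
  ## `shallowCopy(result.data, m.data)` to alias m's buffer.
  new result
  result.M = m.N
  result.N = m.M
  result.data = m.data   # plain assignment deep-copies the seq,
                         # so the "share storage" suite now fails
```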

@mratsim commented Aug 5, 2022

I haven't checked the actual impact in Arraymancer yet, but mratsim/Arraymancer#562 apparently didn't fail tests, in particular the one involving the GRU layer, which relies on shallow copies: https://github.com/mratsim/Arraymancer/blob/a7c5476/src/arraymancer/nn_primitives/nnp_gru.nim#L77-L79.

If this is true, you can use the {.shallow.} pragma, though currently the ORC CI seems to have other issues (cc @Vindaar): mratsim/Arraymancer#563

https://github.com/mratsim/Arraymancer/blob/fecd375/src/arraymancer/laser/tensor/datatypes.nim#L31-L45
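
A hedged sketch of what the {.shallow.} workaround looks like (a paraphrase, not the exact code behind the link above; the pragma is a refc-era feature and has no effect under ARC/ORC, which is why the ORC CI issue matters):

```nim
type
  CpuStorageObj[T] {.shallow.} = object
    # {.shallow.} allows the refc compiler to copy this object
    # shallowly, so assignments do not deep-copy raw_buffer
    raw_buffer: seq[T]
  CpuStorage[T] = ref CpuStorageObj[T]
```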

@Vindaar commented Aug 5, 2022

> I haven't checked the actual impact in Arraymancer yet, but mratsim/Arraymancer#562 apparently didn't fail tests, in particular the one involving the GRU layer, which relies on shallow copies: https://github.com/mratsim/Arraymancer/blob/a7c5476/src/arraymancer/nn_primitives/nnp_gru.nim#L77-L79.
>
> If this is true, you can use the {.shallow.} pragma, though currently the ORC CI seems to have other issues (cc @Vindaar): mratsim/Arraymancer#563
>
> https://github.com/mratsim/Arraymancer/blob/fecd375/src/arraymancer/laser/tensor/datatypes.nim#L31-L45

I could be mistaken, but isn't the important difference here that the GRU example only uses tensors that match our KnownSupportsCopyMem concept (as they are float) and thus fall under the manual memory allocation branch? For those I wouldn't expect any change in behavior.

The situation here in neo seems more comparable to our fallback "backend" that uses seq storage, and I haven't checked whether the new code actually keeps reference semantics in that case. See the sketch below.
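
For context, a rough paraphrase (field names approximate) of the two storage branches behind the linked datatypes.nim:

```nim
import std/typetraits

type
  KnownSupportsCopyMem = concept x, type T
    supportsCopyMem(T)

  CpuStorage[T] = ref object
    when T is KnownSupportsCopyMem:
      # Trivial types such as float take the manual-allocation
      # branch: views alias this buffer regardless of the GC.
      raw_buffer: ptr UncheckedArray[T]
      memalloc: pointer
      isMemOwner: bool
    else:
      # Fallback backend: GC-managed seq storage, where keeping
      # reference semantics under ORC is exactly the open question.
      raw_buffer: seq[T]
```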

@mratsim commented Aug 5, 2022

> I could be mistaken, but isn't the important difference here that the GRU example only uses tensors that match our KnownSupportsCopyMem concept (as they are float) and thus fall under the manual memory allocation branch? For those I wouldn't expect any change in behavior.

Right, I forgot: when I wrote the GRU code we still had a seq, and I had to explicitly ensure that {.shallow.} worked as expected.
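
For reference, the refc-era aliasing in question (shallowCopy no longer exists under ARC/ORC; compile with --mm:refc):

```nim
var a = @[1.0, 2.0, 3.0]
var b: seq[float]
shallowCopy(b, a)    # b aliases a's buffer instead of copying it
b[0] = 42.0
assert a[0] == 42.0  # the mutation is visible through both names
```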
