Tensor extensions #4260
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #4260      +/-   ##
==========================================
- Coverage   74.48%   74.48%   -0.01%
==========================================
  Files         877      878       +1
  Lines      153663   153688      +25
  Branches    16828    16830       +2
==========================================
+ Hits       114457   114469      +12
- Misses      34474    34479       +5
- Partials     4732     4740       +8
After this PR is merged, I'll add performant support for conversion between types instead of throwing, if needed.
Just 2 comments. After those are addressed I believe this can be merged. Nice work.
Compare: 256f9e9 to ecbfc65
Output tensors from the graph are copied from unmanaged to managed memory, and a new buffer is allocated on every call. Performance will be better if we reuse the buffer when possible.
Improve performance by replacing the ToArray method in the Tensor class with extension methods: ToScalar, ToSpan, and ToArray.
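A minimal sketch of how such extension methods could look. This is an illustration, not the PR's actual code: the `Tensor` shape here (a `DataPointer` to the unmanaged buffer and a `Length` element count) is assumed, and the real signatures in the PR may differ. The key idea is that `ToSpan` wraps the existing unmanaged buffer without copying, `ToScalar` reads one element directly, and only `ToArray` pays for a managed-heap copy.

```csharp
using System;

public static class TensorExtensions
{
    // Hypothetical sketch; assumes Tensor exposes DataPointer (IntPtr to
    // the unmanaged buffer) and Length (number of elements).

    // Reads a single element directly from the unmanaged buffer,
    // without allocating a managed array.
    public static unsafe T ToScalar<T>(this Tensor tensor) where T : unmanaged
    {
        return *(T*)tensor.DataPointer;
    }

    // Wraps the unmanaged buffer as a Span<T>. No copy is made, so the
    // span is only valid while the tensor (and its buffer) stays alive.
    public static unsafe Span<T> ToSpan<T>(this Tensor tensor) where T : unmanaged
    {
        return new Span<T>((void*)tensor.DataPointer, tensor.Length);
    }

    // Copies the buffer into a managed array when the caller needs to
    // own the data beyond the tensor's lifetime.
    public static T[] ToArray<T>(this Tensor tensor) where T : unmanaged
    {
        return tensor.ToSpan<T>().ToArray();
    }
}
```

With this split, hot paths that only inspect graph outputs can use `ToSpan` and avoid the per-call allocation and copy described above, while `ToArray` remains available for callers that need a detached managed copy.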