Add scalar tensor operations #3127
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #3127      +/-   ##
==========================================
- Coverage   81.13%   81.11%   -0.02%
==========================================
  Files         821      821
  Lines      117950   117962      +12
==========================================
- Hits        95696    95688       -8
- Misses      22254    22274      +20

☔ View full report in Codecov by Sentry.
That's going to be a lot nicer to write out!
One minor comment for the scalar-tensor div, otherwise LGTM
```rust
let data = TensorData::new(alloc::vec![self], [1]);
let numerator = Tensor::<B, D, K>::from_data(data, &tensor.device()).unsqueeze();
Tensor::div(numerator, tensor)
```
What about using recip and scalar multiply instead?
tensor.recip().mul_scalar(self)
recip() is only for Float tensors, but that's probably fine! I've changed it to use that now and implemented it for f32 and f64.
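
For illustration, here is a minimal sketch of what the recip-based scalar / tensor impl could look like inside burn-tensor. The generic bounds, module location, and exact shape are assumptions, not the code merged in this PR:

```rust
use core::ops::Div;

// Sketch only: an impl like this must live in burn-tensor itself, where
// `Backend` and `Tensor` are local types; elsewhere the orphan rule rejects it.
impl<B: Backend, const D: usize> Div<Tensor<B, D>> for f32 {
    type Output = Tensor<B, D>;

    fn div(self, tensor: Tensor<B, D>) -> Self::Output {
        // scalar / tensor expressed as tensor.recip() * scalar, which only
        // needs ops that already exist on float tensors.
        tensor.recip().mul_scalar(self)
    }
}
```

An analogous impl for f64 would cover the other float primitive.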
Yeah I think it makes more sense for fractions anyway. We could always add int support back if warranted but it could be ambiguous.
Probably we should mention this in https://burn.dev/burn-book/building-blocks/tensor.html
I've added a few lines. I didn't add notes that addition and multiplication are symmetric, as it read a bit redundant, but lmk!
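
For context, the left-hand-scalar forms that the new book lines describe read roughly like this in user code. This is a hedged sketch using the NdArray backend; the actual examples in the book may differ, and it assumes the merged impls cover plain f64 literals for add, sub, mul, and div:

```rust
use burn::backend::NdArray; // requires the `ndarray` feature
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let t = Tensor::<NdArray, 1>::from_floats([1.0, 2.0, 4.0], &device);

    // Scalar on the left-hand side, alongside the existing tensor-scalar forms.
    let a = 1.0 - t.clone(); // elementwise 1.0 - t
    let b = 2.0 * t.clone(); // same result as t * 2.0
    let c = 1.0 / t;         // backed by recip() and a scalar multiply

    println!("{a}\n{b}\n{c}");
}
```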
Thanks for filling out the docs as well!
While tensor-scalar operations generally already existed, scalar-tensor ones didn't.
This is especially annoying for operations like `1.0 / tensor` or a common case like `(1.0 - tensor)`.
Due to Rust's orphan rules these sadly can't be implemented for every `E: ElementConversion`, but manually implementing them for primitives seems to work fine!
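
To make the orphan-rule point concrete, here is a hedged sketch (illustrative names and bounds, not the PR's actual code, and restricted to float tensors to keep it small) of the blanket impl the compiler rejects versus the per-primitive form that works:

```rust
use core::ops::Sub;

// Rejected: `E` is an uncovered, foreign type parameter, so this blanket impl
// could conflict with impls in downstream crates (orphan rule, E0210).
//
// impl<E: ElementConversion, B: Backend, const D: usize>
//     Sub<Tensor<B, D>> for E { /* ... */ }

// Accepted when written inside burn-tensor: a concrete primitive on the left
// and the local `Tensor` type in the trait parameters, repeated per primitive.
impl<B: Backend, const D: usize> Sub<Tensor<B, D>> for f64 {
    type Output = Tensor<B, D>;

    fn sub(self, rhs: Tensor<B, D>) -> Self::Output {
        // (scalar - tensor) built from existing tensor-scalar ops.
        rhs.neg().add_scalar(self)
    }
}
```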