Add scalar tensor operations #3127


Merged: 3 commits into tracel-ai:main on May 1, 2025

Conversation

ArthurBrussee (Contributor)

While tensor-scalar operations generally already existed, scalar-tensor operations didn't.

This is especially annoying for operations like `1.0 / tensor` or the common case `(1.0 - tensor)`.

Due to Rust's orphan rules these sadly can't be implemented for every `E: ElementConversion`, but manually implementing them for the primitive types seems to work fine!
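Roughly what the per-primitive workaround looks like, as a minimal sketch: it assumes the code lives inside the burn-tensor crate (where `Tensor` is a local type), and the names follow burn's public API, but this is not the exact PR code.

```rust
use core::ops::Sub;

use crate::backend::Backend;
use crate::Tensor;

// A blanket impl over every scalar type is rejected by the orphan rule,
// because `Sub` is a foreign (std) trait and `E` is an uncovered type
// parameter:
//
//     impl<E: ElementConversion, B: Backend, const D: usize>
//         Sub<Tensor<B, D>> for E { ... }
//
// Implementing it per primitive type sidesteps the problem:
impl<B: Backend, const D: usize> Sub<Tensor<B, D>> for f32 {
    type Output = Tensor<B, D>;

    fn sub(self, rhs: Tensor<B, D>) -> Self::Output {
        // scalar - tensor, expressed with the existing tensor-scalar ops
        rhs.neg().add_scalar(self)
    }
}
```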


codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 47.82609% with 12 lines in your changes missing coverage. Please review.

Project coverage is 81.11%. Comparing base (c2ffe16) to head (7456875).
Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| crates/burn-tensor/src/tensor/api/numeric.rs | 47.82% | 12 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3127      +/-   ##
==========================================
- Coverage   81.13%   81.11%   -0.02%     
==========================================
  Files         821      821              
  Lines      117950   117962      +12     
==========================================
- Hits        95696    95688       -8     
- Misses      22254    22274      +20     


laggui (Member) left a comment:

That's going to be a lot nicer to write out!

One minor comment for the scalar-tensor div, otherwise LGTM

Comment on lines 4492 to 4494:

```rust
let data = TensorData::new(alloc::vec![self], [1]);
let numerator = Tensor::<B, D, K>::from_data(data, &tensor.device()).unsqueeze();
Tensor::div(numerator, tensor)
```
laggui (Member):

What about using recip and scalar multiply instead?

```rust
tensor.recip().mul_scalar(self)
```

ArthurBrussee (Contributor, Author):

recip() is only for Float tensors, but that's probably fine! I've changed it to use that now and implemented it for f32 and f64.
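For reference, a minimal sketch of what the revised scalar / tensor path could look like with that suggestion (assumed shape only, not the exact PR code; again assuming it sits inside burn-tensor):

```rust
use core::ops::Div;

use crate::backend::Backend;
use crate::Tensor;

// recip() only exists on Float tensors, which is why this operator is
// limited to the float primitives (f32/f64).
impl<B: Backend, const D: usize> Div<Tensor<B, D>> for f32 {
    type Output = Tensor<B, D>;

    fn div(self, rhs: Tensor<B, D>) -> Self::Output {
        // self / tensor == tensor.recip() * self
        rhs.recip().mul_scalar(self)
    }
}
```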

laggui (Member):

Yeah I think it makes more sense for fractions anyway. We could always add int support back if warranted but it could be ambiguous.

antimora (Collaborator):

Probably we should mention this in https://burn.dev/burn-book/building-blocks/tensor.html

ArthurBrussee (Contributor, Author) commented Apr 30, 2025:

> Probably we should mention this in https://burn.dev/burn-book/building-blocks/tensor.html

I've added a few lines. I didn't add a note that addition and multiplication are now symmetric, since it read a bit redundantly, but lmk!
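For illustration, a hedged usage sketch of what the new operators enable; the backend choice and the function are made up for the example, and any burn backend should behave the same way:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn example(t: Tensor<NdArray, 2>) -> Tensor<NdArray, 2> {
    let a = 1.0 - t.clone(); // equivalent to t.clone().neg().add_scalar(1.0)
    let b = 2.0 / t;         // equivalent to t.recip().mul_scalar(2.0)
    a + b
}
```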

laggui (Member) left a comment:

Thanks for filling out the docs as well!

laggui merged commit f6e3622 into tracel-ai:main on May 1, 2025. 11 checks passed.