Add burn::linalg::{vector_norm,l2_norm} #3131
Conversation
Codecov Report — Attention: Patch coverage is …

```
@@ Coverage Diff @@
##             main    #3131      +/-   ##
==========================================
+ Coverage   81.39%   81.47%   +0.08%
==========================================
  Files         821      823       +2
  Lines      118058   118301     +243
==========================================
+ Hits        96088    96385     +297
+ Misses      21970    21916      -54
```
That's a nice addition! Good start to the linalg module 🙂

A couple minor comments below.

Also, not sure about all the redefined functions for typical norms like `l0_norm`, `l1_norm`, etc. We could probably remove those and simply add the definitions to the docstring 🤔
I think there's a lot of value in having common functions defined explicitly, even if they're defined in terms of other functions. I could see renaming the +inf/-inf norm functions if you have better names. We could also fix the lack of …

But if you think about these in terms of "how readable are callers", a lot of literature is written w.r.t. the l1 and l2 norms, and I think at least those should have their own alias functions.
> But if you think about these in terms of "how readable are callers", a lot of literature is written w.r.t. the l1 and l2 norms, and I think at least those should have their own alias functions.
For readability and explicitness, I agree. But the negative norm names are a bit awkward indeed, and so is the match on the f64.

What do you think about an enum? Could be something along the lines of:
```rust
pub enum Norm {
    L0,
    L1,
    L2,
    LInf,
    LNegInf,
    Lp(f64),
}
```
For ease of use, we could accept `p: N` where `N: Into<Norm>`. So one could specify `Norm::L0` and other explicit norms. But also, if we have `From` implementations, we could have something more seamless. Just a quick example:
```rust
impl From<i32> for Norm {
    fn from(value: i32) -> Self {
        match value {
            0 => Norm::L0,
            1 => Norm::L1,
            2 => Norm::L2,
            v => Norm::Lp(v as f64),
        }
    }
}
```
Similar implementations could probably be added for `i64`, `f32` and `f64` for better type inference.
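For instance, the `f64` case might look something like this (just a sketch; mapping the IEEE infinities onto the named variants is an assumption, not settled API):

```rust
impl From<f64> for Norm {
    fn from(value: f64) -> Self {
        // Use guards to compare floats explicitly instead of float literal patterns.
        match value {
            v if v == f64::INFINITY => Norm::LInf,
            v if v == f64::NEG_INFINITY => Norm::LNegInf,
            v if v == 0.0 => Norm::L0,
            v if v == 1.0 => Norm::L1,
            v if v == 2.0 => Norm::L2,
            v => Norm::Lp(v),
        }
    }
}
```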
Lmk what you think 🙂

Otherwise, the actual implementation LGTM; just a matter of defining the correct interface.
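For concreteness, a hypothetical sketch of how the dispatch could read at the call site, building on the `Norm` enum above. The inline reductions are illustrative only, assuming burn's numeric tensor ops; the real API would factor these into named functions:

```rust
use burn::tensor::{backend::Backend, Tensor};

// Sketch: callers can write `vector_norm(x, 2, dim)` or `vector_norm(x, Norm::LInf, dim)`.
pub fn vector_norm<B: Backend, const D: usize>(
    x: Tensor<B, D>,
    norm: impl Into<Norm>,
    dim: usize,
) -> Tensor<B, D> {
    match norm.into() {
        // Count of non-zero entries along `dim`.
        Norm::L0 => x.not_equal_elem(0.0).int().float().sum_dim(dim),
        Norm::L1 => x.abs().sum_dim(dim),
        Norm::L2 => (x.clone() * x).sum_dim(dim).sqrt(),
        Norm::LInf => x.abs().max_dim(dim),
        Norm::LNegInf => x.abs().min_dim(dim),
        Norm::Lp(p) => x.abs().powf_scalar(p).sum_dim(dim).powf_scalar(1.0 / p),
    }
}
```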
Drive-by comment: is it worth specializing L2 as `diff * diff` instead of a `powf_scalar`? Just since L2 is so common everywhere in ML, might be worth benching.
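For reference, the two candidates being compared would be along these lines (a sketch, assuming burn's float tensor ops):

```rust
use burn::tensor::{backend::Backend, Tensor};

// Generic Lp path at p = 2: |x|^p summed along `dim`, then the p-th root.
fn l2_generic<B: Backend, const D: usize>(x: Tensor<B, D>, dim: usize) -> Tensor<B, D> {
    x.abs().powf_scalar(2.0).sum_dim(dim).powf_scalar(0.5)
}

// Specialized L2 path: square via multiplication, then a plain sqrt.
fn l2_specialized<B: Backend, const D: usize>(x: Tensor<B, D>, dim: usize) -> Tensor<B, D> {
    (x.clone() * x).sum_dim(dim).sqrt()
}
```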
@ArthurBrussee I have specialized it (for the …

@laggui I ran with your idea and updated the PR. I would like someone to look at this and suggest what numeric traits, if any, we should bind the methods to, and how to express those traits. It seems like some of these (max abs, min abs, count) don't need float, but l2 and lp both probably need float. (Also, it's ... weird that `.powf_scalar()` doesn't require the Float trait, but `.sqrt()` does ...)
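To illustrate the distinction (a sketch, assuming burn's `Numeric` kind trait; not necessarily the bounds that landed): the max-abs reduction can stay generic over Int and Float kinds, while anything taking a root needs a float tensor.

```rust
use burn::tensor::{backend::Backend, Element, Numeric, Tensor};

// Works for Int and Float kinds alike: abs + max need only numeric ops.
pub fn max_abs_norm<B: Backend, const D: usize, K>(
    x: Tensor<B, D, K>,
    dim: usize,
) -> Tensor<B, D, K>
where
    K: Numeric<B>,
    K::Elem: Element,
{
    x.abs().max_dim(dim)
}

// Needs a float tensor: `sqrt` is only defined for the Float kind.
pub fn l2_norm<B: Backend, const D: usize>(x: Tensor<B, D>, dim: usize) -> Tensor<B, D> {
    (x.clone() * x).sum_dim(dim).sqrt()
}
```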
Please re-review. I think I've addressed the type bounds to the best of my ability.

Discovered and worked around #3139.

Waiting on #3140.

Converted into a draft until the above bug fix goes in.
Should this be moved to …

I briefly had a thought about this over the weekend. Most of the "tensor ops" are just being thrown in … Right now …
Force-pushed from 286e080 to 94b8841.
Done!
Just a minor comment regarding naming for the max/min norms to make sure.
Otherwise should be good to go! Thanks for going through the rounds in this PR.
```rust
/// # Returns
///
/// The L:INFINITY norm of the input tensor.
pub fn max_abs_norm<B: Backend, const D: usize, K>(
```
Much better naming than `l_inf_norm` 😅

Is it called the max abs norm or just the max norm, typically? My linear algebra terminology is a little dusty.
Not really, it really is called the l-infinity norm: https://mathworld.wolfram.com/L-Infinity-Norm.html
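For reference, it is the limiting case of the general $p$-norm (and the min-abs counterpart is the $p \to -\infty$ limit):

$$\|x\|_p = \Big(\sum_i |x_i|^p\Big)^{1/p}, \qquad \|x\|_\infty = \lim_{p \to \infty} \|x\|_p = \max_i |x_i|, \qquad \|x\|_{-\infty} = \min_i |x_i|.$$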
Yes, of course, but there are often other names used interchangeably (e.g., Euclidean norm or l2 norm). Max norm seems to also be used, but I'm not sure if it is that common. Anyway, either `max_abs_norm`, `max_norm` or `infinity_norm` (without the `L` prefix) should be suitable.
```rust
linalg::max_abs_norm(x.clone(), 0)
    .into_data()
    .assert_eq(&TestTensor::<2>::from([[3., 4.]]).into_data(), true);
```
Technically this would test pretty much the same code path, but I have no strong opposition to the additional test here.
Yes; it's verifying that the dispatch in `vector_norm` works.

I missed the previous discussion about the ops; I'll try to adjust.
Will adjust.
Pushed a change to jiggle the stuck test Action.
Force-pushed from b0e765e to 8622f98.
* Add burn::linalg::{vector_norm,l2_norm}
* fmt
* doc
* freshen up inf
* l1 norm
* aliases
* docs
* close for l2
* approx_eq
* long
* improve tolerance
* review
* vector_normalize
* Refactor for review
* typing; make .sqrt() work for l2_norm
* eps type conversion
* test integer tensors, workaround new bug: tracel-ai#3139
* generalize
* remove empty where clause
* Remove workaround for tracel-ai#3119
* Update vector_norm.rs (push a change to jiggle stuck test Action)
Pull Request Template

Checklist

The `run-checks all` script has been executed.

Changes

Added `burn::linalg::{vector_norm,l2_norm}`.

Testing

Testgen tests.
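For a quick sense of the addition, a hypothetical usage sketch along the lines of the tests above (the exact signatures and the `Into<Norm>` conversion follow the discussion in this thread and are assumptions, not guaranteed API):

```rust
use burn::linalg;
use burn::tensor::{backend::Backend, Tensor};

// Sketch: compute an L2 norm and a dispatched L1 norm along dim 0.
fn example<B: Backend>(x: Tensor<B, 2>) {
    let l2 = linalg::l2_norm(x.clone(), 0);
    let l1 = linalg::vector_norm(x, 1, 0); // `1` converts via Into<Norm>
    println!("{l2} {l1}");
}
```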