separating onnx parsing from burn-import #1921
Conversation
Codecov Report

Attention: Patch coverage is

```
@@           Coverage Diff            @@
##             main    #1921    +/-  ##
=======================================
- Coverage   84.87%   84.86%   -0.01%
=======================================
  Files         797      798       +1
  Lines       95082    95105      +23
=======================================
+ Hits        80697    80711      +14
- Misses      14385    14394       +9
```
Looks good. A minor comment to review/clean up the dependencies, but I think it's almost ready to be approved.

Since this is a brand new crate, we also need to update publish.yml under .github.

Also, can you please double-check that adding-a-new-operation-to-burn.md is up to date? Not sure if we have any references to the onnx portion.

@nathanielsimard @laggui can you sign off on the name?
I rewrote it, but I'm having trouble figuring out the best way to word it; I'd definitely appreciate a proofreader on the changed section. Also, I added a readme, though it's currently pretty barebones; I'm still figuring out what information it should have for other implementors.
Totally forgot about this PR. I'll hold off on merging any new ONNX ops if we're going to merge this soon; otherwise you'll always have to fix the conflicts 😅
Otherwise, the refactor LGTM.
Approving in advance. There are a couple of minor text fixes, but it looks good overall.
Merged! Thanks for your work. I hope this crate will be useful to other projects as well.
* Implement 3D and transposed 3D convolutions.
* Merge changes from onnx-ir #1921 pr

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
* separating onnx parsing from burn-import
* ran clippy and cargo-fmt
* removed unused deps from onnx-ir
* fixed clippy warnings that were causing run-checks to fail
* removed dead code
* removed unused dependencies from burn-import
* updated contributor-book, updated publish.yml, added readme
* update cargo lock
* formatted md document with prettier, rephrased sentence
* missed the errors with reduce_prod_conversion during merge
* formatted onnx-to-burn-conversion-tool.md, forgot to save
Pull Request Template
Checklist

- `run-checks all` script has been executed.

Related Issues/PRs

#1812
Changes
Separated out all the ONNX-specific code in burn-import into a new crate, onnx-ir (which we can rename to whatever), and made some necessary changes to burn-import since OnnxGraph and other IR structs are now external. I also aliased a few types to better distinguish between ONNX and Burn types. Some code had to be duplicated (flatten_config) or made public for the sake of use by both crates (constant_conversion).
Testing
So far I've tested with onnx-test and burn-import; all tests seem to be passing, and the examples seem okay.
I think we should rework how the onnx tests are being used so that the models and the tests themselves could potentially be reused, though I think we can do that separately from this PR.
On the Python side that would mean serializing graph inputs and outputs; on the Burn side, I'm guessing some macros for op types (such as `binary_op_test!`), though I'm open to suggestions if you guys want to handle it another way.