[BOO] Remove force_single_dispatch #1113
Conversation
Force-pushed from 1f5fd69 to 526e900
Keeping this as a draft until IREE is updated.
Force-pushed from 526e900 to 1cb0828
How do we know if this is working correctly? Can we add some tests to verify this doesn't regress, etc?
Can we instead plumb this through as a flag (defaulted to False if needed)? There are cases for layer norm that currently fail to distribute without forcing a single dispatch, in particular when intermediate results are forwarded from forward to reverse mode, since those results don't participate in the "main" reduction. See https://discord.com/channels/689900678990135345/689932666505920596/1412414159248691210 for more context.
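A minimal sketch of that suggestion, under assumptions: the function name `build_func_attrs` and the attribute key are placeholders for illustration, not the real iree-turbine/BOO API. The point is just that the knob stays keyword-only and off by default, so existing call sites are unaffected and only the cases that need it opt in.

```python
# Hypothetical sketch: plumb force_single_dispatch through as an explicit,
# keyword-only flag that defaults to False so nothing changes unless asked.
def build_func_attrs(*, force_single_dispatch: bool = False) -> dict:
    attrs = {}
    if force_single_dispatch:
        # Attribute key is illustrative; the real name lives in the BOO lowering.
        attrs["force_single_dispatch"] = True
    return attrs

# Default call sites are untouched:
assert build_func_attrs() == {}
# Layer-norm-style cases that still need it can opt in explicitly:
assert build_func_attrs(force_single_dispatch=True) == {"force_single_dispatch": True}
```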
Yeah, I just noticed this too. We could maybe put it into the …
We will have to fix that forward. Could you file an issue for what you are seeing?
Why the rush? Can't we just treat this as an optimization option?
Force-pushed from 1cb0828 to 2aae8c2
We can keep it as an optimization option, but it would be better to keep it off by default. Defaulting it to on is breaking a lot of cases. The issue you pointed to is triaged, and I think there is already a fix for it.
@ftynse do you have any ideas or preferences on how this should be exposed as an optimization option?
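One possible shape for such an option, sketched under assumptions: `BooOptions` and `func_attrs_for` are hypothetical names used only to illustrate surfacing the knob through an options struct rather than hard-coding the behavior.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: an options struct keeps the knob explicit and off by
# default, so the single-dispatch attribute is attached only on request.
@dataclass
class BooOptions:
    force_single_dispatch: bool = False

def func_attrs_for(options: Optional[BooOptions] = None) -> dict:
    options = options or BooOptions()
    return {"force_single_dispatch": True} if options.force_single_dispatch else {}

# Usage: the default keeps upstream fusion behavior; callers opt in per kernel.
assert func_attrs_for() == {}
assert func_attrs_for(BooOptions(force_single_dispatch=True)) == {"force_single_dispatch": True}
```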
Signed-off-by: Ian Wood <ianwood@u.northwestern.edu>
Signed-off-by: zjgarvey <zjgarvey@gmail.com>
Force-pushed from ab12df6 to e396140
Yeah, the change I made should work as a temporary workaround until something like #1134 lands (where the fusion spec for an op already determines whether it uses the attr).
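A sketch of the direction hinted at by #1134, under assumptions: `FusionSpec` and its field names are invented for illustration. The idea is that a per-op fusion spec, rather than a global flag, decides whether the attribute is applied.

```python
from dataclasses import dataclass

# Hypothetical per-op fusion spec; only ops whose reduction pattern needs it
# request the single-dispatch attribute.
@dataclass
class FusionSpec:
    op_name: str
    single_dispatch: bool = False

def attrs_from_spec(spec: FusionSpec) -> dict:
    return {"force_single_dispatch": True} if spec.single_dispatch else {}

# e.g. layer_norm keeps the attribute, while conv relies on upstream pad fusion.
assert attrs_from_spec(FusionSpec("layer_norm", single_dispatch=True)) == {"force_single_dispatch": True}
assert attrs_from_spec(FusionSpec("conv_2d")) == {}
```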
LGTM
Needs iree-org/iree@35a872c to ensure all conv kernels get fused into a single dispatch. Specifically, the pads need to be fused into the dispatches.