Implementation of MPI SUMMA matrix mul #160
Conversation
@astroC86 very well done, this code looks amazingly clear and (I hope) as efficient as SUMMA promises to be 😄

I think at this point we are left with the following:
- add chaining of `MPISummaMatrixMult` with another operator, like we did for `MPIMatrixMult`
- add unittests
- refactor `MatrixMult.py` to contain two private classes, `_MPIBlockMatrixMult` and `_MPISummaMatrixMult`, and a public class `MPIMatrixMult` which takes all the inputs required by both private classes plus a `kind` parameter to choose between `block` and `summa` - see https://github.com/PyLops/pylops/blob/dev/pylops/signalprocessing/fft2d.py for a similar pattern
```python
for k in range(self._P_prime):
    Atemp = self.A.copy() if self._col_id == k else np.empty_like(self.A)
    Xtemp = x_block.copy() if self._row_id == k else np.empty_like(x_block)
    self._row_comm.Bcast(Atemp, root=k)
```
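For reference, the data movement this loop performs can be checked against a single-process NumPy simulation of SUMMA on a `P x P` grid: at step `k`, grid column `k` broadcasts its `A` block along each row and grid row `k` broadcasts its `X` block along each column, and every rank accumulates a local partial product. The sizes and variable names below are illustrative only:

```python
import numpy as np

P = 2                   # simulated P x P process grid
m, kdim, n = 4, 6, 4    # global sizes, divisible by P
rng = np.random.default_rng(0)
A = rng.standard_normal((m, kdim))
X = rng.standard_normal((kdim, n))

# Rank (i, j) owns block (i, j) of A and of X
Ab = [[A[i * m // P:(i + 1) * m // P, j * kdim // P:(j + 1) * kdim // P]
       for j in range(P)] for i in range(P)]
Xb = [[X[i * kdim // P:(i + 1) * kdim // P, j * n // P:(j + 1) * n // P]
       for j in range(P)] for i in range(P)]

C = [[np.zeros((m // P, n // P)) for _ in range(P)] for _ in range(P)]
for k in range(P):
    for i in range(P):
        for j in range(P):
            Atemp = Ab[i][k]   # what rank (i, j) would receive from column k
            Xtemp = Xb[k][j]   # what rank (i, j) would receive from row k
            C[i][j] += Atemp @ Xtemp

# Reassembling the local blocks recovers the global product
assert np.allclose(np.block(C), A @ X)
```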
question: do you think you could use `bcast` instead of `Bcast`?

The reason I am asking is that we currently always use lower-case methods, which, although sub-optimal (mpi4py/mpi4py#657), have the advantage of working with CuPy arrays out of the box, whilst we saw some issues when @tharittk tried to use upper-case methods (although it was Send/Recv)... my suggestion would be: let's stick to what we currently do, and soon (hopefully during @tharittk's second part of GSoC) we will switch all mpi4py calls involving NumPy/CuPy arrays to buffer-based (aka upper-case) methods.