Commit 9109b2f

committed
Add MDD
1 parent 2bf1b7c commit 9109b2f

File tree

2 files changed: +24 -7 lines changed


joss/paper.bib

Lines changed: 13 additions & 0 deletions
@@ -115,3 +115,16 @@ @article{Nemeth:1999
   year={1999},
   publisher={Society of Exploration Geophysicists}
 }
+
+@article{Ravasi:2022,
+  title={Stochastic Multi-Dimensional Deconvolution},
+  volume={60},
+  ISSN={1558-0644},
+  url={http://dx.doi.org/10.1109/TGRS.2022.3179626},
+  DOI={10.1109/tgrs.2022.3179626},
+  journal={IEEE Transactions on Geoscience and Remote Sensing},
+  publisher={Institute of Electrical and Electronics Engineers (IEEE)},
+  author={Ravasi, Matteo and Selvan, Tamin and Luiken, Nick},
+  year={2022},
+  pages={1–14}
+}

joss/paper.md

Lines changed: 11 additions & 7 deletions
@@ -31,7 +31,7 @@ Large-scale linear operations and inverse problems are fundamental to numerous a
 processing, geophysics, signal processing, and remote sensing. This paper presents PyLops-MPI, an extension of PyLops
 designed for distributed and parallel processing of large-scale challenges. PyLops-MPI facilitates forward and adjoint
 matrix-vector products, as well as inversion solvers, in a distributed framework. By using the Message Passing
-Interface (MPI), this framework effectively utilizes the computational power of multiple nodes or processors, enabling
+Interface (MPI), this framework effectively utilizes the computational power of multiple nodes or ranks, enabling
 efficient solutions to large and complex inversion tasks in a parallelized manner.
 
 # Statement of need
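The paragraph changed in this hunk describes distributed forward and adjoint matrix-vector products across MPI ranks. A minimal sketch of that pattern, simulating three ranks in a single process with plain NumPy (the `A_blocks` and chunk lists are illustrative stand-ins, not PyLops-MPI API):

```python
import numpy as np

# Simulate a block-diagonal operator distributed over 3 "ranks":
# each rank owns one block A_i, one model chunk x_i, one data chunk y_i.
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((5, 4)) for _ in range(3)]
x_chunks = [rng.standard_normal(4) for _ in range(3)]

# Forward pass: purely local matrix-vector products, no communication.
y_chunks = [A @ x for A, x in zip(A_blocks, x_chunks)]

# Adjoint pass: also local for a block-diagonal operator.
xadj_chunks = [A.T @ y for A, y in zip(A_blocks, y_chunks)]

# Dot-test <A x, A x> == <x, A^T A x>, accumulated over chunks, which is
# exactly what an MPI allreduce over the local dot products would compute.
lhs = sum(float(np.dot(y, y)) for y in y_chunks)
rhs = sum(float(np.dot(x, xa)) for x, xa in zip(x_chunks, xadj_chunks))
assert np.isclose(lhs, rhs)
```

In the real library the chunk lists would live on separate MPI ranks and only the scalar dot products would be reduced, which is why this layout keeps solver communication minimal.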
@@ -46,22 +46,21 @@ When addressing distributed inverse problems, we identify three distinct use cas
 flexible, scalable framework:
 
 - **Fully Distributed Models and Data**: Both the model and data are distributed across nodes, with minimal
-communication during the modeling process. Communication occurs mainly during the solver stage when dot 
+communication during the modeling process. Communication occurs mainly during the solver stage when dot
 products or regularization, such as the Laplacian, are applied. In this scenario where each node
 handles a portion of the model and data, and communication only happens between the model and data at each node.
 
 - **Distributed Data, Model Available on All Nodes**: In this case, data is distributed across nodes while the model is
-available at all nodes. Communication is required during the adjoint pass when models produced by each node need 
+available at all nodes. Communication is required during the adjoint pass when models produced by each node need
 to be summed, and in the solver when performing dot products on the data.
 
 - **Model and Data Available on All Nodes or Master**: Here, communication is confined to the operator, with the master
 node distributing parts of the model or data to workers. The workers then perform computations without requiring
 communication in the solver.
 
-Recent updates to mpi4py (version 3.0 and above) [@Dalcin:2021] have simplified its integration, enabling more efficient data
-communication between nodes and processes.
-Some projects in the Python ecosystem, such as mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax],
-utilize MPI to extend its capabilities,
+Recent updates to mpi4py (version 3.0 and above) [@Dalcin:2021] have simplified its integration, enabling more efficient
+data communication between nodes and processes. Some projects in the Python ecosystem, such as
+mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax], utilize MPI to extend its capabilities,
 improving the efficiency and scalability of distributed computing.
 
 PyLops-MPI is built on top of PyLops[@Ravasi:2020] and utilizes mpi4py to enable an efficient framework to deal with
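The second use case in this hunk (distributed data, replicated model) hinges on summing per-rank adjoint contributions. A single-process sketch of that sum, with `np.sum` standing in for an `MPI.Allreduce` over ranks (the row-blocks `A_rows` are illustrative, not PyLops-MPI API):

```python
import numpy as np

# Use case 2: data distributed over ranks, model replicated on every rank.
# Each "rank" i owns a row-block A_i of a tall operator and its data chunk
# y_i = A_i @ x; the adjoint x_adj = sum_i A_i.T @ y_i needs a global sum.
rng = np.random.default_rng(1)
nranks, n = 4, 6
A_rows = [rng.standard_normal((3, n)) for _ in range(nranks)]
x = rng.standard_normal(n)                 # same model held on all ranks

y_chunks = [A @ x for A in A_rows]         # local forward, no communication

# Local adjoint contributions, then the equivalent of MPI.Allreduce(SUM):
local_adj = [A.T @ y for A, y in zip(A_rows, y_chunks)]
x_adj = np.sum(local_adj, axis=0)          # stands in for the allreduce

# Consistency check against the gathered (serial) operator.
A_full = np.vstack(A_rows)
assert np.allclose(x_adj, A_full.T @ (A_full @ x))
```

This is the only collective the adjoint pass needs here; the forward pass and each rank's `A_i.T @ y_i` remain entirely local.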
@@ -155,4 +154,9 @@ the need for explicit inter-process communication, thereby avoiding heavy commun
 Each rank applies the source modeling operator to perform matrix-vector products with the broadcasted reflectivity.
 The resulting data is then inverted using the MPI-Powered solvers to produce the desired subsurface image.
 
+- *Multi-Dimensional Deconvolution (MDD)* is a powerful technique used at various stages of the seismic processing
+sequence to create ideal datasets deprived of overburden effects [@Ravasi:2022]. PyLops-MPI addresses this issue by
+ensuring that the model is available on all ranks and that the data is broadcasted. Operations are performed
+independently at each rank, eliminating the need for communication during the solving process.
+
 # References
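The MDD bullet added in this commit describes a communication-free pattern: every rank holds the broadcasted data it needs and runs its inversion independently. A minimal sketch of such embarrassingly parallel per-rank solves (the `(G_i, d_i)` systems and the use of `lstsq` are illustrative assumptions, not the paper's actual MDD operators):

```python
import numpy as np

# Each "rank" holds one independent linear system (G_i, d_i) and inverts
# it locally; no MPI traffic is needed at any point of the solve.
rng = np.random.default_rng(2)
problems = [(rng.standard_normal((8, 5)), rng.standard_normal(8))
            for _ in range(3)]

# Per-rank least-squares inversion, fully independent across ranks.
models = [np.linalg.lstsq(G, d, rcond=None)[0] for G, d in problems]

# Each local solution satisfies its own normal equations G.T (G m - d) = 0;
# no reduction across ranks is required to verify or use the results.
for (G, d), m in zip(problems, models):
    assert np.allclose(G.T @ (G @ m - d), 0, atol=1e-8)
```

Because each system is solved in isolation, this scales linearly with the number of ranks, which is the point the added bullet makes.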
