joss/paper.md (5 additions, 4 deletions)
@@ -15,7 +15,7 @@ authors:
orcid: 0000-0002-0741-6602
affiliation: 3
affiliations:
- - name: Computer Science and Engineering, Cluster Innovation Center, University of Delhi, Delhi, India.
+ - name: Cluster Innovation Center, University of Delhi, Delhi, India.
index: 1
- name: Earth Science and Engineering, Physical Sciences and Engineering (PSE), King Abdullah University of Science and Technology (KAUST), Thuwal, Kingdom of Saudi Arabia.
index: 2
@@ -39,20 +39,21 @@ in scientific inverse problems can be decomposed into a series of computational
When addressing distributed inverse problems, we identify three distinct families of problems (their communication patterns are sketched after the list):
- - Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
+ - **1. Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
communication, mainly when performing dot products in the solver or in the regularization terms.
- - Data is distributed across nodes, whilst the model is available on all nodes.
+ - **2. Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
Communication happens during the adjoint pass to sum models and in the solver for data vector operations.
- - All nodes have identical copies of the data and model. Communication only happens within
+ - **3. Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
the operator, with no communication needed in the solver.
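
To make the three families concrete, here is a minimal sketch of their communication patterns written with raw mpi4py collectives; the array size, the `partial` update, and all variable names are illustrative placeholders, not PyLops-MPI code:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 8  # elements per rank (illustrative)

# Family 1: model and data both scattered. Each rank touches only its own
# chunk; the only solver-side communication is the reduction inside global
# dot products and norms.
x_local = np.full(n, float(rank))
gdot = comm.allreduce(np.dot(x_local, x_local), op=MPI.SUM)

# Family 2: data scattered, model replicated. The model is broadcast once,
# and the adjoint pass sums each rank's partial model update.
model = np.zeros(n)
comm.Bcast(model, root=0)                    # model available on all nodes
partial = np.full(n, float(rank))            # stand-in for a local adjoint product
summed = np.empty_like(partial)
comm.Allreduce(partial, summed, op=MPI.SUM)  # sum models in the adjoint pass

# Family 3: model and data replicated everywhere. Any communication lives
# inside the operator itself; the solver needs none.
data = np.arange(n, dtype=np.float64)
comm.Bcast(data, root=0)
```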
MPI for Python (mpi4py [@Dalcin:2021]) provides Python bindings for the MPI standard, allowing applications to leverage multiple
processors. Projects like mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax]
utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020], leverages mpi4py to solve large-scale problems in a distributed fashion.
Its intuitive API provides functionality to scatter and broadcast data and model vectors across nodes and allows various mathematical operations (e.g., summation, subtraction, norms)
to be performed. Additionally, a suite of MPI-powered linear operators and solvers is offered, and its flexible design eases the integration of custom operators and solvers.
+ PyLops-MPI enables users to solve complex inverse problems without concerns about data leaks or MPI management.
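
As an illustration of this workflow, the sketch below follows the class and function names documented for PyLops-MPI (`DistributedArray`, `Partition.SCATTER`, `MPIBlockDiag`, and the MPI-aware `cgls` solver); exact signatures and module paths may differ between versions, and the operator and sizes are illustrative. It would be launched with, e.g., `mpiexec -n 4 python sketch.py`:

```python
import numpy as np
from mpi4py import MPI
import pylops
import pylops_mpi
from pylops_mpi.optimization.basic import cgls  # module path assumed from the docs

size = MPI.COMM_WORLD.Get_size()
nloc = 10  # unknowns owned by each rank (illustrative)

# Scatter the model vector across ranks; each rank fills only its local chunk
x = pylops_mpi.DistributedArray(global_shape=nloc * size,
                                partition=pylops_mpi.Partition.SCATTER)
x[:] = np.arange(nloc, dtype=np.float64)

# Stack a per-rank PyLops operator into an MPI-powered block-diagonal operator
Dop = pylops.FirstDerivative(nloc, dtype=np.float64)
BDiag = pylops_mpi.MPIBlockDiag(ops=[Dop])

# Forward and adjoint passes; communication happens only where required
y = BDiag @ x
xadj = BDiag.H @ y

# MPI-aware CGLS: dot products and norms are reduced across ranks internally
x0 = pylops_mpi.DistributedArray(global_shape=nloc * size,
                                 partition=pylops_mpi.Partition.SCATTER)
x0[:] = 0.0
xinv = cgls(BDiag, y, x0=x0, niter=10, show=False)[0]
```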