Commit a4cafab

Minor changes to paper.md
1 parent e2aafbe commit a4cafab

File tree

1 file changed (+5 −4 lines)


joss/paper.md

Lines changed: 5 additions & 4 deletions
@@ -15,7 +15,7 @@ authors:
 orcid: 0000-0002-0741-6602
 affiliation: 3
 affiliations:
-- name: Computer Science and Engineering, Cluster Innovation Center, University of Delhi, Delhi, India.
+- name: Cluster Innovation Center, University of Delhi, Delhi, India.
 index: 1
 - name: Earth Science and Engineering, Physical Sciences and Engineering (PSE), King Abdullah University of Science and Technology (KAUST), Thuwal, Kingdom of Saudi Arabia.
 index: 2
@@ -39,20 +39,21 @@ in scientific inverse problems can be decomposed into a series of computational
 
 When addressing distributed inverse problems, we identify three distinct families of problems:
 
-- Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
+- **1. Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
 communication, mainly when performing dot products in the solver or in the regularization terms.
 
-- Data is distributed across nodes, whilst the model is available on all nodes.
+- **2. Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
 Communication happens during the adjoint pass to sum models and in the solver for data vector operations.
 
-- All nodes have identical copies of the data and model. Communication only happens within
+- **3. Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
 the operator, with no communication in the solver needed.
 
 MPI for Python (mpi4py [@Dalcin:2021]) provides Python bindings for the MPI standard, allowing applications to leverage multiple
 processors. Projects like mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax]
 utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020], leverages mpi4py to solve large-scale problems in a distributed fashion.
 Its intuitive API provides functionalities to scatter and broadcast data and model vectors across nodes and allows various mathematical operations (e.g., summation, subtraction, norms)
 to be performed. Additionally, a suite of MPI-powered linear operators and solvers is offered, and its flexible design eases the integration of custom operators and solvers.
+PyLops-MPI enables users to solve complex inverse problems without concerns about data leaks or MPI management.
 
 # Software Framework
 
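The three families listed in the modified paragraph above map onto concrete mpi4py communication patterns. Below is a minimal, illustrative sketch (not code from the paper or from PyLops-MPI) of the second family: data and operator blocks are distributed across ranks, the model is replicated on every rank, and communication appears only in the adjoint pass and in data-vector reductions. All sizes and variable names are invented for illustration.

```python
# Illustrative sketch of family 2: distributed data, model available on all ranks.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_local, m = 200, 100                 # per-rank data size, global model size
rng = np.random.default_rng(rank)

# Each rank owns its portion of the data and of a (dense, toy) operator block.
A_local = rng.standard_normal((n_local, m))
d_local = rng.standard_normal(n_local)

# The model is available on all nodes: a broadcast keeps the copies identical.
x = comm.bcast(np.ones(m) if rank == 0 else None, root=0)

# Forward pass: purely local, no communication needed.
y_local = A_local @ x

# Adjoint pass: per-rank model contributions are summed across ranks
# (the communication step mentioned in the text).
xadj = np.empty(m)
comm.Allreduce(A_local.T @ d_local, xadj, op=MPI.SUM)

# Vector operations on the distributed data also require a reduction,
# e.g. the squared norm used inside an iterative solver.
d_norm2 = comm.allreduce(float(d_local @ d_local), op=MPI.SUM)
```

Run with, for example, `mpiexec -n 4 python sketch.py`. The first family would keep both model and data partitioned (reductions mainly for dot products), while the third would replicate both and confine all communication to the operator itself.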
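The closing paragraph of the diff describes PyLops-MPI's own abstractions for such workflows. The sketch below shows how a distributed array, a block-diagonal MPI operator, and a distributed CGLS solve might be combined; the names `DistributedArray`, `Partition`, `MPIBlockDiag`, and `cgls`, and their signatures, reflect my reading of the PyLops-MPI documentation and should be treated as assumptions to verify against the current API.

```python
# Hedged sketch of a PyLops-MPI workflow; class/function signatures are assumptions.
import numpy as np
from mpi4py import MPI
import pylops
import pylops_mpi
from pylops_mpi.optimization.basic import cgls  # assumed path, mirroring PyLops

comm = MPI.COMM_WORLD
nxl = 100                                       # local model/data size per rank

# Distributed model vector: each rank holds its own chunk (assumed constructor).
x = pylops_mpi.DistributedArray(global_shape=nxl * comm.Get_size(),
                                partition=pylops_mpi.Partition.SCATTER)
x[:] = np.ones(nxl)                             # assumed to set the rank-local chunk

# Block-diagonal MPI operator: each rank applies its local PyLops operator.
Dop = pylops.FirstDerivative(nxl, dtype="float64")
Op = pylops_mpi.MPIBlockDiag(ops=[Dop])

# Forward and adjoint products are assumed to behave like ordinary PyLops operators.
y = Op @ x
xadj = Op.H @ y

# Distributed CGLS solve, starting from a zero DistributedArray.
x0 = pylops_mpi.DistributedArray(global_shape=nxl * comm.Get_size(),
                                 partition=pylops_mpi.Partition.SCATTER)
x0[:] = 0
xinv = cgls(Op, y, x0=x0, niter=10)[0]
```

Because each rank applies only its local operator to its local chunk, this corresponds to the first family above: model and data are both partitioned, and communication is limited to the reductions the solver needs.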