Implement Connectivity.java example using Java bindings #13201

Open: wants to merge 1 commit into base: main
78 changes: 78 additions & 0 deletions examples/Connectivity.java
@@ -0,0 +1,78 @@
/*
 * Test the connectivity between all MPI processes
 */

import mpi.*;
import java.nio.IntBuffer;

class Connectivity {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);

        /*
         * MPI.COMM_WORLD is the communicator provided when MPI is
         * initialized. It contains all the processes that are created
         * upon program execution.
         */
        int myRank = MPI.COMM_WORLD.getRank();
        int numProcesses = MPI.COMM_WORLD.getSize();
        boolean verbose = false;
        String processorName = MPI.getProcessorName();

        for (String arg : args) {
            if (arg.equals("-v") || arg.equals("--verbose")) {
                verbose = true;
                break;
            }
        }

        for (int i = 0; i < numProcesses; i++) {
            if (myRank == i) {
                /* Rank i sends to and receives from all higher-ranked processes */
                for (int j = i + 1; j < numProcesses; j++) {
                    if (verbose) {
                        System.out.printf("Checking connection between rank %d on %s and rank %d\n",
                                i, processorName, j);
                    }

                    /*
                     * rank is the buffer passed to sendRecv containing this
                     * process's rank (myRank), which is the data sent to rank j.
                     * peer is the buffer that receives rank j's reply.
                     */
                    IntBuffer rank = MPI.newIntBuffer(1);
                    IntBuffer peer = MPI.newIntBuffer(1);
                    rank.put(0, myRank);

                    /*
                     * To avoid deadlocks, use the combined sendRecv operation.
                     * It performs the send and the receive as one combined
                     * operation and allows MPI to handle the two requests
                     * efficiently internally.
                     */
                    MPI.COMM_WORLD.sendRecv(rank, 1, MPI.INT, j, myRank, peer, 1, MPI.INT, j, j);
                }
            } else if (myRank > i) {
                IntBuffer rank = MPI.newIntBuffer(1);
                IntBuffer peer = MPI.newIntBuffer(1);
                rank.put(0, myRank);

                /* Receive from and reply to rank i */
                MPI.COMM_WORLD.sendRecv(rank, 1, MPI.INT, i, myRank, peer, 1, MPI.INT, i, i);
            }
        }

        /* Wait for all processes to reach the barrier before proceeding */
        MPI.COMM_WORLD.barrier();

        /*
         * Once all ranks have reached the barrier, have a single process
         * print the confirmation message; here rank 0 does so.
         */
        if (myRank == 0) {
            System.out.printf("Connectivity test on %d processes PASSED.\n", numProcesses);
        }

        MPI.Finalize();
    }
}
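The nested loops above arrange for every unordered pair of ranks (i, j) with i < j to perform exactly one exchange, so n processes produce n*(n-1)/2 pairwise tests. A minimal non-MPI sketch of that pair enumeration (the class name and the hard-coded process count of 4 are illustrative assumptions, not part of the patch):

```java
public class PairEnumeration {
    public static void main(String[] args) {
        int numProcesses = 4; // hypothetical process count for illustration

        // Mirrors the loop structure in Connectivity.java: rank i
        // exchanges with every higher rank j, so each unordered
        // pair is visited exactly once.
        int pairs = 0;
        for (int i = 0; i < numProcesses; i++) {
            for (int j = i + 1; j < numProcesses; j++) {
                System.out.printf("rank %d <-> rank %d%n", i, j);
                pairs++;
            }
        }
        // For n processes this counts n*(n-1)/2 pairs.
        System.out.println("total pairs: " + pairs);
    }
}
```

Because each pair is tested from the lower-ranked side only, no connection is checked twice, which keeps the message count quadratic rather than doubled.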
1 change: 1 addition & 0 deletions examples/Makefile
@@ -60,6 +60,7 @@ EXAMPLES = \
ring_oshmemfh \
Ring.class \
connectivity_c \
Connectivity.class \
oshmem_shmalloc \
oshmem_circular_shift \
oshmem_max_reduction \