This page is part of kali's webpage at http://www.math.umbc.edu/kali.
If you find mistakes on this page or have suggestions, please contact me.
It is assumed here that you have gone through the steps outlined
in the initial setup, so that the command mpicc
is now well-defined in your account.
I also assume that you have read the information on how to run
on kali using our scheduler, so that you know how to use the
submission scripts included with each code for your convenience.
This script is written to request 4 processes (hence the p4
extension); for a full test, one should repeat each test with
modified scripts for other cases, from 1 process to 64 processes.
The following codes assume that you can compile and run serial code. Starting from there, a Hello-world program comes first, followed by a progression of more involved codes that test certain features. The documentation here only discusses my motivation for each code; to see exactly what it does, you have to read the source.
A final suggestion is that you create a subdirectory for these
tests, containing a subdirectory for each of the tests.
The main point here is to avoid running the tests from the
home directory, because this may hide problems with the
redirection of stdout
and stderr
into files and with their movement to the correct directory,
for instance.
The most basic MPI test is a code that contains only the
MPI_Init
and MPI_Finalize
commands; or actually, just
to test mpicc
, one should use a C code without
any MPI commands and then only with the include "mpi.h"
line added; we do not want to be this paranoid here, though.
So, we start here with the next more complicated MPI program that you can write: it just prints out a string. To make this string more interesting and useful as a test, it already incorporates useful MPI commands and outputs their results. In particular, on a system with a scheduler, it is useful to be able to see which node each process runs on; for instance, this allows you to check that the scheduler did not run more than one process per CPU. (Hence, this is actually more advanced than a true Hello-world program, which would only print out a constant string.)
Notice that our procedure for using the scheduler includes
the redirection of stdout
and stderr
,
so one of the not-so-trivial issues is to test first of all
whether these files end up in the current directory, as desired.
The next test code starts by printing the current working directory
to the stdout
of my code.
The information is obtained by a system call to pwd
.
The point is that in production runs, you will run many instances
of your code, typically in different directories that are only
distinguished by subtle differences in input files; when you later
transfer or combine the results, it is good to be positively sure
about which directory a result came from. That's why the code
outputs this information first to stdout
.
Additionally, I want to output the nodes used, but in a useful and
readable order (compared to the order of the previous sample code's output!).
To get the order right, I use
MPI_Send/MPI_Recv
command pairs; this also tests the next
level of MPI commands beyond hello.c,
namely actual communication commands.
Notice that this output is encapsulated in a function;
in fact, I use this function, and the above output from pwd,
in all my parallel codes.
For good measure, I also test the equally basic
MPI_Bcast
communication command in this code.
After the run of this code, you should have as many output
files as processes used, called testio-p00.out,
testio-p01.out, etc.
One of the maybe more subtle points of this test is that I want to
make sure that my input file is found and that my output
files all end up in the correct directory.