Introduction to Parallel Computing using MPI

Math 700 - Special Topics in Applied and Numerical Mathematics

Matthias K. Gobbert, Susan E. Minkoff, and Madhu Nayakkankuppam

Fall 2001 - Schedule Number 3620


Grading Information

Final scores and grades ordered by the last four digits of your student number: scores and grades

Final Projects

The class presentations of the final projects will be held on December 11 and 12 in MP 401, starting at 4:00 p.m. See the Program for the titles, abstracts, and (later) links to download the project reports.

Information for Download

Instructions: Open the file by clicking on the link; plain text should appear in your browser window. Then use the "File -> Save As" function of your browser to save the data to a file. Alternatively, use the right mouse button to save the file directly without opening it first. The details may vary with your browser software and operating system. Contact me if a problem persists.

Homework 3:

Lecture Notes by Madhu Nayakkankuppam:

Homework 4:

Homework 5:

Basic Information


In recent years, parallel computing has become an almost ubiquitous way to perform computer simulations involving large amounts of data or intensive calculations. The basic purpose of using several processors is to speed up computations of large problems by distributing the work. But large problems typically involve vast quantities of data as well; by distributing the work and data across several processors, previously unsolvable problems can now be tackled in reasonable time.

The first parallel machines could typically only be afforded by well-financed government agencies, national laboratories, and large corporations. Today, however, due to the dramatic drop in personal computer prices, parallel computing has become accessible to all through the availability of inexpensive dual-processor PCs. It is then only slightly more expensive to couple several of these into a distributed-memory cluster.

The most widely used standard for message passing on any type of parallel machine architecture is the Message Passing Interface (MPI). This course will provide interested students with a basic introduction to parallel computing using MPI on a distributed-memory cluster of Linux PCs. We anticipate that about half the semester will be spent introducing the basic features of MPI. Project-oriented assignments will be given to establish practical experience with MPI.

In order to truly appreciate the power of parallel computing, it is useful to see it used in practice. Therefore, we intend to invite several other researchers from a variety of related application areas to give presentations about how they use parallel computing to solve their application problems.

Other Information

Official UMBC Honors Code

By enrolling in this course, each student assumes the responsibilities of an active participant in UMBC's scholarly community in which everyone's academic work and behavior are held to the highest standards of honesty. Cheating, fabrication, plagiarism, and helping others to commit these acts are all forms of academic dishonesty, and they are wrong. Academic misconduct could result in disciplinary action that may include, but is not limited to, suspension or dismissal.

To read the full Student Academic Conduct Policy, consult the UMBC Student Handbook, the Faculty Handbook, or the UMBC Policies section of the UMBC Directory.

Copyright © 2001 by Matthias K. Gobbert. All Rights Reserved.
This page version 2.6, December 2001.