Math 627 - Introduction to Parallel Computing
Matthias K. Gobbert and Madhu Nayakkankuppam
Fall 2002 - Schedule Number 7608
This page can be reached via my homepage.
The class presentations of the final projects will be held on
Monday, December 09, 2002 and Tuesday, December 10, 2002
starting at 7:00 p.m. in MP 401.
Please follow the link to the Program
for the titles, abstracts, and (later) links to download the project reports.
As with seminar talks, everyone is welcome to attend!
Final scores and grades ordered by your assigned number:
scores and grades
- Matthias K. Gobbert,
Math/Psyc 416, (410) 455-2404, email@example.com,
office hours: TTh 04:00-05:00 or by appointment
- Madhu Nayakkankuppam,
Math/Psyc 427, (410) 455-3298, firstname.lastname@example.org,
office hours: WF 10:00-11:00 or by appointment
- Lectures: MP 401,
September 04 to October 28 and December 02 to December 09,
see the schedule for more information.
- Prerequisites: Math 630,
fluency in programming either C or Fortran and
proficiency in using the Unix/Linux operating system,
or instructor approval
- Copies of all of the following books are on reserve in the library.
- Grading policy:
See also the general policies and procedures for more information.
In recent years, parallel computing has become an almost ubiquitous
way to perform computer simulations involving large amounts of data or
intensive calculations. The basic purpose of using several processors
is to speed up computations of large problems by distributing the
work. But large problems typically involve vast quantities of data
as well; by distributing the data across several processors, problems
of previously unsolvable size can now be tackled in reasonable time.
At first, only government agencies, national laboratories, and large corporations
could afford parallel machines. Due to the dramatic drop
in prices for personal computers (PCs) and their components,
parallel computing has become much more accessible in the form of
Beowulf clusters, built by connecting commodity PCs through a dedicated network.
The most widely used library for message-passing parallel programming
today, across all types of parallel machine architectures, is the Message
Passing Interface (MPI). This course will provide interested students with
a basic introduction to parallel computing using MPI on a distributed-memory
cluster of Linux PCs. Time permitting, we will present
several application examples that show how parallel computing can be
used to solve large problems in practice.
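The idea described above is to speed up a large computation by splitting the work (and the data) across several processors and combining the partial results. The course itself uses MPI in C or Fortran, but as a self-contained illustration of the work-distribution pattern, here is a minimal Python sketch using the standard library's multiprocessing module; the example computation (a sum of squares), the chunk layout, and the worker count of 4 are arbitrary choices for illustration.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute the sum of squares over one chunk of the index range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    """Split the range 0..n-1 into one chunk per worker process,
    compute each partial sum in parallel, and combine the results."""
    chunk = (n + workers - 1) // workers
    chunks = [(w * chunk, min((w + 1) * chunk, n)) for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    total = parallel_sum_squares(n)
    # The parallel result must agree with the closed form n(n-1)(2n-1)/6.
    assert total == n * (n - 1) * (2 * n - 1) // 6
```

In MPI the same pattern appears as each process computing its own chunk and a collective reduction (e.g. MPI_Reduce) combining the partial sums.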
Information for Download
Instructions: Open the file by clicking on the link; plain text
should appear in your browser window. Then use the "File -> Save As"
function of your browser to save the data to a file. Alternatively,
right-click the link to save the file directly without opening it first.
The details may vary depending on your browser software and operating
system. Contact me if a problem persists.
Official UMBC Honors Code
By enrolling in this course, each student assumes the responsibilities of
an active participant in UMBC's scholarly community in which everyone's
academic work and behavior are held to the highest standards of honesty.
Cheating, fabrication, plagiarism, and helping others to commit these acts
are all forms of academic dishonesty, and they are wrong. Academic
misconduct could result in disciplinary action that may include, but is
not limited to, suspension or dismissal.
To read the full Student Academic Conduct Policy, consult the
UMBC Student Handbook, the Faculty Handbook, or the UMBC Policies
section of the UMBC Directory.
Copyright © 2001-2002 by Matthias K. Gobbert. All Rights Reserved.
This page version 4.0, December 2002.