Math 627 - Introduction to Parallel Computing
Spring 2004 - Matthias K. Gobbert
Section 0101 - Schedule Number 3661
This page can be reached via my homepage at
The class presentations of the final projects will be held on
Monday, May 17, 2004 starting at 03:30 p.m. in MP 401.
Please follow the link to the Program
for the titles and abstracts.
Just like for seminar talks, everybody is welcome to attend!
- Matthias K. Gobbert,
Math/Psyc 416, (410) 455-2404, email@example.com,
office hours: MW 03:00-03:50 or by appointment
- Classes: note lengthened class meetings on Wednesdays;
we will now meet on Mondays 04:00-05:15 p.m. (as officially scheduled)
and on Wednesdays 03:30-05:15 p.m. in MP 401;
see the detailed schedule for more information.
- Prerequisites: Math 630,
fluency in programming in either C or Fortran, and
proficiency in using the Unix/Linux operating system,
or instructor approval
- Copies of the following books are on reserve in the library.
- Grading policy:
See also the
general policies and procedures
for more information.
- The homework includes computer assignments, which are
vital to understanding the course material.
- The project consists of individual work,
a written report, and an oral class presentation.
In recent years, parallel computing has become an almost ubiquitous
way to perform computer simulations involving large amounts of data or
intensive calculations. The basic purpose of using several processors
is to speed up computations of large problems by distributing the
work. But large problems typically involve vast quantities of data
as well; by distributing the data across several processors, problems
of previously unsolvable size can now be tackled in reasonable time.
Only government agencies, national laboratories, and large corporations
could afford the first parallel machines. Due to the dramatic drop
in prices for personal computers (PCs) and their components,
parallel computing has become much more accessible in the form of
Beowulf clusters formed by connecting commodity PCs by dedicated networks.
The most widely used library for message-passing parallel programming
today, across all types of parallel machine architectures, is the
Message Passing Interface (MPI). This course will provide interested
students with a basic introduction to parallel computing using MPI on a
distributed-memory cluster of Linux PCs. Time permitting, we will present
several application examples that show how parallel computing can be
used to solve large application problems in practice.
UMBC Academic Integrity Policy
By enrolling in this course, each student assumes the responsibilities of
an active participant in UMBC's scholarly community in which everyone's
academic work and behavior are held to the highest standards of honesty.
Cheating, fabrication, plagiarism, and helping others to commit these acts
are all forms of academic dishonesty, and they are wrong. Academic
misconduct could result in disciplinary action that may include, but is
not limited to, suspension or dismissal.
To read the full Student Academic Conduct Policy, consult the
UMBC Student Handbook, the Faculty Handbook,
the UMBC Policies section of the UMBC Directory for undergraduate students,
or the Graduate School website for graduate students.
Copyright © 2001-2004 by Matthias K. Gobbert. All Rights Reserved.
This page version 3.0, May 2004.