Introduction to Parallel Computing - Syllabus
Universität Kassel - Sommersemester 2016
Matthias K. Gobbert - University of Maryland, Baltimore County
This page can be reached via my homepage at
http://www.umbc.edu/~gobbert.
Basic Information
- Personnel:
- Instructor:
Matthias K. Gobbert, University of Maryland, Baltimore County,
at the Universität Kassel from February to June 2016.
Office AVZ Room 2431, (0561) 804-4376, gobbert@umbc.edu,
office hours: Fridays 13-14 or by appointment
- Assistant:
Stefan Kopecz, Institut für Mathematik, Universität Kassel.
Office AVZ Room 2419, (0561) 804-4677, kopecz@mathematik.uni-kassel.de,
office hours: by appointment
- The Vorlesungsverzeichnis of the Sommersemester 2016 formally lists
the lectures and the associated computer labs.
The course is taught as 4+2 SWS for the first half of the
Sommersemester 2016,
thus it counts as 2+1 SWS.
Lectures Wednesdays 13-15 and Fridays 11-13 in AVZ Room 2420;
computer labs Tuesdays 09-11 in AVZ Room 2421.
See the detailed schedule for more information.
- Prerequisites: Numerik I,
recommended preparation:
proficiency in programming in C/C++ and
in using the Unix/Linux operating system;
or consent of instructor
- Books on parallel computing, the programming language C, and Matlab:
- Required textbook on parallel computing:
Peter S. Pacheco,
Parallel Programming with MPI,
Morgan Kaufmann, 1997.
Associated webpage:
http://www.cs.usfca.edu/~peter/ppmpi.
We will have explicit reading assignments for several chapters
from this book at the beginning of the semester.
The book can be checked out in Room 2410 for two hours at a time.
- Recommended book on the programming language C:
Brian W. Kernighan and Dennis M. Ritchie,
The C Programming Language,
second edition, Prentice-Hall, 1988.
Associated webpage:
http://cm.bell-labs.com/cm/cs/cbook/.
This is the classic book on C written by its creators.
It is the shortest book I know on the subject and a nice
one to have because of its authorship, but you can also
use other resources to learn C programming for our purposes.
- Recommended book on Matlab:
Desmond J. Higham and Nicholas J. Higham,
Matlab Guide, second edition, SIAM, 2005.
The webpage of the book includes a list of errata.
Matlab's documentation is excellent, but along with the software's
functionality it has reached a scale that requires a lot of
sophistication to understand fully. Moreover, there is a definite role
for a book that is organized by chapter on topics such as all types
of functions (inline, anonymous, etc.), efficient Matlab programming
(vectorization, pre-allocation, etc.), Tips and Tricks, and more.
- The course grade will be based on the homeworks, which include
the computer assignments that are vital to understanding
the course material.
Each homework consists of a written report
that explains what you did and
that responds to the instructions and questions in the problems.
The report should be submitted to Dr. Kopecz in the computer lab
for grading; the submission also includes showing Dr. Kopecz your
code and its compilation and discussing your results orally.
Additional details or changes will be announced as necessary.
- Moodle is a course management system that allows for posting material
and communicating among the registered participants of a course.
I will post class summaries and PDF transcripts of the lecture notes
there, as well as other material including the homework assignments
and electronic copies of handouts.
Course Description
Parallel computing has become a ubiquitous
way to perform computer simulations involving large amounts of data or
intensive calculations. The basic purpose of using several processors
is to speed up computations of large problems by distributing the
work. But large problems typically involve vast quantities of data
as well; by pooling the memory from several processors, problems
of previously unsolvable size can now be tackled in reasonable time.
This course will introduce the basic aspects of parallel programming
and the algorithmic considerations involved in designing scalable
parallel numerical methods.
The programming will use MPI (Message Passing Interface),
the most common library of parallel communication commands
for distributed-memory clusters with multi-core CPUs.
Several application examples will
demonstrate how parallel computing can be
used to solve large problems in practice.
We will also consider the options for taking advantage of
state-of-the-art massively parallel GPUs (graphics processing units)
and cutting-edge many-core Intel Xeon Phi accelerators.
The class will include an efficient introduction to the Linux operating
system as installed on the cluster used, as well as a review of serial
programming in the programming language C that is integrated into the
initial presentation of sample codes.
This review assumes some experience with compiling and debugging
in a high-level programming language.
It will only include a restricted set of features of C, but these are
selected to be sufficient for work on the homework assignments in the class.
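As a first impression of what MPI programming in C looks like, the
following is a minimal sketch of a "Hello, world" program of the kind
the sample codes will start from; the variable names and the file name
hello.c are purely illustrative.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int id, np;
      MPI_Init(&argc, &argv);              /* start up MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &id);  /* rank of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &np);  /* total number of processes */
      printf("Hello from process %d of %d\n", id, np);
      MPI_Finalize();                      /* shut down MPI */
      return 0;
  }

On a typical Linux cluster such a code might be compiled and run with
commands like mpicc -o hello hello.c and mpirun -np 4 ./hello; the
exact compiler wrapper and run procedure on our cluster will be
covered in the computer labs.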
Learning Goals
By the end of this course, you should:
- understand and remember the key ideas, concepts, definitions,
and theorems of the subject.
Examples include understanding the purpose of parallel computing
and why it can work, being aware of potential limitations,
and knowing the major types of hardware available.
-- This information will be discussed in the lecture as well as
in the textbook and other assigned reading.
- have experience writing code for a Linux cluster using MPI in C, C++,
and/or Fortran that correctly solves problems in scientific computing.
The sample problems are taken from mathematics, and first of all your
code has to compile without error or warning, run on a Linux cluster
without error, and give mathematically correct results.
In addition, you need to be able to explain its scalability, i.e.,
why it does or does not execute faster on several processors than in
serial (see the note on speedup and efficiency after this list).
We will have problems stated in different ways and from various
sources to provide you with exposure to as many issues as possible.
-- This is the main purpose of the homework and most
learning will take place here.
- have gained proficiency in delivering code written by you to others
for compilation and use.
This includes providing a README file that gives instructions on how
to compile and run the code, as well as providing a sample output file
that allows the user to check the results (a minimal README sketch
follows this list).
We will work together in class to discuss best practices for
transferring code for homework problems of increasing complexity.
-- You will submit your homework code by e-mail to the instructor
and it needs to compile and run in parallel for credit; this is
complemented by a report that shows and explains your results.
- have some experience in learning from a research paper
and discussing it with peers.
Group work requiring communication for effective collaboration
with peers and supervisors is a vital professional skill,
and the development of professional skills is a declared learning goal
of this course.
-- I will supply some research papers carefully
selected for their readability and relevance to the course.
Learning from research papers is a crucial skill to develop.
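As a brief preview of the scalability arguments referenced in the
learning goals above, the two standard quantities we will use are the
observed speedup and efficiency; writing T(p) for the wall clock time
of a run on p processes, they are defined as

  speedup     S(p) = T(1) / T(p)
  efficiency  E(p) = S(p) / p

Optimal values are S(p) = p and E(p) = 1, and explaining why an
observed run reaches or falls short of this optimum is exactly the
kind of scalability discussion asked for in the homework reports.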
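For the learning goal on delivering code, a minimal README might look
roughly as follows; the file names and commands are purely
illustrative, and the compiler wrapper and run command appropriate for
our cluster will be discussed in the computer labs.

  Files:   hello.c        source code
           hello_np4.out  sample output of a run with 4 processes
  Compile: mpicc -o hello hello.c
  Run:     mpirun -np 4 ./hello
  Check:   compare your output against hello_np4.out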
Other Information
Copyright © 2001-2016 by Matthias K. Gobbert. All Rights Reserved.
This page version 1.0, April 2016.