Introduction to Parallel Computing - Syllabus

Universität Kassel - Sommersemester 2016

Matthias K. Gobbert - University of Maryland, Baltimore County

This page can be reached via my homepage at

Basic Information

Course Description

Parallel computing has become a ubiquitous way to perform computer simulations involving large amounts of data or intensive calculations. The basic purpose of using several processors is to speed up computations of large problems by distributing the work. But large problems typically involve vast quantities of data as well; by pooling the memory of several processors, problems of previously intractable size can now be solved in reasonable time.

This course will introduce the basic aspects of parallel programming and the algorithmic considerations involved in designing scalable parallel numerical methods. The programming will use MPI (Message Passing Interface), the most common library of parallel communication commands for distributed-memory clusters with multi-core CPUs. Several application examples will demonstrate how parallel computing can be used to solve large problems in practice. We will also consider the options for taking advantage of state-of-the-art massively parallel GPUs (graphics processing units) and cutting-edge many-core Intel Xeon Phi accelerators.

The class will include an efficient introduction to the Linux operating system as installed on the cluster used, and it will include a review of serial programming in the source code language C that is integrated into the initial presentation of sample codes. This review assumes some experience with compiling and debugging in a high-level source code programming language. It will only include a restricted set of features of C, but these are selected to be sufficient for work on the homework assignments in the class.

Learning Goals

By the end of this course, you should:

Other Information

Copyright © 2001-2016 by Matthias K. Gobbert. All Rights Reserved.
This page version 1.0, April 2016.