# Search result: Catalogue data for Autumn Semester 2019

Computational Science and Engineering Bachelor

Bachelor's Degree Programme (Programme Regulations 2018)

Core Courses from Area I (Modules)

Module A

| Number | Title | Type | ECTS | Hours | Lecturers |
|---|---|---|---|---|---|
| 151-0107-20L | High Performance Computing for Science and Engineering (HPCSE) I | W | 4 credits | 4G | P. Koumoutsakos |

**Abstract**: This course gives an introduction to algorithms and numerical methods for parallel computing on shared- and distributed-memory architectures. The algorithms and methods are illustrated with problems that appear frequently in science and engineering.

**Objective**: With manufacturing processes reaching their limits in terms of transistor density on today's computing architectures, efficient utilization of computing resources must include parallel execution to maintain scaling. Computers are a fundamental problem-solving tool in academia, industry and society, yet the "think parallel" mind-set of developers still lags behind. The aim of the course is to introduce students to the fundamentals of parallel programming using shared- and distributed-memory programming models. The focus is on learning to apply these techniques with the help of examples frequently found in science and engineering, and on deploying them on large-scale high-performance computing (HPC) architectures.

**Content**:

1. Hardware and architecture: Moore's Law, instruction set architectures (MIPS, RISC, CISC), instruction pipelines, caches, Flynn's taxonomy, vector instructions (for Intel x86)
2. Shared-memory parallelism: threads, memory models, cache coherency, mutual exclusion, uniform and non-uniform memory access, Open Multi-Processing (OpenMP)
3. Distributed-memory parallelism: Message Passing Interface (MPI), point-to-point and collective communication, blocking and non-blocking methods, parallel file I/O, hybrid programming models
4. Performance and parallel-efficiency analysis: performance analysis of algorithms, roofline model, Amdahl's Law, strong and weak scaling analysis
5. Applications: HPC math libraries, linear algebra and matrix/vector operations, singular value decomposition, neural networks and linear autoencoders, solving partial differential equations (PDEs) using grid-based and particle methods

**Lecture notes**: Class notes, handouts (link)

**Literature**:

- An Introduction to Parallel Programming, P. Pacheco, Morgan Kaufmann
- Introduction to High Performance Computing for Scientists and Engineers, G. Hager and G. Wellein, CRC Press
- Computer Organization and Design, D. A. Patterson and J. L. Hennessy, Morgan Kaufmann
- Vortex Methods, G.-H. Cottet and P. Koumoutsakos, Cambridge University Press
- Lecture notes

**Prerequisites / Notice**: Students should be familiar with a compiled programming language (C, C++ or Fortran). Exercises and exams will be designed using C++. The course does not teach the basics of programming. Some familiarity with the command line is assumed. Students should also have a basic understanding of diffusion and advection processes, as well as their underlying partial differential equations.

Module B

| Number | Title | Type | ECTS | Hours | Lecturers |
|---|---|---|---|---|---|
| 263-2800-00L | Design of Parallel and High-Performance Computing | W | 8 credits | 3V + 2U + 2A | M. Püschel, T. Ben Nun |

**Abstract**: Advanced topics in parallel and concurrent programming.

**Objective**: Understand concurrency paradigms and models from a higher perspective and acquire skills for designing, structuring and developing possibly large concurrent software systems. Learn to distinguish parallelism in problem space from parallelism in machine space. Become familiar with important technical concepts and with concurrency folklore.
