Research Computing Courses
Three research computing courses will be organised through the office of the Deputy Vice-Chancellor (Research and Innovation) using the Research Computing Cluster at the University of Canterbury in 2019. These courses are designed not only for postgraduate students who are interested in understanding large-scale computational and big data facilities, but also for students who wish to use such facilities in research areas such as molecular biology, computational fluid dynamics, bioinformatics and applied mathematics.
Students who wish to participate in any or all of the following courses should normally have a degree in Science / Engineering / Humanities or be in the final year of one of these degrees.
For more information about these courses and their schedules please contact firstname.lastname@example.org.
Introduction to Parallel Computing Architectures
Thursday 13th - Friday 14th June 2019, 9am-5pm
ER: 212 Ernest Rutherford
This 2-day course provides students with an understanding of the different types of parallel computer architectures currently used in computational sciences, engineering and humanities disciplines to solve computationally and/or data-intensive problems. It also introduces programming techniques for several of these architectures through hands-on exercises.
Structured Parallel Programming for Research Computing (C and OpenMP)
Monday 1st July - Friday 5th July 2019, 9am-5pm
ER: 211 Ernest Rutherford
This 5-day course begins with an intensive introduction to compiled languages such as C, C++ and Fortran, then provides students with the skills necessary to design, develop and run structured parallel programs on the UC Research Computing Cluster.
Students apply some of the techniques introduced in Parallel Computing Architectures to profile, optimise and parallelise serial code and numerical methods using various tools available on the Research Computing Cluster, including OpenMP, an application programming interface (API) for explicitly directing multi-threaded, shared-memory parallelism.
This course is the ideal way to meet the prerequisite for Parallel Programming using the Message Passing Interface.
Parallel Programming using the Message Passing Interface
Monday 26th August - Friday 30th August 2019, 9am-5pm
ER: 211A Ernest Rutherford
Most of the applications run on research computing clusters, and the majority of the world's high-performance computing infrastructure, are parallelised using the Message Passing Interface (MPI). The MPI standard defines a core library of software routines to assist in turning serial applications into parallel ones that can run on shared- or distributed-memory systems. Distributed-memory support, in particular, allows your computational research to potentially scale across multiple compute nodes in a cluster.
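To give a flavour of the message-passing model taught in the course (a minimal sketch, not course material), the program below distributes a summation across MPI processes and combines the partial results on rank 0. It assumes an MPI implementation such as OpenMPI or MPICH is installed.

```c
#include <stdio.h>
#include <mpi.h>

/* Each rank computes a partial sum of 1..100; MPI_Reduce then combines
 * the partial sums onto rank 0. Compile with mpicc and run with, e.g.,
 *   mpirun -np 4 ./sum
 * The result is the same for any number of ranks. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long local = 0, total = 0;
    /* Cyclic distribution of the iterations 1..100 across ranks. */
    for (int i = 1 + rank; i <= 100; i += size)
        local += i;

    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %ld\n", total); /* 5050 */

    MPI_Finalize();
    return 0;
}
```

Unlike the shared-memory model, each MPI process has its own address space, so all data exchange is explicit; this is what lets an MPI program span many compute nodes.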
This 5-day course provides students, through lectures, tutorials and plenty of hands-on exercises, with the skills required to write parallel programs using this programming model, which is directly applicable to almost every parallel computer architecture.
Prerequisites:
- Structured Parallel Programming for Research Computing OR preparation recommended by the tutor
- Experience with a high-performance computing environment and a programming language such as C/C++ or Fortran