CS W267. Applications of Parallel Computers
Catalog Description: Parallel programming, from laptops to supercomputers to the cloud. Goals include writing programs that run fast while minimizing programming effort. Parallel architectures and programming languages and models, including shared memory (e.g., OpenMP on your multicore laptop), distributed memory (MPI and UPC on a supercomputer), GPUs (CUDA and OpenCL), and the cloud (MapReduce, Hadoop, and Spark). Parallel algorithms and software tools for common computations (e.g., dense and sparse linear algebra, graphs, structured grids). Tools for load balancing, performance analysis, and debugging. How high-level applications are built (e.g., climate modeling). Online lectures and office hours.
Student Learning Outcomes: An understanding of computer architectures at a high level, in order to understand what can and cannot be done in parallel, and the relative costs of operations like arithmetic and moving data. To recognize programming "patterns" and use the best available algorithms and software to implement them. To understand sources of parallelism and locality in simulation, in order to design fast algorithms. To master parallel programming languages and models for different computer architectures.
Prerequisites: Computer Science W266 or the consent of the instructor.
Credit Restrictions: Students will receive no credit for Computer Science W267 after completing Computer Science C267.
Spring: 3.0 hours of web-based lecture per week
Fall: 3.0 hours of web-based lecture per week
Grading basis: letter
Final exam status: No final exam