Graphics Processing Units (GPUs)



GPUs

From Wikipedia: A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a specialized processor that offloads 3D graphics rendering from the microprocessor. It is used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms.



GPU Based Computing

GPU computing is the use of a GPU (graphics processing unit) to do general-purpose scientific and engineering computing.

GPU computing uses a CPU and a GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally intensive part runs on the GPU. From the user's perspective, the application simply runs faster, because its most demanding work has been offloaded to the high-performance GPU.
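
As a hedged illustration of this division of labor (a sketch, not code from the course; the file and kernel names are made up), the program below adds two vectors: the host code on the CPU handles allocation and data movement, while the vecAdd kernel runs on the GPU.

// vector_add.cu -- compile with: nvcc vector_add.cu -o vector_add
// The host (CPU) does setup and data transfer; the device (GPU) runs
// the data-parallel kernel.
#include <cstdio>
#include <cuda_runtime.h>

// Device code: each thread adds one element of the vectors.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) side: allocate and initialize the input data.
    float *h_a = (float*)malloc(bytes);
    float *h_b = (float*)malloc(bytes);
    float *h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) side: allocate memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the computationally intensive part on the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expect 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}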

NVIDIA is the foremost developer of such systems.



borg3.physics.drexel.edu

borg3.physics.drexel.edu is the system we will use in this course. See its configuration.

This system houses two NVIDIA Tesla M1060 GPUs. The M1060 is the 1U version (no onboard fan; it relies on the chassis fans) of the Tesla C1060.

Check the specifications of this GPU.

The modern NVIDIA GPU cards are fantastically fast! Look, for example, at the NVIDIA site for the Tesla K80.

Furthermore, many of the fastest supercomputers in the world are powered by GPUs. Ref ?



CUDA

From Wikipedia: CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units (GPUs), and it is accessible to software developers through variants of industry-standard programming languages. Programmers use 'C for CUDA' (C with NVIDIA extensions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. The CUDA architecture supports a range of computational interfaces, including OpenCL and DirectCompute, and third-party wrappers are also available for Python, Fortran, Java and Matlab.
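
To give a concrete feel for those NVIDIA extensions to C, here is a hedged sketch (the file and kernel names are illustrative) that uses the __global__ qualifier, __shared__ on-chip memory, the __syncthreads() barrier, and the <<<blocks, threads>>> kernel-launch syntax; it would be compiled with nvcc.

// reduce.cu -- compile with: nvcc reduce.cu -o reduce
// Sums an array on the GPU using several 'C for CUDA' extensions.
#include <cstdio>
#include <cuda_runtime.h>

// Each block sums 256 input values in fast shared memory and
// writes one partial sum.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float s[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                      // wait until all loads finish

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) out[blockIdx.x] = s[0];
}

int main(void)
{
    const int n = 1 << 16, threads = 256, blocks = n / threads;
    float *h_in = (float*)malloc(n * sizeof(float));
    float h_out[blocks];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;   // the sum should equal n

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(d_in, d_out, n);

    // Finish the reduction of the per-block partial sums on the CPU.
    cudaMemcpy(h_out, d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int b = 0; b < blocks; ++b) total += h_out[b];
    printf("sum = %.0f (expect %d)\n", total, n);

    cudaFree(d_in); cudaFree(d_out); free(h_in);
    return 0;
}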



Learning CUDA

NVIDIA maintains numerous CUDA tutorials, references and language descriptions on its web site. Look in particular at the NVIDIA CUDA developer pages.

An interesting primer is that of Seland.

David Kirk (NVIDIA) and Wen-mei Hwu (ECE, University of Illinois) have written an excellent textbook on CUDA, Programming Massively Parallel Processors. Download it in draft form.

The book GPU Gems 3 contains great demonstration GPU codes.



Bootstrap approach to learning CUDA

One way to learn CUDA is to work through examples of increasing complexity. A good set of such examples is provided by NVIDIA; a minimal first program in that spirit is sketched after the links below:

CUDA by Example

Book software: source code for the examples.
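
In that bootstrap spirit, a natural first step is simply to ask the CUDA runtime what devices are available. The sketch below is a hedged illustration (not taken from the book); on borg3 it should list the two Tesla M1060 cards described above.

// devquery.cu -- compile with: nvcc devquery.cu -o devquery
// Reports the CUDA-capable devices visible to the runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s\n", d, prop.name);
        printf("  compute capability : %d.%d\n", prop.major, prop.minor);
        printf("  multiprocessors    : %d\n", prop.multiProcessorCount);
        printf("  global memory (MB) : %zu\n", prop.totalGlobalMem >> 20);
    }
    return 0;
}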
