In the past, graphics processors were special-purpose, hardwired application accelerators, suitable only for conventional rasterization-style graphics applications. Modern GPUs are fully programmable, massively parallel floating-point processors. This talk will describe how NVIDIA's massively multithreaded computing architecture and CUDA software for GPU computing have changed both graphics and computing. It is no longer news that GPUs can accelerate many applications and compute kernels by 100x or more; the true importance of this change lies in the new science and applications it will enable.
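As a rough illustration of the CUDA programming model the talk refers to (a minimal sketch, not material from the talk itself), the kernel below assigns one array element to each of thousands of lightweight GPU threads; the kernel name, array sizes, and launch configuration are illustrative.

    // Minimal CUDA sketch (illustrative): each GPU thread handles one element.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard against overrun
            c[i] = a[i] + b[i];
    }

    // Host side: launch enough 256-thread blocks to cover all n elements, e.g.
    // vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

This simple data-parallel pattern, scaled across thousands of threads, is the style of kernel behind the large speedups mentioned above.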
Short Bio: David Kirk is an NVIDIA Fellow and served as NVIDIA's chief scientist from 1997 to 2009, a role in which he led the development of graphics technology for today's most popular consumer entertainment platforms. He also serves on the U.S. Commerce Department's Information Systems Technical Advisory Committee.
Kirk was honored by the California Institute of Technology in 2009 with a Distinguished Alumni Award, its highest honor, for his work in the graphics-technology industry. In 2006, he was elected to the National Academy of Engineering for his role in bringing high-performance graphics to personal computers. He received the SIGGRAPH Computer Graphics Achievement Award in 2002 for his role in bringing high-performance computer graphics systems to the mass market.
Prior to joining NVIDIA, Kirk served from 1993 to 1996 as chief scientist and head of technology for Crystal Dynamics, a video game development company. From 1989 to 1991, he was an engineer in the Apollo Systems Division of HP.
Kirk is the inventor of close to 100 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. He holds BS and MS degrees in mechanical engineering from the Massachusetts Institute of Technology, and MS and PhD degrees in computer science from Caltech. Kirk is also the co-author (with Professor Wen-mei W. Hwu of the University of Illinois at Urbana-Champaign) of the popular parallel programming textbook, "Programming Massively Parallel Processors".
Software developers must take advantage of parallelism to keep up with the industry shift to multi-core and many-core processors, but in spite of decades of research, parallel programming remains difficult. To address this difficulty, Intel and Microsoft launched the Universal Parallel Computing Research Centers (UPCRC) at the University of California, Berkeley, and the University of Illinois at Urbana-Champaign. The UPCRC research agenda targets mainstream developers writing client applications rather than expert developers writing high-performance technical applications. This talk will look at two important UPCRC concepts that raise the abstraction level of parallel programming: design patterns and Selective Embedded Just-in-Time Specialization (SEJITS). A case study using these concepts will be presented from the perspective of a domain scientist who is not an expert developer.
Short Bio: Henry Gabb is a Principal Engineer in Intel Labs, currently working in the Academic Research Office creating research programs in parallel computing and life science. In his 11 years at Intel, Henry has worked mainly on parallel applications, algorithms, and math libraries. He is currently Program Director of the Universal Parallel Computing Research Centers that Intel co-sponsors with Microsoft. Henry holds a PhD in molecular genetics from the University of Alabama at Birmingham Schools of Medicine and Dentistry and a BS in biochemistry from Louisiana State University. Prior to joining Intel, he was Director of Scientific Computing at a Department of Defense high-performance computing facility.
The current trend in the high-performance computing industry is to provide hybrid systems with GPUs attached to multi-core processors. The two dominant programming models for these hybrid systems are currently CUDA and OpenCL; both give the programmer the power to extract performance from the accelerator, but at a steep cost in usability and portability. To be an effective HPC platform, these hybrid systems need a high-level software development environment that facilitates the porting and development of applications. In this talk I will present Cray's high-level parallel programming environment for accelerator-based systems, which consists of compilers, libraries, and tools designed to improve users' productivity on Cray's hybrid supercomputers.
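To make the usability cost mentioned above concrete, here is a minimal sketch of the explicit bookkeeping a low-level CUDA port typically involves; it uses only standard CUDA runtime calls, not Cray's environment, and the SAXPY kernel and function names are illustrative.

    #include <cuda_runtime.h>

    // Illustrative only: a simple SAXPY kernel plus the host-side bookkeeping
    // (allocation, copies, launch, cleanup) that a higher-level compiler and
    // tools environment aims to hide from the programmer.
    __global__ void saxpy(float a, const float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    void saxpy_on_gpu(float a, const float *x, float *y, int n) {
        float *d_x, *d_y;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void **)&d_x, bytes);                   // explicit device allocation
        cudaMalloc((void **)&d_y, bytes);
        cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);  // explicit host-to-device copies
        cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(a, d_x, d_y, n);    // explicit kernel launch
        cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);  // explicit copy back to host
        cudaFree(d_x);                                      // explicit cleanup
        cudaFree(d_y);
    }

Every line of host code above is accelerator plumbing rather than science; hiding that plumbing is exactly the productivity gap a high-level environment targets.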
Short Bio: Dr. Luiz DeRose is a Senior Principal Engineer and the Programming Environments Director at Cray Inc., where he is responsible for the programming environment strategy for all Cray systems. Dr. DeRose has a Ph.D. in computer science from the University of Illinois at Urbana-Champaign. With more than 20 years of high-performance computing experience and a deep knowledge of its programming environments, he has published more than 50 peer-reviewed articles in scientific journals, conferences, and book chapters, primarily on compilers and tools for high-performance computing.