Abstract: Clusters revolutionized computing by making supercomputer capabilities widely available. But one of the main drivers of that revolution, the rapid doubling of processor clock rates, ran out of steam several years ago. To maintain (or even increase) the historic rate of improvement in computing power, processor designs are rapidly increasing parallelism at all levels, including more functional units, more cores, and ways to share resources among threads. Heterogeneous designs that use more specialized processors such as GPGPUs are becoming common. The scale of high-end systems is also getting larger, with 1000-core systems becoming commonplace and systems with over 300,000 cores becoming operational in 2011. However, the software and algorithms for these systems are still basically the same as when the cluster revolution began. Drawing on experiences with the sustained PetaFLOPS system, called Blue Waters, to be installed at Illinois in 2011, and with exploratory work into Exascale system designs, this talk will discuss some of the challenges facing the high performance and cluster community as scalability becomes increasingly important, and will review some of the developments in algorithms, programming models, and software frameworks that must complement the evolution of high performance computing hardware.
Bio: William Gropp received his B.S. in Mathematics from Case Western Reserve University in 1977, an M.S. in Physics from the University of Washington in 1978, and a Ph.D. in Computer Science from Stanford in 1982. He held the positions of assistant (1982-1988) and associate (1988-1990) professor in the Computer Science Department at Yale University. In 1990, he joined the Numerical Analysis group at Argonne, where he was a Senior Computer Scientist in the Mathematics and Computer Science Division, a Senior Scientist in the Department of Computer Science at the University of Chicago, and a Senior Fellow in the Argonne-Chicago Computation Institute. From 2000 through 2006, he was also Deputy Director of the Mathematics and Computer Science Division at Argonne. In 2007, he joined the University of Illinois at Urbana-Champaign as the Paul and Cynthia Saylor Professor in the Department of Computer Science. His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He has played a major role in the development of the MPI message-passing standard. He is co-author of the most widely used implementation of MPI, MPICH, and was involved in the MPI Forum as a chapter author for both MPI-1 and MPI-2. He has written many books and papers on MPI, including "Using MPI" and "Using MPI-2". He is also one of the designers of the PETSc parallel numerical library, and has developed efficient and scalable parallel algorithms for the solution of linear and nonlinear equations. Gropp was named an ACM Fellow in 2006 and received the Sidney Fernbach Award from the IEEE Computer Society in 2008.
Abstract: In the past couple of years the amount of data traffic in wireless cellular networks has surpassed that of voice traffic. This is partly because of the increased popularity of smartphones, which provide access to Internet services, video, music, and many other forms of digital content through the wireless network, and partly because of new emerging applications and services using machine-to-machine communications. This phenomenon has stressed the capacity of today's networks to the point that Quality of Service (QoS) is suffering severely and customer complaints are exploding. Next-generation 4G network technology will help, but it will not solve the problem because data traffic is increasing at exponential rates, well beyond the capacity of what 4G will be able to provide. At the same time, as cellular network standards have adopted IP and packet-switched protocols, it is now possible to apply Internet technology in the wireless infrastructure. In this presentation, I review the state of affairs. In particular, I discuss various Information Technology approaches to network bandwidth optimization that can be used at the edge of various interfaces in the wireless network to help alleviate congestion in the 3G cellular networks of today, while at the same time enabling new location-based applications and services.
Bio: Cesar Gonzales is an IBM Fellow, which is IBM’s highest technical position. Since early 2010 he has been working on innovations exploiting the convergence of IT and wireless technologies. Before that he was the research executive responsible for all interactions between IBM’s worldwide research labs and its electronics industry clients. He managed the transfer of hardware and software technology from research laboratories to industry solutions offered by IBM.
Early in his IBM career Dr. Gonzales contributed to the development of the JPEG standard for image compression, and was a leader in the development of the ubiquitous MPEG standard for digital video compression. Gonzales is an expert in image and video processing and compression; his experience spans the development of algorithms, multimedia system architectures, and chips, and he has led the development of several related hardware and software commercial digital video products sold in the millions. His contributions to MPEG technology were recognized by the IEEE, which elevated him to the grade of IEEE Fellow.
Gonzales has served as editor of IEEE Transactions on Circuits and Systems for Video Technology. He is currently on the editorial board of ACM's Computers in Entertainment. Besides receiving a number of corporate awards from IBM, he has also been recognized with "Outstanding Technical Achievement" awards by various other professional organizations, including the "Hispanic Engineers National Achievement Awards Conference" and the "Society of Hispanic Professional Engineers". Dr. Gonzales is listed as an inventor on more than twenty US patents and has published over 60 technical papers. He received his BS and Engineering degree in Electronics Engineering from Universidad Nacional de Ingeniería in Lima, Perú, and his PhD from Cornell University. He also received an honorary doctorate from the Universidad Privada de Tacna in Peru.
Abstract: The numerical simulation of complex physical systems has evolved into a key investigation tool for the physicist as well as a grand challenge of high performance computing. Several areas of computational physics -- Quantum Chromodynamics, Computational Fluid Dynamics, the evolution of Gravitational Systems -- have been at the forefront of this trend in the last two or three decades; over the years, several application-driven special architectures have been developed to provide -- efficiently and cheaply -- computing power to these applications. From the point of view of computer architecture, these systems are characterized by a very close match between algorithm and machine structure, and by a very effective exploitation of the large degree of available parallelism. Some of the ideas developed in these machines have later been adopted in commercial systems. This talk will describe some of these endeavors, focusing on the most significant architectural choices -- at both the processor level and the system level -- that have been developed over the years. The talk will pay closer attention to the features that are most at variance with those present in conventional systems, and will discuss methodologies able to assess quantitatively the match between an application and a corresponding application-driven architecture.
Bio: Raffaele Tripiccione (born 1956) obtained his laurea degree in Physics from the University of Pisa. He carried out his doctoral studies at Scuola Normale Superiore in Pisa.
He spent his early scientific career at Istituto Nazionale di Fisica Nucleare (INFN). He moved to the Università di Ferrara, as a full professor of physics, in the year 2000. Over the years he has spent extended visits at several scientific institutions, such as Fermilab in the US and CERN in Geneva. He has supervised approximately 20 doctoral theses and a large number of master's theses.
His research interests are in the area of computational theoretical physics. Over the years he has focused his work on Lattice Gauge Theories (LGT), turbulent fluid dynamics and complex systems (mainly spin-glasses).
He has played a major role in the development of several application-driven high performance computer systems for physics applications. He is one of the founders of the French-German-Italian APE project, which has developed several generations of LGT-optimized computer systems.
He has been the head of the project from 1990 to 2000. In recent years, he has been involved in the Italian-Spanish JANUS project, which develops reconfigurable computer systems for applications in condensed-matter physics.