SBAC-PAD'2002 - INVITED TALKS
High Performance Computing,
Computational Grid, and Numerical Libraries
Jack Dongarra, University of Tennessee, USA
In this talk we will look at how high-performance
computing has changed over the last 10 years and look toward the future in terms
of trends. In addition, we advocate the use of
"Computational Grids" to support "large-scale"
applications. These grids must provide transparent access to the complex mix of
resources - computational, networking,
and storage - that can be made available through the aggregation of resources. We will
look at how numerical library software can be run in an adaptive fashion to
take advantage of available resources.
In the last 50 years, the field of scientific computing has seen rapid,
sweeping changes—in vendors, in architectures, in technologies, and in the
users and uses of high-performance computer systems. The evolution of
performance over the same 50-year period, however, seems to have been a very
steady and continuous process. Moore's law is often cited in this context, and,
in fact, a plot of the peak performance of the various computers that could be
considered the "supercomputers" of their times clearly shows that
this law has held for almost the entire lifespan of modern computing. On
average, performance has increased every decade by about two orders of
magnitude.
Two statements have been consistently true in the realm of computer science:
(1) the need for computational power is always greater than what is available
at any given point, and (2) to access our resources, we always want the
simplest, yet the most complete and easy to use interface possible. With these
conditions in mind, researchers have directed considerable attention in recent
years to the area of grid computing. The ultimate goal is the ability to plug
any and all of our resources into a computational grid and draw on them as
needed, just as we plug our appliances into electrical sockets and draw power
from the electrical power grid today.
Advances in networking technologies will soon make it possible to use the
global information infrastructure in a qualitatively different way—as a
computational as well as an information resource. As described in the recent book "The Grid: Blueprint for a
New Computing Infrastructure," this "Grid" will connect the
nation's computers, databases, instruments, and people in a seamless web of
computing and distributed intelligence that can be used in an on-demand
fashion as a problem-solving resource in many fields of human endeavor—and, in
particular, for science and engineering.
The availability of Grid resources will give rise to dramatically new classes
of applications, in which computing resources are no longer localized, but
distributed, heterogeneous, and dynamic; computation is increasingly
sophisticated and multidisciplinary; and computation is integrated into our
daily lives, and hence subject to stricter time constraints than at present. The impact of these new applications will be
pervasive, ranging from new systems for scientific inquiry, through computing
support for crisis management, to the use of ambient computing to enhance
personal mobile computing environments.
In this talk we will explore the issues of developing a prototype system
designed specifically for the use of numerical libraries in the grid setting.
Will Vector ISAs Survive in
the Future?
Mateo Valero, Universitat
Politècnica de Catalunya, Spain
For many years, vector architectures were the
architecture of choice for building supercomputers. They were introduced around 1975 with the
first Cray-1, they dominated for more than 20 years, and they are still
active in processors designed by NEC and Cray. Nevertheless, their influence
and relative importance have been steadily decreasing in the list of the fastest
computers in the world. Since the middle of the past decade, massively parallel
computers built from high-performance microprocessors have been displacing
vector supercomputers from that list. It seems that classical vector
architectures may disappear in the near future.
In recent years, advances in technology and architecture have seemed to dictate the end
of these architectures. Nevertheless, these same technological advances, together with the needs
of new applications, could produce a renaissance for vector architectures. This
is our thesis in this talk.
In this talk, we will review the history of supercomputers: why vector
processors were the right choice in their time, and why they were displaced by
the appearance of the "killer micros" around 1990. After that, we will present the
advantages that current and future technology trends give vector ISAs,
allowing them to be implemented in a very efficient way.
Vector ISAs are the natural choice for expressing applications with data-level
parallelism, such as those associated with numerical and multimedia workloads. They
require fewer instructions to be executed and can make very effective use of the
high bandwidth and the huge number of transistors available on chip, reducing
power cost and mitigating the negative effect of wire delays without
penalizing clock speed. We will defend these advantages and
describe the research done in this field in recent years. Finally, we
will present some real products and projects for vector microprocessors moving in
this direction.
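To make the data-level-parallelism argument concrete, the C sketch below (an
illustration only, not material from the talk) shows the DAXPY kernel
strip-mined over an assumed maximum vector length MVL; on a vector machine each
strip corresponds to only a handful of vector instructions rather than several
scalar instructions per element.

    /* Illustrative sketch only, not material from the talk: the DAXPY kernel
     * written in strip-mined form, mirroring how a vector machine with an
     * assumed maximum vector length MVL would execute it. Each outer
     * iteration maps to only a handful of vector instructions (set length,
     * two loads, a multiply-add, a store) covering up to MVL elements,
     * instead of several scalar instructions per element. */
    #include <stddef.h>

    #define MVL 64  /* assumed maximum vector length, Cray-style registers */

    void daxpy(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i += MVL) {
            size_t len = (n - i < MVL) ? (n - i) : MVL;  /* set vector length */
            for (size_t j = 0; j < len; j++)             /* one vector strip  */
                y[i + j] = a * x[i + j] + y[i + j];
        }
    }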
Playing Distributed Systems with
Memory-to-Memory Communication
Liviu Iftode, University of Maryland, USA
Memory-to-memory communication is a communication
architecture that allows remote DMA operations to be performed silently, i.e.,
without involving the remote host. Given its low-latency and low-overhead
qualities, memory-to-memory communication has been extensively studied as
clustering support for high-performance distributed computing. More recently,
this concept has attracted a great deal of interest from industry, being
incorporated in the Virtual Interface Architecture (VIA) interconnect
specification, in InfiniBand, a new switch-based I/O architecture for servers,
and in high-performance transport solutions for remote storage and file
systems over IP. These trends extend the scope of
memory-to-memory communication to a more influential class of systems, namely
network servers, and to a broader class of problems, including availability
in addition to performance.
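To illustrate what a "silent" remote DMA operation looks like at the
programming level, the C fragment below sketches a one-sided RDMA write using
the InfiniBand verbs API; it is an illustration only, assuming an already
connected queue pair, a registered local buffer, and a remote address and rkey
exchanged out of band.

    /* Illustrative fragment, not material from the talk: posting a one-sided
     * RDMA write with the InfiniBand verbs API. Assumptions: the queue pair
     * qp is already connected, local_buf lies inside the registered memory
     * region mr, and the peer's buffer address and rkey were exchanged out of
     * band. Error handling and completion polling are omitted. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, uint32_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge;
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&sge, 0, sizeof(sge));
        sge.addr   = (uintptr_t)local_buf;  /* source: local registered memory */
        sge.length = len;
        sge.lkey   = mr->lkey;

        memset(&wr, 0, sizeof(wr));
        wr.opcode     = IBV_WR_RDMA_WRITE;  /* one-sided write, remote CPU idle */
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.send_flags = IBV_SEND_SIGNALED;  /* completion seen only on local CQ */
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        /* The adapter moves the data directly into the remote buffer; the
         * remote host's operating system and application never see the
         * transfer, which is what makes the operation "silent". */
        return ibv_post_send(qp, &wr, &bad_wr);
    }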
In this talk, I will explore several novel uses of memory-to-memory
communication in building robust and efficient systems software for distributed
servers. In particular, I will discuss three new system software architectures
which have been made possible by the new I/O and interconnect technologies
based on memory-to-memory communication: (i) intra-server splitting of
operating system functionality between the host and intelligent devices, (ii)
inter-server state migration for highly available Internet services, and (iii)
weak file server aggregation over SANs and WANs. In all these cases, the use of
silent memory-to-memory communication is essential to achieving the desired
performance and functionality. In the talk, I will also emphasize the caveats
and pitfalls of using memory-to-memory communication and share the lessons I
have learned in implementing multiple systems based on this communication
architecture.