SBAC - PAD 2009

21st International Symposium on Computer Architecture and High Performance Computing

São Paulo, Brazil, October 28-31, 2009






On Wednesday (10/28 from 1:30pm to 3:00pm)

"Cloud Computing: Vision, Tools, and Technologies for Delivering Computing as the 5th Utility", Rajkumar Buyya (The University of Melbourne and Manjrasoft)


Computing is being transformed into a model consisting of services that are commoditised and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements, without regard to where the services are hosted. Several computing paradigms have promised to deliver this utility computing vision, including Grid computing, P2P computing, and, more recently, Cloud computing. The latter term denotes the infrastructure as a Cloud from which businesses and users can access applications on demand from anywhere in the world. Cloud computing delivers infrastructure, platform, and software (applications) as subscription-based services in a pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), respectively. To realize this vision, vendors such as Amazon, HP, IBM, and Sun are starting to create and deploy Clouds in various locations around the world. In addition, companies with global operations require faster response times, and therefore distribute workload requests across multiple Clouds in different locations simultaneously. This creates the need for a computing environment that dynamically interconnects and provisions Clouds from multiple domains, within and across enterprises. Creating such Clouds and Cloud interconnections involves many challenges.

This tutorial (1) presents the 21st century vision of computing and identifies the various IT paradigms that promise to deliver it; (2) defines an architecture for creating market-oriented Clouds and federated computing environments by leveraging technologies such as virtual machines; (3) offers thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our recent Cloud Computing initiative, called Megha: (i) Aneka, a software system for providing PaaS within private or public Clouds and supporting market-oriented resource management; (ii) internetworking of Clouds for the dynamic creation of federated computing environments that scale elastic applications; (iii) creation of third-party Cloud brokering services for content delivery network and e-Science applications, and their deployment on the capabilities of IaaS providers such as Amazon and Nirvanix along with Grid mashups; and (iv) CloudSim, which supports modelling and simulation of Clouds for performance studies; and (5) concludes with the need for convergence of competing IT paradigms to deliver this 21st century vision, along with pathways for future research.
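To give a flavour of what a Cloud simulator such as CloudSim models, the sketch below is a minimal discrete-event simulation of VM provisioning in a datacenter, written in Python. All class and function names here are illustrative assumptions for this page, not CloudSim's actual (Java) API; the placement policy is a simple first-fit.

```python
import heapq
import itertools

class Host:
    """A physical machine with a number of free CPU cores."""
    def __init__(self, cores):
        self.free_cores = cores

class Datacenter:
    def __init__(self, hosts):
        self.hosts = hosts

    def place_vm(self, cores):
        # First-fit VM placement: use the first host with enough capacity.
        for host in self.hosts:
            if host.free_cores >= cores:
                host.free_cores -= cores
                return host
        return None

def simulate(requests, datacenter):
    """Process (arrival_time, cores, duration) VM requests in time order
    and return how many were placed and how many were rejected."""
    seq = itertools.count()          # tie-breaker for simultaneous events
    events = []                      # min-heap of (time, kind, seq, payload)
    for t, cores, dur in requests:
        heapq.heappush(events, (t, 0, next(seq), (cores, dur)))
    placed = rejected = 0
    while events:
        time, kind, _, payload = heapq.heappop(events)
        if kind == 0:                # a VM request arrives
            cores, dur = payload
            host = datacenter.place_vm(cores)
            if host is None:
                rejected += 1
            else:
                placed += 1
                heapq.heappush(events, (time + dur, 1, next(seq), (host, cores)))
        else:                        # a VM finishes: release its cores
            host, cores = payload
            host.free_cores += cores
    return placed, rejected

dc = Datacenter([Host(4), Host(2)])
print(simulate([(0, 4, 10), (1, 2, 5), (2, 4, 5), (12, 4, 5)], dc))  # → (3, 1)
```

The third request is rejected because both hosts are saturated at time 2, while the fourth succeeds once the first VM has released its cores; varying the placement policy or host capacities is exactly the kind of performance study such a simulator enables.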


Dr. Rajkumar Buyya is an Associate Professor of Computer Science and Software Engineering and Director of the Grid Computing and Distributed Systems (GRIDS) Laboratory at the University of Melbourne, Australia. He also serves as the founding CEO of Manjrasoft Pty Ltd., a spin-off company of the University that commercialises innovations originating from the GRIDS Lab. He has authored over 250 publications and three books. The books on emerging topics that Dr. Buyya has edited include High Performance Cluster Computing (Prentice Hall, USA, 1999), Content Delivery Networks (Springer, 2008), and Market-Oriented Grid and Utility Computing (Wiley, 2009). Dr. Buyya contributed to the creation of high-performance computing and communication system software for the Indian PARAM supercomputers. He pioneered the economic paradigm for service-oriented Grid computing and developed key Grid technologies, such as Gridbus, that power emerging e-Science and e-Business applications. In this area, he has published hundreds of high-quality, high-impact research papers that are widely cited. Based on an analysis of ISI citations, the January 2007 issue of the Journal of Information and Software Technology ranked Dr. Buyya's work (published in the Software: Practice and Experience journal in 2002) among the "Top 20 cited Software Engineering Articles in 1986-2005". He received the 2008 Chris Wallace Award for Outstanding Research Contribution from the Computing Research and Education Association of Australasia, and is the recipient of the 2009 IEEE Medal for Excellence in Scalable Computing.


On Saturday (10/31 from 2:50pm to 4:20pm)

"Visualization for Performance Debugging of Large-Scale Parallel Applications", Lucas Mello Schnorr (Federal University of Rio Grande do Sul and Grenoble Institute of Technology), Benhur de Oliveira Stein (University of Santa Maria), Guillaume Huard (University Joseph Fourier), and Jean-Marc Vincent (University of Grenoble)


Performance analysis of parallel and distributed applications has been used for several years to optimize code and improve resource utilization. Despite the strong research effort spent over these years developing new mechanisms to enhance the analysis task, new challenges remain. These challenges are mainly due to the appearance of new concepts, such as Grid Computing and, more recently, Cloud Computing, but also to the availability of new infrastructures, such as large-scale clusters composed of thousands of multi-core processing units. In this tutorial, we present the main aspects of performance debugging of parallel applications, paying special attention to the impact of large scale on commonly used techniques, from tracing mechanisms to analysis through visualization. Since the tutorial addresses one of the most interesting tasks of high performance computing and the challenges of large-scale application analysis, it is relevant to a broad spectrum of attendees, from computer architecture to high-performance parallel computing. The tutorial is divided into four parts: the definition of common problems faced by parallel application developers; the methodology commonly applied to understand these problems; the implementation of this methodology in grid and cluster environments; and a demonstration of real-case analysis, from trace to visualization. The first part characterizes the main problems that parallel application developers must face in order to optimize their code, such as identifying computing and network bottlenecks, low resource utilization leading to load imbalances among processes, and poor performance in large-scale situations.
The second part presents the methodology and theoretical approaches used to detect these problems through execution tracing and visualization analysis. The third part presents the tools and techniques used to analyze performance problems, with examples of visualization tools. The fourth part demonstrates the analysis of a real-world large-scale parallel application executed on a Grid, as well as of synthetic traces that illustrate common problems found during performance debugging.
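The load-imbalance problems mentioned above can be detected directly from execution traces. The Python sketch below is a minimal, hypothetical example of that step: the three-field event format and the max/mean imbalance metric are assumptions for illustration, not the format of any particular tracing tool discussed in the tutorial.

```python
# Given trace events of the form (process, start_time, end_time),
# compute the total busy time per process and a simple imbalance metric.

def busy_time(events):
    """Sum the busy intervals recorded for each process."""
    totals = {}
    for proc, start, end in events:
        totals[proc] = totals.get(proc, 0.0) + (end - start)
    return totals

def imbalance(totals):
    """Max/mean busy-time ratio: 1.0 means perfectly balanced load."""
    loads = list(totals.values())
    return max(loads) / (sum(loads) / len(loads))

trace = [("p0", 0.0, 4.0), ("p1", 0.0, 1.0),
         ("p0", 5.0, 7.0), ("p1", 2.0, 3.0)]
totals = busy_time(trace)   # {"p0": 6.0, "p1": 2.0}
print(imbalance(totals))    # → 1.5: p0 does 6.0 of the 8.0 units of work
```

Visualization tools take this a step further by drawing each interval on a space-time (Gantt-like) chart, where such imbalances appear as gaps in some processes' timelines.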


Lucas Mello Schnorr is a PhD candidate at the Federal University of Rio Grande do Sul (Porto Alegre, Brazil) and the Grenoble Institute of Technology (France). He obtained his Bachelor's degree in Computer Science at the Federal University of Santa Maria, Brazil, in 2003, and his Master's degree at the Federal University of Rio Grande do Sul in 2005. His current research interests include performance visualization of parallel applications and information visualization techniques applied to the analysis of application traces. He is the main developer of Triva and DIMVisual, tools used to visualize and integrate large-scale traces.

Prof. Dr. Benhur de Oliveira Stein has taught computer science at the Federal University of Santa Maria (Rio Grande do Sul, Brazil) since 1991, where he is now a Professor. Before that, he worked as an electrical engineer from 1983 to 1988. His current interests include performance debugging and visualization of parallel and distributed programs, cluster computing, and distributed systems. He obtained a Master's degree in computer science from the Universidade Federal do Rio Grande do Sul in Porto Alegre, Brazil, in 1992, and a PhD in computer science from Université Joseph Fourier in Grenoble, France, in 1999.

Prof. Dr. Guillaume Huard is a full-time associate professor at University Joseph Fourier in Grenoble (France). He conducts his research within the computer science laboratory of Grenoble (LIG Lab), jointly operated by CNRS, INRIA, UJF, UPMF, and INPG, where he is a member of the MOAIS team. His main areas of expertise are instruction scheduling, resource-constrained scheduling, batch schedulers, deployment and parallel remote execution tools, and the monitoring and debugging of parallel applications. He is the author of several international papers in journals such as IJPP and at conferences such as Euro-Par, Grid, and CCGrid. He is also involved in several software development projects, such as TakTuk and OAR.

Prof. Dr. Jean-Marc Vincent is an associate professor of computer science at the University of Grenoble. He received the agrégation in mathematics in 1986 and a PhD in Computer Science in 1990 from the University of Paris XI (France). In 1991, he joined the INRIA-IMAG APACHE project at the University of Grenoble in the performance evaluation group. He is now a member of the MESCAL INRIA project in the Laboratory of Informatics of Grenoble. His research interests concern stochastic modeling and analysis of massively parallel and distributed computer systems. These include: fundamental studies on the dynamics of stochastic discrete event systems (Markovian models, (max,+)-algebra, stochastic ordering, stochastic automata networks); software simulation techniques with application to high-speed networks (rare event estimation, quality of service, etc.); and measurement software tools that provide aggregated or disaggregated information on the behavior of parallel or distributed program executions (software tracers, statistical on-line analyzers, etc.). The application fields of this research concern scientific computation and efficient parallel algorithms, with a specific focus on statistical analysis tools. He participates in software development projects on parallel tracing (Tape-PVM), visualization (Pajé), simulation (PSI and PSI2), and dynamic web servers (Hypercarte). He has supervised about 12 PhD theses in computer science.


Contact: sbac2009