The presentation will give an overview of the latest vector supercomputer, SX-ACE, covering its architectural highlights and applications. NEC has pursued higher sustained performance through world-leading single-core performance and memory bandwidth. This is also reflected in the system's top efficiency on the HPCG benchmark, which was designed to provide a more relevant metric for ranking HPC systems. The SX Series has a proven sales track record in markets such as academia, earth and environmental science, disaster mitigation, and materials design. Furthermore, the talk will outline the concept of NEC's future system, also aimed at big data solutions, which is expected to serve as a powerful vehicle for tackling a range of scientific and social issues.
Hiroshi TAKAHARA is currently a senior director of the IT Platform Division at NEC. After joining NEC with an academic background in geophysics, he has been engaged in sales activities related to high-performance computing systems and scientific application software projects. He has also led governmental research and development projects. His current responsibilities include marketing and community relations. He received the ACM Gordon Bell Prize in 2002 for climate simulations on the Earth Simulator.
The challenges in today's high-end computing market are driving changes in the way supercomputers are designed. The scale of current and future high-end systems, with wide nodes, many integrated cores, more threads per processor, longer vector lengths, and more complex memory hierarchies, brings a new set of challenges for application developers. Are you ready for the future of high-performance computing? Is your application's performance portable? The technology changes in the supercomputing industry are forcing computational scientists to confront new critical system characteristics that will significantly impact the performance and scalability of applications. With this new generation of supercomputers, application developers need a programming environment that maximizes programmability and eases porting and tuning efforts, without losing sight of performance portability across a wide range of systems. They need systems that can adapt to the application's needs and a programming environment that can address and hide the issues of scale and complexity in high-end HPC systems. They need sophisticated compilers, tools, and adaptive runtime systems that help them solve multi-disciplinary and multi-scale problems while achieving high levels of performance. In this talk I will present Cray's adaptive supercomputing strategy, which integrates scalar processing, vector processing, multithreading, and hardware accelerators in a single high-performance computing platform. The strategy provides a powerful programming environment, with intelligent compilers, adaptive libraries, tools, and runtime systems, that helps users solve multi-disciplinary and multi-scale problems with high levels of performance and programmability.
Dr. Luiz DeRose is a Senior Principal Engineer and the Programming Environments Director at Cray Inc., where he is responsible for the programming environment strategy for all Cray systems. Before joining Cray in 2004, he was a research staff member and the Tools Group Leader at the Advanced Computing Technology Center at IBM Research. Dr. DeRose holds a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. With more than 20 years of high-performance computing experience and a deep knowledge of its programming environments, he has published more than 50 peer-reviewed articles in scientific journals, conferences, and book chapters, primarily on compilers and tools for high-performance computing. Dr. DeRose participated in the definition and creation of the OpenACC standard for high-level programming of accelerators. He is the Organizing and Program Committee co-chair of the 10th International Workshop on OpenMP (IWOMP) in 2014, was the Global Chair for the Multicore and Manycore Programming topic at Euro-Par 2013, and was the Program Committee co-chair of the 21st International Conference on Parallel Architectures and Compilation Techniques (PACT 2012).
The HPC community is driven by the need for compute power. The next challenge, not expected to be reached before 2020, is Exascale computing (10^18 operations per second). The road to Exascale is paved with severe roadblocks that must be removed to reach this level of computing power. The first part of this talk identifies and describes the top five roadblocks: energy, memory and data movement, programming models, reliability and resiliency, and, last but not least, co-design. The second part presents the Bull approach to each of these key challenges, as part of the Bull Exascale program, which was put in place in 2014 and will run through 2020.
Mr. Gerardo Ares is an Atos Big Data & Security Competence Center leader and HPC pre-sales solution designer in Brazil. He has worked at Bull/Atos since 2007 and over the last 5 years has specialized in HPC and critical system environments. He has 18 years of industry experience in HPC, system and network administration, application middleware, Unix-like distributed computing, and network programming. He holds a bachelor's degree in computer science from the Universidad de la República del Uruguay (UdelaR). He also worked for 15 years at the Instituto de Computación (InCo) of UdelaR, where he was co-responsible for the "Introduction to High Performance Computing" postgraduate course for 5 years and responsible for the Operating Systems course and the Operating Systems workshop in Linux for 6 years. He was also involved in several HPC projects at the InCo as a programmer in shared- and distributed-memory environments. He is currently a Master's student at UdelaR, working on a thesis in HPC numerical linear algebra using accelerators in distributed-memory environments.
Within only a couple of years, the SENAI CIMATEC Supercomputing Center for Industrial Innovation has gone from plans and blueprints on paper to an active, secure, research-oriented HPC facility hosting several world-class projects and running the best-performing HPC system operating in Latin America. In this talk we will cover a bit of the history behind the SENAI CIMATEC HPC Center, showcasing a few challenges and accomplishments in the setup and operation of a large HPC facility in Brazil. The focus will be mostly on the Oil & Gas project portfolio. The talk will conclude by addressing the HPC Center's vision for creating opportunities for, and interacting and collaborating with, the HPC research ecosystem in Brazil and abroad.
Renato Miceli is Director General at the SENAI CIMATEC Supercomputing Center for Industrial Innovation in Salvador, Brazil. He has a BSc. in Computer Science (hons) from the Federal University of Campina Grande and is a PhD candidate in Computer Science at Université de Rennes 1. At SENAI CIMATEC, Renato oversees the technical and scientific strategies and operation of the Supercomputing Center, including managing the HPC infrastructure (#1 in Latin America, 405 TFLOPS, #167 in the Top500), carrying out R&D&I projects with academia and industry in several domains, and promoting HPC training and technology transfer services. Renato has extensive expertise in both project management and technical delivery, especially performance engineering for cryptography, geophysical analysis, molecular dynamics, meteorology, and financial services. His current projects focus on Oil & Gas, where he is the PI of projects with the O&G industry and PI of the SENAI CIMATEC Intel Parallel Computing Center for O&G. Until mid-2014 he was a Computational Scientist at the Irish Centre for High-End Computing (ICHEC), where his work on porting financial applications to accelerators received the 2012 HPCwire Readers' Choice Award for the "Best use of HPC for financial services". Renato is also a PRINCE2 project manager, an ISTQB CTFL Certified Tester, and a recipient of the 2012 HiPEAC Collaboration Grant, which enabled his placement as a Visiting Scholar at the University of Delaware in 2013.
Modernizing your code on Intel® architecture can help you achieve breakthrough performance for highly parallel applications, and you won't have to re-code your entire problem or master new tools and programming models. While many applications already take advantage of modern hardware, many more neither extract the parallelism in their algorithms nor exploit modern features such as larger caches, SIMD, threading, and new memory technologies.
This presentation provides an overview of how "code modernization techniques" were applied in real case studies to achieve substantial speedups and, most importantly, how developers did so on existing hardware while keeping their code optimized for future processors and co-processors such as Intel® Xeon® and Xeon Phi™.
Igor Freitas is an Application Engineer at Intel Brazil in charge of the High Performance Computing initiatives of the Software and Services Group in Latin America. He currently coordinates the Intel® Modern Code program in Brazil, whose major goal is to provide full code-modernization workshops to the broader developer community. Over the last two years he has acted as the Technical Lead responsible for architecting an HPC cluster for the Intel Innovation Center in Brazil (Rio de Janeiro), focused on research around "HPC as a Service", Big Data, and Internet of Things pilot projects in verticals such as Oil & Gas, finance, and healthcare. He has worked in high-performance computing and software development since 2008 and holds a BS in Information Technology and a Master's degree in Electrical Engineering, both from the State University of Campinas (UNICAMP).
In this session, HP will present the variety of solutions, services, and partnerships that made it the absolute leader in the TOP500 list. We will also present some of the new disruptive technologies being developed at HP Labs that could fundamentally change computing.
Cristiano Maciel is the lead solutions architect for High Performance Solutions at HP Brazil. Cristiano holds a bachelor's degree in computer science from the Federal University of Santa Catarina, and in his more than 8 years at HP Brazil he has designed HPC clusters ranging from tens to hundreds of nodes for a wide variety of workloads. He is also a solutions architect for other types of solutions, such as SAP systems, object storage, and virtualization.