(In Last Name Alphabetical Order)
- Professor David Abramson (Director, Research Computing Centre)
- Xuebin Chi (Chinese Academy of Sciences, CN)
- Jung-Hsin Lin (Academia Sinica, TW)
- Miron Livny (University of Wisconsin, US)
- Richard Marciano (Digital Curation Innovation Center, U. of Maryland, US)
Professor David Abramson
Director, Research Computing Centre
Caches all the way down: Infrastructure for Data Science
The rise of big data science has created new demands for modern computer systems. While floating point performance has driven computer architecture and system design for the past few decades, there is renewed interest in the speed at which data can be ingested and processed. Early exemplars such as Gordon, the NSF-funded system at the San Diego Supercomputer Center, shifted the focus from pure floating point performance to memory and IO rates. At the University of Queensland we have continued this trend with the design of FlashLite, a parallel cluster equipped with large amounts of main memory, Flash disk, and a distributed shared memory system (ScaleMP’s vSMP). This allows applications to place data “close” to the processor, enhancing processing speeds. Further, we have built a geographically distributed multi-tier hierarchical data fabric called MeDiCI, which provides an abstraction of very large data stores across the metropolitan area. MeDiCI leverages industry solutions such as IBM’s Spectrum Scale and SGI’s DMF platforms.
Caching underpins both FlashLite and MeDiCI. In this talk I will describe the design decisions and illustrate some early application studies that benefit from the approach.
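The tiered-caching idea described above — data promoted from a large remote store into faster storage "close" to the processor — can be illustrated with a minimal sketch. This is a hypothetical simplification for readers unfamiliar with the concept, not the actual FlashLite or MeDiCI implementation; all class and variable names are invented for illustration.

```python
class TieredCache:
    """Two-tier cache sketch: look up a key in a fast local tier first,
    then fall back to a slower backing store, promoting hits into the
    faster tier (hypothetical illustration, not the MeDiCI API)."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity      # max entries held in the fast tier
        self.fast = {}                # e.g. local memory / flash
        self.backing = backing_store  # e.g. a remote data fabric

    def get(self, key):
        if key in self.fast:          # hit: served from the fast tier
            return self.fast[key]
        value = self.backing[key]     # miss: fetch from the backing store
        if len(self.fast) >= self.capacity:
            # simple FIFO eviction to make room in the fast tier
            self.fast.pop(next(iter(self.fast)))
        self.fast[key] = value        # promote into the fast tier
        return value


# Usage: model the remote store as a plain dict
remote = {"dataset-a": b"...", "dataset-b": b"..."}
cache = TieredCache(capacity=1, backing_store=remote)
cache.get("dataset-a")  # first access goes to the backing store
cache.get("dataset-a")  # second access is served from the fast tier
```

Real systems layer several such tiers (memory, flash, disk, remote site) and use smarter eviction policies, but the fall-through-and-promote pattern is the same.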
David has been involved in computer architecture and high performance computing research since 1979.
He has held appointments at Griffith University, CSIRO, RMIT and Monash University.
Prior to joining UQ, he was the Director of the Monash e-Education Centre, Science Director of the Monash e-Research Centre, and a Professor of Computer Science in the Faculty of Information Technology at Monash. From 2007 to 2011 he was an Australian Research Council Professorial Fellow.
David has expertise in High Performance Computing, distributed and parallel computing, computer architecture and software engineering.
He has produced in excess of 200 research publications, and some of his work has also been integrated in commercial products. One of these, Nimrod, has been used widely in research and academia globally, and is also available as a commercial product, called EnFuzion, from Axceleon.
His world-leading work in parallel debugging is sold and marketed by Cray Inc, one of the world's leading supercomputing vendors, as a product called ccdb.
David is a Fellow of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the Australian Academy of Technology and Engineering (ATSE), and the Australian Computer Society (ACS). He is currently a visiting Professor in the Oxford e-Research Centre at the University of Oxford.
Dr. Xuebin Chi
Deputy Director, Computing Center of Chinese Academy of Sciences
High Performance Computing Environment and Applications in CAS
During the past twenty years, CAS has made great progress in high performance computing. Two applications from CAS were 2016 ACM Gordon Bell Prize finalists, and one of them won the prize. Another historic breakthrough came in 2016 when the Sunway TaihuLight supercomputer took the No. 1 spot with record-breaking capacity, following Tianhe-2, which had led the TOP500 supercomputer list for three consecutive years. This keynote will introduce how the supercomputing environment of CAS and the CNGRID service environment have evolved in recent years. It will also discuss the extent to which supercomputing and scientific visualization can support research in various areas including high energy physics (e.g. ATLAS), new energy power, material science, and meteorology.
As one of China's academic leaders in high performance computing and grid computing, Prof. Chi has led and participated in many research projects, including the National High Technology Research and Development Program of China, the Major State Basic Research Development Program of China, the General Program of the National Natural Science Foundation of China, the Knowledge Innovation Project of the Chinese Academy of Sciences, and others.
Professor Miron Livny
University Of Wisconsin-Madison
On-the-fly Capacity Planning in Support of High Throughput Workloads
Miron Livny received a B.Sc. degree in Physics and Mathematics in 1975 from the Hebrew University and M.Sc. and Ph.D. degrees in Computer Science from the Weizmann Institute of Science in 1978 and 1984, respectively. Since 1983 he has been on the Computer Sciences Department faculty at the University of Wisconsin-Madison, where he is currently the John P. Morgridge Professor of Computer Science and director of the Center for High Throughput Computing (CHTC), leads the HTCondor project, and serves as the principal investigator and technical director of the Open Science Grid (OSG). He is a member of the scientific leadership team of the Morgridge Institute of Research, where he leads the Software Assurance Market Place (SWAMP) project and serves as the Chief Technology Officer of the Wisconsin Institutes of Discovery.
Dr. Livny's research focuses on distributed processing and data management systems and involves close collaboration with researchers from a wide spectrum of disciplines. He pioneered the area of High Throughput Computing (HTC) and developed frameworks and software tools that have been widely adopted by academic and commercial organizations around the world.
Professor Richard Marciano
Director, Digital Curation Innovation Center (DCIC), University of Maryland
The Emergence of Computational Archival Science
The large-scale digitization of analog archives, the emerging diverse forms of born-digital archives, and the new ways in which researchers across disciplines (as well as the public) wish to engage with archival material are resulting in disruptions to traditional archival theories and practices. Increasing quantities of ‘big archival data’ present challenges for the practitioners and researchers who work with archival material, but also offer enhanced possibilities for scholarship through the application of computational methods and tools to the archival problem space, and, more fundamentally, through the integration of ‘computational thinking’ with ‘archival thinking’. The talk will discuss these paradigm shifts in the context of e-infrastructures.
Richard is a professor in the College of Information Studies at the University of Maryland and director of the newly formed Digital Curation Innovation Center (DCIC). He comes from the School of Information and Library Science (SILS) at the University of North Carolina at Chapel Hill, where he served as professor and director of the Sustainable Archives and Leveraging Technologies (SALT) lab. Prior to that, he conducted research for over a decade at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, with an affiliation in the Division of Social Sciences in the Urban Studies and Planning program. His research interests center on digital preservation, sustainable archives, cyberinfrastructure, and big data. He is currently the U. Maryland lead on a $10.5M 2013-2018 NSF/DIBBs implementation grant with the National Center for Supercomputing Applications at the U. of Illinois Urbana-Champaign called "Brown Dog". He holds degrees in Avionics and Electrical Engineering and a Master's and Ph.D. in Computer Science from the University of Iowa, and completed a postdoc in Computational Geography.