In alphabetical order by last name:
- Kohei Cho (Tokai University, JP)
- Ian Foster (University of Chicago, USA)
- Bob Jones (CERN, CH)
- Kenichi Miura (NII, JP)
- Takashi Sasaki (KEK, JP)
Recovery Monitoring of Tsunami Damaged Area using Remote Sensing
On March 11, 2011, the Great East Japan Earthquake struck the Tohoku region of Japan. A huge area along the northeast coast of Japan was seriously damaged by the magnitude 9.0 earthquake and the subsequent tsunami. Later that year, the author set up a project for monitoring the recovery of the tsunami-damaged areas of Miyagi Prefecture through ground surveys and satellite image data analysis. Environmental education is another important aspect of this project. The project was funded by the Japan Society for the Promotion of Science (JSPS). The first term ran from 2012 to 2016, and we are now in the second term, which will last until 2021. In this project, we have been monitoring the recovery status of various damaged areas of Miyagi Prefecture by visiting the areas twice a year and comparing multi-temporal satellite images. We are also involving students in environmental education. In my talk, various onsite photos and satellite images, which reflect the dramatic recovery of the areas, will be presented. Time series of satellite images show how the seriously destroyed paddy fields near the mouth of the Kitakami River rapidly recovered through landfilling. Multi-temporal analysis of MODIS NDVI was also very useful for evaluating the recovery status of paddy fields. We cannot stop disasters; they come suddenly. However, with proper preparation, we can minimize the damage they cause. Several ways to utilize remote sensing technology for quick disaster monitoring will also be proposed.
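The NDVI analysis mentioned above rests on a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), computed per pixel. A minimal sketch of that computation follows; the band values are hypothetical illustrations, not actual MODIS reflectances:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero over water or no-data pixels.
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

# Hypothetical reflectance values: healthy vegetation reflects strongly
# in near-infrared and weakly in red, so its NDVI is higher.
print(ndvi([0.5, 0.3], [0.1, 0.25]))
```

Comparing such NDVI maps across acquisition dates (multi-temporal analysis) is what allows the recovery of paddy fields to be tracked quantitatively.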
[Satellite images shown: Feb. 27, 2011; March 14, 2011; Aug. 21, 2012; Aug. 6, 2015]
Professor, Department of Human & Information Science
Executive Director, Research Promotion Division, Tokai University
Executive Director, Tokai University Research & Information Center (TRIC)
Kohei Cho graduated from the Department of Applied Physics at the Tokyo University of Science in 1979 and completed his master's course on remote sensing at Chiba University in 1981. After working for ten years at the Remote Sensing Technology Center of Japan (RESTEC) as a remote sensing scientist, he joined Tokai University. He has been the General Secretary of the Asian Association on Remote Sensing (AARS) since 2009. He has published more than 100 papers on remote sensing in national and international journals and proceedings. He is also a co-author of 15 books on remote sensing and image processing. His scientific interests include, but are not limited to, sea ice monitoring using passive microwave sensors, disaster monitoring from space, and e-Learning. He was awarded the Dr. Boon Indrambarya Gold Medal in 2009 and the Samuel Gamble Award in 2012. Since 2012, he has been conducting a project, funded by JST, on monitoring the damage and recovery from the Japan Earthquake using remote sensing.
Learning Systems for Science
New learning technologies seem likely to transform much of science, as they are already doing for many areas of industry and society. We can expect these technologies to be used, for example, to obtain new insights from massive scientific data and to automate research processes. However, success in such endeavors will require new learning systems: scientific computing platforms, methods, and software that enable the large-scale application of learning technologies. These systems will need to enable learning from extremely large quantities of data; the management of large and complex data, models, and workflows; and the delivery of learning capabilities to many thousands of scientists. In this talk, I review these challenges and opportunities and describe systems that my colleagues and I are developing to enable the application of learning throughout the research process, from data acquisition to analysis.
The European Open Science Cloud on a global stage
Over the past few years, numerous policy makers from around the world have articulated a clear and consistent vision of global Open Science as a driver for enabling a new paradigm of transparent, data-driven science and for accelerating innovation. In Europe, this vision is being realised through an ambitious programme under the heading of the European Open Science Cloud (EOSC). The EOSC will offer Europe's 1.7 million researchers and 70 million professionals in science, technology, the humanities and social sciences a virtual environment with open and seamless services for the storage, management, analysis and re-use of research data, across borders and scientific disciplines, by federating existing scientific data infrastructures that are currently dispersed across disciplines and EU Member States. The research communities that the EOSC aims to serve span the globe, so the EOSC cannot operate in isolation but must find ways to be part of the broader, international open science movement. This presentation will give an update on the status of the EOSC and the approaches being considered for how it can engage at a global level.
Bob Jones is a senior member of the scientific staff at CERN (http://www.cern.ch) and a leader of the Helix Nebula initiative (http://www.helix-nebula.eu/), a public-private partnership to explore the use of commercial cloud services for big-data science applications. He was the coordinator of the recently completed HNSciCloud Horizon 2020 Pre-Commercial Procurement project (http://www.hnscicloud.eu/), which procured innovative cloud services to establish a cloud platform for the European research community and contributes to the European Open Science Cloud (https://ec.europa.eu/research/openscience/index.cfm?pg=open-science-cloud). Bob is the rapporteur for the EOSC Sustainability Working Group (https://www.eoscsecretariat.eu/working-groups/sustainability-working-group).
HNSciCloud won the 2019 Procura+ award for procuring innovative open source software (https://www.hnscicloud.eu/news/cern-winner-of-the-procura-awards-2019). His experience in the distributed computing arena includes mandates as the technical director and then project director of the EGEE projects (2004-2010), which led to the creation of EGI (http://www.egi.eu/).
Recent Trends in HPC Architecture and Random Number Generation in the Exascale Era
The number of nodes/cores in HPC systems has been increasing rapidly in recent years; some currently available systems are already equipped with more than one million cores. As the degree of parallelism reaches this level, the performance of some traditional numerical algorithms needs to be re-examined, due to a widening imbalance between computational capability and the available memory size per core and/or the inter-processor communication bandwidth. In this regard, the Monte Carlo Methods (MCMs) possess the following advantageous characteristics:
(1) A very high degree of parallelism exists inherently, with very little information exchanged across nodes/cores,
(2) The required memory size per node/core can be smaller than that of some deterministic mesh-based algorithms,
(3) Due to the statistical nature of MCM algorithms, some degree of fault resilience can be expected.
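The first of these characteristics can be made concrete with a toy example: a Monte Carlo estimate of pi in which each worker runs a completely independent stream and only a single partial result per worker is ever combined. This is an illustrative sketch, not code from the talk:

```python
import random

def pi_estimate(n_samples, seed):
    """One independent Monte Carlo worker: no data is exchanged with others."""
    rng = random.Random(seed)  # each "node" gets its own stream via its seed
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

# Workers run fully in parallel in principle; only one number per worker
# is communicated at the end, then averaged. The average approaches pi.
partials = [pi_estimate(100_000, seed) for seed in range(8)]
print(sum(partials) / len(partials))
```

The quality of this scheme hinges entirely on the independence of the per-worker random number streams, which is precisely the motivation for the generator requirements discussed below.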
Therefore, more consideration should be given to expanding the application areas of MCMs, for example to the solution of elliptic partial differential equations. In order to conduct Monte Carlo calculations correctly and efficiently, the capability to generate high-quality pseudo-random number sequences is crucial. The desirable features are:
(1) Long period of the sequence,
(2) High statistical quality, both within a single sequence and across multiple sequences,
(3) Fast computation in generating the random number sequences,
(4) Efficient initialization of multiple sequences for millions of cores.
We have been developing the “Multiple Recursive Generator (MRG)” MRG8, based on an 8th-order full polynomial with a large modulus (2^31-1), which generates statistically high-quality random number sequences with a very long period (~10^74) and offers an easily implementable jump-ahead scheme for efficient parallelization. We have recently reimplemented MRG8 for Intel’s KNL and NVIDIA’s P100 GPU, named MRG8-AVX512 and MRG8-GPU, respectively.
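The jump-ahead scheme can be sketched with the standard companion-matrix technique: advancing an 8th-order linear recurrence by n steps is equivalent to raising its 8x8 companion matrix to the n-th power modulo the modulus, which costs O(log n) matrix products. The sketch below uses illustrative coefficients, not the published MRG8 constants:

```python
M = 2 ** 31 - 1  # the large prime modulus used by MRG8

# Illustrative coefficients only -- NOT the actual MRG8 constants.
A = [5, 3, 2, 7, 1, 4, 6, 8]

class MRG:
    """8th-order multiple recursive generator: x_n = sum(A[i]*x_{n-1-i}) mod M."""
    def __init__(self, seed_state):
        self.state = list(seed_state)  # [x_{n-1}, x_{n-2}, ..., x_{n-8}]

    def next(self):
        x = sum(a * s for a, s in zip(A, self.state)) % M
        self.state = [x] + self.state[:-1]
        return x

def companion_matrix():
    """One step of the recurrence as an 8x8 matrix acting on the state vector."""
    C = [A[:]]                      # first row: the recurrence coefficients
    for i in range(7):              # remaining rows shift the state down
        row = [0] * 8
        row[i] = 1
        C.append(row)
    return C

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(8)) % M for j in range(8)]
            for i in range(8)]

def jump_ahead(state, n):
    """Advance the state by n steps via binary exponentiation of the matrix."""
    C = companion_matrix()
    R = [[int(i == j) for j in range(8)] for i in range(8)]  # identity
    while n:
        if n & 1:
            R = mat_mul(R, C)
        C = mat_mul(C, C)
        n >>= 1
    return [sum(R[i][j] * state[j] for j in range(8)) % M for i in range(8)]
```

This is how millions of cores can each be handed a disjoint, far-apart slice of one long sequence without generating the intervening values, addressing desirable feature (4) above.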
This research has been conducted with Dr. Yusuke Nagasaka of Tokyo Institute of Technology (currently with Fujitsu Laboratories) and Dr. John Shalf of Lawrence Berkeley National Laboratory.
Kenichi Miura is a professor emeritus of the National Institute of Informatics, an emeritus fellow of the Fujitsu Laboratories, an affiliate with Lawrence Berkeley National Laboratory, and a visiting professor with the High Energy Accelerator Research Organization (KEK).
Dr. Miura received his B.S. degree in physics from the University of Tokyo in 1968, and his M.S. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign in 1971 and 1973, respectively. He was also a graduate research assistant in the ILLIAC IV Project. Dr. Miura joined Fujitsu in 1973, and since then he has been involved in mainframe computer design, high-performance computer design, and special-purpose processor design for radio astronomy. From 1992 to 1996, he was Vice President and General Manager of the Supercomputer Group at Fujitsu America, Inc.
In 1997, Dr. Miura became Chief Scientist in the HPC Division of Fujitsu, and in 2002 he was appointed a fellow of Fujitsu Laboratories Limited, a position he held until 2014.
In 2003 he joined the National Institute of Informatics (NII) as a professor in high end computing, and directed the National Research Grid Initiative (NAREGI) Project from 2003 to 2008, which was funded by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). From 2008 to 2012, Dr. Miura again directed a new national project called “Resource Linkage for e-Science (RENKEI)”, which was also funded by MEXT.
Dr. Miura also served as a visiting professor at the Computer and Communications Center of Kyushu University from 1990 to 1993, a visiting professor at the National Astronomical Observatory from 2008 to 2013, and a visiting professor at the Institute of Statistical Mathematics from 2013 to 2018.
Dr. Miura’s research interests include supercomputer architecture, grid computing, parallel numerical algorithms, and computational physics, especially pseudo-random number generation algorithms for Monte Carlo simulations.
Dr. Miura received IEEE Computer Society’s Seymour Cray Computer Engineering Award in 2009.
A quarter-century of an open-source software project
Geant4 is a software toolkit for simulating the interactions of particles with matter. Its development was started in 1994 by the Geant4 international collaboration. Before the establishment of the collaboration, researchers at KEK and CERN had been working independently on a successor to GEANT3 based on Object-Oriented technologies, for the SSC and LHC experiments respectively. The techniques of Object-Oriented Analysis and Object-Oriented Design were brought to the collaboration from KEK, together with early results of the analysis and design of a new detector simulation toolkit. The basic design of Geant4 was performed by researchers from the country where the sun rises. Today, Geant4 is very widely used in fields such as particle physics, space science, engineering, and medicine. The first Geant4 general paper has already been cited more than 11,000 times, and the number keeps increasing year by year. Our long-sustained efforts in not only the technical but also the sociological aspects of international collaboration will be introduced in this talk.
Takashi Sasaki has been a Professor at the Computing Research Center of KEK (High Energy Accelerator Research Organization, Japan) and a Professor at SOKENDAI (The Graduate University for Advanced Studies, Japan) since 2007. He received his doctorate in physics from Niigata University in 1992.
He started working on the development of a future detector simulation framework based on Object-Oriented Technology in 1991. This effort led to Geant4 and to the start of the collaboration with CERN, ESA, and institutions around the world. He is one of the original developers of Geant4 and carried out its overall design. Geant4 is now widely used in fields such as particle physics, space science, engineering, and medicine.
He later worked in the area of Grid computing and e-Science as a computing platform for experiments at KEK, and managed the Grid computing team at KEK from 2005 to 2015. Besides research, he is in charge of the design, procurement, and operation of IT systems at KEK. His current research interests include the simulation of interactions between particles and matter and its medical applications. He is leading a team developing MPEXS, a highly parallel radiation simulator for GPUs, to accelerate simulation.