Program
Date: Sept 1st
PASA 2015 Workshop, 8:30 AM-12:00 PM
Keynote: 8:30 AM-9:20 AM
High Energy-efficiency Memory Architecture Design
Yinhe Han, Chinese Academy of Sciences, China.
Session 1: 9:20 AM-10:10 AM
1) A methodology to build models and predict performance-power in CMPs
Rajiv Nishtala, Marc González Tallada and Xavier Martorell
2) HPC-Oriented Power Evaluation Method
Feng Zhang and Liang Chen
Coffee break: 10:10 AM-10:30 AM
Session 2: 10:30 AM-12:00 PM
3) Flex: Flexible and Energy Efficient Scheduling for Big Data Storage
Daokuan Ma, Yongwei Wu, Kang Chen and Weimin Zheng
4) Flexible Desktop Application Management and its Influence on Green Computing
Wenlei Zhu, Yongwei Wu and Kang Chen
5) A State-based Energy/Performance Model for Parallel Applications on Multicore Computers
Yawen Chen, Jason Mair, Zhiyi Huang, David Eyers and Haibo Zhang
Introduction to the Keynote
Abstract:
The power and performance of modern processors have become increasingly sensitive to the energy efficiency of main memory due to the aggravating issues of the “memory wall” and the “power wall”. While memory keeps evolving toward higher storage density and bandwidth, it struggles to keep down the overheads of data movement and data retention. To tailor a more energy-efficient memory architecture, researchers are looking for prospective solutions that improve the data movement and data retention mechanisms, which are the two major sources of overhead. This talk introduces two important advances we recently made in main memory architecture: a low-power DRAM refresh scheme, which greatly reduces memory operation power, and a computing memory architecture, which eliminates the overhead of data movement by migrating computation to the memory side. In contrast to conventional Processing-in-Memory or Near-Data Computing architectures, our computing memory architecture abandons the approach of moving accelerators or customized processors into memory devices and instead exploits the existing resources inside emerging non-volatile memory chips to accelerate key non-compute-intensive functions for emerging big-data applications. These two techniques are expected to reshape future memory architectures for more energy-efficient computing.
Bio:
Yinhe Han is a Professor at the Institute of Computing Technology, Chinese Academy of Sciences (CAS). He received his Ph.D. degree from CAS in 2006. His research interests include VLSI design, low-power architecture, and fault-tolerant architecture, and he has published more than 60 papers in these areas. Prof. Han’s honors include the State Technological Invention Award (2012), a National Outstanding Dissertation Award nomination (2008), the Outstanding Dissertation Award of the China Computer Federation, and the Best Paper Award of the 2003 Asian Test Symposium. He has served on the program committees of several architecture and design automation conferences, including HPCA, PACT, DAC, ICCAD, and DATE.