1st Deep Learning (DL) on Supercomputers Workshop

SC18 in Dallas, TX
Friday, Nov 16, 8:30AM-12:00PM
Room D161

The 1st Deep Learning (DL) on Supercomputers workshop provides a forum for practitioners working on any aspect of DL in the supercomputing context to present their latest research results and their development, deployment, and application experiences. The workshop's theme is the intersection of DL and supercomputing: novel uses of supercomputers to accelerate deep learning training and inference, and innovative applications of deep learning in traditional numerical simulation. Its scope encompasses application development in industrial and scientific scenarios using supercomputers; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods to enable scalable training and inference; hardware changes that affect future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications on supercomputers. The workshop features a series of invited talks from researchers working at the intersection of deep learning and supercomputing.

Agenda

8:35 - 9:20 Keynote, "Learning-based predictive models: a new approach to integrating large-scale simulations and experiments"
Brian Van Essen, LLNL
9:20 - 9:40 "CANDLE framework for large scale deep learning"
Tom Brettin, ANL
9:40 - 10:00 "Fast and Accurate Deep Neural Networks Training on Distributed Systems"
Yang You, UC Berkeley
10:00 - 10:20 "Deep Learning at NERSC: Usability, Capability, and Everything in Between"
Steve Farrell, NERSC
10:20 - 10:40 "Artificial Intelligence Enabled Multiscale Molecular Simulations"
Arvind Ramanathan, ORNL
10:40 - 11:00 "Scalable and Distributed DNN Training on Modern HPC Systems"
DK Panda, OSU
11:00 - 11:20 "High-Performance AI: A View from Systems and Frameworks"
Judy Qiu, Indiana University
11:20 - 11:40 "Large scale deep learning in PFN: from 15-min ImageNet to PFDet"
Hirochika Asai, Preferred Networks
11:40 - 12:00 "Enabling Scalable and Efficient Deep Learning on Supercomputers"
Zhao Zhang, TACC
Program Co-Chairs:

  • Zhao Zhang, Texas Advanced Computing Center
  • Ian Foster, University of Chicago and Argonne National Laboratory

Workshop Committee:
  • Takuya Akiba, Preferred Networks, Japan
  • Valeriu Codreanu, SURFsara, Netherlands
  • Erich Elsen, Google Brain, USA
  • Song Feng, IBM Research, USA
  • Ian Foster (co-chair), University of Chicago and Argonne National Laboratory, USA
  • Boris Ginsburg, Nvidia, USA
  • Jessy Li, University of Texas at Austin, USA
  • Peter Messmer, Nvidia, USA
  • Judy Qiu, Indiana University, USA
  • Arvind Ramanathan, Oak Ridge National Laboratory, USA
  • Mikhail E. Smorkalov, Intel, Russia
  • Rob Schreiber, Cerebras, USA
  • Dan Stanzione, Texas Advanced Computing Center, USA
  • Rick Stevens, University of Chicago and Argonne National Laboratory, USA
  • Wei Tan, Citadel, USA
  • Jordi Torres, Barcelona Supercomputing Center, Spain
  • Daniela Ushizima, Lawrence Berkeley National Laboratory, USA
  • David Walling, Texas Advanced Computing Center, USA
  • Markus Weimer, Microsoft, USA
  • Weijia Xu, Texas Advanced Computing Center, USA
  • Kathy Yelick, University of California, Berkeley & Lawrence Berkeley National Laboratory, USA
  • Zhao Zhang (co-chair), Texas Advanced Computing Center, USA