Jerome Vienne

Research Associate

High Performance Computing

Jerome joined TACC in 2012 as a Research Associate in the High Performance Computing Group. He received his Ph.D. in Computer Science from the Université de Grenoble (France) under the direction of Jean-Marc Vincent, Jean-François Méhaut, and Jean-François Lemerre (BULL). Prior to joining TACC, Jerome was a Post-Doctoral Researcher at The Ohio State University in the Network-Based Computing Laboratory, led by Prof. D. K. Panda.

Areas of Research

Performance Analysis and Modeling

High Performance Computing

High Performance Networking

Exascale Computing

Current Projects

Unified Runtime for Supporting Hybrid Programming Models on Heterogeneous Architecture, NSF Abstract

A Comprehensive Performance Tuning Framework for the MPI Stack, NSF Abstract

Recent Publications

E. Gallardo, J. Vienne, L. Fialho, P. Teller and J. Browne. Employing MPI_T in MPI Advisor to Optimize Application Performance. The International Journal of High Performance Computing Applications. January 2017.

J. Cao, K. Arya, R. Garg, S. Matott, D.K. Panda, H. Subramoni, J. Vienne and G. Cooperman. System-level Scalable Checkpoint-Restart for Petascale Computing. In IEEE ICPADS, Wuhan, China, December 2016.

R. Garg, J. Vienne and G. Cooperman. System-level Transparent Checkpointing for OpenSHMEM. In OpenSHMEM 2016: Third Workshop on OpenSHMEM and Related Technologies, August 2016.

J. Vienne. Introduction to Parallel Programming with MPI. In F. T. Willmore, E. Jankowski, C. Colina, editors, Introduction to Scientific and Technical Computing. CRC Press, July 2016.

C. Rosales, J. Cazes, K. Milfeld, A. Gómez-Iglesias, L. Koesterke, L. Huang and J. Vienne. A Comparative Study of Application Performance and Scalability on the Intel Knights Landing Processor. In International Conference on High Performance Computing, June 2016.

R. Garg, J. Cao, K. Arya, G. Cooperman and J. Vienne. Extended Batch Sessions and Three-Phase Debugging: Using DMTCP to Enhance the Batch Environment. In Proceedings of the 2016 Annual Conference on Extreme Science and Engineering Discovery Environment (XSEDE '16), July 2016.

E. Gallardo, J. Vienne, L. Fialho, P. Teller and J. Browne. MPI Advisor: a Minimal Overhead MPI Performance Tuning Tool. In EuroMPI 2015, September 2015.

A. Gómez-Iglesias, J. Vienne, K. Hamidouche, C. S. Simmons, W. L. Barth and D. K. Panda. Scalable Out-of-core OpenSHMEM Library for HPC. In OpenSHMEM 2015: Second Workshop on OpenSHMEM and Related Technologies, August 2015.

A. Gómez-Iglesias, D. Pekurovsky, K. Hamidouche, J. Zhang and J. Vienne. Porting Scientific Libraries to PGAS in XSEDE Resources: Practice and Experience. In XSEDE '15 Conference, July 2015.

S. Aseeri, O. Batrashev, M. Icardi, B. Leu, N. Ning, A. Liu, B. Muite, E. Mueller, M. Quell, H. Servat, P. Sheth, R. Speck, M. Van Moer, and J. Vienne. Solving the Klein-Gordon Equation Using Fourier Spectral Methods: A Benchmark Test for Computer Performance. In 23rd High Performance Computing Symposium (HPC 2015), held in conjunction with the 2015 Spring Simulation Multi-Conference, April 2015.

J. Vienne, C. Rosales-Fernandez, and K. Milfeld. Heterogeneous Computing with MPI On Intel Xeon Phi. In J. Reinders and J. Jeffers, editors, High Performance Parallelism Pearls. Morgan Kaufmann, October 2014.

J. Vienne. Benefits of Cross Memory Attach for MPI Libraries on HPC Clusters. In Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment (XSEDE '14), July 2014.

A. Ramachandran, J. Vienne, R. Van Der Wijngaart, L. Koesterke, and I. Sharapov. Performance Evaluation of NAS Parallel Benchmarks on Intel Xeon Phi. In 6th Int'l Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2), in conjunction with ICPP, Lyon, October 2013.

H. Subramoni, S. Potluri, K. Kandalla, B. Barth, J. Vienne, J. Keasler, K. Tomko, K. Schulz, A. Moody, and D. K. Panda. Design of a Scalable InfiniBand Topology Service to Enable Network-Topology-Aware Placement of Processes. In Supercomputing 2012, Best Student Paper Finalist, Best Paper Finalist, November 2012.

K. Kandalla, A. Buluc, H. Subramoni, K. Tomko, J. Vienne, L. Oliker, and D. K. Panda. Can Network-Offload Based Non-Blocking Neighborhood MPI Collectives Improve Communication Overheads of Irregular Graph Algorithms? In International Workshop on Parallel Algorithms and Parallel Software (IWPAPS 2012), held in conjunction with IEEE Cluster 2012, September 2012.

H. Subramoni, J. Vienne, and D. K. Panda. A Scalable InfiniBand Network-Topology-Aware Performance Analysis Tool for MPI. In 5th Int'l Workshop on Productivity and Performance (PROPER 2012), in conjunction with EuroPar, August 2012.

J. Vienne, J. Chen, Md. Wasi ur Rahman, N.S. Islam, H. Subramoni, and D. K. Panda. Performance Analysis and Evaluation of InfiniBand FDR and 40GigE RoCE on HPC and Cloud Computing Systems. In IEEE Hot Interconnects (HOTI-20), August 2012.

M. Luo, H. Wang, J. Vienne, and D. K. Panda. Redesigning MPI Shared Memory Communication for Large Multi-Core Architecture. In Int'l Supercomputing Conference (ISC '12), June 2012.

K. Kandalla, U. Yang, J. Keasler, T. Kolev, A. Moody, H. Subramoni, K. Tomko, J. Vienne, and D. K. Panda. Designing Non-blocking Allreduce with Collective Offload on InfiniBand Clusters: A Case Study with Conjugate Gradient Solvers. In Int'l Parallel and Distributed Processing Symposium (IPDPS '12), May 2012.

S. P. Raikar, H. Subramoni, K. Kandalla, J. Vienne, and D. K. Panda. Designing Network Failover and Recovery in MPI for Multi-Rail InfiniBand Cluster. In Int'l Workshop on System Management Techniques, Processes, and Services (SMTPS), in conjunction with Int'l Parallel and Distributed Processing Symposium (IPDPS '12), May 2012.

Education

Ph.D., Computer Science
University of Grenoble (France)

Magister, Computer Science
Joseph Fourier University (France)

M.S., Computer Science and Applied Mathematics
Joseph Fourier University (France)

B.S., Computer Science and Applied Mathematics
Joseph Fourier University (France)

Memberships/Professional Affiliations