
High-performance and scalable non-blocking all-to-all with collective offload on InfiniBand clusters: a study with parallel 3D FFT


Three-dimensional FFT is an important component of many scientific computing applications, ranging from fluid dynamics to astrophysics and molecular dynamics. P3DFFT is a widely used three-dimensional FFT package that uses the Message Passing Interface (MPI) programming model. The performance and scalability of parallel 3D FFT are limited by the time spent in Alltoall Personalized Exchange (MPI_Alltoall) operations, so hiding the latency of MPI_Alltoall is critical to scaling P3DFFT. The newest revision of MPI, MPI-3, is widely expected to provide support for non-blocking collective communication to enable latency hiding. The latest InfiniBand adapter from Mellanox, ConnectX-2, enables offloading of generalized lists of communication operations to the network interface, and such an interface can be leveraged to design non-blocking collective operations. In this paper, we design a scalable, non-blocking Alltoall Personalized Exchange algorithm based on this network offload technology. To the best of our knowledge, this is the first paper to propose high-performance non-blocking algorithms for dense collective operations by leveraging InfiniBand's network offload features. We also re-design the P3DFFT library and a sample application kernel to overlap the Alltoall operations with application-level computation. We are able to scale our implementation of the non-blocking Alltoall operation to more than 512 processes, achieving near-perfect (99%) computation/communication overlap. We also see an improvement of about 23% in the overall run-time of our modified P3DFFT compared to the default blocking version, and an improvement of about 17% compared to host-based non-blocking Alltoall schemes.
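The latency-hiding idea described above follows a standard pattern: post the non-blocking all-to-all for the next data slab, perform FFT computation on already-exchanged data while the network (here, the ConnectX-2 offload engine) progresses the exchange, then wait for completion. The sketch below illustrates only this generic post/compute/wait pattern; it is not P3DFFT's actual code, and all names are illustrative. The non-blocking exchange is emulated with a thread pool, where `submit` plays the role of a non-blocking MPI_Ialltoall-style call and `result()` plays the role of MPI_Wait.

```python
# Sketch of the compute/communication overlap pattern (illustrative names,
# not P3DFFT's API). The "exchange" is emulated with a worker thread; in
# MPI-3 terms, submit() ~ MPI_Ialltoall and result() ~ MPI_Wait.
from concurrent.futures import ThreadPoolExecutor

def alltoall(send_chunks):
    # Stand-in for the personalized exchange: "process" p receives the
    # p-th chunk from every peer, which is a transpose of the send matrix.
    return [list(col) for col in zip(*send_chunks)]

def fft_compute(chunk):
    # Stand-in for the per-slab 1D FFT work on received data.
    return [x * 2 for row in chunk for x in row]

def overlapped_transform(slabs):
    results = []
    with ThreadPoolExecutor(max_workers=1) as comm:   # "offloaded" progress
        pending = comm.submit(alltoall, slabs[0])     # post exchange for slab 0
        for i in range(len(slabs)):
            if i + 1 < len(slabs):
                nxt = comm.submit(alltoall, slabs[i + 1])  # post next exchange
            received = pending.result()               # wait for current slab
            results.append(fft_compute(received))     # compute overlaps "nxt"
            if i + 1 < len(slabs):
                pending = nxt
    return results
```

In the blocking version, each exchange would complete before any computation begins; here, the FFT work on slab i proceeds while the exchange for slab i+1 is in flight, which is the source of the overlap the paper measures.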




Author information

Correspondence to Krishna Kandalla.

Additional information

This research is supported in part by U.S. Department of Energy grants #DE-FC02-06ER25749 and #DE-FC02-06ER25755; National Science Foundation grants #CCF-0833169, #CCF-0916302, #OCI-0926691 and #CCF-0937842; a grant from the Wright Center for Innovation #WCI04-010-OSU-0; and grants from Intel, Mellanox, Cisco, QLogic, and Sun Microsystems.



Cite this article

Kandalla, K., Subramoni, H., Tomko, K. et al. High-performance and scalable non-blocking all-to-all with collective offload on InfiniBand clusters: a study with parallel 3D FFT. Comput Sci Res Dev 26, 237 (2011).



Keywords

  • Non-blocking collective communication
  • InfiniBand network offload
  • 3DFFT
  • Alltoall personalized exchange
  • Message passing interface (MPI)