On Reducing I/O Overheads in Large-Scale Invariant Subspace Projections
Obtaining highly accurate predictions of the properties of light atomic nuclei using the Configuration Interaction (CI) method requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian, H. One particular approach, the J-scheme, requires projecting the H matrix onto an invariant subspace. Since the matrices can be very large, enormous computing power is needed and significant stress is placed on the memory and I/O subsystems. By exploiting the inherent localities in the problem and using MPI one-sided communication routines backed by the RDMA operations available on modern parallel architectures, we show that the I/O overheads can be reduced drastically for large problems. This is demonstrated in the subspace projection phase of J-scheme calculations on the 6Li nucleus, where our new implementation based on one-sided MPI communication outperforms the previous I/O-based implementation by almost a factor of 10.
Keywords: Invariant Subspace · Communication Overhead · Diagonal Block · Remote Memory · Load Imbalance