6 Acknowledgments
We gratefully acknowledge use of “Jazz,” a 350-node computing cluster oper-
ated by the Mathematics and Computer Science Division at Argonne National
Laboratory as part of its Laboratory Computing Resource Center. Jianwei Li
at Northwestern University and Tyce McLarty at Lawrence Livermore National
Laboratory contributed benchmark data. We thank them for their efforts.
References
1. ALC, the ASCI Linux Cluster. http://www.llnl.gov/linux/alc/.
2. Avery Ching, Alok Choudhary, Wei-keng Liao, Robert Ross, and William Gropp.
Efficient structured data access in parallel file systems. In Proceedings of
Cluster 2003, Hong Kong, November 2003.
3. Avery Ching, Alok Choudhary, Kenin Coloma, Wei-keng Liao, Robert Ross, and
William Gropp. Noncontiguous I/O accesses through MPI-IO. In Proceedings of
the Third IEEE/ACM International Symposium on Cluster Computing and the
Grid, pages 104–111, Tokyo, Japan, May 2003. IEEE Computer Society Press.
4. Avery Ching, Alok Choudhary, Wei-keng Liao, Robert Ross, and William Gropp.
Noncontiguous I/O through PVFS. In Proceedings of the 2002 IEEE International
Conference on Cluster Computing, September 2002.
5. IBM DataStar Cluster. http://www.npaci.edu/DataStar/.
6. IEEE/ANSI Std. 1003.1. Portable operating system interface (POSIX)–part 1:
System application program interface (API) [C language], 1996 edition.
7. Florin Isaila and Walter F. Tichy. View I/O: Improving the performance of non-
contiguous I/O. In Proceedings of IEEE Cluster Computing Conference, Hong
Kong, December 2003.
8. LCRC, the Argonne National Laboratory Computing Project.
http://www.lcrc.anl.gov.
9. Xiaosong Ma, Marianne Winslett, Jonghyun Lee, and Shengke Yu. Improving
MPI-IO output performance with active buffering plus threads. In Proceedings of the
International Parallel and Distributed Processing Symposium. IEEE Computer So-
ciety Press, April 2003.
10. MPI-2: Extensions to the message-passing interface. The MPI Forum, July 1997.
11. Jean-Pierre Prost, Richard Treumann, Richard Hedges, Bin Jia, and Alice Koniges.
MPI-IO/GPFS, an optimized implementation of MPI-IO on top of GPFS. In
Proceedings of Supercomputing 2001, November 2001.
12. The Parallel Virtual File System, version 2. http://www.pvfs.org/pvfs2.
13. Rajeev Thakur and Alok Choudhary. An extended two-phase method for accessing
sections of out-of-core arrays. Scientific Programming, 5(4):301–317, Winter
1996.
14. Rajeev Thakur, William Gropp, and Ewing Lusk. A case for using MPI’s derived
datatypes to improve I/O performance. In Proceedings of SC98: High Performance
Networking and Computing. ACM Press, November 1998.
15. Joachim Worringen, Jesper Larsson Träff, and Hubert Ritzdorf. Fast parallel
noncontiguous file access. In Proceedings of SC2003: High Performance Networking
and Computing, Phoenix, AZ, November 2003. IEEE Computer Society Press.