
A System Interconnection Device for Small-Scale Clusters

  • Conference paper
  • In: Cloud Computing (CloudComp 2015)

Abstract

The performance of a physical cluster ultimately depends on two factors: the capability of the individual computing nodes and the networking speed among them. Modern processors continue to improve in processing speed while falling in monetary cost. The dominant networking technologies, by contrast, suffer from drawbacks such as high installation cost and low protocol efficiency, and these drawbacks have become the bottleneck in improving the performance-per-cost ratio of the cluster as a whole. This paper proposes an alternative system interconnection device intended specifically for small-scale clusters. Non-transparent bridges in PCI Express technology are employed so that PCI Express packets can be transmitted directly between networked computing nodes. The performance is measured under two data transmission schemes, each with a different benchmarking tool. The proposed device currently delivers a peak unidirectional throughput of 8.6 gigabits per second.
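The key mechanism named in the abstract is the PCI Express non-transparent bridge (NTB): each side of the bridge exposes a window of the peer node's address space, so ordinary memory writes on one node become PCIe packets delivered directly into the other node's memory. The sketch below is a minimal illustration, not the paper's implementation: it times CPU-driven (programmed I/O) copies through such a mapped window, one common transmission scheme. The device node name /dev/ntb0, the window size, and the mmap-based driver interface are all assumptions made for this sketch; the paper's actual driver and its two transmission schemes are not described in this excerpt.

```c
/*
 * Minimal unidirectional-throughput sketch over an NTB-style PCIe window.
 * ASSUMPTIONS (not from the paper): a hypothetical NTB driver exposes the
 * peer node's memory as a mappable device node, called /dev/ntb0 here
 * purely for illustration.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define WINDOW_BYTES (4UL << 20)  /* hypothetical 4 MiB aperture */
#define ITERATIONS   1024

int main(void)
{
    int fd = open("/dev/ntb0", O_RDWR);  /* hypothetical NTB device node */
    if (fd < 0) { perror("open"); return 1; }

    /* Map the non-transparent bridge window: stores to this region are
     * carried as PCIe posted writes into the peer node's memory. */
    void *win = mmap(NULL, WINDOW_BYTES, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    char *src = malloc(WINDOW_BYTES);
    if (!src) { perror("malloc"); return 1; }
    memset(src, 0xA5, WINDOW_BYTES);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++)
        memcpy(win, src, WINDOW_BYTES);  /* CPU-driven (PIO) transfer */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbps = (double)WINDOW_BYTES * ITERATIONS * 8.0 / secs / 1e9;
    printf("unidirectional throughput: %.2f Gbit/s\n", gbps);

    free(src);
    munmap(win, WINDOW_BYTES);
    close(fd);
    return 0;
}
```

A DMA-engine-driven transfer would replace the memcpy loop with descriptor setup and completion polling and typically sustains higher throughput than PIO, which is one plausible reason a paper would benchmark more than one transmission scheme.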



Acknowledgments

This work was supported by the ICT R&D program of MSIP/IITP [10038768, The Development of Supercomputing System for the Genome Analysis].

Author information

Corresponding author

Correspondence to Ye Ren.


Copyright information

© 2016 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Ren, Y., Kim, Y.W., Kim, H.Y. (2016). A System Interconnection Device for Small-Scale Clusters. In: Zhang, Y., Peng, L., Youn, CH. (eds) Cloud Computing. CloudComp 2015. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 167. Springer, Cham. https://doi.org/10.1007/978-3-319-38904-2_36

  • DOI: https://doi.org/10.1007/978-3-319-38904-2_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-38903-5

  • Online ISBN: 978-3-319-38904-2

  • eBook Packages: Computer Science, Computer Science (R0)
