
Supercomputing for the Future, Supercomputing from the Past (Keynote)

Conference paper
High Performance Embedded Architectures and Compilers (HiPEAC 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4917)

Abstract

Supercomputing is a "zero billion dollar" market, yet it is a huge driving force behind the technology and systems of the future.

Today, engineering and scientific applications are the major users of the huge computational power offered by supercomputers. In the future, commercial and business applications will increasingly have such high computational demands.

Supercomputers, once built on technology developed from scratch, have evolved towards the integration of commodity components. Designers of future high-end systems therefore have to monitor the evolution of the mass market closely. Such trends also imply that supercomputers themselves impose requirements on the performance and design of those components.

Current integration capabilities are bringing supercomputing technologies onto single chips that will be used in all markets. Stressing high-end system design will thus help develop ideas and techniques that spread everywhere. A general observation about past supercomputers is their relatively static operation (job allocation, interconnect routing, domain decomposition, loop scheduling) and the often limited coordination between levels.
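
As a concrete illustration of this static style (a minimal sketch, not part of the keynote), the C/OpenMP fragment below runs the same irregular loop with static and with dynamic scheduling: the static variant fixes the iteration-to-thread mapping before the loop starts, while the dynamic variant hands out chunks at run time and absorbs per-iteration variance.

/* Minimal sketch (illustrative example, not from the keynote): the same loop
 * under static and dynamic OpenMP scheduling. */
#include <stdio.h>
#include <omp.h>

/* Hypothetical per-iteration work whose cost varies with i. */
static double work(int i) {
    double s = 0.0;
    for (int k = 0; k < (i % 1000) * 1000; k++)
        s += k * 1e-9;
    return s;
}

int main(void) {
    const int n = 100000;
    double sum = 0.0;

    /* Static: each thread gets a fixed block of iterations decided up front. */
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += work(i);

    /* Dynamic: threads grab chunks of 64 iterations as they finish,
     * trading scheduling overhead for better load balance. */
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += work(i);

    printf("checksum: %f\n", sum);
    return 0;
}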

Flexibility and dynamicity are key ideas that will have to be stressed further in the design of future supercomputers. The ability to accept and deal with variance (rather than stubbornly trying to eliminate it) will be important. Such variance may arise from the manufacturing or operation of the different components (chip layout, MPI internals, contention for shared resources such as memory or the interconnect, ...) or from the increasingly dynamic nature of the applications themselves. In an actual run, this variability is perceived as load imbalance, and addressing it properly will be very important.

Application behavior typically shows repetitive patterns of resource usage. Even if such patterns are dynamic, the timescales of the variability very often allow prediction techniques to be applied and resources to be matched to actual demands. The systems we foresee will thus have dynamic mechanisms to support fine-grain load balancing, while the policies will be applied at a coarse granularity.
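
A minimal sketch of this idea follows (the names and the exponential-moving-average predictor are assumptions made for illustration, not the authors' mechanism): each worker's cost per work unit is predicted from its recent history, and the chunk handed to it in the next round is sized inversely to that predicted cost, so the fine-grain mechanism (chunk resizing) acts every round while a coarser policy could adjust the smoothing factor or the total budget far less often.

/* Illustrative sketch (assumed names): predict each worker's cost per work
 * unit from an exponential moving average of observed timings, then hand out
 * chunk sizes inversely proportional to that cost so all workers finish a
 * round at roughly the same time. */
#include <stdio.h>

#define NWORKERS 4

typedef struct {
    double ema_cost;   /* smoothed seconds per work unit */
} worker_state;

/* Fine-grain mechanism: update the prediction with the latest observation. */
static void observe(worker_state *w, double secs, int units, double alpha) {
    double cost = secs / units;
    w->ema_cost = alpha * cost + (1.0 - alpha) * w->ema_cost;
}

/* Split `total` work units so each worker's predicted time is roughly equal. */
static void plan_round(const worker_state w[], int total, int chunk[]) {
    double inv_sum = 0.0;
    for (int i = 0; i < NWORKERS; i++)
        inv_sum += 1.0 / w[i].ema_cost;
    for (int i = 0; i < NWORKERS; i++)
        chunk[i] = (int)(total * (1.0 / w[i].ema_cost) / inv_sum);
}

int main(void) {
    /* Worker 2 is assumed to be twice as slow as the others. */
    worker_state w[NWORKERS] = {{1e-3}, {1e-3}, {2e-3}, {1e-3}};
    int chunk[NWORKERS];

    plan_round(w, 10000, chunk);
    for (int i = 0; i < NWORKERS; i++)
        printf("worker %d gets %d units\n", i, chunk[i]);

    /* After the round, feed measured times back in to refine the prediction. */
    observe(&w[2], 5.0, chunk[2], 0.3);
    return 0;
}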

As we approach fundamental limits in single-processor design, especially in terms of the performance/power ratio, multicore chips and massive parallelism become necessary to achieve the required performance levels. A hierarchical structure is one of the unavoidable approaches to future system design. Hierarchies will show up at all levels, from processor to node to system, in both hardware and software.

The development of programming models (extending current ones or developing new ones) faces the challenge of providing mechanisms to express a certain level of hierarchy (but not too detailed a one) that compilers, run times and operating systems can match to potentially very different underlying architectures. Programmability and portability of programs (both functional and performance-wise, both forward and backward) is a key challenge for these systems.
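
One widely used way to express a modest, two-level hierarchy today is the hybrid MPI+OpenMP style sketched below (an illustrative example, not a construct proposed in the keynote): MPI ranks map naturally to nodes, OpenMP threads to the cores within a node, and the run time and OS map both onto the actual hardware.

/* Hedged sketch of a two-level (hybrid) programming style: MPI between nodes,
 * OpenMP within a node.  The program exposes just enough hierarchy for the
 * runtime to map ranks to nodes and threads to cores. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Ask MPI for thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;

    /* Inner level: shared-memory parallelism over the cores of one node. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i + rank);

    /* Outer level: combine per-node results across the machine. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d ranks: %f\n", nranks, global);

    MPI_Finalize();
    return 0;
}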

Addressing a massively parallel and hierarchical system with load-balancing issues will require coordination between the different scheduling and resource-allocation policies and a tight integration of the design of the components at all levels: processor, interconnect, run time, programming model, applications, OS scheduler, storage and job scheduler.

By bringing the mode of operation of supercomputers and general-purpose systems closer together, this zero billion dollar market can play a very important role in the unified computing of the future.



Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Valero, M., Labarta, J. (2008). Supercomputing for the Future, Supercomputing from the Past (Keynote). In: Stenström, P., Dubois, M., Katevenis, M., Gupta, R., Ungerer, T. (eds) High Performance Embedded Architectures and Compilers. HiPEAC 2008. Lecture Notes in Computer Science, vol 4917. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77560-7_1


  • DOI: https://doi.org/10.1007/978-3-540-77560-7_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-77559-1

  • Online ISBN: 978-3-540-77560-7

