Computational Approaches

  • Michael Eckert
Part of the SpringerBriefs in History of Science and Technology book series (BRIEFSHIST)


The advent of the electronic computer opened new approaches to the turbulence problem. As early as 1946 John von Neumann discerned turbulence as a challenge for the Electronic Computer Project at the Princeton Institute for Advanced Study (IAS). In the 1950s the IAS-computer became a role model for a first generation of digital high-speed computers. The Cold War fuelled not only the development of computers but also the birth of the computational sciences—first at Los Alamos and other facilities where research on atomic bombs and other weapons entailed a host of problems that could be solved only by computational means. Another area of rapid growth was numerical weather forecasting, where atmospheric turbulence became a major problem. Concepts like Large Eddy Simulation (LES) were developed in order to compute flows on geophysical scales. The small unresolved scales required “sub-grid” modelling. Turbulence models based on the decomposition into mean and fluctuating quantities, however, had to address the “closure problem” because they resulted in more unknowns than equations. Until computers became powerful enough for Direct Numerical Simulation (DNS), modelling of turbulent flows by one or another closure method remained the only viable computational approach.

“I have always believed,” John von Neumann wrote in 1946 to a colleague, “that very high speed computing could replace some—but of course not all—functions of a wind tunnel.” Neumann referred in this letter to the wind tunnel as an “analogy” computer in contrast to the “electronic digital computer” which he had just begun to develop with a team of engineers and applied mathematicians at the Princeton Institute for Advanced Study. It was supposed “to handle problems of very high complexity—actually of much higher complexity than that of the typical wind tunnel—or flow-problems...” Alluding to the giant wind tunnels at the large aeronautical facilities in the USA he added “that such a machine would be much smaller and cheaper than a conventional wind tunnel.”1

Thus the use of numerical means for solving flow problems was debated even before the first digital computers entered the scene. But even fifteen years later, the turbulence problem appeared inaccessible by such means. Stanley Corrsin, a rising star among turbulence researchers in the USA, summarized the research on “Turbulent Flow” in 1961 in the magazine American Scientist in this way: “At present, we have a qualitative understanding of the phenomenon, and can even predict some of its features quantitatively from Newton’s Laws. But much of the core of the turbulence problem has yet to yield to formal theoretical attack.” By this time the electronic computer had already been developed into a new tool for solving problems in fluid dynamics numerically. “In closing, we must certainly speculate on the future role of large computing machines in turbulence research. Valuable computations have already been made,” Corrsin concluded his review. But a rough estimate of the computational requirements for simulating three-dimensional turbulence at high Reynolds numbers down to the Kolmogorov microscale left him pessimistic: “The foregoing estimate is enough to suggest the use of analog instead of digital computation; in particular, how about an analog consisting of a tank of water?” (Corrsin 1961, pp. 300, 324).

6.1 John von Neumann and the Electronic Computer Project

As an advisor to the atomic bomb project at Los Alamos, New Mexico, and the Ballistic Research Laboratory (BRL) at Aberdeen, Maryland, Neumann was fully aware of the role of computational tools for the development of new weapons. But he envisioned more uses than just weapons. While the legendary “Electronic Numerical Integrator and Computer (ENIAC)”, designed for the computation of firing tables, was still being developed, Neumann planned a high-speed computing machine for general purposes2:

The use of a completed machine should at first be purely scientific experimenting [...] It is clear now that the machine will cause great advances in hydrodynamics, aerodynamics, quantum theory of atoms and molecules, and the theory of partial differential equations in general. In fact I think that we are still not able to visualize even approximately how great changes it will cause in these fields. [...] Finally it should be used in applied fields: it will open up entirely new possibilities in celestial mechanics, dynamic meteorology, various fields of statistics, and in certain parts of mathematical economics, to mention only the most obvious subjects.

In order to materialise this vision, Neumann turned to Carl-Gustav Rossby, an expert on atmospheric flows, for advice on how to make use of the computer for meteorology. Rossby suggested focusing at first on the equations for the large-scale circulation on the globe, as a prerequisite for future numerical weather forecasting. He proposed to form a small team of theoretical meteorologists. By the summer of 1946 it was clear that meteorology would be a major part of the Electronic Computer Project at the IAS (Harper 2008, Chap. 4). Rossby had already participated in the turbulence symposium at the Fifth International Congress for Applied Mechanics in 1938 at Cambridge, Massachusetts, with lectures on turbulent mixing in the atmosphere. In his discussions with Neumann about the role of the computer in numerical weather forecasting, turbulence must have surfaced as a major problem. When Neumann delivered a speech in May 1946 before a Navy agency which would fund his Electronic Computer Project during the following years, turbulence was explicitly singled out as a major problem3:

Clearly one of the major difficulties of fluid dynamics, which turns up at the most varied occasions, is the phenomenon of turbulence. The major reasons why we cannot do much about it analytically are that it involves a nonlinear, partial differential equation (and it is really nonlinear; you lose the decisive phenomena if you attempt to linearize it), and that it is quite intrinsically three-dimensional. [...] In addition, a fourth dimension (time) has to be added to these three, because turbulence is necessarily a transient, nonstationary phenomenon. [...] I don’t say that the problems of turbulence are necessarily the most important ones to be solved by fast machine calculation, but I would rather expect that many other problems that one will want to solve with such machines will prove to have a lot in common with the general situation that exists in the case of turbulence, and turbulence happens to be a significant and relatively familiar example.

Thus turbulence played from the very beginning a major role as a paradigmatic challenge for Neumann’s computer program. “An important aspect of this program embraces various forms of the theory of turbulence,” he introduced a review on turbulence three years later, when the IAS-computer was still under development. The review resulted from a trip through Europe which Neumann used to meet the leading experts and to discuss all aspects of the turbulence problem. He was conscious that the computational approach would require “quite advanced facilities”, but that did not prevent him from keeping turbulence as an important item on the agenda of his “high-speed computing program”. It deserved particular attention “in connection with the theoretical problems of turbulence” (von Neumann 1949, p. 438).

6.2 Early Numerical Solutions of the Stability Problem

“The problems which lead to turbulence” ranked on top of Neumann’s agenda. He found it “very plausible that turbulence is a phenomenon of instability”. Turbulent flow, from this perspective, “represents one or more solutions of a higher stability” acquired at higher Reynolds numbers. Instead of one “turbulent solution” of the Navier-Stokes equation, Neumann envisioned a multitude of solutions characterized by statistical properties in which “the essential and physically reproducible traits of turbulence” would be manifest. The study of turbulence, in Neumann’s view, should proceed in two steps (von Neumann 1949, pp. 438–440):

Firstly, the stability properties of the laminar flow must be investigated. Secondly, there is need of a complete theory of the common statistical properties of large, statistically homologous families of solutions, which exhibit the characteristic turbulent traits. Thus, the theory immediately divides into two halves: That of the stability theories and that of the statistical theories.

Therefore it is not accidental that Lin’s recent review of the Orr-Sommerfeld approach attracted Neumann’s attention as a starting point for the first step. At Lin’s suggestion, Neumann asked Chaim Pekeris, a former student of Rossby and one of the first collaborators in Neumann’s Electronic Computer Project, to make the case of Heisenberg’s doctoral thesis, i.e. the solution of the Orr-Sommerfeld equation for plane Poiseuille flow, the subject of a numerical study. Pekeris was the ideal candidate for this task. In the 1930s, he had chosen the disputed theory of hydrodynamic stability to put his skills as an applied mathematician to the test (Pekeris 1936, 1938). While the computer at the IAS was still under development, Neumann and Pekeris resorted to the Selective Sequence Electronic Computer (SSEC) at the IBM Watson Laboratory at Columbia University. The SSEC was the first machine which could perform electronic computations by a stored program (Bashe 1982). It extended along three sides of a hall “60 feet long and 20 feet wide so that visitors actually stood inside the computer,” as a history of the IBM Watson Laboratory remarked (Brennan 1971, p. 21).
By 1950 Pekeris had prepared the problem for a first test run at the SSEC, but the result “did not settle the question at issue,” Llewellyn H. Thomas later commented on these beginnings. Thomas had been recruited as an applied mathematician at the Watson Laboratory and pursued the problem when Pekeris left Neumann’s group in order to start electronic computer development in Israel. Two years later Thomas reported success. In a preliminary communication he concluded that “it may now be regarded as proved that plane Poiseuille flow becomes unstable at about R = 5800” (Thomas 1952, p. 813). In his final paper he presented more details about these computations (Thomas 1953). They may be regarded as the first computational solution of what had been recognized as the turbulence problem three decades earlier. By the same token, the numerical results justified the asymptotic methods with which Lin had improved Heisenberg’s approach (see Sect. 4.3). The computational results settled the dispute about Heisenberg’s and Lin’s results. Lin further consolidated the theory with a review on hydrodynamic stability (Lin 1955) that served Heisenberg several years later “as an indication that approximation methods derived from physical intuition are frequently more reliable than rigorous mathematical methods, because in the case of the latter it is easier for errors to creep into the fundamental assumptions” (Heisenberg 1969, p. 47).
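For orientation, the eigenvalue problem that Pekeris and Thomas attacked numerically can be stated compactly. In standard notation (a sketch of the textbook form, not the specific discretization Thomas used), a wave-like disturbance \(\varphi (y)e^{i\alpha (x-ct)}\) superposed on the laminar profile \(U(y)\) obeys the Orr-Sommerfeld equation

\[ (U-c)\left(\varphi ''-\alpha ^{2}\varphi \right)-U''\varphi =\frac{1}{i\alpha R}\left(\varphi ''''-2\alpha ^{2}\varphi ''+\alpha ^{4}\varphi \right), \]

and the flow is unstable when an eigenvalue \(c\) has a positive imaginary part. Thomas’s computation located the lowest Reynolds number \(R\) at which this first occurs for plane Poiseuille flow.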
Fig. 6.1

John von Neumann (right) next to J. Robert Oppenheimer, director of the Princeton Institute for Advanced Study, in October 1952 at the dedication of the IAS computer

Neumann’s IAS-computer (Fig. 6.1) became a role model for similar computers in the USA and abroad. Electronic computers of this generation were expensive facilities; it is therefore not astonishing that early numerical investigations of hydrodynamic stability originated at research centres like the Jet Propulsion Laboratory or the Northrop Aircraft Company, Cold War facilities which could afford such computers (Brown 1959; Mack 1960). With the rise of Computation Centres elsewhere, “Computer-Aided Analysis of Hydrodynamic Stability” (Kurtz and Crandall 1962) emerged as a third research mode besides theory and experiment.4

Around the same time, new experiments (Klebanoff et al. 1962) and theoretical studies (Stuart 1962) suggested a scenario for the transition to turbulence which was far beyond the reach of the Orr-Sommerfeld approach. Direct numerical flow simulations at Los Alamos of the vortex formation in the wake of an obstacle reproduced the scenario known from laboratory experiments: a transition from laminar steady flow via a “Kármán vortex street” to a turbulent wake at high Reynolds numbers (Harlow and Fromm 1963). Although these numerical experiments were only two-dimensional, the similarity with the vortex formation in laboratory experiments raised the hope that the computer could also become a tool for solving more fundamental problems.

For the time being, however, computational investigations of hydrodynamic stability did not lay the riddle of the onset of turbulence to rest. The demarcation between stable and unstable states of flow could indicate the initiation of the transition to turbulence, but not the processes that occurred within the transition zone. No theory could account for this complexity. “Experiment and theory agreed, as far as eigenvalues and eigenfunctions were concerned,” the authors of a textbook on Stability of Parallel Flows summarized the research front in their field by the mid-1960s. “Yet turbulent transition was not understood, and it still remains an enigma” (Betchov and Criminale 1967, p. 3). In a 1968 review “On the many faces of transition” the speaker conjectured “that many instability paths to turbulence are admissible”. In view of this complexity the prediction of transition was “a peculiarly nondeterministic problem” (Morkovin 1969, p. 2).

The rapid development of electronic digital computers made the problem of laminar-turbulent transition a recurrent item on the agenda of applied mathematicians, physicists and engineers with access to computing centres. Yet the “bewildering variety of transitional behavior” (Morkovin 1969, p. 1) eluded the computational capabilities for at least another decade. In 1979 a first symposium on “Laminar-turbulent transition” was held in Stuttgart under the umbrella of IUTAM. Subsequently IUTAM sponsored other symposia under the same title in Novosibirsk (1984) and Toulouse (1989), to name only those which consolidated this research field as an ongoing concern for IUTAM.5 The symposium in Toulouse was opened with a “Dialogue on Progress and Issues in Stability and Transition Research” presented by two experts who had contributed to this research field for more than thirty years. With the focus on the more recent developments since the first IUTAM Symposium in 1979 they expressed their belief “that there is no universality of the evolutionary path to turbulent flow even in geometrically similar mean laminar shear flows.” They acknowledged “major developments in the tools of transition research: experiment, analysis and computation” and regarded the latter as “now a full partner in our three-pronged attack on the understanding of transition.” But even contemporary supercomputers like the Cray-2 added little to solving the fundamental riddles of transition. The “formulation of computational approaches” remained an issue for the future (Morkovin and Reshotko 1989, pp. 3–4).

6.3 The Origins of Large-Eddy Simulation

Although stability theories ranked first in John von Neumann’s survey on “Recent Theories of Turbulence”, he certainly considered fully developed turbulence a problem that deserved more attention—not least because “it plays a decisive role in the momentum as well as the energy exchanges of the terrestrial atmosphere and the oceans—that is, in meteorology and in oceanography,” as he remarked in his survey under the headline “Computational Possibilities” (von Neumann 1949, p. 467). The intended use of the IAS-computer for numerical weather forecasting involved the consideration of atmospheric turbulence. Neumann and his Meteorology Group at the IAS not only pioneered the simulation of large-scale atmospheric circulation but also gave an impetus to what became known as Large Eddy Simulation (LES) of turbulence.

Computational fluid dynamics on coarse-grained grids had to cope in general with the problem of the smaller scales below the resolution of the finite elements of the mesh—not only in turbulence. Besides the problem of maintaining numerical stability, it was not clear how to deal with the unresolved sub-grid scales. The computation of shock waves, for example, was enabled by the smoothing effect of an “artificial viscosity” which was added to the basic equations (von Neumann and Richtmyer 1950). In turbulence, the eddy viscosity appeared as the appropriate candidate for relating sub-grid scales to the numerically resolved grid. “The lateral transfer of momentum and heat by the non-linear diffusion, which parametrically is supposed to simulate the action of motions of sub-grid scale, accounts for a significant portion of the total eddy transfer.” This was how turbulence was incorporated by Joseph Smagorinsky in an early model of atmospheric circulation (Smagorinsky 1963, p. 99). It marks the beginning of LES.
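Smagorinsky’s closure can be sketched in a few lines of code: the sub-grid eddy viscosity is taken proportional to the squared grid spacing times the magnitude of the resolved strain rate, \(\nu _t = (C_s\Delta )^2|S|\). The snippet below is a minimal two-dimensional illustration with a synthetic velocity field and an illustrative value of the Smagorinsky constant, not a reconstruction of Smagorinsky’s original finite-difference scheme:

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Sub-grid eddy viscosity nu_t = (cs * dx)**2 * |S| on a 2-D grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor;
    cs is the Smagorinsky constant (illustrative value).
    """
    # Axis 0 is taken as y, axis 1 as x.
    dudy, dudx = np.gradient(u, dx, dx)
    dvdy, dvdx = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * strain

# Synthetic plane shear flow u = y, v = 0, for which |S| = 1 everywhere,
# so nu_t is uniform and equal to (cs * dx)**2.
n, dx = 32, 0.1
y = np.arange(n) * dx
u = np.tile(y[:, None], (1, n))
v = np.zeros((n, n))
nu_t = smagorinsky_viscosity(u, v, dx)
```

In an actual LES code this \(\nu _t\) field would multiply the resolved strain rate to give the sub-grid stress at every time step.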

Smagorinsky was a meteorologist at the U.S. Weather Bureau in Washington, D.C., and his model represents the first effort to derive the main features of the general atmospheric circulation directly from the Navier-Stokes equations. “The present study is an attempt to employ the primitive equations for general circulation experiments,” Smagorinsky declared about the goal of his work. It was “an outgrowth of collaboration with J. G. Charney, N. A. Phillips, and J. von Neumann, who engaged in the initial planning stages of this investigation” (Smagorinsky 1963, p. 100). Thus Smagorinsky acknowledged his affiliation with Neumann’s group at Princeton. “In 1949, I was invited as an occasional visitor, from my base in Washington, D.C., to assist the group in extending its one-dimensional linear barotropic calculations”, he later recalled of his first encounter with Neumann’s group. “On behalf of the Weather Bureau, I also was asked to become familiar with the theoretical aspects of a more realistic model” (Smagorinsky 1983, p. 7). Upon Neumann’s suggestion the Weather Bureau established in 1955, under Smagorinsky’s direction, a General Circulation Research Section, the precursor of the Geophysical Fluid Dynamics Laboratory (GFDL) (Edwards 2010, Chap. 7). In contrast to numerical weather forecasting, the GFDL aimed at models for the general atmospheric circulation pattern as it develops in the long run, what Neumann called the “infinite forecast”.6

It is therefore not accidental that Smagorinsky’s effort to model turbulence was published in the Monthly Weather Review. The year before, Smagorinsky’s collaborator Douglas K. Lilly had published in another meteorological journal, Tellus, a study “On the numerical simulation of buoyant convection” which may be regarded as a precursor to Smagorinsky’s model. “Doug did invent the essence of LES along the way!”, Smagorinsky praised Lilly’s early contributions to turbulence many years later (Kanak 2004, p. 3). In 1964 Lilly was hired as a senior scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. In November 1966 he presented a paper on “The Representation of Small-Scale Turbulence in Numerical Simulation Experiments” at an IBM Scientific Computing Symposium. Earlier approaches “lacked suitable mechanisms for simulating the development and maintenance of a three dimensional turbulent energy cascade”, Lilly qualified the previous two-dimensional models. “The future practicality of such computations seems to require development of equations describing the transport of turbulent energy into and through the inertial range” (Lilly 1967, Abstract).

Three years later, James Deardorff provided solid evidence for the practicability of LES with “A numerical study of three-dimensional turbulent channel flow at large Reynolds numbers” (Deardorff 1970). Deardorff was Lilly’s colleague at the NCAR. “Doug showed me how to finite-difference the vorticity and thermal-diffusion equations so as to conserve kinetic energy and temperature variance in the absence of sources and sinks,” Deardorff recalled how Lilly introduced him to computational fluid dynamics at the NCAR7:

After computing power had increased to the point where it was conceivable to study turbulence in three dimensions, in the late 1960s, the problem arose of how to simulate the dissipation of turbulent kinetic energy cascaded to scales too small to represent explicitly. Doug was very well acquainted with J. Smagorinsky’s work on this subject, and was very helpful in advising me on how to apply Smagorinsky’s method to small-scale turbulence. Doug had already done his own research on this problem, and so could recommend a coefficient of proportionality between the magnitude of the subgrid-scale eddy coefficient and the resolvable strain rate.

Although the focus at the NCAR was on atmospheric circulation, Deardorff was eager to demonstrate the practicability of LES more generally. Instead of publishing his study, like Smagorinsky and Lilly, in a meteorological journal, he chose the Journal of Fluid Mechanics and explicitly declared as his goal “to test this meteorological approach upon an interesting case of laboratory turbulence: plane Poiseuille flow (channel flow) driven by a uniform pressure gradient” (Deardorff 1970, p. 454). Atmospheric flow remained Deardorff’s predominant interest, but turbulence played an ever growing role. Three years later he published a study on “The Use of Subgrid Transport Equations in a Three-Dimensional Model of Atmospheric Turbulence” in the Journal of Fluids Engineering (Deardorff 1973). “Numerical modeling of the details of fluid flow at large Reynolds numbers has progressed at a rate commensurate to the development of the digital computer,” Deardorff introduced his model. It simulated the flow over a heated ground area of eight square kilometers and a height of 2 km, using a computational grid of 40 \(\times \) 40 \(\times \) 40 points. The computation was executed on a supercomputer of the 1970s, the CDC 7600, designed by Seymour Cray. Around the same time, another researcher with access to such supercomputers, Anthony Leonard from the NASA Ames Research Center, Moffett Field, California, used the label “large eddy simulation” for this approach (Leonard 1974).
Deardorff’s approach came to be applied at other establishments which could afford the required computational facilities. Ulrich Schumann, a doctoral student at the Technical University Karlsruhe and collaborator at the Karlsruhe Nuclear Reactor Center, submitted in 1973 his dissertation on numerical investigations of turbulent flows in plane channels and concentric annuli. Schumann also averaged the Navier-Stokes equations over grid volumes, but modified Deardorff’s model at the sub-grid level. His computations were performed on the mainframe computer of the Karlsruhe Nuclear Reactor Center, an IBM 370/165; typical runs took several hours (Schumann 1973, 1975) (Fig. 6.2).
Fig. 6.2

Schumann’s computation of turbulent velocities at one moment in an annular space. Arrows represent the velocity components in a plane perpendicular to the axis, contour lines the axial velocities (Schumann 1973, Fig. 17)

The LES-approach also entered other engineering establishments. In 1977, it was presented to the community of aeronautical science in the AIAA Journal (Ferziger 1977, Abstract):

Large eddy simulations are a numerical technique in which large-scale turbulent structures are computed explicitly, and the small structures are modeled. Arguments for believing this method to be superior to more conventional approaches are given, the basis of the method is given, and some typical results displayed. The results show that the method does have enormous promise, but much further development is required.

The computations were carried out, like Deardorff’s four years earlier, on CDC 7600 computers, with running times of 90 min for cubic grids with \(64^3\) grid points. “Large eddy simulation methods,” the author concluded, “already have displayed a great deal of potential as important tools for understanding and predicting turbulent flows.” Furthermore, LES raised “the hope of partially replacing expensive experimental work with less costly computation. Over the longer range, it is possible that LES methods will become a standard computational tool” (Ferziger 1977, p. 1267).

From the late 1970s, LES was on its way towards “a really big industry”.8 Since the 1990s LES may be studied from textbooks (Galperin and Orszag 1993; Sagaut 2005; Grinstein et al. 2007). Even more broadly oriented textbooks on turbulence, like Stephen Pope’s Turbulent Flows published in 2000, include comprehensive chapters on LES (Pope 2000, Chap. 13). The “big industry” in this research area may be estimated from a glimpse into the ISI Science Citation Index for that year, which listed 164 papers including the keyword “large-eddy-simulation”. “By 2004 this number had doubled to over 320 per year” (Sagaut 2005, Foreword to the Third Edition).

6.4 The Closure Problem

Strictly speaking, the closure problem of turbulence dates back to 1894, when Reynolds derived equations for fully developed turbulence: the Reynolds-averaged Navier-Stokes (RANS) equations. These are unclosed, i.e. they contain more variables than equations. The additional variables appear as time-averaged products of fluctuating velocities and are known as Reynolds stresses. Prandtl’s mixing length approach from 1925 and his “new system of formulae for developed turbulence” from 1945 may be regarded as different efforts to cope with the problem of closure. However, these are retrospective evaluations. Closure was enunciated as a turbulence problem in its own right only in the wake of the first numerical simulations of turbulence. Invoking the appropriate “closure” raised among early turbulence modellers the hope “that the barricades against successful turbulence theories are finally beginning to crumble under the combined attack of empiricism, analytic rigor, and numerical simulation” (Fox and Lilly 1972, p. 52).
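The origin of the problem can be stated in one equation. Writing each velocity component as mean plus fluctuation, \(u_i = \bar{u}_i + u_i'\), and averaging the incompressible Navier-Stokes equations gives

\[ \frac{\partial \bar{u}_i}{\partial t}+\bar{u}_j\frac{\partial \bar{u}_i}{\partial x_j}=-\frac{1}{\rho }\frac{\partial \bar{p}}{\partial x_i}+\nu \nabla ^2\bar{u}_i-\frac{\partial \overline{u_i'u_j'}}{\partial x_j}. \]

The last term contains six new unknowns, the Reynolds stresses \(-\rho \overline{u_i'u_j'}\), for which the averaging supplies no additional equations; exact equations for these stresses involve still higher moments, so the system never closes by itself.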

Once more it is not accidental that the protagonists in this area came from institutions that commanded powerful computational means and were oriented towards applied research, like the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. Large-Eddy Simulations of geophysical circulation on meshes where the distance between the grid points corresponded to hundreds of kilometers involved sub-grid modelling of turbulence, which required closure equations that accounted for scales down to the Kolmogorov microscale of some millimeters. Turbulence modelling for industrial applications or in aeronautical research had to cope with scales from fractions of millimeters to several meters. In 1968 a conference was convened in Stanford on the computation of turbulent boundary layers with the goal of comparing different closure methods. The applied context is illustrated by the sponsors of this event: the Mechanics Division of the U.S. Air Force Office of Scientific Research and the industrial sponsors of the Internal Flow Program of the Mechanical Engineering Department at Stanford University (Kline et al. 1969).

In the aftermath of this conference, William Craig Reynolds from the Department of Mechanical Engineering at Stanford University reviewed the “Recent Advances in the Computation of Turbulent Flows” in a course on turbulence for the American Institute of Chemical Engineers. By that time, in 1970, Reynolds already distinguished between a variety of closure models for different purposes. MVF (Mean Velocity Field) closure was regarded as appropriate for boundary layer flow. Another type was designated as MTE closure (Mean Turbulent Energy); it was found suitable in situations where the mean flow changed abruptly. Closure methods based on the RANS equations were labelled as MRS closure (Mean Reynolds Stress), but “such closures are not yet tools for practical analysis.” Another type of closure was designated with the acronym FVF (Fluctuating Velocity Field). The review closed with an outlook on the future (Reynolds 1974, p. 242):

It should not be long before simple boundary-layer flows are routinely handled in industry by MVF prediction methods. These methods are easy to use, require a minimum of input data, and give results which are usually adequate for engineering purposes. MTE methods will become increasingly important to both engineers and scientists, for they afford the possibility of including at least some important effects missed by MVF methods. The debate over the gradient-diffusion vs. large-eddy-transport closures will continue, and both methods will probably continue to be used with nearly equal success. MRS methods will be explored from the scientific side, but probably will not be used to any substantial degree in engineering work for some time to come.

It was obvious that closure methods were crucial for computational turbulence modelling in a host of applications. The rapid sequence of review articles illustrates the pace of progress in this field. In 1972, two NCAR researchers summarized the state of turbulence modelling for geophysical flows, also “as a test for turbulence theory” (Fox and Lilly 1972). Around the same time Peter Bradshaw, a professor of Experimental Aerodynamics at the Department of Aeronautics at Imperial College in London, elaborated on “the closure problem” in a lecture on “The understanding and prediction of turbulent flow” with a focus on turbulence models suited for engineering purposes (Bradshaw 1972). In the same year the AIAA Journal presented to the community of aeronautical engineers “A Survey of the Mean Turbulent Field Closure Models” (Mellor and Herring 1972). In a textbook on turbulent boundary layer flow published in 1974 “as an outgrowth of work done primarily in the Aerodynamics Research Group at the Douglas Aircraft Company”, a chapter reviewed the closure methods which had been developed so far for boundary layer turbulence (Cebeci and Smith 1974).
In 1976 William Craig Reynolds, by now chairman of the Department of Mechanical Engineering at Stanford University, updated his survey from 1970 for the Annual Review of Fluid Mechanics. “By the mid-1960s there were several workers actively developing turbulent-flow computation schemes based on the governing partial differential equations (pde’s),” he remarked about the recent beginnings of turbulence modelling. “The first such methods used only the equations for the mean motions, but second-generation methods began to incorporate turbulence pde’s.” He distinguished the following turbulence models (Reynolds 1976, pp. 183–184):

1. Zero-equation models–models using only the pde for the mean velocity field, and no turbulence pde’s.

2. One-equation models–models involving an additional pde relating to the turbulence velocity scale.

3. Two-equation models–models incorporating an additional pde related to a turbulence length scale.

4. Stress-equation models–models involving pde’s for all components of the turbulent stress tensor.

5. Large-eddy simulations–computations of the three-dimensional time-dependent large-eddy structure and a low-level model for the small-scale turbulence.

A successful two-equation model from the second generation, for example, was the k-\(\varepsilon \) model developed in the early 1970s at the Department of Mechanical Engineering of Imperial College in London. Closure was achieved by two additional partial differential equations in which the dependent variables were the turbulence kinetic energy k and its dissipation rate \(\varepsilon \). “It is the simplest kind of model that permits prediction of both near-wall and free-shear-flow phenomena without adjustments to constants or functions,” the authors praised its virtues, “it successfully accounts for many low Reynolds-number features of turbulence; and its use has led to accurate predictions of flows with recirculation as well as those of the boundary-layer kind” (Launder and Spalding 1974, p. 287).
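In the k-\(\varepsilon \) model the eddy viscosity follows algebraically from the two transported quantities, \(\nu _t = C_\mu k^2/\varepsilon \). A minimal sketch with the standard model constants reported by Launder and Spalding (the transport equations for k and \(\varepsilon \) themselves are omitted):

```python
# Standard k-epsilon model constants (Launder and Spalding 1974).
C_MU = 0.09
C_EPS1, C_EPS2 = 1.44, 1.92        # production/destruction coefficients
SIGMA_K, SIGMA_EPS = 1.0, 1.3      # turbulent Prandtl numbers for k, eps

def eddy_viscosity(k, eps, c_mu=C_MU):
    """Eddy viscosity nu_t = C_mu * k**2 / eps.

    k   : turbulence kinetic energy  [m^2/s^2]
    eps : dissipation rate of k      [m^2/s^3]
    """
    return c_mu * k * k / eps

# Example: k = 0.5 m^2/s^2 and eps = 10 m^2/s^3 give
# nu_t = 0.09 * 0.25 / 10 = 2.25e-3 m^2/s.
nu_t = eddy_viscosity(0.5, 10.0)
```

This algebraic step is what makes the model “the simplest kind” in the authors’ sense: once k and \(\varepsilon \) are transported, no further empirical input is needed to obtain the local eddy viscosity.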

Turbulence modelling involved the consideration of specific flow configurations, empirically determined constants and computational economy rather than universal aspects of turbulence. Despite an explosive increase in computer power and algorithmic sophistication in Computational Fluid Dynamics (CFD) this has not changed over the subsequent decades. The closure problem remained crucial for turbulence modelling in a host of engineering applications. “Closure models continue to play a major role in applied CFD and remain an area of active research and development” (Durbin 2017, p. 77).

6.5 Direct Numerical Simulation (DNS)

The only way to avoid the closure problem was to solve the Navier-Stokes equations directly down to the smallest scales, i.e. without decomposing velocities and pressures into mean and fluctuating values. From the perspective of the 1960s such an approach was elusive. As Corrsin estimated in 1961, the computational grid for a flow at a Reynolds number of \(10^4\) would require at least \(10^{12}\) points in order to resolve the turbulent eddies down to the Kolmogorov microscale. “The number of ‘bits’ of information is 2.7 \(\times \) \(10^{13}\)” (Corrsin 1961, p. 324), far beyond the computational capabilities at the time and in the foreseeable future. A decade later, an estimate for the direct numerical simulation of turbulent pipe flow arrived at about \(10^9\) s, or 100 years, for even a “somewhat scanty turbulence computation” (Emmons 1970, p. 33).
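Corrsin's reasoning can be retraced with a back-of-envelope scaling argument: by Kolmogorov's theory the ratio of the largest to the smallest eddies grows as \(Re^{3/4}\), so a three-dimensional grid needs on the order of \(Re^{9/4}\) points. The following sketch applies only this bare scaling (the function name is illustrative; Corrsin's fuller estimate, which resolved the eddies with several points each, arrived at the larger figure of \(10^{12}\)):

```python
# Back-of-envelope estimate of a DNS grid, assuming standard Kolmogorov
# scaling: L/eta ~ Re^(3/4), hence a 3-D grid needs ~ Re^(9/4) points.

def dns_grid_points(reynolds_number: float) -> float:
    """Rough number of grid points ~ Re^(9/4) for a 3-D turbulent flow."""
    return reynolds_number ** (9 / 4)

if __name__ == "__main__":
    for re_num in (1e4, 1e6):
        print(f"Re = {re_num:.0e}: ~{dns_grid_points(re_num):.1e} grid points")
```

At \(Re = 10^4\) the bare scaling already gives about \(10^9\) points; more careful bookkeeping of the required resolution and storage inflates the count towards Corrsin's figure.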

DNS remained prohibitive for many more years as a computational approach in research on turbulent flows, not to mention engineering applications. With respect to computational economy, “Progress in the development of a Reynolds stress turbulence closure” seemed most rewarding (Launder et al. 1975)—to quote the title of “This Week’s Citation Classic” from ten years later.9 Computationally more expensive was LES with appropriate sub-grid modelling. “At the present time, simulation can provide detailed information only about the large scales of flows in simple geometries,” a review of the LES approach remarked in 1984, “and is advantageous when many flow quantities at a single instant are needed (especially quantities involving pressure) or where the experimental conditions are hard to control or are expensive or hazardous.” In contrast to the early applications on geophysical scales, the term “large” called for some qualification. It designated the scales affected by the boundary conditions (which could apply to rather small laboratory dimensions of engineering interest), while the small sub-grid scales were assumed to display more universal features: “The LES approach lies between the extremes of direct simulation, in which all fluctuations are resolved and no model is required, and the classical approach of O. Reynolds, in which only mean values are calculated and all fluctuations are modeled” (Rogallo and Moin 1984, p. 102).

Unlike ten years before, when LES ranked highest on the five-point list of computational approaches to turbulence simulation, the LES approach was now regarded as intermediate between the (not yet accessible) resolution of all scales in DNS and turbulence modelling by closure assumptions. For the time being, however, DNS was confined to flows at very low Reynolds numbers and was therefore used only for comparative purposes. “In addition to calculating model parameters, direct simulations are also used to determine how well the forms of the SGS models represent ‘exact’ SGS stresses,” the review from 1984 remarked about the early use of DNS as a tool for improving sub-grid scale models (Rogallo and Moin 1984, p. 110).

Tangible progress with DNS as a viable approach in its own right was first achieved in the late 1980s in studies of the onset of turbulence. Yet here, too, DNS was used in the company of other methods, for in terms of computational effort it was extremely expensive. “The LES calculations required roughly 10 h of CPU time compared with the several hundred hours for the corresponding DNS computations,” a review on the simulation of transition in wall-bounded shear flows noted about the effort of computing a particular boundary-layer transition (Kleiser and Zang 1991, p. 530).

When several years later DNS became ripe for a first survey article, the reviewers left no doubt “that DNS is a research tool, and not a brute-force solution to the Navier-Stokes equations for engineering problems” (Moin and Mahesh 1998, p. 539). DNS could be used as a tool to analyse features of turbulent flows that were inaccessible otherwise. It complemented experimental investigations in the laboratory. This had already been demonstrated in studies about the structure of the turbulent boundary layer: “Access to velocity, pressure, and vorticity fields in three-dimensional space and time allowed DNS to fill in the gaps in the popular notions of boundary layer structure. In retrospect, this use of DNS represented a major change in the accepted role of computations in turbulence research” (Moin and Mahesh 1998, p. 568).


  1.

    John von Neumann to Howard Emmons, 3 April 1946, quoted in Aspray (1990, p. 274).

  2.

    John von Neumann, Memorandum, 5 September 1945, quoted in Aspray (1990, p. 52).

  3.

    Neumann, quoted in Stern (1981, p. 267).

  4.

    There is a rich literature on “computer experiments” and “computer simulation” as a new mode of research; for an overview from an epistemological perspective see Winsberg (2010).

  5.
  6.

    Neumann in a “Proposal for a Project on the Dynamics of the General Circulation”, reproduced in Smagorinsky (1983, p. 30): “Indeed, determining the ordinary circulation pattern may be viewed as a forecast over an infinite period of time, since it predicts what atmospheric conditions will generally prevail when they have become, due to the lapse of very long time intervals, causally and statistically independent of whatever initial conditions may have existed.”

  7.

    Quoted in Kanak (2004, pp. 5–6).

  8.

    Anthony Leonard, Interview by Heidi Aspaturian. Pasadena, California, November 6, 9, 14, and 21, 2012. Oral History (26 October 2019). See the review articles (Rogallo and Moin 1984; Schumann and Friedrich 1985; Lesieur and Métais 1996; Meneveau and Katz 2000).

  9.

    This Week’s Citation Classic. Current Contents, Number 50, 10 December 1984.

Copyright information

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Forschungsinstitut, Deutsches Museum, Munich, Germany
