Decision Making under Deep Uncertainty, pp. 201–222
Info-Gap (IG): Robust Design of a Mechanical Latch
Abstract

Info-Gap (IG) Decision Theory, introduced in Chap. 5, is used to formulate and solve an analysis of robustness for the early design of a mechanical latch.

The three components necessary to assess robustness of the latch design (a system model, performance requirement, and representation of uncertainty) are discussed.

The robustness analysis indicates that the nominal design can accommodate significant uncertainty before failing to deliver the required performance.

The discussion concludes with the assessment of a variant design to show how a decision (“which design should be chosen?”) can be confidently reached despite the presence of significant gaps in knowledge.
10.1 Introduction
The role of numerical simulation in aiding decision-making has grown significantly in the past three decades across diverse applications, such as financial modeling, weather forecasting, and design prototyping (Oden et al. 2006). Despite its widespread use, numerical simulation suffers from unavoidable sources of uncertainty, such as simplifying assumptions made to represent non-idealized behavior, unknown initial conditions, and variability in environmental conditions. The question posed herein is: How can simulations support a confident decision, given their inherent sources of uncertainty? This chapter answers this question using Info-Gap (IG) Decision Theory, the philosophy of which is presented in Chap. 5.
The main factors in selecting an approach to evaluate the effect of uncertainty on a decision are (1) the nature and severity of the uncertainty, and (2) computational feasibility. In cases of deep uncertainty, a probability distribution cannot be specified with confidence. Further, uncertainty can often be unbounded, and decision-makers need to understand how much uncertainty they are accepting when they proceed with a decision. A few examples of comparing methods can be found in Hall et al. (2012) and Ben-Haim et al. (2009). Hall et al. (2012) compare IG with RDM (Chap. 2). Ben-Haim et al. (2009) compare IG with robust Bayes (a min–max approach with probability bounds known as P-boxes, and coherent lower previsions). In all cases, the major distinction is the prior knowledge: how much information (theoretical understanding, physical observations, numerical models, expert judgment, etc.) is available, and what the analyst is willing to assume if this information does not sufficiently constrain the formulation of the decision-making problem. One can generally characterize IG as a methodology that demands less prior information than other methods in most situations. Furthermore, P-boxes can be viewed as a special case of IG analysis (Ferson and Tucker 2008), which suggests that these two approaches may be integrated to support probabilistic reasoning. The flexibility promoted by IG, however, also makes it possible to conveniently analyze a problem for which probability distributions are unknown, or at least uncertain, and for which a realistic and meaningful worst case cannot be reliably identified (Ben-Haim 2006).
Applying IG to support confident decision-making using simulations hinges on the ability to establish the robustness of the forecast (or predicted) performance to modeling assumptions and sources of uncertainty. Robustness, in this context, means that the performance requirement is met even if some of the modeling assumptions happen to be incorrect. In practice, this is achieved by exercising the simulation model to explore uncertainty spaces that represent gaps in knowledge—that is, the “difference” between best-known assumptions and how reality could potentially deviate from them. An analysis of robustness then seeks to establish that performance remains acceptable within these uncertainty spaces. Some requirement-satisfying decisions will tolerate more deviation from best-known assumptions than others. Given two decisions offering similar attributes (feasibility, safety, cost, etc.), preference should always be given to the more robust one—that is, the solution that tolerates more uncertainty without endangering performance.
As discussed in Chap. 5, three components are needed to carry out an IG analysis: a system model, a performance requirement, and a representation of uncertainty. This chapter shows how to develop these components to assess robustness for the performance of a mechanical latch in an early phase of design prototyping. IG is particularly suitable for design prototyping, because it offers the advantages of accommodating minimal assumptions while communicating the results of an analysis efficiently through a robustness function.
Our discussion starts in Sect. 10.2 by addressing how the three components of an info-gap analysis (system model, performance requirement, and uncertainty model) can be formulated for applications that involve policy topics. The purpose is to emphasize that info-gap applies to a wide range of contexts, not just those from computational physics and engineering grounded in first-principle equations. Section 10.3 introduces our simple mechanical example—the design of a latch and its desired performance requirements given that the geometry, material properties, and loading conditions are partially unknown. The section also discusses the system model defined to represent the design problem. Details of the simulation (how the geometry is simplified, how some of the modeling assumptions are justified, how truncation errors are controlled, etc.) are omitted, since they are not essential to understanding how an analysis of robustness is carried out.
Section 10.4 discusses the main sources of uncertainty in the latch design problem, how they are represented with an info-gap model of uncertainty, and the implementation of the robustness analysis. Two competing latch designs are evaluated in Sect. 10.5 to illustrate how confident decisions can be reached despite the presence of significant gaps in knowledge. A summary of the main points made is provided in Sect. 10.6.
10.2 Application of Info-Gap Robustness for Policymaking
Our application suggests how info-gap robustness (see Chap. 5) can be used to manage uncertainty in the early-stage design of a latch mechanism (Sects. 10.3 and 10.4) and how robustness functions may be exploited to support decision-making (Sect. 10.5). The reader should not be misled, however, into believing that this methodology applies only to contexts described by first-principle equations, such as the conservation of momentum (also known as Newton’s 2nd law, “Force = Mass × Acceleration”) solved for the latch example. The discussion presented here emphasizes the versatility of IG robustness for other contexts, particularly those involving policy topics for which well-accepted scientific models might be lacking.
Consider two highconsequence policymaking applications:

Climate change: Policy decisions to address the impact on human activity of changes in the global climate (and vice versa) tend to follow either the precautionary principle or the scientific design of intervention. In the first case (precautionary principle), decision-makers would err on the side of early intervention to mitigate the potentially adverse consequences of climate change, even if the scientific understanding of what causes these changes, and what the consequences might be, is lacking. In the second case (scientific design of intervention), longer-range planning is implied, while a stronger emphasis would be placed on gaining a better scientific understanding and reducing the sources of uncertainty before policy is enacted. In the presence of incomplete understanding of the phenomena that drive changes in the global climate, effects on the planet’s ecosystem, and potential consequences for human activity, it is unclear which early intervention strategies to adopt and how effective they might be. Given scientific uncertainty, however, policymakers are inclined to adopt early precautionary intervention.

Long-range infrastructure planning: The world population is increasingly concentrated in urban areas. The ten largest urban areas, such as Tokyo (Japan), Jakarta (Indonesia), Delhi (India), and New York City (USA), feature population densities between 2,000 and 12,000 individuals per km^{2}, exceeding densities in rural areas by two to three orders of magnitude. Managing these population centers poses serious challenges in terms of housing, transportation, water and power supplies, access to nutrition, waste management, and many other critical systems. Future infrastructure needs to be planned many decades ahead in the presence of significant uncertainty regarding population and economic growth, urbanization laws, and the adoption of future technologies. The development, for example, of peer-to-peer transportation systems might make it necessary to rethink how conventional public transportation networks and taxi services are organized. The challenge of infrastructure planning is to design sufficient flexibility into these interconnected engineered systems when some of the factors influencing them, together with the performance requirements themselves, might be partially unknown.
The challenge of policymaking for these and similar problems is, of course, how to manage the uncertainty. These applications often involve incomplete scientific understanding of the processes involved, elements of extrapolation or forecasting beyond known or tested conditions, and aspects of the decision-making practice that are not amenable to being formulated with mathematical models. Info-gap robustness, nevertheless, makes it possible to assess whether a policy decision would deliver the desired outcomes even if the definition of the problem features significant uncertainty and some of the assumptions formulated in the analysis are incorrect.
Even though simulations such as Fig. 10.1 are grounded in first-principle descriptions, they are not immune to uncertainty. Executing this calculation with a one-degree resolution (i.e., 360 grid points around the Earth), for example, implies that some of the computational zones are as large as 314 km^{2} near the equator, which is nearly five times the surface area of Washington, D.C. This raises the question of whether localized eddies that contribute to phenomena such as the Atlantic Ocean’s Gulf Stream are appropriately represented. Beyond the question of adequacy, settings such as resolution, the fidelity with which various processes are described, and the convergence of numerical solvers generate numerical uncertainty. These imply that code predictions could differ, perhaps significantly, from the “true-but-unknown” conditions that analysts seek to know.
Other commonly encountered sources of uncertainty in first-principle simulations include the variability or incomplete knowledge of initial conditions, boundary conditions, constitutive properties (material properties, reactive chemistry, etc.), and source functions (e.g., how much greenhouse gas is introduced into the atmosphere?). This is in addition to not always understanding how different processes might be coupled (e.g., how does the chemistry of the ocean change due to increased acidity of the atmosphere?). Model-form uncertainty, which refers to the fact that the functional form of a model might be unknown, is also pervasive in the computational sciences. An example would be selecting a mathematical equation to represent the behavior of a chemical at conditions that extrapolate beyond what can be experimentally tested in a laboratory. Finally, large-scale simulation endeavors often require passing information across different code platforms. Such linkages can introduce additional uncertainty, depending on how the variables solved for in one code are mapped to initialize the calculation in another code.
The aforementioned sources of uncertainty, while multifaceted in nature and potentially severe, are handled appropriately by a number of well-established methods, such as statistical sampling (Metropolis and Ulam 1949), probabilistic reliability (Wu 1994), worst-case analysis, and IG robustness. In the last case, the system model is the simulation flow that converts input settings to performance outcomes. The performance requirement defines a single criterion or multiple criteria that separate success from failure. The uncertainty model describes the sources of variability and lack of knowledge introduced by the simulation flow. Once the three components are defined, a solution procedure is implemented to estimate the robustness function of a given decision. Competing decisions can then be assessed by their ability to meet the performance requirement. Likewise, the confidence placed in a decision is indicated by the degree to which its forecast performance is robust, or insensitive, to increasing levels of uncertainty in the formulation of the problem. Regardless of how sophisticated a simulation flow might be, IG analysis always follows this generic procedure, as discussed in Sects. 10.3 and 10.4 for the latch application.
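The generic procedure just described (system model, performance requirement, uncertainty model, then a robustness estimate) can be sketched in a few lines of code. The system model below is a toy placeholder, not the chapter's simulation flow, and the corner search is a simplifying assumption that is valid only when the model is monotone in each input; the chapter relies on a global optimization search instead.

```python
from itertools import product

def system_model(theta):
    # Toy stand-in for the simulation flow: maps inputs to a scalar performance.
    return sum(t * t for t in theta)

def worst_case(nominal, h):
    # Worst (largest) performance over a fractional uncertainty space at
    # horizon h. For a model monotone in each input, the worst case lies
    # at a corner of the space (a simplifying assumption in this sketch).
    corners = product(*[(t * (1 - h), t * (1 + h)) for t in nominal])
    return max(system_model(c) for c in corners)

def robustness(nominal, requirement, h_step=0.01, h_max=2.0):
    # Largest horizon of uncertainty h whose worst case still satisfies
    # the performance requirement (the robustness h-hat of the decision).
    h_hat, h = 0.0, h_step
    while h <= h_max and worst_case(nominal, h) <= requirement:
        h_hat, h = h, h + h_step
    return h_hat
```

Competing decisions would each be passed through `robustness` and ranked by the value of h-hat they achieve at the common requirement.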
IG robustness makes it possible to assess the extent to which a policy decision is affected by what may be unknown, even in the presence of sources of uncertainty that do not lend themselves to parametric representations such as probability distributions, polynomial chaos expansions, or intervals. Accounting for an uncertainty such as the gray region of Fig. 10.3 is challenging if a functional form is lacking. One might not know if the world’s population can be modeled as increasing or decreasing, or even if the trend can be portrayed as monotonic.
Another type of uncertainty, often encountered in the formulation of policy problems and which lends itself naturally to info-gap analysis, is qualitative information or expert opinions that introduce vagueness or nonspecificity. For example, one might state from Fig. 10.3 that “World population is growing”, without characterizing this trend with a mathematical equation. Policymakers might seek to explore if decisions they consider can accommodate this type of uncertainty while delivering the desired outcomes. The components of such an analysis would be similar to those previously discussed. A system model is needed to analyze the consequences of specific conditions, such as “the population is growing” or “the population is receding”, and a performance requirement is formulated to separate success from failure. The uncertainty model would, in this case, include alternative statements (e.g., “the population is growing” or “the population is growing faster”) to assess if the decision meets the performance requirement given such an uncertainty.
10.3 Formulation for the Design of a Mechanical Latch
The geometry of the latch shown on the right of Fig. 10.6 is simplified by converting the round corners to straight edges. This results in dimensions of 3.9 mm (length) by 4.0 mm (width) by 0.8 mm (thickness). The latch’s head, to which the contact displacement is applied, protrudes 0.4 mm above the surface. A perfectly rigid attachment to the compartment door is assumed, which makes it possible to neglect the door altogether and greatly simplifies the implementation.
The nominal value, however, is only a best estimate obtained from the manufacturing of similar devices, and it is desirable to guarantee that the latch can meet its performance requirement given the application of contact displacements different from 0.50 mm (either smaller or greater).
With this formulation, the analyst can select a desired safety factor and ascertain how much uncertainty can be tolerated given this requirement. A well-known trade-off, observed in Sect. 10.4, is that demanding more performance by selecting a larger value of f_{S} renders the design more vulnerable (or less robust) to modeling uncertainty.
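A small numerical sketch makes the safety-factor requirement concrete. The relation f_S = 1 − σ_Max/σ_Yield is inferred from the figures reported later in the chapter (a peak stress of 28.07 MPa against a 55 MPa yield stress giving f_S = 49%); it should be checked against the exact form of Eq. (10.2).

```python
# Values reported later in the chapter for the nominal latch design.
sigma_yield = 55.0   # MPa, yield stress of the latch material
sigma_max = 28.07    # MPa, predicted peak stress of the nominal design

# Safety factor inferred from the reported figures (approximately 49%).
f_s = 1.0 - sigma_max / sigma_yield

def is_compliant(sigma, f_s_required, sigma_yield=55.0):
    # Requirement-compliant if the peak stress stays at or below the
    # yield stress de-rated by the required safety factor.
    return sigma <= (1.0 - f_s_required) * sigma_yield
```

For example, demanding f_S = 18% tightens the allowable peak stress to roughly 45 MPa, illustrating the trade-off stated above.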
The analysis of mechanical latches is a mature field after several decades of their common use in many industries. An example is given in BASF (2001), where peak strain values for latches of different geometries are estimated using a combination of closed-form formulae and empirical factors. While these simplifications enable derivations “by hand,” they also introduce assumptions that are thought to be inappropriate for this design problem. The decision is therefore made to rely on a finite element representation (Zienkiewicz and Taylor 2000) to estimate the stresses that result from imposing the displacement (10.1) and to assess whether the requirement-compliant condition (10.2) is met. The finite element model discretizes the latch’s geometry into elemental volumes within which the equations of motion are solved. The Newmark algorithm is implemented to integrate the equations in time (Newmark 1959). This procedure also introduces assumptions, such as the type of interpolation function selected for the elemental volumes. These choices add to the discretization of the geometry to generate truncation errors. Even though they are important, these mesh-size and runtime considerations are not discussed, in order to keep our focus on the info-gap analysis of robustness.
10.4 The Info-Gap Robust Design Methodology
This section discusses the methodology applied to achieve an info-gap robust design. Three issues need to be addressed before illustrating how the robustness function is calculated and used to support decision-making. The first issue is to define the design space. The second is to determine the sources of uncertainty against which the design must be robust. The third is how to represent this uncertainty mathematically without imposing unwarranted assumptions. These choices are discussed before showing how the robustness function is derived.
Several parameters of the geometry are available to define the design space, including the length, width, thickness, and overall configuration of the latch. For computational efficiency, it is desirable to explore as small a design space as possible while ensuring that the parameters selected for design optimization exert as significant an influence as possible on performance, which here is the peak stress, σ_{Max}, of Eq. (10.2).
The first issue is to define the design space by judiciously selecting parameters that describe the geometry of the latch. This is achieved using global sensitivity analysis to identify the most influential parameters (Saltelli et al. 2000). Five sizing parameters are considered: the total length (L), width (W_{C}), and depth (D_{C}) of the latch, and the geometry (length, L_{M}, and depth, D_{H}) of the surface where the displacement U_{Contact} is applied. An analysis of variance is performed based on a three-level, full-factorial design of computer experiments that requires 3^{5} = 243 finite element simulations. The parameters L (total length) and W_{C} (width) are found to account for approximately 76% of the total variability of peak-stress predictions when the five dimensions (L; W_{C}; D_{C}; D_{H}; L_{M}) are varied between their lower and upper bounds. Design exploration is therefore restricted to the pair (L; W_{C}).
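The three-level, full-factorial design of computer experiments described above can be generated mechanically. The sketch below enumerates the 3^5 = 243 parameter combinations; the bounds are hypothetical placeholders, since the chapter does not list the actual lower and upper bounds of the five sizing parameters.

```python
from itertools import product

bounds = {               # (lower, upper) per sizing parameter; values are
    "L":   (3.5, 4.3),   # total length, mm (hypothetical bounds)
    "W_C": (3.6, 4.4),   # width, mm
    "D_C": (0.7, 0.9),   # depth, mm
    "D_H": (0.3, 0.5),   # head depth, mm
    "L_M": (0.8, 1.2),   # head length, mm
}

def three_levels(lo, hi):
    # Lower bound, midpoint, upper bound: the three levels of the design.
    return (lo, 0.5 * (lo + hi), hi)

# Full-factorial design: every combination of the three levels, 3^5 runs.
design = list(product(*(three_levels(lo, hi) for lo, hi in bounds.values())))
print(len(design))  # 243 finite element runs, one per (L, W_C, D_C, D_H, L_M)
```

Each tuple in `design` would be fed to the finite element model, and the 243 peak-stress results would drive the analysis of variance.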
The second issue is to define the sources of modeling uncertainty against which the design must be robust. This uncertainty represents the fact that real-world conditions might deviate from what is assumed in the simulation model. To make matters more complicated, the magnitude of these deviations, which indicates by how much the model could be incorrect, is unknown. Furthermore, precise probability distributions are lacking. It is essential that the representation of uncertainty can account for these attributes of the problem without imposing unwarranted assumptions.
Table 10.1 Definition of sources of uncertainty in the simulation model

| Variable    | Description                  | Nominal value      | Typical range           |
|-------------|------------------------------|--------------------|-------------------------|
| E           | Modulus of elasticity        | 2.0 GPa            | 2.0–3.0 GPa             |
| G           | Shear modulus                | 0.73 GPa           | 0.71–1.11 GPa           |
| ν           | Poisson’s ratio              | 0.37               | 0.35–0.40               |
| ρ           | Mass density                 | 1.20 × 10^3 kg/m^3 | 1.20–1.25 × 10^3 kg/m^3 |
| U_{Contact} | Applied contact displacement | 0.50 mm            | 0.20–0.80 mm            |
| F_{OS}      | Dynamic overshoot factor     | 1.0                | 0.5–1.5                 |
The three components of IG analysis applied to the latch design problem
Note that the IG uncertainty model (10.7) does not introduce any correlation between variables, because such information is usually unknown in an early design stage. A correlation structure that would be only partially known can easily be included. An example of info-gapping the unknown correlation of a financial security model is given in Ben-Haim (2010).
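The uncorrelated uncertainty model can be sketched as a simple membership test. The fractional-deviation form below is an assumption consistent with how deviations are reported in percent later in the chapter; it is not necessarily the exact algebra of Eq. (10.7). Nominal values follow Table 10.1.

```python
# Nominal values of the six uncertain variables (Table 10.1).
NOMINAL = {
    "E": 2.0,           # GPa, modulus of elasticity
    "G": 0.73,          # GPa, shear modulus
    "nu": 0.37,         # Poisson's ratio
    "rho": 1.20e3,      # kg/m^3, mass density
    "U_Contact": 0.50,  # mm, applied contact displacement
    "F_OS": 1.0,        # dynamic overshoot factor
}

def in_uncertainty_space(theta, h):
    # Membership in U(h): every variable deviates from its nominal value
    # by at most the fraction h, with no correlation between variables.
    return all(
        abs(theta[k] - NOMINAL[k]) <= h * abs(NOMINAL[k]) for k in NOMINAL
    )
```

A partially known correlation structure would be added here by constraining joint deviations rather than testing each variable independently.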
At this point in the problem formulation, a two-dimensional design space p = (L; W_{C}) is defined together with the performance requirement (10.2). Modeling uncertainty is identified in Table 10.1 and represented mathematically in Eq. (10.7). The finite element simulation indicates that the peak stress experienced by the nominal design is σ_{Max} = 28.07 MPa, which does not exceed the yield stress of 55 MPa and provides a safety factor of f_{S} = 49%. Even accounting for the truncation error introduced by the lack of resolution in the discretization (see the discussion of Fig. 10.9), the conclusion is that the nominal design is requirement-compliant.
The robustness function, which is progressively constructed in Fig. 10.8 by exploring larger uncertainty spaces, U(h), maps the worst-case performance as a function of the horizon of uncertainty. Its shape indicates the extent to which performance deteriorates as increasingly more uncertainty is considered. A robust design is one that tolerates as much uncertainty as possible without entering the “failure” domain (red region) in which the requirement is no longer met.
Applying the concept of robustness to the latch design problem is simple. One searches for the maximal (worst) stress, \( \max_{\theta \in U(h)} \sigma_{\text{Max}}(\theta) \), obtained from finite element simulations in which the variables θ vary within the uncertainty space U(h) defined in Eq. (10.7). As stated in Eq. (10.8), robustness is the greatest size of the uncertainty space such that the design is requirement-compliant irrespective of which model is analyzed within \( U(\hat{h}) \). Said differently, all system models belonging to the uncertainty space \( U(\hat{h}) \) are guaranteed to meet the performance requirement of Eq. (10.2). The horizon of uncertainty, h, is nevertheless unknown and may exceed \( \hat{h} \). Not all system models in uncertainty spaces U(h), for h greater than \( \hat{h} \), are compliant.
The procedure, therefore, searches for the worst-case peak stress within the uncertainty space U(h). This is a global optimization problem (Martins and Lambe 2013) whose resolution provides one datum for the robustness function, such as the point (σ_{1}; h_{1}) in Fig. 10.8b that results from exploring the uncertainty space U(h_{1}). For simplicity, the uncertainty spaces illustrated on the left side of the figure are represented with two variables, θ = (θ_{1}; θ_{2}). This should not obscure the fact that most applications deal with larger spaces (the latch has six variables θ_{k}). Figure 10.8c indicates that the procedure is repeated by increasing the horizon of uncertainty from h_{1} to h_{2}, hence performing an optimization search over a larger space U(h_{2}).
The procedure outlined above stops when requirement compliance is no longer guaranteed—that is, when the worst-case peak stress exceeds the critical stress σ_{Critical}. The corresponding point \( (\sigma_{\text{Critical}}; \hat{h}) \) is necessarily located on the edge of the (red) failure region. By the definition of robustness (10.8), \( \hat{h} \) is the maximum level of uncertainty that can be tolerated while guaranteeing that the performance requirement is always met.
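The step-by-step construction just described can be tabulated as follows: for each horizon of uncertainty h, a worst-case search supplies one (σ, h) point of the robustness curve. The quadratic "stress" surrogate below is hypothetical, standing in for the finite element simulation, and the corner search is only valid because that surrogate is monotone in each variable; the chapter uses a global optimization search instead.

```python
from itertools import product

# Nominal values of (E, G, nu, rho, U_Contact, F_OS) from Table 10.1.
NOMINAL = (2.0, 0.73, 0.37, 1.20e3, 0.50, 1.0)

def peak_stress(theta):
    # Hypothetical surrogate for the finite element model: stress scales
    # with contact displacement (theta[4]) and overshoot factor (theta[5]),
    # returning 28.07 MPa at the nominal settings.
    u, f_os = theta[4], theta[5]
    return 28.07 * (u / 0.50) * f_os

def worst_case_stress(h):
    # Corner search over the fractional uncertainty space U(h); valid here
    # because the surrogate is monotone in each variable.
    corners = product(*[(t * (1 - h), t * (1 + h)) for t in NOMINAL])
    return max(peak_stress(c) for c in corners)

# One (sigma, h) point per horizon of uncertainty, as in Fig. 10.8.
curve = [(worst_case_stress(h), h) for h in (0.0, 0.1, 0.2, 0.3, 0.4)]
```

Plotting `curve` with stress on the horizontal axis and h on the vertical axis reproduces the staircase construction of the robustness function.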
The robustness function is obtained by continuing the process suggested in Fig. 10.8, where more and more points are added by considering greater and greater horizons of uncertainty. For any performance limit σ_{Critical} on the horizontal axis, the corresponding point on the vertical axis is the greatest tolerable uncertainty, namely, the robustness \( \hat{h}(\sigma_{\text{Critical}}) \). The positive slope of the robustness function indicates a trade-off between performance requirement and robustness, as we now explain. Suppose that the analyst is willing to allow the peak stress to reach 55 MPa, which provides no safety margin (f_{S} = 0). From Fig. 10.9, it can be observed that the greatest tolerable horizon of uncertainty is \( \hat{h} \) = 0.40. (Note that this value accounts for truncation error, which effectively “shifts” the robustness function by 4.76 MPa to the right.) This means that the design satisfies the performance requirement (10.2) as long as none of the model variables θ = (E; G; ν; ρ; U_{Contact}; F_{OS}) deviates from its nominal value by more than 40%. Said differently, the design is guaranteed to satisfy the critical stress criterion as long as real-world conditions do not deviate from the nominal settings of the simulation by more than 40%, even accounting for truncation effects.
Suppose, however, that the analyst wishes to be more cautious—for instance, by requiring that the peak stress not exceed 45 MPa. Now the safety factor is f_{S} = 18%. From Fig. 10.9, not exceeding this peak stress is satisfied if model variables do not deviate from their nominal values by more than approximately 10%. In other words, the more demanding requirement (f_{S} = 18%) is less robust to uncertainty (\( \hat{h} \) = 0.10) than the less demanding requirement (f_{S} = 0 and \( \hat{h} \) = 0.40). More generally, the positive slope of the robustness function expresses the trade-off between greater caution in the mechanical performance (increasing f_{S}) and greater robustness against uncertainty in the modeling assumptions (increasing \( \hat{h} \)).
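Reading the robustness function at a chosen critical stress, as done twice above, can be sketched as a lookup over tabulated (worst-case stress, h) points. The curve below is illustrative, chosen only to reproduce the two readings in the text (h-hat = 0.10 at 45 MPa and h-hat = 0.40 at 55 MPa); it is not the actual curve of Fig. 10.9.

```python
# Illustrative (worst-case stress in MPa, horizon of uncertainty h) points.
curve = [(34.0, 0.0), (45.0, 0.10), (50.0, 0.25), (55.0, 0.40)]

def robustness_at(sigma_critical, curve):
    # Greatest tabulated h whose worst-case stress stays at or below the
    # critical stress (a conservative, piecewise-constant reading).
    feasible = [h for sigma, h in curve if sigma <= sigma_critical]
    return max(feasible) if feasible else 0.0
```

The positive slope of the trade-off is visible directly: relaxing σ_Critical from 45 MPa to 55 MPa raises the tolerable uncertainty from 0.10 to 0.40.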
The choices offered to the decision-maker are clear. If it can be shown that real-world conditions cannot possibly deviate from those assumed in the simulation model by more than 40%, then the nominal design is guaranteed to be requirement-compliant. Otherwise, an alternative design offering greater robustness should be pursued. This second path is addressed next.
10.5 Assessment of Two Competing Designs
Figure 10.11 illustrates that, when no modeling uncertainty is considered, the variant design is clearly predicted to perform better. This is observed on the horizontal axis (at \( \hat{h} \) = 0), where the peak stress of the variant geometry (σ_{Max} = 16 MPa) is less than half the value for the nominal design (σ_{Max} = 34 MPa). This result is consistent with the fact that, in the variant design, the applied force is spread over a larger surface area, which reduces the stresses generated in the latch.
Suppose that the analyst requires a safety factor of f_{S} = 18%, implying that the stress must be no greater than 45 MPa. As observed in Fig. 10.9 (reproduced as the blue curve of Fig. 10.11), the nominal geometry tolerates up to a 10% change in any or all of the model variables without violating the performance requirement. The larger geometry, however, can tolerate up to a 100% change without violating the same requirement. In other words, the variant design is more robust (\( \hat{h} \) = 1.0 instead of \( \hat{h} \) = 0.10 nominally) at this level of stress (σ_{Critical} = 45 MPa).
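The comparison logic is simple enough to state as code: at the required critical stress, prefer the design whose robustness h-hat is larger. The values below are the readings reported in the text for Figs. 10.9 and 10.11; the helper is a hypothetical illustration of the decision rule, not a function from the chapter.

```python
# Robustness h-hat at sigma_Critical = 45 MPa, as read from the figures.
H_HAT = {"nominal": 0.10, "variant": 1.00}

def preferred(h_hat_by_design):
    # Prefer the design that tolerates the most uncertainty while still
    # meeting the common performance requirement.
    return max(h_hat_by_design, key=h_hat_by_design.get)

print(preferred(H_HAT))  # variant
```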
The slopes of the two robustness functions can also be compared in Fig. 10.11. The slope represents the trade-off between robustness and performance requirement. A steep slope implies a low cost of robustness: robustness can be increased substantially by a relatively small relaxation of the required performance. The figure suggests that the cost of robustness for the nominal design (blue curve) is higher than for the variant geometry (green curve). Selecting the 20% larger design is undoubtedly the better decision, given that it delivers better predicted performance (a lower predicted value of σ_{Max}) and is less vulnerable to potentially incorrect modeling assumptions. In fact, the variant design offers an 18% safety margin (f_{S} = 18%) even if model variables deviate from their nominal values by up to 100% (\( \hat{h} \) = 1.0). The only drawback of the variant design is its ≈44% larger volume, which increases manufacturing costs relative to those of the nominal design.
10.6 Concluding Remarks
This chapter has presented an application of simulation-based IG robust design. The need for robustness stems from recognizing that an effective design should guarantee performance even if real-world conditions deviate from modeling and analysis assumptions. Info-gap robustness is versatile, easy to implement, and does not require assuming information that is not available.
IG robust design is applied to the analysis of a mechanical latch for a consumer electronics product to provide a simple, mechanical illustration. The performance criterion is the peak stress at the base of the latch resulting from displacements applied to open or close the compartment. The geometry, simulation model, and loading scenario are simplified for clarity. For example, round corners that mitigate stress concentrations are replaced with straight edges. Likewise, the severe impact loads experienced when dropping the device on a hard surface are not considered. The description of the analysis, however, is comprehensive and can easily be translated to other, more realistic applications.
The robustness of the nominal design is studied to assess the extent to which performance is immune to sources of uncertainty in the problem. This uncertainty expresses the fact that real-world conditions could differ from what is assumed in the simulation, without postulating either probability distributions or knowledge of worst cases. One example of how real-world conditions can vary from modeling assumptions is the variability of material properties. Uncertainty also originates from assumptions embodied in the simulation model that could be incorrect. One example is the dynamic overshoot factor used to mitigate the ignorance of how materials behave when subjected to fast-transient loads. The analysis of the mechanical latch pursued in this chapter indicates that the design can tolerate up to 40% uncertainty without exceeding the peak-stress performance requirement.
The performance of an alternative design, which spreads the contact force over a larger surface area, is assessed for its ability to provide more robustness than the nominal design. The analysis indicates that the variant latch is predicted to perform better, while its robustness to modeling uncertainty is greater at all performance requirements. The variant geometry features, however, a 44% larger volume, which would imply higher manufacturing costs. The discussion presented in this chapter illustrates how an analysis of robustness helps the decision-maker answer the question of whether an improvement in performance, or the ability to withstand more uncertainty about real-world conditions, warrants the cost associated with a design change.
The simplicity of the example discussed here should not obscure the fact that searching for a robust design might come at significant computational expense if the simulation is expensive or the uncertainty space is high-dimensional. This is nevertheless what automation is for and what software is good at. Developing the technology to perform large-scale explorations frees the analyst to apply his/her creativity to more challenging aspects of the design.
References
Bamber, J. L., Riva, R. E. M., Vermeersen, B. L. A., & Le Brocq, A. M. (2009). Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet. Science, 324(5929), 901–903.
Ben-Haim, Y., Dacso, C. C., Carrasco, J., & Rajan, N. (2009). Heterogeneous uncertainties in cholesterol management. International Journal of Approximate Reasoning, 50, 1046–1065.
Ben-Haim, Y. (2006). Info-gap decision theory: Decisions under severe uncertainty (2nd ed.). Oxford: Academic Press.
Ben-Haim, Y. (2010). Info-gap economics: An operational introduction (pp. 87–95). Palgrave Macmillan.
Ferson, S., & Tucker, W. T. (2008). Probability boxes as info-gap models. In Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS 2008), article 4531314.
Hall, J. W., Lempert, R. J., Keller, K., Hackbarth, A., Mijere, C., & McInerney, D. J. (2012). Robust climate policies under uncertainty: A comparison of robust decision making and info-gap methods. Risk Analysis, 32(10), 1657–1672.
Hemez, F. M., & Kamm, J. R. (2008). A brief overview of the state-of-the-practice and current challenges of solution verification. In Computational methods in transport: Verification and validation (pp. 229–250). Springer.
Malone, R. C., Smith, R. D., Maltrud, M. E., & Hecht, M. W. (2003). Eddy-resolving ocean modeling. Los Alamos Science, 28, 223–231.
Martins, J. R. R. A., & Lambe, A. B. (2013). Multidisciplinary design optimization: A survey of architectures. AIAA Journal, 51, 2049–2075.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44, 335–341.
Mollineaux, M. G., Van Buren, K. L., Hemez, F. M., & Atamturktur, S. (2013). Simulating the dynamics of wind turbine blades: Part I, model development and verification. Wind Energy, 16, 694–710.
Newmark, N. M. (1959). A method of computation for structural dynamics. ASCE Journal of Engineering Mechanics, 85, 67–94.
Oden, J. T., Belytschko, T., Fish, J., Hughes, T. J. R., Johnson, C., Keyes, D., Laub, A., Petzold, L., Srolovitz, D., & Yip, S. (2006). Revolutionizing engineering science through simulation. Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science.
Saltelli, A., Chan, K., & Scott, M. (2000). Sensitivity analysis. Wiley.
Smith, R., & Gent, P. (2002). Reference manual for the Parallel Ocean Program (POP), ocean component of the Community Climate System Model. National Center for Atmospheric Research, Boulder, CO. (Also Technical Report LA-UR-02-2484, Los Alamos National Laboratory, Los Alamos, NM.)
United Nations. (2014). United Nations Department of Economic and Social Affairs (DESA) continent population 1950 to 2100. Wikimedia Commons, The Free Media Repository. Retrieved October 27, 2017, from commons.wikimedia.org/w/index.php?title=File:UN_DESA_continent_population_1950_to_2100.svg&oldid=130209070.
van Buren, K. L., Mollineaux, M. G., Hemez, F. M., & Atamturktur, S. (2013). Simulating the dynamics of wind turbine blades: Part II, model validation and uncertainty quantification. Wind Energy, 16, 741–758.
Wu, Y.-T. (1994). Computational method for efficient structural reliability and reliability sensitivity analysis. AIAA Journal, 32, 1717–1723.
Zienkiewicz, O. C., & Taylor, R. L. (2000). The finite element method, volume 1: The basis. Butterworth-Heinemann.
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.