1 Introduction

Reliability-based design optimization (RBDO) deals with obtaining optimal designs characterized by a low probability of failure. In RBDO problems, there is a trade-off between achieving higher reliability and lowering cost [1]. Several methods have been developed to estimate the reliability level, such as the reliability index approach (RIA), the performance measure approach (PMA), the sequential optimization and reliability assessment (SORA) method, and various improvements of SORA. The main drawback of the RIA (Lee and Kwak 1987) [2] lies in the numerical effort required to solve the system: numerical applications show that classical RBDO converges slowly, or even fails to converge, owing to difficulties in computing the reliability constraints. To overcome these difficulties, Tu and Choi (1999) [3] introduced the PMA, in which the probability measure is converted into a performance measure by solving an inverse reliability problem, i.e., searching for the point of minimum performance on the target reliability surface [4]. SORA, proposed by Du and Chen [5], was developed to improve the efficiency of probabilistic optimization; it employs a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. Du et al. (2008) proposed a sequential optimization and reliability assessment methodology for multidisciplinary design optimization (MDO) [6]. Cho et al. (2010) proposed an RBDO method that enhances SORA with a family of method of moving asymptotes (MMA) approximations [7]. Cho et al. (2011) proposed an RBDO method that enhances SORA with convex linearization [8]. Li et al. (2013) proposed a fuzzy random uncertainty-based MDO–SORA, called the FRMDO–SORA approach [9]. To further improve computational efficiency and extend the applicability of the current SORA method, Huang et al. (2013) proposed enhanced SORA (ESORA), which considers constant and varying variances of random design variables while keeping the sequential framework [10]. Huang et al. (2016) proposed an incremental shifting vector approach within the SORA framework: each step performs a shifting vector calculation followed by a deterministic design optimization, eventually converging to the optimal solution, and an incremental shifting strategy ensures stable convergence in the iteration process [11]. Yi et al. (2016) proposed an approximate SORA in which an approximate most probable target point (MPTP) and an approximate probabilistic performance measure (PPM) are adopted in the reliability assessment [12].

The objective of this paper is to further improve the computational efficiency of the current SORA by avoiding the reliability assessment of already satisfied probabilistic constraints in each cycle until all probabilistic constraints are satisfied. The accuracy and efficiency of the proposed method are compared with those of the SORA method on several illustrative numerical examples. The paper is organized as follows. In Sect. 2, the SORA method is briefly reviewed as an RBDO method, along with the PMA. The augmented SORA (ASORA) is proposed in Sect. 3. Several RBDO, RBMDO, and multi-objective examples are used to illustrate the efficiency of the proposed method in Sects. 4 and 5, followed by the conclusions in Sect. 6.

2 Reliability Based Design Optimization

A deterministic optimization problem is ordinarily formulated as follows:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\quad \,\,\,f({\mathbf{d}},{\mathbf{p}})} \hfill \\ {{\text{Subject}}\,{\text{to}}\,\,\,\,g_{j} ({\mathbf{d}} ,{\mathbf{p}}) \le 0,\,\,j = 1, \ldots \,,n} \hfill \\ {\quad \quad \quad \quad \quad \,\,\,\,{\mathbf{d}}^{L} \le {\mathbf{d}} \le {\mathbf{d}}^{U} } \hfill \\ \end{array} $$
(1)

where \( f\left( . \right) \) is the objective function, \( {\mathbf{d}} \) is the vector of design variables, \( {\mathbf{p}} \) is the vector of design parameters, \( g_{j} \left( {\mathbf{d}},{\mathbf{p}} \right),\,j = 1\sim n \) are the constraint functions, and \( {\mathbf{d}}^{L} \) and \( {\mathbf{d}}^{U} \) are the lower and upper bounds of the design variable vector, respectively. Deterministic optimization cannot directly account for the uncertainty in the problem. RBDO is one of the most practical approaches for considering uncertainty in a design problem and achieving reliable optimal results. The RBDO problem is typically formulated as follows:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\;f({\mathbf{d}},{\mathbf{X}},{\mathbf{P}})} \hfill \\ {{\text{Subject}}\,{\text{to}}\;{ \Pr }[G_{j} ({\mathbf{d}},{\mathbf{X}},{\mathbf{P}}) \le 0] \ge\Phi (\beta_{j}^{t} ),\,\,\,j = 1,\, \ldots ,n} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\mathbf{d}}^{L} \le {\mathbf{d}} \le {\mathbf{d}}^{U} } \hfill \\ \end{array} $$
(2)

where \( f\left( . \right) \) is the objective function, \( {\mathbf{d}} \) is the vector of deterministic design variables, \( {\mathbf{X}} \) is the vector of random design variables, \( {\mathbf{P}} \) is the vector of random design parameters, \( G_{j} ({\mathbf{d}},{\mathbf{X}},{\mathbf{P}}),\,j = 1\,{ \sim }\,n \) are the probabilistic constraint (performance) functions, \( {\mathbf{d}}^{L} \) and \( {\mathbf{d}}^{U} \) are the lower and upper bounds of the design variable vector, \( \Phi (.) \) is the cumulative distribution function (CDF) of the standard normal variable, and \( \beta_{j}^{t} \) denotes the target reliability index of the \( j{\text{ - th}} \) probabilistic constraint. PMA and RIA, the most common methods for solving the RBDO problem, possess a double-loop structure; SORA is nonetheless more practical, since it converts the double-loop structure into a single loop (a series of cycles), which improves efficiency.
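To make the probabilistic constraint in Eq. (2) concrete, the reliability \( { \Pr }[G \le 0] \) of a simple limit state can be estimated by Monte Carlo sampling and compared against the target \( \Phi (\beta^{t} ) \). The limit state and all numbers below are illustrative assumptions, not taken from this paper's examples:

```python
import math
import random

def phi(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative limit state: G(X) = 3 - X1 - X2, safe when G <= 0,
# with X1, X2 ~ N(2.0, 0.5) independent (assumed values).
mu, sigma = 2.0, 0.5

def G(x1, x2):
    return 3.0 - x1 - x2

# Exact reliability: G ~ N(3 - 2*mu, sigma*sqrt(2)), so
# Pr[G <= 0] = Phi((2*mu - 3) / (sigma*sqrt(2))).
p_exact = phi((2 * mu - 3.0) / (sigma * math.sqrt(2.0)))

# Crude Monte Carlo estimate of the same probability.
random.seed(0)
n = 200_000
hits = sum(1 for _ in range(n)
           if G(random.gauss(mu, sigma), random.gauss(mu, sigma)) <= 0.0)
p_mc = hits / n

beta_t = 1.0  # illustrative target reliability index
print(p_mc, p_exact, p_mc >= phi(beta_t))  # constraint of Eq. (2) holds here
```

Direct sampling like this is exactly what double-loop RBDO tries to avoid, which motivates the MPP-based approximations reviewed next.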

2.1 Performance Measure Approach (PMA)

The PMA method obtains reliable results by searching for the extreme (worst-case) value of the performance function under the condition that the target reliability index is satisfied. RBDO based on PMA can be formulated as follows:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,\,\,\,f({\mathbf{d}},{\mathbf{p}})} \hfill \\ {{\text{Subject}}\,{\text{to}}\,G_{mj} \le 0,\,\,\,j = 1, \ldots ,n} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\mathbf{d}}^{L} \le {\mathbf{d}} \le {\mathbf{d}}^{U} } \hfill \\ \end{array} $$
(3)

where \( G_{mj} \) is the maximum of the performance function. In PMA, the sub-optimization problem for the reliability assessment is formulated as follows in U-space:

$$ \begin{aligned} & {\text{Minimize}}\,\,\,\,G_{mj} = { \hbox{max} }G_{j} ({\mathbf{U}}) \\ & {\text{Subject}}\,{\text{to}}\,\,\,\left\| {\mathbf{U}} \right\| = \beta_{j}^{t} \\ \end{aligned} $$
(4)

The reliability analysis in the PMA approach is the inverse of the reliability analysis in the RIA approach; therefore, the point obtained from PMA is called the inverse most probable point (IMPP). The optimization problem in Eq. (4) can be solved by general optimization algorithms or by the advanced mean value (AMV), conjugate mean value (CMV), or hybrid mean value (HMV) methods [13]. The AMV method, however, is particularly well suited to PMA.
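A minimal sketch of the AMV iteration for the sub-problem of Eq. (4): each step projects the point onto the target sphere \( \left\| {\mathbf{U}} \right\| = \beta^{t} \) along the current gradient direction. The performance function below is illustrative (not from the paper), and the update direction assumes the maximization convention of Eq. (4):

```python
import math

beta_t = 3.0  # target reliability index

# Illustrative performance function in U-space and its analytic gradient.
def G(u1, u2):
    return u1 + 0.5 * u2 + 0.05 * u1 * u2

def grad_G(u1, u2):
    return (1.0 + 0.05 * u2, 0.5 + 0.05 * u1)

# AMV iteration: u^{k+1} = beta_t * grad G(u^k) / ||grad G(u^k)||,
# which seeks the maximizer of G on the sphere ||u|| = beta_t.
u = (0.0, 0.0)
for _ in range(100):
    g1, g2 = grad_G(*u)
    norm = math.hypot(g1, g2)
    u_new = (beta_t * g1 / norm, beta_t * g2 / norm)
    if math.hypot(u_new[0] - u[0], u_new[1] - u[1]) < 1e-12:
        u = u_new
        break
    u = u_new

G_m = G(*u)  # probabilistic performance measure at the IMPP
print(u, G_m)
```

At convergence the iterate lies on the β-sphere and is parallel to the gradient there, which is the first-order optimality condition of Eq. (4).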

2.2 Sequential Optimization Reliability Assessment (SORA)

The SORA method uses a single-loop strategy in which a series of cycles of optimization and reliability assessment is employed. In SORA, optimization and reliability assessment are decoupled within each cycle: no reliability assessment is required inside the optimization, and the reliability assessment is conducted after the optimization. The key concept of SORA is to shift the boundaries of the violated deterministic constraints toward the feasible region, while the boundaries of satisfied constraints remain immobile. The reliability assessment is carried out using the MPP and the optimal point of the previous cycle, yielding the new MPP, from which the shift vector is computed according to Eq. (5). Note that each probabilistic constraint has its own shift vector because each has its own MPP; when the new MPP is sufficiently close to the mean values of the optimal point obtained in the previous cycle, the change of the shift vector tends to zero, which leaves the constraint boundary immobile. This process is repeated cycle by cycle until all constraints are satisfied.

$$ {\mathbf{s}}_{j}^{k} = {\varvec{\upmu}}_{X}^{k} - {\mathbf{X}}_{{MPP_{j} }}^{k} $$
(5)

where \( {\varvec{\upmu}}_{X}^{k} \) is the vector of mean values of random variables, and \( {\mathbf{X}}_{{MPP_{j} }}^{k} \) is the corresponding MPP of the \( j{\text{ - th}} \) probabilistic constraint obtained in each cycle. With the SORA method, the equivalent deterministic optimization of the original RBDO problem in cycle k is formulated as follows:

$$ \begin{array}{*{20}l} {\mathop {\text{Minimize}}\limits_{{{\mathbf{d}}^{k} ,\,{\varvec{\upmu}}_{{\mathbf{X}}}^{k} }} \,\,\,\,f({\mathbf{d}}^{k} ,\,{\varvec{\upmu}}_{{\mathbf{X}}}^{k} ,\,{\varvec{\upmu}}_{{\mathbf{P}}} )} \hfill \\ {{\text{Subject}}\,{\text{to}}\,\,\,G_{j} ({\mathbf{d}}^{k} ,\,{\varvec{\upmu}}_{{\mathbf{X}}}^{k} - {\mathbf{s}}_{j}^{k} ,\,{\mathbf{P}}_{{_{{{\mathbf{MPP}}_{j} }} }}^{k} ) \le 0,\,\,j = 1, \ldots ,n} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\mathbf{d}}^{L} \le {\mathbf{d}}^{k} \le {\mathbf{d}}^{U} \,,\,\,\,{\varvec{\upmu}}_{{\mathbf{X}}}^{L} \le {\varvec{\upmu}}_{{\mathbf{X}}}^{k} \le {\varvec{\upmu}}_{{\mathbf{X}}}^{U} } \hfill \\ \end{array} $$
(6)

where \( {\mathbf{d}}^{k} \) is the vector of deterministic design variables, \( {\varvec{\upmu}}_{X}^{k} \) is the vector of mean values of the random design variables, and \( {\varvec{\upmu}}_{{\mathbf{P}}} \) is the vector of mean values of the random design parameters. A flowchart of the SORA method is depicted in Fig. 1.

Fig. 1.
figure 1

Flowchart of SORA method.
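The shift-vector mechanism of Eqs. (5)–(6) can be illustrated on a one-dimensional toy problem: minimize \( \mu_{X} \) subject to \( { \Pr }[X \ge c] \ge \Phi (\beta^{t} ) \) with \( X{ \sim }N(\mu_{X} ,\sigma ) \). Because the limit state is linear, the MPP is \( \mu_{X} - \beta^{t} \sigma \), the shift converges to \( \beta^{t} \sigma \), and SORA reaches the known optimum \( \mu^{*} = c + \beta^{t} \sigma \) in two cycles. All numbers and the safe-when-nonnegative convention are illustrative assumptions:

```python
# Toy SORA cycle for: minimize mu subject to Pr[X >= c] >= Phi(beta_t),
# X ~ N(mu, sigma). Safe state: G(X) = X - c >= 0 (illustrative convention).
c, sigma, beta_t = 5.0, 0.3, 3.0

s = 0.0   # shift vector (a scalar in 1-D)
mu = None
for cycle in range(10):
    # Deterministic optimization with the shifted constraint:
    # minimize mu s.t. G(mu - s) >= 0  =>  mu = c + s  (closed form in 1-D).
    mu_new = c + s
    # Reliability assessment: for a linear limit state the MPP is
    # X_MPP = mu - beta_t * sigma, so Eq. (5) gives s = beta_t * sigma.
    x_mpp = mu_new - beta_t * sigma
    s_new = mu_new - x_mpp
    if mu is not None and abs(mu_new - mu) < 1e-12:
        mu = mu_new
        break
    mu, s = mu_new, s_new

print(cycle, mu)  # mu converges to c + beta_t * sigma
```

The first cycle solves the unshifted deterministic problem, the reliability assessment produces the shift, and the second cycle already lands on the reliable optimum; a nonlinear limit state would simply take more cycles.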

3 Augmented Sequential Optimization Reliability Assessment (ASORA)

3.1 Proposed Method

As mentioned earlier, the SORA method employs a series of cycles of reliability assessment and deterministic optimization to decouple the double-loop structure. The key concept of SORA is to shift the boundaries of the violated deterministic constraints toward the feasible region, while the boundaries of satisfied constraints remain immobile.

In SORA, as long as not all constraints are satisfied, the reliability analysis is carried out for every constraint. However, the shift vectors of the already satisfied probabilistic constraints change by an amount that is either zero or close to zero, which leaves the boundaries of these constraints essentially immobile. The reliability analysis of satisfied constraints nevertheless inflicts a significant computational cost. Moreover, the changes of the shift vectors of satisfied constraints cannot be expected to be exactly zero, because the reliability analysis depends not only on the MPP of the previous cycle but also on the optimal point obtained in each cycle, and that optimal point is in turn affected by the shift vectors of all constraints that are not yet satisfied. In ASORA, to avoid the impact of such factors and to reduce the number of performance function calls for satisfied probabilistic constraints, Eq. (7) (taken from [11]) is evaluated in each cycle for every constraint to determine whether the probabilistic constraint is still active. If a probabilistic constraint is inactive, i.e., satisfied, the change of the corresponding shift vector is set to zero according to Eq. (8), and the MPP of the next cycle is set equal to the MPP of the current cycle according to Eq. (9). This brings about a significant reduction in the number of performance function calls in the reliability assessment of inactive probabilistic constraints. If, on the other hand, a probabilistic constraint is still active, the corresponding shift vector is computed as in SORA by Eq. (5). In short, in ASORA no reliability analysis is required for the satisfied probabilistic constraints in any cycle until all probabilistic constraints are satisfied.

$$ G\left( { - \beta_{t} \left( {\frac{{\nabla G\left( {\varvec{u}_{MPP}^{k - 1} } \right)}}{{\left\| {\nabla G\left( {\varvec{u}_{MPP}^{k - 1} } \right)} \right\|}}} \right)} \right) \ge 0 $$
(7)

where \( \varvec{u}_{MPP}^{k - 1} \) is the MPP in U-space of the previous cycle, \( \nabla G\left( . \right) \) is the gradient vector of the performance function evaluated at \( \varvec{u}_{MPP}^{k - 1} \), and k denotes the cycle number.

$$ \begin{aligned} & {\mathbf{s}}_{j}^{k} = {\mathbf{s}}_{j}^{k - 1} + \Delta {\mathbf{s}} \\ & \Delta {\mathbf{s}} = 0 \\ \end{aligned} $$
(8)
$$ \begin{aligned} {\mathbf{u}}_{{{\mathbf{MPP}}_{j} }}^{k} = {\mathbf{u}}_{{{\mathbf{MPP}}_{j} }}^{k - 1} \hfill \\ {\mathbf{X}}_{{{\mathbf{MPP}}_{j} }}^{k} = {\mathbf{X}}_{{{\mathbf{MPP}}_{j} }}^{k - 1} \hfill \\ \end{aligned} $$
(9)

where \( \Delta {\mathbf{s}} \) is the change of the shift vector, which is zero for inactive performance functions. A flowchart of the ASORA method is depicted in Fig. 2.

Fig. 2.
figure 2

Flowchart of ASORA method.
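The activity check of Eq. (7) can be transcribed directly: the performance function is evaluated at the point \( - \beta^{t} \nabla G/\left\| {\nabla G} \right\| \) built from the previous cycle's MPP gradient, and a nonnegative value marks the constraint as inactive. The performance functions below are illustrative, and the sign convention follows Eq. (7), where a nonnegative value means the constraint is satisfied:

```python
import math

def is_inactive(G, grad_u, beta_t):
    """Activity check of Eq. (7): evaluate G at -beta_t * grad/||grad||.

    grad_u is the gradient of G at the previous cycle's MPP (in U-space).
    Returns True when the probabilistic constraint is inactive (satisfied).
    """
    norm = math.hypot(*grad_u)
    u_check = tuple(-beta_t * g / norm for g in grad_u)
    return G(*u_check) >= 0.0

beta_t = 3.0

# Illustrative performance functions in U-space (safe when G >= 0).
def G_far(u1, u2):    # large margin: stays nonnegative on the beta-sphere
    return 25.0 - u1 - u2 + 0.1 * u1 * u2

def G_tight(u1, u2):  # small margin: goes negative on the beta-sphere
    return 2.0 - u1 - u2

grad_far = (-1.0, -1.0)    # gradient of G_far at u = (0, 0)
grad_tight = (-1.0, -1.0)  # gradient of G_tight at u = (0, 0)

print(is_inactive(G_far, grad_far, beta_t))    # True: skip its MPP search
print(is_inactive(G_tight, grad_tight, beta_t))
```

Only constraints failing this check go through the (expensive) inverse reliability analysis; the others keep their previous shift vector and MPP per Eqs. (8)–(9).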

The ASORA method proposed in this study consists of the steps given below:

  1. Set \( {\mathbf{s}} = 0,\,{\mathbf{u}}_{{{\mathbf{MPP}}_{j} }}^{k = 0} = 0 \). Implement a deterministic optimization of the design problem, choose the optimal design as the initial design point \( {\mathbf{d}}^{0} \), and set \( {\mathbf{X}}_{{{\mathbf{MPP}}_{j} }}^{k = 0} = {\varvec{\upmu}}_{x}^{k = 0} \).

  2. Transform the MPP from the original space into the standard normal space. Evaluate Eq. (7) based on the MPP and the optimal point of the previous cycle.

     • If Eq. (7) returns a nonnegative value, the corresponding probabilistic constraint is inactive; hence \( \Delta {\mathbf{s}} \) is zero, the shift vector is computed by Eq. (8), and the MPP is kept according to Eq. (9).

     • If Eq. (7) returns a negative value, the corresponding probabilistic constraint is active, and its shift vector is obtained from the new MPP computed by performing the reliability assessment for this constraint.

  3. Perform the deterministic optimization of Eq. (6) and obtain the optimal point.

  4. Check convergence: if convergence is achieved, stop the iteration; the result of the optimization is the reliable optimal design. Otherwise, set k = k + 1 and return to step 2.
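The steps above can be sketched end-to-end on a two-constraint toy problem with linear limit states and one design variable: minimize \( \mu \) subject to \( { \Pr }[X \ge c_{j} ] \ge \Phi (\beta^{t} ) \) for \( c = (5,\,2) \). The second constraint is always inactive, so ASORA skips its reliability assessment, while plain SORA re-analyzes it every cycle. The problem, the closed-form inner solutions, and the safe-when-nonnegative convention are all illustrative assumptions:

```python
# Toy comparison of SORA vs. ASORA MPP-search counts on:
#   minimize mu  s.t.  Pr[X >= c_j] >= Phi(beta_t),  X ~ N(mu, sigma),
# limit states G_j(X) = X - c_j (safe when G_j >= 0, illustrative).
sigma, beta_t = 0.3, 3.0
cs = [5.0, 2.0]

def run(skip_inactive):
    s = [0.0 for _ in cs]    # shift vectors, one per constraint
    mu, mpp_evals = None, 0
    for _ in range(20):
        # Step 3 (closed form): minimize mu s.t. mu - s_j >= c_j for all j.
        mu_new = max(c + sj for c, sj in zip(cs, s))
        # Step 2: activity check of Eq. (7). For G_j = X - c_j the U-space
        # gradient is the constant sigma, so the check point is u = -beta_t
        # and the check value is G_j(mu - beta_t * sigma).
        for j, c in enumerate(cs):
            inactive = (mu_new - beta_t * sigma - c) >= 0.0
            if skip_inactive and inactive:
                continue             # Eqs. (8)-(9): keep s_j and the old MPP
            mpp_evals += 1           # one (here trivial) MPP search
            x_mpp = mu_new - beta_t * sigma
            s[j] = mu_new - x_mpp    # Eq. (5)
        # Step 4: convergence check on the design.
        if mu is not None and abs(mu_new - mu) < 1e-12:
            mu = mu_new
            break
        mu = mu_new
    return mu, mpp_evals

mu_sora, evals_sora = run(skip_inactive=False)
mu_asora, evals_asora = run(skip_inactive=True)
print(mu_sora, mu_asora, evals_sora, evals_asora)
```

Both variants reach the same reliable optimum \( c_{1} + \beta^{t} \sigma \), but ASORA spends MPP searches only on the active first constraint, mirroring the saving claimed for the full method.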

4 Numerical Example

4.1 Single-Disciplinary RBDO Problem

4.1.1 Example 1: Multiple Limit States

A classical problem in the RBDO literature consists in finding the optimal means of the random variables [14]. The mathematical model is the following:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,\,\,\,f({\mathbf{d}}) = d_{1} + d_{2} } \hfill \\ {{\text{subject}}\,{\text{to}}\,\,\,\,\,\Pr [G_{i} ({\mathbf{\rm X}}) > 0] \le\Phi ( - \beta_{i}^{t} ),\,\,i = 1, \ldots ,3} \hfill \\ {{\text{where}},\,\,\,\,\,\,G_{1} ({\mathbf{X}}) = \frac{{X_{1}^{2} X_{2} }}{20} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{2} ({\mathbf{X}}) = \frac{{(X_{1} + X_{2} - 5)^{2} }}{30} + \frac{{(X_{1} - X_{2} - 12)^{2} }}{120} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{3} ({\mathbf{X}}) = \frac{80}{{(X_{1}^{2} + 8X_{2} + 5)}} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\beta_{1}^{t} = \beta_{2}^{t} = \beta_{3}^{t} = 3} \hfill \\ {\,\,\,\,\,\,\,\,0 \le d_{i} \le 10\,\,{\text{for}}\,\,i = 1,2,\,\,\,\,\,X_{i} { \sim }\,N(d_{i} ,0.3)\,\,{\text{for}}\,\,i = 1,2} \hfill \\ {\,\,\,\,\,\,\,\,{\mathbf{d}}^{(0)} = [3.1107,\,2.0609]} \hfill \\ \end{array} $$
(10)

It has two random design variables \( X_{1} \) and \( X_{2} \), with mean values \( \mu_{1} ,\,\mu_{2} \) and standard deviations \( \sigma_{1} ,\,\sigma_{2} \), and three probabilistic constraints whose target reliability indices are set to 3. The initial design point is taken as the result of the deterministic optimization. All random variables are statistically independent and normally distributed.
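As a quick sanity check of Eq. (10), the performance functions can be evaluated at the stated initial (deterministic-optimum) point \( {\mathbf{d}}^{(0)} = (3.1107,\,2.0609) \): the first two lie essentially on their limit-state boundaries, i.e., \( G_{1} \approx G_{2} \approx 0 \), which is what makes them the active constraints, while \( G_{3} \) is well off its boundary:

```python
# Performance functions of Example 1 (Eq. (10)).
def g1(x1, x2):
    return x1 ** 2 * x2 / 20.0 - 1.0

def g2(x1, x2):
    return (x1 + x2 - 5.0) ** 2 / 30.0 + (x1 - x2 - 12.0) ** 2 / 120.0 - 1.0

def g3(x1, x2):
    return 80.0 / (x1 ** 2 + 8.0 * x2 + 5.0) - 1.0

d0 = (3.1107, 2.0609)  # deterministic optimum used as the initial point

# G1 and G2 are (numerically) active at d0; G3 is not on its boundary.
print(g1(*d0), g2(*d0), g3(*d0))
```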

The optimization results are provided in Table 1; the optimal results are almost equal to those in reference [4], and all the probabilistic constraints satisfy the corresponding target reliability at the optimum. From Fig. 3, it can be seen that the change of the shift vector of the third probabilistic constraint in ASORA is exactly zero, whereas in SORA the shift vector of the third probabilistic constraint is nonzero. The numbers of objective function and performance function evaluations for ASORA are, however, larger than for SORA. It can therefore be concluded that, for such a simple RBDO problem, ASORA offers no reduction in computational cost over SORA.

Table 1. Summary of optimal results for Example 1
Fig. 3.
figure 3

Shifted vectors for Example 1.

4.1.2 Example 2: The Hock and Schittkowski Problem

The second example [15] has ten random variables and eight probabilistic constraints; the target reliability indices are set to 3. The mathematical model is the following:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,f({\mathbf{d}}) = d_{1}^{2} + d_{2}^{2} + d_{1} d_{2} - 14d_{1} - 16d_{2} + (d_{3} - 10)^{2} + 4(d_{4} - 5)^{2} + (d_{5} - 3)^{2} + 2(d_{6} - 1)^{2} } \hfill \\ {\quad \quad \quad \quad \quad \quad + 5d_{7}^{2} + 7(d_{8} - 11)^{2} + 2(d_{9} - 10)^{2} + (d_{10} - 7)^{2} + 45} \hfill \\ {{\text{subject}}\,{\text{to}}\,\Pr [G_{i} ({\mathbf{\rm X}}) > 0] \le\Phi ( - \beta_{i}^{t} ),\,\,i = 1, \ldots ,8} \hfill \\ {{\text{where}},\quad G_{1} ({\mathbf{\rm X}}) = \frac{{4X_{1} + 5X_{2} - 3X_{7} + 9X_{8} }}{105} - 1,} \hfill \\ {\quad \quad \quad \;\;G_{2} ({\mathbf{\rm X}}) = 10X_{1} - 8X_{2} - 17X_{7} + 2X_{8} } \hfill \\ {\quad \quad \quad \;\;G_{3} ({\mathbf{\rm X}}) = \frac{{ - 8X_{1} + 2X_{2} + 5X_{9} - 2X_{10} }}{12} - 1,} \hfill \\ {\quad \quad \quad \;\;G_{4} ({\mathbf{\rm X}}) = \frac{{3(X_{1} - 2)^{2} + 4(X_{2} - 3)^{2} + 2X_{3}^{2} - 7X_{4} }}{120} - 1,} \hfill \\ {\quad \quad \quad \;\;G_{5} ({\mathbf{\rm X}}) = \frac{{5X_{1}^{2} + 8X_{2} + (X_{3} - 6)^{2} - 2X_{4} }}{40} - 1,} \hfill \\ {\quad \quad \quad \;\;G_{6} ({\mathbf{\rm X}}) = \frac{{0.5(X_{1} - 8)^{2} + 2(X_{2} - 4)^{2} + 3X_{5}^{2} - X_{6} }}{30} - 1,} \hfill \\ {\quad \quad \quad \;\;G_{7} ({\mathbf{\rm X}}) = X_{1}^{2} + 2(X_{2} - 2)^{2} - 2X_{1} X_{2} + 14X_{5} - 6X_{6} ,} \hfill \\ {\quad \quad \quad \;\;G_{8} ({\mathbf{\rm X}}) = - 3X_{1} + 6X_{2} + 12(X_{9} - 8)^{2} - 7X_{10} ,} \hfill \\ {\quad \beta_{1}^{t} = \beta_{2}^{t} = \ldots = \beta_{8}^{t} = 3.0,} \hfill \\ {\quad d_{i} \ge 0\,\,for\,\,i = 1, \ldots ,10,} \hfill \\ {\quad X_{i} \,{ \sim }\,N(d_{i} ,0.02)\,\,for\,\,i = 1, \ldots ,10,} \hfill \\ {\quad {\mathbf{d}}^{(0)} = [2.17,\,2.36,\,8.77,\,5.10,\,0.99,\,1.43,\,1.32,\,9.83,\,8.28,\,8.38]} \hfill \\ \end{array} $$
(11)

All random variables are statistically independent and have normal distribution. The initial design point is selected as the result of deterministic optimization. The optimization results are summarized in Table 2.

Table 2. Summary of optimal results for Example 2

The optimization results obtained by the two methods are listed in Table 2. The optimal results are almost equal to those in reference [8]. For both methods, all the probabilistic constraints satisfy the corresponding target reliability at the optimum. The number of objective function evaluations for ASORA is almost equal to that for SORA, while the number of performance function evaluations for ASORA is much smaller, which reduces the computational cost. Unlike Example 1 (Sect. 4.1.1), this example shows that the ASORA method becomes more effective for more complex RBDO problems.

4.1.3 Example 3: Speed Reducer

The last numerical example [15] is the speed reducer shown in Fig. 4. It has 7 random variables and 11 probabilistic constraints. The objective is to minimize the weight; the constraints are related to the bending and contact stress, the longitudinal displacement, the stress of the shafts, and the geometry. The design variables are the gear width (x1), the teeth module (x2), the number of teeth in the pinion (x3), the distances between bearings (x4, x5), and the axis diameters (x6, x7). The design variables are assumed to be statistically independent normal random variables with a standard deviation of 0.005. The optimization problem is described as:

Fig. 4.
figure 4

Speed reducer.

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,\,\,\,f({\mathbf{d}}) = 0.7854\,d_{1} d_{2}^{2} \,(3.3333\,d_{3}^{2} + 14.9334\,d_{3} - 43.0934) - 1.508\,d_{1} (d_{6}^{2} + d_{7}^{2} )} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + 7.477(d_{6}^{3} + d_{7}^{3} ) + 0.7854(d_{4} d_{6}^{2} + d_{5} d_{7}^{2} )} \hfill \\ {{\text{subject}}\,{\text{to}}\,\,\,\,\,\Pr [G_{i} ({\mathbf{X}}) \ge 0] \le\Phi ( - \beta_{i} )\,\,\,\,\,\,\,\,\,i = 1, \ldots ,11} \hfill \\ {{\text{where}},\,\,\,\,\,\,G_{1} ({\mathbf{X}}) = \frac{27}{{X_{1} X_{2}^{2} X_{3} }} - 1,\,\,\,\,\,\,G_{2} ({\mathbf{X}}) = \frac{397.5}{{X_{1} X_{2}^{2} X_{3}^{2} }} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{3} ({\mathbf{X}}) = \frac{{1.93\,X_{4}^{3} }}{{X_{2} X_{3} X_{6}^{4} }} - 1,\,\,\,\,\,\,G_{4} ({\mathbf{X}}) = \frac{{1.93\,X_{5}^{3} }}{{X_{2} X_{3} X_{7}^{4} }} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{5} ({\mathbf{X}}) = \frac{{\sqrt {(745\,X_{4} /X_{2} X_{3} )^{2} + 16.9 \times 10^{6} } }}{{110\,X_{6}^{3} }} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{6} ({\mathbf{X}}) = \frac{{\sqrt {(745\,X_{5} /X_{2} X_{3} )^{2} + 157.5 \times 10^{6} } }}{{85\,X_{7}^{3} }} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{7} ({\mathbf{X}}) = X_{2} X_{3} - 40,\,\,\,\,\,\,G_{8} ({\mathbf{X}}) = 5 - \frac{{X_{1} }}{{X_{2} }},\,\,\,\,\,\,G_{9} ({\mathbf{X}}) = \frac{{X_{1} }}{{X_{2} }} - 12,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,G_{10} ({\mathbf{X}}) = \frac{{(1.5\,X_{6} + 1.9)}}{{X_{4} }} - 1,\,\,\,\,\,\,G_{11} ({\mathbf{X}}) = \frac{{(1.1\,X_{7} + 1.9)}}{{X_{5} }} - 1,} \hfill \\ {\,\,\,\,\,\,\,\,\,\beta_{1} = \ldots = \beta_{11} = 3.0,} \hfill \\ {\,\,\,\,\,\,\,\,\,2.6 \le d_{1} \le 3.6,\,\,\,\,\,\,\,\,0.7 \le d_{2} \le 0.8,\,\,\,\,\,\,\,\,17 \le d_{3} \le 28,\,\,\,\,\,\,\,\,7.3 \le d_{4} \le 8.3,} \hfill \\ {\,\,\,\,\,\,\,\,\,7.3 \le d_{5} \le 8.3,\,\,\,\,\,\,\,\,\,2.9 \le d_{6} \le 3.9,\,\,\,\,\,\,\,\,5.0 \le d_{7} \le 5.5} \hfill \\ {\,\,\,\,\,\,\,\,\,X_{i} \sim N(d_{i} ,0.005)\,\,for\,\,i = 1, \ldots \,,7,} \hfill \\ {\,\,\,\,\,{\mathbf{d}}^{(0)} = [3.50,\,0.70,\,17.00,\,7.30,\,7.72,\,3.35,\,5.29]} \hfill \\ \end{array} $$
(12)

The initial design point is selected as the result of deterministic optimization. The optimization results are summarized in Table 3.

Table 3. Summary of optimal results for Example 3

The optimization results are provided in Table 3; the optimal results are almost equal to those in reference [10]. All the probabilistic constraints satisfy the corresponding target reliability at the optimum. The number of objective function evaluations for ASORA is almost equal to that for SORA, while the number of performance function evaluations for ASORA is much smaller. From Table 3, it can be seen that the reduced number of performance function evaluations in ASORA also reduces the number of cycles compared to SORA.

5 Multi-disciplinary RBDO Problem

5.1 Example 4

The MDO numerical example is a conventional RBDO problem formulated in the MDF framework [6]. For demonstration, the problem is artificially decomposed into two subsystems (disciplines) and then formulated as an RBMDO problem, as shown in Fig. 5.

Fig. 5.
figure 5

System structure of Example 4.

It has 2 disciplines and 3 design variables \( d_{1} ,\,d_{2} \) and \( d_{s} \). The random design parameters are \( X_{1} \;{ \sim }\;N\left( {5,0.5} \right) \), \( X_{2} \;{ \sim }\;N\left( {1,0.1} \right) \) and \( X_{s} \;{ \sim }\;N\left( {0,0.3} \right) \). It has two probabilistic constraints, and the target reliability indices are set to 3. The optimization problem is described as:

$$ \begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{{_{{\left( {d_{s} ,d_{1} ,d_{2} } \right)}} }} \,\,v\,\,\, = (\bar{v}_{1} + \bar{v}_{2} ) = \left( {d_{s} + \bar{x}_{s} } \right)^{2} + d_{1}^{2} + d_{2}^{2} } \hfill \\ {\,s.t.\,\,\,\,\,\,\,G_{1} = - (x_{1}^{*,(1)} - (d_{s} + x_{s}^{*,(1)} + 2d_{1} + 2y_{21}^{(1)} ))\, \le 0\,\,} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,G_{2} = - (5d_{s} + 5x_{1}^{*,(2)} + 3d_{2} - 4y_{12}^{(2)} - x_{2}^{*,(2)} ) \le 0} \hfill \\ {\left\{ \begin{aligned} y_{12}^{(1)} = d_{s} + x_{s}^{*.(1)} + d_{1} + y_{21}^{(1)} \hfill \\ y_{21}^{(1)} = d_{s} + x_{s}^{*.(1)} + d_{2} - y_{12}^{(1)} \hfill \\ \end{aligned} \right.\,\,\,\,\,\,\,\left\{ \begin{aligned} y_{12}^{(2)} = d_{s} + x_{s}^{*.(2)} + d_{1} + y_{21}^{(2)} \hfill \\ y_{21}^{(2)} = d_{s} + x_{s}^{*.(2)} + d_{2} - y_{12}^{(2)} \hfill \\ \end{aligned} \right.} \hfill \\ {\beta_{1}^{t} = \beta_{2}^{t} = 3.0,\,\,\,\,\,} \hfill \\ {{\mathbf{d}}_{i} \ge 0\,\,for\,\,i = 1,2,S} \hfill \\ {x_{s} \sim N(0,0.3),\,\,\,\,\,x_{1} \sim N(5,0.5),\,\,\,\,\,x_{2} \sim N(1,0.1)\,} \hfill \\ {{\mathbf{d}}^{(0)} = [1.6667,\,1.6667,\,1.6667]} \hfill \\ \end{array} $$
(13)
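The coupling equations in Eq. (13) are linear in the state variables, so the multidisciplinary analysis (MDA) can be solved in closed form: writing \( c = d_{s} + x_{s} \), the system becomes \( y_{12} - y_{21} = c + d_{1} \) and \( y_{12} + y_{21} = c + d_{2} \), which gives \( y_{21} = (d_{2} - d_{1} )/2 \). The sketch below verifies this at the stated initial design (a simple direct solve; the values come from Eq. (13)):

```python
def mda(ds, d1, d2, xs):
    """Closed-form solution of the linear coupling of Eq. (13):
        y12 = c + d1 + y21,   y21 = c + d2 - y12,   c = ds + xs.
    Rearranged: y12 - y21 = c + d1 and y12 + y21 = c + d2."""
    c = ds + xs
    y12 = ((c + d1) + (c + d2)) / 2.0
    y21 = ((c + d2) - (c + d1)) / 2.0
    return y12, y21

ds = d1 = d2 = 1.6667   # initial design d^(0) from Eq. (13)
xs = 0.0                # mean value of x_s

y12, y21 = mda(ds, d1, d2, xs)

# Residuals of the original coupling equations (should both be zero).
r1 = y12 - (ds + xs + d1 + y21)
r2 = y21 - (ds + xs + d2 - y12)
print(y12, y21, r1, r2)
```

A direct solve is preferable here to fixed-point iteration, since the Gauss–Seidel sweep of these two equations does not contract.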

From Table 4, it can be seen that both methods converge to almost the same optimal point, which also matches the reference, where the objective is 15.1843 and the optimal design is (2.2498, 2.2498, 2.2498), and all the probabilistic constraints satisfy the corresponding target reliability at the optimum. The percentile value of \( G_{1} \) is zero for both methods at the optimal point, which means that \( G_{1} \) is active and its reliability is exactly equal to the required reliability. The percentile value of \( G_{2} \) is less than zero for both methods; therefore, the reliability of \( G_{2} \) is greater than required and this constraint is inactive. Also from Table 4, the percentile value of \( G_{2} \) for SORA is almost equal to that in reference [6], while for ASORA it equals that of the deterministic model and is smaller than for SORA. This shows that, although the second constraint is inactive, its boundary is slightly shifted by the SORA method, whereas in ASORA the inactive second constraint is not shifted at all. Likewise, from Fig. 6, it can be seen that the shift vector of the inactive second probabilistic constraint is exactly zero in ASORA but not in SORA. The numbers of objective function and performance function evaluations for ASORA are much smaller than for SORA; the ASORA method therefore performs better than SORA in terms of reducing the computational cost.

Table 4. Summary of optimal results for Example 4
Fig. 6.
figure 6

Shifted vectors for Example 4.

5.1.1 Example 5

The second RBMDO example is the multiple-limit-state problem (the single-discipline problem of Sect. 4.1.1), which was reformulated by Smith [16] as a multidisciplinary design optimization example. The RBMDO model is formulated as:

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,\,\,\,f({\mathbf{d}}) = d_{1} + d_{2} } \hfill \\ {{\text{Subject}}\,{\text{to}}\,\,\,\,\Pr [G_{i} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) \le 0] \le 0.0013,\,\,\,i = 1, \ldots ,3} \hfill \\ {{\text{Where}}\,\,\,\,\,\,\,{\mathbf{d}} = [d_{1} ,d_{2} ],\,\,\,\,{\mathbf{X}} = \left[ {X_{1} \sim N\left( {d_{1} ,0.3} \right),\,X_{2} \sim N\left( {d_{2} ,0.3} \right)} \right]} \hfill \\ {\,\,\,\,\,G_{1} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) = \frac{{u_{1} X_{1} }}{20} - 1,} \hfill \\ {\,\,\,\,\,G_{2} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) = \frac{{u_{2} }}{30} + \frac{{\left( {X_{1} - X_{2} - 12} \right)^{2} }}{120} - 1,} \hfill \\ {\,\,\,\,\,G_{3} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) = \frac{80}{{X_{1}^{2} + 8X_{2} + 5}} - 1,} \hfill \\ {{\text{MDA: }}A_{1} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) = u_{1} - X_{1} (u_{2} - X_{1} + 5) = 0} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,A_{2} ({\mathbf{X}},{\mathbf{u}}({\mathbf{X}})) = u_{2} - \frac{{u_{1} }}{{X_{2} }} - X_{2} + 5 = 0} \hfill \\ {{\text{s}}.{\text{t}}.\,\,\,X_{1} \ne X_{2} ,\,\,\,\,\,{\mathbf{d}}^{0} = [3.1251,\,2.0479]} \hfill \\ \end{array} $$
(14)

The MDO structure is implemented in the MDF framework. The problem has 2 disciplines and 2 random design variables with normal distributions, as well as 3 probabilistic constraints whose target reliability indices are set to 3.
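Reading the MDA equations of Eq. (14) as residuals \( A_{1} = u_{1} - X_{1} (u_{2} - X_{1} + 5) = 0 \) and \( A_{2} = u_{2} - u_{1} /X_{2} - X_{2} + 5 = 0 \) (an assumption about the intended form), the coupled system has the closed-form solution \( u_{1} = X_{1} X_{2} \), \( u_{2} = X_{1} + X_{2} - 5 \), which is unique whenever \( X_{1} \ne X_{2} \); this is exactly why that condition appears in the formulation. A quick numerical verification:

```python
def mda_residuals(x1, x2, u1, u2):
    """Residuals of the MDA of Eq. (14), read as
       A1 = u1 - x1*(u2 - x1 + 5) = 0,  A2 = u2 - u1/x2 - x2 + 5 = 0
    (assumed residual form)."""
    a1 = u1 - x1 * (u2 - x1 + 5.0)
    a2 = u2 - u1 / x2 - x2 + 5.0
    return a1, a2

# Candidate closed-form solution: u1 = x1*x2, u2 = x1 + x2 - 5
# (unique when x1 != x2).
x1, x2 = 3.1251, 2.0479   # initial point d^0 of Eq. (14)
u1, u2 = x1 * x2, x1 + x2 - 5.0

a1, a2 = mda_residuals(x1, x2, u1, u2)
print(u1, u2, a1, a2)

# With this solution, G1 of Eq. (14) reduces to the single-discipline
# constraint of Example 1: u1*x1/20 - 1 = x1**2 * x2 / 20 - 1.
g1_mdo = u1 * x1 / 20.0 - 1.0
g1_single = x1 ** 2 * x2 / 20.0 - 1.0
print(g1_mdo, g1_single)
```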

The optimization results are provided in Table 5 and are almost equal to those in reference [16]. All the probabilistic constraints satisfy the corresponding target reliability at the optimum. The number of objective function evaluations for ASORA is larger than for SORA, but the number of performance function evaluations for ASORA is much smaller, and this saving outweighs the extra objective function evaluations; hence ASORA is more efficient than SORA in reducing the computational cost. From Table 5, it can be seen that the percentile value of the third probabilistic constraint differs between ASORA and SORA. The zero percentile values of \( G_{1} \) and \( G_{2} \) mean that these constraints are active and that their reliabilities are exactly equal to the required reliability. The percentile value of \( G_{3} \) is less than zero for both methods; therefore, the reliability of \( G_{3} \) is greater than required, i.e., the third constraint is inactive.

Table 5. Summary of optimal results for Example 5

5.2 Multi-objective RBDO Problem

5.2.1 Example 6: Two-Bar Truss Structure

The popular two-bar truss structure problem (Fig. 7), adapted from a robust optimization problem in [17], is used as a benchmark numerical example for multi-objective reliability optimization in this study. The design variables are the cross-section diameter (d) and the structure height (H). The random parameters of the problem are the vertical force (P ∼ N(150, 5) kN), the structure width (B ∼ N(750, 10) mm), the elastic modulus (E ∼ N(2.1e5, 5e3) N/mm2), and the member thickness (t ∼ N(2.5, 0.4) mm). The optimization problem is to minimize the volume and the vertical displacement of the structure subject to stress and buckling constraints as well as bound constraints, Eq. (15).

Fig. 7.
figure 7

Two-bar truss structure.

$$ \begin{array}{*{20}l} {{\text{Minimize}}\,\,\,\,\,\,f_{1} (d,H):\,\,\,\,\,volume = 2\pi dt\sqrt {B^{2} + H^{2} } } \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,f_{2} (d,H):\,\,\,\,\,deflection = \frac{{P(B^{2} + H^{2} )^{{\frac{3}{2}}} }}{{2\pi E\,t\,dH^{2} }}} \hfill \\ {{\text{Subject}}\,{\text{to}}\,\,\,\,\,\,\Pr [G_{j} ({\mathbf{X}}) \le 0] \le\Phi ( - \beta_{{t_{j} }} ),\,\,\,\,j = 1,2} \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\beta_{{t_{j} }} = 3,\,\,\,j = 1,2} \hfill \\ {{\text{where}}\,\,\,\,\,\,\,\,\,\,\,G_{1} ({\mathbf{X}}):S \le S_{\hbox{max} } ,\,\,\,\,\,\,G_{2} ({\mathbf{X}}):S \le S_{crit} } \hfill \\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,20 \le d \le 80,\,\,200 \le H \le 1000} \hfill \\ {S = \frac{{P\sqrt {B^{2} + H^{2} } }}{2\pi tdH},\,\,S_{crit} = \frac{{\pi^{2} E(t^{2} + d^{2} )}}{{8(B^{2} + H^{2} )}},\,\,S_{\hbox{max} } = 400\,{\text{MPa}}} \hfill \\ \end{array} $$
(15)
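The objective and constraint expressions of Eq. (15) can be checked at a sample design with the random parameters fixed at their means (d = 50 mm and H = 600 mm are illustrative choices, not an optimum): the member stress comes out below both the stress limit \( S_{\hbox{max} } \) and the buckling limit \( S_{crit} \).

```python
import math

# Mean values of the random parameters of Eq. (15).
P = 150e3    # vertical force [N]
B = 750.0    # structure width [mm]
E = 2.1e5    # elastic modulus [N/mm^2]
t = 2.5      # member thickness [mm]

# Illustrative sample design (not an optimum).
d, H = 50.0, 600.0

L = math.sqrt(B ** 2 + H ** 2)             # member length [mm]
volume = 2.0 * math.pi * d * t * L         # f1 of Eq. (15) [mm^3]
S = P * L / (2.0 * math.pi * t * d * H)    # member stress [MPa]
S_crit = math.pi ** 2 * E * (t ** 2 + d ** 2) / (8.0 * (B ** 2 + H ** 2))
S_max = 400.0                              # stress limit [MPa]

# Both limit states of Eq. (15) hold at this design: S <= S_max, S <= S_crit.
print(round(volume), round(S, 1), round(S_crit, 1))
```

In the reliability-based version of the problem, these two inequalities must of course hold not at the means but with probability \( \Phi (\beta_{{t_{j} }} ) \) under the stated parameter distributions.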

The problem is first solved as a deterministic one using the NSGA-II algorithm; the reliable Pareto fronts obtained from both methods for the three reliability targets \( \beta = 2,\,\,2.6 \) and 3 are depicted in Fig. 8. The Pareto fronts show good agreement between the two methods in achieving reliable optimal results. As Fig. 8 shows, the reliable Pareto fronts move farther from the deterministic one as the reliability target increases. The proposed method is thus well suited for multi-objective reliability-based design optimization of problems involving computationally demanding constraint functions, and it shows good accuracy and efficiency compared with the SORA method.

Fig. 8.
figure 8

Pareto fronts of Example 6.

6 Conclusion

SORA is one of the most efficient single-loop methods for achieving a reliable optimal design in RBDO problems. In this paper, the ASORA method has been proposed to reduce the number of performance function calls for satisfied probabilistic constraints in the reliability assessment of the SORA method, which clearly leads to a considerable reduction in the computational cost. In this method, the MPP and the optimal point of the previous cycle are used to examine whether each probabilistic constraint is inactive, and the reliability assessment is then performed only for the active probabilistic constraints. The key concept of the ASORA method is to refrain from the reliability assessment of satisfied probabilistic constraints in each cycle until all probabilistic constraints are satisfied.

As demonstrated by the numerical examples, the number of constraint function calls of ASORA is much smaller than that of SORA when the same starting points and optimization method are used. In terms of reducing the computational cost, the ASORA method therefore performs more efficiently than the SORA method.