A neural network to solve quadratic programming problems with fuzzy parameters

Fuzzy Optimization and Decision Making

Abstract

In this paper, a recurrent neural network for solving quadratic programming problems with fuzzy parameters (FQP) is presented. The motivation is to design a new, effective one-layer neural network model for solving the FQP; to the best of our knowledge, no neural network approach to the FQP has been studied before. We first transform the FQP into a bi-objective problem, reduce the bi-objective problem to a weighting problem, and then construct the Lagrangian dual. On this basis, we propose a neural network model to solve the FQP. Finally, some illustrative examples are given to show the effectiveness of the proposed approach.
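To make the recurrent-network idea concrete, the following is a minimal sketch, not the paper's actual model, of projection-type neural dynamics for a crisp convex QP with nonnegativity constraints; one-layer models of this general kind underlie the approach described above. The step size `alpha`, the Euler discretization `dt`, and the toy data `H`, `c` are illustrative assumptions.

```python
import numpy as np

def qp_neural_network(H, c, alpha=0.1, dt=0.01, steps=20000):
    """Euler simulation of the projection dynamics
        dx/dt = -x + P(x - alpha * (Hx + c)),
    where P is the projection onto the nonnegative orthant x >= 0.
    Equilibria of these dynamics are KKT points of
        min 0.5 x^T H x + c^T x  s.t.  x >= 0."""
    x = np.zeros(len(c))
    for _ in range(steps):
        grad = H @ x + c                             # gradient of the objective
        x_proj = np.maximum(x - alpha * grad, 0.0)   # projection onto x >= 0
        x = x + dt * (x_proj - x)                    # one Euler step
    return x

# toy problem (assumed data): minimizer of 0.5 x^T H x + c^T x over x >= 0 is (1, 2)
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x_star = qp_neural_network(H, c)
```

Here the unconstrained minimizer \(H^{-1}(-c)=(1,2)^T\) is feasible, so the dynamics settle there; for active constraints the projection keeps the trajectory in the feasible orthant.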

References

  • Abdel-Malek, L. L., & Areeractch, N. (2007). A quadratic programming approach to the multi-product newsvendor problem with side constraints. European Journal of Operational Research, 176(8), 55–61.

  • Ammar, E., & Khalifa, H. A. (2003). Fuzzy portfolio optimization: A quadratic programming approach. Chaos, Solitons & Fractals, 18, 1045–1054.

  • Bazaraa, M. S., Shetty, C., & Sherali, H. D. (1979). Nonlinear programming: Theory and algorithms. New York: Wiley.

  • Chen, Y.-H., & Fang, S.-C. (2000). Neurocomputing with time delay analysis for solving convex quadratic programming problems. IEEE Transactions on Neural Networks, 11, 230–240.

  • Cruz, C., Silva, R. C., & Verdegay, J. L. (2011). Extending and relating different approaches for solving fuzzy quadratic problems. Fuzzy Optimization and Decision Making, 10(3), 193–210.

  • Effati, S., Pakdaman, M., & Ranjbar, M. (2011). A new fuzzy neural network model for solving fuzzy linear programming problems and its applications. Neural Computing and Applications, 20, 1285–1294.

  • Effati, S., Mansoori, A., & Eshaghnezhad, M. (2015). An efficient projection neural network for solving bilinear programming problems. Neurocomputing, 168, 1188–1197.

  • Effati, S., & Ranjbar, M. (2011). A novel recurrent nonlinear neural network for solving quadratic programming problems. Applied Mathematical Modelling, 35, 1688–1695.

  • Eshaghnezhad, M., Effati, S., & Mansoori, A. (2016). A neurodynamic model to solve nonlinear pseudo-monotone projection equation and its applications. IEEE Transactions on Cybernetics. doi:10.1109/TCYB.2016.2611529.

  • Friedman, M., Ma, M., & Kandel, A. (1999). Numerical solution of fuzzy differential and integral equations. Fuzzy Sets and Systems, 106, 35–48.

  • Hopfield, J. J., & Tank, D. W. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52, 141–152.

  • Khalil, H. K. (1996). Nonlinear systems. Michigan: Prentice-Hall.

  • Liu, S. T. (2009). A revisit to quadratic programming with fuzzy parameters. Chaos, Solitons & Fractals, 41, 1401–1407.

  • Lupulescu, V. (2009). On a class of fuzzy functional differential equations. Fuzzy Sets and Systems, 160, 1547–1562.

  • Mansoori, A., Effati, S., & Eshaghnezhad, M. (2016). An efficient recurrent neural network model for solving fuzzy non-linear programming problems. Applied Intelligence. doi:10.1007/s10489-016-0837-4.

  • Miettinen, K. M. (1999). Non-linear multiobjective optimization. Boston: Kluwer Academic.

  • Panigrahi, M., Panda, G., & Nanda, S. (2008). Convex fuzzy mapping with differentiability and its application in fuzzy optimization. European Journal of Operational Research, 185(1), 47–62.

  • Petersen, J. A. M., & Bodson, M. (2006). Constrained quadratic programming techniques for control allocation. IEEE Transactions on Control Systems Technology, 14(9), 1–8.

  • Silva, R. C., Cruz, C., & Verdegay, J. L. (2013). Fuzzy costs in quadratic programming problems. Fuzzy Optimization and Decision Making, 12(3), 231–248.

  • Wang, G., & Wu, C. (2003). Directional derivatives and sub-differential of convex fuzzy mappings and application in convex fuzzy programming. Fuzzy Sets and Systems, 138, 559–591.

  • Wu, H.-C. (2003). Saddle point optimality conditions in fuzzy optimization problems. Fuzzy Optimization and Decision Making, 2(3), 261–273.

  • Wu, H.-C. (2004). Evaluate fuzzy optimization problems based on biobjective programming problems. Computers and Mathematics with Applications, 47, 893–902.

  • Wu, H.-C. (2004). Duality theory in fuzzy optimization problems. Fuzzy Optimization and Decision Making, 3(4), 345–365.

  • Wu, X.-L., & Liu, Y.-K. (2012). Optimizing fuzzy portfolio selection problems by parametric quadratic programming. Fuzzy Optimization and Decision Making, 11(4), 411–449.

  • Xia, Y., & Wang, J. (2000). A recurrent neural network for solving linear projection equations. Neural Networks, 13, 337–350.

  • Zhong, Y., & Shi, Y. (2002). Duality in fuzzy multi-criteria and multi-constraint level linear programming: A parametric approach. Fuzzy Sets and Systems, 132, 335–346.

Acknowledgements

The authors wish to express their special thanks to the anonymous referees and the editor for their valuable suggestions.

Author information

Corresponding author

Correspondence to Sohrab Effati.

Appendices

Appendix 1: Some results on fuzzy calculus

Lemma 7.1

Let \({\tilde{f}}, {\tilde{g}}\) be convex fuzzy mappings defined on \(C\subseteq {\varOmega }\) with \(int\ C\ne \emptyset \). Then,

$$\begin{aligned} {\tilde{\partial }}(\lambda {\tilde{f}})(x)=\lambda {\tilde{\partial }} {\tilde{f}}(x),\quad (\lambda >0),\qquad {\tilde{\partial }}({\tilde{f}}+{\tilde{g}})(x)={\tilde{\partial }} {\tilde{f}}(x)+{\tilde{\partial }} {\tilde{g}}(x). \end{aligned}$$

Proof

From Theorem 2.14, \(\lambda {\tilde{f}}\) and \({\tilde{f}}+{\tilde{g}}\) are convex fuzzy mappings. The rest of the proof follows from Theorem 23.8 in Zhong and Shi (2002). \(\square \)

Theorem 7.2

Let \({\tilde{f}}, {\tilde{g}}\) be convex fuzzy mappings defined on \(C\subseteq {\varOmega }\) with \(int\ C\ne \emptyset \). If \({\tilde{f}}, {\tilde{g}}\) are differentiable at \(x^*\), then \(\lambda {\tilde{f}}\) (\(\lambda >0\)) and \({\tilde{f}}+{\tilde{g}}\) are also differentiable at \(x^*\), i.e.,

$$\begin{aligned} {\tilde{\nabla }} (\lambda {\tilde{f}})(x^*)=\lambda {\tilde{\nabla }} {\tilde{f}}(x^*),\quad {\tilde{\nabla }}({\tilde{f}}+{\tilde{g}})(x^*)={\tilde{\nabla }} {\tilde{f}}(x^*)+{\tilde{\nabla }} {\tilde{g}}(x^*). \end{aligned}$$

Proof

From Theorem 2.14, \(\lambda {\tilde{f}}\) and \({\tilde{f}}+{\tilde{g}}\) are convex fuzzy mappings. Using Definition 2.11 and Lemma 7.1, the proof is immediate. \(\square \)

Appendix 2: Some results on FQP

Lemma 7.3

If the fuzzy matrix \({\tilde{H}}\) is positive semi-definite and symmetric and \(x\in {\mathbb {R}}^n_+\), then \(x^T{\tilde{H}}x\) is a fuzzy number and,

$$\begin{aligned} x^T{\tilde{H}}x&=\left\{ \left( \sum _{i,j=1}^nx_ix_j\underline{h_{ij}}(\alpha ),\ \sum _{i,j=1}^nx_ix_j\overline{h_{ij}}(\alpha ),\ \alpha \right) : \alpha \in [0,1]\right\} \\&=\{(x^T{\underline{H}}(\alpha )x,\ x^T{\overline{H}}(\alpha )x,\ \alpha ): \alpha \in [0,1]\}, \end{aligned}$$
(24)

where \(x=(x_1,x_2,\ldots ,x_n)^T, {\tilde{H}}=(\{(\underline{h_{ij}}(\alpha ), \overline{h_{ij}}(\alpha ), \alpha ): \alpha \in [0,1]\})_{n\times n}\).

Proof

The proof follows from Definition 3.1. \(\square \)
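As a numerical illustration of Lemma 7.3, the sketch below evaluates the \(\alpha \)-cut endpoints of \(x^T{\tilde{H}}x\); since every product \(x_ix_j\ge 0\), the endpoints are attained at the entrywise endpoints of \({\tilde{H}}\)'s \(\alpha \)-cut. The bound functions `H_low`, `H_up` and the triangular toy data are assumptions for the example, not data from the paper.

```python
import numpy as np

def quadratic_form_alpha_cut(x, H_low, H_up, alpha):
    """Alpha-cut [lo, hi] of x^T H~ x for a fuzzy matrix with entrywise
    lower/upper bound functions H_low(alpha), H_up(alpha), assuming x >= 0,
    so that the endpoints are x^T H_low x and x^T H_up x (Lemma 7.3)."""
    lo = x @ H_low(alpha) @ x
    hi = x @ H_up(alpha) @ x
    return lo, hi

# triangular fuzzy entries: crisp core A with symmetric spread (assumed data)
A = np.array([[2.0, 0.0], [0.0, 2.0]])
spread = 0.5
H_low = lambda a: A - (1.0 - a) * spread
H_up  = lambda a: A + (1.0 - a) * spread

x = np.array([1.0, 2.0])
lo0, hi0 = quadratic_form_alpha_cut(x, H_low, H_up, 0.0)  # widest cut
lo1, hi1 = quadratic_form_alpha_cut(x, H_low, H_up, 1.0)  # core: the crisp value x^T A x
```

At \(\alpha =1\) both endpoints collapse to the crisp value \(x^TAx\), matching the second line of (24).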

Lemma 7.4

Let the fuzzy matrix \({\tilde{H}}\) be positive semi-definite and symmetric and \(x\in {\mathbb {R}}^n_+\). Then \({\tilde{h}}(x)=x^T{\tilde{H}}x\) is a convex fuzzy mapping.

Proof

The proof follows from Definition 3.1, Theorem 2.13, and Lemma 7.3. \(\square \)

Now, consider the FQP defined in (1). Here, we are going to prove some results for the FQP.

Lemma 7.5

Let the fuzzy matrix \({\tilde{H}}\) be positive semi-definite and symmetric. Then \({\tilde{f}}(x)={\tilde{c}}^Tx+\frac{1}{2}x^T{\tilde{H}}x\) in (1) is a convex fuzzy mapping.

Proof

Since \(x\ge 0\), \({\tilde{c}}^Tx\) is a convex fuzzy mapping. Using Lemma 7.4 and Theorem 2.14, the proof is complete. \(\square \)

Remark 7.6

Since in FQP (1), \({\tilde{f}}(x)\) is a convex fuzzy mapping and \(T=\{x:\ x\ge 0,\ {\tilde{A}}x\le {\tilde{b}}\}\) is a convex feasible set, FQP (1) is a convex fuzzy programming problem.

Remark 7.7

As in crisp programming, we say a fuzzy programming problem is convex if both the objective function and the feasible region are convex.

Lemma 7.8

The fuzzy mapping \({\tilde{f}}(x)={\tilde{c}}^Tx+\frac{1}{2}x^T{\tilde{H}}x\) is differentiable on \(int\ {\mathbb {R}}^n_+\) and,

$$\begin{aligned} {\tilde{\nabla }}{\tilde{f}}(x)={\tilde{c}}+{\tilde{H}}{x}. \end{aligned}$$
(25)

Proof

The proof follows from Theorem 7.2 and Lemma 7.5. \(\square \)
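The fuzzy gradient \({\tilde{\nabla }}{\tilde{f}}(x)={\tilde{c}}+{\tilde{H}}x\) of Lemma 7.8 can likewise be evaluated cut-by-cut. This sketch uses assumed triangular fuzzy data (not from the paper) and \(x\ge 0\), so entrywise interval arithmetic gives the lower and upper gradient bounds directly.

```python
import numpy as np

def fuzzy_gradient_alpha_cut(x, c_low, c_up, H_low, H_up, alpha):
    """Alpha-cut endpoints of the fuzzy gradient c~ + H~ x (Lemma 7.8).
    For x >= 0 the endpoints follow by entrywise interval arithmetic."""
    g_lo = c_low(alpha) + H_low(alpha) @ x
    g_hi = c_up(alpha) + H_up(alpha) @ x
    return g_lo, g_hi

# illustrative triangular fuzzy parameters (assumed data)
A = np.array([[2.0, 0.0], [0.0, 2.0]])
c0 = np.array([-2.0, -4.0])
H_low = lambda a: A - (1.0 - a) * 0.5
H_up  = lambda a: A + (1.0 - a) * 0.5
c_low = lambda a: c0 - (1.0 - a) * 0.2
c_up  = lambda a: c0 + (1.0 - a) * 0.2

x = np.array([1.0, 2.0])
g_lo1, g_hi1 = fuzzy_gradient_alpha_cut(x, c_low, c_up, H_low, H_up, 1.0)  # crisp core
g_lo0, g_hi0 = fuzzy_gradient_alpha_cut(x, c_low, c_up, H_low, H_up, 0.0)  # widest cut
```

At the core \(\alpha =1\) the cut collapses to the crisp gradient \(c+Ax\), which vanishes at this particular \(x\); at \(\alpha =0\) the cut is a genuine interval around it.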

Appendix 3: Proof of Theorem 3.2

Proof

Since \({\bar{x}}\) is a local optimal solution, there exists a neighborhood \(N({\bar{x}})\) around \({\bar{x}}\), such that:

$$\begin{aligned} {\tilde{f}}({\bar{x}})\le {\tilde{f}}(x),\quad \forall \ x\in N({\bar{x}}). \end{aligned}$$

That is, according to Definition 2.3,

$$\begin{aligned} {\underline{f}}({\bar{x}})(\alpha )\le {\underline{f}}(x)(\alpha ),\qquad {\overline{f}}({\bar{x}})(\alpha )\le {\overline{f}}(x)(\alpha ),\qquad 0\le \alpha \le 1. \end{aligned}$$
(26)

By contradiction, suppose that \({\bar{x}}\) is not a global optimal solution, i.e., \({\tilde{f}}(x^*)<{\tilde{f}}({\bar{x}})\) for some \(x^*\in T\), where \(T=\{x:\ x\ge 0,\ {\tilde{A}}x\le {\tilde{b}}\}\) is the feasible set. In other words,

$$\begin{aligned} {\underline{f}}({x}^*)(\alpha )\le {\underline{f}}({\bar{x}})(\alpha ),\qquad {\overline{f}}({x}^*)(\alpha )\le {\overline{f}}({\bar{x}})(\alpha ),\qquad 0\le \alpha \le 1. \end{aligned}$$

From the convexity of \({\tilde{f}}\) for all \(\lambda \in (0,1)\), we have:

$$\begin{aligned} \underline{f}(\lambda x^*+(1-\lambda )\bar{x})(\alpha )&\le \lambda \underline{f}(x^*)(\alpha )+ (1-\lambda )\underline{f}(\bar{x})(\alpha )\\&\le \lambda \underline{f}(\bar{x})(\alpha )+(1-\lambda )\underline{f}(\bar{x})(\alpha )=\underline{f}(\bar{x})(\alpha ),\\ \overline{f}(\lambda x^*+(1-\lambda )\bar{x})(\alpha )&\le \lambda \overline{f}(x^*)(\alpha )+(1-\lambda ) \overline{f}(\bar{x})(\alpha )\\ {}&\le \lambda \overline{f}(\bar{x})(\alpha )+(1-\lambda )\overline{f}(\bar{x})(\alpha )=\overline{f}(\bar{x})(\alpha ). \end{aligned}$$

But for \(\lambda >0\) sufficiently small, \(\lambda x^*+(1-\lambda ){\bar{x}}\in N({\bar{x}})\). Hence, the above inequalities contradict (26), so \({\bar{x}}\) is a global optimal solution. Now suppose that \({\bar{x}}\) is not the unique global optimal solution, so that there exists \({\hat{x}}\in T\), \({\hat{x}}\ne {\bar{x}}\), such that \({\tilde{f}}({\hat{x}})={\tilde{f}}({\bar{x}})\), i.e.,

$$\begin{aligned} {\underline{f}}({\hat{x}})(\alpha )={\underline{f}}({\bar{x}})(\alpha ),\qquad {\overline{f}}({\hat{x}})(\alpha )={\overline{f}}({\bar{x}})(\alpha ),\qquad 0\le \alpha \le 1. \end{aligned}$$

By the strict convexity,

$$\begin{aligned}&{\underline{f}}(\frac{1}{2}{\hat{x}}+\frac{1}{2}{\bar{x}})(\alpha )<\frac{1}{2}{\underline{f}}({\hat{x}})(\alpha )+\frac{1}{2}{\underline{f}}({\bar{x}})(\alpha )={\underline{f}}({\bar{x}})(\alpha ),\\&{\overline{f}}(\frac{1}{2}{\hat{x}}+\frac{1}{2}{\bar{x}})(\alpha )<\frac{1}{2}{\overline{f}}({\hat{x}})(\alpha )+\frac{1}{2}{\overline{f}}({\bar{x}})(\alpha )={\overline{f}}({\bar{x}})(\alpha ). \end{aligned}$$

By the convexity of \(T\), \(\frac{1}{2}{\hat{x}}+\frac{1}{2}{\bar{x}}\in T\), and the above inequalities violate the global optimality of \({\bar{x}}\). Hence, \({\bar{x}}\) is the unique global minimum. \(\square \)
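The strict-convexity step of the proof can be checked numerically on a crisp strictly convex quadratic (illustrative data, not from the paper): the midpoint of two distinct points attains a strictly smaller objective value than the average of their values.

```python
import numpy as np

def f(x, H, c):
    """Strictly convex quadratic c^T x + 0.5 x^T H x (H positive definite)."""
    return c @ x + 0.5 * x @ H @ x

H = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite => strictly convex
c = np.array([-2.0, -4.0])

x_hat = np.array([0.0, 0.0])
x_bar = np.array([2.0, 4.0])
mid = 0.5 * x_hat + 0.5 * x_bar

# strict convexity: f(mid) < 0.5 f(x_hat) + 0.5 f(x_bar) for x_hat != x_bar,
# which is exactly the inequality used to rule out two distinct global minima
lhs = f(mid, H, c)
rhs = 0.5 * f(x_hat, H, c) + 0.5 * f(x_bar, H, c)
```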

About this article

Cite this article

Mansoori, A., Effati, S. & Eshaghnezhad, M. A neural network to solve quadratic programming problems with fuzzy parameters. Fuzzy Optim Decis Making 17, 75–101 (2018). https://doi.org/10.1007/s10700-016-9261-9
