
A Collaborative Neurodynamic Optimization Approach to Bicriteria Portfolio Selection

  • Man-Fai Leung
  • Jun Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)

Abstract

In this paper, a collaborative neurodynamic optimization approach is applied to bicriteria portfolio selection in the Markowitz mean-variance framework. The bicriteria portfolio selection problem has two objectives, risk and return, which are scalarized using a weighted Chebyshev function. Multiple neurodynamic optimization models are employed to generate a set of Pareto-optimal solutions. Particle swarm optimization is used to diversify the Pareto-optimal solutions by optimizing the weights of the scalarized objective functions. Experimental results substantiate the superiority of the approach.
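The scalarization described above can be sketched in code. The snippet below is a minimal illustration, not the paper's method: the toy returns and covariance are assumed, random simplex sampling stands in for the neurodynamic optimization models that solve each scalarized subproblem, and a uniform sweep over the Chebyshev weights replaces the particle swarm optimization used in the paper to diversify the weights.

```python
import numpy as np

# Toy data (assumed for illustration): expected returns and covariance of 3 assets
mu = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.09, 0.02, 0.00],
                  [0.02, 0.04, 0.01],
                  [0.00, 0.01, 0.01]])

def objectives(x):
    """Bicriteria values, both to be minimized: (risk, negative return)."""
    return np.array([x @ Sigma @ x, -(mu @ x)])

# Long-only, fully invested portfolios sampled from the simplex.
# This sampling is a crude stand-in for the neurodynamic models.
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(3), size=20000)
F = np.array([objectives(x) for x in samples])

# Utopia (ideal) point z*: best value of each objective taken separately
z_star = F.min(axis=0)

def chebyshev(f, w, z):
    """Weighted Chebyshev scalarization: max_i w_i * (f_i - z_i)."""
    return np.max(w * (f - z))

# Sweep the weight vector to trace a set of Pareto-optimal solutions
pareto = []
for w1 in np.linspace(0.05, 0.95, 10):
    w = np.array([w1, 1.0 - w1])
    best = min(range(len(samples)), key=lambda i: chebyshev(F[i], w, z_star))
    pareto.append((F[best][0], -F[best][1]))  # (risk, return)

for risk, ret in pareto:
    print(f"risk={risk:.4f}  return={ret:.4f}")
```

Each weight vector emphasizes one objective more strongly, so varying it moves the minimizer along the Pareto front; this is why diversifying the weights (via PSO in the paper) diversifies the resulting Pareto-optimal portfolios.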

Keywords

Portfolio selection · Multiobjective optimization · Neurodynamic optimization approach


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
  2. School of Data Science, City University of Hong Kong, Kowloon, Hong Kong
  3. Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
