Multi-objective Optimization in High-Dimensional Molecular Systems

  • Debora Slanzi
  • Valentina Mameli
  • Marina Khoroshiltseva
  • Irene Poli
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 830)


The paper proposes a methodological approach to designing complex experiments for multi-objective optimization. The strategy relies on evolutionary statistical inference to search high-dimensional experimental spaces for optimal values. We developed this approach to study a particular molecular system and to discover the best molecules to propose as candidate drugs.
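As a rough illustration of the kind of search the abstract describes (not the authors' actual method), the sketch below runs a minimal evolutionary loop over hypothetical binary molecule encodings with two conflicting objectives, keeping the non-dominated (Pareto) candidates at each generation. All names, dimensions, and objective functions are invented for the example.

```python
import random

random.seed(0)

N_BITS = 20          # hypothetical high-dimensional binary descriptor
POP_SIZE = 30
GENERATIONS = 40

def objectives(x):
    # Two conflicting toy objectives to maximise:
    # f1 rewards ones in the first half, f2 rewards ones in the second half.
    f1 = sum(x[: N_BITS // 2])
    f2 = sum(x[N_BITS // 2:])
    return f1, f2

def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one.
    return all(ai >= bi for ai, bi in zip(a, b)) and any(
        ai > bi for ai, bi in zip(a, b)
    )

def pareto_front(pop):
    # Keep the individuals whose objective vectors are not dominated
    # by any other individual in the population.
    scored = [(x, objectives(x)) for x in pop]
    return [
        x for x, fx in scored
        if not any(dominates(fy, fx) for _, fy in scored if fy != fx)
    ]

def mutate(x, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in x]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    elite = pareto_front(pop)
    # Refill the population by mutating non-dominated parents.
    pop = elite + [mutate(random.choice(elite))
                   for _ in range(POP_SIZE - len(elite))]

front = sorted({objectives(x) for x in pareto_front(pop)})
print(front)
```

In a realistic molecular setting the objectives would come from statistical models fitted to experimental data (e.g. predicted potency and ADME properties) rather than a closed-form function, but the dominance-based selection loop is the same.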


Keywords: Multi-objective optimization · Evolutionary strategies · Statistical models



The authors would like to acknowledge the fruitful collaboration with Darren Green and his Molecular Design group at GlaxoSmithKline (GSK), Medicines Research Centre, Stevenage (UK).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Debora Slanzi (1, 2)
  • Valentina Mameli (1)
  • Marina Khoroshiltseva (1)
  • Irene Poli (1, 2)
  1. European Centre for Living Technology, Venice, Italy
  2. Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Venice, Italy
