
Convergence Acceleration for Multiobjective Sparse Reconstruction via Knowledge Transfer

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 11411))

Abstract

Multiobjective sparse reconstruction (MOSR) methods can achieve superior reconstruction performance, but they suffer from high computational cost, especially in high-dimensional reconstruction. Moreover, each problem is typically solved from scratch, without reusing knowledge from past problem-solving experience, so similar search spaces are re-explored at unnecessary computational expense. To address these problems, we propose a sparse-constraint knowledge transfer operator that accelerates the convergence of MOSR solvers by reusing knowledge from previously solved problems. First, we introduce a deep nonlinear feature coding method to extract the feature mapping between the search space of the current problem and that of a previously solved MOSR problem. Through this mapping, we learn a set of knowledge-induced solutions that encode the search experience of the past problem. We then apply a sparse-constraint strategy to refine these learned solutions and guarantee their sparsity. Finally, we inject the refined solutions into the iterations of the current problem to accelerate convergence. Comprehensive experiments on simulated signal reconstruction validate the efficiency of the proposed operator.
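The three steps sketched in the abstract (learn a mapping from past to current search space, enforce sparsity on the mapped solutions, inject them into the current population) can be illustrated with a minimal NumPy sketch. This is only an illustrative stand-in: it uses a linear least-squares mapping in place of the paper's deep nonlinear feature coding, and plain hard-thresholding as one plausible sparse-constraint strategy; the function names, shapes, and the choice of `k` are all assumptions, not the authors' implementation.

```python
import numpy as np

def learn_mapping(P_src, P_tgt):
    """Learn a mapping M with M @ P_src ~= P_tgt (least squares).

    P_src, P_tgt: arrays of shape (n, pop), one solution per column.
    A linear stand-in for the paper's deep nonlinear feature coding.
    """
    return P_tgt @ np.linalg.pinv(P_src)

def sparse_refine(x, k):
    """Keep only the k largest-magnitude entries of x (hard threshold)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def transfer_solutions(src_solutions, M, k):
    """Map past solutions into the current space, then enforce sparsity.

    The returned columns are the 'knowledge-induced' solutions that would
    be injected into the current population.
    """
    mapped = M @ src_solutions
    return np.apply_along_axis(sparse_refine, 0, mapped, k)
```

In a solver loop, `transfer_solutions` would be called once per injection step, with `src_solutions` taken from the archive of the previously solved problem and `k` set to the target sparsity level.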



Acknowledgments

This work was supported by the China Scholarship Council under Grant 201706540025.

Author information

Corresponding author

Correspondence to Bai Yan.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Yan, B., Zhao, Q., Zhang, J.A., Li, Y., Wang, Z. (2019). Convergence Acceleration for Multiobjective Sparse Reconstruction via Knowledge Transfer. In: Deb, K., et al. Evolutionary Multi-Criterion Optimization. EMO 2019. Lecture Notes in Computer Science, vol 11411. Springer, Cham. https://doi.org/10.1007/978-3-030-12598-1_38

  • DOI: https://doi.org/10.1007/978-3-030-12598-1_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-12597-4

  • Online ISBN: 978-3-030-12598-1

  • eBook Packages: Computer Science (R0)
