Improving Deep Neural Networks by Adding Auxiliary Information

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 751))

Abstract

As the recent success of deep neural networks has solved many single-domain tasks, next-generation problems are likely to involve multiple domains. As a step in that direction, we investigate how auxiliary information affects a deep learning model. By designating a primary class and auxiliary classes, the behavior of deep learning models can be studied when an additional task is added to the original task. In this paper, we provide a theoretical argument that additional information, at least random information, should not degrade a deep learning model. We then propose an architecture that is capable of ignoring redundant information and show empirically that it copes well with auxiliary information. Finally, we propose examples of auxiliary information that improve the performance of our architecture.
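The abstract describes training on a primary task together with auxiliary classes through a shared model. A common way to realize this (the paper's exact architecture is not given in the abstract, so the dimensions, the shared-trunk design, and the `aux_weight` coefficient below are all illustrative assumptions) is a shared feature extractor with one classification head per task and a weighted joint loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the correct class.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Hypothetical sizes: batch, input dim, hidden dim, class counts.
n, d, h = 8, 16, 32
num_primary, num_aux = 10, 4

x = rng.normal(size=(n, d))
W_shared = rng.normal(scale=0.1, size=(d, h))
W_primary = rng.normal(scale=0.1, size=(h, num_primary))
W_aux = rng.normal(scale=0.1, size=(h, num_aux))

features = np.maximum(x @ W_shared, 0.0)   # shared ReLU trunk
p_primary = softmax(features @ W_primary)  # primary-task head
p_aux = softmax(features @ W_aux)          # auxiliary-task head

y_primary = rng.integers(0, num_primary, size=n)
y_aux = rng.integers(0, num_aux, size=n)

# Joint loss: the auxiliary term is down-weighted so that uninformative
# (e.g. random) auxiliary labels cannot dominate the primary objective.
aux_weight = 0.3
loss = cross_entropy(p_primary, y_primary) + aux_weight * cross_entropy(p_aux, y_aux)
```

Because the auxiliary head only shares the trunk, the model can in principle route redundant auxiliary signal through the auxiliary head alone, which matches the abstract's goal of ignoring uninformative extra labels.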



Acknowledgements

This work was supported by the ICCTDP (No. 10063172) funded by MOTIE, Korea.

Corresponding author

Correspondence to Junmo Kim.

Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Seong, S., Lee, C., Kim, J. (2019). Improving Deep Neural Networks by Adding Auxiliary Information. In: Kim, JH., et al. Robot Intelligence Technology and Applications 5. RiTA 2017. Advances in Intelligent Systems and Computing, vol 751. Springer, Cham. https://doi.org/10.1007/978-3-319-78452-6_4
