
Active Framework by Sparsity Exploitation for Constructing a Training Set

  • Conference paper
  • First Online:
Intelligent Computing Theories and Application (ICIC 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10954)


Abstract

This paper addresses the problem of actively constructing a training set for a linear model with sparse structure. The problem typically arises when, for large-scale data, no nonlinear mapping offers comparable performance yet a linear model must be trained quickly. An active framework is proposed to further reduce the time cost of constructing the training set. Training examples are selected iteratively by matching, in pairs, partial feature components with the weights the classifier assigns to them, exploiting the model's sparsity to quickly separate the more informative examples from the rest. The proposed framework is evaluated on a group of classification tasks covering both text and image data.
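
The selection rule described above can be pictured with a small sketch. The code below is a minimal illustration, not the authors' exact procedure: it assumes an L1-regularized linear classifier whose sparse weight vector defines which feature components matter, scores each unlabeled example by its margin computed only over that sparse support (the "matched" components), and queries the most uncertain examples among those overlapping the support. The helper name select_batch, the use of scikit-learn's LinearSVC, and the concrete scoring rule are illustrative assumptions.

    import numpy as np
    from sklearn.svm import LinearSVC

    def select_batch(clf, X_pool, batch_size=10):
        """Illustrative sparsity-aware selection (hypothetical helper, not the paper's algorithm).

        Each unlabeled example is scored by its margin computed only over the
        nonzero weights of the linear model; examples whose nonzero features do
        not overlap that sparse support are deprioritized.
        """
        w = clf.coef_.ravel()
        support = np.flatnonzero(w)                       # components the sparse model actually uses
        X_sub = X_pool[:, support]                        # keep only the matched partial components
        margins = np.abs(X_sub @ w[support])              # proxy for distance to the decision boundary
        overlap = np.asarray((X_sub != 0).sum(axis=1)).ravel()
        scores = np.where(overlap > 0, margins, np.inf)   # examples with no overlap are never preferred
        return np.argsort(scores)[:batch_size]            # smallest margins = most uncertain

    # Toy usage on synthetic data: an L1-penalized linear SVM yields a sparse weight vector.
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(200, 500))
    y_lab = (X_lab[:, 0] - X_lab[:, 1] > 0).astype(int)   # labels depend on only two features
    X_pool = rng.normal(size=(1000, 500))
    clf = LinearSVC(penalty="l1", dual=False).fit(X_lab, y_lab)
    queries = select_batch(clf, X_pool, batch_size=20)    # indices of examples to label next

Restricting the score to the model's sparse support is what keeps each selection round cheap in this sketch: the full feature dimensionality never enters the scoring step.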


Notes

  1. http://www.qwone.com/jason/20Newsgroups/.


Acknowledgments

This work was supported by the Natural Science Foundation of China (Grant Nos. 61671188, 61571164, 61502122 and 61502117), the National Key Research and Development Plan Task of China (Grant No. 2016YFC0901902), and the Natural Science Foundation of Heilongjiang Province (Grant No. QC2016084).

Author information


Corresponding author

Correspondence to Yang Liu.



Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Guo, M., Wu, W., Liu, Y. (2018). Active Framework by Sparsity Exploitation for Constructing a Training Set. In: Huang, D.-S., Bevilacqua, V., Premaratne, P., Gupta, P. (eds.) Intelligent Computing Theories and Application. ICIC 2018. Lecture Notes in Computer Science, vol. 10954. Springer, Cham. https://doi.org/10.1007/978-3-319-95930-6_31


  • DOI: https://doi.org/10.1007/978-3-319-95930-6_31

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-95929-0

  • Online ISBN: 978-3-319-95930-6

  • eBook Packages: Computer Science, Computer Science (R0)
