Transductive Classification via Dual Regularization

  • Quanquan Gu
  • Jie Zhou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5781)

Abstract

Semi-supervised learning has witnessed increasing interest in the past decade. One common assumption behind semi-supervised learning is that the data labels should be sufficiently smooth with respect to the intrinsic data manifold. Recent research has shown that the features also lie on a manifold. Moreover, there is a duality between data points and features: data points can be classified based on their distribution over the features, while features can be classified based on their distribution over the data points. However, existing semi-supervised learning methods neglect these properties. Based on the above observations, in this paper we present a dual regularization, which consists of two graph regularizers and a co-clustering type regularizer. Specifically, the two graph regularizers capture the geometric structure of the data points and of the features respectively, while the co-clustering type regularizer accounts for the duality between data points and features. Furthermore, we propose a novel transductive classification framework based on dual regularization, which can be solved by an alternating minimization algorithm whose convergence is theoretically guaranteed. Experiments on benchmark semi-supervised learning data sets demonstrate that the proposed methods outperform many state-of-the-art transductive classification methods.
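
To make the idea concrete, below is a minimal numerical sketch of a dual-regularized objective of the general form the abstract describes: a data-fitting term coupling a soft data-point label matrix F and a soft feature label matrix G, plus a graph regularizer on each. This is not the authors' exact algorithm: the paper uses a co-clustering type regularizer with label constraints and multiplicative updates; here, purely for illustration, each alternating subproblem is solved exactly as a Sylvester equation. All names (knn_laplacian, dual_regularized, lam, mu) are this sketch's own, not the paper's.

```python
# Sketch of dual regularization via alternating minimization (assumptions noted above).
import numpy as np
from scipy.linalg import solve_sylvester

def knn_laplacian(X, k=5):
    """Unnormalized Laplacian D - W of a symmetrized kNN graph over the rows of X."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        nbrs = np.argsort(d2[i])[1:k + 1]              # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                             # symmetrize the kNN graph
    return np.diag(W.sum(axis=1)) - W

def dual_regularized(X, c, lam=1.0, mu=1.0, iters=50, seed=0):
    """Alternately minimize  ||X - F G^T||_F^2 + lam tr(F^T Lx F) + mu tr(G^T Lf G).

    F (n x c) acts as a soft label matrix for the n data points,
    G (d x c) as a soft label matrix for the d features; the fitting
    term ||X - F G^T||_F^2 couples the two, mimicking the duality idea.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Lx = knn_laplacian(X)      # geometry of the data points (rows of X)
    Lf = knn_laplacian(X.T)    # geometry of the features (columns of X)
    F = rng.random((n, c))
    G = rng.random((d, c))
    for _ in range(iters):
        # dJ/dF = 0  =>  lam*Lx F + F (G^T G) = X G   -- a Sylvester equation.
        # With random init, G^T G is positive definite almost surely, so the
        # solution is unique even though Lx has a zero eigenvalue.
        F = solve_sylvester(lam * Lx, G.T @ G, X @ G)
        # dJ/dG = 0  =>  mu*Lf G + G (F^T F) = X^T F
        G = solve_sylvester(mu * Lf, F.T @ F, X.T @ F)
    return F, G
```

Because each subproblem is a convex quadratic solved exactly, the objective is non-increasing across iterations, mirroring (in simplified form) the convergence guarantee of the paper's alternating scheme. In the transductive setting, the labeled points would additionally constrain the corresponding rows of F, and predictions for unlabeled points would be read off as the argmax over the columns of F.
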

Keywords

Nonnegative Matrix Factorization · Feature Graph · Local Linear Embedding · Graph Regularizer · Transductive Learning


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Quanquan Gu¹
  • Jie Zhou¹

  1. State Key Laboratory on Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Automation, Tsinghua University, Beijing, China