
Parallel Training of An Improved Neural Network for Text Categorization

Published in: International Journal of Parallel Programming

Abstract

This paper studies parallel training of an improved neural network for text categorization. With the explosive growth in the amount of digital information available on the Internet, the text categorization problem has become increasingly important, especially now that millions of mobile devices connect to the Internet. The improved back-propagation neural network (IBPNN) is an efficient approach to classification problems that overcomes the limitations of the traditional BPNN. In this paper, we use parallel computing to speed up the IBPNN training process. The parallel IBPNN algorithm for text categorization is implemented on a Sun cluster with 34 nodes (processors). The communication time and speedup of the parallel IBPNN are studied for various numbers of nodes. Experiments are conducted on several data sets, and the results show that the parallel IBPNN, combined with the SVD technique, achieves fast computation and high text categorization accuracy.
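The abstract describes two ingredients: SVD-based dimensionality reduction of the term-document matrix, and data-parallel training in which each cluster node computes gradients on its shard of the documents and the results are combined. The paper's exact IBPNN update rules (e.g. its momentum and learning-rate improvements) are not reproduced in this page, so the sketch below illustrates only the general pattern with a plain one-hidden-layer BPNN on toy data; all array sizes and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy term-document matrix: 8 documents x 20 terms, 2 classes (one-hot).
X = rng.random((8, 20))
y = np.eye(2)[np.array([0, 1, 0, 1, 0, 1, 0, 1])]

# SVD dimensionality reduction: project documents onto the top-k
# right singular vectors of the term-document matrix.
k = 4
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_red = X @ Vt[:k].T  # shape (8, k)

def forward(W1, W2, Xb):
    H = np.tanh(Xb @ W1)                 # hidden layer
    O = 1.0 / (1.0 + np.exp(-(H @ W2)))  # sigmoid output layer
    return H, O

def shard_gradient(W1, W2, Xb, yb):
    """Squared-error gradient on one data shard (one 'node')."""
    H, O = forward(W1, W2, Xb)
    dO = (O - yb) * O * (1 - O)
    gW2 = H.T @ dO
    dH = (dO @ W2.T) * (1 - H ** 2)
    gW1 = Xb.T @ dH
    return gW1, gW2

W1 = rng.normal(scale=0.5, size=(k, 6))
W2 = rng.normal(scale=0.5, size=(6, 2))

n_nodes, lr = 4, 0.5
for epoch in range(200):
    # Each "node" computes the gradient on its shard; the master sums
    # the shard gradients (in a real cluster this is the communication
    # step, e.g. an MPI reduce) and applies one weight update.
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for Xb, yb in zip(np.array_split(X_red, n_nodes),
                      np.array_split(y, n_nodes)):
        g1, g2 = shard_gradient(W1, W2, Xb, yb)
        gW1 += g1
        gW2 += g2
    W1 -= lr * gW1 / len(X_red)
    W2 -= lr * gW2 / len(X_red)

_, out = forward(W1, W2, X_red)
```

Because the batch gradient is a sum over training examples, summing the shard gradients reproduces the full-batch gradient exactly, which is why this scheme scales until per-epoch communication cost dominates, the trade-off the paper measures against node count.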



Acknowledgments

This work was supported by NSERC (Natural Sciences and Engineering Research Council of Canada) and CFI (Canada Foundation for Innovation).

Author information

Correspondence to Man Lin.


Cite this article

Li, C.H., Yang, L.T. & Lin, M. Parallel Training of An Improved Neural Network for Text Categorization. Int J Parallel Prog 42, 505–523 (2014). https://doi.org/10.1007/s10766-013-0245-x
