
Pattern Analysis and Applications, Volume 21, Issue 1, pp 233–247

A text representation model using Sequential Pattern-Growth method

  • Suraya Alias
  • Siti Khaotijah Mohammad
  • Gan Keng Hoon
  • Tan Tien Ping
Short paper

Abstract

Text representation is an essential task that transforms text input into features that can later be used for further Text Mining and Information Retrieval tasks. The most commonly used text representation models are the Bag-of-Words (BOW) and N-gram models. Nevertheless, these models have known issues that merit investigation: inaccurate semantic representation of text and the high dimensionality that results from combining words into terms. A pattern-based model named Frequent Adjacent Sequential Pattern (FASP) is introduced to represent text using a set of adjacent word sequences that occur frequently across the document collection. The purpose of this study is to discover the similarity of textual patterns between documents, which can later be converted into a set of rules describing the main news event. FASP is based on the Pattern-Growth divide-and-conquer strategy; the main difference between FASP and the prior technique lies in the Pattern Generation phase. The approach is tested against the BOW and N-gram text representation models on Malay- and English-language news datasets with different term weightings in the Vector Space Model (VSM). The findings demonstrate that the FASP model performs promisingly in finding similarities between documents, with an average vector size reduction of 34% against BOW and 77% against the N-gram model on the Malay dataset. Results on the English dataset are consistent, indicating that the FASP approach is also language independent.
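The article itself does not include code. As a rough illustration of the idea summarised above, mining adjacent word sequences that recur across a document collection and comparing documents over those patterns in a vector space, the following Python sketch uses a simple document-frequency scan over contiguous word sequences instead of the paper's Pattern-Growth projection, and binary weighting instead of the term weightings the paper evaluates. All names (adjacent_sequences, mine_frequent_patterns, to_vector, cosine) and the minimum-support threshold are illustrative assumptions, not the authors' FASP implementation.

# Minimal sketch, assuming whitespace-tokenised documents: mine contiguous word
# sequences that occur in at least `min_support` documents, represent each
# document as a binary vector over those patterns, and compare documents with
# cosine similarity. This is not the authors' FASP algorithm.
from collections import defaultdict
import math

def adjacent_sequences(tokens, max_len=3):
    # Yield every contiguous word sequence of length 1..max_len in a token list.
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def mine_frequent_patterns(docs, min_support=2, max_len=3):
    # Keep patterns whose document frequency is at least min_support.
    doc_freq = defaultdict(set)
    for doc_id, tokens in enumerate(docs):
        for pattern in set(adjacent_sequences(tokens, max_len)):
            doc_freq[pattern].add(doc_id)
    return sorted(p for p, ids in doc_freq.items() if len(ids) >= min_support)

def to_vector(tokens, patterns, max_len=3):
    # Binary presence vector of one document over the mined patterns.
    present = set(adjacent_sequences(tokens, max_len))
    return [1.0 if p in present else 0.0 for p in patterns]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    docs = [
        "the flood hit the east coast last night".split(),
        "heavy rain caused a flood on the east coast".split(),
        "election results were announced last night".split(),
    ]
    patterns = mine_frequent_patterns(docs, min_support=2)
    vectors = [to_vector(d, patterns) for d in docs]
    print("frequent adjacent patterns:", patterns)
    print("sim(doc0, doc1) =", round(cosine(vectors[0], vectors[1]), 3))
    print("sim(doc0, doc2) =", round(cosine(vectors[0], vectors[2]), 3))

The actual FASP approach grows patterns through projected databases in the Pattern-Growth manner and evaluates several term weightings in the VSM; this sketch only mirrors the input and output shape of that pipeline.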

Keywords

Text representation · Pattern-Growth · Sequential Pattern Mining · Document similarity · Malay language

Notes

Acknowledgement

This work is supported by Universiti Sains Malaysia (USM) under the Research University (RU) Grant, project number 1001/PKOMP/811295.


Copyright information

© Springer-Verlag London 2017

Authors and Affiliations

  • Suraya Alias (1)
  • Siti Khaotijah Mohammad (2)
  • Gan Keng Hoon (2)
  • Tan Tien Ping (2)
  1. Faculty of Computing and Informatics, Universiti Malaysia Sabah (UMS), Kota Kinabalu, Malaysia
  2. School of Computer Sciences, Universiti Sains Malaysia (USM), Gelugor, Malaysia
