Abstract
Feature filtering aims to identify useful and relevant features in order to improve machine learning performance, reduce computational complexity, and reveal internal information interactions. We employ several popular filtering criteria as meta-dimensions for constructing a feature space in which a word or a document can be represented with significantly reduced dimensionality. Experimental results show that the proposed meta-feature representation requires no extra pre-training resources to derive word embeddings, and that it outperforms traditional frequency-based and learning-based embeddings on the task of sentiment analysis.
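To make the abstract's idea concrete, the following is a minimal sketch, not the authors' implementation: each word is scored against each class label by a set of feature-filtering criteria, and the resulting (criterion, class) scores are stacked into a low-dimensional vector. Chi-square and pointwise mutual information (PMI) are assumed here as example criteria; the paper's exact metric set may differ, and all function and variable names are hypothetical.

```python
import numpy as np

def meta_feature_embeddings(docs, labels, vocab):
    """Build word vectors whose dimensions are feature-filtering scores.

    Each (criterion, class) pair becomes one meta-dimension; chi-square
    and PMI stand in for the filtering criteria used in the paper.
    """
    classes = sorted(set(labels))
    idx = {w: i for i, w in enumerate(vocab)}
    V, K, N = len(vocab), len(classes), len(docs)

    # Document-frequency counts: A[w, c] = #docs of class c containing w.
    A = np.zeros((V, K))
    for doc, y in zip(docs, labels):
        c = classes.index(y)
        for w in set(doc):
            if w in idx:
                A[idx[w], c] += 1

    n_c = np.array([labels.count(c) for c in classes], dtype=float)  # docs per class
    df = A.sum(axis=1)                        # docs containing w, any class
    B = df[:, None] - A                       # w present, other classes
    C = n_c[None, :] - A                      # w absent, class c
    D = N - df[:, None] - n_c[None, :] + A    # w absent, other classes

    # Chi-square association between word presence and class membership.
    chi2 = N * (A * D - C * B) ** 2 / ((A + C) * (B + D) * (A + B) * (C + D) + 1e-12)

    # PMI between word and class, with add-one smoothing.
    pmi = np.log((A + 1) * N / ((df[:, None] + 1) * n_c[None, :]))

    # One word vector = all criteria stacked: dimensionality is
    # (#criteria x #classes), far below vocabulary size.
    return np.hstack([chi2, pmi])

# Toy usage on a two-class sentiment corpus.
docs = [["good", "movie"], ["bad", "movie"], ["good", "plot"], ["bad", "plot"]]
labels = ["pos", "neg", "pos", "neg"]
vocab = ["good", "bad", "movie", "plot"]
E = meta_feature_embeddings(docs, labels, vocab)
print(E.shape)  # (4, 4): 2 criteria x 2 classes per word
```

Under these assumptions, a document vector could be obtained by, for example, averaging the vectors of its words; the representation requires no pre-training, since every dimension is computed directly from corpus statistics.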
Acknowledgement
This research was supported by the National Social Science Foundation of China (Grant No. 17BYY119) and the Humanities and Social Sciences Foundation of the Ministry of Education of China (Grant No. 15YJA740054).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Yang, D., Yin, Y., Han, T., Ma, H. (2019). Using Feature Filtering Metrics as Meta-dimensions in Constructing Distributional Representations. In: Liu, J., Bailey, J. (eds.) AI 2019: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11919. Springer, Cham. https://doi.org/10.1007/978-3-030-35288-2_27
DOI: https://doi.org/10.1007/978-3-030-35288-2_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-35287-5
Online ISBN: 978-3-030-35288-2
eBook Packages: Computer Science, Computer Science (R0)