1 Introduction

Character n-grams are handcrafted features which widely serve as discriminative features in text categorization [2], authorship attribution [3], authorship verification [5], plagiarism detection [9, 19], spam filtering [6], native language identification of a text's author [8], discriminating between language varieties [11], and many other applications.

They also help in generating good word embeddings for unknown words, thus improving classification performance in tasks based on informal texts, where a large percentage of unknown words occurs, e.g., in sentiment analysis [1, 21]. Finally, character n-grams gave rise to character n-gram graphs [4], which have found applications in topic categorization of news, blog, and Twitter data, as well as in automatic evaluation of document summaries.

The primary advantage of character n-grams is language independence [12], i.e., the effort of porting a feature extractor and a classifier from one language to another is negligible.

Character n-grams are recognized for their surprising effectiveness in authorship attribution, outperforming content words on blog data and nearly matching them on email and classic literature corpora [7]; indeed, they have proven to be the single most effective type of feature in this task [7]. Moreover, the introduction of typed character n-grams, i.e., categories and supercategories of character n-grams, has led to improvements in authorship attribution compared to traditional n-grams [16].

The aim of this paper is to extend the research of [16] and answer the question of whether typed n-grams are as effective as features in author profiling and sentiment analysis as they are in authorship attribution.

Classification based on character n-grams, whether typed or untyped, typically introduces a very large number of features. A solution to this problem is distributed processing: for example, author profiling experiments with a large number of word n-gram features were performed in the MapReduce framework [10], and in [18] documents from the English Wikipedia corpus were classified by topic with the newer Apache Spark framework. While the authors of [18] claim their experiments to be the first implementation of a text categorization system on Apache Spark in Python using the NLTK framework, our experiments are performed with Spark on six corpora, including the approximately 150 times larger PAN-AP-13 corpus [13] with up to 8,464,237 features and The Blog Authorship Corpus with up to 11,334,188 features.

By comparison, the largest work on author profiling [17] considered a larger amount of data, involving 15.4 million messages and 700 million instances of words, phrases, etc.

Thus, we also examine whether distributing preprocessing and profile classification into smaller subtasks executed on many cores and nodes is an efficient scheme in a scenario with a high number of features and larger corpora, using Apache Spark.

2 Typed n-grams

We briefly recall the notion of typed character n-grams (in short, typed n-grams) [16]. The category and supercategory of an n-gram depend on its content and position within a word or sentence. We can distinguish between the affix, word and punct supercategories, reflecting morpho-syntax, document topic, and author's style, respectively. Within each supercategory, we can further distinguish fine-grained categories. Within the affix supercategory, the prefix and suffix categories denote n-grams that are proper prefixes and proper suffixes of words, while the space-prefix and space-suffix categories denote n-grams beginning and ending with a space, respectively. Categories in the word supercategory (whole-word, mid-word, multi-word) are assigned to n-grams covering an entire word, the non-affix part of a word, or spanning multiple words, respectively. The specific category of the punct supercategory (beg-punct, mid-punct, end-punct) is assigned to n-grams containing one or more punctuation characters. Examples of typed n-grams arising from the sentence The actors wanted to see if the pact seemed like an old-fashioned one. are shown in Table 1 – their detailed description can be found in [16].

Table 1. Examples of typed character n-grams of different categories for \(n=3\). Character n-grams are highlighted; the remaining characters (in black) denote their context. Character \(_\sqcup \) denotes space. Based on examples from [16]
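As an illustration, the following Python sketch (our own simplification, not the reference implementation of [16]) assigns a category to every character 3-gram of a text using simple rules based on surrounding spaces and punctuation; the precedence rules in [16] differ in some edge cases.

```python
import string

PUNCT = set(string.punctuation)

def categorize(gram: str, text: str, pos: int, n: int = 3) -> str:
    """Assign a (simplified) typed n-gram category in the spirit of [16]."""
    # punct supercategory: the n-gram contains punctuation characters
    if any(c in PUNCT for c in gram):
        if gram[0] in PUNCT:
            return "beg-punct"
        if gram[-1] in PUNCT:
            return "end-punct"
        return "mid-punct"
    # n-grams containing a space
    if " " in gram:
        if gram[0] == " ":
            return "space-prefix"
        if gram[-1] == " ":
            return "space-suffix"
        return "multi-word"
    # n-gram lies entirely inside one word: inspect surrounding characters
    before = text[pos - 1] if pos > 0 else " "
    after = text[pos + n] if pos + n < len(text) else " "
    if before == " " and after == " ":
        return "whole-word"
    if before == " ":
        return "prefix"
    if after == " ":
        return "suffix"
    return "mid-word"

def typed_ngrams(text: str, n: int = 3):
    """Yield (category, n-gram) pairs for every character n-gram of the text."""
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        yield categorize(gram, text, i, n), gram

# The sentence used for the examples in Table 1
sentence = "The actors wanted to see if the pact seemed like an old-fashioned one."
features = [f"{cat}|{gram}" for cat, gram in typed_ngrams(sentence, 3)]
```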

3 Datasets

In our experiments with n-grams, we examined three problems on six datasets: authorship attribution (CCAT_50), author profiling (PAN-AP-13, the Blog author gender classification data set, The Blog Authorship Corpus), and sentiment analysis (Sentiment scale dataset v1.0, Stanford Sentiment Treebank). Table 2 briefly characterizes the evaluated datasets.

Table 2. Comparison of evaluated datasets

Figure 1 shows the proportions of categories of typed n-grams in the English part of the PAN-AP-13 corpus. We can observe that n-grams of the multi-word and mid-punct categories together constitute more than half of all typed n-grams in PAN-AP-13. Figure 2 presents the number of different n-grams depending on the n-gram length. By comparison, the numbers of n-gram tokens in the training, validation and test sets were approximately 1,030,960,000, 58,760,000 and 77,190,000, respectively.

Fig. 1. Proportions of n-gram categories in the English part of the PAN-AP-13 corpus

Fig. 2. Number of different character n-grams in the English part of the PAN-AP-13 corpus

4 Experiments and Results

In the experiments with PAN-AP-13, corpus preprocessing involved rejecting only a few texts due to unrecognized encoding, and removing HTML tags and superfluous whitespace. Unknown tokens in the validation or test set were omitted.

CCAT_50 preprocessing followed the procedure from [16] and consisted of removal of citations and authors’ signatures at the end of articles. Typed n-grams occurring at least five times in the dataset were taken into account as features.

Preprocessing of the remaining datasets consisted of removing spurious whitespace characters and URL addresses.

For PAN-AP-13 we adopted the predefined split into training, validation and test sets. Two classifiers were compared: multinomial Naïve Bayes (with and without feature normalization) and a linear SVM based on the OWLQN solver, both from the Apache Spark MLlib library.
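As a rough sketch of such a Spark setup (not our exact pipeline; input columns, paths and parameter values are placeholders), both classifiers are available as estimators in the DataFrame-based pyspark.ml API, where LinearSVC is trained with an OWLQN-based solver:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import CountVectorizer, Normalizer
from pyspark.ml.classification import NaiveBayes, LinearSVC

spark = SparkSession.builder.appName("pan-ap-13-profiling").getOrCreate()

# Assumed input: one row per document with a precomputed array of typed
# n-gram strings ("ngrams") and a numeric label ("label"); names and the
# path are placeholders.
train = spark.read.parquet("train.parquet")

vectorizer = CountVectorizer(inputCol="ngrams", outputCol="counts")
normalizer = Normalizer(inputCol="counts", outputCol="features", p=2.0)

# Multinomial Naive Bayes; "smoothing" plays the role of alpha.
nb = NaiveBayes(featuresCol="features", labelCol="label",
                modelType="multinomial", smoothing=1.0)

# Linear SVM trained with OWLQN; regParam roughly corresponds to the
# regularization weight C and maxIter to k. LinearSVC handles binary
# labels only, so multi-class tasks need a OneVsRest wrapper.
svm = LinearSVC(featuresCol="counts", labelCol="label",
                regParam=0.1, maxIter=100)

nb_model = Pipeline(stages=[vectorizer, normalizer, nb]).fit(train)
svm_model = Pipeline(stages=[vectorizer, normalizer, svm]).fit(train)
```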

The remaining datasets were evaluated with nested cross-validation with \(k=5\) [14]. We compared three classifiers: decision trees, Naïve Bayes (multinomial and complement variants) and a linear SVM, all from the scikit-learn library.
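A minimal sketch of the nested cross-validation scheme with scikit-learn, assuming plain (untyped) character n-gram counts and a linear SVM; the texts, labels and parameter grid below are placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus; in our experiments the documents come from the datasets in Table 2.
texts = ["first document ...", "second document ...", "third document ..."] * 10
labels = [0, 1, 0] * 10

pipeline = Pipeline([
    ("ngrams", CountVectorizer(analyzer="char", ngram_range=(3, 3), min_df=5)),
    ("svm", LinearSVC()),
])
param_grid = {"svm__C": [0.1, 1, 10]}

# Inner loop selects hyperparameters, outer loop estimates accuracy on held-out folds.
inner = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
outer_scores = cross_val_score(inner, texts, labels, cv=5, scoring="accuracy")
print(outer_scores.mean())
```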

Fig. 3. Accuracy of age interval recognition depending on the length of typed n-grams, obtained on the PAN-AP-13 validation set

Fig. 4. Accuracy of sex recognition depending on the length of typed n-grams, obtained on the PAN-AP-13 validation set

Fig. 5. Accuracy of joint profile recognition depending on the length of typed n-grams, obtained on the PAN-AP-13 validation set

Table 3 presents accuracy of author profile predictions for age, sex and joint profile, evaluated on the PAN-AP-13 validation set. Parameter C denotes the regularization weight in the SVM cost function, k denotes the maximal number of iterations of the SVM solver and \(\alpha \) is the smoothing parameter in the Naïve Bayes classification. Naïve Bayes was used with n-gram normalization.

Table 4 shows the corresponding accuracies of author profiling obtained on the PAN-AP-13 test set. The obtained results outperform all solutions submitted to the PAN-AP'13 task, which often used sophisticated features of various kinds. It is interesting to compare our outcomes with the results obtained in [10]: on the same corpus, their Naïve Bayes classifier with word n-gram features achieved a profiling accuracy of 42.57%, while conventional character n-gram features gave only 31.20% accuracy.

Table 3. Prediction accuracy of sex and age of author on the PAN-AP-13 validation set, [%]
Table 4. Accuracy of best models on the PAN-AP-13 test set, [%]

Figures 3, 4 and 5 present the accuracy of age, sex and joint profile recognition using typed n-grams as features, as a function of the n-gram length. Typed n-gram features of all categories were included in the classification.

Usually, n-grams with \(n=3\) are considered in the literature [16]. Our studies show that it is beneficial to consider longer n-grams, with \(n=4\) or even \(n=5\). Using vargrams (e.g., 2-grams and 3-grams as one feature set, not shown in the figures) is not beneficial, as they gave results averaged over the n-grams with fixed n.
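The difference between fixed-length n-grams and vargrams can be illustrated with scikit-learn's CountVectorizer, whose ngram_range parameter pools all lengths in the given range into one feature space (a sketch with placeholder input, using untyped n-grams):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["The actors wanted to see if the pact seemed like an old-fashioned one."]

# Fixed-length character n-grams (here n = 4)
fixed = CountVectorizer(analyzer="char", ngram_range=(4, 4))
# "Vargrams": 2-grams and 3-grams pooled into a single feature space
vargrams = CountVectorizer(analyzer="char", ngram_range=(2, 3))

print(len(fixed.fit(docs).vocabulary_), len(vargrams.fit(docs).vocabulary_))
```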

If time is not an issue, the choice of SVM over Naïve Bayes is preferred; this is consistent with [20], which advises SVM for the classification of longer texts and Naïve Bayes for shorter texts.

The impact of feature normalization on Naïve Bayes is not clear; thus, no recommendation can be formulated. While it improves the accuracy of age and joint profile classification, its effect on sex classification is negative. For feature scaling with SVM, standardization is always preferred over normalization [15], and this is the way the SVM implementation from MLlib works.

Impact of n-gram Categories. Results in this subsection are reported for multinomial Naïve Bayes with feature normalization and 5-grams. Naïve Bayes was chosen due to its better time performance compared to SVM. The first experiment in this part examined the impact of n-gram categories on profiling accuracy. Figures 6 and 7 show the accuracies for each of the 10 categories. Additionally, classification results are shown for n-grams with no distinguished categories (no categories, i.e., traditional, untyped n-grams) and for features where n-grams of all categories are taken into account. We observe that, compared to untyped n-grams, using the whole context (all categories) increases accuracy, but the increase is tiny: 40.92% for typed n-grams vs. 40.43% for untyped n-grams. Typed n-grams of any single category are worse profile predictors than untyped n-grams.

The next experiment, shown in Fig. 8, looked into the discriminative power of supercategories. The profiling accuracies obtained for the all-supercategories and all-categories features are similar. The experiment confirms the findings for categories: compared to using a single supercategory, the accuracy gain achieved with all supercategories is tiny.

Because no single n-gram category outperformed untyped n-grams and n-grams of all categories achieved the highest accuracy, in the third experiment we considered custom categories (Fig. 9). The first custom category bundled the four most discriminative categories and the second bundled the nine most discriminative categories (i.e., all 10 categories except whole-word). Bundling more categories successively increases accuracy.
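A minimal sketch of how such bundles can be formed, assuming typed n-gram features are encoded as "category|gram" strings as in the extraction sketch given earlier; the feature counts below are hypothetical:

```python
# A hypothetical document-feature mapping using the "category|gram" convention.
doc_features = {
    "prefix|act": 2, "suffix|ors": 1, "mid-punct|d-f": 1,
    "space-prefix| th": 3, "whole-word|one": 1,
}

def bundle(features, categories):
    """Keep only typed n-grams whose category belongs to the chosen bundle."""
    return {f: c for f, c in features.items()
            if f.split("|", 1)[0] in categories}

# Example: restrict features to the affix and punct supercategories.
affix_punct = bundle(doc_features, {"prefix", "suffix", "space-prefix",
                                    "space-suffix", "beg-punct",
                                    "mid-punct", "end-punct"})
```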

Fig. 6. Impact of n-gram categories on profiling accuracy obtained on the PAN-AP-13 validation set (English)

Impact of Hyperparameters. Figure 10 presents the impact of SVM hyperparameters on author profiling accuracy. Forty-five evaluations of the SVM classifier for different settings of C and k were performed. We observe that the choice of hyperparameters may impact profiling accuracy dramatically: accuracy varies from 42.12% for (\(C=5\), \(k=5\)) to 21.07% for (\(C=15\), \(k=1000\)). In the case of the SVM classifier, choosing a good set of hyperparameters is much more important than the choice between typed and untyped n-grams.

Fig. 7. Impact of n-gram categories on profiling accuracy obtained on the PAN-AP-13 validation set (Spanish)

Fig. 8. Impact of n-gram supercategories on profiling accuracy obtained on the PAN-AP-13 validation set

Fig. 9. Impact of custom categories of n-grams on profiling accuracy obtained on the PAN-AP-13 validation set

Fig. 10. Impact of SVM hyperparameters on author profiling accuracy for PAN-AP-13

4.1 Further Experiments

We performed further experiments on five datasets from Table 2. First, we ran authorship attribution experiments on CCAT_50 following the setup defined in [16] (Table 5).

Table 6 presents the classification accuracy on the five datasets, obtained with untyped n-grams and all-categories typed n-grams for \(n=4\) and \(n=5\).

Across all datasets, typed character n-grams in most cases improve classification accuracy in comparison to untyped character n-grams. The accuracy gain is, however, tiny: from 0.75% to 1.48%.

The choice of the classifier is significant for classification with character n-grams. For all examined problems and datasets, SVM achieved higher accuracy than Naïve Bayes, with an accuracy gap of up to 18%.

We examined single-category and multiple-category n-grams. Single-category typed character n-grams differ in their predictive power depending on the category. Statistical tests on the Blog author gender dataset revealed that the differences in accuracy are statistically significant for some pairs of categories, but deeper research is needed in this area to confirm them and detect potential patterns.

Bundling more categories into typed n-grams usually results in increased accuracy. The exception was the Blog author gender classification data set, with the best results for the affix+punct supercategory. Our experiments showed that information about the document target label is distributed among character n-grams and their categories.

Table 5. Accuracy of authorship attribution on the CCAT_50 set, depending on used 3-gram features, [%], acc denotes accuracy, N is the number of features.
Table 6. Accuracy of untyped n-grams and all-categories typed n-grams on five datasets

The length of n-grams affects classification results, and its impact depends on the dataset and the used classifier. The highest accuracy for CCAT_50, used in authorship attribution, was achieved with typed 4-grams. For all remaining datasets, the best accuracy was achieved with typed 5-grams. In particular, for the Blog author gender classification dataset the highest accuracy, 71.60%, was obtained for typed 5-grams of the affix+punct supercategory (not shown in Table 6). These findings are in line with the results obtained for the PAN-AP-13 corpus. Our findings clearly contradict those of [16], where the authors state: "We chose \(n=3\) since our preliminary experiments found character 3-grams to be more effective than other higher level character n-grams."

When considering typed n-grams, the highest accuracy was achieved when bundling all categories, i.e., for all-categories typed n-grams. The only exception was the Blog author gender classification data set, where affix+punct typed n-grams achieved the highest accuracy. For all datasets, using typed character n-grams of a single category results in an accuracy drop in comparison to untyped character n-grams. Except for the Blog author gender classification data set, using single-supercategory n-grams also resulted in lower accuracy. The best results were achieved for the categories space-prefix, space-suffix and prefix, and for the supercategories affix and affix+punct.

Our experiments on the Blog author gender classification dataset show that character n-grams (whether typed or untyped) give accuracy higher than word n-grams by 1%–1.15%. The downside is that character n-grams produce a larger number of features than word n-grams.

Tf-idf weighting raises classification accuracy with n-grams by 2% to 4%. The exception is authorship attribution on the CCAT_50 dataset, where accuracy increased for n-grams with \(n=2\) and \(n=3\), while there was an accuracy drop for \(n=4\) and \(n=5\).

There is no clear pattern in the impact of feature normalization on accuracy. The best results were obtained with normalization according to the \(L_2\) norm (Footnote 1). With the remaining methods, StandardScaler and MaxAbsScaler, we observed suboptimal accuracy or even accuracy worse than with no normalization.
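For reference, a sketch of the weighting and scaling variants mentioned above, as exposed by scikit-learn (toy input; note that StandardScaler cannot center sparse matrices, so the mean is not removed):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.preprocessing import Normalizer, StandardScaler, MaxAbsScaler

docs = ["a small example text", "another small text", "yet another example"]
counts = CountVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(docs)

tfidf = TfidfTransformer().fit_transform(counts)      # tf-idf weighting
l2 = Normalizer(norm="l2").fit_transform(counts)      # per-document L2 normalization
maxabs = MaxAbsScaler().fit_transform(counts)         # per-feature scaling to [-1, 1]
std = StandardScaler(with_mean=False).fit_transform(counts)  # variance scaling only
```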

Finally, we performed a qualitative analysis and looked for the most important n-grams by inspecting the weights of the SVM classifier. First, we analysed author profiling on the Blog author gender classification data set. For men, the identified n-grams referred to the wife, other men (guys) and games. The most important n-grams used by women are related to family (love, husband, mum). The best n-grams found do not suggest that text style (e.g., punctuation) is important for the classifier. Next, we analysed authorship attribution for one particular author chosen from CCAT_50: the identified n-grams were fragments of names of cities, states or companies.
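A minimal sketch of this kind of inspection with scikit-learn (toy data standing in for the Blog author gender classification data set): the most negative and most positive coefficients of a linear SVM indicate the character n-grams most associated with each class.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy stand-in corpus with binary labels (0/1).
texts = ["my husband and i love our kids", "the guys played games all night"] * 20
labels = [0, 1] * 20

vec = CountVectorizer(analyzer="char", ngram_range=(4, 4))
X = vec.fit_transform(texts)
svm = LinearSVC().fit(X, labels)

feature_names = np.array(vec.get_feature_names_out())
order = np.argsort(svm.coef_[0])
print("most indicative of class 0:", feature_names[order[:10]])
print("most indicative of class 1:", feature_names[order[-10:]])
```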

5 Conclusions

The paper has shown, in three domains (authorship attribution, author profiling and sentiment analysis), that the choice of typed n-grams results in only a tiny increase in classification accuracy over traditional n-grams. Information about the author profile is distributed throughout all n-gram categories, and no single category can be recommended for classification. It is worth putting much more effort into effective hyperparameter optimization and model selection than into switching from n-grams to typed n-grams or to a particular category of typed n-grams.

Apache Spark allows for efficient classification with a very high number of features on large text corpora. The memory footprint is the most prohibitive aspect of such classification, and it precluded experiments with n-grams longer than 5.