
1 Introduction

Obtaining large amounts of annotated training images has become a bottleneck for the effective and efficient development and deployment of deep neural networks (DNN) in various computer vision tasks. The current practice relies heavily on manual annotations, ranging from in-house annotation of small amounts of images to crowdsourcing-based annotation of large amounts of images. However, the manual annotation approach is usually expensive, time-consuming, prone to human errors and difficult to scale when data are collected under different conditions or within different environments.

To the best of our knowledge, three approaches have been investigated to cope with the image annotation challenge in DNN training. The first approach is probably the easiest and most widely adopted: it augments training images with various label-preserving geometric transformations such as translation, rotation and flipping, as well as different intensity alteration operations such as blurring and histogram equalization [48]. The second approach is machine learning based and employs various semi-supervised and unsupervised learning techniques to create more annotated training images. For example, bootstrapping has been studied, which combines the traditional self-training and co-training with recent DNN training to search for more training samples from a large number of unannotated images [34, 45]. In recent years, unsupervised DNN models such as Generative Adversarial Networks (GAN) [6] have also been exploited to generate more annotated training images for DNN training [42].

The third approach is image synthesis based and has been widely investigated in the area of computer graphics for the purposes of education, design simulation, advertising, entertainment, etc. [9]. It creates new images by modelling the physical behaviors of light and energy in combination with different rendering techniques such as embedding objects of interest (OOI) into a set of “background images”. To make the synthesized images useful for DNN training, the OOI should be embedded in such a way that they look as natural as possible. At the same time, sufficient variations should be included to ensure that the learned representation is broad enough to capture most possible OOI appearances in real scenes.

We propose a novel image synthesis technique that aims to create a large amount of annotated scene text images for training accurate and robust scene text detection and recognition models. The proposed technique consists of three innovative designs as listed:

  1. It enables “semantic coherent” image synthesis by embedding texts at semantically sensible regions within the background image as illustrated in Fig. 1, e.g. scene texts tend to appear over the wall or table surface instead of the food or plant leaves. We achieve the semantic coherence by leveraging the semantic annotations of objects and image regions that have been created and are readily available in the semantic segmentation research, more details to be described in Sect. 3.1.

  2. It exploits visual saliency to determine the embedding locations within each semantically coherent region as illustrated in Fig. 1. Specifically, texts are usually placed at homogeneous regions in scenes for better visibility, and this can be captured well using visual saliency. The exploitation of saliency guidance helps to synthesize more natural-looking scene text images, more details to be discussed in Sect. 3.2.

  3. It designs a novel scene text appearance model that determines the color and brightness of source texts adaptively by learning from the features of real scene text images. This is achieved by leveraging the similarity between the neighboring background of texts in scene images and the embedding locations within the background images, more details to be discussed in Sect. 3.3.

Fig. 1. The proposed scene text image synthesis technique: Given background images and source texts to be embedded into the background images as shown in the left-side box, a semantic map and a saliency map are first determined which are then combined to identify semantically sensible and apt locations for text embedding. The color, brightness, and orientation of the source texts are further determined adaptively according to the color, brightness, and contextual structures around the embedding locations within the background image. Pictures in the right-side box show scene text images synthesized by the proposed technique. (Color figure online)

2 Related Work

Image Synthesis. Photorealistically inserting objects into images has been studied extensively as one means of image synthesis in the computer graphics research [4]. The target is to achieve insertion verisimilitude, i.e., the true likeness of the synthesized images, by controlling object size, object perspective (or orientation), environmental lighting, etc. For example, Karsch et al. [24] develop a semi-automatic technique that inserts objects into legacy photographs with photorealistic lighting and perspective.

In recent years, image synthesis has been investigated as a data augmentation approach for training accurate and robust DNN models when only a limited number of annotated images are available. For example, Jaderberg et al. [17] create a word generator and use the synthetic images to train text recognition networks. Dosovitskiy et al. [5] use synthetic flying chair images to train optical flow networks. Aldrian et al. [1] propose an inverse rendering approach for synthesizing a 3D structure of faces. Yildirim et al. [55] use CNN features trained on synthetic faces to regress face pose parameters. Gupta et al. [10] develop a fast and scalable engine to generate synthetic images of texts in scenes. On the other hand, most existing works do not fully consider semantic coherence, apt embedding locations and the appearance of embedded objects, which are critically important when applying the synthesized images to train DNN models.

Scene Text Detection. Scene text detection has been studied for years and has attracted increasing interest in recent years, as observed in a number of scene text reading competitions [22, 23, 36, 40]. Various detection techniques have been proposed, from those using hand-crafted features and shallow models [15, 16, 21, 28, 32, 46, 52] to the recent efforts that design different DNN models to learn text features automatically [10, 13, 19, 20, 47, 53, 56, 58, 59]. In addition, different detection approaches have been explored, including character-based systems [13, 15, 16, 20, 33, 46, 59] that first detect characters and then link the detected characters into words or text lines, word-based systems [10,11,12, 19, 26, 27, 60] that treat words as objects for detection, and very recent line-based systems [53, 57] that treat text lines as objects for detection. Some other approaches [37, 47] localize multiple fine-scale text proposals and group them into text lines, which also show excellent performance.

On the other hand, scene text detection remains a very open research challenge. This can be observed from the limited scene text detection performance over large-scale benchmarking datasets such as COCO-Text [49] and the RCTW-17 dataset [40], where the scene text detection performance is less affected by overfitting. One important factor that impedes the advance of recent scene text detection research is very limited training data. In particular, captured scene texts involve a tremendous amount of variation as texts may be printed in different fonts, colors and sizes and captured under different lightings, viewpoints, occlusions, background clutters, etc. A large amount of annotated scene text images is required to learn a comprehensive representation that captures the very different appearances of texts in scenes.

Scene Text Recognition. Scene text recognition has attracted increasing interest in recent years due to its numerous practical applications. Most existing systems aim to develop powerful character classifiers and some of them incorporate a language model, leading to state-of-the-art performance [2, 3, 7, 17, 18, 30, 35, 50, 54]. These systems perform character-level segmentation followed by character classification, and their performance is severely degraded by character segmentation errors. Inspired by the great success of recurrent neural networks (RNN) in handwriting recognition [8], RNNs have been studied for scene text recognition, which learn continuous sequential features from words or text lines without requiring character segmentation [38, 39, 43, 44]. On the other hand, most scene text image datasets such as ICDAR2013 [23] and ICDAR2015 [22] contain only a few hundred or thousand training images, which are too few to cover the very different appearances of texts in scenes.

3 Scene Text Image Synthesis

The proposed scene text image synthesis technique starts with two types of inputs, “Background Images” and “Source Texts”, as illustrated in columns 1 and 2 of Fig. 1. Given background images, the regions for text embedding can be determined by combining their “Semantic Maps” and “Saliency Maps” as illustrated in columns 3–4 of Fig. 1, where the “Semantic Maps” are available as ground truth in the semantic image segmentation research and the “Saliency Maps” can be determined using existing saliency models. The color and brightness of source texts can then be estimated adaptively according to the color and brightness of the determined text embedding regions as illustrated in column 5 of Fig. 1. Finally, “Synthesized Images” are produced by placing the rendered texts at the embedding locations as illustrated in column 6 of Fig. 1.
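For concreteness, the sketch below outlines this pipeline at a high level. It is a minimal illustration only: `saliency_fn` and `render_fn` are hypothetical stand-ins for the saliency model and the text renderer described in the following subsections, and the random location sampling is an assumption rather than our exact placement strategy.

```python
import numpy as np

def synthesize_image(background, semantic_map, source_text,
                     embeddable_labels, saliency_fn, render_fn):
    """Illustrative pipeline: combine a semantic map and a saliency map
    to pick an embedding location, then render the source text there."""
    # 1. Semantic coherence: keep only pixels whose label allows text embedding.
    coherent_mask = np.isin(semantic_map, list(embeddable_labels))

    # 2. Saliency guidance: keep only low-saliency (homogeneous) pixels.
    saliency = saliency_fn(background)
    low_saliency_mask = saliency < saliency.mean()

    candidate_mask = coherent_mask & low_saliency_mask
    if not candidate_mask.any():
        return None  # no sensible embedding location in this background

    # 3. Adaptive appearance + rendering at a sampled candidate location.
    ys, xs = np.nonzero(candidate_mask)
    idx = np.random.randint(len(ys))
    return render_fn(background, source_text, (xs[idx], ys[idx]))
```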

3.1 Semantic Coherence

Semantic coherence (SC) refers to the target that texts should be embedded at semantically sensible regions within the background images. For example, texts should be placed over the fence boards instead of sky or sheep head where texts are rarely spotted in real scenes as illustrated in Fig. 2. The SC thus helps to create more semantically sensible foreground-background pairing which is very important to the visual representations as well as object detection and recognition models that are learned/trained by using the synthesized images. To the best of our knowledge, SC is largely neglected in earlier works that synthesize images for better deep network model training, e.g. the recent work [10] that deals with a similar scene text image synthesis problem.

Fig. 2. Without semantic coherence (SC) as illustrated in (b), texts may be embedded at arbitrary regions such as the sky and the head of a sheep, which are rarely spotted in scenes as illustrated in (c). SC helps to embed texts at semantically sensible regions as illustrated in (d).

We achieve the semantic coherence by exploiting a large amount of semantic annotations of objects and image regions that have been created in the semantic image segmentation research. In particular, a number of semantic image segmentation datasets [14] have been created, each of which comes with a set of “ground truth” images that have been “semantically” annotated. The ground truth annotation divides an image into a number of objects or regions at the pixel level, where each object or region has a specific semantic annotation such as “cloud”, “tree”, “person”, “sheep”, etc. as illustrated in Fig. 2.

To exploit SC for semantically sensible image synthesis, all available semantic annotations within the semantic segmentation datasets [14] are first classified into two lists where one list consists of objects or image regions that are semantically sensible for text embedding and the other consists of objects or image regions not semantically sensible for text embedding. Given some source texts for embedding and background images with region semantics, the image regions that are suitable for text embedding can thus be determined by checking through the pre-defined list of region semantics.
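A minimal sketch of this lookup is given below. The label lists are illustrative placeholders rather than the actual lists used in our implementation, and the label-id mapping is assumed to be provided by the segmentation dataset.

```python
import numpy as np

# Hypothetical label lists; the actual lists used in our implementation
# are not reproduced here.
EMBEDDABLE = {"wall", "table", "road", "signboard", "building"}
NON_EMBEDDABLE = {"sky", "person", "sheep", "tree", "food"}

def embeddable_region_mask(semantic_map, label_names):
    """Return a boolean mask of pixels whose semantic label belongs to the
    pre-defined embeddable list.  `semantic_map` holds integer label ids and
    `label_names` maps ids to label strings, as provided by the dataset."""
    embeddable_ids = [i for i, name in label_names.items()
                      if name in EMBEDDABLE]
    return np.isin(semantic_map, embeddable_ids)
```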

3.2 Saliency Guidance

Not every location within the semantically coherent objects or image regions is suitable for scene text embedding. For example, it is more suitable to embed scene texts over the surface of the yellow machine than across the two neighboring surfaces as illustrated in Figs. 3c and 3d. Certain mechanisms are needed to further determine the exact scene text embedding locations within semantically coherent objects or image regions.

We exploit human visual attention and scene text placement principles to determine the exact scene text embedding locations. To attract human attention, scene texts are usually placed around homogeneous regions such as signboards to create good contrast and visibility. With such observations, we make use of visual saliency as guidance to determine the exact scene text embedding locations. In particular, homogeneous regions usually have lower saliency as compared with highly contrasted and cluttered ones. Scene texts can thus be placed at locations that have low saliency within the semantically coherent objects or image regions as described in the last subsection.

Fig. 3. Without saliency guidance (SG) as illustrated in (b), texts may be embedded across object boundaries as illustrated in (c), which is rarely spotted in scenes. SG thus helps to embed texts at the right locations within the semantically sensible regions as illustrated in (d).

Quite a number of saliency models have been reported in the literature [41]. We adopt the saliency model in [29] due to its good capture of local and global contrast. Given an image, the saliency model computes a saliency map as illustrated in Fig. 3, where homogeneous image regions usually have lower saliency. The locations that are suitable for text embedding can thus be determined by thresholding the computed saliency map. In our implemented system, a global threshold is used which is simply estimated as the mean of the computed saliency map. As Fig. 3 shows, the saliency guidance helps to embed texts at the right locations within the semantically sensible regions. The use of saliency guidance further helps to improve the verisimilitude of the synthesized images as well as the learned visual representation of detection and recognition models.
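The thresholding step can be summarized by the short sketch below. It assumes the saliency map has already been computed by the adopted model and is illustrative rather than a reproduction of our implementation.

```python
import numpy as np

def low_saliency_mask(saliency_map):
    """Threshold the saliency map at its mean; pixels below the mean are
    treated as homogeneous and therefore suitable for text embedding."""
    return saliency_map < saliency_map.mean()

def embedding_candidates(semantic_mask, saliency_map):
    """Intersect the semantically coherent mask (Sect. 3.1) with the
    low-saliency mask to obtain the final embedding candidates."""
    return semantic_mask & low_saliency_mask(saliency_map)
```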

3.3 Adaptive Text Appearance

Visual contrast, as observed via low-level edges and corners, is a crucial feature when training object detection and recognition models. Texts in scenes are usually presented as linear strokes of different sizes and orientations which are rich in contrast-induced edges and corners. Effective control of the contrast between source texts and background images is thus very important to the usefulness of the synthesized images when applying them to train scene text detection and recognition models.

We design an adaptive contrast technique that controls the color and brightness of source texts according to what they look like in real scenes. The idea is to search for scene text image patches (readily available in a large amount of scene text annotations within existing datasets) whose background has similar color and brightness to the determined background regions as described in Sects. 3.1 and 3.2. The color and brightness of the source texts can then be determined by referring to the color and brightness of text pixels within the searched scene text image patches.

The scene text image patches are derived from the scene text annotations as readily available in existing datasets such as ICDAR2013 [23]. For each text annotation, a HoG (histogram of oriented gradient) feature \(H_b\) is first built by using the background region surrounding the text annotation under study. The mean and standard deviation of the color and brightness of the text pixels within the annotation box are also determined in the Lab color space, as denoted by (\(\mu _L\), \(\sigma _L\)), (\(\mu _a\), \(\sigma _a\)) and (\(\mu _b\), \(\sigma _b\)). The background HoG \(H_b\) and the text color and brightness statistics (\(\mu _L\), \(\sigma _L\)), (\(\mu _a\), \(\sigma _a\)) and (\(\mu _b\), \(\sigma _b\)) of a large amount of scene text patches thus form a list of pairs as follows:

$$\begin{aligned} P = \big \{H_{b_1} : (\mu _{L_1}, \sigma _{L_1}, \mu _{a_1}, \sigma _{a_1}, \mu _{b_1}, \sigma _{b_1}), \cdots , H_{b_i} : (\mu _{L_i}, \sigma _{L_i}, \mu _{a_i}, \sigma _{a_i}, \mu _{b_i}, \sigma _{b_i}), \cdots \big \} \end{aligned}$$
(1)

The \(H_b\) in Eq. 1 will be used as the index of the annotated scene text image patch, and (\(\mu _L\), \(\sigma _L\)), (\(\mu _a\), \(\sigma _a\)) and (\(\mu _b\), \(\sigma _b\)) will be used as a guidance to set the color and brightness of the source text. For each determined background patch (suitable for text embedding) as illustrated in Fig. 4, its HoG feature \(H_s\) can be extracted and the scene text image patch that has the most similar background can thus be determined based on the similarity between \(H_s\) and \(H_b\). The color and brightness of the source text can thus be determined by taking the corresponding (\(\mu _L\), \(\mu _a\), \(\mu _b\)) plus random variations around (\(\sigma _L\), \(\sigma _a\), \(\sigma _b\)).
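The sketch below illustrates this matching scheme. It assumes background patches are grayscale and resized to a common size so their HOG descriptors are comparable; the HOG parameters and the uniform sampling of random variations are illustrative choices rather than the exact settings of our implementation.

```python
import numpy as np
from skimage.feature import hog

def build_appearance_index(annotated_patches):
    """Build the list of (H_b, Lab statistics) pairs of Eq. 1.  Each entry of
    `annotated_patches` provides the grayscale background region around a text
    annotation and the Lab values of its text pixels."""
    index = []
    for bg_region, text_lab_pixels in annotated_patches:
        h_b = hog(bg_region, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        mu = text_lab_pixels.mean(axis=0)     # (mu_L, mu_a, mu_b)
        sigma = text_lab_pixels.std(axis=0)   # (sigma_L, sigma_a, sigma_b)
        index.append((h_b, mu, sigma))
    return index

def pick_text_color(target_bg, index, rng=np.random):
    """Pick the Lab color/brightness of the source text: match the target
    background's HOG against the indexed backgrounds (nearest neighbour in
    Euclidean distance) and sample around the stored statistics."""
    h_s = hog(target_bg, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    dists = [np.linalg.norm(h_s - h_b) for h_b, _, _ in index]
    _, mu, sigma = index[int(np.argmin(dists))]
    return mu + rng.uniform(-1.0, 1.0, size=3) * sigma
```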

The proposed technique also controls the orientation of the source texts adaptively according to certain contextual structures lying around the embedding locations within the background image. In particular, certain major structures (such as the table borders and the boundary between two connected wall surfaces as illustrated in Fig. 4) as well as their orientation can be estimated from the image gradient. The orientation of the source texts can then be determined by aligning with the major structures detected around the scene text embedding locations as illustrated in Fig. 4. Beyond the text alignment, the proposed technique also controls the font of the source texts by randomly selecting from a pre-defined font list as illustrated in Fig. 4.
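The gradient-based orientation estimate can be sketched as below. This is one simple variant under our own assumptions (a weighted gradient-direction histogram), not necessarily the exact estimator used in our implementation.

```python
import numpy as np
import cv2

def dominant_orientation(gray_patch):
    """Estimate the dominant structure orientation (in degrees, 0-180) around
    an embedding location from image gradients; the text baseline can then be
    aligned with this orientation."""
    gx = cv2.Sobel(gray_patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_patch, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    # Gradient direction is perpendicular to the edge, so rotate by 90 degrees.
    angles = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    hist, edges = np.histogram(angles, bins=36, range=(0, 180),
                               weights=magnitude)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])
```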

Fig. 4. Adaptive text appearance (ATA): The color and brightness of source texts are determined adaptively according to the color and brightness of the background image around the embedding locations as illustrated. The orientations of source texts are also adaptively determined according to the orientation of the contextual structures around the embedding locations. The ATA thus helps to produce more verisimilar text appearance as compared with random setting of text color, brightness, and orientation. (Color figure online)

4 Implementations

4.1 Scene Text Detection

We use an adapted version of EAST [60] to train all scene text detection models to be discussed in Sect. 5.2. EAST is a simple but powerful detection model that yields fast and accurate text detection in scene images. The model directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in the images. It utilizes a fully convolutional network (FCN) model that directly produces word- or text-line-level predictions, excluding unnecessary and redundant intermediate steps. Since the implementation of the original EAST is not available, we adopt an adapted implementation that uses ResNet-152 instead of PVANET [25] as the backbone network.
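As a rough illustration only (not the actual EAST nor our adapted implementation), a quadrilateral-regression detection head can be sketched as follows, with the backbone network omitted.

```python
import torch
import torch.nn as nn

class ToyQuadDetectionHead(nn.Module):
    """Toy EAST-style head: a shared feature map is mapped to a per-pixel
    text/non-text score and an 8-channel quadrilateral geometry
    (x, y offsets to the four corners).  Channel count is a placeholder."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.geometry = nn.Conv2d(in_channels, 8, kernel_size=1)

    def forward(self, features):
        # Returns (score map, geometry map) for dense per-pixel prediction.
        return torch.sigmoid(self.score(features)), self.geometry(features)
```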

4.2 Scene Text Recognition

For scene text recognition, we use the CRNN model [38] to train all scene text recognition models to be described in Sect. 5.3. The CRNN model consists of convolutional layers, recurrent layers and a transcription layer, which integrate feature extraction, sequence modelling and transcription into a unified framework. Different from most existing recognition models, the CRNN architecture is end-to-end trainable and can handle sequences of arbitrary lengths, involving no character segmentation. Moreover, it is not confined to any predefined lexicon and achieves superior recognition performance in both lexicon-free and lexicon-based scene text recognition tasks.
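For illustration, a heavily simplified CRNN-style model is sketched below; the layer sizes and depths are placeholders rather than those of the original CRNN, and the output is intended to be trained with a CTC loss.

```python
import torch
import torch.nn as nn

class MiniCRNN(nn.Module):
    """Simplified CRNN-style recognizer: convolutional feature extractor,
    bidirectional LSTM over the width dimension, per-timestep classifier."""
    def __init__(self, num_classes, img_height=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),  # keep width resolution for sequence
        )
        feat_h = img_height // 8
        self.rnn = nn.LSTM(256 * feat_h, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)  # num_classes includes CTC blank

    def forward(self, x):                      # x: (N, 1, H, W)
        f = self.cnn(x)                        # (N, C, H', W')
        n, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(n, w, c * h)  # sequence along width
        out, _ = self.rnn(f)
        return self.fc(out)                    # (N, W', num_classes) for nn.CTCLoss
```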

5 Experiments

We evaluate the effectiveness of the proposed image synthesis technique on a scene text detection task and a scene text recognition task. The evaluations are performed over five public datasets as discussed in the following subsections.

5.1 Datasets and Evaluation Metrics

The proposed technique is evaluated over five public datasets including ICDAR 2013 [23], ICDAR 2015 [22], MSRA-TD500 [52], IIIT5K [31] and SVT [50].

ICDAR 2013 dataset is obtained from the Robust Reading Challenge 2013. It consists of 229 training images and 233 test images that capture text on signboards, posters, etc. with word-level annotations. For the recognition task, there are 848 word images for training recognition models and 1095 word images for recognition model evaluation. We use this dataset for both scene text detection and scene text recognition evaluations.

ICDAR 2015 is a dataset of incidental scene text and consists of 1,670 images (17,548 annotated text regions) acquired using Google Glass. Incidental scene text refers to text that appears in the scene without the user having taken any prior action to capture it. We use this dataset for the scene text detection evaluation.

MSRA-TD500 dataset consists of 500 natural images (300 for training, 200 for test), which are taken from indoor and outdoor scenes using a pocket camera. The indoor images mainly capture signs, doorplates and caution plates while the outdoor images mostly capture guide boards and billboards with complex background. We use this dataset for the scene text detection evaluation.

IIIT5K dataset consists of 2000 training images and 3000 test images that are cropped from scene text and born-digital images. Each image is associated with a 50-word lexicon and a 1000-word lexicon. Each lexicon consists of the ground truth word and some randomly picked words. We use this dataset for the scene text recognition evaluation only.

SVT dataset consists of 249 street view images from which 647 word images are cropped. Each word image has a 50-word lexicon. We use this dataset for the scene text recognition evaluation only.

For the scene text detection task, we use the evaluation algorithm by Wolf et al. [51]. For the scene text recognition task, we perform evaluations based on the correctly recognized words (CRW) which can be calculated according to the ground truth transcription.
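For clarity, CRW can be computed as in the sketch below. The case-insensitive matching is an assumption made here for illustration; evaluation protocols differ in how they normalize case and punctuation.

```python
def correctly_recognized_words(predictions, ground_truths):
    """Correctly recognized words (CRW) as a percentage: a prediction counts
    as correct when it matches the ground-truth transcription."""
    assert len(predictions) == len(ground_truths)
    correct = sum(p.lower() == g.lower()
                  for p, g in zip(predictions, ground_truths))
    return 100.0 * correct / len(ground_truths)
```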

Table 1. Scene text detection recall (R), precision (P) and f-score (F) on the ICDAR2013, ICDAR2015 and MSRA-TD500 datasets, where “EAST” denotes the adapted EAST model as described in Sect. 4.1, “Real” denotes the original training images within the respective datasets, “Synth 1K” and “Synth 10K” denote 1K and 10K synthesized images by our method.
Table 2. Scene text detection performance on the ICDAR2013 dataset by using the adapted EAST model as described in Sect. 4.1, where “Synth” and “Gupta” denote images synthesized by our method and Gupta et al. [10] respectively, “1K” and “10K” denote the number of synthetic images used, “Random” means embedding texts at random locations, SC, SG and ATA refer to semantic coherence, saliency guidance, and adaptive text appearance.

5.2 Scene Text Detection

For the scene text detection task, the proposed image synthesis technique is evaluated over three public datasets: ICDAR2013, ICDAR2015 and MSRA-TD500. We synthesize images by catering to specific characteristics of the training images within each dataset in terms of text transcripts, text languages, text annotation methods, etc. Take the ICDAR2013 dataset as an example. The source texts are all in English and the embedding is at word level because almost all texts in ICDAR2013 are in English and annotated at word level. For MSRA-TD500, the source texts are instead a mixture of English and Chinese and the embedding is at text line level because MSRA-TD500 contains both English and Chinese texts with text-line-level annotations. In addition, the source texts are a mixture of texts from the respective training images and publicly available corpora. The number of embedded words or text lines is limited to a maximum of 5 for each background image since we have sufficient background images with semantic segmentation.

Table 1 shows experimental results using the adapted EAST model (denoted by EAST) as described in Sect. 4.1. For each dataset, we train a baseline model “EAST (Real)” using the original training images only, as well as two augmented models “EAST (Real+Synth 1K)” and “EAST (Real+Synth 10K)” that further include 1K and 10K of our synthesized images in training, respectively. As Table 1 shows, the scene text detection performance is improved consistently for all three datasets when synthesized images are included in training. In addition, the performance improvements become more significant when the number of synthesized images increases from 1K to 10K. In fact, the trained models outperform most state-of-the-art models when 10K synthesized images are used, and we can foresee further performance improvements when a larger amount of synthesized images is included in training. Furthermore, we observe that the performance improvements for the ICDAR2015 dataset are not as significant as for the other two datasets. The major reason is that the ICDAR2015 images are video frames captured by Google Glass cameras, many of which suffer from motion and/or out-of-focus blur, whereas our image synthesis pipeline does not include an image blurring function. We conjecture that the scene text detection models will perform better on the ICDAR2015 dataset if we incorporate image blurring into the image synthesis pipeline.

In particular, an f-score of 83.0 is obtained for the ICDAR2013 dataset when the model is trained using the original training images. The f-score is improved to 86.2 when 1K synthetic images are included, and further to 88.3 when 10K synthetic images are included in training. Similar improvements are observed for the ICDAR2015 dataset, where the f-score is improved from the baseline 79.7 to 80.5 and 81.9 when 1K and 10K synthetic images are included in training. For MSRA-TD500, an f-score of 73.4 is obtained when only the original 300 training images are used in model training. The f-score is improved to 75.4 and 78.6, respectively, when 1K and 10K synthetic images are included in training. This further verifies the effectiveness of the synthesized scene text images that are produced by our proposed technique.

We also perform an ablation study of the three proposed image synthesis designs: semantic coherence (SC), saliency guidance (SG) and adaptive text appearance (ATA). Table 2 shows the experimental results over the ICDAR2013 dataset. As Table 2 shows, the inclusion of synthesized images (including random embedding in “ICDAR2013 + 1k Synth (Random)”) consistently improves the scene text detection performance as compared with the baseline model “ICDAR2013 (Baseline)” that is trained using the original training images only. In addition, the inclusion of any one of our three designs helps to improve the scene text detection performance beyond random embedding, where SG improves the most, followed by SC and ATA. When all three designs are included, the f-score reaches 86.26, which is much higher than the 83.09 obtained by random embedding. Furthermore, the f-score reaches 88.25 when 10K synthesized images are included in training. We also compare our synthesized images with those created by Gupta et al. [10] as shown in Table 2, where the scene text detection models trained using our synthesized images show consistently superior performance.

5.3 Scene Text Recognition

For the scene text recognition task, the proposed image synthesis technique is evaluated over three public datasets ICDAR2013, IIIT5K and SVT as shown in Table 3 where the CRNN is used as the recognition model as described in Sect. 4.2. The baseline model “CRNN (Real)” is trained by combining all annotated word images within the training images of the three datasets. As Table 3 shows, the baseline recognition accuracy is very low because the three datasets contain around 3100 word images only. As a comparison, the recognition model “CRNN (Real+Ours 5M)” achieves state-of-the-art performance, where the 5 million word images are directly cropped from our synthesized scene text images as described in the last subsection. The significant recognition accuracy improvement demonstrates the effectiveness of the proposed scene text image synthesis technique.

Table 3. Scene text recognition performance over the ICDAR2013, IIIT5K and SVT datasets, where “50” and “1K” in the second row denote the lexicon size and “None” means no lexicon is used. CRNN denotes the model as described in Sect. 4.2, “Real” denotes the original training images, and “Ours 5M”, “Jaderberg 5M” and “Gupta 5M” denote the 5 million images synthesized by our method, Jaderberg et al. [17] and Gupta et al. [10], respectively.

In particular, the correctly recognized words (CRW) increases to 87.1% for the ICDAR2013 dataset (without using lexicon) when 5 million synthetic images (synthesized by our proposed method) are included in training. This CRW is significantly higher than the baseline 31.2% when only the original 3100 word images are used in training. For the IIIT5K, the CRW is increased to 79.3% (no lexicon) when the same 5 million word images are included in training. The CRW is further improved to 95.3% and 98.1%, respectively, when the lexicon size is 1K and 50. Similar CRW improvements are also observed on the SVT dataset as shown in Table 3.

We also benchmark our synthesized images against those created by Jaderberg et al. [17] and Gupta et al. [10]. In particular, we take the same amount of synthesized images (5 million) and train the scene text recognition models “CRNN (Real+Jaderberg 5M [17])” and “CRNN (Real+Gupta 5M [10])” using the same CRNN network. As Table 3 shows, the model trained using our synthesized images outperforms the models trained using “Jaderberg 5M” and “Gupta 5M” across all three datasets. Note that the model by Shi et al. [38] achieves similar accuracy to “CRNN (Real+Ours 5M)”, but it uses 8 million synthesized images as created by Jaderberg et al. [17].

The superior scene text recognition accuracy as well as the significant improvement in the scene text detection task as described in the last subsection is largely due to the three novel image synthesis designs which help to generate verisimilar scene text images as illustrated in Fig. 5. As Fig. 5 shows, the proposed scene text image synthesis technique is capable of embedding source texts at semantically sensible and apt locations within the background image. At the same time, it is also capable of setting the color, brightness and orientation of the embedded texts adaptively according to the color, brightness, and contextual structures around the embedding locations within the background image.

Fig. 5. Several sample images from our synthesis dataset that show how the proposed semantic coherence, saliency guidance and adaptive text appearance work together for verisimilar text embedding in scene images automatically.

6 Conclusions

This paper presents a scene text image synthesis technique that aims to train accurate and robust scene text detection and recognition models. The proposed technique achieves verisimilar scene text image synthesis by combining three novel designs including semantic coherence, visual attention, and adaptive text appearance. Experiments over five public benchmark datasets show that the proposed image synthesis technique helps to achieve state-of-the-art scene text detection and recognition performance.

A possible extension to our work is to further improve the appearance of source texts. We currently make use of the color and brightness statistics of real scene texts to guide the color and brightness of the embedded texts. The generated text appearance still has a gap as compared with the real scene texts because the color and brightness statistics do not capture the spatial distribution information. One possible improvement is to directly learn the text appearance of the dataset under study and use the learned model to determine the appearance of the source texts automatically.