1 Introduction

Signature verification is one of the oldest biometric verification methods, with strong legal support and wide usage in areas such as bank checks and writer identification, alongside related biometric applications such as face recognition and medical detection [6, 24, 31]. Therefore, automated signature verification has been a focus of research for several decades.

Today, we can create systems that reach impressive error rates on different benchmark databases; however, how to further improve these systems remains an open question [4]. In this paper, we discuss a specific aspect of online signatures that may allow researchers to make better choices when designing new verification systems or to improve existing ones.

In online signature verification, a device (such as a tablet or a camera) is used to acquire each captured feature of the signature as a function of time. The capturing typically happens at frequencies between 75 and 200 Hz. The aim of this work was to study the direct effect of the sampling frequency and the number of sample points on a simple signature verification system. One expectation was that the error rate would decrease as the sampling frequency increases, because more points should provide more information about the signatures. It was also expected that the error rate might reach a minimum level at a sufficiently high frequency.

As discussed in Sect. 2, while most resampling studies in signature verification focus on single frequencies or composite approaches, this study provides a detailed and extensive landscape of the isolated effects of signature resampling and shows the importance of choosing the optimal sampling frequency for the accuracy of the signature verification system. Furthermore, this study proposes a novel approach for automatically calibrating a signer-dependent sampling frequency-based online signature verification system that achieved promising results. These insights and the proposed system should aid researchers in the field in choosing their resampling approach in future signature verification systems.

We conducted thousands of measurements on five different public signature databases, and the results of our experiments showed a different behavior from the expectations above. The relation between the sampling frequency and the error rate was not monotonic in the majority of the cases; instead, the error rate typically had a local minimum. Moreover, this local minimum was reached in a similar range for several databases. These results will be explained in detail in the subsequent sections.

Further, the individual sampling frequency or signature point count for each signer is studied and used to build an online signature verification system that relies on a signer-dependent sampling frequency. A total of 500 tests were applied in this work using several online signature verification systems, ensuring the quality of our results. In each verification test, the signatures were down-sampled and tested at different sample rates. The best sampling frequency for each signer was assigned in a preliminary verification step using only the reference signatures (provided by the signer), before being used later in the verification process. The results demonstrated the strength of the proposed method: the majority of the tests yielded better results, as will be discussed in the results section.

2 Related work

Sampling is applied when a digital device is used to acquire an analog signal by recording it at a specific frequency. If this sampling frequency is sufficiently large, human perception cannot notice the difference between digitized and analog information.

2.1 Online signature verification

An online signature verification system typically consists of four main steps (Fig. 1). The first step is data acquisition, where the signatures are acquired using dedicated devices (typically digitizing tablets).

The next step is preprocessing. Even when the signer provides the signatures under similar circumstances, there will always be some differences in size or location that may hinder their comparison. Thus, in the preprocessing step, methods such as scaling [14], alignment [1, 45], rotation [23], or z-normalization [3] can be applied to enhance the similarity measurement in the later steps. After that, feature extraction is applied, where several features can be collected, such as the position, speed, pressure, and azimuth.

In the verification step, a verification method is applied to decide whether a signature is genuine or forged. There are several approaches used for this purpose, such as dynamic time warping (DTW), neural networks [16], or hidden Markov models (HMMs) [7]. Among these methods, DTW has shown the most promising results [20].

Fig. 1

The main steps of online signature verification systems

2.2 Dynamic time warping

DTW is one of the most common algorithms used for signature matching [4]. It has shown promising results for signature verification both in scientific experiments [44] and in competitions, where the top winning systems used it for verification [19, 21]. Thus, we used DTW in our work to ensure that our results have the widest applicability. DTW measures the distance between two (possibly multidimensional) time series; it calculates the distance between one point from the first series and several points from the other series and stores the minimum distance. In the classical DTW implementation, an \(m\times n\) cost matrix is created whose (k, j) elements are the Euclidean distances between the corresponding elements of the compared feature. In a comparison between two signatures (Q and R) for a certain feature F, the best alignment between the two signatures is calculated by applying the following recurrence:

$$\begin{aligned} \psi (k,j)=\mathrm{dist}(F(r^k_i), F(q^j_i))+\min \left\{ \begin{array}{l} \psi (k,j-1)\\ \psi (k-1,j-1)\\ \psi (k-1,j) \end{array}\right\} \end{aligned}$$
(1)

where the first part of the equation is the Euclidean distance between the points and \(\psi (k,j)\) is the cumulative distance up to the (k, j) element [29]; \(r_i\) and \(q_i\) are the sampled points of the current reference (\(R_i\)) and questioned (\(Q_i\)) signatures, respectively, and \(F(r^k_i)\) and \(F(q^j_i)\) represent a feature of the current point of the reference and questioned signatures, respectively. Then, the minimum-cost warping path through the matrix is measured, which represents the chosen distance. Figure 2 shows an example of a DTW calculation for two time series [38]. Several modifications to DTW have been introduced in the past, which usually aim to capture the relevant information content of the compared time series. Feng and Wah proposed a new warping technique called extreme points warping, which only warps the extreme points of the signals [10]. Sharma and Sundaram used an enhanced DTW utilizing the code vectors generated from a vector quantization (VQ) strategy [35]. Faundez-Zanuy proposed a method using a combination of VQ and DTW [8], whereas Parziale et al. used stability-modulated DTW for their verification system [29].
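A minimal sketch of the recurrence in Eq. (1) for one-dimensional feature sequences, assuming plain Python lists as input (the function name is illustrative, not from the cited implementations):

```python
import numpy as np

def dtw_distance(r, q):
    """Classical DTW between two 1-D feature sequences r and q.

    Each cell of the cumulative cost matrix holds the local distance
    plus the minimum of the three neighbouring cumulative costs,
    as in the recurrence of Eq. (1).
    """
    m, n = len(r), len(q)
    psi = np.full((m + 1, n + 1), np.inf)
    psi[0, 0] = 0.0  # boundary condition: empty prefixes match at cost 0
    for k in range(1, m + 1):
        for j in range(1, n + 1):
            local = abs(r[k - 1] - q[j - 1])  # Euclidean distance in 1-D
            psi[k, j] = local + min(psi[k, j - 1],      # insertion
                                    psi[k - 1, j - 1],  # match
                                    psi[k - 1, j])      # deletion
    return psi[m, n]  # cost of the optimal warping path
```

For multidimensional features (e.g. XY or XYP), the local distance would be the Euclidean distance between feature vectors instead of the absolute difference.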

Fig. 2

Two time series (left), the DTW cumulative cost matrix (center), and the optimal alignment between the series (right) [38]

2.3 Resampling in signature verification

Although the input devices used for signature acquisition have sampling rates as high as 200 Hz, this does not mean that higher rates provide better verification performance. The first study on signature frequencies [30] stated that signature signals have a maximum frequency content of 20–30 Hz. Throughout the past three decades, several papers have dealt with the subject. Another study suggested that, as the number of harmonics in handwriting is low, 5 Hz is sufficient to capture the most important frequency components and 10 Hz all of them, whereas applying some filters to noisy data requires a frequency range of 10–37 Hz [39].

The Nyquist rate [17] is the minimum rate at which a signal of finite bandwidth \(B\) needs to be sampled to retain all of its information: the sampling frequency \(f_s\) should be at least twice the highest frequency contained in the signal. The Shannon–Nyquist sampling theorem [13] guarantees that any signal whose Fourier transform is supported within this bandwidth can be entirely reconstructed from the discrete-time signal as long as the sampling rate is at least twice the bandwidth, as expressed by the following inequality [32]:

$$\begin{aligned} f_s > 2 B \end{aligned}$$
(2)

In our case, a sampling frequency in the range of 40–60 Hz should therefore be sufficient to capture all the signature information without redundancy. Although this is a general theory for signal sampling, it helps us understand and analyze the results and explain some behaviors.
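To illustrate the Nyquist rule of Eq. (2), the sketch below estimates the bandwidth containing most of a sampled signal's spectral energy and returns the implied minimum sampling rate. The 99% energy criterion and the function name are our own illustrative choices, not taken from the cited studies:

```python
import numpy as np

def required_sampling_rate(signal, fs, energy_fraction=0.99):
    """Estimate the bandwidth B containing `energy_fraction` of the
    signal energy and return the implied Nyquist rate 2*B (Eq. 2)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # bin frequencies
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)  # energy CDF
    # Smallest frequency below which `energy_fraction` of the energy lies:
    bandwidth = freqs[np.searchsorted(cumulative, energy_fraction)]
    return 2.0 * bandwidth
```

For a pure 25 Hz tone sampled at 200 Hz over a whole number of periods, this returns 50 Hz, consistent with \(f_s > 2B\).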

Martinez-Diaz et al. studied the effects of sampling rate and interpolation in an HMM-based verification system on a single database [22]. The signatures were down-sampled to 25 and 50 Hz and then upsampled to 100 Hz (the original frequency of the input device) using Catmull-Rom [5] and linear interpolation schemes. Their results showed that the accuracy could be improved by using resampling and interpolation together.

The above results show the benefits of using resampling and interpolation but do not show the direct effect of the frequency itself, and they cannot be generalized to other databases and verification systems. That is why, in this work, we used different approaches for sampling and preprocessing and many different sampling rates on five different databases.

Vivaracho-Pascual et al. proposed a low-cost approach to online signature recognition based on length normalization. Although their work was about signature recognition, not verification, it showed that, for the Spanish Ministry of Science and Technology (MCYT) database, it is possible to reduce the number of signature points without performance loss [42], as we will also show in our results. For offline signatures, image resolution plays a role similar to that of the sampling frequency for online signatures. Vargas et al. studied the effect of image resolution on verification performance using images with resolutions ranging from 600 dpi down to 45 dpi. Their results showed that a resolution of 150 dpi offers a good trade-off between performance and image resolution [40].

Li et al. proposed a new signature-matching algorithm for online signatures; one of its steps resamples the signatures to a fixed number of points using equidistant spacing [18].

Vatavu studied the effect of sampling rate on the performance of template-based gesture recognizers using down-sampling and a DTW approach. His results showed that six sampling points are sufficient for Euclidean and angular recognizers to provide high performance [41].

We need to note here that part of this work is an extension of our previous work [34] where the signer-dependent sampling frequency approach was introduced.

3 Methodology

The previous results were mostly obtained by using some kind of interpolation approach. Therefore, it cannot be clearly stated whether the changes in the verification or recognition accuracy should be attributed to the resampling or to the interpolation itself. In this work, we did not use interpolation to avoid its effect on the results. Therefore all results (and improvements) introduced here can be directly attributed to the resampling itself. Interpolation techniques could still improve the resulting accuracies, but this is outside the scope of this paper. In this section, our verification system and experimental protocol will be discussed in detail.

3.1 Proposed verification system

We created a simple signature verification system and evaluated it with different preprocessing approaches on several databases to support our conclusions. This provides large variation in the experimental work and reduces the influence of other factors that may affect the system’s accuracy.

3.1.1 Databases

Five different databases were used in this work to avoid any data-dependent results: The Signature Verification Competition 2004 database (SVC2004) [47], the MCYT-100 [27], the Dutch and Chinese subsets of the Signature Verification Competition 2011 database (SigComp’11) [19], and the German database of the Signature Verification Competition 2015 (SigComp’15) [20].

Each database consists of a specific number of signatures from different signers. They are divided into groups of genuine and forged signatures. These databases may differ in the sampling rate of the capturing device, resolution, or features captured. Table 1 summarizes and compares these properties.

Table 1 Databases used and their statistics

3.1.2 Preprocessing

Preprocessing is important to improve the accuracy of the similarity measurement. We chose scaling, translation, and z-normalization for preprocessing. Signature scaling resizes all signatures, mapping each feature into a specific target range (see Eq. 3).

$$\begin{aligned} \hat{x}(i)= x_\mathrm{newMin} + \frac{x(i)-x_\mathrm{oldMin}}{x_\mathrm{oldMax}-x_\mathrm{oldMin}} * ({x_\mathrm{newMax}-x_\mathrm{newMin}}) \end{aligned}$$
(3)

where x(i) is the old feature value, \(\hat{x}(i)\) is the feature value after preprocessing, \(x_\mathrm{newMin}\) and \(x_\mathrm{newMax}\) are the new minimum and maximum values of the preprocessed feature, and \(x_\mathrm{oldMin}\) and \(x_\mathrm{oldMax}\) are the old minimum and maximum values of the feature, respectively.

In the case of translation, all signature points were shifted by a given vector. In this work, we used translation to move the center of gravity of the signatures to the origin using the following equation:

$$\begin{aligned} \hat{x}(i)=x(i)-\mu _\mathrm{x} \end{aligned}$$
(4)

where \(\mu _\mathrm{x}\) is the mean of the old values of the feature.

In our previous works, we studied the effect of scaling and translation preprocessing methods which showed that both are strong methods, especially when applied to both horizontal and vertical axes [33].

Normalization to zero mean and unit variance (z-normalization) transforms all elements of the input vector into an output vector whose mean (\(\mu \)) is approximately 0 and whose standard deviation (\(\sigma \)) is around 1. The following formula was used in this work for z-normalization:

$$\begin{aligned} \hat{x}(i)=\frac{x(i)-\mu }{\sigma }, \quad i = 1, \ldots , N \end{aligned}$$
(5)

The three preprocessing methods and their combinations were used in our verification systems.
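The three preprocessing methods (Eqs. 3–5) can be sketched as follows, assuming NumPy arrays of per-point feature values; the function names are illustrative:

```python
import numpy as np

def min_max_scale(x, new_min=0.0, new_max=1.0):
    """Eq. (3): rescale a feature vector into [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return new_min + (x - x.min()) / (x.max() - x.min()) * (new_max - new_min)

def translate_to_origin(x):
    """Eq. (4): shift the feature so its mean (centre of gravity) is 0."""
    x = np.asarray(x, dtype=float)
    return x - x.mean()

def z_normalize(x):
    """Eq. (5): transform the feature to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```

In practice, each function would be applied per feature channel (X, Y, P) of a signature.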

3.1.3 Feature extraction

The optimal selection of a set of features is key to an effective and accurate online signature verification system. Among the several features available, we chose a combination of the horizontal (X) and vertical (Y) positions and the pressure (P), as these were available in all the databases used. Five different combinations of features were tested: X, Y, P, XY, and XYP.

3.1.4 Similarity measurement and verification

After the previous steps, the signatures were ready to be verified. For each signer, 10 genuine signatures were chosen to act as references. These signatures were used to calculate a similarity threshold for the verification using DTW. DTW can be used with different distance measurement algorithms; here, the Euclidean distance was used. According to the Euclidean distance formula, the distance (dist) between two points in the plane with the coordinates (\(a_1\), \(b_1\)), and (\(a_2\), \(b_2\)) is given by

$$\begin{aligned} \mathrm{dist}((a_1,b_1),(a_2,b_2))=\sqrt{(a_1-a_2)^2+(b_1-b_2)^2} \end{aligned}$$
(6)

After preprocessing all signatures, we calculated a threshold for each signer. When a new signature is tested, its average distance from the reference signatures is calculated and compared with the threshold: if it is equal to or lower than the threshold, the signature is classified as genuine; otherwise, it is classified as forged. Two types of errors may occur during this process: a genuine signature may be classified as forged (false rejection rate, FRR), or a forged signature may be classified as genuine (false acceptance rate, FAR). In the testing phase, ten genuine signatures (different from the signatures used as references) are used to calculate the FRR as follows:

$$\begin{aligned} \mathrm{FRR}=\frac{\text{ Genuine } \text{ signatures } \text{ classified } \text{ as } \text{ forged } }{\text{ Genuine } \text{ signatures } \text{ in } \text{ the } \text{ testing } \text{ set }} \end{aligned}$$
(7)

while 20 forged signatures were used to calculate the FAR as follows:

$$\begin{aligned} \mathrm{FAR}=\frac{\text{ Forged } \text{ signatures } \text{ classified } \text{ as } \text{ genuine } }{\text{ Forged } \text{ signatures } \text{ in } \text{ the } \text{ testing } \text{ set }} \end{aligned}$$
(8)
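The decision rule described above, comparing the average DTW distance to the references against a per-signer threshold, can be sketched as follows (the function name is illustrative):

```python
def verify(distances_to_references, threshold):
    """Accept a questioned signature as genuine when its average DTW
    distance to the reference signatures is at or below the signer's
    threshold; otherwise classify it as forged."""
    avg = sum(distances_to_references) / len(distances_to_references)
    return "genuine" if avg <= threshold else "forged"
```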

Changing the threshold has opposite effects on the FAR and FRR: if the sensitivity of the system increases, the FRR rises and the FAR drops; conversely, if the sensitivity is reduced, the FRR drops and the FAR rises. Therefore, there is a point where the two values meet, called the equal error rate (EER), which is widely used and accepted in the field of signature verification as a trade-off between both error types [12]. In our benchmarks, we tuned the initial threshold until we reached this point.
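A simple way to approximate the EER from genuine and forged comparison scores is to sweep the threshold over all observed distances and keep the point where FRR and FAR are closest. This helper is an illustrative simplification, not the paper's exact tuning procedure; lower scores mean "more similar":

```python
import numpy as np

def equal_error_rate(genuine, forgery):
    """Approximate the EER by sweeping the decision threshold over all
    observed distance scores and returning the operating point where
    FRR and FAR are closest."""
    genuine = np.asarray(genuine, dtype=float)
    forgery = np.asarray(forgery, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, forgery]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine > t)    # genuine rejected at threshold t
        far = np.mean(forgery <= t)   # forgeries accepted at threshold t
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer
```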

3.2 Effect of sampling frequency

In this section, we evaluate and analyze the effect of the sampling frequency on the accuracy of the online signature verification system for each database. Several tests were applied to ensure accurate comparisons that are not dominated by any other factor of a specific verification system.

3.2.1 Signature sampling

To test the effect of the sampling rate and the number of sample points on the verification accuracy, we applied the verification steps using a different sampling rate in each test: some points were skipped to reduce the sampling rate and the number of signature points. We began with the initial sampling rate, and then, in each iteration, we skipped more of the points. The number of iterations depends on the average number of sample points in the database. For SVC2004, MCYT-100, and SigComp’15, 20 iterations were used, whereas for SigComp’11 (Dutch and Chinese), 40 iterations were used. These iterations covered sampling rates between 5 and 200 Hz. The ranges of the sampling rates and the average number of signature points tested were as follows: SVC2004, 5–100 Hz (10–208 points); MCYT-100, 5–100 Hz (21–440 points); SigComp’15, 3–75 Hz (6–125 points); SigComp’11 (Dutch), 5–200 Hz (24–978 points); and SigComp’11 (Chinese), 5–200 Hz (19–792 points).
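The point-skipping scheme described above, which uses no interpolation, amounts to plain slicing; the helper names below are illustrative:

```python
def downsample(points, step):
    """Keep every `step`-th sample point, with no interpolation,
    matching the point-skipping scheme described above."""
    return points[::step]

def effective_rate(original_rate_hz, step):
    """Sampling frequency implied by skipping points:
    e.g. a 100 Hz signature with step=2 becomes a 50 Hz one."""
    return original_rate_hz / step
```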

3.2.2 Experimental protocol

Our signature verification system uses different databases, features, preprocessing, and verification methods, providing several results that help generalize the findings regardless of the specific methods chosen. With five databases, four preprocessing methods, and five combinations of features, we tested 100 different configurations (see Fig. 3).

Fig. 3

Combinations of the applied verifiers

The 20 different combinations of the verification system (for each database) were then applied using different sampling rates. As discussed earlier, the number of iterations differed between databases. Overall, 2800 different tests were applied in these iterations. For each configuration, the results were evaluated and visualized to study the exact effect in all cases. The pseudo-code in Algorithm 1 describes the experimental protocol of the work.

Algorithm 1

3.3 Signer dependent sampling frequency system

In this section, we focus on the results of using a signer-dependent sampling frequency rather than using the same sampling frequency for the entire database.

Fig. 4

Proposed method

The proposed technique is based on choosing the best sampling frequency for each signer before starting the verification process. In real-life situations, the signer provides some signatures as references, and a questioned signature is compared against these to decide whether it is forged or genuine. Therefore, the proposed technique uses only these references for choosing the signer-dependent sampling frequency, so that the system is evaluated in a real-life situation where no additional genuine or forged signatures are available. The references were divided into training and testing sets, and only the false rejection rate (FRR) was calculated and used to choose the best sampling frequency. To evaluate the efficiency of the proposed technique, additional genuine signatures (not used in the previous step) and forged signatures are used later to calculate the error rate in the evaluation step.

Although the first step contains all the main stages of a signature verification process, it serves as a preprocessing step that improves the quality of the full system later. The same verifier used in the first step, where we choose the best sampling frequency for each signer, is used in the system evaluation step to measure the actual improvement under the same circumstances.

Several inputs for each stage of the online signature verification process were used to obtain more results that help better understand the effect of using a signer-dependent sampling frequency.

The variants of the verification steps were combined to provide many tests. These combinations were tested using 3–7 samples (n) as references, for a total of 500 tests. After choosing the current verifier, it is first applied using different sampling rates (f) from a group of candidate sampling frequencies (F). In each iteration, the sampling rate is changed, and the average error rate (AER) is calculated for each signer, as shown in Eq. (9). The number of iterations depends on the initial sampling frequency: 20–40 iterations for the 100–200 Hz databases.

$$\begin{aligned} \mathrm{AER}=\frac{\text{ FRR } \text{+ } \text{ FAR }}{2} \end{aligned}$$
(9)

In each iteration, the sampling frequency is changed, and the system is applied using it. Since n samples are used as references to calculate the threshold in the verification process, the remaining reference signatures \(S_t\) (10 − n of them) are used to test the accuracy at the current sampling frequency. Only reference signatures are used in this phase, so only the FRR of the (10 − n) signatures is used to compare the results and assign the best sampling frequency to each signer.

After assigning each signer his/her best sampling frequency (\(F_\mathrm{optimal}\)), the same verification system with the same preprocessing methods, features, and classification algorithm is applied to each signer individually, and all error rates are calculated. The optimal sampling frequency for each signer s is calculated as follows:

$$\begin{aligned} F_\mathrm{optimal}= \mathop {\mathrm{arg\,min}}\limits _{f\in F}\mathrm{FRR}(S_t, f) \end{aligned}$$
(10)
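The selection in Eq. (10) reduces to an arg-min over the candidate frequencies once the FRR has been measured for each; the dict-based interface below is an illustrative simplification:

```python
def choose_signer_frequency(frr_by_frequency):
    """Eq. (10): pick, for one signer, the candidate sampling frequency
    whose FRR on the held-out reference signatures is lowest.

    `frr_by_frequency` maps a candidate frequency (Hz) to the FRR
    measured at that frequency."""
    return min(frr_by_frequency, key=frr_by_frequency.get)
```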

Later, the average error rate over the signers is calculated. To evaluate the current verification system and its efficiency when using signer-dependent frequencies, we apply the same verification system to the same database using the initial sampling frequency, compare the results of both methods, and calculate the accuracy improvement. Figure 4 gives a brief description of the procedure.

4 Experimental results

Table 2 Best sampling frequencies, sample counts, and EER for each combination for each database

In this section, the results of the experiments are discussed and evaluated. Section 4.1 discusses the effect of using different sampling frequencies and Sect. 4.2 discusses the results of the proposed signer-dependent online verification system.

4.1 Sampling rate impact on EER

After applying all the previous combinations at different sampling rates and numbers of signature points, we analyzed the results to determine the effect of different sampling rates on the accuracy of a verification system. The best results of the experiments are shown in Table 2; for each configuration, we selected the sampling rate at which the lowest EER was achieved.

The results showed that, in most of the combinations, better results could be obtained by decreasing the sampling rate and, with it, the average number of sample points. The expected behavior had been that the accuracy would increase with the sampling rate (and, thus, the sample count), because providing more information about the signatures should help differentiate them.

Our experiments showed that, in most cases, the accuracy increased up to some point and then started to decrease. Figure 5 shows two examples from SVC2004 of the effect of the average number of sample points on the EER, one for each behavior. In fact, this rise-then-decline behavior occurred in 92.5% of the configurations across all databases acquired between 100 and 200 Hz: 100% for SigComp’11 (Dutch), 95% for SigComp’11 (Chinese), 100% for SVC2004, and 75% for MCYT-100; only a few combinations provided better results when using the initial sampling rate. Moreover, 91.25% of the best results in these databases were obtained using a sampling frequency of less than or equal to 50 Hz.

In the case of SigComp’15, the database was acquired at a relatively low frequency of 75 Hz; thus, even the initial sampling frequency provided good results without down-sampling. This also means that, across all the databases, 93% of the best configuration results were obtained using a sampling frequency of less than or equal to 75 Hz. Figure 6 shows some examples of the effect of sampling frequency on the EER.

Fig. 5

Expected (left) versus typical (right) behavior of the EER as a function of sample points on SVC2004

Fig. 6

Examples from other databases of the effect of sampling frequency on the EER

4.1.1 The best frequency ranges

Our observations showed that, in most cases, the accuracy increased until some point and then started decreasing or stagnating. However, we also observed that there was a specific range where we could obtain the best results. As an example, Table 2 shows that, for the SigComp’11 (Dutch) database, 95% of the best results were obtained when using an average sampling frequency of less than or equal to 50 Hz. Similar observations were made with the other databases. In general, we can say that the majority of the best results were obtained using a range of around 15–50 Hz. These ranges are shown in Fig. 7.

4.1.2 The best sample count ranges

The effect of the sample count followed the same behavior as that of the sampling frequency, as the two are related. Table 2 shows the best sample counts: the counts that provided the lowest error rates range between 30 and 104 for SVC2004 and between around 60 and 240 for the other databases. Figure 8 shows the best sample counts for all the tests.

Fig. 7

Sampling frequencies of the best results for all the tests

Fig. 8

Sample counts of the best results for all the tests

4.1.3 Sampling restrictions

Our results showed that there are three cases for the frequency range. The first is when the sampling frequency is below a certain range (undersampling): the error rate is high because the samples do not provide sufficient information about what makes the signature unique and distinguishable from other signatures. In the second case, the best results are obtained when the sampling rate and the number of signature points lie in a specific range, neither low nor high. This makes sense, because it was shown that the maximum frequency band limit for online signatures is 20–30 Hz, so a Nyquist rate of 40–60 Hz is sufficient to capture the signal without redundant information. The third case is when the frequency is above a certain range (oversampling), where the accuracy decreases again. We believe that the redundant data not only fail to improve the results but may also worsen them. These cases parallel analog signal sampling, where choosing the wrong sampling frequency produces undersampling or oversampling issues.

4.2 Signer dependent sampling frequency system results

A comparison study was conducted to check whether the proposed method improves existing online signature verification systems. Since other factors may sometimes affect the accuracy of the results, running several tests provides a more reliable evaluation: examining the behavior of the majority of these systems avoids attributing the results to any single confounding factor. It also helps in choosing the optimal methods for building the most accurate verification systems. In this section, all cases are discussed and evaluated to show the improvement in both randomly chosen and optimal systems.

Testing all scenarios using scaling, translation, and z-normalization for preprocessing with n = 3–7 reference samples (500 tests) showed that the accuracy improved in 72% of the tests, with improvements of up to 8.38%. The accuracy decreased in 20% of the tests and remained the same in 8% of them. Overall, 80% of the tests provided better or at least equal verification accuracy, regardless of the preprocessing techniques applied, the features selected, or the number of samples used. Figure 9 shows the accuracy improvement for all cases.

Fig. 9

Percentage of experiments (vertical axis) where the accuracy improved when using the signer-dependent sampling frequency approach (all the tests)

Preprocessing methods have a significant impact on verification accuracy, and the experiments showed that z-normalization provides the most accurate results. Among the tests where z-normalization was used, the accuracy improved in 88% of the cases, there was no change in 1% of the cases, and the accuracy decreased in 11% of the cases (see Fig. 10).

Fig. 10

Percentage of experiments (vertical axis) where the accuracy improved when using the signer-dependent sampling frequency approach (using z-normalization)

We have shown that the choice of preprocessing method affects the accuracy; the number of samples can also significantly influence the results. The best results were achieved when using 3 or 6 samples. Combining these facts into one verification system provides the most accurate system. Choosing six samples as references and z-normalization for preprocessing, while using five different databases and five different features (25 tests), led to only one negative result, one result with no change, and 23 results (92%) with accuracy improvements of up to around 8.4% (see Fig. 11).

Fig. 11

Percentage of experiments (vertical axis) where the accuracy improved when using the signer-dependent sampling frequency approach (using z-normalization and 6 samples)

4.3 Comparison

Although this study aimed to measure the effect of the sampling frequency on the accuracy, it is worth mentioning that some of the verification systems applied here achieved results competitive with state-of-the-art systems. Table 3 shows some of the best results achieved with down-sampling compared with other results for different databases.

Table 3 A comparison with recent results in the field

5 Conclusion

In this work, we studied the effect of the sampling rate of the input devices used for signature acquisition, and of the number of sample points, on the accuracy of online signature verification systems. We proposed an online signature verification system based on a signer-dependent sampling frequency and DTW. Several configurations of a DTW-based verification system were used to assess the achievable EER at different sampling rates. Altogether, we conducted 2800 different experiments, which helped generalize the results regardless of other factors that may affect the system’s accuracy. To our knowledge, these properties have never been studied within the scope of online signature verification.

The results showed that the majority of the best results could be obtained using a sampling frequency between 15 and 50 Hz and a sample count between 60 and 240 points. Using frequencies below these ranges greatly decreased the accuracy, whereas using higher frequencies decreased or did not affect the accuracy in 92.5% of the configurations of all databases acquired between 100 and 200 Hz. For these databases, 91.25% of the best results were obtained using a sampling frequency of less than or equal to 50 Hz, and 93% using less than or equal to 75 Hz.

As the sampling rate and the sample count are strongly correlated, it is too early to conclude which of the two plays the more significant role in the observed relation; therefore, we presented our results including both. Regardless, we can state that, in classic DTW-based signature verification, using sampling frequencies higher than 100 Hz will not improve the accuracy of the systems but will instead increase the computational cost of the verification. The results of the proposed signer-dependent sampling frequency system also showed that, in 80% of the 500 tests, the accuracy improved or at least did not change. Moreover, the ratio of improved results reached 92% when choosing the optimal preprocessing method and number of samples. The results showed that using the optimal frequency provides competitive systems for online signature verification. These results are auspicious and suggest that DTW-based online signature verifiers can be improved in the future by using different criteria for choosing the best sampling frequency for each signer.