
1 Introduction

In recent years, biometric identification technology has received extensive attention in both research and application. Examples of biometric identifiers include fingerprint, facial, iris, voice and palm print features. Identification systems based on these biometric features have been applied to mobile phones, personal computers and access control. Because such identifiers are exposed to the environment and can be damaged or forged, the palm vein pattern, as an internal identifier, is more reliable. Palm vein images are captured with an infrared camera in a contactless way, so they typically have poor contrast and nonuniform brightness. Since palm vein image quality has a considerable influence on recognition performance, the primary task of image preprocessing is quality assessment of the palm vein image [1,2,3].

Many scholars and institutes have proposed hand vein image quality evaluation methods. Ming et al. [4, 5] used the mean gray value and gray variance as quality indexes of dorsal hand vein images. In [6, 7], four parameters were selected as references for image quality: gray variance, information entropy, number of cross points and effective area. These parameters were combined with weights to obtain an image quality score. Wang et al. [8] focused on the impact on palm vein preprocessing and matching as the palm distance changes from far to near; they used the Tenengrad criterion to evaluate image clarity and SSIM to show the structural difference of the same palm as the distance increases. Wang et al. [9] proposed relative contrast and relative definition for dorsal hand vein image quality assessment.

The above studies did not consider the influence of uneven illumination on the palm vein image. In this paper, statistics of Haar-like features are introduced. We use the natural scene statistics (NSS) features presented in [10] to estimate image quality. In addition, the effect of uneven illumination caused by palm tilt is taken into account.

2 Quantitative Evaluation of Palm Vein

In a palm vein identification system, palm vein images are captured in a contactless way with a near-infrared camera. These images usually have poor contrast and strong noise, and some of them suffer from uneven illumination. Our goal is to quantify the quality of palm vein images in order to reject low-quality images.

2.1 Illumination Uniformity of ROI Image

First, the vein ROI image is partitioned into several patches and the gray mean of each patch is calculated. Then, the difference between the maximum and the minimum gray mean among all patches of the ROI image is used to evaluate the uniformity of image brightness. The larger the difference, the more uneven the brightness of the ROI image.

We extract the ROI image F from the palm vein image and partition it into several equal patches. If an \(\mathrm{{M}} \times \mathrm{{N}}\) matrix represents the gray ROI image F and F(i, j) denotes the gray value of the pixel at the \(i^{th}\) row and the \(j^{th}\) column, we compute the mean gray value M[n] of each patch:

$$\begin{aligned} M[n] = \frac{1}{{k*k}}\sum \limits _{i = 0}^{k - 1} {\sum \limits _{j = 0}^{k - 1} {{F_n}(i,j)} } \end{aligned}$$
(1)

Where k denotes the size of each patch, n is the index of the patch, and \(F_n\) stands for the patch of the ROI image with index n.

The score of brightness uniformity can be defined as:

$$\begin{aligned} {Q_m} = Min(\frac{{{M_{\max }} - {M_{\min }}}}{{M(F)}},1) \end{aligned}$$
(2)

Where \({{M_{\max }}}\) and \({{M_{\min }}}\) are the maximum and minimum of the gray means M[n] among all patches of the ROI image, and M(F) is the gray mean of the whole ROI image.
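As a concrete illustration (not part of the original paper), the following minimal Python sketch computes \(Q_m\) of Eqs. (1)-(2) under the settings used later in this section (a \(128\times 128\) ROI and \(16\times 16\) patches); the function name and the synthetic test image are our own:

```python
# Minimal sketch of Eqs. (1)-(2): brightness-uniformity score Q_m.
# Assumes a square grayscale ROI whose side is a multiple of the patch size k.
import numpy as np

def brightness_uniformity(roi: np.ndarray, k: int = 16) -> float:
    """Return Q_m = min((M_max - M_min) / M(F), 1)."""
    h, w = roi.shape
    patch_means = [
        roi[r:r + k, c:c + k].mean()        # M[n] of Eq. (1)
        for r in range(0, h, k)
        for c in range(0, w, k)
    ]
    spread = max(patch_means) - min(patch_means)
    return min(spread / roi.mean(), 1.0)    # Eq. (2)

# Example: an ROI with a left-to-right brightness gradient scores higher (worse).
roi = np.tile(np.linspace(60.0, 180.0, 128), (128, 1))
print(brightness_uniformity(roi))
```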

Figure 1 shows the brightness nonuniformity of 30 palm vein images from the same person, with the palm tilted from zero to 60\(^\circ \). The size of the ROI image is \(128\times 128\) pixels, and we partition it into 64 equal patches; thus k equals 16 and n ranges over \(N = \{0, 1, 2 \dots 63\}\). The vertical axis represents the magnitude of \(Q_m\), and the horizontal axis represents the tilt angle of the palm. The brightness nonuniformity gradually increases as the tilt angle of the palm increases, so \(Q_m\) can be used as an index to evaluate brightness uniformity.

Fig. 1. Trend of brightness nonuniformity with different tilt angles of the palm.

2.2 Statistics of Pixel Intensities and Their Products

Anish Mittal et al. [10] proposed the natural image quality evaluator (NIQE). They selected 90 high-quality natural pictures and learned distribution models of pixel intensities and of products of neighboring intensities; the parameters of these distribution models were regarded as quality-aware NSS features. They then fitted these NSS features with a multivariate Gaussian (MVG) model, called the natural MVG model. An MVG model of the image under analysis is fitted in the same way, and the quality of the distorted image is expressed as the distance between the natural MVG model and the MVG model of the distorted image.

NIQE needs neither distorted images nor human subjective scores to train the model and performs well on several natural image databases, so it is very suitable for a real-time identification system. We incorporate NIQE into our quality assessment algorithm; 120 high-quality palm vein images are chosen from the PolyU database [11] to train the natural MVG model.

In our algorithm, the patch size is \(16\times 16\) pixels and \(Q_n\) stands for the NIQE score.
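For orientation only, the sketch below shows the local mean and contrast normalization from which the pixel-intensity statistics of [10] are computed; the Gaussian window width \(\sigma = 7/6\) is a common choice in NIQE implementations and is our assumption, and the full NIQE pipeline additionally models products of neighboring coefficients and fits an MVG model per patch:

```python
# Sketch of the locally normalized intensities (MSCN coefficients) used by NIQE [10].
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7 / 6, eps: float = 1.0) -> np.ndarray:
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2
    std = np.sqrt(np.maximum(var, 0.0))                 # local standard deviation
    return (image - mu) / (std + eps)                   # normalized pixel intensities
```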

2.3 Statistics of Haar-Like Features

Haar-like features are digital image features used in object recognition [12]. A simple rectangular Haar-like feature is defined as the difference between the sums of pixel intensities of adjacent rectangular areas, which can be placed at any position and scale within the original image. In a gray image, the palm vein pattern is composed of dark lines with different orientations, so we can use three-rectangle Haar-like features to detect palm veins. Figure 2 illustrates the process flow for computing the quality index based on statistics of Haar-like features.

Fig. 2. Flow chart of the computation of the quality index based on statistics of Haar-like features.

Fig. 3. Four kinds of Haar-like feature windows used in this paper.

The Haar-like feature windows we use are shown in Fig. 3. Since the width of a palm vein is about 6 pixels, windows of \(12\times 12\) pixels are chosen so that the width of the black part of each window is roughly 6 pixels. We use the four kinds of windows in Fig. 3 to compute Haar-like features, with the following formulation:

$$\begin{aligned} h = \frac{{SumW}}{{NumW}} - \frac{{SumB}}{{NumB}} \end{aligned}$$
(3)

Where h is the Haar-like feature, SumW and SumB stand for the sums of pixel intensities of the white and black parts of the window respectively, and NumW and NumB denote the numbers of white and black pixels in the window respectively.
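A hedged sketch of Eq. (3) follows. The exact layouts of the four windows are those of Fig. 3, which we do not reproduce; the kernel here assumes one representative layout, a \(12\times 12\) window with a 6-pixel dark vertical band, and evaluates the feature densely by correlation:

```python
# Sketch of Eq. (3): h = SumW/NumW - SumB/NumB at every window position.
import numpy as np
from scipy.signal import correlate2d

def haar_kernel(size: int = 12, band: int = 6) -> np.ndarray:
    """Weights +1/NumW on white pixels and -1/NumB on black pixels."""
    black = np.zeros((size, size), dtype=bool)
    start = (size - band) // 2
    black[:, start:start + band] = True              # assumed vertical dark band
    kernel = np.empty((size, size))
    kernel[~black] = 1.0 / (~black).sum()            # white part
    kernel[black] = -1.0 / black.sum()               # black part
    return kernel

def haar_feature_map(roi: np.ndarray) -> np.ndarray:
    # Dense evaluation with a one-pixel step; flattening this map gives one of H1..H4.
    return correlate2d(roi.astype(np.float64), haar_kernel(), mode="valid")
```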

Fig. 4. Distributions of the Haar-like features H1, H2, H3 and H4.

Then, the window is moved with a step of one pixel over the palm vein image, and the Haar-like feature is calculated for each position. With the four kinds of windows we obtain four vectors of Haar-like features, denoted H1, H2, H3 and H4. The distributions of H1, H2, H3 and H4 are shown in Fig. 4. From Fig. 4, we can see that the distribution of Haar-like features can be well modeled as a zero-mode asymmetric generalized Gaussian distribution (AGGD) [13]:

$$\begin{aligned} g(x;\gamma ,{\beta _l},{\beta _r}) = \left\{ {\begin{array}{*{20}{c}} {\frac{\gamma }{{({\beta _l} + {\beta _r})\varGamma ({\textstyle {1 \over \gamma }})}}\exp ( - {{(\frac{{ - x}}{{{\beta _l}}})}^\gamma }),\forall x \le 0}\\ {\frac{\gamma }{{({\beta _l} + {\beta _r})\varGamma ({\textstyle {1 \over \gamma }})}}\exp ( - {{(\frac{x}{{{\beta _r}}})}^\gamma }),\forall x > 0} \end{array}} \right. \end{aligned}$$
(4)

The mean of the distribution is defined as follows:

$$\begin{aligned} \eta = ({\beta _r} - {\beta _l})\frac{{\varGamma ({\textstyle {2 \over \gamma }})}}{{\varGamma ({\textstyle {1 \over \gamma }})}} \end{aligned}$$
(5)

Different palm vein images yield different AGGD parameters, so we use the parameters \(x = ({\gamma _1},{\beta _{l1}},{\beta _{r1}},{\eta _1}, \dots ,{\gamma _4},{\beta _{l4}},{\beta _{r4}},{\eta _4})\) as the NSS feature vector to quantify quality.
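As an illustrative sketch (our own, following the moment-matching estimator of [13] commonly used for AGGD models), the parameters \((\gamma, \beta_l, \beta_r, \eta)\) of Eqs. (4)-(5) could be estimated from one feature vector as follows:

```python
# Moment-matching AGGD fit for one Haar-like feature vector: returns (gamma, beta_l, beta_r, eta).
import numpy as np
from scipy.special import gamma as G

def fit_aggd(x: np.ndarray):
    sigma_l = np.sqrt(np.mean(x[x < 0] ** 2))          # left-side spread
    sigma_r = np.sqrt(np.mean(x[x > 0] ** 2))          # right-side spread
    ratio = sigma_r / sigma_l
    rhat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rhat_norm = rhat * (ratio ** 3 + 1) * (ratio + 1) / (ratio ** 2 + 1) ** 2
    # Choose the shape parameter whose theoretical moment ratio best matches rhat_norm.
    shapes = np.arange(0.2, 10.0, 0.001)
    r_shape = G(2 / shapes) ** 2 / (G(1 / shapes) * G(3 / shapes))
    gam = shapes[np.argmin((r_shape - rhat_norm) ** 2)]
    beta_l = sigma_l * np.sqrt(G(1 / gam) / G(3 / gam))
    beta_r = sigma_r * np.sqrt(G(1 / gam) / G(3 / gam))
    eta = (beta_r - beta_l) * G(2 / gam) / G(1 / gam)   # Eq. (5)
    return gam, beta_l, beta_r, eta
```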

We choose 120 high-quality palm vein images from the PolyU database [11] to learn the multivariate Gaussian model:

$$\begin{aligned} f({x_1}, \ldots ,{x_k}) = \frac{1}{{{{(2\pi )}^{k/2}}{{\left| \varSigma \right| }^{1/2}}}}\exp \left( { - \frac{1}{2}{{(x - \mu )}^T}{\varSigma ^{ - 1}}(x - \mu )} \right) \end{aligned}$$
(6)

Where \(({x_1},...,{x_k})\) are the NSS features computed in (4)-(5), and \(\mu \) and \(\varSigma \) denote the mean vector and covariance matrix of the MVG model, which are estimated using a standard maximum likelihood procedure. We use the Mahalanobis distance to measure the distortion level of a palm vein image, formulated as:

$$\begin{aligned} {Q_h} = \sqrt{{{(y - \mu )}^T}{\varSigma ^{ - 1}}(y - \mu )} \end{aligned}$$
(7)

Where y stands for the NSS feature vector of the distorted palm vein image. The greater the distance, the worse the image quality.
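A minimal sketch of Eqs. (6)-(7) (our own illustration) is given below: the natural MVG model \((\mu, \varSigma)\) is learned from the 16-dimensional NSS feature vectors of the high-quality training images, and a test image is scored by the Mahalanobis distance:

```python
# Sketch of Eqs. (6)-(7): learn the MVG model and score a test image.
import numpy as np

def learn_mvg(train_features: np.ndarray):
    """train_features: one NSS feature vector (4 windows x 4 parameters) per row."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    return mu, cov

def q_h(y: np.ndarray, mu: np.ndarray, cov: np.ndarray) -> float:
    diff = y - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))   # Mahalanobis distance, Eq. (7)
```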

2.4 Fusion of Quality Indexes

We combine \({Q_m}\), \({Q_n}\) and \({Q_h}\) to quantify the quality of a palm vein image; the overall score Q is defined as follows:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{c}} {Q = {Q_m} * ({Q_n} * \alpha + {Q_h} * \beta )}\\ {\alpha + \beta = 1} \end{array}} \right. \end{aligned}$$
(8)

Where \(\alpha \) and \(\beta \) are the weights of \({Q_n}\) and \({Q_h}\) respectively; we choose \(\alpha = \beta = 0.5\). Since \({Q_m}\), \({Q_n}\) and \({Q_h}\) all increase as image quality decreases, Q is inversely related to image quality.
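A one-line sketch of the fusion rule of Eq. (8), with the weights used in this paper:

```python
# Eq. (8): fused score; larger Q means worse quality.
def fuse_quality(q_m: float, q_n: float, q_h: float,
                 alpha: float = 0.5, beta: float = 0.5) -> float:
    return q_m * (q_n * alpha + q_h * beta)
```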

3 Experimental Analysis

We use a vein image acquisition system to obtain palm vein images; the system consists of three modules: a near-infrared light source, a camera, and a filter. Palm vein images of twenty-one colleagues were obtained, with 60 images taken in different situations for each person. The proposed algorithm has been successfully tested on a variety of test images, and only a few of the results are shown in this paper. Please note that the score is inversely related to the image quality.

In order to study the effect of uneven illumination caused by palm tilt, a set of ROI images with different tilt angles from the same person is chosen for the experiment. We compute every component of the quality score Q; the results are shown in Table 1.

From Table 1 we find that Q and \(Q_m\) increase as the tilt angle of the palm increases, while \(Q_n\) and \(Q_h\) fluctuate. The results show that \(Q_m\) reflects well the effect on image quality of the uneven illumination caused by palm tilt.

Table 1. Results with different tilt angles of the palm.
Table 2. Results with different kinds and levels of distortion.

We choose a series of palm vein images of different quality from the same person; these images have different kinds and levels of distortion. We run our algorithm on these images, and the results are shown in Table 2.

From Table 2, we find that the results of the proposed algorithm are in accordance with human subjective assessment. From the \(Q_m\) values of image 2 and image 3, we see that their illumination uniformity is similar. However, image 2 contains more useful detail than image 3, so the quality of image 2 is better than that of image 3. Image 3 and image 4 have similar \(Q_h\), which means they contain a similar amount of detail information, but they differ greatly in illumination uniformity, so the quality of image 3 is better than that of image 4.

From [10] we can see that NIQE performs well on natural scene pictures. However, it is not well suited to palm vein images on its own. The method presented in this paper combines NIQE with two other indexes based on characteristics of palm vein images, and thus performs better on palm vein images.

4 Conclusion

In this paper, an image quality assessment algorithm for palm vein images based on natural scene statistics features is presented. In addition, we use the gray mean difference between different parts of the palm vein image to estimate brightness uniformity. Experiments are reported at the end; the results show that the image quality scores produced by the proposed algorithm are in accordance with human subjective assessment.