# SAR moving object imaging using sparsity imposing priors


## Abstract

Synthetic aperture radar (SAR) returns from a scene with motion can be viewed as data from a stationary scene, but with phase errors due to motion. Based on this perspective, we formulate the problem of SAR imaging of motion-containing scenes as one of joint imaging and phase error compensation. The proposed method is based on the minimization of a cost function which involves sparsity-imposing regularization terms on the reflectivity field to be imaged, considering that it admits a sparse representation as well as on the spatial structure of the motion-related phase errors, reflecting the assumption that only a small percentage of the entire scene contains moving objects. To incorporate the spatial structure of the phase errors into the problem, we provide three different sparsity-enforcing prior terms. In order to achieve computational gains, we also present a two-step version of our approach, which first determines regions of interest that are likely to contain the moving objects and then applies our sparsity-driven approach for joint image reconstruction and autofocusing in such a spatially constrained setting. Our preliminary experiments demonstrate the effectiveness of this new moving target SAR imaging approach.

## Keywords

SAR · Moving object · Sparsity · Group sparsity · Low-rank sparse decomposition

## 1 Introduction

Moving object tracking and imaging is an important problem in a wide range of radar systems and applications, including emerging systems such as radars with commercial off-the-shelf (COTS) components or software-defined radio (SDR)-based radars, which have attracted interest in recent years. Imaging of moving objects is a challenging problem for synthetic aperture radar (SAR). Moving objects in the scene cause phase errors in the SAR data and, subsequently, defocusing in images reconstructed based on a stationary scene assumption. This defocusing exhibits space-variant characteristics, i.e., it arises only in the parts of the image containing the moving objects, whereas the stationary background is not defocused. This type of defocusing can be removed by estimating two sets of unknowns: the locations of the moving objects in the scene, and the velocities of the objects or the corresponding phase errors.

For monostatic spotlight mode SAR, which is the modality of interest in this paper, most published work first forms the smeared imagery of moving objects and then focuses the smeared parts of the image [1, 2, 3, 4]. These approaches are based on post-processing of a conventionally reconstructed image, e.g., one formed by the polar-format algorithm [5]. As in many other imaging problems, sparsity-based approaches have recently been considered in the context of SAR moving object imaging as well. In [6, 7, 8], compressed sensing (CS) techniques are used to search for a solution over an overcomplete dictionary consisting of basis elements for several velocity-position combinations. The method proposed in [6] for multistatic radar imaging of moving objects facilitates linearization of the nonlinear problem of target scattering and motion estimation and subsequently solves the problem as a larger, unified regularized inversion problem subject to sparsity constraints. Focusing on scenarios with low signal-to-clutter ratio, the approach in [7] first applies a clutter cancellation procedure and then solves an optimization problem similar to the one in [6]. In [8], which concentrates on targets with micro-motions such as rotation or vibration, generalized Gaussian and Student's *t* prior models are used to enforce sparsity. The approach in [9] uses a Radon transform and a CS-based method to estimate the motion parameters of moving targets in the presence of the Doppler spectrum ambiguity and Doppler centroid frequency ambiguity encountered in SAR systems with low pulse repetition frequency (PRF). The sparsity information is used in [10] within a Bayesian framework to determine velocities in a multi-target scenario for low PRF wideband radar. In [11], which also takes a Bayesian approach, not only is the target signature estimated using a prior distribution on the target trajectory, but parameters related to nuisances such as clutter and antenna miscalibration are estimated as well.

We handle the problem in the context of sparsity-driven imaging as well. Our method is based on simultaneous imaging and phase error compensation. Considering that in SAR imaging the underlying field usually exhibits a sparse structure, we previously proposed a sparsity-driven technique for joint SAR imaging of stationary scenes and space-invariant focusing using a nonquadratic regularization-based framework [12]. That work was motivated by defocusing due to, e.g., platform position uncertainties. Here, through a significant extension of that framework, we propose a method for joint sparsity-driven imaging and *space-variant* focusing for correction of phase errors caused by moving objects. Preliminary pieces of this work have been presented in [13, 14]. We formulate an optimization problem over the reflectivities and the potential motion-induced phase errors over the scene and solve it iteratively. In this formulation, we not only exploit the sparsity of the reflectivity field but also impose a constraint on the spatial sparsity of the phase errors, based on the assumption that motion in the scene is limited to a small number of spatial locations. This constraint on the phase errors helps to automatically determine and focus the moving points in the scene. We also discuss two possible extensions of this primary approach, through two alternate choices for the regularization term on the motion field. The first extension uses a group-sparsity-enforcing regularization term to impose the sparse structure. The second extension is based on a low-rank sparse decomposition of the phase error matrix. More importantly, to reduce the computational complexity of this problem and make these ideas practically applicable to relatively large scenes, we propose a second approach within the same framework. In this second approach, our aim is to improve the computational efficiency of the phase error estimation procedure by first determining regions of interest (ROI) for potential motion using a fast procedure and then performing phase error estimation only in these regions. Note that both approaches also provide advantages on the imaging side, such as increased resolution, reduced sidelobes, and reduced speckle, thanks to regularization-based image formation, which can alleviate challenges caused by incomplete data or sparse apertures [15] as well.

In Section 2, the observation model used for SAR moving object imaging is presented. In Section 3, the proposed method is described in detail, including our primary approach, its extensions based on group sparsity and low-rank sparse decomposition, and the ROI-based approach. After providing some additional remarks on the practical implementation of the proposed approaches in Section 4, we present our experimental results in Section 5. We conclude the paper in Section 6.

## 2 SAR imaging model

The discretized SAR observation model can be written as

$$\mathbf{r}=\begin{bmatrix}\mathbf{r}_{1}\\ \vdots \\ \mathbf{r}_{M}\end{bmatrix}=\begin{bmatrix}\mathbf{C}_{1}\\ \vdots \\ \mathbf{C}_{M}\end{bmatrix}\mathbf{f}=\mathbf{C}\mathbf{f}$$

Here, **r**_{m} is the vector of observed samples, **C**_{m} is a discretized approximation to the continuous observation kernel at the *m*th aperture position, **f** is a vector representing the unknown sampled reflectivity image, and *M* is the total number of aperture positions. The vector **r** is the SAR phase history data of all points in the scene. It is also possible to view **r** as the sum of the SAR data corresponding to each point in the scene:

$$\mathbf{r}=\sum_{i=1}^{I}\mathbf{p}_{i}=\sum_{i=1}^{I}\mathbf{C}_{cl-i}\,\mathbf{f}(i)$$

Here, **C**_{cl−i} is the *i*th column of the model matrix **C**, and **f**(*i*) and **p**_{i} represent the complex reflectivity at the *i*th point of the scene and the corresponding SAR data it produces, respectively. *I* is the total number of points in the scene.
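As a sanity check on this column-wise view, the following sketch verifies numerically that the stacked data vector equals the sum of the per-point contributions. The sizes and the random complex-valued model matrix are illustrative assumptions, not actual SAR kernels:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, I = 4, 3, 6    # aperture positions, samples per position, scene points

# Hypothetical discretized observation kernels C_m, stacked row-wise into C
C = rng.standard_normal((M * N, I)) + 1j * rng.standard_normal((M * N, I))
f = rng.standard_normal(I) + 1j * rng.standard_normal(I)  # reflectivities

# Phase history data of the whole scene: r = C f
r = C @ f

# Equivalent view: r as the sum of per-point data p_i = C_{cl-i} f(i)
p = [C[:, i] * f[i] for i in range(I)]
r_sum = np.sum(p, axis=0)

assert np.allclose(r, r_sum)
```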

Now consider the *i*th point in the scene as a point target having a motion which results in defocusing along the cross-range direction; in this paper, we particularly focus on motions of this type. The SAR data of this target can be expressed as [1, 2]:

$$\mathbf{p}_{i_{m_{e}}}=\mathbf{p}_{i_{m}}e^{j\phi_{i}(m)}$$

Here, **ϕ**_{i} represents the phase error caused by the motion of the target, and \(\mathbf{p}_{i_{m}}\) and \(\mathbf{p}_{i_{m_{e}}}\) are the phase history data for the stationary and the moving point target, respectively, at aperture position *m*. Similarly, this relation can be expressed in terms of the model matrix **C** as follows:

$$\mathbf{C}_{cl-i_{m}}(\phi)=\mathbf{C}_{cl-i_{m}}e^{j\phi_{i}(m)}$$

where **C**_{cl−i}(*ϕ*) is the *i*th column of the model matrix **C**(*ϕ*) that takes the movement of the objects into account, and \(\mathbf{C}_{cl-i_{m}}(\phi)\) is the part of **C**_{cl−i}(*ϕ*) for the *m*th aperture position. In the presence of additive observation noise, the observation model for the overall system becomes

$$\mathbf{g}=\mathbf{C}(\boldsymbol{\phi})\mathbf{f}+\mathbf{v}$$

where **v** is the observation noise. In this way, we have turned the moving object imaging problem into the problem of imaging a stationary scene with phase-corrupted data. The aim is to estimate **f** and **ϕ** from the noisy observation **g**.
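A numerical sketch of this phase-corrupted model is given below. The toy sizes, the random kernels, and the choice of a single moving point are assumptions for illustration; the sketch builds C(ϕ) by scaling the per-aperture block of each column and confirms that the columns of stationary points are untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, I = 4, 3, 6    # aperture positions, samples per position, scene points

C = rng.standard_normal((M * N, I)) + 1j * rng.standard_normal((M * N, I))
f = rng.standard_normal(I) + 1j * rng.standard_normal(I)

# Motion-induced phase errors phi[i, m]; only point 2 moves (sparse motion)
phi = np.zeros((I, M))
phi[2, :] = rng.uniform(-np.pi, np.pi, M)

# Build C(phi): the block of column i at aperture m is scaled by exp(j*phi_i(m))
C_phi = C.copy()
for m in range(M):
    rows = slice(m * N, (m + 1) * N)
    C_phi[rows, :] *= np.exp(1j * phi[:, m])[None, :]

# Noisy phase-corrupted observations: g = C(phi) f + v
v = 0.01 * (rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N))
g = C_phi @ f + v

assert np.allclose(C_phi[:, 0], C[:, 0])   # stationary point: column unchanged
```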

## 3 Sparsity-driven moving target SAR imaging

Let **β** be a vector of size *K*×1, where *K*=*MI*, that includes the phase errors corresponding to all points in the scene, for all aperture positions:

$$\boldsymbol{\beta}=\left[\boldsymbol{\beta}_{1}^{T},\,\boldsymbol{\beta}_{2}^{T},\,\dots,\,\boldsymbol{\beta}_{M}^{T}\right]^{T}$$

Here, **β**_{m} is the vector of phase errors for the *m*th aperture position and has the following form:

$$\boldsymbol{\beta}_{m}=\left[e^{j\phi_{1}(m)},\,e^{j\phi_{2}(m)},\,\dots,\,e^{j\phi_{I}(m)}\right]^{T}$$

The cost function to be minimized jointly over the field **f** and the phase error vector **β** is as follows:

$$J(\mathbf{f},\boldsymbol{\beta})=\left\|\mathbf{g}-\mathbf{C}(\boldsymbol{\beta})\mathbf{f}\right\|_{2}^{2}+\lambda_{1}\left\|\mathbf{f}\right\|_{1}+\lambda_{2}\left\|\boldsymbol{\beta}-\mathbf{1}\right\|_{1}$$

Here, **1** is a *K*×1 vector of ones and **β**(*k*) denotes the *k*th element of **β**. Since the number of moving points is usually much smaller than the total number of points in the scene, most of the *ϕ* values in the vector **β** are zero. Since the elements of **β** are of the form *e*^{jϕ}, the elements of the vector corresponding to the stationary scene points are equal to 1, whereas the elements corresponding to the moving points take various values depending on the amount of the phase error. Therefore, this sparsity of the phase errors is incorporated into the problem by using the regularization term ∥**β**−**1**∥_{1}.

The cost function is minimized using an iterative coordinate-descent algorithm. In the first step of the (*n*+1)st iteration, the cost function *J*(**f**, **β**) is minimized with respect to the field **f**. Due to the non-differentiability of the *l*_{1} norm at the origin, a smooth approximation is used [15]:

$$\left\|\mathbf{f}\right\|_{1}\approx\sum_{i=1}^{I}\left(\left|\mathbf{f}(i)\right|^{2}+\sigma\right)^{1/2}$$

where *σ* is a small positive constant. In each iteration, the field estimate is updated as follows:

$$\hat{\mathbf{f}}^{(n+1)}=\left(\mathbf{C}(\hat{\boldsymbol{\beta}}^{(n)})^{H}\mathbf{C}(\hat{\boldsymbol{\beta}}^{(n)})+\frac{\lambda_{1}}{2}\mathbf{W}\left(\hat{\mathbf{f}}^{(n)}\right)\right)^{-1}\mathbf{C}(\hat{\boldsymbol{\beta}}^{(n)})^{H}\mathbf{g}$$

where \(\mathbf{W}\left(\hat{\mathbf{f}}^{(n)}\right)\) is a diagonal matrix with the entries \(\left(\left|\hat{\mathbf{f}}^{(n)}(i)\right|^{2}+\sigma\right)^{-1/2}\) on its main diagonal. In the second step of the iteration, the phase errors are estimated for each aperture position. Noting that \(\mathbf{C}_{m}(\boldsymbol{\beta})\mathbf{f}=\mathbf{C}_{m}\mathbf{T}\boldsymbol{\beta}_{m}\), the phase errors for the *m*th aperture position are estimated as

$$\hat{\boldsymbol{\beta}}_{m}^{(n+1)}=\arg\min_{\boldsymbol{\beta}_{m}}\left\|\mathbf{g}_{m}-\mathbf{C}_{m}\mathbf{T}\boldsymbol{\beta}_{m}\right\|_{2}^{2}+\lambda_{2}\left\|\boldsymbol{\beta}_{m}-\mathbf{1}\right\|_{1}$$

Here, **1** is an *I*×1 vector of ones and **T** is a diagonal matrix, with the entries \(\hat{\mathbf{f}}(i)\) on its main diagonal, as follows:

$$\mathbf{T}=\operatorname{diag}\left\{\hat{\mathbf{f}}(1),\,\hat{\mathbf{f}}(2),\,\dots,\,\hat{\mathbf{f}}(I)\right\}$$

The model matrix for the *m*th aperture position is then updated using the estimated **β**_{m}(*i*)s. After these phase estimation and model matrix update procedures have been completed for all aperture positions, the algorithm moves on to the next iteration.
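The two alternating steps can be illustrated with a small numerical sketch. The toy sizes, the random model matrix, and the helper `J` are illustrative assumptions; the final check confirms the identity C_m(β)f = C_m T β_m that underlies the per-aperture phase estimation step:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, I = 3, 4, 5          # aperture positions, samples per position, points
lam1, lam2 = 0.1, 0.1      # regularization parameters (illustrative)

C = rng.standard_normal((M * N, I)) + 1j * rng.standard_normal((M * N, I))
f = rng.standard_normal(I) + 1j * rng.standard_normal(I)
beta = np.exp(1j * rng.uniform(-0.5, 0.5, (M, I)))   # row m is beta_m
g = np.concatenate([C[m*N:(m+1)*N] @ (beta[m] * f) for m in range(M)])

def J(f, beta):
    """Cost: data fidelity + l1 on the field + l1 on (beta - 1)."""
    pred = np.concatenate([C[m*N:(m+1)*N] @ (beta[m] * f) for m in range(M)])
    return (np.sum(np.abs(g - pred) ** 2)
            + lam1 * np.sum(np.abs(f))
            + lam2 * np.sum(np.abs(beta - 1.0)))

# Per-aperture identity used in the phase step: C_m(beta) f == C_m T beta_m
m = 0
T = np.diag(f)             # diagonal matrix of current field estimates
lhs = C[m*N:(m+1)*N] @ (beta[m] * f)
rhs = C[m*N:(m+1)*N] @ T @ beta[m]
assert np.allclose(lhs, rhs)
```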

### 3.1 Extensions

Within the same framework, we present two additional methods for the phase estimation step. Both of them can be regarded as extensions of our main method. One of the methods is based on the idea of using group-sparsity constraints whereas the other is based on using a low-rank sparse matrix decomposition for the phase error matrix.

#### 3.1.1 Group-sparsity based regularization

For the group-sparsity formulation, we extend the vector **β** of the previous section to a matrix whose columns are the **β**_{m} vectors:

$$\mathbf{Q}=\left[\boldsymbol{\beta}_{1},\,\boldsymbol{\beta}_{2},\,\dots,\,\boldsymbol{\beta}_{M}\right]$$

Here, **Q** is the matrix of phase errors, and each row of **Q** consists of the phase error values for all aperture positions, for a particular point in the scene. We expect each column of **Q** to be sparse across the rows, reflecting the expectation that there is a small number of moving pixels in the scene. However, no such sparsity is expected in general across the columns. This structure motivates imposing sparsity in a group-wise fashion, where the groups in our setting correspond to the rows of **Q**.

Since the number of moving points is much smaller than the total number of points in the scene, most of the *ϕ* values in the vector **β**, and subsequently in the matrix **Q**, are zero. Since the elements of **Q** are of the form *e*^{jϕ}, the elements of the rows corresponding to the stationary scene points are equal to 1, whereas the elements of the rows corresponding to the moving points take various values depending on the amount of the phase error. Therefore, this group-sparsity structure of the phase errors is incorporated into the problem by using the regularization term \(\sum ^{I}_{i=1}\left (\sum ^{M}_{m=1}\left |\mathbf {Q}(i,m)-1 \right |^{2}\right)^{1/2}\).
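This regularizer is the l2,1 norm of Q − 1 (row-wise l2 norms, summed). A small sketch, with an illustrative phase error matrix in which a single point moves, shows that stationary rows contribute exactly zero:

```python
import numpy as np

rng = np.random.default_rng(3)
I, M = 6, 4   # scene points (rows of Q), aperture positions (columns of Q)

# Phase error matrix Q = exp(j * phi); only point 2 moves (illustrative)
phi = np.zeros((I, M))
phi[2, :] = rng.uniform(-np.pi, np.pi, M)
Q = np.exp(1j * phi)

# Group-sparsity (l2,1) regularizer: sum over rows of the row-wise l2 norm
row_norms = np.linalg.norm(Q - 1.0, axis=1)
group_term = np.sum(row_norms)

# Stationary rows are exactly 1, so they contribute zero to the penalty
assert np.count_nonzero(row_norms > 1e-12) == 1
assert np.isclose(group_term, row_norms[2])
```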

With this regularization term, the phase estimation step of each iteration becomes

$$\hat{\boldsymbol{\beta}}=\arg\min_{\boldsymbol{\beta}}\left\|\mathbf{g}-\mathbf{H}\mathbf{D}\boldsymbol{\beta}\right\|_{2}^{2}+\lambda_{2}\sum^{I}_{i=1}\left(\sum^{M}_{m=1}\left|\mathbf{Q}(i,m)-1\right|^{2}\right)^{1/2}$$

where **H** and **D** are block-diagonal matrices having the following forms:

$$\mathbf{H}=\operatorname{blkdiag}\left\{\mathbf{C}_{1},\,\dots,\,\mathbf{C}_{M}\right\},\qquad\mathbf{D}=\operatorname{blkdiag}\left\{\mathbf{T},\,\dots,\,\mathbf{T}\right\}$$

Here, **C**_{m} denotes the submatrix for the part of the model matrix corresponding to the *m*th aperture position, and **T** is a diagonal matrix with the entries \(\hat {\mathbf {f}}(i)\) on its main diagonal. For stationary scene points, we expect the corresponding elements of **β** to be 1. Since in this step we want to use only the phase information and to suppress the effect of the magnitudes, the estimate \(\hat {\boldsymbol {\beta }}\) is first normalized so that its elements have unit magnitude, and then, for every aperture position, the model matrix is updated using the corresponding normalized phase estimates.

#### 3.1.2 Regularization via low-rank sparse decomposition

The matrix **Q** defined above can be formulated as the sum of a low-rank matrix and a sparse matrix. Let us explain with an example. If the *n*th and *k*th (*n*<*k*) points in the scene have motions and the rest of the scene is stationary, then **Q** can be expressed as the sum of a low-rank matrix **L**, whose entries are all equal to 1, and a sparse matrix **S**, whose only nonzero rows are the *n*th and *k*th rows:

$$\mathbf{Q}=\mathbf{L}+\mathbf{S}$$

Adding this decomposition of **Q** as a constraint to the optimization problem, we obtain the following cost function:

$$J(\mathbf{f},\boldsymbol{\beta},\mathbf{L},\mathbf{S})=\left\|\mathbf{g}-\mathbf{C}(\boldsymbol{\beta})\mathbf{f}\right\|_{2}^{2}+\lambda_{1}\left\|\mathbf{f}\right\|_{1}+\lambda_{L}\left\|\mathbf{L}\right\|_{*}+\lambda_{S}\left\|\mathbf{S}\right\|_{1}\quad\text{s.t.}\quad\mathbf{Q}=\mathbf{L}+\mathbf{S}$$

Here, **β** is the vector created by stacking the columns of the matrix **Q**, *λ*_{L} and *λ*_{S} are the regularization parameters, and ∥**L**∥_{∗} denotes the nuclear norm (trace norm) of the low-rank matrix **L**. Using the field estimate \(\hat {\mathbf {f}}\) from the first step, we estimate the phase errors by minimizing the corresponding augmented Lagrangian:

$$\mathcal{L}(\boldsymbol{\beta},\mathbf{L},\mathbf{S},\mathbf{A})=\left\|\mathbf{g}-\mathbf{H}\mathbf{D}\boldsymbol{\beta}\right\|_{2}^{2}+\lambda_{L}\left\|\mathbf{L}\right\|_{*}+\lambda_{S}\left\|\mathbf{S}\right\|_{1}+\left\langle\mathbf{A},\mathbf{Q}-\mathbf{L}-\mathbf{S}\right\rangle+\frac{\gamma}{2}\left\|\mathbf{Q}-\mathbf{L}-\mathbf{S}\right\|_{F}^{2}$$

where **A** is the Lagrange multiplier and *γ*>0 penalizes the violation of the constraint. To solve this minimization problem, we use the alternating direction method of multipliers (ADMM) [18]; the problem is solved similarly to the optimization problem in [19].
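A minimal sketch of the ADMM alternation for the decomposition Q = L + S is given below, using the standard singular-value-thresholding and soft-thresholding proximal operators. The toy matrix, the parameter values, and the iteration count are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
I, M = 8, 5
gamma, lamL, lamS = 1.0, 0.5, 0.1   # penalty and regularization (illustrative)

# Toy phase error matrix: all-ones background plus one moving row
Q = np.ones((I, M), dtype=complex)
Q[2, :] = np.exp(1j * rng.uniform(-np.pi, np.pi, M))

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh

def soft(X, tau):
    """Complex soft thresholding: prox of tau * l1 norm."""
    mag = np.abs(X)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * X, 0.0)

L = np.zeros_like(Q)
S = np.zeros_like(Q)
A = np.zeros_like(Q)                     # scaled Lagrange multiplier

for _ in range(300):                     # ADMM-style alternation
    L = svt(Q - S + A, lamL / gamma)     # low-rank update
    S = soft(Q - L + A, lamS / gamma)    # sparse update
    A = A + (Q - L - S)                  # multiplier update

assert np.linalg.norm(Q - L - S) < 1e-2  # the decomposition fits Q
```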

### 3.2 Motion compensation using ROI

The approach described in the previous section looks for potential motion everywhere in the scene, i.e., it handles each point separately, considering that each point may potentially have a different motion. However, moving points usually exist together in limited regions of a scene. Consider, for example, a scene containing a few linearly moving vehicles: all the points belonging to a particular vehicle have the same motion. In order to exploit such structure, both for computational gains and for improved robustness, we present a modified version of our method. First, we determine the range lines that are likely to contain moving objects. This generates regions of interest which we use to estimate the phase errors. Assuming that the moving points in each of these regions have the same motion, we perform space-invariant phase error estimation and compensation for each region. We now describe the overall phase error estimation step in detail.

Let **F** be the 2D conventional image (reconstructed by the polar-format algorithm). Since we assume that the field to be imaged has a sparse structure (strong scatterers on a background of weak reflectivities), range lines with much higher reflectivity values than the others are likely to contain strong scatterers (belonging to moving and/or stationary objects). To find the range lines with strong scatterers, we first calculate the mean and standard deviation of the reflectivities throughout the conventional image. The range lines having at least one pixel with a reflectivity greater than the mean plus one standard deviation are selected as potential range lines containing objects. To decide which of these range lines contain *moving* objects, we use the idea of [20], which is based on the mapdrift autofocus technique [21]. First, two images are reconstructed from the data corresponding to each of the two half apertures. While stationary objects lie in the same position in both images, moving objects do not, since the phase errors caused by moving objects are different for each sub-aperture data set. Therefore, if we compute the correlation coefficient for these range lines between the two sub-aperture images, we obtain small correlation coefficients for range lines containing moving objects. Consequently, range lines having a correlation coefficient less than a pre-determined threshold are declared to be range lines with potential moving objects, i.e., the ROI. We have empirically chosen this threshold to be 0.7. When one is not sure how to choose this threshold, the safe approach is to use a larger value, erring on the side of declaring more range lines as potentially containing moving objects, at the cost of a smaller reduction in computational load relative to the original version of our approach.
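The ROI selection logic can be sketched as follows. The toy sub-aperture images, their sizes, and the scatterer placements are assumptions for illustration, while the mean-plus-one-standard-deviation test and the 0.7 correlation threshold follow the description above:

```python
import numpy as np

rng = np.random.default_rng(5)
R, X = 32, 32   # range lines x cross-range samples (toy sub-aperture images)

img1 = 0.05 * rng.random((R, X))         # first half-aperture image
img2 = 0.05 * rng.random((R, X))         # second half-aperture image

img1[10, 12] = img2[10, 12] = 1.0        # stationary scatterer: same position
img1[20, 8], img2[20, 16] = 1.0, 1.0     # moving scatterer: shifted position

mean, std = img1.mean(), img1.std()
threshold = 0.7                          # empirically chosen threshold

roi_lines = []
for r in range(R):
    if img1[r].max() <= mean + std:      # no strong scatterer on this line
        continue
    a, b = img1[r] - img1[r].mean(), img2[r] - img2[r].mean()
    rho = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    if rho < threshold:                  # low correlation -> potential motion
        roi_lines.append(r)

assert roi_lines == [20]                 # only the moving object's line flagged
```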

After this simple region determination process, the framework constructed earlier in this paper can be used. While the field estimation step remains the same, phase error estimation is performed region-wise. We assume that there is a single object in each distinct ROI and that adjacent range lines correspond to the same object. Accordingly, we apply space-invariant focusing [12] for each distinct ROI. This reduces the number of unknown phase error terms significantly compared to our original approach and improves robustness in cases where the single-motion-per-ROI assumption is valid. One potential concern here is the background clutter level: since the single-motion assumption applies to all pixels in an ROI, the clutter reflectivities in the ROI must be small enough for this model to be accurate. In many cases, however, the clutter does not affect the phase error correction performance.

Let the parts of the model matrix and the field corresponding to a particular ROI be **C**_{roi} and **f**_{roi}, and let the parts of the model matrix and the field corresponding to the outside of this region be **C**_{out} and **f**_{out}, respectively. Then, the phase error **ϕ**_{roi} is estimated by minimizing the following cost function for every aperture position:

$$\hat{\phi}_{roi}(m)=\arg\min_{\phi(m)}\left\|\mathbf{g}_{roi_{m}}-e^{j\phi(m)}\,\mathbf{C}_{roi_{m}}\hat{\mathbf{f}}_{roi}\right\|_{2}^{2}$$

where **g**_{roi} is the phase history data corresponding to the ROI and is given by:

$$\mathbf{g}_{roi}=\mathbf{g}-\mathbf{C}_{out}\hat{\mathbf{f}}_{out}$$

If there are multiple moving objects in the scene, this procedure is applied sequentially to all regions with a potentially moving object. After the model matrix has been updated, the algorithm passes to the next iteration, by incrementing *n* and returning to the field estimation step.

## 4 Additional remarks

Before presenting experimental results, we find it valuable to mention some issues related to the proposed algorithms. Like other existing autofocus techniques, the proposed algorithms are insensitive to constant and linear phase errors (as a function of the aperture position). A constant phase on the data has no effect on the reconstructed image [22]. A linear phase, however, causes a spatial shift in the reconstructed image without blurring it. Although the proposed method can remove all types of phase errors (parametric or random) which cause blurring, it cannot handle the shifts arising due to linearly varying phase terms. Such a phase error can be compensated by appropriate spatial operations on the scene [23]. For our ROI-based approach, we have applied such a spatial operation to move the focused but shifted objects to their true positions. This operation is based on determining the weighted centroid of the binarized reflectivities in each distinct ROI of the conventionally reconstructed defocused image. Here, we make two assumptions: first, that each ROI involves only one moving object, and second, that the motion of the object causes a slowly varying phase error, e.g., a quadratic phase error, which causes a smearing-like blurring. Quadratic phase errors are very common: a constant velocity in the cross-range direction induces a quadratic phase error function in the data. Non-constant velocities can also be handled reasonably well by this operation if the data collection duration is relatively small. Note that the object centroid estimation procedure does not always give the exact true position of the object, but it provides quite a good approximation. In the next section, we demonstrate examples of the application of this procedure.
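The centroid-based repositioning can be sketched as follows; the toy ROI, the smeared-target values, and the 50% binarization level are illustrative assumptions:

```python
import numpy as np

# Toy defocused ROI from a conventionally reconstructed image: a target
# smeared along the cross-range (column) direction in a single range line
roi = np.zeros((16, 16))
roi[8, 4:10] = [0.2, 0.6, 1.0, 1.0, 0.6, 0.2]

# Binarize at 50% of the peak, then take the reflectivity-weighted centroid
mask = roi > 0.5 * roi.max()
rows, cols = np.nonzero(mask)
w = roi[rows, cols]
centroid = (np.sum(rows * w) / w.sum(), np.sum(cols * w) / w.sum())

# The focused object is then shifted to this estimated position
assert np.isclose(centroid[0], 8.0)
assert np.isclose(centroid[1], 6.5)
```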

## 5 Experimental results

SAR system parameters for the experiments in Fig. 1:

| Parameter | Value |
| --- | --- |
| Range resolution | 1 m |
| Wavelength | 0.02 m |
| Distance between the SAR platform and patch center | 30,000 m |
| Platform velocity | 300 m/s |
| Aperture time | 1 s |

In the first experiment, the scene contains two objects moving with constant cross-range velocities of 2 m/s and 4 m/s. An object moving with a constant cross-range velocity induces a quadratic phase error in the data:

$$\phi(t_{s})=\frac{4\pi\,v_{p}\,v_{cr}}{\lambda\,R}\,t_{s}^{2}$$

where *t*_{s} is the slow-time variable (the continuous variable along the aperture), *v*_{cr} is the constant cross-range velocity of the object, *v*_{p} is the platform velocity, *λ* is the wavelength, and *R* is the distance between the SAR platform and the patch center. According to this relationship, the object with velocity 2 m/s and the object with velocity 4 m/s induce quadratic phase errors, defined over an aperture −*T*/2≤*t*_{s}≤*T*/2, with center-to-edge amplitudes of *π* radians and 2*π* radians, respectively. In Fig. 1, the results for this experiment are displayed. In the results for conventional imaging and sparsity-driven imaging without any phase error correction, the defocusing and artifacts in the reconstructed images caused by the moving objects can be clearly observed. In contrast, the images reconstructed by the proposed method are well focused and exhibit the advantages of sparsity-driven imaging such as high resolution, reduced speckle, and reduced sidelobes. For the images of the first experiment, we provide the corresponding colormap as well. For improved visibility, the logarithm of the intensities is displayed, and the interval of the colormap is chosen as [−40, 0]. All images in the paper are displayed using the same colormap.
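These amplitudes can be checked numerically. The specific relation ϕ(t_s) = 4π v_p v_cr t_s² / (λR) used below is our reconstruction of the standard constant-cross-range-velocity result; it reproduces the stated π and 2π amplitudes with the system parameters of this experiment:

```python
import numpy as np

wavelength = 0.02   # m
R = 30000.0         # distance between the SAR platform and patch center, m
v_p = 300.0         # platform velocity, m/s
T = 1.0             # aperture time, s

def center_to_edge_amplitude(v_cr):
    """Quadratic phase error at the aperture edge t_s = T/2 (radians),
    assuming phi(t_s) = 4*pi*v_p*v_cr*t_s**2 / (wavelength*R)."""
    t_edge = T / 2.0
    return 4.0 * np.pi * v_p * v_cr * t_edge ** 2 / (wavelength * R)

assert np.isclose(center_to_edge_amplitude(2.0), np.pi)        # 2 m/s target
assert np.isclose(center_to_edge_amplitude(4.0), 2.0 * np.pi)  # 4 m/s target
```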

In the next experiment, the phase history data of the moving target are corrupted with a quadratic phase error with an amplitude of *π* radians. Figure 4a shows the original image; the target is marked with a red rectangle. In Fig. 4b, c, the conventional image with the defocused target and the image produced by sparsity-driven imaging without any motion compensation are displayed, respectively. As seen from the image in Fig. 4c, sparsity-driven imaging by itself cannot handle the motion-induced smearing in the image. Figure 4d demonstrates the result of our ROI-based method. We observe that with the ROI-based method, the artifacts caused by the moving object are completely removed and a focused image of the object is obtained.

The next experiment involves a larger scene with an extended target, whose phase history data are corrupted with a quadratic phase error with an amplitude of *π* radians. Figure 5a shows the original image. In Fig. 5b, c, the conventional image with the defocused target and the image produced by sparsity-driven imaging are displayed. As seen from the image in Fig. 5d, the ROI-based method produces promising results even on larger scenes containing extended targets.

In the final experiment, the scene contains multiple moving targets. The phase history data of one target are corrupted with a quadratic phase error of amplitude *π*, and the phase history data of the third one are corrupted with a quadratic phase error of amplitude 1.5*π*. The results show that both approaches are capable of correcting the phase errors. In this example, the only visual difference between the images reconstructed by the two approaches is that the target-to-background ratio of the image obtained by the group-sparsity approach is better than that of the image obtained by the low-rank sparse decomposition approach. This may have resulted from non-optimal parameter selection.

## 6 Conclusions

We have presented a sparsity-driven framework for SAR moving target imaging in which sparsity information about both the field and the phase errors is incorporated into the problem. To enforce the sparsity of the phase errors, three different regularization terms have been proposed within the same framework. The method produces high-resolution images thanks to its sparsity-driven nature and simultaneously removes phase errors that cause defocusing in the cross-range direction. Additionally, we have provided an ROI-based variant of the method for reduced computational cost and efficient phase error estimation.

## Notes

### Acknowledgements

This work was partially supported by the Scientific and Technological Research Council of Turkey under Grant 105E090, and by a Turkish Academy of Sciences Distinguished Young Scientist Award.

### Competing interests

The authors declare that they have no competing interests.

## References

1. C.V. Jakowatz Jr., D.E. Wahl, P.H. Eichel, Refocus of constant-velocity moving targets in synthetic aperture radar imagery. Proc. SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, 85–95 (1998).
2. J.R. Fienup, Detecting moving targets in SAR imagery by focusing. IEEE Trans. Aerosp. Electron. Syst. **37**(3), 794–809 (2001).
3. J.K. Jao, Theory of synthetic aperture radar imaging of a moving target. IEEE Trans. Geosci. Remote Sens. **39**(9), 1984–1992 (2001).
4. M.J. Minardi, L.A. Gorham, E.G. Zelnio, Ground moving target detection and tracking based on generalized SAR processing and change detection. Proc. SPIE 5808, Algorithms for Synthetic Aperture Radar Imagery XII, 156–165 (2005).
5. W.G. Carrara, R.S. Goodman, R.M. Majewski, *Spotlight Synthetic Aperture Radar: Signal Processing Algorithms* (Artech House, Norwood, MA, 1995).
6. I. Stojanovic, W.C. Karl, Imaging of moving targets with multi-static SAR using an overcomplete dictionary. IEEE J. Sel. Topics Signal Process. **4**(1), 164–176 (2010).
7. A.S. Khwaja, J. Ma, Applications of compressed sensing for SAR moving-target velocity estimation and image compression. IEEE Trans. Instrum. Meas. **60**(8), 2848–2860 (2011).
8. S. Zhu, A. Mohammad-Djafari, H. Wang, B. Deng, X. Li, J. Mao, Parameter estimation for SAR micromotion target based on sparse signal representation. EURASIP J. Adv. Signal Process. **2012**(1) (2012).
9. Q. Wu, M. Xing, C. Qiu, B. Liu, Z. Bao, T.S. Yeo, Motion parameter estimation in the SAR system with low PRF sampling. IEEE Geosci. Remote Sens. Lett. **7**(3), 450–454 (2010).
10. S. Bidon, J.Y. Tourneret, L. Savy, Sparse representation of migrating targets in low PRF wideband radar, in IEEE Radar Conference (RADAR), 314–319 (2012).
11. G. Newstadt, E. Zelnio, A. Hero, Moving target inference with Bayesian models in SAR imagery. IEEE Trans. Aerosp. Electron. Syst. **50**(3), 2004–2018 (2014).
12. N.Ö. Önhon, M. Çetin, A sparsity-driven approach for joint SAR imaging and phase error correction. IEEE Trans. Image Process. **21**(4), 2075–2088 (2012).
13. N.Ö. Önhon, M. Çetin, SAR moving target imaging in a sparsity-driven framework, in SPIE Optics+Photonics, Wavelets and Sparsity XIV, vol. 8138, 813806 (2011).
14. N.Ö. Önhon, M. Çetin, Sparsity-driven image formation and space-variant focusing for SAR, in IEEE Int. Conf. on Image Processing (ICIP), 173–176 (2011).
15. M. Çetin, W.C. Karl, Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. Image Process. **10**(4), 623–631 (2001).
16. S. Samadi, M. Çetin, M.A. Masnadi-Shirazi, Sparse representation-based synthetic aperture radar imaging. IET Radar Sonar Navig. **5**(2), 182–193 (2011).
17. S. Boyd, L. Vandenberghe, *Convex Optimization* (Cambridge University Press, Cambridge, 2004).
18. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. **3**(1), 1–122 (2011).
19. A. Soganli, M. Çetin, Low-rank sparse matrix decomposition for sparsity-driven SAR image reconstruction, in 3rd Int. Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 239–243 (2015).
20. J.R. Moreira, W. Keydel, A new MTI-SAR approach using the reflectivity displacement method. IEEE Trans. Geosci. Remote Sens. **33**(5), 1238–1244 (1995).
21. T.M. Calloway, G.W. Donohoe, Subaperture autofocus for synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. **30**(2), 617–621 (1994).
22. C.V. Jakowatz Jr., D.E. Wahl, P.H. Eichel, D.C. Ghiglia, P.A. Thompson, *Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach* (Springer, New York, 1996).
23. J.R. Fienup, Synthetic-aperture radar autofocus by maximizing sharpness. Opt. Lett. **25**(4), 221–223 (2000).

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.