Nanoscale Photonic Imaging, pp. 35–70

# Coherent X-ray Imaging


## Abstract

This chapter briefly summarizes some main concepts of coherent X-ray imaging. More specifically, we consider lensless X-ray imaging based on free-space propagation. It is meant as primer and tutorial which should help to understand later chapters of this book devoted to X-ray imaging, phase contrast methods, and optical inverse problems. We start by an introduction to scalar wave propagation, first in free space, followed by propagation of short wavelength radiation within matter. This provides the basic tools to consider the mechanisms of coherent image formation in a lensless X-ray microscope. The recorded intensities are inline holograms created by self-interference behind the object. We then present single-step and iterative fixed-point techniques based on alternating projections onto constraint sets as tools to decode the measured intensities (phase retrieval). The chapter closes with a brief generalization of two dimensional coherent imaging to three dimensional imaging by tomography.

## 2.1 X-ray Propagation

Coherent X-ray imaging is based on wave-optical propagation of electromagnetic waves, including free-space propagation and the interaction of short wavelength light with matter. Here we present an overview of fundamental principles of X-ray imaging and field propagation, with references to relevant literature. We first justify the use of scalar wave theory and approximations of paraxial (parabolic) wave equations. Then we show how to compute the wavefield at a distance *d* along the optical axis *z* with respect to a known field distribution in a plane at \(z=0\), assuming free space between planes \(z=0\) and \(z=d\). Next, we address the projection approximation which is ubiquitous in X-ray imaging to describe the complex transmission function of an optically thin object. Finally, we present finite difference equations as a more general tool to treat X-ray propagation in matter and objects which cannot be approximated as thin.

### 2.1.1 Scalar Diffraction Theory and Wave Equations

Starting from Maxwell's equations in a charge- and current-free medium, each frequency component of the electric field obeys the *Helmholtz equation* (HE)

\[ \left[ \nabla^2 + n^2(\mathbf{r},\omega)\,\frac{\omega^2}{c^2} \right] \mathbf{E}(\mathbf{r},\omega) = 0, \]

with the vacuum speed of light *c*, angular frequency \(\omega \), and the complex refractive index *n* of the propagation medium. Here, \({\mathbf {E}}(\mathbf {r}, \omega )\) denotes the time-domain Fourier transform of the electric vector field \(\mathbf { \mathcal {E} } (\mathbf {r}, t)\). For X-rays, the refractive index is conventionally written as \(n = 1 - \delta + i\beta \), with

\[ \delta = \frac{r_e \lambda^2}{2\pi}\, \rho_a \left( Z + f^{\prime}(\omega) \right), \qquad \beta = \frac{r_e \lambda^2}{2\pi}\, \rho_a\, f^{\prime\prime}(\omega), \]

where \(\rho_a\) is the atomic number density, *Z* is the atomic number, \(r_e= {2.82 \cdot 10^{-15}}\mathrm{\,m}\) the Thomson scattering length; \(f^{\prime }(\omega ) \ll Z\) and \(f^{\prime \prime }(\omega ) \ll Z\) are the dispersion and absorption corrections. For mixed elemental composition, the indices are weighted averages according to the local stoichiometry of elements.

For a plane wave propagating along the optical axis *z*, we have \(\mathbf{E} \propto \exp(i n k z)\) with \(k = \omega/c\), so that \(\delta \) causes a phase shift relative to vacuum and \(\beta \) an exponential decay of the *optical intensity*. For time-harmonic fields and requiring Hermitian symmetry \({\mathbf {E}}(-\omega ) = {\mathbf {E}}^*(\omega )\), one can write the field based on a discrete sum of frequencies \(\omega _i\) for \(i \in \mathbb {N}\); the time-averaged modulus of the Poynting vector, i.e. the optical intensity *I*, can be written as [1]

\[ I = \frac{c\,\varepsilon_0}{2} \sum_i \left| \mathbf{E}(\omega_i) \right|^2 . \]

In the paraxial regime, the field propagates predominantly along the optical axis *z*, and each scalar component can be written as a slowly varying envelope *u* modulating a plane wave. The second derivative of the envelope along *z* can be neglected, given \( \left| \frac{\partial ^2 u}{\partial z^2} \right| \ll \left| k \frac{\partial u}{\partial z} \right| \) since \(\partial _z^2 u \ll k^2 u\), leading to the paraxial (or parabolic) wave equation

\[ \frac{\partial u}{\partial z} = \frac{i}{2k} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u + \frac{ik}{2} \left( n^2 - 1 \right) u . \]

This equation is of parabolic type: It requires initial values at low *z* along with lateral boundary conditions, but no values at the high *z* boundary of the computational domain are required. For these reasons, paraxial approximations have become an important tool in X-ray optics [2, 3, 4], including generalizations to time-dependent propagation problems via the spectral approach [1]. As in Schrödinger’s equation, the parabolic wave equation can be rearranged into an evolution equation, first order in *z*, for the full field \(\psi \) rather than the envelope *u*, defined by \(\psi = u \, \exp (- i k z)\) [1].

### 2.1.2 Propagation in Free Space

Free-space propagation between parallel planes is most conveniently described by the *angular spectrum approach* as presented in the textbook of Paganin [6]. Again, we assume a time independent, monochromatic wave, i.e. we treat a single component \(\psi _\omega (\mathbf {r})\) of the spectrum with angular frequency \(\omega \) and corresponding wavelength \(\lambda \). A general time-dependent field of finite bandwidth is then computed as superposition of its monochromatic components by

\[ \psi(\mathbf{r},t) = \int \mathrm{d}\omega \; \psi_\omega(\mathbf{r})\, e^{-i\omega t} . \]

Plane waves \(\exp(i\mathbf{k}\cdot\mathbf{r})\) with \(|\mathbf{k}| = k = 2\pi/\lambda \) are elementary solutions of the free-space Helmholtz equation, and the *z*-dependent part of the plane wave can be separated by

\[ \exp(i\mathbf{k}\cdot\mathbf{r}) = \exp\!\left[ i (k_x x + k_y y) \right] \exp\!\left[ i z \sqrt{k^2 - k_x^2 - k_y^2} \right] . \]

Hence, for a single plane-wave component, propagation over a distance *z* can be calculated by a simple multiplication with the so-called “free-space propagator” \(~\exp \left[ iz \sqrt{k^2-k_x^2-k_y^2} \right] \). To propagate an arbitrary wave field \(\psi \), we express \(\psi \) given in plane \(z=0\) by its Fourier transform \(\hat{\psi }( k_x, k_y, z=0 )\), multiply each plane-wave component by the propagator, and transform back:

\[ \psi(x,y,z) = \frac{1}{2\pi} \iint \hat{\psi}(k_x,k_y,0)\; e^{i z \sqrt{k^2 - k_x^2 - k_y^2}}\; e^{i(k_x x + k_y y)}\, \mathrm{d}k_x\, \mathrm{d}k_y . \tag{2.29} \]

This expression yields the field at arbitrary *z* from the given field in the *xy*-plane at \(z=0\). This allows interpreting an electromagnetic disturbance in a plane at \(z=0\) as a superposition of plane waves of fixed modulus of the wavevector leaving the plane of interest under different angles.

In many applications the wavefield propagates at small angles to the optical axis *z*. In this case, \(k_x\) and \(k_y\) are much smaller compared to \(k_z\). As a consequence, we can approximate the free-space propagator by

\[ \exp\!\left[ i z \sqrt{k^2-k_x^2-k_y^2} \right] \simeq e^{ikz}\, \exp\!\left[ -\frac{i z \left( k_x^2 + k_y^2 \right)}{2k} \right] =: G_k(k_x,k_y;z) , \tag{2.31} \]

so that the field at distance *z* is

\[ \psi(x,y,z) = \mathcal{F}^{-1}\!\left[ G_k(k_x,k_y;z)\; \mathcal{F}\left[ \psi \right](k_x,k_y,0) \right](x,y) , \tag{2.32} \]

with \(\mathcal{F}\) the two dimensional Fourier transform in the *xy*-plane. Application of the Fourier convolution theorem to (2.32) and expansion of the exponent in the integrand leads to the Fresnel diffraction integral

\[ \psi(x,y,z) = \frac{e^{ikz}}{i\lambda z}\, e^{\frac{ik}{2z}\left( x^2+y^2 \right)} \iint \psi(x',y',0)\; e^{\frac{ik}{2z}\left( x'^2+y'^2 \right)}\; e^{-\frac{ik}{z}\left( x x' + y y' \right)}\, \mathrm{d}x'\, \mathrm{d}y' . \tag{2.33} \]

Let the field exhibit structures on different lateral length scales, and let *a* be the smallest such structure of interest, so that the argument of the exponential, the so-called chirp, is governed by the dimensionless ratio \(\frac{a^2}{\lambda z} =:F\) called the Fresnel number. For large propagation distances (compared to wavelength and *a*), the Fresnel number approaches zero and hence the chirp function is close to unity: The Fresnel diffraction integral becomes the Fourier transform of the wavefield. This limiting case is known as the Fraunhofer far field approximation. Fresnel numbers close to one indicate the optical near field: The chirp function cannot be neglected. Indeed, as we will see below, it is the chirp function that makes numerical free-space propagation challenging. Importantly, paraxial propagation is governed only by a single parameter *F*, so that all simulations can be carried out in natural units of pixel size and with a single unitless parameter *F*.

The question remains which choice of *a* is reasonable. A natural choice in the discrete setting of numerical calculations is the pixel size \(a=\varDelta x\). The Fraunhofer regime \(F\ll 1 \) is then quickly reached with increasing *z*. These ‘far field’ holograms, however, do not look like far field diffraction patterns (squared Fourier transform of the object), which is usually associated with the Fraunhofer approximation. The reason is that there is always another length scale *a* to be considered, namely the beam size. Only when the Fresnel number—computed for all object length scales *and* the length scale of the beam—is much smaller than one do we get the conventional Fraunhofer diffraction pattern without mixing of object wave and primary wave.
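As a minimal numerical sketch of this classification (not taken from the chapter), the Fresnel number for a feature size \(a\) can be computed and mapped to a regime; the threshold \(10^{-3}\) separating near field and far field is an illustrative assumption, not a sharp physical boundary.

```python
def fresnel_number(a, wavelength, z):
    """Dimensionless Fresnel number F = a^2 / (lambda * z) for feature size a."""
    return a**2 / (wavelength * z)

def regime(F):
    """Rough, illustrative classification of the imaging regime by F."""
    if F >= 1.0:
        return "contact/direct-contrast"
    if F > 1e-3:  # assumed threshold, for illustration only
        return "holographic near field"
    return "Fraunhofer far field"

# Example: a = 1 um feature, lambda = 0.1 nm, z = 1 cm gives F = 1
F = fresnel_number(1e-6, 0.1e-9, 1e-2)
```

Carrying out simulations in these natural units means only `F` (and the grid size) enters the propagation code, as stated in the text.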

### 2.1.3 The Fresnel Scaling Theorem

In the divergent beam geometry of Fig. 2.3, with source-object distance \(z_{01}\), source-detector distance \(z_{02}\) and object-detector distance \(z_{12}=z_{02}-z_{01}\), propagation can be mapped onto an equivalent parallel beam geometry with magnification \(M = z_{02}/z_{01}\) and effective propagation distance \(z_\text {eff} = z_{12}/M\); this is known as the *Fresnel Scaling Theorem* (FST). Following the FST, numerical propagation, which will be discussed in the next section, is always performed in the effective parallel beam coordinate system. But before doing so, we note down an immediate consequence of spherical beam propagation, which is also illustrated in Fig. 2.3. While in the parallel beam geometry the Fresnel number *F* decreases with increasing distance between object and detector, the opposite is true for the divergent beam setting. As \(z_{01}\) is reduced (at constant \(z_{02}\)), the imaging regime becomes more and more holographic (decrease in *F*) since the decrease in effective pixel size \(\varDelta x_\text {eff}=\varDelta x/M\) enters quadratically and outweighs the effect of \(z_\text {eff}\). For the minima of the contrast transfer function, which will be discussed further below, this means that their number increases as \(z_{01}\) is decreased (or equivalently *M* is increased), see the dashed lines in Fig. 2.3c. In order to plot the divergent beam geometry in unitless variables, the intensity of a Gaussian illumination can serve as a model for the diffraction limited beam, with the beam width *W* plotted as a function of 1/*M* on a double-logarithmic scale, see Fig. 2.3c. Accordingly, the blue line shows the increase in beam width, while the orange line designates the effective pixel size \(\varDelta x_\text {eff}= (\varDelta x/z_{02}) z_{01}\), i.e. the demagnified size of a single detector pixel. In unitless variables this line reaches \(W=\varDelta x/w_\text {max}\) for \(Z=1\) and crosses the dashed red horizontal line at \(Z=w_0/\varDelta x\). The dashed red line separates lateral length scales which can be resolved (above) from those which are unresolved (below), based on the limited numerical aperture (N.A.).
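The FST bookkeeping can be sketched in a few lines; the function names are illustrative, and the formulas follow the scaling relations of this section (magnification \(M=z_{02}/z_{01}\), effective distance \(z_{12}/M\), effective pixel \(\varDelta x/M\)).

```python
def fst_effective_geometry(z01, z02, dx):
    """Map a divergent (cone-beam) geometry onto the equivalent parallel-beam
    geometry via the Fresnel scaling theorem.
    z01: source-object distance, z02: source-detector distance,
    dx: physical detector pixel size (all in meters)."""
    M = z02 / z01          # geometric magnification
    z12 = z02 - z01        # object-detector distance
    z_eff = z12 / M        # effective parallel-beam propagation distance
    dx_eff = dx / M        # effective (demagnified) pixel size
    return M, z_eff, dx_eff

def fresnel_number_eff(z01, z02, dx, wavelength):
    """Fresnel number of one effective pixel in the equivalent geometry."""
    M, z_eff, dx_eff = fst_effective_geometry(z01, z02, dx)
    return dx_eff**2 / (wavelength * z_eff)
```

Because \(\varDelta x_\text{eff}\) enters quadratically, decreasing \(z_{01}\) at fixed \(z_{02}\) lowers the effective Fresnel number, i.e. the regime becomes more holographic, as argued above.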

### 2.1.4 Numerical Implementation of Free-Space Propagation

The numerical implementation of free-space propagation is not trivial. In particular, sufficient sampling of chirp functions has to be guaranteed. Extensive literature can be found in [7, 10, 11]. There are two main options to implement propagation: the first uses two Fourier transformations and has equal coordinate systems in source and destination plane; the second is based on a single transformation and can work with differently sized and sampled planes. We start with the first option, which is based on (2.29), according to which propagation can be designed as a filter operation on the wavefield in Fourier space. The corresponding filter is the free-space propagator, which in paraxial approximation is given by (2.31) as a function \(G_k(k_x, k_y; z)\) in reciprocal space. The propagator can also be written down analytically in real space (the impulse response function), and then be numerically Fourier transformed. Importantly, the coordinate systems of input and output field are identical, and the propagation is based on two fast Fourier transform (FFT) operations. For the second approach, the propagation can be directly calculated based on (2.33), involving a single Fourier transform, either by a single FFT or by other numerical solutions of the Fourier integral, e.g. for non-equidistant sampling. This approach is well suited for cases where the pixel sizes of input and output must differ, e.g. to cover the field of view in a detector after diffraction broadening.

For the two-transform approach, the reciprocal space chirp \(G_k\) is sufficiently sampled only if \(F = \varDelta x^2 / (\lambda z) \ge 1/N\), where *L* is the field of view consisting of *N* pixels: \(L = N \varDelta x\). Hence, only a large number *N* of pixels or a short propagation distance result in aliasing-free sampling of the propagated field. To find a remedy, one can artificially increase the number of pixels by zero-padding. Yet, this drastically decreases the computational speed. Alternatively, one can also write down the impulse response function, i.e. the real space counterpart of the reciprocal space chirp:

\[ g_z(x,y) = \frac{e^{ikz}}{i\lambda z}\, \exp\!\left[ \frac{ik}{2z}\left( x^2 + y^2 \right) \right] . \]

For the single-transform implementation based on (2.33), only the chirp inside the integral has to be sampled, while the remaining quadratic phase factor acts in the observation plane. Since only intensities are recorded in that *plane*, the output field should be regarded as a function of spatial frequencies, rather than spatial coordinates of the detector. In this way one can dispense with the observation chirp altogether.
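A minimal sketch of the two-FFT (transfer-function) approach, assuming a square \(N\times N\) grid and the paraxial propagator of this section; the sampling guard encodes the criterion \(F \ge 1/N\) discussed for the reciprocal space chirp. Function name and error handling are illustrative.

```python
import numpy as np

def fresnel_propagate(psi, dx, wavelength, z):
    """Paraxial free-space propagation by Fourier filtering (two FFTs).
    psi: complex field on an N x N grid with pixel size dx (meters)."""
    N = psi.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular frequencies
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    # paraxial propagator G_k(kx, ky; z); the global phase exp(ikz) is kept
    G = np.exp(1j * k * z) * np.exp(-1j * z * (KX**2 + KY**2) / (2 * k))
    # sampling guard: the reciprocal-space chirp is aliasing-free for F >= 1/N
    F = dx**2 / (wavelength * z)
    if F < 1.0 / N:
        raise ValueError("chirp undersampled: increase N, pad, or reduce z")
    return np.fft.ifft2(G * np.fft.fft2(psi))
```

Since \(|G_k| = 1\), this filter is unitary: the total intensity is conserved, which is a useful sanity check for any implementation.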

### 2.1.5 X-ray Propagation in Matter

If the object is optically thin, such that diffraction inside the object (*volume effects*) can be neglected, the propagation effects can be described by a simple multiplication, as shown below. For a more general case, (2.16) or (2.17) have to be solved for the paraxial case. This is a formidable task and analytical solutions only exist for a few very special cases (sphere, slab waveguide). Neglecting atomic-scale variations which would become important only for perfect crystals, we can write the index of refraction in continuum approximation as in (2.3) with the real-valued decrement or dispersion term \(\delta \) given by [13]

with \(\rho_a\) the atomic number density, *Z* the number of electrons in the atom and \(f^{\prime }(E)\) the real part of the atomic form factor correction at energy *E*. For the imaginary part (absorption term) \(\beta \) we have

\[ \beta(\mathbf{r}) = \frac{r_e \lambda^2}{2\pi}\, \rho_a(\mathbf{r})\, f^{\prime\prime}(E) . \]

The corrections \(f^{\prime }(E)\) and \(f^{\prime \prime }(E)\) at energy *E* are tabulated for each element in the International Tables for Crystallography and are also available online. Away from absorption edges, the real part of \(n(\mathbf {r})\) only depends on the total electron density (summed over all elements) \(\rho(\mathbf{r})\), via \(\delta = \frac{r_e \lambda^2}{2\pi}\, \rho \), and on the photon energy *E*. Within the projection approximation, an optically thin object is then described by its complex transmission function: the exit wave is the incident field multiplied by

\[ \exp\!\left[ ik \int \left( n(\mathbf{r}) - 1 \right) \mathrm{d}z \right] = \exp\!\left[ -ik \int \delta \, \mathrm{d}z - k \int \beta \, \mathrm{d}z \right] . \]

The projection approximation is valid for sufficiently small spatial frequencies, i.e. for lateral structure sizes *a* which are large compared to \(\sqrt{\lambda T}\) for an object of thickness *T*, fulfilling \(a^2/(\lambda T) \gg 1\).
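The projection approximation can be sketched as a one-liner, assuming the projected quantities \(\int\delta\,\mathrm{d}z\) and \(\int\beta\,\mathrm{d}z\) are already given; the sign convention follows \(n = 1-\delta +i\beta\) as used in this chapter, and the function name is illustrative.

```python
import numpy as np

def transmission(delta_proj, beta_proj, wavelength):
    """Complex transmission function in the projection approximation.
    delta_proj = integral of delta dz, beta_proj = integral of beta dz
    (both in meters). The exit wave is psi = t * psi_in with
    t = exp(-i*k*delta_proj - k*beta_proj), k = 2*pi / wavelength."""
    k = 2 * np.pi / wavelength
    return np.exp(-1j * k * delta_proj - k * beta_proj)
```

A pure phase object (`beta_proj = 0`) keeps unit modulus and only shifts the phase by \(-k\int\delta\,\mathrm{d}z\); absorption reduces the modulus by \(\exp(-k\int\beta\,\mathrm{d}z)\).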

### 2.1.6 Propagation by Finite Difference Equations

If the propagation problem is restricted to two dimensions (the optical axis *z* plus a single lateral direction *x*), i.e. if the index of refraction is independent of *y*, following (2.19), the parabolic wave equation can be written as

\[ \frac{\partial u}{\partial z} = \frac{i}{2k}\, \frac{\partial^2 u}{\partial x^2} + \frac{ik}{2} \left( n^2(x,z) - 1 \right) u . \tag{2.63} \]

The field is discretized on \(N_x\) grid points along *x*, and \(N_z+1\) grid points along the optical axis. Importantly, and contrarily to elliptical partial differential equations, no values have to be set for \(u_0(x,N_z+1)\). The initial-boundary-value problem given in (2.63) and (2.66) can be solved numerically using finite-difference schemes [3, 19]. Here, the crucial point is the update scheme (‘stencil’), see Fig. 2.4b. For the two dimensional case (one dimension for the optical axis, one lateral dimension), the stencil introduced by Crank and Nicolson [20] gives second order accuracy in steps \(\varDelta z\) along the optical axis and \(\varDelta x\) perpendicular to the optical axis [21]. Accordingly, (2.63) is approximated by finite-difference expressions [3] which couple each slice of the field to the next: the field in slice \(n+1\) follows from the known field in slice *n*, by solving a system of \(N_x\) linear equations. Furthermore, because \(M^n\) is tridiagonal, this can be carried out with \(\mathcal O(N_x)\) operations [21]. Finally, the process is repeated sequentially for all \(N_z\) grid points, resulting in numerical complexity \(\mathcal O(N_x\times N_z)\).
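A minimal numpy sketch of one Crank–Nicolson step for (2.63); for compactness a dense solve and a constant refractive-index decrement stand in for the \(\mathcal O(N_x)\) tridiagonal (Thomas) solver and a spatially varying \(n(x,z)\) used in practice. Function name, Dirichlet boundaries, and the scalar `n_minus_1` argument are illustrative assumptions.

```python
import numpy as np

def cn_step(u, dz, dx, k, n_minus_1=0.0):
    """One Crank-Nicolson step of du/dz = (i/2k) d2u/dx2 + (ik/2)(n^2-1) u.
    u: complex field on Nx lateral grid points; n_minus_1: scalar (n - 1).
    Dense solve for clarity; production codes use a tridiagonal solver."""
    Nx = u.size
    # second-difference operator with Dirichlet boundaries (u = 0 outside)
    D = (np.diag(-2.0 * np.ones(Nx)) + np.diag(np.ones(Nx - 1), 1)
         + np.diag(np.ones(Nx - 1), -1)) / dx**2
    A = 1j * D / (2 * k) + 1j * k * ((1.0 + n_minus_1)**2 - 1.0) / 2 * np.eye(Nx)
    I = np.eye(Nx)
    lhs = I - 0.5 * dz * A          # implicit half step (tridiagonal matrix)
    rhs = (I + 0.5 * dz * A) @ u    # explicit half step
    return np.linalg.solve(lhs, rhs)
```

For real \(n\) the step is a Cayley transform of an anti-Hermitian operator and therefore exactly norm-conserving, mirroring intensity conservation in a lossless medium; a complex \(n\) (absorption) makes the norm decay.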

## 2.2 Coherent Image Formation

While in electron microscopy the lens problems were eventually overcome, a similar challenge appeared in X-ray microscopy: X-ray lenses (diffractive or refractive) lack the efficiency and numerical aperture which we are used to from visible light lenses. At the same time, X-rays are attractive as a microscopy probe owing to short wavelength and hence potentially high resolution as well as high penetration power. Therefore, Gabor’s idea of coherent lensless imaging re-emerged in X-ray optics and microscopy, once radiation of high brilliance and sufficient coherence became available from synchrotron sources, in particular after the invention of undulators. In contrast to coherent diffractive imaging (CDI), where the primary beam is blocked behind the sample and the diffracted radiation is recorded without interference with the primary or reference wave, and in direct analogy to Gabor’s setup, X-ray holography is based on the interference between primary and scattered waves on the detector (see Fig. 2.5).

### 2.2.1 Holographic Imaging in Full Field Setting

Inline holography is a full field imaging technique, in which many resolution elements of object and detector are illuminated in parallel, typically employing a mega-pixel detector with sufficiently small pixel size to record the fine interference fringes between scattered and primary waves. While the field of view (FOV) can of course be further increased by lateral scanning, an image of large FOV can be acquired in a single exposure. This is in contrast to coherent techniques which are based on scanning the object in a focused beam, or which require a fully coherent illumination and are therefore limited to a correspondingly small field of view. For this reason, holographic imaging is of significant advantage in particular for tomography, where compared to the net counting time, motor overhead and detector readout for three degrees of freedom become a dominant time factor in recording data. Furthermore, time-resolved imaging is more easily implemented with parallel than with serial acquisition.

Two geometries are commonly used for inline X-ray holography: (quasi-) parallel and (quasi-)spherical illumination. The first case is implemented at synchrotron beamlines using almost the full undulator beam without focusing. The divergence is then small enough so that in the object and detector plane the beam is almost of the same lateral size. The FOV is adjusted by slits upstream from the object. This geometry is used for holography/tomography of large objects with resolution elements at the size of the detector pixels.

The second geometry uses a divergent beam emanating from a (quasi-)point focus, with the object positioned at a small *defocus* distance behind the focal plane. Resolution elements are of the detector pixel size scaled by the inverse geometric magnification. Figure 2.6 illustrates magnification and contrast evolution for a phantom consisting of an assembly of spheres. The Fresnel scaling theorem was used to compute the contrast in an effective parallel beam setup which is numerically more convenient.

### 2.2.2 Contrast in X-ray Holograms

The contrast formation, described by the *exit wave* \(\psi _\lambda (x,y)\) directly behind the object, becomes apparent by considering a monochromatic plane wave (wavelength \(\lambda \)) passing through an object which is homogeneous in *z* and has a thickness \(\varDelta T\):

\[ \psi_\lambda(x,y) = \psi_0\, \exp\!\left[ -ik\,\delta\,\varDelta T \right] \exp\!\left[ -k\,\beta\,\varDelta T \right] , \]

i.e. the object imprints a phase shift proportional to \(\delta \) and an amplitude decay proportional to \(\beta \), both accumulated along *z*. More generally, for inhomogeneous (thin) objects, the phase is proportional to the projected electron density, as \(\int \rho (x,y,z) dz\).

For weak contrast, the measured intensity depends linearly on phase \(\phi \) and absorption \(\mu \) of the exit wave; in Fourier space,

\[ \mathcal{F}\!\left[ \frac{I(x,y,z)}{I_0} - 1 \right](\nu) = 2 \sin\!\left( \pi \lambda z \nu^2 \right) \hat{\phi}(\nu) - 2 \cos\!\left( \pi \lambda z \nu^2 \right) \hat{\mu}(\nu) , \tag{2.78} \]

with *z* the distance between sample and detector and \(k=2\pi \nu \) the angular spatial frequency in the object plane. To obtain this linear relationship, the signal is normalized to the incident intensity and subtracted by one. If the diffraction of the illuminating wave is neglected, the expression can be generalized to an inhomogeneous illumination field \( I_0 \rightarrow I(x,y,0)\), see [30, 31] for a discussion of empty beam division. The structure of (2.78) suggests to define linear filter functions describing the evolution of contrast. For the case of weak phase objects (i.e. with weakly varying phase and negligible absorption), the contrast transfer function (CTF) can be defined as

\[ \text{CTF}_p(\nu) = 2 \sin\!\left( \pi \lambda z \nu^2 \right) = 2 \sin\!\left( \frac{\pi \tilde{\nu}^2}{F} \right) , \tag{2.79} \]

where \(\tilde{\nu} = a\nu \) is the spatial frequency in units of the inverse pixel size, with *a* the pixel size. In this way, the dependence on the Fresnel number \(F = \frac{a^2}{\lambda z}\) is highlighted. The spacing of discrete sampling points in reciprocal space is \(\varDelta \nu = 1/N\), with *N* being the number of pixels in horizontal or vertical direction. An illustration is provided in Fig. 2.6, showing holograms simulated for different distances and magnification in cone-beam geometry (a), with corresponding phase \(\text {CTF}_p\) (only the sine part of the CTF) for the three indicated positions or equivalently Fresnel numbers *F* (b). Holograms for the three object positions are shown in (c–e). When the Fresnel number is large, the image contrast is dominated by edges (edge enhancement). For example, the cyan curve in (b) is maximal at high spatial frequencies and hence edges in (e) are enhanced with respect to areas. Shifting the sample closer to the source (light blue and dark blue curves in (b) and holograms in (c–d)) shows oscillating contrast for lower image frequencies than in (e).
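The phase CTF can be evaluated on the discrete frequency grid in a few lines; this sketch assumes the convention of this section, with the chirp argument written as \(\pi\tilde{\nu}^2/F\) and dimensionless frequencies \(\tilde{\nu}\) in inverse pixels. The function name is illustrative.

```python
import numpy as np

def ctf_phase(N, F):
    """Phase contrast transfer function 2*sin(pi * nu^2 / F) on an N x N grid,
    with dimensionless frequencies nu in inverse pixels (spacing 1/N)."""
    nu = np.fft.fftfreq(N)                      # frequencies in [-0.5, 0.5)
    NUX, NUY = np.meshgrid(nu, nu, indexing="ij")
    chi = np.pi * (NUX**2 + NUY**2) / F
    return 2.0 * np.sin(chi)
```

The zero crossings at \(\tilde{\nu}^2 = mF\) (integer \(m\)) are the CTF minima discussed above: the smaller \(F\), the more oscillations fit into the detectable frequency band.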

## 2.3 Solving the Phase Problem in the Holographic Regime

Consider holographic intensity distributions as shown in Fig. 2.6. If phase and amplitude were directly measurable by a detector, the complex wavefield could be numerically back-propagated to obtain the wavefield in the exit plane of the object, where the phase represents a sharp image of the projected electron density. Direct measurement of the phase is of course impossible, as the frequency of hard X-rays is on the order of \(10^{18}\) Hz. However, the intensity pattern in the detection plane is directly related to the phase in the exit plane. Unfortunately, the corresponding set of equations is far too large to be solved directly, as its size equals the number of pixels. Furthermore, the interference between primary and scattered wave and the self-interference of the scattered wave render the equations non-linear. Finally, given a single detector image, the system of equations is under-determined, because the amplitude and phase maps in the exit plane contain twice as many unknowns as the available (real-valued) intensity map in the detection plane. The first concern of holographic imaging must therefore be to design the experiment such that the measured data is sufficient, i.e. to achieve uniqueness. Several detector images with sufficient diversity in the data can be generated by variation of the Fresnel number, but in practice a second strategy is more common: The number of unknowns is reduced when sufficient constraints can be formulated, restricting the solution. For example, when the phase map can be set to zero outside a known support (support constraint), or when absorption in the object is negligible and the amplitude can be assumed to be one everywhere (pure phase contrast constraint), or finally when the object is approximated to have identical stoichiometry resulting in a coupling of phase and absorption (single material constraint), the phase problem becomes manageable.

Once this first concern of sufficient data and constraints is met, the second concern is to decode the holographic images and retrieve the phase map, i.e. the solution of the phase problem as an inverse problem. This phase retrieval process requires knowledge about image formation (i.e. the forward problem based on the wave equation), including all experimental parameters, as well as about the constraints which can be formulated based on properties of the object. In the following, two basic approaches of phase retrieval will be introduced: Deterministic single step and iterative algorithms using alternating projections onto constraint sets.

### 2.3.1 Single-Step Phase Retrieval

### 2.3.2 Iterative Phase Retrieval

Analytic, single step reconstruction techniques are quick but come at a price: They require very restrictive assumptions or approximations, as detailed above. The applicability of phase retrieval can be significantly extended by iterative methods, which are computationally expensive but compatible with a wider range of constraints and valid for more general X-ray optical properties of the object. Iterative algorithms cycle between object and detector plane, where the iterate is alternately subjected to an object constraint and to the constraint that the solution has to satisfy the measured data. In the following, some of the most common constraints will be introduced.

The **magnitude constraint**: A solution to the phase problem must satisfy the measured intensity distribution of the hologram *I*(*x*, *y*). To this end, the wavefield \(\psi (x,y)\) in the detection plane is modified such that its modulus matches the measurement while its phase is kept,

\[ \psi(x,y) \;\longrightarrow\; \sqrt{I(x,y)}\, \frac{\psi(x,y)}{|\psi(x,y)|} . \]

A common object-plane constraint is the **range constraint**, which sets the magnitude behind the object strictly equal to one (after normalization to incoming intensity). In other words, one assumes the object to be of pure phase contrast. In a more general setting one solely requires the amplitude to be smaller than one (i.e. thereby fixing the range of the amplitude). This is justified since, right behind the object, no interference effects have yet evolved: the wavefield amplitudes can only be smaller than (absorbing components in the object) or equal (transparent components) to unity.

A third option is the **support constraint**: The reconstructed sample is only allowed to cover a limited part *D* of the field of view; outside *D*, the field is set equal to the unperturbed illumination.

A compact support can be regarded as a special case of sparsity. Obviously, the pixels with non-zero density are sparse, if the support is small. More generally, the object may be sparse in very different ways, i.e. the object or its projection may be specified by a set of independent values much smaller than the number of pixels (voxels). **Sparsity constraints** enforce image properties to be sparse in some sense, without being too restrictive and specific. A suitable way to enforce sparsity for X-ray holography is the **shearlet constraint** [34]. Shearlets are deformed (scaled, translated and sheared) wavelet-type basis functions [35, 36]. They are particularly useful to represent so-called cartoon-like images (compactly supported and twice continuously differentiable functions) [37, 38, 39]. Hence, in case that amplitude and phase of the object’s transmission function can be categorized as cartoon-like, a sparse representation by a linear combination of shearlets is possible, and this information can be used as an additional constraint for phase retrieval [34]. The applicability of the shearlet constraint is not limited to cartoon-like objects; for such objects, however, the required set of shearlets is much smaller, so that the constraint can be formulated more ‘strictly’ and is thus more powerful for phase retrieval.

If no reasonable constraint can be formulated for the object, i.e. if the object is extended, exhibits uncoupled variations in phase and amplitude and is not sparse, additional data has to be acquired and used as input for phase retrieval. One way to do this is by translations of the object or detector (either longitudinal or lateral) and successive image acquisitions. Alternating projections on such multi-measurements by various update schemes are denoted as **multi-magnitude projection** (mmp) [30, 40, 41]. In mmp, the illumination (probe) has to be perfect or known beforehand, for example by a complete mmp series acquired at different detector positions [40]. This is often difficult to accomplish.

The **separability constraint** enables simultaneous phase retrieval of object and probe, and can thereby account for the fact that aberrations inherent in the illumination interfere with the modulations imposed by the sample, which otherwise result in degradation of image quality and resolution [30, 31]. The separability constraint is key in all ptychographic algorithms [42, 43, 44], and was introduced in X-ray holography in several different ways, based on using either a wavefront diffuser and lateral translations [45], or longitudinal translations [46], or a combination of lateral and longitudinal translations [47]. Note that the last case is least restrictive in terms of probe properties, see also [48] for a detailed comparison and discussion.

Figure 2.7 presents a schematic of an iterative phase retrieval algorithm. It is composed of alternating projections and reflections onto sets of functions that fulfill two or more constraints. The final goal is to ultimately decode the holographic intensities and reveal a solution proportional to the electron density distribution of the sample. The precise update scheme of a specific reconstruction algorithm (step 6 in Fig. 2.7) has a significant effect on the convergence. As an illustration, three common update techniques will be briefly introduced.

The simplest iterative schemes are **Gerchberg-Saxton-type (GS)** algorithms [49], consisting of alternating projections onto sets of functions fulfilling magnitude or range-constraints as given in (2.84). Holograms \(M_1(x,y)\) and \(M_2(x,y)\) recorded at two different defocus positions provide the required data, but a single measurement can be sufficient, for example in case of pure phase contrast. The complete algorithm can then be written as the alternating projection \(\psi_{n+1} = \mathcal{P}_{M_2}\, \mathcal{P}_{M_1}\, \psi_n\).

A prominent variant is the **Error Reduction Algorithm**, which alternates the magnitude (measurement) constraint and a support constraint.

A more robust scheme is the **Hybrid-Input-Output** algorithm, which was originally formulated for far field coherent diffractive imaging and objects limited to a finite support [50]. The main idea is to formulate the update as a linear combination of the wavefield of the previous iterate (the input), and the wavefield resulting from projection onto the measurement and application of a support constraint (the output). This linear combination is governed by the parameter \(\beta \in \left[ 0,1 \right] \), gradually pulling the wavefield outside *D* to zero.

The **Relaxed-Averaged-Alternating-Reflections (RAAR)** algorithm [51] combines projections \(\mathcal {P}\) and reflections \(\mathcal {R}=2\cdot \mathcal {P} - 1\). It is designed such that

\[ \psi_{n+1} = \left[ \frac{\beta}{2} \left( \mathcal{R}_O\, \mathcal{R}_M + 1 \right) + (1-\beta)\, \mathcal{P}_M \right] \psi_n , \]

where the subscript *M* refers to projections/reflections on the set of functions reproducing the measured intensities, whereas the subscript *O* indicates operations fulfilling constraints in the object domain. These can be range constraints, sparsity/shearlet constraints, separability constraints or support constraints. For more details on this algorithm see Chaps. 6 and 23.
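The projection/reflection structure can be sketched compactly. This is a toy illustration, not the chapter's implementation: an orthonormal Fourier transform stands in for the Fresnel propagator, the object constraint is the pure-phase range constraint, and the names `P_M`, `P_O`, `raar_step` are assumptions of this sketch.

```python
import numpy as np

def P_M(psi, meas_amp, prop, iprop):
    """Magnitude projection: enforce the measured modulus in the detection
    plane, keep the current phase, and propagate back."""
    field = prop(psi)
    field = meas_amp * np.exp(1j * np.angle(field))
    return iprop(field)

def P_O(psi):
    """Range (pure-phase) constraint: unit amplitude in the object plane."""
    return np.exp(1j * np.angle(psi))

def raar_step(psi, meas_amp, prop, iprop, beta=0.7):
    """One RAAR iteration with reflections R = 2P - 1:
    psi_{n+1} = (beta/2)(R_O R_M + 1) psi + (1 - beta) P_M psi."""
    RM = 2 * P_M(psi, meas_amp, prop, iprop) - psi
    RO = 2 * P_O(RM) - RM
    return 0.5 * beta * (RO + psi) + (1 - beta) * P_M(psi, meas_amp, prop, iprop)
```

A basic consistency check: a field that already satisfies both constraints is a fixed point of the update, as required of any projection algorithm.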

Figure 2.8 depicts the results of three different phase retrieval methods applied to the simulated holographic intensities in (a). The phantom shown in (b) consists of spheres with radii between 50 and 200 nm of different materials (Al, Al\(_2\)O\(_3\), Ca) inside a volume. The incoming illumination was simulated as a plane wave with photon energy of 7 keV. The fact that the spheres are not made of a single material violates the assumptions required to derive single step CTF-based phase retrieval (2.79). Hence, the reconstruction shown in (d) depicts blurry regions (absorbing and phase shifting components \(\beta \) and \(\delta \) were set to the mean values of the given materials). The iterative modified Hybrid-Input-Output (e) and the Relaxed-Averaged-Alternating-Reflections algorithm can reveal features of the projected phase more distinctly and with less blur. A detailed comparison between iterative and analytic phase retrieval for experimental data can be found in [52].

## 2.4 From Two to Three Dimensions: Tomography and Phase Retrieval

After phase retrieval, the reconstructed exit wave yields a two dimensional projection of the object. Instead of only projections of the three dimensional structure *f*(*x*, *y*, *z*), we would also like to find the 3d structure as given by the refractive index \(n(\mathbf {r})\). For this purpose, many projections of the sample have to be recorded under different angles, see Fig. 2.9a. Projected gray values onto a single line *s* under an angle \(\theta \) shown in Fig. 2.9b correspond to

\[ \mathcal{R}\left[ f(\mathbf{r}) \right](s) = \int f(\mathbf{r})\, \delta\!\left( \mathbf{r}\cdot\mathbf{n}_\theta - s \right) \mathrm{d}\mathbf{r} , \tag{2.90} \]

with \(\mathbf{n}_\theta \) the unit vector defining the coordinate *s*. The operator \(\mathcal {R}\left[ f(\mathbf {r})\right] (s)\) is denoted as the Radon transform. It results in the 1d projection of the sample onto the line *s*. Figure 2.9b illustrates the discrete version of (2.90) (the integral is replaced by a sum). It shows the top view of a selected slice (*xy*-plane) through the sample *f* (gray shaded region). The operator \(\mathcal {R}\left[ f(\mathbf {r})\right] (s)\) computes line integrals through the object: Whenever the scalar product \(\mathbf {r} \cdot \mathbf {n_\theta }\) equals a distinct value *s* (here: \(s_a\) and \(s_b\)) corresponding to the projection of \(\mathbf {r}\) (here illustrated by \(\mathbf {a_n}\), \(\mathbf {a_{n+m}}\), \(\mathbf {b_n}\), \(\mathbf {b_{n+m}}\)) onto \(\mathbf {n_\theta }\), the value of the object \(f(\mathbf {r})\) contributes to the projected gray value in a specific bin of the detector (here: the gray values at \(s_a\) and \(s_b\)).
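The discrete, binned version of the Radon transform described above can be sketched directly: each pixel is accumulated into the detector bin selected by \(\mathbf{r}\cdot\mathbf{n}_\theta\). Nearest-bin accumulation and the function name are illustrative simplifications (practical codes interpolate).

```python
import numpy as np

def radon_projection(f, theta, n_bins):
    """Discrete Radon transform of a 2d slice f for a single angle theta:
    pixel values are summed into the bin given by s = r . n_theta
    (nearest-bin accumulation, detector bins centered on the slice)."""
    ny, nx = f.shape
    y, x = np.mgrid[0:ny, 0:nx]
    xc, yc = x - (nx - 1) / 2.0, y - (ny - 1) / 2.0   # centered coordinates
    s = xc * np.cos(theta) + yc * np.sin(theta)        # r . n_theta per pixel
    bins = np.round(s + (n_bins - 1) / 2.0).astype(int)
    proj = np.zeros(n_bins)
    valid = (bins >= 0) & (bins < n_bins)
    np.add.at(proj, bins[valid], f[valid])             # unbuffered accumulation
    return proj
```

By construction the projection conserves the total "mass" of the slice whenever no pixel falls outside the detector, which is a quick correctness check for any Radon implementation.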

A different approach is to combine phase retrieval and tomographic reconstruction. Instead of performing phase retrieval of all projections first and in a second step the inverse Radon transform (e.g. filtered back-projection), these two steps can be intertwined iteratively: **Iterative Reprojection Phase retrieval** (IRP) [55]. In this way, an iterate of the full 3d object exists at all times during the process, which ensures tomographic consistency of all projections. This was found to facilitate phase retrieval, effectively acting as a constraint of its own.

Since IRP is computationally very involved, another simultaneous operation of tomographic reconstruction and phase retrieval by CTF was proposed, which also couples the two previously sequential operations [56]. It relies on propagation of the transmission function of an entire 3d object—the **propagated object**—and requires linearization of the object’s optical properties. If this approximation is justified, the concept of propagating the entire 3d object in parallel is extremely useful for holo-tomography, and shall be briefly introduced here, following [56].

Consider a plane wave propagating along *z* impinging on a 3d object, parameterized by the spatial distribution of the index of refraction \(n(\mathbf {r})\). If the projection approximation holds, the exit wave \(\psi (x,y,z=0)\) is determined by the projection of the index of refraction onto a plane perpendicular to the optical axis, which can be written in terms of the Radon operator \(\mathcal {R}(n-1) := \int (n-1)\) d*z* as

\[ \psi(x,y,0) = \exp\!\left[ ik\, \mathcal{R}(n-1) \right] \simeq 1 + ik\, \mathcal{R}(n-1) , \]

where the linearization holds for weak optical properties. Propagation to the detector at distance *z* (using the propagator \(P_z\)) can then be formulated as

\[ \psi(x,y,z) = P_z * \psi(x,y,0) \simeq 1 + ik\, P_z * \mathcal{R}(n-1) , \]

i.e. up to the constant primary wave, the measured field is the propagated projection of the object at distance *z*, which can be rewritten as the projection of a *propagated object* by means of the **Fourier slice theorem**. Here we use it to show that the order of projection and propagation \(\mathcal {D}\) can be inverted [56]. We consider the 2d propagation \(\mathcal {D}_{2d}\) of the projection \(\mathcal {R} (n-1)\) [56].

## Acknowledgements

We thank present and past group members for insights and discussions, in particular Lars Melchior, Aike Ruhlandt and Johannes Hagemann, for the progress they have provided in FD propagation (L.M.), numerical free-space propagators and 3d propagation (A.R.), and in iterative methods of holographic phase retrieval (J.H.). We also acknowledge the fruitful exchange with our colleagues from mathematics, in particular Russell Luke, Thorsten Hohage, Carolin Homann, Gerlind Plonka-Hoch, Stefan Look, and Simon Maretzke.

## References

1. Melchior, L., Salditt, T.: Finite difference methods for stationary and time-dependent X-ray propagation. Opt. Express **25**, 32090–32109 (2017)
2. Bergemann, C., Keymeulen, H., van der Veen, J.F.: Focusing x-ray beams to nanometer dimensions. Phys. Rev. Lett. **91**(20), 204801 (2003)
3. Fuhse, C., Salditt, T.: Finite-difference field calculations for one-dimensionally confined X-ray waveguides. Phys. B **357**(1–2), 57–60 (2005)
4. Kopylov, Y.V., Popov, A.V., Vinogradov, A.V.: Application of the parabolic wave equation to X-ray diffraction optics. Opt. Commun. **118**(5–6), 619–636 (1995)
5. Husakou, A.: Nonlinear phenomena of ultrabroadband radiation in photonic crystal fibers and hollow waveguides. Ph.D. thesis, Freie Universität Berlin (2002)
6. Paganin, D.M.: Coherent X-ray Optics. Oxford University, New York (2006)
7. Ruhlandt, A.: Time-resolved x-ray phase-contrast tomography. Ph.D. thesis, Universität Göttingen (2018)
8. Bartels, M., Krenkel, M., Haber, J., Wilke, R.N., Salditt, T.: X-ray holographic imaging of hydrated biological cells in solution. Phys. Rev. Lett. **114**, 048103 (2015)
9. Döring, F., Robisch, A.L., Eberl, C., Osterhoff, M., Ruhlandt, A., Liese, T., Schlenkrich, F., Hoffmann, S., Bartels, M., Salditt, T., Krebs, H.U.: Sub-5 nm hard x-ray point focusing by a combined Kirkpatrick-Baez mirror and multilayer zone plate. Opt. Express **21**(16), 19311–19323 (2013)
10. Voelz, D.G.: Computational Fourier Optics: A MATLAB Tutorial (SPIE Tutorial Texts Vol. TT89). SPIE Press (2011)
11. Voelz, D.G., Roggemann, M.C.: Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences. Appl. Opt. **48**(32), 6132–6142 (2009)
12. Matsushima, K., Shimobaba, T.: Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields. Opt. Express **17**(22), 19662–19673 (2009)
13. Als-Nielsen, J., McMorrow, D.: Elements of Modern X-ray Physics, 2nd edn. Wiley (2011)
14. Salditt, T., Aspelmeier, T., Aeffner, S.: Biomedical Imaging: Principles of Radiography, Tomography and Medical Physics. Walter de Gruyter GmbH & Co KG (2017)
15. Davis, T.J.: Dynamical X-ray diffraction from imperfect crystals: a solution based on the Fokker-Planck equation. Acta Crystallogr. Sect. A **50**(2), 224–231 (1994)
16. Gureyev, T.E., Davis, T.J., Pogany, A., Mayo, S.C., Wilkins, S.W.: Optical phase retrieval by use of first Born- and Rytov-type approximations. Appl. Opt. **43**(12), 2418–2430 (2004)
17. Sung, Y., Barbastathis, G.: Rytov approximation for x-ray phase imaging. Opt. Express **21**(3), 2674–2682 (2013)
18. Li, K., Wojcik, M., Jacobsen, C.: Multislice does it all-calculating the performance of nanofocusing x-ray optics. Opt. Express **25**(3), 1831–1846 (2017)
19. Scarmozzino, R., Osgood, R.M.J.: Comparison of finite-difference and Fourier-transform solutions of the parabolic wave equation with emphasis on integrated-optics applications. J. Opt. Soc. Am. A **8**(5), 724–731 (1991)
20. Crank, J., Nicolson, P.: A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Proc. Camb. Philos. Soc. **43**, 55–67 (1947)
21. Thomas, J.W.: Numerical Partial Differential Equations: Finite Difference Methods, vol. 22. Springer Science & Business Media (2013)
22. Fuhse, C.: X-ray waveguides and waveguide-based lensless imaging. Ph.D. thesis (2006)
23. Fuhse, C., Salditt, T.: Finite-difference field calculations for two-dimensionally confined x-ray waveguides. Appl. Opt. **45**(19), 4603–4608 (2006)
24. Spence, J.C.: Lawrence Bragg, microdiffraction and X-ray lasers. Acta Crystallogr. Sect. A Found. Crystallogr. **69**(1), 25–33 (2013)
25. Gabor, D.: A new microscopic principle. Nature **161**, 777–778 (1948)
26. Cloetens, P., Ludwig, W., Baruchel, J., Guigay, J.P., Pernot-Rejmankova, P., Salome-Pateyron, M., Schlenker, M., Buffiere, J.Y., Maire, E., Peix, G.: Hard x-ray phase imaging using simple propagation of a coherent synchrotron radiation beam. J. Phys. D **32**(10A), A145–A151 (1999)
27. Guigay, J.P.: Fourier transform analysis of Fresnel diffraction patterns and in-line holograms. Optik **49**(1), 121–125 (1977)
28. Turner, L.D., Dhal, B.B., Hayes, J.P., Mancuso, A.P., Nugent, K.A., Paterson, D., Scholten, R.E., Tran, C.Q., Peele, A.G.: X-ray phase imaging: demonstration of extended conditions for homogeneous objects. Opt. Express **12**(13), 2960–2965 (2004)
29. Zabler, S., Cloetens, P., Guigay, J.P., Baruchel, J., Schlenker, M.: Optimization of phase contrast imaging using hard x rays. Rev. Sci. Instrum. **76**(7), 073705 (2005)
30. Hagemann, J., Robisch, A.-L., Luke, D.R., Homann, C., Hohage, T., Cloetens, P., Suhonen, H., Salditt, T.: Reconstruction of wave front and object for inline holography from a set of detection planes. Opt. Express **22**(10), 11552–11569 (2014)
31. Homann, C., Hohage, T., Hagemann, J., Robisch, A.-L., Salditt, T.: Validity of the empty-beam correction in near-field imaging. Phys. Rev. A **91**, 013821 (2015)
32. Krenkel, M.: Cone-beam x-ray phase-contrast tomography for the observation of single cells in whole organs. Ph.D. thesis, Universität Göttingen (2015)
33. Giewekemeyer, K., Krüger, S.P., Kalbfleisch, S., Bartels, M., Beta, C., Salditt, T.: X-ray propagation microscopy of biological cells using waveguides as a quasipoint source. Phys. Rev. A **83**(2), 023804 (2011)
34. Loock, S., Plonka, G.: Phase retrieval for Fresnel measurements using a shearlet sparsity constraint. Inverse Probl. **30**(5), 055005 (2014)
35. Guo, K., Kutyniok, G., Labate, D.: Sparse multidimensional representations using anisotropic dilation and shear operators. In: Chen, G., Lai, M. (eds.) Wavelets and Splines, pp. 189–201. Nashboro Press (2006)
36. Labate, D., Lim, W.Q., Kutyniok, G., Weiss, G.: Sparse multidimensional representation using shearlets. In: Papadakis, M., Laine, A.F., Unser, M.A. (eds.) Wavelets XI, Proceedings of the SPIE, vol. 5914, pp. 254–262 (2005)
37. Donoho, D.L.: Sparse components of images and optimal atomic decomposition. Constr. Approx. **17**, 353–382 (2001)
38. Kutyniok, G., Labate, D. (eds.): Shearlets: Multiscale Analysis for Multivariate Data. Birkhäuser (2012)
39. Pein, A., Loock, S., Plonka, G., Salditt, T.: Using sparsity information for iterative phase retrieval in x-ray propagation imaging. Opt. Express **24**(8), 8332–8343 (2016)
40. Hagemann, J., Robisch, A.-L., Osterhoff, M., Salditt, T.: Probe reconstruction for holographic X-ray imaging. J. Synchrotron Rad. **24**(2), 498–505 (2017)
41. Hagemann, J., Salditt, T.: Divide and update: towards single-shot object and probe retrieval for near-field holography. Opt. Express **25**(18), 20953–20968 (2017)
42. Guizar-Sicairos, M., Fienup, J.R.: Phase retrieval with transverse translation diversity: a nonlinear optimization approach. Opt. Express **16**(10), 7264–7278 (2008)
43. Maiden, A.M., Rodenburg, J.M.: An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy **109**(10), 1256–1262 (2009)
44. Thibault, P., Dierolf, M., Menzel, A., Bunk, O., David, C., Pfeiffer, F.: High-resolution scanning x-ray diffraction microscopy. Science **321**(5887), 379–382 (2008)
45. Stockmar, M., Cloetens, P., Zanette, I., Enders, B., Dierolf, M., Pfeiffer, F., Thibault, P.: Near-field ptychography: phase retrieval for inline holography using a structured illumination. Sci. Rep. **3**, 1927 (2013)
46. Robisch, A.-L., Salditt, T.: Phase retrieval for object and probe using a series of defocus near-field images. Opt. Express **21**(20), 23345–23357 (2013)
47. Robisch, A.-L., Kröger, K., Rack, A., Salditt, T.: Near-field ptychography using lateral and longitudinal shifts. New J. Phys. **17**(7), 073033 (2015)
48. Robisch, A.-L.: Phase retrieval for object and probe in the optical near-field. Ph.D. thesis, Universität Göttingen (2016)
49. Gerchberg, R.W., Saxton, W.O.: A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik **35**(2), 237–246 (1972)
50. Fienup, J.R.: Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett. **3**(1), 27–29 (1978)
51. Luke, D.R.: Relaxed averaged alternating reflections for diffraction imaging. Inverse Probl. **21**(1), 37 (2005)
52. Krenkel, M., Toepperwien, M., Alves, F., Salditt, T.: Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime. Acta Crystallogr. Sect. A **73**(4), 282–292 (2017)
53. Buzug, T.: Computed Tomography: From Photon Statistics to Modern Cone-Beam CT. Springer (2008)
54. Natterer, F.: The Mathematics of Computerized Tomography. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (2001)
55. Ruhlandt, A., Krenkel, M., Bartels, M., Salditt, T.: Three-dimensional phase retrieval in propagation-based phase-contrast imaging. Phys. Rev. A **89**, 033847 (2014)
56. Ruhlandt, A., Salditt, T.: Three-dimensional propagation in near-field tomographic X-ray phase retrieval. Acta Crystallogr. Sect. A **72**(2), 215–221 (2016)

## Copyright information

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.