1 Introduction

In fusion devices, the X-ray plasma emissivity contains essential information on the magnetohydrodynamic activity, the magnetic equilibrium and the transport of impurities, in particular for tokamaks in the soft X-ray (SXR) energy range of 0.1–20 keV. In this context, tomography diagnostics are a key method to estimate the local plasma emissivity from a given set of line-integrated measurements. Unfortunately, the reconstruction problem is mathematically ill-posed, due to very sparse and noisy measurements, and requires an adequate regularization procedure. In practice, suitable reconstruction methods have to be chosen to perform the inversion task and select meaningful solutions. Though a great variety of inversion techniques have been developed for plasma tomography in different energy ranges such as VUV, SXR, hard X-rays or gamma-rays [1,2,3,4,5,6,7,8,9], this paper focuses on one of the most common methods used for SXR tomography in tokamaks, based on the Tikhonov approach [10].

The crucial issue of heavy impurity radiation has been raised by modern tokamaks like ITER (the International Thermonuclear Experimental Reactor), which selected tungsten (W) instead of traditional carbon as the main plasma-facing material for its divertor, in order to minimize tritium retention in the walls. In future reactors, the W core concentration should be kept below 0.01% to prevent an unmanageable radiative cooling of the plasma [11]. Besides, it has been shown that the poloidal distribution of impurities can affect their central accumulation [12], highlighting the fact that 2D tomographic tools are crucial to estimate the W radial profile in the plasma, quantify its poloidal distribution and identify relevant impurity mitigation strategies.

This paper is intended for any scientist interested in the field of plasma tomography or any student wishing to develop their own tomography algorithm. The goal is to introduce some methodology and tools to develop a tomography algorithm for fusion devices. In the second section, the tomography problem is introduced, with a focus on Tikhonov regularization. In the third section, we study the optimal values of the reconstruction parameters, such as the emissivity spatial resolution and the regularization parameter, based on a simple 1D tomography problem. A methodology is also proposed to perform an in situ sensitivity cross-calibration and position correction of the detectors with an iterative procedure, using the information redundancy and data variability in a given set of reconstructed profiles. Finally, the last section introduces the basic steps to build a synthetic soft X-ray tomography diagnostic in a more realistic tokamak environment, together with some tools to assess the capabilities of the 2D tomography algorithm.

2 Defining the tomography problem

2.1 Generalities and main assumptions

The incident power \(\varphi\) (in W) measured by a detector–pinhole camera looking at the plasma can be expressed in general as:

$$ \varphi = \int\limits_{X} {\int\limits_{Y} {\int\limits_{Z} {K\left( {x,y,z} \right)} } } \;\varepsilon^{\eta } \left( {x,y,z} \right){\text{d}}x\;{\text{d}}y\;{\text{d}}z $$
(1)

with \(\varepsilon^{\eta }\) the plasma emissivity (in W·m−3) filtered by the spectral response \(\eta \left( {h\nu } \right)\) of the camera, where \(h\nu\) is the energy of the incident radiation, and \(K\left( {x,y,z} \right)\) the geometric function of the detector–pinhole system in the \(\left( {X,Y,Z} \right)\) space.

As pictured in Fig. 1, the incident power \(\varphi\) onto the detector D is the sum of the contributions from the infinitesimal emissivity volumes \(dV_{p}\) in the plasma volume \(V_{p}\):

$$ \varphi = \int\limits_{{V_{p} }} {d\varphi } = \int\limits_{{V_{p} }} {\varepsilon^{\eta } } \frac{\Omega }{4\pi }dV_{p} , $$
(2)
Fig. 1: Representation of a detector–pinhole system in a vertical cross section of the plasma

where \(\Omega \left( {dV_{p} } \right) = S_{D}^{eff} /r^{2}\) is the solid angle under which D is seen from the volume \(dV_{p}\) through the pinhole P, with r the distance between \(dV_{p}\) and D. Considering that the spatial variation of the emissivity \(\varepsilon^{\eta }\) is small over each section \(S_{p}\) of the plasma perpendicular to the pinhole–detector axis (and contained in the volume-of-sight), it is possible to write:

$$ \varphi = \int\limits_{r} {\int\limits_{{S_{p} }} {\varepsilon^{\eta } } } \left( {dV_{p} } \right)\frac{{\Omega \left( {dV_{p} } \right)}}{4\pi }{\text{d}}S_{p} {\text{d}}r = \int\limits_{r} {\varepsilon^{\eta } } \left( r \right)\underbrace {{\int\limits_{{S_{p} }} {\frac{{\Omega \left( {dV_{p} } \right)}}{4\pi }} {\text{d}}S_{p} }}_{E}{\text{d}}r, $$
(3)

where E is called the geometrical etendue (in m2) of the detector–pinhole system. This simplification is referred to as the Line-of-Sight (LoS) approximation. Furthermore, it is possible to show [13] that the geometrical etendue is conserved along the LoS, since the plasma section \(S_{p}\) seen by the system increases as \(r^{2}\) while the solid angle of the detection system as seen from the plasma decreases as \(1/r^{2}\), such that:

$$ \varphi = E\int\limits_{r} {\varepsilon^{\eta } } \left( r \right){\text{d}}r. $$
(4)

Provided that the pinhole is small compared to the detector–pinhole distance L, a simplified formula for the geometrical etendue is obtained: \(E \simeq S_{P} S_{D} /\left( {4\pi L^{2} } \right)\), with \(S_{P}\) and \(S_{D}\) the pinhole and detector areas.

2.2 The inverse problem

Based on Eq. (4), the line-integrated measurement \(f_{i}\) of the ith detector, also referred to as brightness (in W·m−2), is given in the LoS approximation by:

$$ f_{i} = \varphi_{i} /E_{i} = \int\limits_{{{\text{LoS}}}} {\varepsilon^{\eta } \left( {x,y} \right)dr_{i} + \tilde{f}_{i} ,} $$
(5)

where \(\varphi_{i}\) denotes the incident power on the ith channel (in W), \(E_{i}\) the geometrical etendue (in m2) of the detector–aperture system, \(\varepsilon^{\eta } \left( {x,y} \right)\) the 2D filtered emissivity field (in W·m−3) and \(\tilde{f}_{i}\) any perturbative noise in the ith measurement (e.g., electronic noise). Discretizing the space containing the plasma into \(N_{p}\) emissivity pixels leads to the reformulation of Eq. (5) as follows:

$$ f_{i} = \mathop \sum \limits_{j = 1}^{{N_{p} }} T_{ij} { }\varepsilon_{j} + \tilde{f}_{i} , $$
(6)

with \(\varepsilon_{j}\) the emissivity in the jth plasma pixel (hereafter, the upper index \(\eta\) associated with the detector spectral response will be omitted for simplicity). The transfer-matrix elements \(T_{ij}\) correspond to the response function of the detection system. In our case, \(T_{ij}\) simply denotes the length of the ith LoS in the jth pixel, as depicted in Fig. 2.

Fig. 2: Layout of a line-of-sight in a tomographic system, with pixel discretization of the plasma region
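As an illustration, the transfer matrix can be assembled numerically by marching along each chord and accumulating the traversed length in each pixel. The following minimal Python sketch assumes straight LoS on a uniform rectangular pixel grid (the function name and sampling strategy are illustrative, not taken from a specific library):

```python
import numpy as np

def transfer_matrix(starts, ends, x_edges, y_edges, n_steps=2000):
    """T[i, j]: length of the ith straight LoS inside the jth pixel.

    starts, ends : (N_los, 2) arrays of LoS end points (x, y)
    x_edges, y_edges : pixel edges of the reconstruction grid
    Each chord is sampled in n_steps short segments; every segment length
    is accumulated into the pixel containing its midpoint.
    """
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    T = np.zeros((len(starts), nx * ny))
    for i, (p0, p1) in enumerate(zip(starts, ends)):
        pts = p0 + np.linspace(0.0, 1.0, n_steps + 1)[:, None] * (p1 - p0)
        mids = 0.5 * (pts[:-1] + pts[1:])
        dl = np.linalg.norm(p1 - p0) / n_steps          # segment length
        ix = np.searchsorted(x_edges, mids[:, 0]) - 1
        iy = np.searchsorted(y_edges, mids[:, 1]) - 1
        inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        np.add.at(T[i], ix[inside] * ny + iy[inside], dl)
    return T
```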

It is worth noting that several tokamak plasma tomography methods were first developed using global basis functions (e.g., Bessel and Fourier expansions of the plasma emissivity profile), see for instance the use of the Cormack technique on Alcator C [2], rather than local basis functions such as square pixels. However, the latter approach gave superior results and has since become dominant in the field.

For ideal measurements (i.e., without any systematic or statistical error), denoted \(f_{i,0}\), the previous equation can be rewritten as:

$$ f_{i,0} = \mathop \sum \limits_{j} T_{ij,0} { }\varepsilon_{j} . $$
(7)

In experimental applications, the real measurements \(f_{i}\) deviate from the ideal measurements \(f_{i,0}\) by an experimental error \(\delta f_{i}\), such that \(f_{i} = f_{i,0} + \delta f_{i}\). In the same way, the transfer matrix can be expressed, in the small-perturbation approximation, as \(T_{ij} = T_{ij,0} + \delta T_{ij} = C_{i} \left( {T_{ij,0} + \tilde{T}_{ij} } \right) = T_{ij,0} + \left( {\delta C_{i} T_{ij,0} + C_{i} \tilde{T}_{ij} } \right)\), where \(C_{i} = 1 + \delta C_{i}\) describes the imperfect calibration of the ith detector (with \(\delta C_{i} \ll 1\)) and \(\tilde{T}_{ij}\) results from detector and pinhole positioning errors. Therefore, Eqs. (6) and (7) lead to:

$$ \delta f_{i} \simeq \mathop \sum \limits_{j} \left[ {\delta C_{i} T_{ij,0} + \tilde{T}_{ij} } \right]{ }\varepsilon_{j} + \tilde{f}_{i} , $$
(8)

where the second-order term \(\delta C_{i} \tilde{T}_{ij}\) has been neglected here (it should be kept if \(\delta C_{i} \ll 1\) is not fulfilled). It is clear from Eq. (8) that the total measurement error \(\delta f_{i}\) results from:

  • the detector miscalibration \(\delta C_{i}\),

  • the detector–pinhole mispositioning \(\tilde{T}_{ij}\),

  • the statistical (e.g., electronic) noise \(\tilde{f}_{i}\).

In the next sections, we will assess the effect of measurement errors on the tomographic reconstructions, as well as the possibility of, and limitations in, correcting the calibration and positioning errors in situ through a tomography-based procedure.

2.3 Tikhonov regularization

The intuitive approach to the tomographic problem would be to find a solution \({\varvec{\varepsilon}}_{{{\varvec{rec}}}}\) that minimizes the measurement error \(\chi^{2}\) between the retrofits \({\varvec{f}}_{{{\varvec{rec}}}} = {\varvec{T}} \cdot {\varvec{\varepsilon}}_{{{\varvec{rec}}}}\) of the reconstruction and the actual measurements \({\varvec{f}}_{{{\varvec{meas}}}}\):

$$ \chi^{2} = \left| {{\varvec{f}}_{{{\varvec{meas}}}} - {\varvec{T}} \cdot {\varvec{\varepsilon}}_{{{\varvec{rec}}}} } \right|^{2} = \left| {{\varvec{f}}_{{{\varvec{meas}}}} - {\varvec{f}}_{{{\varvec{rec}}}} } \right|^{2} . $$
(9)

Unfortunately, reconstructing the local emissivity is not a straightforward task, as it is an ill-posed inverse problem by nature. As we have seen previously, the presence of statistical and systematic errors in the measurements affects the reconstruction. Besides, the measurement set \(\left\{ f_{i} \right\}_{{i = 1 \ldots N_{los} }}\) is usually quite sparse in tokamaks, due to the lack of available space. The number of plasma pixels \(N_{p}\) necessary to obtain a satisfactory spatial resolution is thus usually much greater than the number of measurements, i.e., \(N_{p} \gg N_{los}\), making the problem underdetermined. Therefore, a solution with satisfying specific properties should be selected among the infinity of possible solutions, by adding some a priori information to the reconstruction. Many different inversion methods have been developed for tokamak plasmas. Some of them are based on global basis functions like the Fourier–Bessel decomposition [2, 3], and more recently on local basis functions (e.g., pixels). The traditional approach is to minimize (or maximize) a functional that includes some a priori knowledge of the plasma emissivity shape, such as in the maximum entropy [4, 5], maximum-likelihood [6] and other Bayesian methods [8]. Other approaches involve genetic algorithms [14] or Monte Carlo methods [15]. In the prospect of real-time applications, machine learning and neural networks are also of great interest for performing tomographic reconstructions at low computational cost [9, 16].

Nevertheless, in this paper we focus on one of the most common approaches for SXR tomography in tokamak plasmas, with the central emphasis placed on the smoothness of the solution. A. N. Tikhonov introduced such a regularization procedure [10], in which the a priori information is added through a regularization matrix \(H\) that imposes smoothness on the emissivity profiles. For example, it can aim at minimizing the local spatial gradient, \(H =^{{\varvec{t}}}\!\!{\varvec{\nabla}}\cdot{\varvec{\nabla}}\) (the superscript t denotes the matrix transpose), with the gradient in matrix form (e.g., forward difference) along the \(X\) direction:

$$ {\varvec{\nabla}}_{{\varvec{X}}} = \frac{1}{\Delta X}\left( {\begin{array}{*{20}c} { - 1} &\quad 1 &\quad 0 &\quad \cdots &\quad 0 \\ 0 &\quad \ddots &\quad \ddots &\quad {\left( 0 \right)} &\quad \vdots \\ \vdots &\quad \ddots &\quad \ddots &\quad \ddots &\quad 0 \\ \vdots &\quad {\left( 0 \right)} &\quad 0 &\quad { - 1} &\quad 1 \\ 0 &\quad \cdots &\quad 0 &\quad { - 1} &\quad 1 \\ \end{array} } \right), $$
(10)

with \(\Delta X\) the spatial step size. A meaningful solution can then be obtained by finding the minimum of a functional \(\phi\):

$$ \phi = \chi^{2} + \lambda R, $$
(11)

where the regularization parameter \(\lambda\) sets a compromise between the minimization of \(\chi^{2}\) and that of the regularization term \(R =^{{\varvec{t}}} {\varvec{\varepsilon}} \cdot {\varvec{H}} \cdot {\varvec{\varepsilon}}\). The problem then consists in finding a solution \({\varvec{\varepsilon}}_{{{\varvec{rec}}}}\) that satisfies:

$$ {\varvec{\varepsilon}}_{{{\varvec{rec}}}} = \arg \mathop {\min }\limits_{{\varvec{\varepsilon}}} \left( \phi \right) = \arg \mathop {\min }\limits_{{\varvec{\varepsilon}}} \left( {\left| {{\varvec{f}}_{{{\varvec{meas}}}} - {\varvec{T}} \cdot {\varvec{\varepsilon}}} \right|^{2} + \lambda \,^{{\varvec{t}}} {\varvec{\varepsilon}} \cdot {\varvec{H}} \cdot {\varvec{\varepsilon}}} \right). $$
(12)

The explicit solution \({\varvec{\varepsilon}}_{{{\varvec{rec}}}}\) of the minimization problem written in Eq. (12) can be expressed as:

$$ {\varvec{\varepsilon}}_{{{\varvec{rec}}}} = \left( {^{{\varvec{t}}} {\varvec{T}} \cdot {\varvec{T}} + \lambda {\varvec{H}}} \right)^{ - 1} \cdot \,^{{\varvec{t}}} {\varvec{T}} \cdot {\varvec{f}}_{{{\varvec{meas}}}} , $$
(13)

where the value of \(\lambda\) is a parameter of the reconstruction and has to be chosen carefully. An empirical approach that can be found in [17] is to choose the ratio of matrix traces in Eq. (13):

$$ \lambda_{tr} = \frac{{{\text{trace}}\left( {^{{\varvec{t}}} {\varvec{T}} \cdot {\varvec{T}}} \right)}}{{{\text{trace}}\left( {\varvec{H}} \right)}}. $$
(14)

This method has the advantage of being simple and fast, although it can be far from the optimal value, as we will see in the next sections. A second approach, the so-called L-curve corner method, is to perform a scan of the regularization parameter and plot the regularization term \(R\) as a function of the measurement error \(\chi^{2}\). Such a plot usually has an L-shape with an apparent inflection point, or corner, which is chosen as the optimal \(\lambda\) value. However, this method is computationally costly, since it requires a scan over a wide range of the regularization parameter. Besides, the L-curve corner is not always clearly defined in the case of noisy and sparse measurements, as we will see in the next sections. Another method relies on the discrepancy principle: the goal is to obtain a measurement error \(\chi_{N}^{2}\) equivalent to the expected noise level \(\sigma_{{{\text{noise}}}}^{2}\) (assuming a Gaussian distribution):

$$ \chi_{N}^{2} \left( {\lambda_{{{\text{discr}}}} } \right) = \frac{{\chi^{2} \left( {\lambda_{{{\text{discr}}}} } \right)}}{{N_{{{\text{los}}}}^{ } }} \simeq \sigma_{{{\text{noise}}}}^{2} . $$
(15)

The purpose is to smooth out emissivity structures that are below the noise level, and to keep those of higher intensity, considered as valuable information, in the reconstruction.
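To make these recipes concrete, a minimal Python sketch of the explicit Tikhonov solution of Eq. (13), with \(\lambda\) tuned by bisection on the discrepancy principle of Eq. (15), is given below. It exploits the fact that \(\chi^{2}\) grows monotonically with \(\lambda\); the function names, bracket values and noise normalization are illustrative assumptions:

```python
import numpy as np

def tikhonov_solve(T, f, H, lam):
    """Explicit Tikhonov solution of Eq. (13): (tT.T + lam*H)^-1 . tT . f."""
    return np.linalg.solve(T.T @ T + lam * H, T.T @ f)

def lambda_discrepancy(T, f, H, sigma_noise, lam_lo=1e-9, lam_hi=1e3, n_iter=50):
    """Bisection in log-lambda on the discrepancy principle of Eq. (15):
    find lam such that chi2/N_los ~ sigma_noise**2 (absolute noise level,
    i.e., measurements assumed normalized accordingly)."""
    target = len(f) * sigma_noise**2
    for _ in range(n_iter):
        lam = np.sqrt(lam_lo * lam_hi)                  # geometric midpoint
        chi2 = np.sum((f - T @ tikhonov_solve(T, f, H, lam))**2)
        if chi2 < target:
            lam_lo = lam                                # under-smoothed: increase lam
        else:
            lam_hi = lam                                # over-smoothed: decrease lam
    return np.sqrt(lam_lo * lam_hi)
```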

2.3.1 Second-order Philips–Tikhonov regularization

For benchmark purposes, two Tikhonov regularization methods are introduced and used in this paper. The first one, a second-order Philips–Tikhonov regularization (PTR), aims at minimizing the second-order derivative (the Laplacian) of the emissivity. Such a method was for instance developed at JET for neutron tomography [18]. In this case, the regularization operator HPTR is expressed as:

$$ H_{PTR}\, {=}\,^{{\varvec{t}}} \nabla^{2} \cdot \nabla^{2} . $$
(16)

Such an operator imposes a constraint on the spatial variation of the emissivity gradient, rather than on the gradient itself. It thus prevents the appearance of highly fluctuating emissivity structures in the plasma core as well as overly sharp profiles at the edge.

2.3.2 Minimum Fisher information

The second reconstruction technique introduced here, the so-called Minimum Fisher Information (MFI) method, is one of the most widely applied methods nowadays for SXR tomography in tokamaks, for instance in JET [19], ASDEX Upgrade [20], TCV [1], COMPASS Upgrade [21], Tore Supra [22] and WEST [23]. For a given 2D emissivity distribution \(\varepsilon \left( {x,y} \right)\), the associated Fisher information is defined as:

$$ I_{F} \left( {\varvec{\varepsilon}} \right) = \iint {\frac{{\varepsilon^{\prime}\left( {x,y} \right)^{2} }}{{\varepsilon \left( {x,y} \right)}}{\text{d}}x{\text{d}}y,} $$
(17)

where the prime denotes the spatial derivative. Numerical transcription of the Fisher information in the regularization operator HMFI gives:

$$ H_{MFI} \,{=}\,^{{\varvec{t}}} \nabla \cdot {\varvec{W}} \cdot \nabla , $$
(18)

aiming at selecting the solution with the least curvature, with \({\varvec{W}}\) a weighting matrix defined as:

$$ W_{ij} = \delta_{ij} \min \left( {\frac{1}{{\varepsilon_{i} }},\frac{1}{{\varepsilon_{min} }}} \right), $$
(19)

where \(\delta_{ij}\) denotes the Kronecker delta and \(\varepsilon_{min} > 0\) is a lower bound used to avoid singularities in the zero-emissivity regions. The weighting matrix \({\varvec{W}}\) imposes a constraint of lower gradients in the low-emissivity regions, e.g., the plasma edge (where the \(W_{ij}\) values are high). The constraint is relaxed in the plasma core (where the \(W_{ij}\) values are low), allowing the reconstruction of spatially fluctuating structures in that region. This can be useful, e.g., to monitor MHD activity or asymmetric impurity distributions [24]. It can be noted that, unlike PTR, prior knowledge of the emissivity field is necessary to estimate the weighting matrix W. This issue can be overcome with an iterative approach. A first guess, here a flat emissivity \({\varvec{\varepsilon}}^{\left( 0 \right)} = 1\), is chosen for the first estimation of the weighting-matrix elements \(W_{ij}^{\left( 0 \right)}\) using Eq. (19). From this, a new emissivity \({\varvec{\varepsilon}}^{\left( 1 \right)}\) can be calculated using Eq. (13). The procedure is then repeated \(k\) times, as depicted in Fig. 3, until convergence \({\varvec{\varepsilon}}^{{\left( {{\varvec{k}} + 1} \right)}} \simeq {\varvec{\varepsilon}}^{{\left( {\varvec{k}} \right)}}\) is reached within some tolerance of typically 5–10% (usually 3–5 iterations are needed).

Fig. 3: Workflow of the algorithm used to find the solution minimizing the MFI functional, with a convergence criterion of 5%. Symbols in red are quantities that may change after each iteration
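The workflow of Fig. 3 can be condensed into a short Python sketch, given here under the assumption of a generic gradient operator grad built as in Eq. (10) (names and default values are illustrative):

```python
import numpy as np

def mfi_reconstruct(T, f, grad, lam, eps_min=1e-3, tol=0.05, max_iter=10):
    """Iterative MFI loop of Fig. 3: start from a flat emissivity, rebuild
    the weighting matrix W from the current solution (Eq. 19), solve
    Eq. (13) with H = tGrad.W.Grad (Eq. 18), and stop once the relative
    change of the solution falls below the tolerance."""
    eps = np.ones(T.shape[1])                           # flat first guess eps^(0)
    for _ in range(max_iter):
        W = np.diag(1.0 / np.maximum(eps, eps_min))     # W_ii = min(1/eps_i, 1/eps_min)
        H = grad.T @ W @ grad
        eps_new = np.linalg.solve(T.T @ T + lam * H, T.T @ f)
        if np.linalg.norm(eps_new - eps) <= tol * np.linalg.norm(eps):
            return eps_new                              # converged within tolerance
        eps = eps_new
    return eps
```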

3 Application to a simple 1D problem

3.1 Definition of the geometry

We first implement the tomography method for a simple symmetric case, where the plasma consists of several nested homogeneous layers, as represented in Fig. 4. We thus have a 1D problem where the plasma emissivity depends only on the distance from the plasma center, \(\varepsilon = \varepsilon \left( r \right)\). This symmetry implies that each detector LoS is fully defined by its closest distance from the plasma center \(z_{{{\text{LoS}}}}\); all LoS are therefore represented horizontally in the figure.

Fig. 4: Geometry of the simple 1D tomographic problem, indicating the plasma layers (in blue) and the lines-of-sight (in red)

Four synthetic emissivity profiles of different shapes are used to assess the capabilities of the tomography method in various plasma conditions:

$$ \left\{ {\begin{array}{*{20}l} {\varepsilon_{1} \left( r \right) = 1 - \frac{2}{\pi }\arctan \left( {\left( {r - 0.7} \right)/50} \right)} \\ {\varepsilon_{2} \left( r \right) = \left( {1 - r} \right)^{3} } \\ {\varepsilon_{3} \left( r \right) = \varepsilon_{1} \left( r \right) \sqrt {1 - r} } \\ {\varepsilon_{4} \left( r \right) = \sqrt {1 - r} + \left( {1 - \cos \left( {2\pi r} \right)} \right) } \\ \end{array} } \right., $$
(20)

yielding, after normalization to 1, a so-called flat profile (\(\varepsilon_{1}\)), a peaked profile (\(\varepsilon_{2}\)), a regular profile (\(\varepsilon_{3}\)) and a hollow profile (\(\varepsilon_{4}\)), as shown in Fig. 5.

Fig. 5: Left: synthetic plasma emissivity profiles used to test the tomography methods. Right: associated line-integrated measurements

3.2 Onion-peeling method

An intuitive reconstruction method would be to choose \(N_{p} = N_{{{\text{los}}}} = N\) in order to retrieve the emissivity by simple “onion-peeling” of the plasma. Indeed, since the last plasma layer is only crossed by one line-of-sight, the last emissivity element can be retrieved from \(\varepsilon_{N} = f_{N} /T_{NN}\), and all the other emissivity values then follow iteratively:

$$ \left\{ {\begin{array}{*{20}l} {\varepsilon_{N} = f_{N} /T_{NN} } \\ {\varepsilon_{k} = \left( {f_{k} - \mathop \sum \limits_{j = k + 1}^{N} T_{kj} . \varepsilon_{j} } \right)/T_{kk} , k < N} \\ \end{array} } \right. $$
(21)

Although simple and appealing at first sight, this method gives very poor results in practice. First, it drastically limits the spatial resolution of the reconstructed emissivity distribution. Second, it is highly sensitive to perturbative noise, as depicted in Fig. 6 for the regular profile. Random noise is added here according to Eq. (6), with a zero-mean Gaussian distribution and a standard deviation equal to 2% of the signal intensity. In the onion-peeling method, the reconstruction errors \(\delta \varepsilon_{k + 1:N}\) obtained in the outer plasma layers accumulate iteratively in the inner-layer error \(\delta \varepsilon_{k}\), making the reconstruction less and less reliable as we move toward the plasma center. The use of a regularization procedure, such as Tikhonov regularization, is therefore required for plasma tomography. It is worth noting that, despite the poor reconstruction, the retrofits match the line-integrated measurements well, see Fig. 6 (right). This is referred to as overfitting of the measurements.
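For reference, the back-substitution of Eq. (21) fits in a few lines of Python (a minimal sketch, assuming layers and LoS are both ordered from core to edge so that T is upper-triangular):

```python
import numpy as np

def onion_peel(T, f):
    """Back-substitution of Eq. (21): solve the outermost layer first,
    then work inward; noise in the outer channels accumulates toward
    the plasma center."""
    N = len(f)
    eps = np.zeros(N)
    for k in range(N - 1, -1, -1):
        eps[k] = (f[k] - T[k, k + 1:] @ eps[k + 1:]) / T[k, k]
    return eps
```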

Fig. 6: Left: emissivity reconstruction of the regular profile using the onion-peeling method, with 2% of Gaussian noise. Right: associated line-integrated measurements

A tomographic reconstruction using MFI is displayed in Fig. 7 for different values of the regularization parameter \(\lambda\), illustrating the impact of the \(\lambda\) value on the smoothness of the solution. A full \(\lambda\) scan is then performed in order to investigate the optimal solution, as shown in Fig. 8. The figure of merit used to estimate the quality of the reconstruction is the root-mean-square emissivity error \(RMS_{em}\):

$$ RMS_{em} = \sqrt {\frac{1}{{N_{p} }}\mathop \sum \limits_{i = 1}^{{N_{p} }} \left( {\varepsilon_{rec,i} - \varepsilon_{i} } \right)^{2} } . $$
(22)
Fig. 7: Line-integrated measurements (left) and associated emissivity reconstructed by MFI (right), for different values of the regularization parameter: \(\lambda = 10^{ - 5}\) (top), \(\lambda = 10^{ - 4}\) (middle) and \(\lambda = 10^{ - 3}\) (bottom)

Fig. 8: Top: root-mean-square emissivity error RMSem as a function of the regularization parameter \(\lambda\). Bottom: associated measurement error normalized to the noise level, \(\chi_{N}^{2} /\sigma_{noise}^{2}\)

It can be seen that there is one optimal value of \(\lambda\), noted \(\lambda_{{{\text{opt}}}}\), which minimizes RMSem. It is also visible that the discrepancy principle yields a solution \(\lambda_{{{\text{discr}}}}\) very close to the optimum, while the trace-ratio solution \(\lambda_{tr}\) leads here to an overfitting of the measurements.

Figure 9 shows the L-curve for two different sets of reconstruction parameters. It can be seen on the left that finding the L-curve corner can lead to a solution close to \(\lambda_{{{\text{opt}}}}\) when the corner is well defined. Unfortunately, this is not always the case, as shown in Fig. 9 (right), in situations with significant noise and a low number of measurements. In such situations, the L-curve corner method is not applicable. For these reasons, the discrepancy principle will be used hereafter for the \(\lambda\) optimization.

Fig. 9: L-curves displaying the regularization term \(R\) as a function of the measurement error \(\chi^{2}\) for two different sets of reconstruction parameters

Since the number of plasma layers \(N_{p}\) is a free parameter of the reconstruction, it is valuable to study how it affects the overall quality of the reconstruction. Such an \(N_{p}\) scan is shown in Fig. 10 for the four profiles, together with the associated computation time (normalized to the minimum one). In this specific case, no significant decrease of RMSem is observed for \(N_{p} > 50\), while the computation time increases steeply. In practice, it is therefore recommended to limit the spatial resolution judiciously to avoid unmanageable computation costs.

Fig. 10: Top: root-mean-square emissivity error (RMSem) as a function of the number of plasma pixels Np for the four profiles, where the error bars denote the standard deviation. Bottom: associated computation time (normalized to the minimum one) for each tomographic reconstruction

3.3 Tomography tests

We will now assess the general capabilities of the PTR and MFI methods for the four different synthetic emissivity profiles: regular, peaked, flat and hollow, as defined in the previous sections. A set of 75 tomographic reconstructions for each profile is performed after adding 1% of random noise with a zero-mean Gaussian distribution. The average reconstruction for each profile is plotted in Fig. 11, together with the envelope made of the lowest and highest reconstructed values along the profile.

Fig. 11: Sets of 75 tomographic reconstructions for each synthetic emissivity profile with 1% of Gaussian noise, using MFI (left) and PTR (right). The envelopes represent the lowest and highest reconstructed values along the emissivity profiles

As a result, both PTR and MFI show similar capabilities to recover the general shape of each profile. Nevertheless, MFI appears more sensitive to noise (larger envelope) than PTR, but exhibits a higher capability to follow steep gradients at the edge of the profiles and in the core (e.g., for the peaked profile). This could explain why, in the literature, MFI is usually preferred for SXR tomography [19,20,21,22,23] and PTR for neutron tomography [18].

3.4 Detectors in situ cross-calibration correction

While statistical noise can only be smoothed out by a proper optimization of the regularization parameter, systematic errors originating from imperfect knowledge of the detector sensitivities persist through the reconstructions of different emissivity profiles. Therefore, they can in principle be detected and corrected, provided that the measurement database contains enough statistics.

In this section, we propose a methodology to perform a cross-calibration correction of the detector sensitivities and test its capabilities and limitations, using the defined set of synthetic profiles in the 1D problem configuration. A calibration error coefficient is applied to each detector, drawn from a Gaussian probability distribution with \(\sigma = 0.1\). The method uses the redundancy of information between detectors to predict the expected measurements of a given detector. The comparison between the expected and experimental measurements of this detector can then be used to estimate a cross-calibration correction factor. The iterative procedure is decomposed as follows:

  1. Perform the tomographic reconstructions of the full measurement set while omitting detector #1 from the reconstruction process.

  2. Use the reconstructed emissivity profiles to predict the measurements \(f_{rec,1}\) of detector #1 using \(f_{rec,1} = \mathop \sum \nolimits_{j = 1}^{{N_{p} }} T_{1j} \varepsilon_{rec,j}\).

  3. Estimate the mean \(m_{f,1}\) and standard deviation \(\sigma_{f,1}\) of the measurement ratio \(f_{meas,1} /f_{rec,1}\). It is expected that \(m_{f,1} \simeq 1\) when the detector sensitivity is already well known.

  4. Repeat steps 1 to 3 for all other detectors in order to estimate \(m_{{f,1:N_{los} }}\) and \(\sigma_{{f,1:N_{los} }}\), as shown in Fig. 12.

  5. Select the detectors for which the following condition is fulfilled: \(\left| {m_{f,i} - 1} \right| > K\sigma_{f,i}\), with K a constant determined empirically (here \(K = 3/2\)). This condition ensures that the applied correction coefficient \(C_{i} = 1/m_{f,i}\) is robust against the statistical variability of the data.

  6. Apply the cross-calibration correction factor \(C_{k}\) to the detector showing the highest calibration error, i.e., \(k = \arg \mathop {\max }\nolimits_{i} \left| {m_{f,i} - 1} \right|\), among those satisfying the condition of step 5.

  7. Repeat steps 1 to 6 until no detector is selected in step 5.

Fig. 12: Data variability (mean and standard deviation) of the ratio of real to expected measurements for each detector, at different iterations of the calibration correction procedure. The red arrows indicate the channel to be corrected at the next iteration

In this calibration procedure, only the one detector with the highest measurement deviation is recalibrated at each iteration step. This cautious approach mitigates the risk of an improper detector calibration correction in cases where the sensitivity of a few neighboring channels is jointly overestimated (or jointly underestimated).
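A condensed Python sketch of one pass of steps 1–6 is given below; the helper reconstruct_without(i), which reconstructs the whole profile database while omitting detector i, is a hypothetical placeholder for the MFI machinery described above:

```python
import numpy as np

def calibration_pass(T, F_meas, reconstruct_without, K=1.5):
    """One pass of steps 1-6. F_meas: (N_profiles, N_los) measurement
    database; reconstruct_without(i) returns the (N_profiles, N_p)
    emissivities reconstructed without detector i (hypothetical helper).
    Returns (channel, correction factor), or None when step 5 selects
    no detector (the stopping criterion of step 7)."""
    n_los = F_meas.shape[1]
    m, s = np.ones(n_los), np.zeros(n_los)
    for i in range(n_los):
        eps_rec = reconstruct_without(i)            # step 1
        f_rec = eps_rec @ T[i]                      # step 2: predicted f_i
        ratio = F_meas[:, i] / f_rec                # step 3
        m[i], s[i] = ratio.mean(), ratio.std()      # steps 3-4
    selected = np.abs(m - 1.0) > K * s              # step 5
    if not selected.any():
        return None
    k = int(np.argmax(np.where(selected, np.abs(m - 1.0), 0.0)))  # step 6
    return k, 1.0 / m[k]
```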

Obviously, the success of this method relies on the data variability \(\sigma_{f}\) being smaller than the measurement deviation \(\left| {m_{f,i} - 1} \right|\). Two critical elements for this purpose are the quality of the measurement database (high statistics and diversity of emissivity profiles) and the accuracy of the tomographic reconstructions. As demonstrated in the previous sections, the reconstruction accuracy strongly depends on the proper choice of the regularization parameter \(\lambda\) through the discrepancy principle \(\chi_{N}^{2} \left( {\lambda_{{{\text{discr}}}} } \right) \simeq \sigma_{{{\text{noise}}}}^{2}\). However, it should be noted that the optimal error tolerance \(\sigma_{{{\text{noise}}}}\) will progressively decrease during the correction procedure, as the global detector cross-calibration is expected to improve after each iteration. It is therefore proposed to overcome this issue by applying a relaxation of the \(\sigma_{{{\text{noise}}}}\) value. Initially, the iterative process is run with a relatively high \(\sigma_{{{\text{noise}}}}\) value. After the end of step 7 (when no detector can be corrected anymore), the whole process is repeated with a \(\sigma_{{{\text{noise}}}}\) value decreased by 10%. The \(\sigma_{{{\text{noise}}}}\) value is thus decreased progressively, until the error tolerance reaches a lower threshold (e.g., the level of statistical noise, here 1%).

As expected, it can be seen in Fig. 12 that the calibration of the edge channels, where the signal intensity is the lowest, is the most challenging task. The evolution of the mean absolute calibration error after each iteration of the correction procedure is shown in Fig. 13, and the final set of correction coefficients is presented in Fig. 14. Although relatively successful, the cross-calibration procedure appears less accurate for the edge channels (as discussed before), but also for the innermost core channels. The latter observation can be explained by the fact that the regularization procedure slightly oversmooths steep emissivity gradients in the core, affecting the tomographic reconstructions of the peaked and hollow profiles, as seen in Fig. 11.

Fig. 13: Evolution of the mean absolute calibration error after each iteration of the correction procedure. The calibration error restricted to the first 20 core channels is plotted in blue and the error tolerance \(\sigma_{noise}\) in orange

Fig. 14: Applied calibration error coefficients (in blue) and obtained correction coefficients (in red) after the correction procedure

Overall, the lack of diversity in the chosen set of emissivity profiles can partly explain the limitations observed for the correction of the core and edge channels. In a real tokamak environment, this issue could be addressed by including plasma discharges in which the plasma center oscillates vertically, in order to increase the spatial coverage of the lines-of-sight. Upgrading the tomography algorithm, e.g., by including the magnetic equilibrium as a priori information in the reconstruction [23], should also lead to a significant improvement.

3.5 Detectors position correction

A procedure similar to that described in the previous section is applied here in order to correct the mispositioning of the detectors. A position error shift \(\Delta z_{{{\text{LoS}}}}\) is applied to each channel, drawn from a Gaussian distribution with \(\sigma = 0.1\). In this case, a scan of the position of each detector is performed, in order to find the detector position that best matches the measured signals. An example of a position scan is presented in Fig. 15, where it is clearly visible that the optimal position shift is associated with the lowest data variability.
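Such a scan can be sketched by reusing the hypothetical reconstruct_without helper of the calibration sketch, together with an (equally hypothetical) routine rebuilding the ith transfer-matrix row for a shifted LoS:

```python
import numpy as np

def best_position_shift(i, shifts, build_T_row, F_meas, reconstruct_without):
    """Scan candidate position shifts dz of detector i and keep the one
    minimizing the data variability of the measured/predicted ratio
    (cf. Fig. 15). build_T_row(i, dz) returns the ith transfer-matrix
    row recomputed for a LoS shifted by dz (hypothetical helper)."""
    eps_rec = reconstruct_without(i)                 # (N_profiles, N_p)
    variability = []
    for dz in shifts:
        f_rec = eps_rec @ build_T_row(i, dz)         # predicted measurements
        variability.append((F_meas[:, i] / f_rec).std())
    return shifts[int(np.argmin(variability))]
```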

Fig. 15: Data variability of detector #6 as a function of its position shift \(\Delta z_{{{\text{LoS}}}}\)

The evolution of the mean absolute position error after each iteration of the correction procedure is shown in Fig. 16, and the final corrective set of position shifts is presented in Fig. 17. Interestingly, it can be noted that the global position error increases after iteration #38, probably indicating an overfitting of the measurements when the error tolerance becomes too low. As for the detector sensitivity cross-calibration, the method could benefit from an increased diversity of the profiles in the database and from an upgrade of the tomography algorithm.

Fig. 16: Evolution of the mean absolute position error after each iteration of the correction procedure, for all channels (black crosses) and for the 20 core channels (blue squares). The error tolerance \(\sigma_{{{\text{noise}}}}\) is plotted in orange

Fig. 17: Initial shifts of the detector positions (in blue) and obtained correction shifts (in red) after the correction procedure

In practice, the sensitivity and position cross-calibrations of the detectors should be performed simultaneously in a real tokamak environment. An example of such a calibration procedure can be found for the SXR diagnostic of ASDEX Upgrade in [20].

4 Application to a more realistic 2D scenario

The goal of this section is to introduce the basic steps necessary to build a synthetic SXR tomography diagnostics in a tokamak environment, together with some tools to assess the capabilities of the 2D tomography algorithm.

4.1 Definition of the tokamak geometry

First of all, a magnetic equilibrium should be defined in order to develop a reference plasma scenario. The equilibrium can be obtained from an equilibrium reconstruction code, either experimentally or by modeling [25]. In the absence of such tools, we will use a simpler solution, namely the Soloviev equilibrium, an analytical solution of the Grad–Shafranov equation [26], in which the magnetic flux \(\psi \left( {r,z} \right)\) is expressed as:

$$ \psi \left( {r,z} \right) = \frac{1}{2}\left( {c_{2} R_{0}^{2} + c_{0} r^{2} } \right)z^{2} + \frac{1}{8}\left( {c_{1} - c_{0} } \right)\left( {r^{2} - R_{0}^{2} } \right)^{2} , $$
(23)

with \(R_{0} = 0.5\,{\text{m}}\) the major radius, and \(c_{0} = B_{0} /\left( {R_{0}^{2} \kappa_{0} q_{0} } \right)\), \(c_{1} = B_{0} \left( {\kappa_{0}^{2} + 1} \right)/\left( {R_{0}^{2} \kappa_{0} q_{0} } \right)\) and \(c_{2} = 0\) constants, where we choose \(B_{0} = 2\,{\text{T}}\) for the magnetic field, \(\kappa_{0} = 1.2\) for the ellipticity and \(q_{0} = 1.4\) for the safety factor on the axis.

The plasma boundary (rb, zb) is defined by the following equation [26]:

$$ z_{b} = \pm \frac{1}{{r_{b} }}\sqrt {\frac{{2R_{0}^{2} \kappa_{0} q_{0} }}{{B_{0} }}\psi_{b} - \frac{{\kappa_{0}^{2} }}{4}\left( {r_{b}^{2} - R_{0}^{2} } \right)^{2} } , $$
(24)

where \(\psi_{b}\) = 0.049 here. The normalized magnetic flux \(\psi_{N}\) is such that:

$$ \psi_{N} = \frac{{\psi - \psi_{0} }}{{\psi_{b} - \psi_{0} }}, $$
(25)

with \(\psi_{0}\) the magnetic flux on the axis. The resulting equilibrium is plotted in Fig. 18, where the plasma is additionally horizontally shifted by Rshift = 0.16 m. In the following, the normalized minor radius \(\rho = \sqrt {\psi_{N} }\) will be used to label the magnetic flux surfaces.
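For reference, the flux map of Eqs. (23)–(25) can be evaluated in a few lines of Python (a minimal sketch with the parameters of the text; the mesh bounds and the way the horizontal shift is applied are illustrative choices):

```python
import numpy as np

# Soloviev equilibrium, Eqs. (23)-(25), with the parameters of the text
R0, B0, kappa0, q0, psi_b = 0.5, 2.0, 1.2, 1.4, 0.049
Rshift = 0.16                                        # horizontal plasma shift (m)
c0 = B0 / (R0**2 * kappa0 * q0)
c1 = B0 * (kappa0**2 + 1.0) / (R0**2 * kappa0 * q0)
c2 = 0.0

def psi(r, z):
    """Magnetic flux of Eq. (23)."""
    return (0.5 * (c2 * R0**2 + c0 * r**2) * z**2
            + 0.125 * (c1 - c0) * (r**2 - R0**2)**2)

# normalized flux (Eq. 25) on a fine mesh; the shift is applied by
# evaluating psi at r - Rshift
r, z = np.linspace(0.3, 1.1, 100), np.linspace(-0.45, 0.45, 100)
rr, zz = np.meshgrid(r, z)
psi0 = psi(R0, 0.0)                                  # flux on the magnetic axis
psi_N = (psi(rr - Rshift, zz) - psi0) / (psi_b - psi0)
rho = np.sqrt(np.clip(psi_N, 0.0, None))             # normalized minor radius
```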

Fig. 18: Magnetic flux map of the Soloviev equilibrium

4.2 Definition of the detection system

The most commonly used detection system for SXR tomography in tokamaks is based on silicon barrier diodes (SBD), though other technologies based on gaseous detectors, such as Gas Electron Multipliers (GEM) and Low-Voltage Ionization Chambers (LVIC), are under development [27,28,29]. The X-ray absorption efficiency of an SBD can be estimated by:

$$ \eta_{Si} = 1 - \exp \left( { - \alpha V} \right)\left[ {1 - \frac{{\alpha L_{p} }}{{1 + \alpha L_{p} }}\left\{ {1 - \exp \left( { - \left( {\alpha + 1/L_{p} } \right)\left( {D - V} \right)} \right)} \right\}} \right], $$
(26)

with \(\alpha\) the linear absorption coefficient of silicon (in \({\text{cm}}^{-1}\), obtained for instance from the mass attenuation coefficient in \({\text{cm}}^{2} /{\text{g}}\) of the NIST XCOM database [30] multiplied by the silicon mass density), and where we choose the depletion zone \(V = 3{ }\upmu {\text{m}}\), the diffusion length \(L_{p} = 200\,\upmu {\text{m}}\) and the substrate thickness \(D = 380{ }\upmu {\text{m}}\), see [31]. A 50-µm-thick beryllium filter placed at the pinhole is added to the system, cutting off almost completely the SXR spectrum below 1 keV. The resulting detector spectral response \(\eta_{SXR}\) is displayed in Fig. 19 (left).
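A minimal Python transcription of Eq. (26), combined with the transmission of the beryllium filter, is sketched below (lengths in cm; the attenuation coefficients alpha_si and alpha_be, in cm−1, are assumed to be interpolated from XCOM data):

```python
import numpy as np

def eta_si(alpha_si, V=3e-4, Lp=200e-4, D=380e-4):
    """SBD absorption efficiency of Eq. (26); all lengths in cm."""
    tail = 1.0 - np.exp(-(alpha_si + 1.0 / Lp) * (D - V))
    return 1.0 - np.exp(-alpha_si * V) * (
        1.0 - alpha_si * Lp / (1.0 + alpha_si * Lp) * tail)

def eta_sxr(alpha_si, alpha_be, t_be=50e-4):
    """Full spectral response: 50-um Be filter transmission times the
    SBD efficiency (cf. Fig. 19, left)."""
    return np.exp(-alpha_be * t_be) * eta_si(alpha_si)
```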

Fig. 19: Left: ideal spectral response of a silicon photodiode with a 50-µm-thick beryllium low-energy filter. Right: geometry of the lines-of-sight of the five considered cameras

A set of five SXR cameras, each composed of 20 photodiodes, is introduced here, as presented in Fig. 19 (right). Each line-of-sight is defined by the positions of the photodiode and of the associated pinhole. The cameras are placed on the outboard side of the vacuum chamber, to account for the lack of available space that usually occurs in tokamaks.

4.3 Plasma radiation scenario

The SXR plasma emissivity \(\varepsilon \left( {h\nu } \right)\) (in W·m−3·eV−1) of the bulk plasma, usually composed of fully ionized species such as hydrogen, deuterium, tritium or helium, originates from electron–ion (free–free) collisions and can be described by the Bremsstrahlung formula [32]:

$$ \varepsilon \left( {h\nu } \right) = 1.54 \times 10^{ - 38} n_{e}^{2} Z_{eff} \frac{1}{{\sqrt {T_{e} } }}\exp \left( { - \frac{h\nu }{{T_{e} }}} \right)G_{ff} , $$
(27)

where ne and Te are the plasma electron density and temperature, respectively, \(Z_{{{\text{eff}}}} = \mathop \sum \nolimits_{i} n_{i} Z_{i}^{2} /n_{e}\) denotes the plasma effective charge (\(Z_{{{\text{eff}}}} = 1.2\) here) and Gff is a free–free Gaunt factor, usually close to unity in the SXR range \(h\nu\) = [1 keV; 20 keV].

The presence of partially ionized impurities in the plasma, such as argon, iron or tungsten ions, can modify the SXR emissivity significantly, due to complex contributions from line radiation (bound–bound) and radiative recombination (free–bound). Besides, modeling them requires solving the ionization equilibrium of each species. For simplicity, this aspect is kept out of the scope of this paper. Nevertheless, it can be noted that many valuable spectroscopic data are periodically released by the Open-ADAS project and can be used to implement impurity radiation and transport in numerical codes [33].

Equation (27) requires defining electron temperature and density radial profiles in order to determine the radiation power spectrum, which can be obtained for instance from an integrated tokamak simulator [34]. Here, we mimic the shape of usual tokamak profiles by using the regular and flat profiles of Eq. (20), with Te(0) = 3 keV and ne(0) = 5 × 1019 m−3, respectively, as presented in Fig. 20 (left). The associated profile of the radiation power spectrum is plotted in Fig. 21. By integrating over all photon energies and convolving with the photodiode spectral response, the total and SXR-filtered radiation profiles are obtained, as plotted in Fig. 20 (right). It can be noted that the SXR detectors detect roughly half of the total radiated power in this plasma scenario.
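A minimal sketch of this step, assuming a response(hv) function such as the eta_sxr sketch above evaluated on a photon-energy grid (scalar ne and Te, to be called once per radial point):

```python
import numpy as np

def brems_spectrum(ne, Te, hv, Zeff=1.2, Gff=1.0):
    """Bremsstrahlung emissivity of Eq. (27) in W.m-3.eV-1 (Te, hv in eV)."""
    return 1.54e-38 * ne**2 * Zeff * np.exp(-hv / Te) / np.sqrt(Te) * Gff

hv = np.linspace(1e3, 20e3, 500)                     # photon energies (eV)

def sxr_power(ne, Te, response):
    """SXR-filtered power density (W/m3): spectrum weighted by the
    detector spectral response and integrated over photon energy."""
    return np.trapz(brems_spectrum(ne, Te, hv) * response(hv), hv)
```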

Fig. 20: Left: electron temperature and density profiles. Right: total and SXR-filtered radiated power profiles

Fig. 21: Bremsstrahlung power spectrum as a function of the minor radius

It is then possible to remap the radiation profile onto the magnetic equilibrium in order to obtain a 2D synthetic emissivity field on a fine mesh (100×100 resolution here), as depicted in Fig. 22. The associated line-integrated measurements are obtained by applying the transfer matrix of the detection system to the emissivity field, see Eq. (6), as shown in Fig. 23.

Fig. 22: Simulated soft X-ray emissivity profile in the defined geometry

Fig. 23: Synthetic SXR line-integrated measurements, with the cameras labeled from 1 to 5

All the necessary tools have now been developed to investigate the tomographic capabilities of the defined detection system. Some tomographic tests are shown in the next sections, assessing the impact on the reconstruction quality of the number of cameras, the noise level, anisotropic regularization and the presence of poloidal asymmetries.

4.4 Some examples of tomographic reconstruction

4.4.1 First tomographic test

A first tomographic test (using MFI) of the emissivity distribution obtained in the previous section is performed and presented in Fig. 24, including 1% of Gaussian noise in the measurements. It can be seen that the geometrical coverage of the plasma allows a rather satisfying reconstruction of the emissivity field: the position and intensity of the maximum are properly recovered, as well as the general shape of the distribution. The reconstruction error map displayed in Fig. 24 (right) shows that the reconstruction error is close to zero in the plasma center, while it reaches values of up to 15% at the plasma edge, due to the difficulty of accurately reconstructing steep emissivity gradients.

Fig. 24: SXR tomographic reconstruction test with the initial emissivity phantom (left), the reconstructed emissivity (center) and the associated error map (right), assuming 1% of Gaussian noise in the measurements

4.4.2 Tomographic reconstructions versus the number of cameras

Since the available space for diagnostic port-plugs is usually quite limited (as is the budget), studying the optimal number and position of the cameras should be a mandatory step in the diagnostic design. Here, we show how the number of cameras affects the quality of the reconstruction in our defined scenario. The obtained tomograms are presented in Fig. 25. It can be seen that the number of cameras dramatically impacts the quality of the reconstruction: at least two or three cameras with different viewing angles are necessary to recover the global shape of the emissivity distribution.

Fig. 25: Tomographic reconstruction tests for different numbers of cameras used in the reconstruction

4.4.3 Reconstruction vs. noise

The robustness of the reconstruction against experimental uncertainties is one of the critical aspects to consider for a tomographic system, as already discussed in the previous sections. In tokamaks, the level of noise in the line-integrated measurements can vary widely, from less than 1% to more than 10%, depending on the energy detection range (soft X-ray, hard X-ray or gamma-ray), the photon statistics, the detector technology, the electronic acquisition system and the electromagnetic environment surrounding the diagnostic. Here, we perform tomographic tests for four different levels of zero-mean Gaussian noise in the measurements, i.e., 2%, 5%, 10% and 20%, as depicted in Fig. 26. The associated tomograms are presented in Fig. 27, showing a relatively good resilience against noise, with a reconstruction error growing, for 5% of noise, from 15 to 20% at the plasma edge and from 1 to 5% in the plasma core. Even for 20% of noise, the emissivity error in the plasma core remains below 10–15%, despite the presence of significant artifacts around the last closed flux surface.

Fig. 26: Line-integrated measurements and associated retrofits for different levels of Gaussian noise: \(2\%\) (top left), \(5\%\) (top right), \(10\%\) (bottom left) and 20% (bottom right)

Fig. 27: Tomographic reconstruction tests (top) and associated error maps (bottom) for different levels of Gaussian noise, from left to right: \(2\%\), \(5\%\), \(10\%\) and 20%

4.4.4 Anisotropic regularization: a priori information from the magnetic equilibrium

In tokamaks, the plasma emissivity is usually expected to exhibit steeper gradients in the direction perpendicular to the magnetic flux surfaces (i.e., radially), due to reduced particle and heat transport, than in the parallel directions (i.e., toroidally and poloidally). Using this a priori information on the emissivity distribution can therefore significantly help to find an appropriate solution to the tomographic problem. In practice, the knowledge of the magnetic equilibrium (ME) can be included by constructing an anisotropic regularization operator:

$$ H_{MFI} = \left( {1 - \tau } \right)\,^{{\varvec{t}}} {\varvec{\nabla}}_{//} \cdot {\varvec{W}} \cdot {\varvec{\nabla}}_{//} + \tau \,^{{\varvec{t}}} {\varvec{\nabla}}_{ \bot } \cdot {\varvec{W}} \cdot {\varvec{\nabla}}_{ \bot } , $$
(28)

where \(\nabla_{//}\) and \(\nabla_{ \bot }\) denote the parallel and perpendicular components of the gradient, respectively, as depicted in Fig. 28. An anisotropy factor \(\tau < 0.5\) imposes a higher level of smoothness of the solution in the parallel direction.
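One possible (dense, purely illustrative) construction of these operators projects the pixel-grid x and y gradient matrices of Eq. (10) onto the local directions across and along the flux surfaces, taking the perpendicular unit vector along \(\nabla \psi_{N}\):

```python
import numpy as np

def aniso_operator(grad_x, grad_y, psi_N, tau=0.05):
    """Anisotropic operator of Eq. (28) on a flattened pixel grid.
    grad_x, grad_y: matrix forms of the x and y gradients (Eq. 10);
    psi_N: flattened normalized flux map on the same grid."""
    gpx, gpy = grad_x @ psi_N, grad_y @ psi_N        # grad(psi_N) per pixel
    norm = np.hypot(gpx, gpy) + 1e-12                # avoid division by zero
    ux, uy = gpx / norm, gpy / norm                  # perpendicular unit vector
    D_perp = np.diag(ux) @ grad_x + np.diag(uy) @ grad_y
    D_par = np.diag(-uy) @ grad_x + np.diag(ux) @ grad_y   # 90-deg rotation
    def H(W):                                        # W: diagonal MFI weights
        return ((1 - tau) * D_par.T @ W @ D_par
                + tau * D_perp.T @ W @ D_perp)
    return H
```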

Fig. 28: Magnetic flux surfaces implemented in the reconstruction, where the blue arrows denote the gradients: in the x direction (top left) and y direction (top right) for the isotropic case, and in the directions parallel (bottom left) and perpendicular (bottom right) to the flux surfaces for the anisotropic case

To demonstrate the beneficial effect of anisotropic regularization in our simple scenario, a series of tomographic reconstruction tests, with 2% of Gaussian noise in the measurements, is presented in Fig. 29 for different \(\tau\) values, from \(\tau = 0.5\) (equivalent to isotropic regularization) down to \(\tau = 0.01\). It is clearly visible that adding the ME constraint reduces the presence of artifacts outside of the last closed flux surface and diminishes the emissivity error in the plasma center. The ME constraint can also be useful to reconstruct more accurately poloidal asymmetries in the emissivity distribution, as discussed in the next section.

Fig. 29: Tomographic reconstruction tests including the magnetic equilibrium, with different values of the anisotropy factor, from left to right: \(\tau = 0.5\), \(\tau = 0.2\), \(\tau = 0.05\) and \(\tau = 0.01\)

Nevertheless, including the ME constraint implies adding a free parameter \(\tau\) as well as an additional source of error in the reconstruction, as any error in the ME reconstruction (horizontal or vertical shift, distortions) can propagate into the tomographic inversion. To illustrate this aspect, a series of tomographic reconstruction tests with anisotropic regularization (\(\tau = 0.05\)) is presented in Fig. 30, where a horizontal shift of the ME toward the low-field side, from \(R_{{{\text{shift}}}} = 2 {\text{cm}}\) up to \(R_{{{\text{shift}}}} = 20{\text{ cm}}\), was applied. As a result, while a ME shift of a few centimeters does not seem to impact dramatically the quality of the solution, errors of the order of 10 cm or larger appear to be detrimental for the tomographic reconstruction in the presented scenario.

4.4.5 Asymmetric emissivity profiles

It was assumed in the previous sections that the emissivity was homogeneous on each magnetic flux surface. This assumption relies on the fact that the parallel transport is a few orders of magnitude faster than the perpendicular transport, such that the electron temperature and density distributions are homogenized on each flux surface. This assumption usually holds in plasmas with low-Z impurities; however, heavy impurities like tungsten can be subject to centrifugal [35], electrostatic [36] or friction [37] forces that induce an asymmetry in their poloidal distribution. For instance, Reinke et al. derived an analytical formula for the poloidal asymmetry of the impurity density \(n_{S}\) induced by the centrifugal force and by ion cyclotron resonance heating (ICRH):

$$ \frac{{\tilde{n}_{S} }}{{\langle n_{S}\rangle }} = 2 \frac{r}{{R_{0} }}\cos \left( \theta \right)\left[ {\frac{{m_{S} \omega_{S}^{2} R_{0} }}{{2T_{i} }}\left( {1 - \frac{{Zm_{i} }}{{m_{S} }}\frac{{Z_{eff} T_{e} }}{{Z_{eff} T_{e} + T_{i} }}} \right) - \frac{{Zf_{m} }}{2}\frac{{T_{e} }}{{Z_{eff} T_{e} + T_{i} }}\eta \left( r \right)} \right], $$
(29)

where \(n_{S} = {{\langle n_{S}\rangle }} + \tilde{n}_{S}\), \(\omega_{S}\) is the rotation angular frequency of the impurity species and \(\eta \left( r \right)\) a factor accounting for the temperature anisotropy ratio (\(T_{ \bot } /T_{\parallel }\)) of the minority species. From this expression, we will test the tomography algorithm by simply assuming that the centrifugal asymmetry takes the following general form for the emissivity:

$$ \frac{{\tilde{\varepsilon }}}{\langle \varepsilon \rangle } = K r\cos \theta , $$
(30)

with the coefficient K = 1 here, leading to a low-field-side (LFS) asymmetry. The asymmetric emissivity profile and its reconstruction, using the full set of five cameras, are presented in Fig. 31.

Fig. 30: Tomographic reconstruction tests including the ME constraint (\(\tau = 0.05\)) with a horizontal shift to the low-field side: \(R_{{{\text{shift}}}} = 2 {\text{cm}}\), 5 cm, 10 cm and 20 cm (from left to right)

The high-field-side asymmetry that can be induced by a change of the electrostatic potential in the presence of off-axis ICRH is mimicked by:

$$ \frac{{\tilde{\varepsilon }}}{\langle \varepsilon \rangle} = - \eta \left( r \right)r\cos \theta . $$
(31)

Here, \(\eta \left( r \right) = \eta_{0} \exp \left[ { - \left( {r - r_{0} } \right)^{2} /\sigma^{2} } \right]\), where we choose \(\eta_{0} = 2\), \(r_{0} = 0.4\) and \(\sigma = 0.2\), leading to a high-field-side (HFS) asymmetry. The emissivity and the associated reconstruction are shown in Fig. 32.
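Both phantom shapes can be generated with a small helper (a sketch; the function and parameter names are illustrative):

```python
import numpy as np

def asymmetric_emissivity(eps_fs, r, theta, K=1.0, eta0=2.0, r0=0.4,
                          sigma=0.2, mode="LFS"):
    """Apply the poloidal asymmetries of Eqs. (30)-(31) to a flux-surface
    averaged emissivity eps_fs evaluated on an (r, theta) mesh."""
    if mode == "LFS":                                # centrifugal-like, Eq. (30)
        tilde = K * r * np.cos(theta)
    else:                                            # ICRH-induced HFS, Eq. (31)
        eta = eta0 * np.exp(-(r - r0)**2 / sigma**2)
        tilde = -eta * r * np.cos(theta)
    return eps_fs * (1.0 + tilde)
```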

Fig. 31: Tomographic reconstruction test in the case of a low-field-side poloidal asymmetry and 1% of Gaussian noise, with the initial emissivity (top left), the reconstructed emissivity without ME constraint (top center), with ME constraint and \(\tau = 0.05\) (top right) and the associated measurements (bottom)

Fig. 32: Tomographic reconstruction test in the case of a high-field-side poloidal asymmetry and 1% of Gaussian noise, with the initial emissivity (left), the reconstructed emissivity without ME constraint (top center), with ME constraint and \(\tau = 0.05\) (top right) and the associated measurements (bottom)

As a result, it can be observed that the reconstructed tomograms recover the general shape of the asymmetry as well as the radial shift of the emissivity maximum. Though the fine details of the HFS asymmetry are smoothed out by the isotropic regularization (middle plots), the anisotropic regularization (right plots) reconstructs the typical banana shape more accurately.

5 Conclusion

In this paper, we have introduced some methodology and tools to implement a tomography algorithm for fusion devices. Based on a simple 1D tomography problem, the Tikhonov regularization has been described in detail, with a focus on the second-order Philips–Tikhonov regularization and the Minimum Fisher Information method. The optimal reconstruction parameters have been studied, such as the proper choice of the emissivity spatial resolution and of the regularization parameter, through the L-curve corner method or the discrepancy principle. A methodology has been proposed to perform an in situ sensitivity and position cross-calibration of the detectors with an iterative approach, using the information redundancy and data variability in a given set of reconstructed profiles. The method has been validated on a set of synthetic data and its limitations have been discussed. Finally, a more realistic 2D synthetic X-ray tomography diagnostic has been introduced and described, including the detector response and the plasma scenario. The possibility of including the magnetic equilibrium constraint through an anisotropic regularization operator has been investigated. The synthetic diagnostic has been tested in several situations, including the presence of poloidal asymmetries in the emissivity distribution, illustrating the capabilities and limitations of a tomography algorithm to recover specific emissivity patterns in a given geometry.