Deconvolution of multi-Boltzmann x-ray distribution from linear absorption spectrometer via analytical parameter reduction

Accurate characterization of incident radiation is a fundamental challenge for diagnostic design. Herein, we present an efficient spectral analysis routine that is able to characterize multiple components within the spectral emission by analytically reducing the number of parameters. The technique is presented alongside the design of a hard x-ray linear absorption spectrometer using the example of multiple Boltzmann-like spectral distributions; however, it is generally applicable to all absorption-based spectrometer designs and can be adapted to any incident spectral shape. This routine is demonstrated to be tolerant of experimental noise and suitable for real-time data processing at multi-Hz repetition rates.


I. INTRODUCTION
Hard x-ray emission from high intensity laser interactions with solid targets is a broad spectral distribution extending up to the peak energies of the accelerated electrons. Careful characterization of this emission can inform us about the properties of the population of energetic electrons propagating within the target, rather than measuring it after it escapes the sheath potential, which is known to reduce the measured electron energies. 1 Numerous diagnostic techniques have been demonstrated, 2-5 with several early methods relying on an image plate and metallic filters in both linear 4 and areal layouts. 5 Rusby et al. presented an active method that relied upon scintillator crystals to both attenuate and detect the incident radiation. 2 The design used here is an adaptation of this technique, presented initially by Armstrong, 6 that utilizes tungsten filters and Lu1.8Y0.2SiO5 (LYSO) scintillators to make a more compact rail that can operate both inside and outside a vacuum chamber; in the latter case, filtering from the external flange must be considered. These types of designs are a key method of characterizing x-ray energies between the limit of k-edges (∼100 keV) and the onset of activation (∼8 MeV), and therefore, accurate reconstruction is a pertinent area of study.
Each of these designs has involved a spectral reconstruction approach that follows a similar routine. First, the diagnostic and filters are simulated using Monte Carlo codes to build a response matrix. Then, using a predefined spectral shape, trial responses can be generated and compared to the measured data. These spectral shapes can often be expressed as dependent on two variables: a flux and a control variable (temperature, critical energy, bandwidth, etc.). This trial process relies on both an adequate fitting routine that can handle a wide dynamic range and the incident radiation being approximated by a distribution with a single control parameter, such as temperature. In such a case, it is possible to simply scan the desired parameter space, varying the flux and control variable of the trial spectra to find the closest match to the data. However, if the incident spectral distribution is more complex, with multiple components, this process quickly becomes intractable. The emission from solid targets is expected to broadly follow the accelerated electron population and exhibit a Boltzmann-like distribution with multiple temperature components. 4,7-9 Emissions from laser-wakefield interactions typically have both a bremsstrahlung-like distribution and the characteristic betatron spectrum, 10 the former seeded by electrons interacting with any shielding or filtering in the beam. In each case, for two-component distributions, characterizing the emission requires scanning through four independent parameters: the flux and temperature/critical energy of the first component and the flux and temperature/critical energy of the second. One can envision more complicated incident spectral shapes that contain additional components within the emission, each requiring a flux and control parameter to be deconvolved. This is generally how solutions to single temperature approximations are found; 2-4,9 however, scaling this method to multiple temperature components is inefficient as one needs to calculate a large 2 × N dimensional parameter space to find the best match to the measured data. Chen et al. 4,9 discussed the application of a two-temperature fitting routine when outlining the diagnostic response, utilizing a normalized ratio, R_Chen, of the two temperatures, T_cold and T_hot, between 10 keV and 10 MeV. This method aims to shortcut the problem of the four-dimensional space by first normalizing the emission and then scanning over the three remaining parameters; however, the total flux would still need to be calculated after the fact as the fourth parameter, which still leaves this a 4D problem.
In this paper, we present a novel approach that reduces the number of explicit variables that need to be scanned by directly calculating the optimum flux for each spectral component, i.e., the number of photons, n_i, that minimizes the difference between the measured data and the trial spectra for a given set of control variables. Applying this technique to the measurement of the multiple components of an x-ray distribution can provide insight into the underlying physical processes, such as isolating a low-energy thermal emission that can otherwise dominate the interaction 6 or probing an energetic but low flux component of the emission. 4,9 This opens up the potential to investigate new physical processes as lasers approach the QED regime, 11,12 where strong-field processes can manifest as energetic peaks in the photon emission, 11,13 distinct in spectral distribution yet emanating from the same laser-plasma interaction as the bremsstrahlung emission from classical collisional processes. We note that, depending on the interaction details, such as the presence of preplasma and the electron distribution function, the QED regimes can also produce Boltzmann-like distributions, as in simulations 14-17 exhibiting no signs of decreased spectral density at low photon energies. Thus, the ability to deal with multi-component Boltzmann-like distributions and to distinguish them from non-Boltzmann distributions is of major interest for present and future experiments.

II. LINEAR ABSORPTION SPECTROMETER DESIGN
The first step in reconstructing the incident spectra is creating a response matrix, Γ, for the given spectrometer design. This can be calculated using Monte Carlo simulations, such as GEANT4, 18 to determine the energy deposited in each layer of the spectrometer as a function of photon energy. This technique can be generally applied to any form of attenuation based spectrometer; however, the results presented below are based on the design initially presented in Ref. 6. This design consists of ten 2 × 10 × 30 mm³ LYSO (Lu1.8Y0.2SiO5) scintillator crystals positioned in a linear array, with five tungsten filters of the same dimensions interspersed in the latter half of the spectrometer, giving a total spectrometer length, L, of ∼60 mm. The crystals and filters are held in a 3D printed frame that optically isolates the front crystals and aligns the output plane to the collection lens. The lens in question should be as fast (high NA) as possible to ensure high collection efficiency of the emitted light, and the scintillators should be wrapped in a high-reflectivity coating 19,20 to maximize the fraction of collected optical light. These factors and the inherent conversion efficiency of the scintillator should then be considered to absolutely calibrate the device, i.e., to determine the conversion between detected light (counts) and absorbed energy in each crystal. 20 The scintillators are then positioned within a lead housing that acts both as a partial collimator and as protection from scattered radiation. However, as the x rays are geometrically expanding over the source-to-spectrometer distance, Z, there is a relative fluence change between the first and last crystals that should be accounted for in the generation of the response matrix. This effect can be considered negligible provided Z ≫ L.
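As a rough sketch of this geometric consideration, the relative fluence at each crystal follows an inverse-square falloff with distance from the source. The crystal pitch, distances, and function name below are illustrative assumptions, not the authors' values:

```python
# Relative fluence correction across the crystal array due to geometric
# expansion of the x rays over the source-to-spectrometer distance Z.
# The 6 mm crystal pitch is an illustrative assumption.

def fluence_factors(Z_mm, n_crystals=10, pitch_mm=6.0):
    """Relative fluence at each crystal, normalized to the first crystal.

    Each crystal sits a further `pitch_mm` downstream; the fluence of a
    diverging source falls off as 1/r^2.
    """
    return [(Z_mm / (Z_mm + k * pitch_mm)) ** 2 for k in range(n_crystals)]

# When Z >> L (total length ~60 mm), the correction is negligible:
far = fluence_factors(Z_mm=2000.0)   # spectrometer 2 m from source
near = fluence_factors(Z_mm=200.0)   # spectrometer 20 cm from source
```

With these assumed numbers, at Z = 2 m the last crystal sees ∼95% of the first crystal's fluence, whereas at Z = 20 cm this drops to ∼62%, illustrating why the correction should be folded into the response matrix for close positioning.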
Γ should also include the absolute conversion between the energy deposited and the light detected on the CCD to permit a direct comparison between measured and expected data. The generation of the response matrix should include the specifics of each experimental setup; for example, if the device is situated inside the vacuum chamber, one only needs to account for any attenuation due to light-tightening, but if it is outside, the flange should be accounted for.
A schematic of the diagnostic is shown in Fig. 1(a), and typical experimental setups and photos can be found in Refs. 2 and 6. When used in laser-plasma interactions, the diagnostic is paired with a permanent magnetic field to remove charged particles from the incident radiation. This specific design has already been used in several studies; 6,21 LYSO was selected due to its high density, effective Z, and relative light output in comparison to other scintillator materials.
In the response matrix shown in Fig. 2, there is attenuation at lower energies due to a 12 mm Al flange. If there are further materials in the way, the response matrix should be recalculated to address these specifics, and more broadly, when altering the layout of the scintillator crystals within the spectrometer, a new response matrix is required. However, the reconstruction technique described herein is generally applicable, regardless of the specific filter/scintillator layout.
Of note in the response shown are the way the latter scintillator responses begin to exceed that of the first crystal as the incident energy increases, and the discontinuity at ∼60 keV. The former is due to secondary x rays (predominantly 511 keV photons generated through electron-positron annihilation) created in the first crystal and propagating through the array; the latter is due to the Lu k-edge in LYSO.

A. Analytical derivation
With the response matrix for single energies determined, we are able to calculate how the spectrometer will respond to an incident spectral distribution. The emission from solid targets is expected to broadly follow the accelerated electron population and exhibit a Boltzmann-like distribution with multiple temperature components. 7,8 This is seen in both simulations 22 and experimental measurements 4,9 alike and arises due to a myriad of complex and competing processes, for example, multiple absorption mechanisms coupling energy to the electron population 22,23 or variations in electron energy due to interactions with the sheath field. 1 However, the approach can be generalized to any incident spectral shape, and so we define

$$ F(E_\gamma) = \sum_{i=1}^{N} f_i(E_\gamma, n_i, T_i), \tag{2} $$

where F is the total spectral distribution as a function of photon energy, E_γ. F(E_γ) represents the summation of multiple spectral components, f_i, each of which is, in turn, dependent on the photon energy, the flux of each component, n_i, and a control parameter, T_i, for the given spectral shape. The control parameters vary depending on the spectral shape; for Boltzmann-like spectra, this would be temperature, and for synchrotron-like spectra, the critical energy. f_i carries the subscript i to emphasize that it is possible to consider multiple spectral shapes as well as components, i.e., a peak alongside a thermal distribution. With this defined, our measured data are the product of this spectral shape and our response matrix, Γ, integrated over all energies.
The response matrix and, therefore, the measured signals, M_k, are a function of the crystal number, k,

$$ M_k = \int_0^\infty F(E_\gamma)\, \Gamma_k(E_\gamma)\, dE_\gamma. \tag{3} $$

In order to solve for F(E_γ), we can use a trial solution, S_k, that is the summation of a set number of spectral components with parameters n_1, T_1, n_2, T_2, etc.,

$$ S_k = \int_0^\infty \sum_{i=1}^{N} f_i(E_\gamma, n_i, T_i)\, \Gamma_k(E_\gamma)\, dE_\gamma, \tag{4} $$

where f_i is the desired spectral shape to be used; this can be extended as necessary. As each component is linear in its flux, the equation can be rewritten as

$$ S_k = \sum_{i=1}^{N} n_i\, r_i^k, \tag{5} $$

where r_i^k is the single photon response for each crystal, k, and each spectral component, i,

$$ r_i^k = \int_0^\infty f_i(E_\gamma, 1, T_i)\, \Gamma_k(E_\gamma)\, dE_\gamma. \tag{6} $$

This expression for the reduced response can be pre-calculated for each form of f_i, as it is independent of the measured data. We wish to find the best fit between our trial spectra and the measured data in order to determine the parameters in F, and so we can look to minimize the difference between the measured signal and the trial solution through a sum of squares expression

$$ \phi = \sum_{k=1}^{K} \left( \frac{M_k - S_k}{X_k} \right)^2, \tag{7} $$

where X_k is a normalization that we can apply as necessary, either the measured value (M_k) itself or the standard error/uncertainty (δM_k) associated with each crystal. In cases where the uncertainty is high, e.g., low count levels across crystals, using δM_k can help mitigate over-fitting in the routine. Here, K is the total number of scintillators in the spectrometer (ten in our case). From the merit function, ϕ, and the expression for S_k previously defined, we can re-express ϕ as a function of the normalized data, Q_k, and reduced response, R_{i,k},

$$ Q_k = \frac{M_k}{X_k}, \qquad R_{i,k} = \frac{r_i^k}{X_k}, \tag{8} $$

$$ \phi = \left( Q - \sum_{i=1}^{N} n_i R_i \right) \cdot \left( Q - \sum_{i=1}^{N} n_i R_i \right). \tag{9} $$

Here, we use a standard notation where the central dot denotes a dot product. We can directly solve for the minimum of this merit function by computing the derivatives ∂ϕ/∂n_i and equating them to zero. We note that Eq. (9) is a quadratic form with ϕ tending to +∞ when any of the n_i tends to ±∞, i.e., the only zero-derivative point, n(T_i) = [n_1, n_2, . . .], is the solution corresponding to the single minimum of ϕ for the given temperatures, T_i. This leaves us with a system of N linear equations with N variables,

$$ \left( R\, R^{T} \right) n = R\, Q, \tag{10} $$

where R is the N × K matrix with rows R_i and R^T denotes the transposed matrix, i.e., Σ_j (R_i · R_j) n_j = Q · R_i for each i. This equation system is easily solvable by any of the available methods; we used the Solve function in the Mathematica software. 24 We provide explicit solutions for one-, two-, and three-temperature cases in the Appendix. For some sets of temperatures, some of the n_i values can be negative, which is clearly unphysical, and, as such, we set them to zero. This does not present a problem as it can only happen away from the sought-for optimum solution.
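To make the flux solve concrete, the linear system of Eq. (10) amounts to a few lines of linear algebra. The following is a minimal sketch in numpy; the function name and array shapes are our own assumptions, not the authors' code:

```python
import numpy as np

def solve_fluxes(R, Q):
    """Optimal fluxes n_i minimizing phi = |Q - sum_i n_i R_i|^2.

    R: (N, K) array of reduced responses (one row per spectral component).
    Q: (K,) array of normalized measured data.
    Solves (R R^T) n = R Q and clips unphysical negative fluxes to zero.
    """
    A = R @ R.T          # N x N matrix of dot products R_i . R_j
    b = R @ Q            # right-hand side of dot products Q . R_i
    n = np.linalg.solve(A, b)
    return np.clip(n, 0.0, None)
```

Because ϕ is quadratic in the fluxes, this single solve replaces the scan over the n_i entirely; only the control parameters remain to be scanned.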

B. Example case
The routine, and an example of the negative values arising when calculating n_1 and n_2, is demonstrated in Fig. 3 for two Boltzmann-like distributions with the parameters given in Table I.
First, the expected values are generated from the response matrix, and random noise is added, i.e., a random factor in the interval [−2%, +2%] is applied to the data. Second, a sparse grid of temperatures is constructed of 100 logarithmically spaced steps. The two symmetric minima of this sparse array are close approximations to the correct answer, and these two fits represent the same solution with T_1 and T_2 swapped. At this point, we can either refine the test grid with smaller intervals over a narrow range about these minima or use local minimization routines such as Nelder-Mead or Powell. 25,26 The sparse grid guarantees that we are close to the global minimum of the parameter space, and either final step is able to complete the optimization. The inset of Fig. 4 compares the respective performance of these methods in converging to the optimal solution.

C. Comparison to other methods
Minimization or optimization routines, such as Nelder-Mead or Powell, 25,26 are an active area of study. These techniques, in general, seek to find efficient ways to explore multi-dimensional parameter spaces and find the best comparison based on some criterion (or merit function) starting from a defined or random position within said parameter space. We now compare the results from Nelder-Mead, started from a variety of positions ("unguided," "best-guess," and the output from the first stage of this routine), to the fine grid calculation. The starting parameters for these routines are given in Tables II and III; "unguided" uses a random start position over the expected space, with the larger of the two fluxes corresponding to the lower of the two temperatures, to match the typical values we expect. The "best-guess" routine uses the measured crystal values of the first and fifth crystals as the initial fluxes and 0.1 and 1 MeV as the temperatures T_1 and T_2, respectively. These initial values are chosen to approximate the typical emission seen by other similar diagnostics 2,4,9 and could be altered to suit different laser parameters.
In both cases, we find that the solutions are typically poorer than either of the sparse grid methods. Figure 5 demonstrates the reliability of reconstruction for each method. To generate this graph, we computed 10 000 input spectra with n_1, n_2 between 10 and 10^6 and T_1, T_2 between 0.1 and 10 MeV. To measure the success of the reconstruction, we take the mean error of the reconstructed parameters divided by the input and record this value for each simulation. Namely, the reconstruction error is calculated as

$$ \epsilon = \frac{1}{4} \left( \frac{|n_1^* - n_1|}{n_1} + \frac{|T_1^* - T_1|}{T_1} + \frac{|n_2^* - n_2|}{n_2} + \frac{|T_2^* - T_2|}{T_2} \right), \tag{11} $$

where the stars denote the reconstructed values.
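The mean relative error described above [Eq. (11)] is straightforward to compute; a minimal sketch, in which the (n_1, T_1, n_2, T_2) parameter ordering is our assumption:

```python
def reconstruction_error(true_params, recon_params):
    """Mean relative error of the reconstructed parameters.

    Both arguments are sequences ordered as (n1, T1, n2, T2);
    stars in the text correspond to recon_params here.
    """
    return sum(abs(r - t) / t
               for t, r in zip(true_params, recon_params)) / len(true_params)
```

A 10% error on a single one of the four parameters therefore contributes 2.5% to the overall figure of merit.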

In Fig. 5, this is plotted as a function of the effective energy in each spectral component, calculated as n_1 T_1 and n_2 T_2. By doing this, we see the region over which the different methods are able to accurately reconstruct the input parameters. For each method using the sparse routine, we are able to reliably reconstruct over a significantly larger proportion of the parameter space than with either the "unguided" or "best guess" routines.

IV. DISCUSSION
While Fig. 5 demonstrates poor reconstruction for both unguided routines, each technique can typically find a good merit value. This demonstrates a concern relating to these diagnostics in general. Each routine typically produces a precise match between the data and the trial data (low merit value); however, the accuracy of the reconstructed parameters [Eq. (11)] is poorer for both the "unguided" and "best guess" routines. This highlights that there can be an element of non-uniqueness to the solutions. Figure 6(a) shows that this effect is exacerbated in the purely optimizer-based routines, the top two curves. Figures 5 and 6(a) show that incorrect reconstructions are less frequent for the analytical approaches presented in this manuscript and only occur when one of the two temperature components has a negligible energy (nT) compared with the other. To reiterate this point, we see in Fig. 6(a) that a merit value of 10^−10 is readily achieved by all techniques, yet only the sparse-fine routine demonstrates good accuracy in the reconstructed values. While not explored in this article, there is scope for improved spectrometer designs that minimize the non-uniqueness of solutions.
Diagnostics like these are well suited to existing ∼0.1 Hz facilities 27-30 as well as upcoming high repetition rate laser facilities 31 due to the high repetition rate capability of the scintillators and cameras. As such, it is important to have analysis routines that can find a solution faster than the laser repetition rate; from the results in Fig. 6(b), we see that this routine, on a standard desktop machine, would suit systems operating up to a few Hz, with scope for improvement through either more efficient code development or faster machines.
In experimental situations, this analysis routine will need to address noise in the data; from each crystal, we can obtain a measure of the uncertainty through the standard deviation across the crystal and set that as X_k in Eq. (9). This should factor into the layout and geometry of the spectrometer in the experiment, as one should seek to maximize the signal-to-noise ratio (SNR) by positioning the spectrometer as close to the target as possible. Recent measurements with the linear absorption spectrometer design reported herein have demonstrated typical noise values between 0.3% and 2.5%. 32 Figure 7 shows the result of adding 0% to 10% Gaussian noise prior to reconstruction; we see that the area of accurate reconstruction decreases significantly with increasing noise levels. The merit function with X_k set to the error of each crystal value can be used to find the deviation of the experimental spectra from the expected model. For example, if the model used is a several-temperature Boltzmann distribution, large merit function values after reconstruction would be a strong indication of the presence of a non-Boltzmann component. Naturally, this method is more sensitive when the experimental error bars are small, a simple consequence of the fact that a non-Boltzmann component should not be buried within the experimental noise.
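The noise handling above can be sketched as follows; the 2% noise width, fixed seed, and helper names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_noise(M, sigma_frac=0.02):
    """Apply multiplicative Gaussian noise of fractional width sigma_frac
    to the expected crystal signals M."""
    return M * (1.0 + sigma_frac * rng.standard_normal(M.shape))

def normalize(M, r, dM):
    """Normalize data and reduced responses by the per-crystal uncertainty,
    i.e. set X_k = dM_k so that Q_k = M_k/dM_k and R_ik = r_i^k/dM_k."""
    return M / dM, r / dM
```

With this weighting, crystals with large uncertainties contribute less to the merit function, mitigating over-fitting at low count levels.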
This technique can be generalized beyond the summation of thermal distributions since Eq. (3) is independent of spectral shape. Defining several response types for r_i (for example, a thermal distribution and a peaked distribution) would allow the routine to probe for hidden features without necessitating a significant change to the algorithm. Each spectral shape would need to be defined by a single flux and control parameter, such as the critical energy for synchrotron emission.

V. SUMMARY
In summary, we present a novel solution to determining complex spectral distributions from attenuation based diagnostics. This works by first reducing the number of explicit variables by half (e.g., 4D to 2D for a two-temperature distribution). The routine, in short, follows these steps: (i) Simulate the spectrometer response to individual photon energies. (ii) Calculate the spectrometer response to single-photon distributions of the expected spectral component shape. (iii) Scan a low-resolution (sparse) parameter space, solving for the optimum flux for each set of control parameters. (iv) About the optimum, defined by a merit function [Eq. (9)], rescan with a high-resolution grid or an optimizer routine.
We present a comparison of this routine with unguided techniques, demonstrating a significant increase in accuracy over a wider region of input conditions for the technique presented here. We outline the tolerance to experimental noise and uncertainty and demonstrate that the computation time for this technique, while longer than that of a blind reconstruction, is suitable for online operation on a standard desktop at upcoming Hz-level repetition rate high power laser facilities. With accurate measurement and deconvolution of these higher order terms within emitted x-ray spectra, we can better understand the internal plasma processes occurring. This technique can be applied to numerous spectral functions/shapes, by altering the response r_i in Eqs. (3) and (9) to that of a synchrotron-like emission, an inverse-Compton spectrum, or a combination of thermal and non-thermal components, which greatly increases what we can reliably reconstruct with absorption based x-ray spectrometers.

APPENDIX

The equations below are the solutions to the system of linear equations [Eq. (10)] for the cases N = 1, 2, 3. Within these expressions, the terms R_ij refer to the dot product of R_i and R_j, the reduced responses in Eq. (6) of the ith and jth components, respectively, and QR_i denotes the dot product of Q and R_i; when X_k in the merit function is M_k, this reduces to simply the sum of R_i. Throughout, Q and R_i are vectors of length K.
The derivations for each of these equations are shown in the supplementary material for both cases where X k in the merit function is M k and Q reduces to 1 and, alternatively, where X k is set to the uncertainty in the data and Q does not reduce so simply. This method can also be extended to increasing numbers of components by solving Eq. (10) for more terms.
For a single temperature distribution,

$$ n_1 = \frac{QR_1}{R_{11}}. \tag{A1} $$