|
Ortega, P. G., Torres-Espallardo, I., Cerutti, F., Ferrari, A., Gillam, J. E., Lacasta, C., et al. (2015). Noise evaluation of Compton camera imaging for proton therapy. Phys. Med. Biol., 60(5), 1845–1863.
Abstract: Compton cameras have emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed; the second option, however, is less efficient. This is the reason to resort to spectral reconstructions, in which the incoming energy is treated as a variable in the reconstruction inverse problem. Along with the prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. In addition, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e. events that combine measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The particle range is then estimated from the reconstructed image obtained with a two- and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and the random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations.
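As a pointer to the kinematics underlying the Compton cone the abstract refers to: for a two-interaction event in which the scattered photon is fully absorbed, the cone half-angle follows from the Compton relation cos θ = 1 − m_e c² (1/E₂ − 1/(E₁+E₂)). Below is a minimal sketch assuming exactly that event topology; the function name, units and layout are illustrative and not the paper's code.

```python
import numpy as np

ME_C2 = 0.511  # electron rest energy, MeV

def compton_cone(pos1, pos2, e1, e2):
    """Cone (apex, axis, half-angle) for a two-interaction event: a Compton
    scatter depositing e1 at pos1, followed by full absorption of the scattered
    photon depositing e2 at pos2. Energies in MeV, positions as 3-vectors."""
    e_in = e1 + e2                            # incident energy (full absorption assumed)
    cos_theta = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / e_in)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically inconsistent event")
    apex = np.asarray(pos1, dtype=float)
    axis = apex - np.asarray(pos2, dtype=float)
    axis /= np.linalg.norm(axis)              # axis points from absorber back through the scatterer
    return apex, axis, np.arccos(cos_theta)

# e.g. 0.3 MeV deposited in the scatterer, 3.0 MeV in the absorber:
apex, axis, half_angle = compton_cone([0, 0, 0], [0, 0, 5], 0.3, 3.0)
```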
|
|
|
Olleros, P., Caballero, L., Domingo-Pardo, C., Babiano, V., Ladarescu, I., Calvo, D., et al. (2018). On the performance of large monolithic LaCl3(Ce) crystals coupled to pixelated silicon photosensors. J. Instrum., 13, P03014–17pp.
Abstract: We investigate the performance of large-area radiation detectors with high energy and spatial resolution, intended for the development of a Total Energy Detector with gamma-ray imaging capability, the so-called i-TED. This new development aims at enhancing the detection sensitivity in time-of-flight neutron capture measurements with respect to the commonly used C6D6 liquid-scintillation total-energy detectors. In this work, we study in detail the impact of the readout photosensor on the energy response of large-area (50 × 50 mm²) monolithic LaCl3(Ce) crystals, in particular when replacing a conventional mono-cathode photomultiplier tube by an 8 × 8 pixelated silicon photomultiplier. Using the largest commercially available monolithic SiPM array (25 cm²), with a pixel size of 6 × 6 mm², we have measured an average energy resolution of 3.92% FWHM at 662 keV for crystal thicknesses of 10, 20 and 30 mm. The results are confronted with detailed Monte Carlo (MC) calculations, in which optical processes and properties are included for reliable tracking of the scintillation photons. After the experimental validation of the MC model, we use our MC code to explore the impact of a smaller photosensor segmentation on the energy resolution. Our optical MC simulations predict only a marginal deterioration of the spectroscopic performance for pixels of 3 × 3 mm².
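For context on how a relative FWHM figure such as the one quoted above is commonly extracted, here is a minimal sketch fitting a Gaussian to the 662 keV photopeak and quoting FWHM/E in percent. This is not the authors' analysis code; the fixed fit window and the neglect of any background under the peak are our simplifications.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def resolution_fwhm(bin_centers, counts, window=(600.0, 720.0)):
    """Fit a Gaussian to the 662 keV photopeak and return the relative
    FWHM resolution in percent; 'window' selects the fit region in keV."""
    sel = (bin_centers >= window[0]) & (bin_centers <= window[1])
    x, y = bin_centers[sel], counts[sel]
    p0 = (y.max(), x[np.argmax(y)], 10.0)      # amplitude, centroid, sigma guesses
    (amp, mu, sigma), _ = curve_fit(gaussian, x, y, p0=p0)
    fwhm = 2.3548 * abs(sigma)                 # FWHM = 2*sqrt(2 ln 2) * sigma
    return 100.0 * fwhm / mu
```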
|
|
|
Norena, J., Verde, L., Jimenez, R., Pena-Garay, C., & Gomez, C. (2012). Cancelling out systematic uncertainties. Mon. Not. Roy. Astron. Soc., 419(2), 1040–1050.
Abstract: We present a method to minimize, or even cancel out, the nuisance parameters affecting a measurement. Our approach is general and can be applied to any experiment or observation where systematic errors are a concern, e.g. where they are larger than the statistical errors. We compare it with marginalization, the Bayesian technique used to deal with nuisance parameters, and show how our method improves on it by avoiding biases. We illustrate the method with several examples taken from astrophysics and cosmology: baryonic acoustic oscillations (BAOs), cosmic clocks, Type Ia supernova (SNIa) luminosity distances, neutrino oscillations and dark matter detection. By applying the method we not only recover some known results but also find some interesting new ones. For BAO experiments we show how to combine radial and angular BAO measurements in order to completely eliminate the dependence on the sound horizon at radiation drag. In the case of exploiting SNIa as standard candles, we show how the uncertainty in the luminosity distance introduced by a second parameter, modelled as a metallicity dependence, can be eliminated or greatly reduced. When using cosmic clocks to measure the expansion rate of the universe, we demonstrate how a particular combination of observables nearly removes the dependence on galaxy metallicity in the determination of differential ages, thus removing the age-metallicity degeneracy in stellar populations. We hope that these findings will be useful in future surveys to obtain robust constraints on the dark energy equation of state.
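As a worked illustration of the BAO statement above: the radial BAO scale measures Δz = r_d H(z)/c while the angular scale measures Δθ = r_d/D_M(z), so the ratio Δz/Δθ = H(z) D_M(z)/c no longer depends on the sound horizon at radiation drag r_d. A minimal sketch with hypothetical numbers follows; the function name and the input values are ours, not the paper's.

```python
C_KMS = 299792.458  # speed of light, km/s

def rd_free_observable(delta_z_bao, delta_theta_bao):
    """Ratio of the radial BAO scale (r_d * H(z) / c) to the angular BAO scale
    (r_d / D_M(z)); the sound horizon r_d cancels, leaving H(z) * D_M(z) / c."""
    return delta_z_bao / delta_theta_bao

# The combination is the same whatever value of r_d is assumed:
Hz, DM = 100.0, 3000.0                      # km/s/Mpc and Mpc, hypothetical
for rd in (140.0, 150.0):                   # Mpc
    dz, dtheta = rd * Hz / C_KMS, rd / DM
    print(rd_free_observable(dz, dtheta), Hz * DM / C_KMS)   # identical values
```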
|
|
|
NEXT Collaboration (Simon, A. et al.), Gomez-Cadenas, J. J., Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., et al. (2017). Application and performance of an ML-EM algorithm in NEXT. J. Instrum., 12, P08009–22pp.
Abstract: The goal of the NEXT experiment is the observation of neutrinoless double beta decay in Xe-136 using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of Xe-136) for events distributed over the full active volume of the TPC.
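For reference, the generic ML-EM iteration the abstract refers to has the familiar multiplicative form x_j ← x_j · [Σ_i A_ij y_i/(Ax)_i] / Σ_i A_ij. Below is a minimal dense-matrix sketch of that update, illustrative only; the NEXT reconstruction uses its own system response and binning, and all names here are ours.

```python
import numpy as np

def ml_em(system_matrix, measured, n_iter=50):
    """Iterative ML-EM estimate for an inverse problem y ~ Poisson(A @ x).
    system_matrix: (n_measurements, n_pixels) response A; measured: counts y."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(measured, dtype=float)
    sensitivity = A.sum(axis=0)                      # per-pixel sensitivity, sum_i A_ij
    x = np.ones(A.shape[1])                          # uniform positive starting image
    for _ in range(n_iter):
        projected = A @ x                            # forward projection (Ax)_i
        ratio = np.divide(y, projected, out=np.zeros_like(y), where=projected > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)   # multiplicative update
    return x
```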
|
|
|
NEXT Collaboration (Renner, J. et al.), Martinez-Lema, G., Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., et al. (2018). Initial results on energy resolution of the NEXT-White detector. J. Instrum., 13, P10020–14pp.
Abstract: One of the major goals of the NEXT-White (NEW) detector is to demonstrate the energy resolution that an electroluminescent high pressure xenon TPC can achieve for high energy tracks. For this purpose, energy calibrations with Cs-137 and Th-232 sources have been carried out as a part of the long run taken with the detector during most of 2017. This paper describes the initial results obtained with those calibrations, showing excellent linearity and an energy resolution that extrapolates to approximately 1% FWHM at Qββ.
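As a note on the extrapolation mentioned above: if the peak width is dominated by Poisson-like fluctuations, the relative FWHM scales as 1/√E, so a resolution measured at a calibration line can be scaled to the Xe-136 Q value, Qββ ≈ 2458 keV. A minimal sketch with hypothetical input numbers (not the paper's measured values):

```python
import math

QBB_XE136_KEV = 2458.0   # Xe-136 double-beta Q value, keV

def extrapolate_fwhm(fwhm_percent, e_calib_kev, e_target_kev=QBB_XE136_KEV):
    """Scale a relative FWHM resolution measured at e_calib_kev to e_target_kev,
    assuming FWHM/E varies as 1/sqrt(E) (Poisson-dominated peak width)."""
    return fwhm_percent * math.sqrt(e_calib_kev / e_target_kev)

# Hypothetical example: 1.8% FWHM at the 662 keV Cs-137 line
print(extrapolate_fwhm(1.8, 662.0))   # ~0.93% FWHM at Q_beta_beta
```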
|
|