Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provides information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which produce signal pileup. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the utilization of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high-pileup particle detectors. The performance of the new method has been studied using simulated data and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in FPGA are also presented.
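As a rough illustration of the classical baseline the paper compares against, the sketch below applies least-squares FIR weights to digitized samples and shows how an overlapping out-of-time pulse biases the amplitude estimate. The pulse shape and coefficients are invented placeholders, not the ATLAS TileCal values.

```python
import numpy as np

# Hypothetical unit pulse shape sampled at the ADC rate (not the TileCal shape).
g = np.array([0.0, 0.12, 0.56, 1.0, 0.56, 0.12, 0.0])

# Least-squares FIR weights a such that a @ g = 1 for the undistorted shape.
a = g / np.dot(g, g)

def amplitude(samples):
    """Estimate the pulse amplitude as a weighted sum of the digitized samples."""
    return float(np.dot(a, samples))

clean = 500.0 * g                        # isolated pulse with true amplitude 500
pileup = clean + 200.0 * np.roll(g, 2)   # overlapping out-of-time pulse
print(amplitude(clean))                  # ~500.0: the filter recovers the ideal pulse
print(amplitude(pileup))                 # biased: pileup distorts the pulse shape
```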
Ortega, P. G., Torres-Espallardo, I., Cerutti, F., Ferrari, A., Gillam, J. E., Lacasta, C., et al. (2015). Noise evaluation of Compton camera imaging for proton therapy. Phys. Med. Biol., 60(5), 1845–1863.
Abstract: Compton cameras emerged as an alternative for real-time dose monitoring techniques for particle therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, i.e., events that gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained with two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which affects the rate of particles entering the detector.
Keywords: proton therapy; Compton camera; Monte Carlo methods; FLUKA; prompt gamma; range verification; MLEM
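The two-interaction Compton cone used in such reconstructions follows from standard Compton kinematics, cos θ = 1 − mec²(1/E′ − 1/E); a minimal sketch (function name and units are ours) is:

```python
import numpy as np

ME_C2 = 510.999  # electron rest energy (keV)

def compton_cone(p1, p2, e1, e2):
    """Cone for a two-interaction event ending in photoelectric absorption.

    p1, p2: first/second interaction positions; e1: energy deposited in the
    scatter (keV); e2: energy of the fully absorbed scattered photon (keV).
    Returns the cone apex, axis and half-angle, or None if kinematically invalid.
    """
    axis = np.asarray(p1, float) - np.asarray(p2, float)
    axis /= np.linalg.norm(axis)           # axis points from p2 back through p1
    cos_theta = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / (e1 + e2))
    if abs(cos_theta) > 1.0:
        return None                        # inconsistent energies -> reject event
    return np.asarray(p1, float), axis, np.arccos(cos_theta)

# A 4 MeV prompt gamma depositing 1 MeV in the scatterer, absorbed with 3 MeV:
print(compton_cone([0, 0, 0], [0, 0, -30], 1000.0, 3000.0))
```

The emission point lies somewhere on this cone's surface; the spectral reconstructions mentioned above relax the assumption that e1 + e2 equals the full incoming energy.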
Olmo, G. J., Rubiera-Garcia, D., & Wojnar, A. (2020). Stellar structure models in modified theories of gravity: Lessons and challenges. Phys. Rep., 876, 1–75.
Abstract: The understanding of stellar structure represents the crossroads of our theories of the nuclear force and the gravitational interaction under the most extreme conditions observably accessible. It provides a powerful probe of the strong-field regime of General Relativity, and opens fruitful avenues for the exploration of new gravitational physics. The latter can be captured via modified theories of gravity, which modify the Einstein-Hilbert action of General Relativity and/or some of its principles. These theories typically change the Tolman-Oppenheimer-Volkoff equations of stellar hydrostatic equilibrium, thus having a large impact on the astrophysical properties of the corresponding stars and opening a new window to constrain these theories with present and future observations of different types of stars. For relativistic stars, such as neutron stars, the uncertainty on the equation of state of matter at supranuclear densities intertwines with the new parameters coming from the modified gravity side, providing a whole new phenomenology for the typical predictions of stellar structure models, such as mass-radius relations, maximum masses, or moments of inertia. For non-relativistic stars, such as white, brown and red dwarfs, the weakening/strengthening of the gravitational force inside astrophysical bodies via the modified Newtonian (Poisson) equation may induce changes in the star's mass, radius, central density or luminosity, having an impact, for instance, on the Chandrasekhar limit for white dwarfs, or on the minimum mass for stable hydrogen burning in high-mass brown dwarfs. This work aims to provide a broad overview of the main results achieved in the recent literature for many such modified theories of gravity, by combining the results and constraints obtained from the analysis of relativistic and non-relativistic stars in different scenarios. Moreover, we build a bridge between the efforts of the community working on different theories, formulations, types of stars, theoretical modeling, and observational aspects, highlighting some of the most promising opportunities in the field.
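As a point of reference for the Tolman-Oppenheimer-Volkoff (TOV) equations this review builds on, here is a minimal integration of the unmodified GR equations for a toy polytropic equation of state (K, Γ and the central density are textbook test values, not any of the realistic EOSs discussed):

```python
import numpy as np

# Geometrized units (G = c = 1); toy polytrope p = K * rho**Gamma.
K, Gamma = 100.0, 2.0

def tov_rhs(r, m, p):
    """Standard GR TOV equations; modified gravity replaces these right-hand sides."""
    rho = (max(p, 0.0) / K) ** (1.0 / Gamma)
    dm = 4.0 * np.pi * r**2 * rho
    dp = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return dm, dp

def integrate(rho_c, dr=1e-3):
    """Integrate outward from the center until the pressure vanishes."""
    r, m, p = dr, 0.0, K * rho_c**Gamma
    while p > 1e-12:
        dm, dp = tov_rhs(r, m, p)
        m, p, r = m + dm * dr, p + dp * dr, r + dr   # simple Euler step
    return r, m   # stellar radius and gravitational mass (one mass-radius point)

print(integrate(1.28e-3))
```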
Olmo, G. J., Orazi, E., & Pradisi, G. (2022). Conformal metric-affine gravities. J. Cosmol. Astropart. Phys., 10(10), 057–21pp.
Abstract: We revisit the gauge symmetry related to integrable projective transformations in the metric-affine formalism, identifying the gauge field of the Weyl (conformal) symmetry as a dynamical component of the affine connection. In particular, we show how to include the local scaling symmetry as a gauge symmetry of a large class of geometric gravity theories, introducing a compensator dilaton field that naturally gives rise to a Stückelberg sector where a spontaneous breaking mechanism of the conformal symmetry is at work to generate a mass scale for the gauge field. For Ricci-based gravities that include, among others, General Relativity, f(R) and f(R, R_{μν}R^{μν}) theories and the EiBI model, we prove that the on-shell gauge vector associated with the scaling symmetry can be identified with the torsion vector, thus recovering and generalizing conformal invariant theories in the Riemann-Cartan formalism already present in the literature.
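For orientation, the two transformations at play here, written in standard conventions (ours, not necessarily the paper's notation), are the projective shift of the connection, integrable when ξμ is a gradient, and the Weyl rescaling of the metric:

```latex
\Gamma^{\lambda}{}_{\mu\nu} \;\longrightarrow\; \Gamma^{\lambda}{}_{\mu\nu}
  + \xi_{\mu}\,\delta^{\lambda}{}_{\nu},
\qquad
g_{\mu\nu} \;\longrightarrow\; e^{2\omega(x)}\, g_{\mu\nu} .
```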
Olmo, G. J., & Rubiera-Garcia, D. (2012). Nonsingular charged black holes à la Palatini. Int. J. Mod. Phys. D, 21(8), 1250067–6pp.
Abstract: We argue that the quantum nature of matter and gravity should lead to a discretization of the allowed states of the matter confined in the interior of black holes. To support and illustrate this idea, we consider a quadratic extension of general relativity (GR) formulated à la Palatini and show that nonrotating, electrically charged black holes develop a compact core at the Planck density which is nonsingular if the mass spectrum satisfies a certain discreteness condition. We also find that the area of the core is proportional to the number of charges times the Planck area.
Olmo, G. J., & Rubiera-Garcia, D. (2015). Brane-world and loop cosmology from a gravity-matter coupling perspective. Phys. Lett. B, 740, 73–79.
Abstract: We show that the effective brane-world and loop quantum cosmology background expansion histories can be reproduced from a modified gravity perspective, in terms of an f(R) gravity action plus a g(R) term non-minimally coupled to the matter Lagrangian. The reconstruction algorithm that we provide depends on a free function of the matter density that must be specified in each case, and always allows analytical solutions to be obtained. In the simplest cases, the function f(R) is quadratic in the Ricci scalar, R, whereas g(R) is linear. Our approach is compared with recent results in the literature. We show that, working in the Palatini formalism, there is no need to impose any constraint that keeps the equations second order, which is a key requirement for the successful implementation of the reconstruction algorithm.
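The two target expansion histories are, in their standard forms (with brane tension σ and critical density ρc; our notation), the brane-world and effective loop-quantum-cosmology Friedmann equations:

```latex
H^2 = \frac{8\pi G}{3}\,\rho\left(1 + \frac{\rho}{2\sigma}\right),
\qquad
H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right).
```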
Olmo, G. J., Rubiera-Garcia, D., & Sanchez-Puente, A. (2018). Accelerated observers and the notion of singular spacetime. Class. Quantum Gravity, 35(5), 055010–18pp.
Abstract: Geodesic completeness is typically regarded as a basic criterion to determine whether a given spacetime is regular or singular. However, the principle of general covariance does not privilege any family of observers over the others and, therefore, observers with arbitrary motions should be able to provide a complete physical description of the world. This suggests that in a regular spacetime, all physically acceptable observers should have complete paths. In this work we explore this idea by studying the motion of accelerated observers in spherically symmetric spacetimes and illustrate it by considering two geodesically complete black hole spacetimes recently described in the literature. We show that for bound and locally unbound accelerations, the paths of accelerated test particles are complete, providing further support to the regularity of such spacetimes.
Olleros, P., Caballero, L., Domingo-Pardo, C., Babiano, V., Ladarescu, I., Calvo, D., et al. (2018). On the performance of large monolithic LaCl₃(Ce) crystals coupled to pixelated silicon photosensors. J. Instrum., 13, P03014–17pp.
Abstract: We investigate the performance of large-area radiation detectors, with high energy and spatial resolution, intended for the development of a Total Energy Detector with gamma-ray imaging capability, so-called i-TED. This new development aims for an enhancement in detection sensitivity in time-of-flight neutron capture measurements, versus the commonly used C₆D₆ liquid scintillation total-energy detectors. In this work, we study in detail the impact of the readout photosensor on the energy response of large-area (50 × 50 mm²) monolithic LaCl₃(Ce) crystals, in particular when replacing a conventional mono-cathode photomultiplier tube by an 8 × 8 pixelated silicon photomultiplier. Using the largest commercially available monolithic SiPM array (25 cm²), with a pixel size of 6 × 6 mm², we have measured an average energy resolution of 3.92% FWHM at 662 keV for crystal thicknesses of 10, 20 and 30 mm. The results are confronted with detailed Monte Carlo (MC) calculations, where optical processes and properties have been included for the reliable tracking of the scintillation photons. After the experimental validation of the MC model, we use our MC code to explore the impact of a smaller photosensor segmentation on the energy resolution. Our optical MC simulations predict only a marginal deterioration of the spectroscopic performance for pixels of 3 × 3 mm².
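A quoted figure like 3.92% FWHM at 662 keV comes from a Gaussian fit to the ¹³⁷Cs photopeak; a self-contained sketch with synthetic data (the smearing value is chosen for illustration, not taken from the paper) is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 662 keV photopeak standing in for a measured scintillator spectrum.
rng = np.random.default_rng(0)
events = rng.normal(662.0, 11.0, 50_000)          # hypothetical ~11 keV smearing
counts, edges = np.histogram(events, bins=200, range=(600.0, 720.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(e, a, mu, sigma):
    return a * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

(a, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=(counts.max(), 662.0, 10.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
print(f"resolution = {100.0 * fwhm / mu:.2f}% FWHM at {mu:.0f} keV")
```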
Oliver-Canamas, L., Vijande, J., Candela-Juan, C., Gimeno-Olmos, J., Pujades-Claumarchirant, M. C., Rovira-Escutia, J. J., et al. (2023). A User-Friendly System for Mailed Dosimetric Audits of Ir-192 or Co-60 HDR Brachytherapy Sources. Cancers, 15(9), 2484–14pp.
Abstract: Nowadays, the options available to perform external dosimetric audits of the high dose rate (HDR) brachytherapy treatment process are limited. In this work, we present a methodology that allows for performing dosimetric audits in this field. A phantom was designed and manufactured for this purpose. The criteria for its design, together with the in-house measurements for its characterization, are presented. The result is a user-friendly system that can be mailed to perform dosimetric audits in HDR brachytherapy on-site for systems using either Iridium-192 (Ir-192) or Cobalt-60 (Co-60) sources. Objectives: The main goal of this work is to design and characterize a user-friendly methodology to perform mailed dosimetric audits in high dose rate (HDR) brachytherapy for systems using either Iridium-192 (Ir-192) or Cobalt-60 (Co-60) sources. Methods: A solid phantom was designed and manufactured with four catheters and a central slot to place one dosimeter. Irradiations with an Elekta MicroSelectron V2 for Ir-192 and with a BEBIG Multisource for Co-60 were performed for its characterization. For the dose measurements, nanoDots, a type of optically stimulated luminescent dosimeter (OSLD), were characterized. Monte Carlo (MC) simulations were performed to evaluate the scatter conditions of the irradiation set-up and to study differences in the photon spectra of different Ir-192 sources (MicroSelectron V2, Flexisource, BEBIG Ir2.A85-2 and Varisource VS2000) reaching the dosimeter in the irradiation set-up. Results: MC simulations indicate that the surface material on which the phantom is supported during the irradiations does not affect the absorbed dose in the nanoDot. Generally, differences below 5% were found in the photon spectra reaching the detector when comparing the MicroSelectron V2, the Flexisource and the BEBIG models. However, differences up to 20% are observed between the V2 and the Varisource VS2000 models. The calibration coefficients and the uncertainty in the dose measurement were evaluated. Conclusions: The system described here is able to perform dosimetric audits in HDR brachytherapy for systems using either Ir-192 or Co-60 sources. No significant differences are observed between the photon spectra reaching the detector for the MicroSelectron V2, the Flexisource and the BEBIG Ir-192 sources. For the Varisource VS2000, a higher uncertainty is considered in the dose measurement to allow for the nanoDot response.
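The audit arithmetic itself reduces to a reading times calibration and correction coefficients, with uncertainties combined in quadrature; the sketch below is a hypothetical post-processing step with invented numbers, not the paper's calibration data:

```python
import math

def audit_dose(reading, n_cal, k_src, u_rel=(0.013, 0.020, 0.015)):
    """Hypothetical mailed-audit dose estimate from a nanoDot readout.

    reading: net OSLD signal (counts); n_cal: dose per count against the
    reference source (Gy/count); k_src: correction for the audited source
    model; u_rel: relative standard uncertainties, combined in quadrature.
    """
    dose = reading * n_cal * k_src
    u = dose * math.sqrt(sum(r * r for r in u_rel))
    return dose, u

d, u = audit_dose(reading=1.2e6, n_cal=4.1e-6, k_src=0.985)
print(f"{d:.2f} Gy +/- {u:.2f} Gy (k = 1)")
```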
Oliver, S., Gimenez-Alventosa, V., Berumen, F., Gimenez, V., Beaulieu, L., Ballester, F., et al. (2023). Benchmark of the PenRed Monte Carlo framework for HDR brachytherapy. Z. Med. Phys., 33(4), 511–528.
Abstract: Purpose: The purpose of this study is to validate the PenRed Monte Carlo framework for clinical applications in brachytherapy. PenRed is a C++ version of the Penelope Monte Carlo code with additional tallies and utilities. Methods and materials: Six benchmarking scenarios are explored to validate the use of PenRed and its improved brachytherapy-oriented capabilities for HDR brachytherapy. A new tally allowing the evaluation of collisional kerma for any material using the track-length kerma estimator, and the possibility of obtaining the seed positions, weights and directions by processing the DICOM file directly, are now implemented in the PenRed distribution. The four non-clinical test cases developed by the Joint AAPM-ESTRO-ABG-ABS WG-DCAB were evaluated by comparing local and global absorbed dose differences with respect to established reference datasets. A prostate case and a palliative lung case were also studied. For them, absorbed dose ratios, global absorbed dose differences, and cumulative dose-volume histograms were obtained and discussed. Results: The air-kerma strength and the dose rate constant corresponding to the two sources agree with the reference datasets within 0.3% (S_K) and 0.1% (Λ). With respect to the first three WG-DCAB test cases, more than 99.8% of the voxels present local (global) differences within ±1% (±0.1%) of the reference datasets. For the test Case 4 reference dataset, more than 94.9% (97.5%) of voxels show an agreement within ±1% (±0.1%), better than similar benchmarking calculations in the literature. The implemented track-length kerma estimator scorer increases the numerical efficiency of brachytherapy calculations by two orders of magnitude, while the specific brachytherapy source allows the user to avoid error-prone intermediate steps to translate the DICOM information into the simulation. In both clinical cases, only minor absorbed dose differences arise in the low-dose isodoses. 99.8% and 100% of the voxels have a global absorbed dose difference ratio within ±0.2% for the prostate and lung cases, respectively. The role played by the different segmentation and material composition in the bone structures was discussed, obtaining negligible absorbed dose differences. Dose-volume histograms were in agreement with the reference data. Conclusions: PenRed incorporates new tallies and utilities and has been validated for detailed and precise high-dose-rate brachytherapy simulations.
Keywords: Monte Carlo; PenRed; Brachytherapy; DICOM; Medical physics
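The track-length kerma estimator mentioned above scores collisional kerma from photon path lengths rather than from sampled interactions, which is where the efficiency gain comes from; a minimal single-voxel sketch (the interface and the μen/ρ placeholder are ours, not PenRed's API) is:

```python
# Each photon step of weight w, length l (cm) and energy e (MeV) inside the
# voxel contributes w * l * e * (mu_en/rho)(e) / V, i.e. energy fluence times
# the mass energy-absorption coefficient.

def muen_over_rho(e_mev):
    # Invented placeholder; a real tally interpolates tabulated data (e.g. NIST).
    return 0.03 / max(e_mev, 1e-3)   # cm^2/g

def kerma_tally(steps, voxel_volume_cm3):
    """steps: iterable of (weight, track_length_cm, energy_MeV) inside the voxel."""
    k = sum(w * l * e * muen_over_rho(e) for w, l, e in steps)
    return k / voxel_volume_cm3      # -> MeV/g per source particle

print(kerma_tally([(1.0, 0.20, 0.35), (0.8, 0.15, 0.30)], voxel_volume_cm3=1e-3))
```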