Mertsch, P., Parimbelli, G., de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2020). Neutrino clustering in the Milky Way and beyond. J. Cosmol. Astropart. Phys., 01(1), 015–23pp.
Abstract: The standard cosmological model predicts the existence of a Cosmic Neutrino Background, which has not yet been observed directly. Some experiments aiming at its detection are currently under development, despite the tiny kinetic energy of the cosmological relic neutrinos, which makes this task incredibly challenging. Since massive neutrinos are attracted by the gravitational potential of our Galaxy, they can cluster locally: neutrinos should be more abundant at Earth's position than at an average point in the Universe. This fact may enhance the expected event rate in any future experiment. Past calculations of the local neutrino clustering factor only considered a spherical distribution of matter in the Milky Way and neglected the influence of other nearby objects like the Virgo cluster, although recent N-body simulations suggest that the latter may actually be important. In this paper, we adopt a back-tracking technique, well established in the calculation of cosmic-ray fluxes, to perform the first three-dimensional calculation of the number density of relic neutrinos in the Solar System, taking into account not only the matter composition of the Milky Way but also the contributions of the Andromeda galaxy and the Virgo cluster. The effect of Virgo is indeed found to be relevant and to depend non-trivially on the value of the neutrino mass. Our results show that the local neutrino density is enhanced by 0.53% for a neutrino mass of 10 meV, 12% for 50 meV, 50% for 100 meV and 500% for 300 meV.
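The back-tracking technique mentioned above can be illustrated with a toy single-particle sketch (my own illustration, not the paper's full three-dimensional computation): a relic neutrino observed today near the Sun's position is traced backwards in time through a static, spherical NFW potential for the Milky Way. The NFW parameters and initial conditions below are assumed values for illustration only.

```python
import numpy as np

G = 4.30091e-6            # gravitational constant [kpc (km/s)^2 / Msun]
RHO_S, R_S = 1.3e7, 16.0  # assumed NFW scale density [Msun/kpc^3] and radius [kpc]

def enclosed_mass(r):
    """NFW mass enclosed within radius r [kpc] -> [Msun]."""
    x = r / R_S
    return 4.0 * np.pi * RHO_S * R_S**3 * (np.log1p(x) - x / (1.0 + x))

def acceleration(pos):
    """Spherically symmetric gravitational acceleration [(km/s)^2 / kpc]."""
    r = np.linalg.norm(pos)
    return -G * enclosed_mass(r) * pos / r**3

def potential(r):
    """NFW gravitational potential [(km/s)^2]."""
    return -4.0 * np.pi * G * RHO_S * R_S**3 * np.log1p(r / R_S) / r

def backtrack(pos, vel, t_total=10.0, n_steps=50000):
    """Leapfrog-integrate a trajectory backwards in time (negative step).
    The time unit is kpc/(km/s), roughly 0.978 Gyr."""
    dt = -t_total / n_steps
    pos, vel = pos.astype(float).copy(), vel.astype(float).copy()
    vel += 0.5 * dt * acceleration(pos)   # initial half kick
    for _ in range(n_steps):
        pos += dt * vel                   # drift
        vel += dt * acceleration(pos)     # kick
    vel -= 0.5 * dt * acceleration(pos)   # synchronise velocity with position
    return pos, vel

# A neutrino observed today near the Sun's galactocentric radius:
pos0 = np.array([8.0, 0.0, 0.0])          # kpc
vel0 = np.array([50.0, 120.0, 30.0])      # km/s, illustrative
pos_i, vel_i = backtrack(pos0, vel0)
```

In the actual method, many such trajectories are integrated backwards through the time-dependent potential of the Galaxy, Andromeda and Virgo, and Liouville's theorem is used to map the unclustered early-time phase-space density onto today's local density.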
Etxebeste, A., Dauvergne, D., Fontana, M., Letang, J. M., Llosa, G., Muñoz, E., et al. (2020). CCMod: a GATE module for Compton camera imaging simulation. Phys. Med. Biol., 65(5), 055004–17pp.
Abstract: Compton cameras are gamma-ray imaging systems which have been proposed for a wide variety of applications such as medical imaging, nuclear decommissioning or homeland security. In the design and optimization of such systems, Monte Carlo simulations play an essential role. In this work, we propose a generic module, included in the open-source GATE/Geant4 platform, to perform Monte Carlo simulations and analyses of Compton camera imaging. Several digitization stages have been implemented within the module to mimic the performance of the most commonly employed detectors (e.g. monolithic blocks, pixelated scintillator crystals, strip detectors...). A time coincidence sorter and sequence coincidence reconstruction are also available, with the aim of facilitating the comparison and reproduction of data taken with different prototypes. All processing steps may be performed during the simulation (on-the-fly mode) or as a post-process of the output files (offline mode). The predictions of the module have been compared with experimental data in terms of energy spectra, angular resolution, efficiency and back-projection image reconstruction. Consistent results within a 3-sigma interval were obtained for the energy spectra, except at low energies where small differences arise. The angular resolution measure for incident photons of 1275 keV was also in good agreement between the two data sets, with a value close to 13 degrees. Moreover, with the aim of demonstrating the versatility of such a tool, the performance of two different Compton camera designs was evaluated and compared.
Keywords: Monte Carlo; simulation; gamma imaging; Compton camera
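A time coincidence sorter of the kind the module provides can be sketched in a few lines (a simplified illustration; the coincidence window value and hit format are assumptions, not CCMod's actual interface): time-ordered single hits are grouped into coincidences whenever they fall within a fixed window of the first hit in the group.

```python
# Toy time-coincidence sorter: group time-ordered hits into coincidences.
def sort_coincidences(hits, window_ns=40.0):
    """hits: list of (time_ns, detector_id) tuples sorted by time.
    Returns groups of hits whose times lie within window_ns of the first
    hit in each group; singles (groups of one) are kept as well."""
    groups, current = [], []
    for t, det in hits:
        if current and t - current[0][0] > window_ns:
            groups.append(current)   # close the previous coincidence group
            current = []
        current.append((t, det))
    if current:
        groups.append(current)
    return groups

hits = [(0.0, "scatterer"), (12.0, "absorber"),   # a true coincidence
        (500.0, "scatterer"),                     # an isolated single
        (900.0, "scatterer"), (905.0, "absorber")]
events = sort_coincidences(hits)
```

A subsequent sequence-reconstruction step would then order the hits within each coincidence (e.g. scatterer before absorber) to form Compton cones.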
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which lead to signal pileup. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high-pileup particle detectors. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
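The pileup problem the paper addresses can be made concrete with a small sketch (an illustration with a generic pulse shape and a simplified constrained least-squares FIR filter, not the actual ATLAS Tile Calorimeter pulse or filter): the FIR weights reconstruct the amplitude of an isolated pulse exactly, but an out-of-time pileup pulse distorts the samples and biases the estimate.

```python
import numpy as np

def pulse(t, t0=0.0, tau=25.0):
    """Generic unipolar pulse, peak of height 1 at t = t0 (assumed shape)."""
    x = (t - t0) / tau + 1.0
    return np.where(x > 0, x * np.exp(1.0 - x), 0.0)

SAMPLES = np.arange(7) * 25.0 - 75.0   # 7 samples at 25 ns spacing, peak centred

def fir_coefficients():
    """Minimum-norm FIR weights w with w @ g = 1 for the nominal pulse g
    (a simplified 'optimal filter' ignoring the noise covariance)."""
    g = pulse(SAMPLES)
    return g / (g @ g)

W = fir_coefficients()

def reconstruct(samples):
    """FIR amplitude estimate from the digitised samples."""
    return W @ samples

true_amp = 10.0
clean = true_amp * pulse(SAMPLES)
piled = clean + 4.0 * pulse(SAMPLES, t0=-50.0)   # add an out-of-time pileup pulse
```

A neural network trained on piled-up waveforms can learn to separate overlapping pulses, which is the advantage the paper quantifies against this kind of linear filter.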
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Modelling radiation damage to pixel sensors in the ATLAS detector. J. Instrum., 14, P06012–52pp.
Abstract: Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment at the LHC. Given their close proximity to the interaction point, these detectors will be exposed to an unprecedented amount of radiation over their lifetime. The current pixel detector will receive damage from non-ionizing radiation in excess of 10^15 1 MeV n_eq/cm^2, while the pixel detector designed for the high-luminosity LHC must cope with an order of magnitude larger fluence. This paper presents a digitization model incorporating effects of radiation damage to the pixel sensors. The model is described in detail and predictions for the charge collection efficiency and Lorentz angle are compared with collision data collected between 2015 and 2017 (≤ 10^15 1 MeV n_eq/cm^2).
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) → q q̄', photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies, their energy spectra are typically computed using Monte Carlo event generators. These results, however, carry intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
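How such tabulated spectra with tune variations might be consumed downstream can be sketched as follows (all numbers below are invented placeholders, not the paper's tables): a central tune and up/down variation tunes are tabulated as dN/dlog10(x), with x = E_gamma/m_DM, and interpolated with an uncertainty envelope.

```python
import numpy as np

# Placeholder tables: dN/dlog10(x) at 9 grid points for three assumed tunes.
LOG10X = np.linspace(-4.0, 0.0, 9)
CENTRAL = np.array([0.02, 0.15, 0.80, 2.5, 5.0, 6.0, 3.5, 0.9, 0.05])
UP = CENTRAL * 1.10     # assumed "+variation" tune
DOWN = CENTRAL * 0.92   # assumed "-variation" tune

def spectrum(log10x, table):
    """Linear interpolation of dN/dlog10x; zero outside the tabulated range."""
    return np.interp(log10x, LOG10X, table, left=0.0, right=0.0)

def spectrum_with_band(log10x):
    """Central value and (lower, upper) envelope over the variation tunes."""
    c = spectrum(log10x, CENTRAL)
    lo = np.minimum(spectrum(log10x, DOWN), c)
    hi = np.maximum(spectrum(log10x, UP), c)
    return c, lo, hi

c, lo, hi = spectrum_with_band(-1.0)
```

The envelope construction (taking the min/max over all variation tunes at each point) is one conservative way to propagate the tune uncertainty into a gamma-ray flux prediction.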
Etxebeste, A., Barrio, J., Bernabeu, J., Lacasta, C., Llosa, G., Muñoz, E., et al. (2019). Study of sensitivity and resolution for full ring PET prototypes based on continuous crystals and analytical modeling of the light distribution. Phys. Med. Biol., 64(3), 035015–17pp.
Abstract: Sensitivity and spatial resolution are the main parameters to maximize in the performance of a PET scanner. For this purpose, detectors consisting of a combination of continuous crystals optically coupled to segmented photodetectors have been employed. With the use of continuous crystals, the sensitivity is increased with respect to pixelated crystals. In addition, spatial resolution is no longer limited to the crystal size. The main drawback is the difficulty in determining the interaction position. In this work, we present the characterization of the performance of a full ring based on cuboid continuous crystals coupled to SiPMs. To this end, we have employed the simulations developed in a previous work for our experimental detector head. Sensitivity could be further enhanced by using tapered crystals. This enhancement is obtained by increasing the solid angle coverage, reducing the wedge-shaped gaps between contiguous detectors. The performance of the scanners based on both crystal geometries was characterized following the NEMA NU 4-2008 standardized protocol in order to compare them. An average sensitivity gain over the entire axial field of view of 13.63% has been obtained with the tapered geometry, while similar spatial resolution has been demonstrated with both scanners. The activity at which the NECR and trues peaks occur is smaller, and the peak value greater, for tapered crystals than for cuboid crystals. Moreover, a higher degree of homogeneity was obtained in the sensitivity map due to the tighter packing of the crystals, which reduces the gaps and results in a better recovery of homogeneous regions than for the cuboid configuration. Some of the results obtained, such as spatial resolution, depend on the interaction position estimation and may vary if another method is employed.
Kuo, J. L., Lattanzi, M., Cheung, K., & Valle, J. W. F. (2018). Decaying warm dark matter and structure formation. J. Cosmol. Astropart. Phys., 12(12), 026–24pp.
Abstract: We examine the cosmology of warm dark matter (WDM), both stable and decaying, from the point of view of structure formation. We compare the matter power spectrum associated to WDM masses of 1.5 keV and 0.158 keV, with that expected for the stable cold dark matter (ΛCDM) paradigm, taken as our reference model. We scrutinize the effects associated to the warm nature of dark matter, as well as the fact that it decays. The decaying warm dark matter (DWDM) scenario is well-motivated, emerging in a broad class of particle physics theories where neutrino masses arise from the spontaneous breaking of a continuous global lepton number symmetry. The majoron arises as a Nambu-Goldstone boson, and picks up a mass from gravitational effects that explicitly violate global symmetries. The majoron necessarily decays to neutrinos, with an amplitude proportional to their tiny mass, which typically gives it cosmologically long lifetimes. Using N-body simulations we show that our DWDM picture leads to a viable alternative to the ΛCDM scenario, with predictions that can differ substantially on small scales.
Keywords: cosmological simulations; dark matter simulations
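The small-scale suppression expected for the two WDM masses quoted above can be estimated with a common thermal-relic fitting formula (Viel et al. 2005), T(k) = [1 + (αk)^(2ν)]^(−5/ν) with ν = 1.12; this is a generic illustration, not the paper's decaying-majoron calculation, and the cosmology factors (Ω_WDM = 0.25, h = 0.7) are assumed.

```python
NU = 1.12  # fit exponent from Viel et al. (2005)

def alpha_mpc(m_kev, omega_wdm=0.25, h=0.7):
    """Free-streaming break scale alpha [h^-1 Mpc] for a thermal WDM relic."""
    return 0.049 * m_kev**-1.11 * (omega_wdm / 0.25)**0.11 * (h / 0.7)**1.22

def transfer(k, m_kev):
    """WDM/CDM transfer function at wavenumber k [h/Mpc]."""
    return (1.0 + (alpha_mpc(m_kev) * k)**(2.0 * NU))**(-5.0 / NU)

def power_suppression(k, m_kev):
    """Suppression of the matter power spectrum, P_WDM / P_CDM = T(k)^2."""
    return transfer(k, m_kev)**2

# The two masses studied in the paper: 1.5 keV vs. 0.158 keV.
t_heavy = transfer(10.0, 1.5)
t_light = transfer(10.0, 0.158)
```

The lighter (0.158 keV) candidate erases small-scale power far more strongly, which is why the decay channel matters for keeping such scenarios viable.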
Guadilla, V., Tain, J. L., Algora, A., Agramunt, J., Gelletly, W., Jordan, D., et al. (2018). Characterization and performance of the DTAS detector. Nucl. Instrum. Methods Phys. Res. A, 910, 79–89.
Abstract: DTAS is a segmented total absorption γ-ray spectrometer developed for the DESPEC experiment at FAIR. It is composed of up to eighteen NaI(Tl) crystals. In this work we study the performance of this detector with laboratory sources and also under real experimental conditions. We present a procedure to reconstruct offline the sum of the energy deposited in all the crystals of the spectrometer, which is complicated by the effect of NaI(Tl) light-yield non-proportionality. The use of a system to correct for time variations of the gain in individual detector modules, based on a light pulse generator, is demonstrated. We also describe an event-based method to evaluate the summing-pileup electronic distortion in segmented spectrometers. All of this allows a careful characterization of the detector with Monte Carlo simulations, which is needed to calculate the response function for the analysis of total absorption γ-ray spectroscopy data. Special attention was paid to the interaction of neutrons with the spectrometer, since they are a source of contamination in studies of beta-delayed neutron emitting nuclei.
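The offline sum-energy idea can be sketched minimally (all peak positions and event data below are invented placeholders; the real procedure also corrects for NaI(Tl) light-yield non-proportionality, which is not modelled here): each module's gain is corrected using the current position of a reference light-pulser peak, and the deposited energies are then summed event by event.

```python
import numpy as np

REFERENCE_PEAK = 1000.0  # light-pulser peak position at calibration time (a.u.)

def gain_factors(pulser_peaks):
    """Per-module multiplicative corrections from the current pulser peaks."""
    return REFERENCE_PEAK / np.asarray(pulser_peaks, dtype=float)

def sum_energy(event_adc, gains, kev_per_adc):
    """Gain-correct each fired module and sum -> total deposited energy [keV].
    Modules that did not fire carry 0 in event_adc."""
    adc = np.asarray(event_adc, dtype=float)
    return float(np.sum(adc * gains * kev_per_adc))

# Example: 3 of 18 modules fire; modules 3 and 7 have drifted by +2% and -3%.
peaks = np.full(18, REFERENCE_PEAK)
peaks[3] *= 1.02
peaks[7] *= 0.97
g = gain_factors(peaks)
event = np.zeros(18)
event[[3, 7, 11]] = [200.0, 350.0, 150.0]   # raw ADC values for fired modules
total = sum_energy(event, g, kev_per_adc=1.0)
```

Monitoring the pulser peak run by run is what lets the gain drift be corrected as a slowly varying multiplicative factor per module.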
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., et al. (2018). Comparison between simulated and observed LHC beam backgrounds in the ATLAS experiment at E_beam = 4 TeV. J. Instrum., 13, P12006–41pp.
Abstract: Results of dedicated Monte Carlo simulations of beam-induced background (BIB) in the ATLAS experiment at the Large Hadron Collider (LHC) are presented and compared with data recorded in 2012. During normal physics operation this background arises mainly from scattering of the 4 TeV protons on residual gas in the beam pipe. Methods of reconstructing the BIB signals in the ATLAS detector, developed and implemented in the simulation chain based on the FLUKA Monte Carlo simulation package, are described. The interaction rates are determined from the residual gas pressure distribution in the LHC ring in order to set an absolute scale on the predicted rates of BIB so that they can be compared quantitatively with data. Through these comparisons the origins of the BIB leading to different observables in the ATLAS detectors are analysed. The level of agreement between simulation results and BIB measurements by ATLAS in 2012 demonstrates that a good understanding of the origin of BIB has been reached.
Muñoz, E., Barrio, J., Bernabeu, J., Etxebeste, A., Lacasta, C., Llosa, G., et al. (2018). Study and comparison of different sensitivity models for a two-plane Compton camera. Phys. Med. Biol., 63(13), 135004–19pp.
Abstract: Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with Na-22 sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows the intensity of point-like sources at different positions in the FoV to be recovered effectively, and regions of homogeneous activity to be reconstructed with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
Keywords: Compton camera imaging; MLEM; Monte Carlo simulations; image quality
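How a precomputed sensitivity vector enters the MLEM update used in such reconstructions can be sketched generically (the system matrix, data and sensitivity below are random placeholders; in the paper the sensitivity comes from the proposed analytical model rather than a column sum): lambda_j ← (lambda_j / s_j) · Σ_i A_ij · y_i / (A·lambda)_i.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 20))      # placeholder system matrix: 50 events x 20 voxels
true_img = rng.random(20)
y = A @ true_img              # noiseless synthetic data, for illustration only
s = A.sum(axis=0)             # placeholder sensitivity of each image voxel

def mlem(A, y, s, n_iter=50):
    """Sensitivity-corrected MLEM iteration."""
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ lam                    # forward projection
        lam *= (A.T @ (y / proj)) / s     # back-project ratio, divide by sensitivity
    return lam

img = mlem(A, y, s)
```

With s_j = Σ_i A_ij, each update preserves the sensitivity-weighted total (Σ_j s_j·lambda_j = Σ_i y_i), which is why an accurate sensitivity model directly controls the recovered source intensities.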