Jueid, A., Kip, J., Ruiz de Austri, R., & Skands, P. (2023). Impact of QCD uncertainties on antiproton spectra from dark-matter annihilation. J. Cosmol. Astropart. Phys., 04(4), 068–15pp.
Abstract: Dark-matter particles that annihilate or decay can undergo complex sequences of processes, including strong and electromagnetic radiation, hadronisation, and hadron decays, before particles that are stable on astrophysical time scales are produced. Antiprotons produced in this way may leave footprints in experiments such as AMS-02. Several groups have reported an excess of events in the antiproton flux in the rigidity range of 10-20 GV. However, the theoretical modeling of baryon production is not straightforward and relies in part on phenomenological models in Monte Carlo event generators. In this work, we assess the impact of QCD uncertainties on the spectra of antiprotons from dark-matter annihilation. As a proof of principle, we show that for a two-parameter model that depends only on the thermally-averaged annihilation cross section (⟨σv⟩) and the dark-matter mass (M_χ), QCD uncertainties can affect the best-fit mass by up to ∼14% (with large uncertainties for large DM masses), depending on the choice of M_χ and the annihilation channel (bb̄ or W⁺W⁻), and ⟨σv⟩ by up to ∼10%. For comparison, changes to the underlying diffusion parameters are found to be within 1%-5%, and the results are also quite resilient to the choice of cosmic-ray propagation model. These findings indicate that QCD uncertainties need to be included in future DM analyses. To facilitate full-fledged analyses, we provide the spectra in tabulated form including QCD uncertainties, together with code snippets to perform mass interpolations and quick DM fits. The code can be found in the GitHub repository [1].
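The tabulated spectra described above lend themselves to simple mass interpolation. A minimal sketch of that idea follows; the grid, values, and function name are invented for illustration and are not the paper's actual tables or code.

```python
import numpy as np

# Hypothetical tabulated antiproton spectra dN/dx at two DM masses (toy numbers,
# NOT the paper's tables): one row per mass, one column per x = K/M_chi bin.
masses = np.array([100.0, 200.0])           # GeV
x_grid = np.logspace(-3, 0, 5)
spectra = np.array([
    [12.0, 8.0, 3.0, 0.6, 0.01],            # dN/dx at M_chi = 100 GeV
    [15.0, 10.0, 4.0, 0.9, 0.02],           # dN/dx at M_chi = 200 GeV
])

def interpolate_spectrum(m_chi, masses, spectra):
    """Log-linear interpolation in mass, bin by bin, of a tabulated spectrum."""
    logm = np.log(masses)
    w = (np.log(m_chi) - logm[0]) / (logm[1] - logm[0])   # weight in log-mass
    # Interpolate log(dN/dx) so the result stays positive between the tables.
    return np.exp((1 - w) * np.log(spectra[0]) + w * np.log(spectra[1]))

spec_140 = interpolate_spectrum(140.0, masses, spectra)   # spectrum at 140 GeV
```

Interpolating in log(mass) and log(spectrum) keeps the result positive and bracketed by the two tabulated rows; the paper's own snippets should be preferred for real fits.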
DUNE Collaboration (Abud, A. A., et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034–35pp.
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
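The quoted speed-up comes from mapping the per-pixel computation onto one GPU thread per pixel. A minimal CPU-side sketch of that data-parallel pattern (a toy current model with hypothetical names, not the actual DUNE simulator physics) is:

```python
import numpy as np

# Toy stand-in for a per-pixel induced-current computation: each drifting
# charge deposits a Gaussian-shaped signal on every pixel, weighted by its
# distance to the pixel centre. Purely illustrative numbers and model.
rng = np.random.default_rng(0)
n_pixels, n_charges = 1000, 200
pixel_pos = rng.uniform(0.0, 1.0, size=(n_pixels, 2))
charge_pos = rng.uniform(0.0, 1.0, size=(n_charges, 2))
charge_q = rng.uniform(0.5, 1.5, size=n_charges)

def induced_current_loop(pixel_pos, charge_pos, charge_q, sigma=0.05):
    """Reference per-pixel loop: the body is what would become a GPU kernel."""
    out = np.zeros(len(pixel_pos))
    for i, p in enumerate(pixel_pos):       # one GPU thread would own index i
        d2 = np.sum((charge_pos - p) ** 2, axis=1)
        out[i] = np.sum(charge_q * np.exp(-d2 / (2 * sigma**2)))
    return out

def induced_current_vec(pixel_pos, charge_pos, charge_q, sigma=0.05):
    """Same computation with the pixel loop expressed as a broadcast."""
    d2 = np.sum((pixel_pos[:, None, :] - charge_pos[None, :, :]) ** 2, axis=2)
    return np.sum(charge_q * np.exp(-d2 / (2 * sigma**2)), axis=1)
```

In a Numba-based chain the loop body of `induced_current_loop` would be compiled as a CUDA kernel indexed by thread; the broadcast version shows the same independent-per-pixel structure that makes the problem GPU-friendly.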
de los Rios, M., Petac, M., Zaldivar, B., Bonaventura, N. R., Calore, F., & Iocco, F. (2023). Determining the dark matter distribution in simulated galaxies with deep learning. Mon. Not. Roy. Astron. Soc., 525(4), 6015–6035.
Abstract: We present a novel method of inferring the dark matter (DM) content and spatial distribution within galaxies, using convolutional neural networks (CNNs) trained on state-of-the-art hydrodynamical simulations (Illustris-TNG100). Within the controlled environment of the simulation, the framework we have developed is capable of inferring the DM mass distribution within galaxies of mass ∼10^11–10^13 M_⊙, from the gravitationally baryon-dominated internal regions to the DM-rich, baryon-depleted outskirts of the galaxies, with a mean absolute error always below ≈0.25 when using photometric and spectroscopic information. With respect to traditional methods, the one presented here also possesses the advantages of not relying on a pre-assigned shape for the DM distribution, of being applicable to galaxies not necessarily in isolation, and of performing very well even in the absence of spectroscopic observations.
Caron, S., Gomez-Vargas, G. A., Hendriks, L., & Ruiz de Austri, R. (2018). Analyzing gamma rays of the Galactic Center with deep learning. J. Cosmol. Astropart. Phys., 05(5), 058–24pp.
Abstract: We present the application of convolutional neural networks to a particular problem in gamma ray astronomy. Explicitly, we use this method to investigate the origin of an excess emission of GeV gamma rays in the direction of the Galactic Center, reported by several groups by analyzing Fermi-LAT data. Interpretations of this excess include gamma rays created by the annihilation of dark matter particles and gamma rays originating from a collection of unresolved point sources, such as millisecond pulsars. We train and test convolutional neural networks with simulated Fermi-LAT images based on point and diffuse emission models of the Galactic Center tuned to measured gamma ray data. Our new method allows precise measurements of the contribution and properties of an unresolved population of gamma ray point sources in the interstellar diffuse emission model. The current model predicts the fraction of unresolved point sources with an error of up to 10% and this is expected to decrease with future work.
Keywords: gamma ray experiments; dark matter simulations
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) → qq̄′, photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. However, these results have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
Etxebeste, A., Dauvergne, D., Fontana, M., Letang, J. M., Llosa, G., Muñoz, E., et al. (2020). CCMod: a GATE module for Compton camera imaging simulation. Phys. Med. Biol., 65(5), 055004–17pp.
Abstract: Compton cameras are gamma-ray imaging systems which have been proposed for a wide variety of applications such as medical imaging, nuclear decommissioning or homeland security. In the design and optimization of such a system, Monte Carlo simulations play an essential role. In this work, we propose a generic module to perform Monte Carlo simulations and analyses of Compton camera imaging, which is included in the open-source GATE/Geant4 platform. Several digitization stages have been implemented within the module to mimic the performance of the most commonly employed detectors (e.g. monolithic blocks, pixelated scintillator crystals, strip detectors, etc.). A time coincidence sorter and sequence coincidence reconstruction are also available, with the aim of facilitating the comparison and reproduction of data taken with different prototypes. All processing steps may be performed during the simulation (on-the-fly mode) or as a post-process of the output files (offline mode). The predictions of the module have been compared with experimental data in terms of energy spectra, angular resolution, efficiency and back-projection image reconstruction. Consistent results within a 3-sigma interval were obtained for the energy spectra, except at low energies where small differences arise. The angular resolution measure for incident photons of 1275 keV was also in good agreement between both data sets, with a value close to 13 degrees. Moreover, with the aim of demonstrating the versatility of such a tool, the performance of two different Compton camera designs was evaluated and compared.
Keywords: Monte Carlo; simulation; gamma imaging; Compton camera
LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Centrality determination in heavy-ion collisions with the LHCb detector. J. Instrum., 17(5), P05009–31pp.
Abstract: The centrality of heavy-ion collisions is directly related to the medium created in these interactions. A procedure to determine the centrality of collisions with the LHCb detector is implemented for lead-lead collisions at √s_NN = 5 TeV and lead-neon fixed-target collisions at √s_NN = 69 GeV. The energy deposits in the electromagnetic calorimeter are used to determine and define the centrality classes. The correspondence between the number of participants and the centrality for the lead-lead collisions is in good agreement with that found in other experiments, and the centrality measurements for the lead-neon collisions presented here are the first performed in fixed-target collisions at the LHC.
Moline, A., Ibarra, A., & Palomares-Ruiz, S. (2015). Future sensitivity of neutrino telescopes to dark matter annihilations from the cosmic diffuse neutrino signal. J. Cosmol. Astropart. Phys., 06(6), 005–34pp.
Abstract: Cosmological observations and cold dark matter N-body simulations indicate that our Universe is populated by numerous halos, where dark matter particles annihilate, potentially producing Standard Model particles. In this paper we calculate the contribution to the diffuse neutrino background from dark matter annihilations in halos at all redshifts and we estimate the future sensitivity to the annihilation cross section of neutrino telescopes such as IceCube or ANTARES. We consider various parametrizations to describe the internal halo properties and for the halo mass function in order to bracket the theoretical uncertainty in the limits from the modeling of the cosmological annihilation flux. We find that observations of the cosmic diffuse neutrino flux at large angular distances from the galactic center lead to constraints on the dark matter annihilation cross section which are complementary to (and, for some extrapolations of the astrophysical parameters, better than) those stemming from observations of the Milky Way halo, especially for neutrino telescopes not pointing directly to the Milky Way center, as is the case of IceCube.
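The cosmological signal discussed above is, schematically, a redshift integral of the annihilation rate weighted by a halo clumping factor. A toy numerical version of that integral (the clumping parametrization here is invented for illustration; the paper brackets it with different halo models) is:

```python
import numpy as np

# Schematic redshift integral for the cosmological annihilation flux,
# Phi ~ Integral dz (1+z)^3 * G(z) / H(z), with a toy clumping factor G(z).
# Overall normalisation (cross section, DM density, spectrum) is omitted.
H0 = 67.0 / 3.086e19              # Hubble rate in s^-1 (67 km/s/Mpc)
omega_m, omega_l = 0.31, 0.69     # flat-LCDM density parameters

def hubble(z):
    """H(z) for a flat LCDM cosmology."""
    return H0 * np.sqrt(omega_m * (1 + z) ** 3 + omega_l)

def boost(z, g0=1e5, zc=2.0):
    """Toy clumping factor: the boost decays at high z, where fewer halos exist."""
    return g0 * np.exp(-z / zc)

z = np.linspace(0.0, 10.0, 2001)
integrand = (1 + z) ** 3 * boost(z) / hubble(z)
flux_factor = np.sum(integrand) * (z[1] - z[0])   # simple Riemann sum
```

Swapping in different `boost` parametrizations is the numerical analogue of the bracketing exercise the paper performs with halo-model choices.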
Caron, S., Eckner, C., Hendriks, L., Johannesson, G., Ruiz de Austri, R., & Zaharijas, G. (2023). Mind the gap: the discrepancy between simulation and reality drives interpretations of the Galactic Center Excess. J. Cosmol. Astropart. Phys., 06(6), 013–56pp.
Abstract: The Galactic Center Excess (GCE) in GeV gamma rays has been debated for over a decade, with the possibility that it might be due to dark matter annihilation or undetected point sources such as millisecond pulsars (MSPs). This study investigates how the gamma-ray emission model (γEM) used in Galactic center analyses affects the interpretation of the GCE's nature. To address this issue, we construct an ultra-fast and powerful inference pipeline based on convolutional Deep Ensemble Networks. We explore the two main competing hypotheses for the GCE using a set of γEMs with increasing parametric freedom. We calculate the fractional contribution (f_src) of a dim population of MSPs to the total luminosity of the GCE and analyze its dependence on the complexity of the γEM. For the simplest γEM, we obtain f_src = 0.10 ± 0.07, while the most complex model yields f_src = 0.79 ± 0.24. In conclusion, we find that the statement about the nature of the GCE (dark matter or not) strongly depends on the assumed γEM. The quoted results for f_src do not account for the additional uncertainty arising from the fact that the observed gamma-ray sky is out-of-distribution with respect to the investigated γEM iterations. We quantify the reality gap of our γEMs using deep-learning-based One-Class Deep Support Vector Data Description networks, revealing that all employed γEMs have gaps to reality. Our study casts doubt on the validity of previous conclusions regarding the GCE and dark matter, and underscores the urgent need to account for the reality gap and consider previously overlooked "out of domain" uncertainties in future interpretations.
Nzongani, U., Zylberman, J., Doncecchi, C. E., Perez, A., Debbasch, F., & Arnault, P. (2023). Quantum circuits for discrete-time quantum walks with position-dependent coin operator. Quantum Inf. Process., 22(7), 270–46pp.
Abstract: The aim of this paper is to build quantum circuits that implement discrete-time quantum walks having an arbitrary position-dependent coin operator. The position of the walker is encoded in base 2: with n wires, each corresponding to one qubit, we encode 2^n position states. The data necessary to define an arbitrary position-dependent coin operator is therefore exponential in n. Hence, the exponentiality will necessarily appear somewhere in our circuits. We first propose a circuit implementing the position-dependent coin operator that is naive, in the sense that it has exponential depth and implements sequentially all appropriate position-dependent coin operators. We then propose a circuit that "transfers" all the depth into ancillae, yielding a final depth that is linear in n at the cost of an exponential number of ancillae. The main idea of this linear-depth circuit is to implement in parallel all coin operators at the different positions. Reducing the depth exponentially at the cost of having an exponential number of ancillae is a goal which has already been achieved for the problem of loading classical data on a quantum circuit (Araujo in Sci Rep 11:6329, 2021) (notice that such a circuit can be used to load the initial state of the walker). Here, we achieve this goal for the problem of applying a position-dependent coin operator in a discrete-time quantum walk. Finally, we extend the result of Welch (New J Phys 16:033040, 2014) from position-dependent unitaries which are diagonal in the position basis to position-dependent 2×2-block-diagonal unitaries: indeed, we show that for a position dependence of the coin operator (the block-diagonal unitary) which is smooth enough, one can find an efficient quantum-circuit implementation approximating the coin operator up to an error ε (in terms of the spectral norm), the depth and size of which scale as O(1/ε).
A typical application of the efficient implementation would be the quantum simulation of a relativistic spin-1/2 particle on a lattice, coupled to a smooth external gauge field; notice that recently, quantum spatial-search schemes have been developed which use gauge fields as the oracle, to mark the vertex to be found (Zylberman in Entropy 23:1441, 2021), (Fredon arXiv:2210.13920). A typical application of the linear-depth circuit would be when there is spatial noise on the coin operator (and hence a non-smooth dependence in the position).
Keywords: Quantum walks; Quantum circuits; Quantum simulation
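The walk such circuits implement can be checked classically. A minimal dense simulation of one coin-plus-shift step with a position-dependent coin (the rotation-angle coin below is invented purely for illustration; the paper builds quantum circuits for this operation, which this classical sketch does not do) is:

```python
import numpy as np

n_pos = 8                                   # positions 0..7 on a periodic lattice

def coin(x):
    """Hypothetical position-dependent SU(2) coin: rotation angle varies with x."""
    th = np.pi * x / n_pos
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

def walk_step(psi):
    """One walk step; psi has shape (n_pos, 2): amplitude per (position, coin state)."""
    # 1) apply the local coin at each position
    psi = np.stack([coin(x) @ psi[x] for x in range(n_pos)])
    # 2) coin-conditioned shift: coin state 0 moves left, coin state 1 moves right
    out = np.zeros_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)
    out[:, 1] = np.roll(psi[:, 1], +1)
    return out

psi0 = np.zeros((n_pos, 2), dtype=complex)
psi0[n_pos // 2, 0] = 1.0                   # walker starts localised
psi = walk_step(walk_step(psi0))            # two steps of the walk
```

Since the coin at each position is unitary and the shift is a permutation, the total norm of `psi` is preserved step by step; this is the invariant any circuit implementation must also respect.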