Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) -> q qbar', photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results, however, carry intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
|
Caron, S., Eckner, C., Hendriks, L., Johannesson, G., Ruiz de Austri, R., & Zaharijas, G. (2023). Mind the gap: the discrepancy between simulation and reality drives interpretations of the Galactic Center Excess. J. Cosmol. Astropart. Phys., 06(6), 013–56pp.
Abstract: The Galactic Center Excess (GCE) in GeV gamma rays has been debated for over a decade, with the possibility that it might be due to dark matter annihilation or undetected point sources such as millisecond pulsars (MSPs). This study investigates how the gamma-ray emission model (γEM) used in Galactic center analyses affects the interpretation of the GCE's nature. To address this issue, we construct an ultra-fast and powerful inference pipeline based on convolutional Deep Ensemble Networks. We explore the two main competing hypotheses for the GCE using a set of γEMs with increasing parametric freedom. We calculate the fractional contribution (fsrc) of a dim population of MSPs to the total luminosity of the GCE and analyze its dependence on the complexity of the γEM. For the simplest γEM, we obtain fsrc = 0.10 ± 0.07, while the most complex model yields fsrc = 0.79 ± 0.24. In conclusion, we find that the statement about the nature of the GCE (dark matter or not) strongly depends on the assumed γEM. The quoted results for fsrc do not account for the additional uncertainty arising from the fact that the observed gamma-ray sky is out-of-distribution with respect to the investigated γEM iterations. We quantify the reality gap between our γEMs using deep-learning-based One-Class Deep Support Vector Data Description networks, revealing that all employed γEMs have gaps to reality. Our study casts doubt on the validity of previous conclusions regarding the GCE and dark matter, and underscores the urgent need to account for the reality gap and consider previously overlooked "out of domain" uncertainties in future interpretations.
|
Bernal, N., Forero-Romero, J. E., Garani, R., & Palomares-Ruiz, S. (2014). Systematic uncertainties from halo asphericity in dark matter searches. J. Cosmol. Astropart. Phys., 09(9), 004–30pp.
Abstract: Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations, and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of the local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3–0.4 GeV/cm^3. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
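For reference, the J factors discussed in this entry follow the standard definitions (normalisation conventions vary between papers): a line-of-sight integral over the squared dark matter density for annihilation, and over the density itself for decay,

```latex
J_{\mathrm{ann}}(\Delta\Omega) = \int_{\Delta\Omega} \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \mathrm{d}\ell\, \rho^{2}\!\left(r(\ell,\psi)\right),
\qquad
J_{\mathrm{dec}}(\Delta\Omega) = \int_{\Delta\Omega} \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \mathrm{d}\ell\, \rho\!\left(r(\ell,\psi)\right),
```

where rho is the halo density profile, the inner integral runs along the line of sight at angle psi from the galactic center, and Delta-Omega is the observed solid angle. The quadratic dependence on rho for annihilation is why asphericity affects the annihilation and decay J factors differently.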
|
Moline, A., Ibarra, A., & Palomares-Ruiz, S. (2015). Future sensitivity of neutrino telescopes to dark matter annihilations from the cosmic diffuse neutrino signal. J. Cosmol. Astropart. Phys., 06(6), 005–34pp.
Abstract: Cosmological observations and cold dark matter N-body simulations indicate that our Universe is populated by numerous halos, where dark matter particles annihilate, potentially producing Standard Model particles. In this paper we calculate the contribution to the diffuse neutrino background from dark matter annihilations in halos at all redshifts, and we estimate the future sensitivity to the annihilation cross section of neutrino telescopes such as IceCube or ANTARES. We consider various parametrizations for the internal halo properties and for the halo mass function in order to bracket the theoretical uncertainty in the limits from the modeling of the cosmological annihilation flux. We find that observations of the cosmic diffuse neutrino flux at large angular distances from the galactic center lead to constraints on the dark matter annihilation cross section which are complementary to (and, for some extrapolations of the astrophysical parameters, better than) those stemming from observations of the Milky Way halo, especially for neutrino telescopes not pointing directly at the Milky Way center, as is the case for IceCube.
|
Achterberg, A., Amoroso, S., Caron, S., Hendriks, L., Ruiz de Austri, R., & Weniger, C. (2015). A description of the Galactic Center excess in the Minimal Supersymmetric Standard Model. J. Cosmol. Astropart. Phys., 08(8), 006–27pp.
Abstract: Observations with the Fermi Large Area Telescope (LAT) indicate an excess in gamma rays originating from the center of our Galaxy. A possible explanation for this excess is the annihilation of Dark Matter particles. We have investigated the annihilation of neutralinos as Dark Matter candidates within the phenomenological Minimal Supersymmetric Standard Model (pMSSM). An iterative particle filter approach was used to search for solutions within the pMSSM. We found solutions that are consistent with astroparticle physics and collider experiments, and provide a fit to the energy spectrum of the excess. The neutralino is a Bino/Higgsino or Bino/Wino/Higgsino mixture with a mass in the range 84–92 GeV or 87–97 GeV annihilating into W bosons. A third solution is found for a neutralino of mass 174–187 GeV annihilating into top quarks. The best solutions yield a Dark Matter relic density 0.06 < Omega h^2 < 0.13. These pMSSM solutions make clear forecasts for the LHC and for direct and indirect DM detection experiments. If the pMSSM explanation of the excess seen by Fermi-LAT is correct, a DM signal might be discovered soon.
|
Villaescusa-Navarro, F., & Dalal, N. (2011). Cores and cusps in warm dark matter halos. J. Cosmol. Astropart. Phys., 03(3), 024–16pp.
Abstract: The apparent presence of large core radii in Low Surface Brightness galaxies has been claimed as evidence in favor of warm dark matter. Here we show that WDM halos do not have cores that are large fractions of the halo size: typically, r_core/r_200 ≲ 10^-3. This suggests an astrophysical origin for the large cores observed in these galaxies, as has been argued by other authors.
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which cause signal pileup. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high-pileup particle detectors. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as benchmark. The implementation, resource usage and performance of the proposed neural network algorithm on the FPGA are also presented.
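As background to this entry, a classical FIR amplitude reconstruction estimates the deposited energy as a fixed weighted sum of the digitized samples. The sketch below is illustrative only: the pulse shape, number of samples and weights are made up for the example and are not the actual ATLAS TileCal pulse or its optimal-filter coefficients. It also shows why pileup biases such a linear estimator, which is the paper's motivation for a neural-network approach.

```python
import numpy as np

def fir_amplitude(samples, weights):
    """Estimate the pulse amplitude as a linear combination of ADC samples."""
    return float(np.dot(weights, samples))

# Illustrative normalized pulse shape sampled at 7 points (not the real TileCal pulse).
pulse = np.array([0.0, 0.1, 0.6, 1.0, 0.7, 0.3, 0.1])

# For a noise-free, isolated pulse, weights proportional to the pulse shape
# (normalized by its squared norm) recover the true amplitude exactly.
weights = pulse / np.dot(pulse, pulse)

a_true = 250.0  # true amplitude in ADC counts (hypothetical)
assert abs(fir_amplitude(a_true * pulse, weights) - a_true) < 1e-9

# With an overlapping out-of-time pulse (pileup), the same linear estimate
# becomes biased -- the regime where the paper's deep learning method helps.
piled_up = a_true * pulse + 100.0 * np.roll(pulse, 3)
biased = fir_amplitude(piled_up, weights)  # larger than a_true
```

In practice the weights would instead be derived from the measured pulse shape and noise covariance (optimal filtering); the bias under distortion is the same qualitative effect either way.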
|
Natochii, A. et al, & Marinas, C. (2023). Measured and projected beam backgrounds in the Belle II experiment at the SuperKEKB collider. Nucl. Instrum. Methods Phys. Res. A, 1055, 168550–21pp.
Abstract: The Belle II experiment at the SuperKEKB electron-positron collider aims to collect an unprecedented data set of 50 ab^-1 to study CP-violation in the B-meson system and to search for Physics beyond the Standard Model. SuperKEKB is already the world's highest-luminosity collider. In order to collect the planned data set within approximately one decade, the target is to reach a peak luminosity of 6 × 10^35 cm^-2 s^-1 by further increasing the beam currents and reducing the beam size at the interaction point by squeezing the betatron function down to β_y* = 0.3 mm. To ensure detector longevity and maintain good reconstruction performance, beam backgrounds must remain well controlled. We report on current background rates in Belle II and compare these against simulation. We find that a number of recent refinements have significantly improved the background simulation accuracy. Finally, we estimate the safety margins going forward. We predict that backgrounds should remain high but acceptable until a luminosity of at least 2.8 × 10^35 cm^-2 s^-1 is reached for β_y* = 0.6 mm. At this point, the most vulnerable Belle II detectors, the Time-of-Propagation (TOP) particle identification system and the Central Drift Chamber (CDC), have predicted background hit rates from single-beam and luminosity backgrounds that add up to approximately half of the maximum acceptable rates.
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., Cerda Alberich, L., et al. (2017). Study of the material of the ATLAS inner detector for Run 2 of the LHC. J. Instrum., 12, P12009–59pp.
Abstract: The ATLAS inner detector comprises three different sub-detectors: the pixel detector, the silicon strip tracker, and the transition-radiation drift-tube tracker. The Insertable B-Layer, a new innermost pixel layer, was installed during the shutdown period in 2014, together with modifications to the layout of the cables and support structures of the existing pixel detector. The material in the inner detector is studied with several methods, using a low-luminosity √s = 13 TeV pp collision sample corresponding to around 2.0 nb^-1 collected in 2015 with the ATLAS experiment at the LHC. In this paper, the material within the innermost barrel region is studied using reconstructed hadronic interaction and photon conversion vertices. For the forward rapidity region, the material is probed by a measurement of the efficiency with which single tracks reconstructed from pixel detector hits alone can be extended with hits on the track in the strip layers. The results of these studies have been taken into account in an improved description of the material in the ATLAS inner detector simulation, resulting in a reduction in the uncertainties associated with the charged-particle reconstruction efficiency determined from simulation.
|
Poley, L., Stolzenberg, U., Schwenker, B., Frey, A., Gottlicher, P., Marinas, C., et al. (2021). Mapping the material distribution of a complex structure in an electron beam. J. Instrum., 16(1), P01010–33pp.
Abstract: The simulation and analysis of High Energy Physics experiments require a realistic simulation of the detector material and its distribution. The challenge is to describe all active and passive parts of large scale detectors like ATLAS in terms of their size, position and material composition. The common method for estimating the radiation length by weighing individual components, adding up their contributions and averaging the resulting material distribution over extended structures provides a good general estimate, but can deviate significantly from the material actually present. A method has been developed to assess the material distribution of an object under investigation with high spatial resolution, using the reconstructed scattering angles and hit positions of high-energy electron tracks traversing it. The study presented here shows measurements for an extended structure with a highly inhomogeneous material distribution. The structure under investigation is an End-of-Substructure-card prototype designed for the ATLAS Inner Tracker strip tracker – a PCB populated with components spanning a large range of material budgets and sizes. The measurements presented here summarise the requirements on data samples and reconstructed electron tracks for reliable image reconstruction of large-scale, inhomogeneous samples, the choice of pixel size relative to the size of the features under investigation, as well as a bremsstrahlung correction for high material densities and thicknesses.
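As general background (not taken from the paper itself), material mapping with scattering angles typically relies on the standard Highland parametrisation, which relates the RMS projected multiple-scattering angle of a particle of charge number z and momentum p to the traversed material thickness x in units of the radiation length X_0:

```latex
\theta_0 \simeq \frac{13.6\,\mathrm{MeV}}{\beta c p}\, z \sqrt{\frac{x}{X_0}} \left[ 1 + 0.038 \ln\frac{x}{X_0} \right]
```

Measuring theta_0 locally for many electron tracks therefore yields a position-resolved estimate of x/X_0, which is the quantity imaged in studies of this kind.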
|