Bhattacharya, A., Esmaili, A., Palomares-Ruiz, S., & Sarcevic, I. (2019). Update on decaying and annihilating heavy dark matter with the 6-year IceCube HESE data. J. Cosmol. Astropart. Phys., 05(5), 051–30pp.
Abstract: In view of IceCube's 6-year high-energy starting events (HESE) sample, we revisit the possibility that the updated data may be better explained by a combination of neutrino fluxes from dark matter decay and an isotropic astrophysical power law than purely by the latter. We find that the combined two-component flux qualitatively improves the fit to the observed data over a purely astrophysical one, and discuss how these updated fits compare against a similar analysis done with the 4-year HESE data. We also update fits involving dark matter decay via multiple channels, without any contribution from the astrophysical flux. We find that a DM-only explanation is not excluded by neutrino data alone. Finally, we also consider the possibility of a signal from dark matter annihilations and perform analogous analyses to the case of decays, commenting on its implications.
|
ATLAS Collaboration (Aaboud, M. et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Measurement of the inclusive isolated-photon cross section in pp collisions at √s = 13 TeV using 36 fb⁻¹ of ATLAS data. J. High Energy Phys., 10(10), 203–51pp.
Abstract: The differential cross section for isolated-photon production in pp collisions is measured at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC using an integrated luminosity of 36.1 fb⁻¹. The differential cross section is presented as a function of the photon transverse energy in different regions of photon pseudorapidity. The differential cross section as a function of the absolute value of the photon pseudorapidity is also presented in different regions of photon transverse energy. Next-to-leading-order QCD calculations from Jetphox and Sherpa as well as next-to-next-to-leading-order QCD calculations from Nnlojet are compared with the measurement, using several parameterisations of the proton parton distribution functions. The predictions provide a good description of the data within the experimental and theoretical uncertainties.
|
Caputo, A., Esposito, A., Geoffray, E., Polosa, A. D., & Sun, S. C. (2020). Dark matter, dark photon and superfluid He-4 from effective field theory. Phys. Lett. B, 802, 135258–6pp.
Abstract: We consider a model of sub-GeV dark matter whose interaction with the Standard Model is mediated by a new vector boson (the dark photon) which couples kinetically to the photon. We describe the possibility of constraining such a model using a superfluid He-4 detector, by means of an effective theory describing the superfluid phonon. We find that such a detector could provide bounds that are competitive with other direct detection experiments only for an ultralight vector mediator, in agreement with previous studies. As a byproduct we also present, for the first time, the low-energy effective field theory for the interaction between photons and phonons.
|
NEXT Collaboration (Fernandes, A. F. M. et al.), Alvarez, V., Benlloch-Rodriguez, J. M., Carcel, S., Carrion, J. V., Diaz, J., et al. (2020). Low-diffusion Xe-He gas mixtures for rare-event detection: electroluminescence yield. J. High Energy Phys., 04(4), 034–18pp.
Abstract: High-pressure xenon Time Projection Chambers (TPCs) based on secondary scintillation (electroluminescence) signal amplification are being proposed for rare-event searches such as directional dark matter, double electron capture and double beta decay detection. The discrimination of the rare event through the topological signature of primary ionisation trails is a major asset for this type of TPC when compared to single-liquid or double-phase TPCs, limited mainly by the high electron diffusion in pure xenon. Helium admixtures with xenon can be an attractive solution to reduce the electron diffusion significantly, improving the discrimination efficiency of these optical TPCs. We have measured the electroluminescence (EL) yield of Xe-He mixtures in the range of 0 to 30% He and demonstrated the small impact on the EL yield of the addition of helium to pure xenon. For a typical reduced electric field of 2.5 kV/cm/bar in the EL region, the EL yield is lowered by ∼2%, 3%, 6% and 10% for 10%, 15%, 20% and 30% helium concentration, respectively. This decrease is smaller than that obtained from the most recent simulation framework in the literature. The impact of the addition of helium on EL statistical fluctuations is negligible, within the experimental uncertainties. The present results are an important benchmark for the simulation tools to be applied to future optical TPCs based on Xe-He mixtures.
|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., et al. (2020). Measurement of isolated-photon plus two-jet production in pp collisions at √s = 13 TeV with the ATLAS detector. J. High Energy Phys., 03(3), 179–49pp.
Abstract: The dynamics of isolated-photon plus two-jet production in pp collisions at a centre-of-mass energy of 13 TeV are studied with the ATLAS detector at the LHC using a dataset corresponding to an integrated luminosity of 36.1 fb⁻¹. Cross sections are measured as functions of a variety of observables, including angular correlations and invariant masses of the objects in the final state, γ + jet + jet. Measurements are also performed in phase-space regions enriched in each of the two underlying physical mechanisms, namely direct and fragmentation processes. The measurements cover the range of photon (jet) transverse momenta from 150 GeV (100 GeV) to 2 TeV. The tree-level plus parton-shower predictions from Sherpa and Pythia as well as the next-to-leading-order QCD predictions from Sherpa are compared with the measurements. The next-to-leading-order QCD predictions describe the data adequately in shape and normalisation except for regions of phase space such as those with high values of the invariant mass or rapidity separation of the two jets, where the predictions overestimate the data.
|
Gimenez-Alventosa, V., Gimenez, V., Ballester, F., Vijande, J., & Andreo, P. (2020). Monte Carlo calculation of beam quality correction factors for PTW cylindrical ionization chambers in photon beams. Phys. Med. Biol., 65(20), 205005–11pp.
Abstract: The beam quality correction factor k_Q for megavoltage photon beams has been calculated for eight PTW (Freiburg, Germany) ionization chambers (Farmer chambers PTW30010, PTW30011, PTW30012, and PTW30013, Semiflex 3D chambers PTW31021, PTW31010, and PTW31013, and the PinPoint 3D chamber PTW31016). Simulations performed on the widely used NE-2571 ionization chamber have been used to benchmark the results. The Monte Carlo code PENELOPE/penEasy was used to calculate the absorbed dose to a point in water and the absorbed dose to the active air volume of the chambers for photon beams in the range 4 to 24 MV. Of the nine ionization chambers analysed, only five are included in the current version of the International Code of Practice for dosimetry based on standards of absorbed dose to water (IAEA TRS 398). The values reported in this work agree with those in the literature within the uncertainty estimates and are to be included in the average values of the data obtained by different working groups for the forthcoming update of TRS 398.
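As context for the quantity reported (this is the standard TRS-398 relation, not an equation quoted from the paper): in a Monte Carlo calculation the beam quality correction factor reduces to a ratio, between the user quality Q and the reference quality Q_0 (typically ⁶⁰Co), of water-to-air dose ratios,

```latex
k_Q \simeq \frac{\left( D_w / \bar{D}_{\mathrm{air}} \right)_Q}
                {\left( D_w / \bar{D}_{\mathrm{air}} \right)_{Q_0}},
```

where D_w is the absorbed dose to water at the reference point and \bar{D}_{air} the mean absorbed dose to the chamber's active air volume, i.e. precisely the two quantities the abstract states were scored with PENELOPE/penEasy.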
|
Poley, L., Stolzenberg, U., Schwenker, B., Frey, A., Gottlicher, P., Marinas, C., et al. (2021). Mapping the material distribution of a complex structure in an electron beam. J. Instrum., 16(1), P01010–33pp.
Abstract: The simulation and analysis of High Energy Physics experiments require a realistic simulation of the detector material and its distribution. The challenge is to describe all active and passive parts of large-scale detectors like ATLAS in terms of their size, position and material composition. The common method of estimating the radiation length by weighing individual components, adding up their contributions and averaging the resulting material distribution over extended structures provides a good general estimate, but can deviate significantly from the material actually present. A method has been developed to assess the material distribution of an object under investigation with high spatial resolution, using the reconstructed scattering angles and hit positions of high-energy electron tracks traversing it. The study presented here shows measurements for an extended structure with a highly inhomogeneous material distribution: an End-of-Substructure-card prototype designed for the ATLAS Inner Tracker strip tracker, a PCB populated with components spanning a large range of material budgets and sizes. The measurements presented here summarise the requirements on data samples and reconstructed electron tracks for reliable image reconstruction of large-scale, inhomogeneous samples, the choice of pixel size relative to the size of the features under investigation, as well as a bremsstrahlung correction for high material densities and thicknesses.
|
ANTARES Collaboration (Albert, A. et al.), Carretero, V., Colomer, M., Gozzini, R., Hernandez-Rey, J. J., Illuminati, G., et al. (2021). ANTARES upper limits on the multi-TeV neutrino emission from the GRBs detected by IACTs. J. Cosmol. Astropart. Phys., 03(3), 092–17pp.
Abstract: The first gamma-ray burst detections by Imaging Atmospheric Cherenkov Telescopes have recently been announced: GRB 190114C, detected by MAGIC, and GRB 180720B and GRB 190829A, observed by H.E.S.S. A dedicated search for neutrinos in space and time coincidence with the gamma-ray emission observed by IACTs has been performed using ANTARES data. The search covers both the prompt and afterglow phases, yielding no neutrinos in coincidence with the three GRBs studied. Upper limits on the energetics of the neutrino emission are inferred. The resulting upper limits are several orders of magnitude above the observed gamma-ray emission, and they do not allow the available models to be constrained.
|
Cieri, L., & Sborlini, G. F. R. (2021). Exploring QED Effects to Diphoton Production at Hadron Colliders. Symmetry-Basel, 13(6), 994–17pp.
Abstract: In this article, we report phenomenological studies of the impact of O(α) corrections to diphoton production at hadron colliders. We explore the application of the Abelianized version of the q_T-subtraction method to efficiently compute NLO QED contributions, taking advantage of the symmetries relating QCD and QED corrections. We analyze the experimental consequences of the selection criteria and find percent-level deviations for M_γγ > 1 TeV. An accurate description of the tail of the invariant-mass distribution is very important for new-physics searches which have the diphoton process as one of their main backgrounds. Moreover, we emphasize the importance of properly dealing with the observable photons by reproducing the experimental conditions applied to the event reconstruction.
|
Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065–12pp.
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their usage in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, and this restricts their applicability. To help overcome these limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework code based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It has been implemented in the C++ programming language and takes advantage of modern object-oriented technologies. In addition, PenRed offers the capability to read and process DICOM images as well as to construct and simulate image-based voxelized geometries, so as to facilitate its usage in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in simulation time without any loss of precision in the results.
Program summary
Program title: PenRed: Parallel Engine for Radiation Energy Deposition.
CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1
Licensing provision: GNU Affero General Public License (AGPL).
Programming language: C++ standard 2011.
Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications necessitate particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure which allows users to easily customise their code to suit their needs without in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customisable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its physics models and tracking algorithms for charged particles. PenRed has been coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a twofold approach: on the one hand, standard C++ threads for shared memory, improving the access and usage of the memory; on the other hand, the MPI standard for distributed-memory infrastructures. Both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt our engine to their needs without changing the original modules. Furthermore, user extensions take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming.
Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ of GCC versions 4.8.5, 7.3.1, 8.3.1 and 9; clang version 3.4.2; and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler with C++11 support. In addition, if the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution. Finally, to provide DICOM processing support, PenRed can be optionally compiled with the DICOM toolkit (dcmtk) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the dcmtk library.
|