Di Valentino, E., Melchiorri, A., Mena, O., & Vagnozzi, S. (2020). Interacting dark energy in the early 2020s: A promising solution to the H0 and cosmic shear tensions. Phys. Dark Universe, 30, 100666 (12pp).
Abstract: We examine interactions between dark matter and dark energy in light of the latest cosmological observations, focusing on a specific model with coupling proportional to the dark energy density. Our data include Cosmic Microwave Background (CMB) measurements from the Planck 2018 legacy data release, late-time measurements of the expansion history from Baryon Acoustic Oscillations (BAO) and Type Ia Supernovae (SNeIa), galaxy clustering and cosmic shear measurements from the Dark Energy Survey Year 1 results, and the 2019 local distance-ladder measurement of the Hubble constant H0 from the Hubble Space Telescope. Planck data, in combination with either BAO or SNeIa data, reduce the H0 tension to a level that could be compatible with a statistical fluctuation. The very same model also significantly reduces the Ω_m–σ_8 tension between CMB and cosmic shear measurements. Interactions between the dark sectors of our Universe therefore remain a promising joint solution to these persisting cosmological tensions.
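For orientation, the background evolution of such a coupled model can be sketched in a few lines. The coupling form Q = ξHρ_de and the sign conventions below are assumptions for illustration, not the paper's exact equations:

```python
# Toy background evolution for interacting dark energy with coupling
# Q = xi * H * rho_de. Densities are evolved in x = ln(a), so H drops
# out of the equations. Conventions here are illustrative assumptions.

def evolve(xi, w=-0.999, rho_dm0=0.26, rho_de0=0.69, steps=10000):
    """Integrate d(rho_dm)/dx = -3 rho_dm + xi rho_de and
    d(rho_de)/dx = -3(1+w) rho_de - xi rho_de backwards from x = 0
    (today) to z = 100, using fixed-step RK4. Returns (rho_dm, rho_de)
    at z = 100 in units of today's critical density."""
    import math
    x0, x1 = 0.0, math.log(1.0 / 101.0)  # z = 100
    h = (x1 - x0) / steps
    dm, de = rho_dm0, rho_de0

    def deriv(dm, de):
        return (-3.0 * dm + xi * de,
                -3.0 * (1.0 + w) * de - xi * de)

    for _ in range(steps):
        k1 = deriv(dm, de)
        k2 = deriv(dm + 0.5 * h * k1[0], de + 0.5 * h * k1[1])
        k3 = deriv(dm + 0.5 * h * k2[0], de + 0.5 * h * k2[1])
        k4 = deriv(dm + h * k3[0], de + h * k3[1])
        dm += h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        de += h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return dm, de
```

With ξ = 0 the dark matter density recovers the uncoupled scaling ρ_dm ∝ (1+z)³; a non-zero coupling modifies the past dark matter density, which is what shifts the inferred H0 and σ_8.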
Aiola, S., Amhis, Y., Billoir, P., Jashal, B. K., Henry, L., Oyanguren, A., et al. (2021). Hybrid seeding: A standalone track reconstruction algorithm for the scintillating fibre tracker at LHCb. Comput. Phys. Commun., 260, 107713 (5pp).
Abstract: We describe the Hybrid seeding, a stand-alone pattern-recognition algorithm that finds charged-particle trajectories for the LHCb upgrade. A significant improvement in charged-particle reconstruction efficiency is achieved by exploiting knowledge of the LHCb magnetic field and the positions of energy deposits in the scintillating-fibre tracker detector. Moreover, we achieve a low fake rate and a small contribution to the overall timing budget of the LHCb real-time data processing.
AGATA Collaboration (Avigo, R., et al.), Domingo-Pardo, C., Gadea, A., & Gonzalez, V. (2020). Low-lying electric dipole gamma-continuum for the unstable Fe-62,64 nuclei: Strength evolution with neutron number. Phys. Lett. B, 811, 135951 (6pp).
Abstract: The gamma-ray emission from the nuclei Fe-62 and Fe-64 following Coulomb excitation at bombarding energies of 400–440 AMeV was measured, with special focus on E1 transitions in the 4–8 MeV energy region. The unstable neutron-rich nuclei Fe-62 and Fe-64 were produced at the FAIR-GSI laboratories and selected with the FRS spectrometer. The gamma decay was detected with AGATA. From the measured gamma-ray spectra the summed E1 strength is extracted and compared to microscopic quasi-particle phonon model calculations. The trend of the E1 strength with increasing neutron number is fairly well reproduced by calculations that assume a rather complex structure of the 1⁻ states (three-phonon states), inducing a strong fragmentation of the E1 nuclear response below the neutron binding energy.
Balibrea-Correa, J., Lerendegui-Marco, J., Babiano-Suarez, V., Caballero, L., Calvo, D., Ladarescu, I., et al. (2021). Machine Learning aided 3D-position reconstruction in large LaCl3 crystals. Nucl. Instrum. Methods Phys. Res. A, 1001, 165249 (17pp).
Abstract: We investigate five different models to reconstruct the 3D gamma-ray hit coordinates in five large LaCl3(Ce) monolithic crystals optically coupled to pixelated silicon photomultipliers. These scintillators have a base surface of 50 × 50 mm² and five different thicknesses, from 10 mm to 30 mm. Four of these models are analytical prescriptions and one is based on a Convolutional Neural Network. Average resolutions close to 1–2 mm FWHM are obtained in the transverse crystal plane for crystal thicknesses between 10 mm and 20 mm using analytical models. For thicker crystals, average resolutions of about 3–5 mm FWHM are obtained. Depth-of-interaction resolutions between 1 mm and 4 mm are achieved, depending on the distance of the interaction point from the photosensor surface. We propose a Machine Learning algorithm to correct for linearity distortions and pin-cushion effects. This correction preserves a large field of view of about 70%–80% of the crystal surface, regardless of crystal thickness. This work is aimed at optimizing the performance of the so-called Total Energy Detector with Compton imaging capability (i-TED) for time-of-flight neutron capture cross-section measurements.
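As an illustration of the kind of analytical prescription mentioned above, the transverse hit position can be estimated with a charge-weighted centroid ("Anger logic"). The 8 × 8 pixel layout and 6.25 mm pitch below are assumptions chosen to tile the 50 × 50 mm² base, not values taken from the paper:

```python
# Toy transverse-position estimate from a pixelated photosensor response
# via a charge-weighted centroid ("Anger logic"). The actual analytical
# prescriptions in the paper may differ; this is an illustrative sketch.

def centroid_position(charges, pitch=6.25):
    """charges: n x n list of lists of pixel amplitudes; pitch in mm.
    Returns (x, y) in mm relative to the crystal centre."""
    n = len(charges)
    total = sum(sum(row) for row in charges)
    x = sum(charges[i][j] * (j - (n - 1) / 2) * pitch
            for i in range(n) for j in range(n)) / total
    y = sum(charges[i][j] * (i - (n - 1) / 2) * pitch
            for i in range(n) for j in range(n)) / total
    return x, y
```

The centroid is exact for a symmetric light spot well inside the crystal; near the edges the truncated light distribution biases it inwards, which is the pin-cushion distortion the paper's Machine Learning correction addresses.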
ANTARES Collaboration (Albert, A., et al.), Alves, S., Carretero, V., Colomer, M., Gozzini, R., Hernandez-Rey, J. J., et al. (2021). Measurement of the atmospheric ν_e and ν_μ energy spectra with the ANTARES neutrino telescope. Phys. Lett. B, 816, 136228 (7pp).
Abstract: This letter presents a combined measurement of the energy spectra of atmospheric ν_e and ν_μ in the energy range between ~100 GeV and ~50 TeV with the ANTARES neutrino telescope. The analysis uses 3012 days of detector livetime in the period 2007–2017, and selects 1016 neutrinos interacting in (or close to) the instrumented volume of the detector, yielding shower-like events (mainly from ν_e + ν̄_e charged-current plus all neutrino neutral-current interactions) and starting-track events (mainly from ν_μ + ν̄_μ charged-current interactions). The contamination by atmospheric muons in the final sample is suppressed to the level of a few per mille by successive selection steps, including a Boosted Decision Tree classifier. The distribution of reconstructed events is unfolded into electron- and muon-neutrino fluxes. The derived energy spectra are compared with previous measurements which, above 100 GeV, are limited to experiments in polar ice and, for ν_μ, to Super-Kamiokande.
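Unfolding recasts the measured counts as a detector response matrix acting on the true flux. A minimal, unregularised matrix-inversion sketch is given below; the actual ANTARES analysis uses a more sophisticated unfolding, so this is purely illustrative, and the response matrix in the usage example is invented:

```python
# Minimal unfolding sketch: measured counts m = R f, where R encodes
# migration between true and reconstructed energy bins. Real analyses
# regularise the inversion; plain Gauss-Jordan solving is illustrative.

def unfold(response, measured):
    """Solve R f = m by Gauss-Jordan elimination with partial pivoting
    (adequate for the small, well-conditioned systems sketched here)."""
    n = len(measured)
    # Build the augmented matrix [R | m].
    a = [row[:] + [measured[i]] for i, row in enumerate(response)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry up.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]
```

For example, with a hypothetical 3-bin response that migrates 20% of events into neighbouring bins, `unfold` recovers the true bin contents from the smeared counts.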
Hall, O., Agramunt, J., Algora, A., Domingo-Pardo, C., Morales, A. I., Rubio, B., et al. (2021). β-delayed neutron emission of r-process nuclei at the N = 82 shell closure. Phys. Lett. B, 816, 136266 (7pp).
Abstract: Theoretical models of β-delayed neutron emission are used as crucial inputs in r-process calculations. Benchmarking the predictions of these models is a challenge due to a lack of currently available experimental data. In this work the β-delayed neutron emission probabilities of 33 nuclides in the important mass regions south and south-west of Sn-132 are presented, 16 for the first time. The measurements were performed at RIKEN using the Advanced Implantation Detector Array (AIDA) and the BRIKEN neutron detector array. The P_1n values presented constrain the predictions of theoretical models in the region, affecting the final abundance distribution of the second r-process peak at A ≈ 130.
Del Debbio, L., & Ramos, A. (2021). Lattice determinations of the strong coupling. Phys. Rep., 920, 1–71.
Abstract: Lattice QCD has reached a mature status. State-of-the-art lattice computations include u, d, s (and even c) sea-quark effects, together with an estimate of electromagnetic and isospin-breaking corrections for hadronic observables. This precise, first-principles description of the standard model at low energies allows the determination of multiple quantities that are essential inputs for phenomenology and not accessible to perturbation theory. One of the fundamental parameters determined from simulations of lattice QCD is the strong coupling constant, which plays a central role in the quest for precision at the LHC. Lattice calculations currently provide its best determinations, and will play a central role in future phenomenological studies. For this reason we believe it is timely to provide a pedagogical introduction to the lattice determinations of the strong coupling. Rather than analysing individual studies, the emphasis is on the methodologies and the systematic errors that arise in these determinations. We hope that these notes will help lattice practitioners, and QCD phenomenologists at large, by providing a self-contained introduction to the methodology and the possible sources of systematic error. The limiting factors in the determination of the strong coupling turn out to be different from those that limit other lattice precision observables. We hope to collect enough information here to allow the reader to appreciate the challenges that arise in improving further our knowledge of a quantity that is crucial for LHC phenomenology.
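The scale dependence that such determinations must connect can be illustrated with the one-loop running of the coupling. The fixed number of flavours and the absence of threshold matching below are simplifications for illustration, not how a lattice determination actually proceeds:

```python
import math

def alpha_s_1loop(mu, alpha_mz=0.1179, mz=91.1876, nf=5):
    """One-loop running of the strong coupling from the Z mass to the
    scale mu (GeV). Fixed nf and no threshold matching: a sketch of
    asymptotic freedom, not a precision determination."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(mu**2 / mz**2))
```

The coupling shrinks logarithmically at high scales (asymptotic freedom) and grows toward low scales, which is why non-perturbative lattice input is needed to anchor it in the infrared.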
Villanueva-Domingo, P., Mena, O., & Palomares-Ruiz, S. (2021). A Brief Review on Primordial Black Holes as Dark Matter. Front. Astron. Space Sci., 8, 681084 (10pp).
Abstract: Primordial black holes (PBHs) represent a natural candidate for one of the components of the dark matter (DM) in the Universe. In this review, we discuss the basics of their formation, abundance and signatures. Some of their characteristic signals are examined, such as the emission of particles due to Hawking evaporation and the accretion of surrounding matter, effects which could leave an imprint on the evolution of the Universe and the formation of structures. The most relevant probes capable of constraining their masses and population are discussed.
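The Hawking-evaporation signature mentioned above rests on the strong mass dependence of the black-hole lifetime, t ∝ M³. A leading-order estimate, neglecting greybody factors and the number of emitted particle species, can be sketched as:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s

def evaporation_time(mass_kg):
    """Leading-order Hawking evaporation lifetime in seconds,
    t = 5120 pi G^2 M^3 / (hbar c^4). Greybody factors and the
    emitted species are neglected, so this is illustrative only."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
```

PBHs around a few times 10^11 kg have lifetimes of order the age of the Universe (~4 × 10^17 s), which is why lighter PBHs are probed through their evaporation products rather than gravitational effects.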
Aguilar, A. C., De Soto, F., Ferreira, M. N., Papavassiliou, J., & Rodriguez-Quintero, J. (2021). Infrared facets of the three-gluon vertex. Phys. Lett. B, 818, 136352 (7pp).
Abstract: We present novel lattice results for the form factors of the quenched three-gluon vertex of QCD, in two special kinematic configurations that depend on a single momentum scale. We consider three form factors, two associated with a classical tensor structure and one without tree-level counterpart, exhibiting markedly different infrared behaviors. Specifically, while the former display the typical suppression driven by a negative logarithmic singularity at the origin, the latter saturates at a small negative constant. These exceptional features are analyzed within the Schwinger-Dyson framework, with the aid of special relations obtained from the Slavnov-Taylor identities of the theory. The emerging picture of the underlying dynamics is thoroughly corroborated by the lattice results, both qualitatively and quantitatively.
Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065 (12pp).
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their use in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, which restricts their applicability. To address these limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It is implemented in the C++ programming language and takes advantage of modern object-oriented techniques. In addition, PenRed can read and process DICOM images as well as construct and simulate image-based voxelized geometries, facilitating its use in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in simulation time without any loss of precision in the results.
Program summary
Program title: PenRed: Parallel Engine for Radiation Energy Deposition.
CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1
Licensing provisions: GNU Affero General Public License (AGPL).
Programming language: C++ (2011 standard).
Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications require particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure that allows users to customise the code to suit their needs without in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customizable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its physics models and tracking algorithms for charged particles. PenRed is coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a double approach: on the one hand, standard C++ threads for shared memory, improving memory access and usage; on the other hand, the MPI standard for distributed-memory infrastructures. Both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt our engine to their needs without changing the original modules. Furthermore, user extensions take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming.
Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ (GCC versions 4.8.5, 7.3.1, 8.3.1 and 9), clang version 3.4.2, and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler with C++11 support. In addition, if the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution.
Finally, to provide DICOM processing support, PenRed can optionally be compiled with the DICOM toolkit (DCMTK) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the DCMTK library.
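The shared-memory strategy described in the solution method (splitting histories across workers with independent random streams, then merging the per-worker tallies) can be sketched on a toy attenuation problem. Python stands in for PenRed's C++ threads purely for illustration, and the sequential "workers" below stand in for real threads; all numbers are invented:

```python
import random

def track_photons(n_histories, seed, mu=0.5):
    """Toy pencil-beam attenuation MC: sample exponential path lengths
    with attenuation coefficient mu (1/cm) and tally the absorption
    depth in 1 cm bins over [0, 10) cm. Each worker gets its own seeded
    generator, so streams are independent and results reproducible."""
    rng = random.Random(seed)
    tally = [0] * 10
    for _ in range(n_histories):
        depth = rng.expovariate(mu)
        if depth < 10:
            tally[int(depth)] += 1
    return tally

def run_parallel(n_total, n_workers=4):
    """Split histories across workers and merge the per-worker tallies,
    mimicking the shared-memory splitting PenRed performs with threads."""
    per = n_total // n_workers
    tallies = [track_photons(per, seed=1000 + w) for w in range(n_workers)]
    return [sum(t[i] for t in tallies) for i in range(10)]
```

Because each history is independent, the merged tally is statistically identical to a single long run, which is why this decomposition scales without changing the physics results.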