Alimena, J., et al. (Hirsch, M., Mamuzic, J., Mitsou, V. A., & Santra, A.) (2020). Searching for long-lived particles beyond the Standard Model at the Large Hadron Collider. J. Phys. G, 47(9), 090501–226pp.
Abstract: Particles beyond the Standard Model (SM) can generically have lifetimes that are long compared to SM particles at the weak scale. When produced at experiments such as the Large Hadron Collider (LHC) at CERN, these long-lived particles (LLPs) can decay far from the interaction vertex of the primary proton-proton collision. Such LLP signatures are distinct from those of promptly decaying particles that are targeted by the majority of searches for new physics at the LHC, often requiring customized techniques to identify, for example, significantly displaced decay vertices, tracks with atypical properties, and short track segments. Given their non-standard nature, a comprehensive overview of LLP signatures at the LHC is beneficial to ensure that possible avenues for the discovery of new physics are not overlooked. Here we report on the joint work of a community of theorists and experimentalists with the ATLAS, CMS, and LHCb experiments, as well as those working on dedicated experiments such as MoEDAL, milliQan, MATHUSLA, CODEX-b, and FASER, to survey the current state of LLP searches at the LHC, and to chart a path for the development of LLP searches into the future, both in the upcoming Run 3 and at the high-luminosity LHC. The work is organized around the current and future potential capabilities of LHC experiments to discover new LLPs generally, and takes a signature-based approach to surveying classes of models that give rise to LLPs rather than emphasizing any particular theory motivation. We develop a set of simplified models; assess the coverage of current searches; document known, often unexpected backgrounds; explore the capabilities of proposed detector upgrades; provide recommendations for the presentation of search results; and look towards the newest frontiers, namely high-multiplicity 'dark showers', highlighting opportunities for expanding the LHC reach for these signals.
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, providing information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which result in signal pileup. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high-pileup particle detectors. The performance of the new method has been studied using simulated data and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage, and performance of the proposed neural network algorithm on an FPGA are also presented.
Stadler, J., Boehm, C., & Mena, O. (2019). Comprehensive study of neutrino-dark matter mixed damping. J. Cosmol. Astropart. Phys., 08(8), 014–23pp.
Abstract: Mixed damping is a physical effect that occurs when a heavy species is coupled to a relativistic fluid which is itself free streaming. As a cross-case between collisional damping and free streaming, it is crucial in the context of neutrino-dark matter interactions. In this work, we establish the parameter space relevant for mixed damping, and we derive an analytical approximation for the evolution of dark matter perturbations in the mixed damping regime to illustrate the physical processes responsible for the suppression of cosmological perturbations. Although extended Boltzmann codes implementing neutrino-dark matter scattering terms automatically include mixed damping, this effect has not been systematically studied. In order to obtain reliable numerical results, it is mandatory to reconsider several aspects of neutrino-dark matter interactions, such as the initial conditions, the ultra-relativistic fluid approximation, and high-order multipole moments in the neutrino distribution. Such a precise treatment ensures the correct assessment of the relevance of mixed damping in neutrino-dark matter interactions.
PTOLEMY Collaboration (Betti, M. G., et al.), Gariazzo, S., & Pastor, S. (2019). Neutrino physics with the PTOLEMY project: active neutrino properties and the light sterile case. J. Cosmol. Astropart. Phys., 07(7), 047–31pp.
Abstract: The PTOLEMY project aims to develop a scalable design for a Cosmic Neutrino Background (CNB) detector, the first of its kind and the only one conceived that can look directly at the image of the Universe encoded in the neutrino background produced in the first second after the Big Bang. The scope of the work for the next three years is to complete the conceptual design of this detector and to validate with direct measurements that the non-neutrino backgrounds are below the expected cosmological signal. In this paper we discuss in detail the theoretical aspects of the experiment and its physics goals. In particular, we mainly address three issues. First we discuss the sensitivity of PTOLEMY to the standard neutrino mass scale. We then study the prospects of the experiment for detecting the CNB via neutrino capture on tritium as a function of the neutrino mass scale and the energy resolution of the apparatus. Finally, we consider an extra sterile neutrino with mass in the eV range, coupled to the active states via oscillations, which has been advocated in view of neutrino oscillation anomalies. This extra state would contribute to the tritium decay spectrum, and its properties, mass and mixing angle, could be studied by analyzing the features in the beta decay electron spectrum.
n_TOF Collaboration (Lederer-Woods, C., et al.), Domingo-Pardo, C., Tain, J. L., & Tarifeño-Saldivia, A. (2019). Measurement of Ge-73(n, gamma) cross sections and implications for stellar nucleosynthesis. Phys. Lett. B, 790, 458–465.
Abstract: Ge-73(n, gamma) cross sections were measured at the neutron time-of-flight facility n_TOF at CERN up to neutron energies of 300 keV, providing for the first time experimental data above 8 keV. Results indicate that the stellar cross section at kT = 30 keV is 1.5 to 1.7 times higher than most theoretical predictions. The new cross sections result in a substantial decrease of Ge-73 produced in stars, which would explain the low isotopic abundance of Ge-73 in the solar system.
Escudero, M., Hooper, D., Krnjaic, G., & Pierre, M. (2019). Cosmology with a very light L_mu – L_tau gauge boson. J. High Energy Phys., 03(3), 071–29pp.
Abstract: In this paper, we explore in detail the cosmological implications of an abelian L_mu – L_tau gauge extension of the Standard Model featuring a light and weakly coupled Z'. Such a scenario is motivated by the longstanding approximately 4 sigma discrepancy between the measured and predicted values of the muon's anomalous magnetic moment, (g – 2)_mu, as well as the tension between late- and early-time determinations of the Hubble constant. If sufficiently light, the Z' population will decay to neutrinos, increasing the overall energy density of radiation and altering the expansion history of the early universe. We identify two distinct regions of parameter space in this model in which the Hubble tension can be significantly relaxed. The first of these is the previously identified region in which a roughly 10–20 MeV Z' reaches equilibrium in the early universe and then decays, heating the neutrino population and delaying the process of neutrino decoupling. For a coupling of g_mu-tau ≃ (3–8) x 10^(-4), such a particle can also explain the observed (g – 2)_mu anomaly. In the second region, the Z' is very light (m_Z' ≈ 1 eV to MeV) and very weakly coupled (g_mu-tau ≈ 10^(-13) to 10^(-9)). In this case, the Z' population is produced through freeze-in, and decays to neutrinos after neutrino decoupling. Across large regions of parameter space, we predict a contribution to the energy density of radiation that can appreciably relax the reported Hubble tension, Delta N_eff ≃ 0.2.
Reig, M. (2019). On the high-scale instanton interference effect: axion models without domain wall problem. J. High Energy Phys., 08(8), 167–13pp.
Abstract: We show that a new chiral, confining interaction can be used to break Peccei-Quinn symmetry dynamically and simultaneously solve the domain wall problem. The resulting theory is an invisible QCD axion model without domain walls. No dangerous heavy relics appear.
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) -> q qbar', photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results, however, have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and which are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
Binosi, D., Chang, L., Ding, M. H., Gao, F., Papavassiliou, J., & Roberts, C. D. (2019). Distribution amplitudes of heavy-light mesons. Phys. Lett. B, 790, 257–262.
Abstract: A symmetry-preserving approach to the continuum bound-state problem in quantum field theory is used to calculate the masses, leptonic decay constants and light-front distribution amplitudes of empirically accessible heavy-light mesons. The inverse moment of the B-meson distribution amplitude is particularly important in treatments of exclusive B-decays using effective field theory and the factorisation formalism; its value is therefore computed: lambda_B(zeta = 2 GeV) = 0.54(3) GeV. As an example and in anticipation of precision measurements at new-generation B-factories, the branching fraction for the rare B -> gamma(E_gamma) l nu_l radiative decay is also calculated, retaining 1/m_B^2 and 1/E_gamma^2 corrections to the differential decay width, with the result Gamma(B -> gamma l nu_l)/Gamma_B = 0.47(15) for E_gamma > 1.5 GeV.
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Electron and photon energy calibration with the ATLAS detector using 2015-2016 LHC proton-proton collision data. J. Instrum., 14, P03017–60pp.
Abstract: This paper presents the electron and photon energy calibration obtained with the ATLAS detector using about 36 fb^(-1) of LHC proton-proton collision data recorded at √s = 13 TeV in 2015 and 2016. The different calibration steps applied to the data and the optimization of the reconstruction of electron and photon energies are discussed. The absolute energy scale is set using a large sample of Z boson decays into electron-positron pairs. The systematic uncertainty in the energy scale calibration varies between 0.03% and 0.2% in most of the detector acceptance for electrons with transverse momentum close to 45 GeV. For electrons with transverse momentum of 10 GeV the typical uncertainty is 0.3% to 0.8%, and it varies between 0.25% and 1% for photons with transverse momentum around 60 GeV. Validations of the energy calibration with J/psi -> e+e- decays and radiative Z boson decays are also presented.