Cabrera, M. E., Casas, J. A., Mitsou, V. A., Ruiz de Austri, R., & Terron, J. (2012). Histogram comparison tools for the search of new physics at LHC. Application to the CMSSM. J. High Energy Phys., 04(4), 133–27pp.
Abstract: We propose a rigorous and effective way to compare experimental and theoretical histograms, incorporating the different sources of statistical and systematic uncertainty. This is a useful tool for extracting as much information as possible from the comparison of experimental data with theoretical simulations, optimizing the chances of identifying New Physics at the LHC. We illustrate this by showing how a search in the CMSSM parameter space, using Bayesian techniques, can effectively find the correct values of the CMSSM parameters by comparing histograms of events with multijets + missing transverse momentum displayed in the effective-mass variable. The procedure is in fact very efficient at identifying the true supersymmetric model, in case supersymmetry is really there and accessible to the LHC.
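As a minimal illustration of the kind of bin-by-bin histogram comparison the abstract describes, the sketch below computes a Gaussian-approximation chi-square between an observed and a predicted histogram, folding a flat fractional systematic into the per-bin variance. The function name, the test numbers, and the Gaussian/flat-systematic approximations are illustrative assumptions, not the paper's actual statistic:

```python
import numpy as np

def histogram_chi2(data, expected, sys_frac=0.0):
    """Gaussian-approximation chi-square between an observed and a
    predicted histogram.  Per-bin variance combines the Poisson
    statistical term (~ expected) with an optional flat fractional
    systematic, added in quadrature."""
    data = np.asarray(data, dtype=float)
    expected = np.asarray(expected, dtype=float)
    var = expected + (sys_frac * expected) ** 2  # stat + syst variance
    mask = var > 0                               # skip empty bins
    return float(np.sum((data[mask] - expected[mask]) ** 2 / var[mask]))

# Illustrative multijet-style counts in four effective-mass bins.
obs = [12, 30, 25, 8]
pred = [10.0, 28.0, 27.0, 9.0]
chi2 = histogram_chi2(obs, pred, sys_frac=0.1)
```

Adding the systematic term inflates the denominator, so bins with large predicted yields are not over-weighted by their small relative statistical errors.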
|
Ferreira, M. N., & Papavassiliou, J. (2023). Gauge Sector Dynamics in QCD. Particles, 6(1), 312–363.
Abstract: The dynamics of the QCD gauge sector give rise to non-perturbative phenomena that are crucial for the internal consistency of the theory; most notably, they account for the generation of a gluon mass through the action of the Schwinger mechanism, the taming of the Landau pole, the ensuing stabilization of the gauge coupling, and the infrared suppression of the three-gluon vertex. In the present work, we review some key advances in the ongoing investigation of this sector within the framework of the continuum Schwinger function methods, supplemented by results obtained from lattice simulations.
|
ATLAS Collaboration (Aad, G. et al.), Aikot, A., Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., et al. (2024). Electron and photon energy calibration with the ATLAS detector using LHC Run 2 data. J. Instrum., 19(2), P02009–58pp.
Abstract: This paper presents the electron and photon energy calibration obtained with the ATLAS detector using 140 fb⁻¹ of LHC proton-proton collision data recorded at √s = 13 TeV between 2015 and 2018. Methods for the measurement of electron and photon energies are outlined, along with the current knowledge of the passive material in front of the ATLAS electromagnetic calorimeter. The energy calibration steps are discussed in detail, with emphasis on the improvements introduced in this paper. The absolute energy scale is set using a large sample of Z-boson decays into electron-positron pairs, and its residual dependence on the electron energy is used for the first time to further constrain systematic uncertainties. The achieved calibration uncertainties are typically 0.05% for electrons from resonant Z-boson decays, 0.4% at ET ≈ 10 GeV, and 0.3% at ET ≈ 1 TeV; for photons at ET ≈ 60 GeV, they are 0.2% on average. This is more than twice as precise as the previous calibration. The new energy calibration is validated using J/ψ → ee and radiative Z-boson decays.
|
Gammaldi, V., Zaldivar, B., Sanchez-Conde, M. A., & Coronado-Blazquez, J. (2023). A search for dark matter among Fermi-LAT unidentified sources with systematic features in machine learning. Mon. Not. Roy. Astron. Soc., 520(1), 1348–1361.
Abstract: Around one-third of the point-like sources in the Fermi-LAT catalogues remain as unidentified sources (unIDs) today. Indeed, these unIDs lack a clear, univocal association with a known astrophysical source. If dark matter (DM) is composed of weakly interacting massive particles (WIMPs), there is the exciting possibility that some of these unIDs may actually be DM sources, emitting gamma-rays from WIMPs annihilation. We propose a new approach to solve the standard, machine learning (ML) binary classification problem of disentangling prospective DM sources (simulated data) from astrophysical sources (observed data) among the unIDs of the 4FGL Fermi-LAT catalogue. We artificially build two systematic features for the DM data which are originally inherent to observed data: the detection significance and the uncertainty on the spectral curvature. We do it by sampling from the observed population of unIDs, assuming that the DM distributions would, if any, follow the latter. We consider different ML models: Logistic Regression, Neural Network (NN), Naive Bayes, and Gaussian Process, out of which the best, in terms of classification accuracy, is the NN, achieving around 93.3 ± 0.7 per cent performance. Other ML evaluation parameters, such as the True Negative and True Positive rates, are discussed in our work. Applying the NN to the unIDs sample, we find that the degeneracy between some astrophysical and DM sources can be partially solved within this methodology. Nonetheless, we conclude that there are no DM source candidates among the pool of 4FGL Fermi-LAT unIDs.
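The central trick in the abstract is attaching "systematic" features, which exist only for observed sources, to simulated DM sources by sampling from the observed unID population. A minimal sketch of that resampling step follows; the feature names, the toy distributions, and the sample sizes are all illustrative assumptions, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two features that only exist for
# observed sources: detection significance and the uncertainty on the
# spectral curvature (distributions are illustrative, not Fermi-LAT's).
observed_sigma = rng.gamma(shape=2.0, scale=3.0, size=500)
observed_beta_err = rng.normal(loc=0.1, scale=0.02, size=500)

def attach_systematics(n_sim, sigma_pool, beta_err_pool, rng):
    """Assign systematic features to simulated DM sources by sampling
    (with replacement) from the observed unID population, under the
    assumption that the DM distributions would follow the observed
    ones.  Sampling one index per source keeps the two features of a
    given observed source paired, preserving their correlation."""
    idx = rng.integers(0, len(sigma_pool), size=n_sim)
    return sigma_pool[idx], beta_err_pool[idx]

sim_sigma, sim_beta_err = attach_systematics(
    200, observed_sigma, observed_beta_err, rng)
```

The augmented simulated sample can then be stacked with the observed one and fed to any of the classifiers listed in the abstract.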
|
Garcia, A. R., Martinez, T., Cano-Ott, D., Castilla, J., Guerrero, C., Marin, J., et al. (2012). MONSTER: a time of flight spectrometer for beta-delayed neutron emission measurements. J. Instrum., 7, C05012–12pp.
Abstract: The knowledge of the beta-decay properties of nuclei contributes decisively to our understanding of nuclear phenomena: the beta-delayed neutron emission of neutron-rich nuclei plays an important role in the r-process of nucleosynthesis and constitutes a probe of the nuclear structure of very neutron-rich nuclei, providing information about the high-energy part of the full beta-strength (S-beta) function. In addition, beta-delayed neutrons are essential for the control and safety of nuclear reactors. In order to determine the neutron energy spectra and emission probabilities of neutron precursors, a MOdular Neutron time-of-flight SpectromeTER (MONSTER) has been proposed for the DESPEC experiment at the future FAIR facility. The design of MONSTER and the status of its construction are reported in this work.
|
Gomez-Cadenas, J. J., Benlloch-Rodriguez, J. M., Ferrario, P., Monrabal, F., Rodriguez, J., & Toledo, J. F. (2016). Investigation of the coincidence resolving time performance of a PET scanner based on liquid xenon: a Monte Carlo study. J. Instrum., 11, P09011–18pp.
Abstract: The measurement of the time of flight of the two 511 keV gammas recorded in coincidence in a PET scanner provides an effective way of reducing the random background and therefore increases the scanner sensitivity, provided that the coincidence resolving time (CRT) of the gammas is sufficiently good. The best commercial PET-TOF system today (based on LYSO crystals and digital SiPMs) is the VEREOS from Philips, boasting a CRT of 316 ps (FWHM). In this paper we present a Monte Carlo investigation of the CRT performance of a PET scanner exploiting the scintillating properties of liquid xenon. We find that an excellent CRT of 70 ps (depending on the PDE of the sensor) can be obtained if the scanner is instrumented with silicon photomultipliers (SiPMs) sensitive to the ultraviolet light emitted by xenon. Alternatively, a CRT of 160 ps can be obtained by instrumenting the scanner with (much cheaper) blue-sensitive SiPMs coated with a suitable wavelength shifter. These results show the excellent time-of-flight capabilities of a PET device based on liquid xenon.
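The quantitative link between CRT and scanner sensitivity comes from simple arithmetic: a Gaussian single-detector timing resolution widens by √2 in the coincidence time difference, and the CRT translates into a localization uncertainty of c·Δt/2 along the line of response. A small sketch of that arithmetic (the Gaussian-independence assumption and function names are illustrative, not the paper's simulation):

```python
import math

C_MM_PER_PS = 0.2998  # speed of light, mm per ps

def crt_fwhm_ps(sigma_det_ps):
    """CRT (FWHM) of the coincidence time difference between two
    identical detectors, each with independent Gaussian timing
    resolution sigma_det_ps.  The difference of two Gaussians has
    sigma_diff = sqrt(2) * sigma; FWHM = 2*sqrt(2*ln 2) * sigma_diff."""
    sigma_diff = math.sqrt(2.0) * sigma_det_ps
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_diff

def tof_position_uncertainty_mm(crt_ps):
    """FWHM localization uncertainty along the line of response:
    delta_x = c * delta_t / 2 (the factor 1/2 because a time shift
    moves one gamma path up and the other down)."""
    return C_MM_PER_PS * crt_ps / 2.0
```

With the 70 ps CRT quoted in the abstract, the annihilation point is localized to roughly 1 cm FWHM along the line of response, which is what drives the random-background reduction.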
|
ATLAS Tile Calorimeter System (Abdallah, J. et al.), Ferrer, A., Fiorini, L., Hernandez Jimenez, Y., Higon-Rodriguez, E., Ruiz-Martinez, A., et al. (2016). The Laser calibration of the ATLAS Tile Calorimeter during the LHC run 1. J. Instrum., 11, T10005–29pp.
Abstract: This article describes the Laser calibration system of the ATLAS hadronic Tile Calorimeter that was used during Run 1 of the LHC. First, the stability of the system and its associated readout electronics is studied. It is found to be stable, with variations smaller than 0.6 %. Then, the method developed to compute the calibration constants, which correct for variations of the gain of the calorimeter photomultipliers, is described. These constants were determined with a statistical uncertainty of 0.3 % and a systematic uncertainty of 0.2 % for the central part of the calorimeter and 0.5 % for the end-caps. Finally, the detection and correction of timing misconfigurations of the Tile Calorimeter using the Laser system are also presented.
|
DUNE Collaboration (Abud, A. A. et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034–35pp.
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
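The workload the abstract describes is embarrassingly parallel: each (pixel, time-sample) entry of the induced-current array is independent, which is exactly the structure that maps onto one GPU thread per element in a Numba CUDA kernel. The toy model below shows that structure in plain vectorized NumPy (a CPU stand-in, since no GPU is assumed here); the exponential pulse shape and all names are illustrative assumptions, not the DUNE microphysics:

```python
import numpy as np

def induced_currents(t, t0, q, tau):
    """Toy per-pixel induced-current model: pixel p sees a pulse
    q[p] * exp(-(t - t0[p]) / tau) for t >= t0[p], and zero before.
    Shapes: t -> (n_t,), t0 and q -> (n_pix,); output (n_pix, n_t).
    Every output element is computed independently of the others,
    so the same loop body could be dispatched as one CUDA thread per
    (pixel, sample) pair."""
    dt = t[np.newaxis, :] - t0[:, np.newaxis]   # broadcast to (n_pix, n_t)
    return np.where(dt >= 0.0, q[:, np.newaxis] * np.exp(-dt / tau), 0.0)

t = np.linspace(0.0, 10.0, 101)      # time samples (arbitrary units)
t0 = np.array([1.0, 2.5, 4.0])       # per-pixel arrival times
q = np.array([1.0, 0.5, 2.0])        # per-pixel deposited charge
waveforms = induced_currents(t, t0, q, tau=2.0)
```

In the Numba workflow the same arithmetic would sit inside a `@cuda.jit`-decorated kernel indexed by thread position, which is how the quoted four-orders-of-magnitude speed-up over the CPU loop is obtained.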
|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z -> ττ processes in ATLAS with τ-embedded Z -> μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z -> ττ decays. In Z -> μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z -> ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons as well as the detector response to the τ decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z -> ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H -> ττ analysis of the full dataset recorded during 2011 and 2012.
|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2016). Performance of b-jet identification in the ATLAS experiment. J. Instrum., 11, P04008–126pp.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multivariate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets, as well as the b-tagging algorithm used in the online trigger, are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light-flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined, taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
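The scale factors the abstract defines are a simple ratio of tagging efficiencies in data and simulation. A minimal sketch of that computation, with binomial statistical uncertainties and uncorrelated error propagation (the function names, counts, and the binomial/uncorrelated approximations are illustrative assumptions, not the paper's calibration machinery):

```python
import math

def efficiency(n_pass, n_total):
    """Tagging efficiency with a simple binomial uncertainty."""
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

def scale_factor(eff_data, err_data, eff_sim, err_sim):
    """Data/simulation scale factor SF = eff_data / eff_sim, with
    relative uncertainties added in quadrature (assumes the two
    measurements are uncorrelated)."""
    sf = eff_data / eff_sim
    err = sf * math.sqrt((err_data / eff_data) ** 2
                         + (err_sim / eff_sim) ** 2)
    return sf, err

# Illustrative counts: 700/1000 tagged b jets in data, 750/1000 in MC.
eff_d, e_d = efficiency(700, 1000)
eff_s, e_s = efficiency(750, 1000)
sf, sf_err = scale_factor(eff_d, e_d, eff_s, e_s)
```

Downstream analyses would then reweight simulated tagged jets by `sf` per kinematic bin; the paper's combination additionally correlates the systematic sources across methods, which this sketch omits.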
|