|
ATLAS Collaboration (Aaboud, M. et al.), Alvarez Piqueras, D., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., et al. (2018). Comparison between simulated and observed LHC beam backgrounds in the ATLAS experiment at E_beam = 4 TeV. J. Instrum., 13, P12006–41pp.
Abstract: Results of dedicated Monte Carlo simulations of beam-induced background (BIB) in the ATLAS experiment at the Large Hadron Collider (LHC) are presented and compared with data recorded in 2012. During normal physics operation this background arises mainly from scattering of the 4 TeV protons on residual gas in the beam pipe. Methods of reconstructing the BIB signals in the ATLAS detector, developed and implemented in the simulation chain based on the FLUKA Monte Carlo simulation package, are described. The interaction rates are determined from the residual gas pressure distribution in the LHC ring in order to set an absolute scale on the predicted rates of BIB so that they can be compared quantitatively with data. Through these comparisons the origins of the BIB leading to different observables in the ATLAS detectors are analysed. The level of agreement between simulation results and BIB measurements by ATLAS in 2012 demonstrates that a good understanding of the origin of BIB has been reached.
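As an illustration of the rate-normalisation step described in the abstract, the short sketch below estimates a beam-gas interaction rate from a residual-gas pressure profile via the ideal gas law. It is not the ATLAS/FLUKA machinery, and every number in it (pressure, cross-section, beam intensity) is a placeholder assumption.

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23  # J/K

def beam_gas_rate_per_metre(pressure_pa, temperature_k, sigma_inel_m2,
                            n_protons, f_rev_hz):
    """Inelastic beam-gas interaction rate per metre of beam line.

    n_gas = p / (kT)            molecules per m^3 (ideal gas law)
    flux  = N_protons * f_rev   protons passing a given point per second
    dR/ds = flux * n_gas * sigma_inel
    """
    n_gas = pressure_pa / (K_BOLTZMANN * temperature_k)
    proton_flux = n_protons * f_rev_hz
    return proton_flux * n_gas * sigma_inel_m2

# Placeholder: flat 1e-9 mbar pressure profile over 100 m of beam line
s = np.linspace(0.0, 100.0, 101)              # position along the ring [m]
pressure_pa = 1e-9 * 100.0 * np.ones_like(s)  # 1 mbar = 100 Pa

rate_density = beam_gas_rate_per_metre(
    pressure_pa=pressure_pa,
    temperature_k=300.0,
    sigma_inel_m2=3e-29,   # ~30 mb proton-gas cross-section (placeholder)
    n_protons=2e14,        # total circulating protons (placeholder)
    f_rev_hz=11245.0,      # LHC revolution frequency
)
total_rate = np.sum(rate_density) * (s[1] - s[0])  # interactions per second
print(f"Beam-gas interaction rate over 100 m: {total_rate:.2e} Hz")
```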
|
|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z → ττ processes in ATLAS with τ-embedded Z → μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z → ττ decays. In Z → μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by tau leptons from simulated Z → ττ decays at the level of reconstructed tracks and calorimeter cells. The tau lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and tau leptons as well as the detector response to the tau decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called tau-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z → ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H → ττ analysis of the full dataset recorded during 2011 and 2012.
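The kinematic core of the embedding idea, replacing each muon four-vector by a tau four-vector with the same momentum, can be sketched as below. This is a toy illustration only: the event content is invented, and the real method goes on to simulate the tau decays and re-reconstruct their decay products at the level of tracks and calorimeter cells.

```python
import math

M_MU = 0.10566   # muon mass [GeV]
M_TAU = 1.77686  # tau mass [GeV]

def replace_muon_with_tau(px, py, pz):
    """Keep the muon momentum vector, recompute the energy with the tau mass.

    (The full method would pass the resulting tau pair to a Z -> tautau
    simulation and re-reconstruct the decay products.)"""
    p2 = px * px + py * py + pz * pz
    return (px, py, pz, math.sqrt(p2 + M_TAU * M_TAU))

# Invented momenta of a hypothetical Z -> mumu candidate [GeV]
muons = [(30.2, -12.5, 45.1), (-28.7, 10.9, -20.3)]
taus = [replace_muon_with_tau(*mu) for mu in muons]

for (px, py, pz), tau in zip(muons, taus):
    e_mu = math.sqrt(px * px + py * py + pz * pz + M_MU * M_MU)
    print(f"muon E = {e_mu:6.2f} GeV  ->  embedded tau E = {tau[3]:6.2f} GeV")
```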
|
|
DUNE Collaboration (Abud, A. A. et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034–35pp.
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
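A minimal sketch of the parallelisation pattern the abstract describes (a Python function compiled to a CUDA kernel with Numba, one thread per energy deposit) is given below. The drift model and all parameter values are simplified placeholders, not the DUNE simulator, and running it requires a CUDA-capable GPU.

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def drift_and_collect(x, y, z, n_electrons, drift_speed, lifetime,
                      pixel_pitch, n_pix_x, pixel_charge):
    """One thread per energy deposit: attenuate the charge over the drift
    and add it to the nearest pixel of a flattened pixel-charge array."""
    i = cuda.grid(1)
    if i >= x.size:
        return
    t_drift = z[i] / drift_speed                   # drift time to the anode
    surviving = n_electrons[i] * math.exp(-t_drift / lifetime)
    ix = int(x[i] / pixel_pitch)                   # nearest-pixel binning
    iy = int(y[i] / pixel_pitch)
    cuda.atomic.add(pixel_charge, iy * n_pix_x + ix, surviving)

# Invented energy deposits in a 0.3 m x 0.3 m x 0.5 m volume
rng = np.random.default_rng(0)
n_dep = 100_000
x = rng.uniform(0.0, 0.3, n_dep).astype(np.float32)
y = rng.uniform(0.0, 0.3, n_dep).astype(np.float32)
z = rng.uniform(0.0, 0.5, n_dep).astype(np.float32)
n_e = rng.uniform(1e3, 1e4, n_dep).astype(np.float32)

n_pix_x = 100                                      # 3 mm pixel pitch
pixel_charge = np.zeros(n_pix_x * n_pix_x, dtype=np.float32)

threads = 256
blocks = (n_dep + threads - 1) // threads
drift_and_collect[blocks, threads](x, y, z, n_e,
                                   np.float32(1.6e3),  # drift speed [m/s]
                                   np.float32(3e-3),   # electron lifetime [s]
                                   np.float32(3e-3),   # pixel pitch [m]
                                   n_pix_x, pixel_charge)
print("total collected charge:", pixel_charge.sum())
```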
|
|
LHCb Collaboration (Aaij, R. et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Centrality determination in heavy-ion collisions with the LHCb detector. J. Instrum., 17(5), P05009–31pp.
Abstract: The centrality of heavy-ion collisions is directly related to the medium created in these interactions. A procedure to determine the centrality of collisions with the LHCb detector is implemented for lead-lead collisions at √s_NN = 5 TeV and lead-neon fixed-target collisions at √s_NN = 69 GeV. The energy deposits in the electromagnetic calorimeter are used to define the centrality classes. The correspondence between the number of participants and the centrality for the lead-lead collisions is in good agreement with that found in other experiments, and the centrality measurements for the lead-neon collisions presented here are the first performed in fixed-target collisions at the LHC.
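The generic idea of calorimeter-based centrality classes can be illustrated with a short sketch: class boundaries are taken as quantiles of the per-event energy distribution, with larger energy deposits corresponding to more central collisions. The toy spectrum and binning below are invented for the example and are not the LHCb procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy per-event ECAL energy for a minimum-bias sample [arbitrary units]
ecal_energy = rng.gamma(shape=2.0, scale=500.0, size=200_000)

# 10%-wide centrality classes from quantiles of the energy distribution;
# the highest-energy decile is the most central class (0-10%)
edges = np.quantile(ecal_energy, np.linspace(0.0, 1.0, 11))

def centrality_class(energy):
    """Return the centrality label for one event's calorimeter energy."""
    idx = np.searchsorted(edges[1:-1], energy, side="right")
    lo = 100 - 10 * (idx + 1)   # larger energy -> smaller (more central) %
    return f"{lo}-{lo + 10}%"

for e in (50.0, 800.0, 4000.0):
    print(f"E_ECAL = {e:7.1f} -> centrality class {centrality_class(e)}")
```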
|
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The amplitude of the analog signal, which carries the information about the energy deposited in the detector, is then obtained using digital filters. Classical digital filters perform well in ideal conditions, with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous interactions, which lead to signal pile-up. The performance of classical digital filters deteriorates in these conditions because the signal pulse shape is distorted. In addition, these experiments produce a high collision rate, which requires high-throughput data acquisition systems. To cope with these demanding requirements, new read-out electronics systems are based on high-performance FPGAs, which allow more advanced real-time signal reconstruction algorithms to be used. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pile-up conditions. The performance of the new method is studied using simulated data, and the results are compared with a classical FIR filter method; in particular, the signals and FIR filter used in the ATLAS Tile Calorimeter serve as the benchmark. The FPGA implementation, resource usage and performance of the proposed neural-network algorithm are also presented.
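The classical baseline mentioned in the abstract, amplitude reconstruction as an FIR filter (a weighted sum of digitized samples), and its bias under out-of-time pile-up can be illustrated with the toy sketch below. The pulse shape, weights and amplitudes are invented for the example; this is not the Tile Calorimeter reconstruction.

```python
import numpy as np

def pulse(t, t0=0.0, tau=2.0):
    """Toy unipolar pulse shape peaking at t0 + tau with unit amplitude."""
    x = (t - t0) / tau
    return np.where(x > 0.0, x * np.exp(1.0 - x), 0.0)

samples_t = np.arange(7.0)          # 7 ADC samples, one per bunch crossing
g = pulse(samples_t, t0=-1.0)       # nominal in-time pulse at the ADC samples
weights = g / np.dot(g, g)          # FIR weights with unit gain on g

true_a = 100.0
in_time = true_a * pulse(samples_t, t0=-1.0)
pile_up = 40.0 * pulse(samples_t, t0=1.5)   # out-of-time pile-up pulse

for label, trace in (("no pile-up", in_time),
                     ("with pile-up", in_time + pile_up)):
    a_hat = float(np.dot(weights, trace))   # FIR estimate: weighted sample sum
    print(f"{label:13s}: reconstructed amplitude = {a_hat:6.1f}"
          f" (true {true_a:.1f})")
```

The paper's proposal replaces this weighted sum with a neural network evaluated on the FPGA; that part is not sketched here.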
|