Aiola, S., Amhis, Y., Billoir, P., Jashal, B. K., Henry, L., Oyanguren, A., et al. (2021). Hybrid seeding: A standalone track reconstruction algorithm for scintillating fibre tracker at LHCb. Comput. Phys. Commun., 260, 107713–5pp.
Abstract: We describe the Hybrid seeding, a stand-alone pattern-recognition algorithm aimed at finding charged-particle trajectories for the LHCb upgrade. A significant improvement in the charged-particle reconstruction efficiency is accomplished by exploiting knowledge of the LHCb magnetic field and the positions of energy deposits in the scintillating fibre tracker. Moreover, we achieve a low fake rate and a small contribution to the overall timing budget of the LHCb real-time data processing.
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which give rise to signal pile-up. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, these experiments produce a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high-pile-up particle detectors. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
|
Bouhova-Thacker, E., Kostyukhin, V., Koffas, T., Liebig, W., Limper, M., Piacquadio, G. N., et al. (2010). Expected Performance of Vertex Reconstruction in the ATLAS Experiment at the LHC. IEEE Trans. Nucl. Sci., 57(2), 760–767.
Abstract: In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10³⁴ cm⁻² s⁻¹) efficient reconstruction of vertices is crucial for many physics analyses. Described in this paper is the expected performance of the vertex reconstruction used in the ATLAS experiment. The algorithms for the reconstruction of primary and secondary vertices as well as for finding photon conversions and vertex reconstruction in jets are described. The implementation of vertex algorithms, which follows a very modular design based on object-oriented C++, is presented. A user-friendly concept allows event reconstruction and physics analyses to compare and optimize their choice among different vertex reconstruction strategies. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples and results are presented.
|
ATLAS Collaboration (Aad, G. et al.), Aikot, A., Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., et al. (2024). Electron and photon energy calibration with the ATLAS detector using LHC Run 2 data. J. Instrum., 19(2), P02009–58pp.
Abstract: This paper presents the electron and photon energy calibration obtained with the ATLAS detector using 140 fb⁻¹ of LHC proton-proton collision data recorded at √s = 13 TeV between 2015 and 2018. Methods for the measurement of electron and photon energies are outlined, along with the current knowledge of the passive material in front of the ATLAS electromagnetic calorimeter. The energy calibration steps are discussed in detail, with emphasis on the improvements introduced in this paper. The absolute energy scale is set using a large sample of Z-boson decays into electron-positron pairs, and its residual dependence on the electron energy is used for the first time to further constrain systematic uncertainties. The achieved calibration uncertainties are typically 0.05% for electrons from resonant Z-boson decays, 0.4% at ET ≈ 10 GeV, and 0.3% at ET ≈ 1 TeV; for photons at ET ≈ 60 GeV, they are 0.2% on average. This is more than twice as precise as the previous calibration. The new energy calibration is validated using J/ψ → ee and radiative Z-boson decays.
|
ATLAS Collaboration (Aaboud, M. et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Electron and photon energy calibration with the ATLAS detector using 2015-2016 LHC proton-proton collision data. J. Instrum., 14, P03017–60pp.
Abstract: This paper presents the electron and photon energy calibration obtained with the ATLAS detector using about 36 fb⁻¹ of LHC proton-proton collision data recorded at √s = 13 TeV in 2015 and 2016. The different calibration steps applied to the data and the optimization of the reconstruction of electron and photon energies are discussed. The absolute energy scale is set using a large sample of Z boson decays into electron-positron pairs. The systematic uncertainty in the energy scale calibration varies between 0.03% and 0.2% in most of the detector acceptance for electrons with transverse momentum close to 45 GeV. For electrons with transverse momentum of 10 GeV the typical uncertainty is 0.3% to 0.8%, and it varies between 0.25% and 1% for photons with transverse momentum around 60 GeV. Validations of the energy calibration with J/ψ → e⁺e⁻ decays and radiative Z boson decays are also presented.
|
ATLAS Collaboration (Aad, G. et al.), Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., Ferrer, A., et al. (2013). Characterisation and mitigation of beam-induced backgrounds observed in the ATLAS detector during the 2011 proton-proton run. J. Instrum., 8, P07004–72pp.
Abstract: This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events in data. Trigger-rate-based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches are used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which are then exploited to develop dedicated background-tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally, an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event-cleaning techniques for some new physics searches.
|
LHCb Collaboration (Aaij, R. et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Centrality determination in heavy-ion collisions with the LHCb detector. J. Instrum., 17(5), P05009–31pp.
Abstract: The centrality of heavy-ion collisions is directly related to the medium created in these interactions. A procedure to determine the centrality of collisions with the LHCb detector is implemented for lead-lead collisions at √s_NN = 5 TeV and lead-neon fixed-target collisions at √s_NN = 69 GeV. The energy deposits in the electromagnetic calorimeter are used to define the centrality classes. The correspondence between the number of participants and the centrality for the lead-lead collisions is in good agreement with that found in other experiments, and the centrality measurements for the lead-neon collisions presented here are the first performed in fixed-target collisions at the LHC.
|
NEXT Collaboration (Renner, J. et al.), Benlloch-Rodriguez, J., Botas, A., Ferrario, P., Gomez-Cadenas, J. J., Alvarez, V., et al. (2017). Background rejection in NEXT using deep neural networks. J. Instrum., 12, T01004–21pp.
Abstract: We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study performed better than previous methods developed based on the use of the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
|
LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of B_s⁰ mesons at LHCb. J. Instrum., 11, P05010–23pp.
Abstract: A new algorithm for the determination of the initial flavour of B_s⁰ mesons is presented. The algorithm is based on two neural networks and exploits the b-hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the B_s⁰ meson. The second network combines the kaon charges to assign the B_s⁰ flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb⁻¹ collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the B_s⁰–B̄_s⁰ flavour oscillations in B_s⁰ → D_s⁻π⁺ decays, and by analysing flavour-specific B*_s2(5840)⁰ → B⁺K⁻ decays. The tagging power measured in B_s⁰ → D_s⁻π⁺ decays is found to be (1.80 ± 0.19 (stat) ± 0.18 (syst))%, an improvement of about 50% compared to a similar algorithm previously used in the LHCb experiment.
|
ATLAS Collaboration (Abat, E. et al.), Bernabeu Verdu, J., Castillo Gimenez, V., Costa, M. J., Escobar, C., Ferrer, A., et al. (2011). A layer correlation technique for pion energy calibration at the 2004 ATLAS Combined Beam Test. J. Instrum., 6, P06001–35pp.
Abstract: A new method for calibrating the hadron response of a segmented calorimeter is developed and successfully applied to beam test data. It is based on a principal component analysis of energy deposits in the calorimeter layers, exploiting longitudinal shower development information to improve the measured energy resolution. Corrections for invisible hadronic energy and energy lost in dead material in front of and between the calorimeters of the ATLAS experiment were calculated with simulated Geant4 Monte Carlo events and used to reconstruct the energy of pions impinging on the calorimeters during the 2004 Barrel Combined Beam Test at the CERN H8 area. For pion beams with energies between 20 GeV and 180 GeV, the particle energy is reconstructed within 3% and the energy resolution is improved by between 11% and 25% compared to the resolution at the electromagnetic scale.
|