LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of B_s^0 mesons at LHCb. J. Instrum., 11, P05010 (23 pp).
Abstract: A new algorithm for the determination of the initial flavour of B_s^0 mesons is presented. The algorithm is based on two neural networks and exploits the b-hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the B_s^0 meson. The second network combines the kaon charges to assign the B_s^0 flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb^-1 collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the B_s^0-anti-B_s^0 flavour oscillations in B_s^0 -> D_s^- pi^+ decays, and by analysing flavour-specific B_{s2}^*(5840)^0 -> B^+ K^- decays. The tagging power measured in B_s^0 -> D_s^- pi^+ decays is found to be (1.80 +/- 0.19 (stat) +/- 0.18 (syst))%, which is an improvement of about 50% compared to a similar algorithm previously used in the LHCb experiment.
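The charge-combination step can be illustrated with a toy sketch. This is not the paper's two-network algorithm: the weights and the mistag formula below are invented stand-ins for the networks' outputs, kept only to show how kaon charges turn into a tag decision, a mistag estimate eta, and the tagging power eps_tag*(1 - 2*eta)^2 quoted in the abstract.

```python
# Toy same-side-kaon tagger (NOT the LHCb neural-network algorithm):
# combine the charges of selected kaons into a flavour tag plus a rough
# mistag probability, then compute the tagging power.

def combine_kaon_charges(kaons):
    """kaons: list of (charge, weight) pairs; the weight in (0, 1] is an
    invented stand-in for the first network's selection quality."""
    s = sum(q * w for q, w in kaons)        # weighted charge sum
    total = sum(w for _, w in kaons)
    decision = 1 if s > 0 else -1 if s < 0 else 0
    # crude mistag estimate: eta -> 0.5 when the kaon charges cancel
    eta = 0.5 * (1.0 - abs(s) / total) if total > 0 else 0.5
    return decision, eta

def tagging_power(eff_tag, eta):
    """Effective statistical power of a tagger: eps_tag * (1 - 2*eta)^2."""
    return eff_tag * (1.0 - 2.0 * eta) ** 2
```

The quadratic penalty in the tagging power is why halving the mistag rate matters far more than tagging a few extra events.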
NEXT Collaboration (Renner, J. et al.), Benlloch-Rodriguez, J., Botas, A., Ferrario, P., Gomez-Cadenas, J. J., Alvarez, V., et al. (2017). Background rejection in NEXT using deep neural networks. J. Instrum., 12, T01004 (21 pp).
Abstract: We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study outperformed previous methods based on the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
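The trade-off the abstract describes — rejection factor versus efficiency loss — is fixed by where one cuts on the classifier score. A minimal sketch, with made-up scores standing in for the NEXT network outputs:

```python
# Illustrative only: turn classifier scores into a background-rejection
# factor at a chosen signal efficiency (scores here are assumptions, not
# the NEXT networks' outputs).

def rejection_at_efficiency(sig_scores, bkg_scores, target_eff):
    """Pick the score cut that keeps target_eff of signal events; return
    the cut, the achieved signal efficiency, and the background rejection
    factor (1 / background acceptance)."""
    ranked = sorted(sig_scores, reverse=True)
    cut = ranked[int(target_eff * len(sig_scores)) - 1]
    eff = sum(s >= cut for s in sig_scores) / len(sig_scores)
    bkg_pass = sum(b >= cut for b in bkg_scores)
    return cut, eff, len(bkg_scores) / bkg_pass if bkg_pass else float("inf")
```

A "factor of 1.2 to 1.6" improvement then means the rejection factor at a fixed signal efficiency grew by that ratio relative to the earlier topological selection.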
ATLAS Collaboration (Aaboud, M. et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Electron and photon energy calibration with the ATLAS detector using 2015-2016 LHC proton-proton collision data. J. Instrum., 14, P03017 (60 pp).
Abstract: This paper presents the electron and photon energy calibration obtained with the ATLAS detector using about 36 fb^-1 of LHC proton-proton collision data recorded at sqrt(s) = 13 TeV in 2015 and 2016. The different calibration steps applied to the data and the optimization of the reconstruction of electron and photon energies are discussed. The absolute energy scale is set using a large sample of Z boson decays into electron-positron pairs. The systematic uncertainty in the energy scale calibration varies between 0.03% and 0.2% in most of the detector acceptance for electrons with transverse momentum close to 45 GeV. For electrons with transverse momentum of 10 GeV the typical uncertainty is 0.3% to 0.8%, and it varies between 0.25% and 1% for photons with transverse momentum around 60 GeV. Validations of the energy calibration with J/psi -> e^+e^- decays and radiative Z boson decays are also presented.
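The in-situ scale-setting idea can be sketched as follows. This is a deliberate simplification of the paper's procedure: it uses a median as the peak estimator instead of the fits ATLAS performs, and treats the scale factor alpha as common to both electrons rather than binned in detector region.

```python
# Illustrative energy-scale extraction from the Z -> ee peak (assumed
# numbers; a median stands in for the paper's fitted peak position).
from statistics import median

def energy_scale(masses_data, masses_mc):
    """alpha such that peak_data = peak_mc * (1 + alpha)."""
    return median(masses_data) / median(masses_mc) - 1.0

def correct_energy(e_measured, alpha):
    """Apply the calibration: divide out the measured scale offset."""
    return e_measured / (1.0 + alpha)
```

In the full calibration the dielectron mass constrains the average of the two electrons' scale factors, so alpha is extracted per pseudorapidity bin from many overlapping Z candidates rather than from a single ratio.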
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002 (13 pp).
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The amplitude of the analog signal, which carries the information about the energy deposited in the detector, is then obtained using digital filters. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous interactions, leading to signal pile-up. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape is distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. To cope with these demanding requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pile-up. The performance of the new method is studied using simulated data and compared with a classical FIR filter method; in particular, the signals and FIR filter used in the ATLAS Tile Calorimeter serve as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm on an FPGA are also presented.
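The classical baseline the paper compares against can be sketched with a matched-filter FIR: with a known normalized pulse shape g, weights a_i = g_i / sum(g^2) recover the amplitude of a clean pulse s_i = A*g_i exactly. The pulse shape and amplitude below are invented for illustration, not the Tile Calorimeter values; pile-up adds overlapping pulses to s, which is precisely the regime where this linear estimator degrades and a learned reconstruction can help.

```python
# Matched-filter FIR amplitude estimator (illustrative pulse shape, not
# the ATLAS TileCal constants). Exact for a noiseless, undistorted pulse.

def fir_weights(pulse_shape):
    """Weights a_i = g_i / sum(g^2) for a normalized pulse shape g."""
    norm = sum(g * g for g in pulse_shape)
    return [g / norm for g in pulse_shape]

def amplitude(samples, weights):
    """Linear amplitude estimate: dot product of weights and samples."""
    return sum(a * s for a, s in zip(weights, samples))
```

Because the estimator is a single dot product, it maps to a handful of DSP multiply-accumulates on an FPGA; a small neural network costs more resources but does not rely on the fixed-shape assumption.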
LHCb Collaboration (Aaij, R. et al.), Garcia Martin, L. M., Henry, L., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., et al. (2019). Measurement of the electron reconstruction efficiency at LHCb. J. Instrum., 14, P11023 (20 pp).
Abstract: The single-electron track-reconstruction efficiency is calibrated using a sample corresponding to 1.3 fb^-1 of pp collision data recorded with the LHCb detector in 2017. This measurement exploits B^+ -> J/psi(e^+e^-) K^+ decays, where one of the electrons is fully reconstructed and paired with the kaon, while the other electron is reconstructed using only the information of the vertex detector. Despite this partial reconstruction, kinematic and geometric constraints allow the B meson mass to be reconstructed and the signal to be well separated from backgrounds. This in turn allows the electron reconstruction efficiency to be measured by matching the partial track segment found in the vertex detector to tracks found by LHCb's regular reconstruction algorithms. The agreement between data and simulation is evaluated, and corrections are derived for simulated electrons in bins of kinematics. These correction factors allow LHCb to measure branching fractions involving single electrons with a systematic uncertainty below 1%.
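The counting step behind such a tag-and-probe style measurement reduces to an efficiency per kinematic bin and a data/simulation ratio. A minimal sketch with made-up counts (the binomial error formula is standard, not taken from the paper):

```python
# Illustrative efficiency and data/MC correction factor for one kinematic
# bin; the counts are assumptions, not LHCb results.
from math import sqrt

def efficiency(n_matched, n_total):
    """Fraction of probe electrons matched to a fully reconstructed track,
    with a simple binomial uncertainty."""
    eff = n_matched / n_total
    err = sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

def correction_factor(eff_data, eff_sim):
    """Per-bin ratio applied to simulated electrons."""
    return eff_data / eff_sim
```

Applying such per-bin ratios to simulation is what lets the residual data/MC disagreement, rather than the absolute efficiency, drive the quoted sub-1% systematic uncertainty.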