Aiola, S., Amhis, Y., Billoir, P., Jashal, B. K., Henry, L., Oyanguren, A., et al. (2021). Hybrid seeding: A standalone track reconstruction algorithm for scintillating fibre tracker at LHCb. Comput. Phys. Commun., 260, 107713–5pp.
Abstract: We describe the Hybrid seeding, a stand-alone pattern recognition algorithm aiming at finding charged particle trajectories for the LHCb upgrade. A significant improvement to the charged particle reconstruction efficiency is accomplished by exploiting the knowledge of the LHCb magnetic field and the position of energy deposits in the scintillating fibre tracker detector. Moreover, we achieve a low fake rate and a small contribution to the overall timing budget of the LHCb real-time data processing.
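As an illustrative sketch only (not the LHCb implementation): a track seed in a roughly uniform fringe field is commonly modelled as a parabola x(z) = a + b·z + c·z², and with one hit per seeding station the coefficients follow directly from Newton's divided differences. The hit values below are hypothetical.

```python
# Sketch: fit x(z) = a + b*z + c*z^2 through three (z, x) hits,
# the standard parabolic track model for a seed in a fringe field.
def parabola_through(hits):
    """hits: three (z, x) pairs with distinct z coordinates (e.g. in mm)."""
    (z1, x1), (z2, x2), (z3, x3) = hits
    f12 = (x2 - x1) / (z2 - z1)        # first divided difference (slope)
    f23 = (x3 - x2) / (z3 - z2)
    c = (f23 - f12) / (z3 - z1)        # curvature term, ~ q/p times field
    b = f12 - c * (z1 + z2)            # slope at z = 0
    a = x1 - (b + c * z1) * z1         # intercept at z = 0
    return a, b, c

# Hypothetical hits generated from x(z) = 1.0 + 0.1*z + 0.002*z^2:
a, b, c = parabola_through([(0.0, 1.0), (100.0, 31.0), (200.0, 101.0)])
print(a, b, c)  # recovers 1.0, 0.1, 0.002
```

In a real seeding pass such candidate parabolas would then be ranked by a hit-residual quality criterion before being promoted to tracks.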
|
Renner, J., Cervera-Villanueva, A., Hernando, J. A., Izmaylov, A., Monrabal, F., Muñoz, J., et al. (2015). Improved background rejection in neutrinoless double beta decay experiments using a magnetic field in a high pressure xenon TPC. J. Instrum., 10, P12020–19pp.
Abstract: We demonstrate that the application of an external magnetic field could lead to an improved background rejection in neutrinoless double-beta (0νββ) decay experiments using a high-pressure xenon (HPXe) TPC. HPXe chambers are capable of imaging electron tracks, a feature that enhances the separation between signal events (the two electrons emitted in the 0νββ decay of ¹³⁶Xe) and background events, arising chiefly from single electrons of kinetic energy compatible with the end-point of the 0νββ decay (Qββ). Applying an external magnetic field of sufficiently high intensity (in the range of 0.5–1 T for operating pressures in the range of 5–15 atm) causes the electrons to produce helical tracks. Assuming the tracks can be properly reconstructed, the sign of the curvature can be determined at several points along these tracks, and such information can be used to separate signal (0νββ) events containing two electrons producing a track with two different directions of curvature from background (single-electron) events producing a track that should spiral in a single direction. Due to electron multiple scattering, this strategy is not perfectly efficient on an event-by-event basis, but a statistical estimator can be constructed which can be used to reject background events by one order of magnitude at a moderate cost (about 30%) in signal efficiency. Combining this estimator with the excellent energy resolution and topological signature identification characteristic of the HPXe TPC, it is possible to reach a background rate of less than one count per ton-year of exposure. Such a low background rate is an essential feature of the next generation of 0νββ experiments, aiming to fully explore the inverse hierarchy of neutrino masses.
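A back-of-the-envelope check (our own sketch, not taken from the paper) of why 0.5–1 T suffices: the bending radius of a charged particle is r[m] ≈ p[GeV/c] / (0.3·B[T]), and an electron at the ¹³⁶Xe end-point gives a centimetre-scale helix, comparable to track extents in a HPXe TPC.

```python
import math

# Bending radius r[m] = p[MeV/c] / (300 * B[T]) for unit charge.
M_E = 0.511  # electron mass, MeV/c^2

def bend_radius_m(kinetic_mev, b_tesla):
    e_tot = kinetic_mev + M_E                 # total energy, MeV
    p = math.sqrt(e_tot**2 - M_E**2)          # relativistic momentum, MeV/c
    return p / (300.0 * b_tesla)              # helix radius in metres

# Electron at the 136Xe Q-value (~2.458 MeV kinetic energy) in a 1 T field:
r = bend_radius_m(2.458, 1.0)
print(round(r * 100, 2), "cm")  # ~1 cm helix radius
```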
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2014). Precision luminosity measurements at LHCb. J. Instrum., 9, P12005–91pp.
Abstract: Measuring cross-sections at the LHC requires the luminosity to be determined accurately at each centre-of-mass energy √s. In this paper results are reported from the luminosity calibrations carried out at the LHC interaction point 8 with the LHCb detector for √s = 2.76, 7 and 8 TeV (proton-proton collisions) and for √s_NN = 5 TeV (proton-lead collisions). Both the “van der Meer scan” and “beam-gas imaging” luminosity calibration methods were employed. It is observed that the beam density profile cannot always be described by a function that is factorizable in the two transverse coordinates. The introduction of a two-dimensional description of the beams improves significantly the consistency of the results. For proton-proton interactions at √s = 8 TeV a relative precision of the luminosity calibration of 1.47% is obtained using van der Meer scans and 1.43% using beam-gas imaging, resulting in a combined precision of 1.12%. Applying the calibration to the full data set determines the luminosity with a precision of 1.16%. This represents the most precise luminosity measurement achieved so far at a bunched-beam hadron collider.
|
LHCb Collaboration (Aaij, R., et al.), Garcia Martin, L. M., Henry, L., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., et al. (2019). Measurement of the electron reconstruction efficiency at LHCb. J. Instrum., 14, P11023–20pp.
Abstract: The single-electron track-reconstruction efficiency is calibrated using a sample corresponding to 1.3 fb⁻¹ of pp collision data recorded with the LHCb detector in 2017. This measurement exploits B⁺ → J/ψ(e⁺e⁻)K⁺ decays, where one of the electrons is fully reconstructed and paired with the kaon, while the other electron is reconstructed using only the information of the vertex detector. Despite this partial reconstruction, kinematic and geometric constraints allow the B meson mass to be reconstructed and the signal to be well separated from backgrounds. This in turn allows the electron reconstruction efficiency to be measured by matching the partial track segment found in the vertex detector to tracks found by LHCb's regular reconstruction algorithms. The agreement between data and simulation is evaluated, and corrections are derived for simulated electrons in bins of kinematics. These correction factors allow LHCb to measure branching fractions involving single electrons with a systematic uncertainty below 1%.
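The core of such a tag-and-probe measurement reduces to a matched-over-total ratio with a binomial uncertainty; the counts below are hypothetical, purely to illustrate the arithmetic.

```python
import math

# Toy tag-and-probe efficiency: probes are vertex-detector-only electron
# segments, "passing" means the segment was matched to a fully
# reconstructed track. Counts are made up for illustration.
def efficiency(n_pass, n_total):
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)  # simple binomial error
    return eff, err

eff, err = efficiency(900, 1000)
print(f"{eff:.3f} +/- {err:.3f}")  # 0.900 +/- 0.009
```

In practice this would be evaluated per kinematic bin, and the data/simulation ratio of the two efficiencies taken as the correction factor.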
|
NEXT Collaboration (Alvarez, V., et al.), Carcel, S., Cervera-Villanueva, A., Diaz, J., Ferrario, P., Gil, A., et al. (2013). Operation and first results of the NEXT-DEMO prototype using a silicon photomultiplier tracking array. J. Instrum., 8, P09011–20pp.
Abstract: NEXT-DEMO is a high-pressure xenon gas TPC which acts as a technological test-bed and demonstrator for the NEXT-100 neutrinoless double beta decay experiment. In its current configuration the apparatus fully implements the NEXT-100 design concept. This is an asymmetric TPC, with an energy plane made of photomultipliers and a tracking plane made of silicon photomultipliers (SiPMs) coated with TPB. The detector in this new configuration has been used to reconstruct the characteristic signature of electrons in dense gas, demonstrating the ability to identify the MIP and “blob” regions. Moreover, the SiPM tracking plane allows for the definition of a large fiducial region in which an excellent energy resolution of 1.82% FWHM at 511 keV has been measured (a value which extrapolates to 0.83% at the xenon Qββ).
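The quoted extrapolation assumes the fractional resolution scales as 1/√E, as for a Poisson-limited signal yield; applying that scaling to the measured point reproduces the 0.83% figure (Qββ of ¹³⁶Xe ≈ 2458 keV).

```python
import math

# Fractional FWHM resolution extrapolated under a 1/sqrt(E) scaling law.
def extrapolate_fwhm(fwhm_pct, e_from_kev, e_to_kev):
    return fwhm_pct * math.sqrt(e_from_kev / e_to_kev)

QBB_KEV = 2458.0  # 136Xe double-beta Q-value, keV
print(round(extrapolate_fwhm(1.82, 511.0, QBB_KEV), 2))  # 0.83
```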
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which produce signal pile-up. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the utilization of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high pile-up particle detectors. The performance of the new method has been studied using simulated data and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in FPGA are also presented.
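To make the failure mode concrete, here is a minimal sketch of FIR amplitude reconstruction with a matched filter, using a made-up five-sample pulse shape (not the actual Tile Calorimeter pulse): the estimate is exact on an isolated pulse but biased once an out-of-time pulse overlaps it, which is the regime the paper's neural network targets.

```python
# Matched FIR filter on a hypothetical unit-amplitude pulse shape.
PULSE = [0.0, 0.3, 1.0, 0.7, 0.3]
NORM = sum(g * g for g in PULSE)
WEIGHTS = [g / NORM for g in PULSE]            # matched-filter taps

def fir_amplitude(samples):
    """Estimate the pulse amplitude as the weighted sum of samples."""
    return sum(w * s for w, s in zip(WEIGHTS, samples))

clean = [500.0 * g for g in PULSE]
print(round(fir_amplitude(clean), 6))          # 500.0: exact on a clean pulse

# An earlier out-of-time pulse (shifted shape, amplitude 200) overlaps:
piled = [s + 200.0 * g for s, g in zip(clean, [1.0, 0.7, 0.3, 0.0, 0.0])]
print(fir_amplitude(piled) > 500.0)            # True: pile-up biases the estimate
```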
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo Gimenez, V., et al. (2021). The ATLAS Fast TracKer system. J. Instrum., 16(7), P07006–61pp.
Abstract: The ATLAS Fast TracKer (FTK) was designed to provide full tracking for the ATLAS high-level trigger by using pattern recognition based on Associative Memory (AM) chips and fitting in high-speed field programmable gate arrays. The tracks found by the FTK are based on inputs from all modules of the pixel and silicon microstrip trackers. The as-built FTK system and components are described, as is the online software used to control them while running in the ATLAS data acquisition system. Also described is the simulation of the FTK hardware and the optimization of the AM pattern banks. An optimization for long-lived particles with large impact parameter values is included. A test of the FTK system with the data playback facility that allowed the FTK to be commissioned during the shutdown between Run 2 and Run 3 of the LHC is reported. The resulting tracks from part of the FTK system covering a limited η–φ region of the detector are compared with the output from the FTK simulation. It is shown that FTK performance is in good agreement with the simulation.
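The AM pattern-matching idea can be sketched in a few lines (a toy model, not the FTK firmware): full-resolution hits are coarsened to "superstrips", and a stored pattern fires when every layer sees a hit in its superstrip. The granularity and pattern bank below are invented for illustration.

```python
# Toy associative-memory lookup over a bank of coarse-resolution patterns.
SUPERSTRIP = 64  # hypothetical coarse granularity, in strip units

def to_superstrip(strip):
    return strip // SUPERSTRIP

# Pre-computed pattern bank: one superstrip ID per detector layer.
BANK = [(1, 1, 2, 2), (3, 3, 3, 4)]

def matched_patterns(hits_per_layer):
    """hits_per_layer: per-layer lists of full-resolution strip numbers."""
    fired = [{to_superstrip(h) for h in layer} for layer in hits_per_layer]
    return [p for p in BANK
            if all(ss in fired[i] for i, ss in enumerate(p))]

hits = [[70], [100, 200], [130], [150]]
print(matched_patterns(hits))  # [(1, 1, 2, 2)]
```

In the real system matched patterns ("roads") are then passed to the FPGA track fitters, which fit the full-resolution hits inside each road.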
|
ATLAS Collaboration (Aad, G., et al.), Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., Ferrer, A., et al. (2013). Characterisation and mitigation of beam-induced backgrounds observed in the ATLAS detector during the 2011 proton-proton run. J. Instrum., 8, P07004–72pp.
Abstract: This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events in data. Trigger-rate based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches is used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which are then exploited to develop dedicated background tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally, an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event cleaning techniques for some new physics searches.
|
ATLAS Collaboration (Abat, E., et al.), Bernabeu Verdu, J., Castillo Gimenez, V., Costa, M. J., Escobar, C., Ferrer, A., et al. (2011). A layer correlation technique for pion energy calibration at the 2004 ATLAS Combined Beam Test. J. Instrum., 6, P06001–35pp.
Abstract: A new method for calibrating the hadron response of a segmented calorimeter is developed and successfully applied to beam test data. It is based on a principal component analysis of energy deposits in the calorimeter layers, exploiting longitudinal shower development information to improve the measured energy resolution. Corrections for invisible hadronic energy and energy lost in dead material in front of and between the calorimeters of the ATLAS experiment were calculated with simulated Geant4 Monte Carlo events and used to reconstruct the energy of pions impinging on the calorimeters during the 2004 Barrel Combined Beam Test at the CERN H8 area. For pion beams with energies between 20 GeV and 180 GeV, the particle energy is reconstructed within 3% and the energy resolution is improved by between 11% and 25% compared to the resolution at the electromagnetic scale.
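A minimal two-layer illustration of the layer-correlation idea (the paper uses all calorimeter layers, and the data below are invented): diagonalise the covariance of the layer energy deposits and read off the leading principal component, the axis along which showers fluctuate most.

```python
import math

# Principal component analysis of 2D points via the closed-form
# eigendecomposition of the 2x2 covariance matrix.
def leading_pc(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy) + math.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    vx, vy = sxy, lam - sxx               # unnormalised eigenvector
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Hypothetical (layer 1, layer 2) energy deposits, strongly correlated:
v = leading_pc([(-2, -2), (-1, -1), (0, 0), (1, 1), (2, 2)])
print(v)  # ~ (0.707, 0.707): the principal fluctuation axis
```

Projecting events onto such axes is what allows an energy correction to be parameterised by longitudinal shower development rather than by total energy alone.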
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of B_s⁰ mesons at LHCb. J. Instrum., 11, P05010–23pp.
Abstract: A new algorithm for the determination of the initial flavour of B_s⁰ mesons is presented. The algorithm is based on two neural networks and exploits the b-hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the B_s⁰ meson. The second network combines the kaon charges to assign the B_s⁰ flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb⁻¹ collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the B_s⁰–B̄_s⁰ flavour oscillations in B_s⁰ → D_s⁻π⁺ decays, and by analysing flavour-specific B*_s2(5840)⁰ → B⁺K⁻ decays. The tagging power measured in B_s⁰ → D_s⁻π⁺ decays is found to be (1.80 ± 0.19 (stat) ± 0.18 (syst))%, which is an improvement of about 50% compared to a similar algorithm previously used in the LHCb experiment.
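"Tagging power" here is the conventional figure of merit εD² = ε_tag·(1 − 2ω)², where ε_tag is the fraction of tagged candidates and ω the mistag probability; the example values below are hypothetical, not the paper's fit results.

```python
# Tagging power: effective statistical power of a flavour-tagged sample.
def tagging_power(eps_tag, omega):
    """eps_tag: tagging efficiency; omega: mistag probability."""
    return eps_tag * (1.0 - 2.0 * omega) ** 2

# E.g. 60% of candidates tagged with a 41% mistag rate (made-up numbers):
print(round(100 * tagging_power(0.60, 0.41), 2), "%")  # 1.94 %
```

The (1 − 2ω)² dilution factor is why even a small reduction in the mistag rate, as delivered by the second neural network, translates into a sizeable gain in tagging power.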
|