LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Identification of charm jets at LHCb. J. Instrum., 17(2), P02028 (23pp).
Abstract: The identification of charm jets at LHCb is achieved for data collected in 2015–2018 using a method based on reconstructed displaced vertices matched to jets. The performance of this method is determined using a dijet calibration dataset recorded by the LHCb detector and selected such that the jets are unbiased in the quantities used by the tagging algorithm. The charm-tagging efficiency is reported as a function of the transverse momentum of the jet. The measured efficiencies are compared to those obtained from simulation and found to be in good agreement.
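Reporting a tagging efficiency per transverse-momentum bin reduces, in the simplest case, to a tagged-over-total ratio with a binomial uncertainty; a minimal sketch with illustrative counts (the bin edges and yields below are assumptions, not numbers from the paper):

```python
import numpy as np

# Illustrative per-bin jet counts (assumed numbers, not from the paper)
pt_bins = [(20, 30), (30, 50), (50, 100)]        # jet pT ranges in GeV
n_total = np.array([5000, 3000, 1200])           # jets in each bin
n_tagged = np.array([1250, 810, 348])            # jets passing the c-tag

eff = n_tagged / n_total
err = np.sqrt(eff * (1.0 - eff) / n_total)       # binomial uncertainty

for (lo, hi), e, de in zip(pt_bins, eff, err):
    print(f"{lo}-{hi} GeV: eff = {e:.3f} +/- {de:.3f}")
```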
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2014). Precision luminosity measurements at LHCb. J. Instrum., 9, P12005 (91pp).
Abstract: Measuring cross-sections at the LHC requires the luminosity to be determined accurately at each centre-of-mass energy √s. In this paper results are reported from the luminosity calibrations carried out at LHC interaction point 8 with the LHCb detector for √s = 2.76, 7 and 8 TeV (proton-proton collisions) and for √s_NN = 5 TeV (proton-lead collisions). Both the "van der Meer scan" and "beam-gas imaging" luminosity calibration methods were employed. It is observed that the beam density profile cannot always be described by a function that is factorizable in the two transverse coordinates. The introduction of a two-dimensional description of the beams significantly improves the consistency of the results. For proton-proton interactions at √s = 8 TeV a relative precision of the luminosity calibration of 1.47% is obtained using van der Meer scans and 1.43% using beam-gas imaging, resulting in a combined precision of 1.12%. Applying the calibration to the full data set determines the luminosity with a precision of 1.16%. This represents the most precise luminosity measurement achieved so far at a bunched-beam hadron collider.
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2015). Measurement of the track reconstruction efficiency at LHCb. J. Instrum., 10, P02007 (23pp).
Abstract: The determination of track reconstruction efficiencies at LHCb using J/ψ → μ⁺μ⁻ decays is presented. Efficiencies above 95% are found for the data-taking periods in 2010, 2011, and 2012. The ratio of the track reconstruction efficiency of muons in data and simulation is compatible with unity and is measured with an uncertainty of 0.8% for data taken in 2010, and with a precision of 0.4% for data taken in 2011 and 2012. For hadrons an additional 1.4% uncertainty due to material interactions is assumed. This result is crucial for accurate cross-section and branching-fraction measurements at LHCb.
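The abstract does not state how the additional 1.4% hadron uncertainty combines with the 0.4% muon-based precision; assuming the two sources are uncorrelated, the standard combination is in quadrature:

```python
import math

def combine_in_quadrature(*uncertainties_pct: float) -> float:
    """Combine independent relative uncertainties (%) in quadrature."""
    return math.sqrt(sum(u * u for u in uncertainties_pct))

# 0.4% tracking-efficiency precision plus an assumed-independent
# 1.4% material-interaction uncertainty for hadrons
print(round(combine_in_quadrature(0.4, 1.4), 2))  # → 1.46
```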
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of B_s⁰ mesons at LHCb. J. Instrum., 11, P05010 (23pp).
Abstract: A new algorithm for the determination of the initial flavour of B_s⁰ mesons is presented. The algorithm is based on two neural networks and exploits the b-hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the B_s⁰ meson. The second network combines the kaon charges to assign the B_s⁰ flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb⁻¹ collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the B_s⁰–B̄_s⁰ flavour oscillations in B_s⁰ → D_s⁻π⁺ decays, and by analysing flavour-specific B_s2*(5840)⁰ → B⁺K⁻ decays. The tagging power measured in B_s⁰ → D_s⁻π⁺ decays is found to be (1.80 ± 0.19 (stat) ± 0.18 (syst))%, an improvement of about 50% over a similar algorithm previously used in the LHCb experiment.
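The quoted tagging power is conventionally defined as ε_tag(1 − 2ω)², where ε_tag is the fraction of candidates that receive a tag and ω is the mistag probability; a minimal sketch (the efficiency and mistag values below are illustrative, not taken from the paper):

```python
def tagging_power(eps_tag: float, omega: float) -> float:
    """Effective tagging efficiency eps_tag * (1 - 2*omega)^2, where
    eps_tag is the tagging efficiency and omega the mistag probability."""
    return eps_tag * (1.0 - 2.0 * omega) ** 2

# Illustrative values (not from the paper): tagging 80% of candidates
# with a 42.5% mistag probability yields a 1.8% tagging power.
print(round(100 * tagging_power(0.80, 0.425), 2))  # → 1.8
```

Note that a perfectly random tagger (ω = 0.5) has zero tagging power regardless of how many candidates it tags.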
|
NEXT Collaboration (Alvarez, V., et al.), Carcel, S., Cervera-Villanueva, A., Diaz, J., Ferrario, P., Gil, A., et al. (2013). Initial results of NEXT-DEMO, a large-scale prototype of the NEXT-100 experiment. J. Instrum., 8, P04002 (25pp).
Abstract: NEXT-DEMO is a large-scale prototype of the NEXT-100 detector, an electroluminescent time projection chamber that will search for the neutrinoless double beta decay of ¹³⁶Xe using 100–150 kg of enriched xenon gas. NEXT-DEMO was built to prove the expected performance of NEXT-100, namely an energy resolution better than 1% FWHM at 2.5 MeV and topological event reconstruction. In this paper we describe the prototype and its initial results. A resolution of 1.75% FWHM at 511 keV (which extrapolates to 0.8% FWHM at 2.5 MeV) was obtained at 10 bar pressure using a gamma-ray calibration source. A basic study of the event topology along the longitudinal coordinate is also presented, proving that it is possible to identify the distinct dE/dx of electron tracks in high-pressure xenon using an electroluminescence TPC.
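The quoted extrapolation from 511 keV to the double-beta Q value assumes the fractional FWHM scales as 1/√E, as expected for statistics-dominated fluctuations; a quick check of the quoted numbers, taking Q_ββ ≈ 2458 keV for ¹³⁶Xe:

```python
import math

def extrapolate_fwhm(fwhm_pct: float, e_meas: float, e_target: float) -> float:
    """Scale a fractional FWHM (%) from e_meas to e_target, assuming the
    resolution goes as 1/sqrt(E) (statistics-dominated fluctuations)."""
    return fwhm_pct * math.sqrt(e_meas / e_target)

# 1.75% FWHM measured at 511 keV, extrapolated to Q_bb ~ 2458 keV
print(round(extrapolate_fwhm(1.75, 511.0, 2458.0), 2))  # → 0.8
```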
|
NEXT Collaboration (Alvarez, V., et al.), Carcel, S., Cervera-Villanueva, A., Diaz, J., Ferrario, P., Gil, A., et al. (2013). Operation and first results of the NEXT-DEMO prototype using a silicon photomultiplier tracking array. J. Instrum., 8, P09011 (20pp).
Abstract: NEXT-DEMO is a high-pressure xenon gas TPC which acts as a technological test-bed and demonstrator for the NEXT-100 neutrinoless double beta decay experiment. In its current configuration the apparatus fully implements the NEXT-100 design concept: an asymmetric TPC, with an energy plane made of photomultipliers and a tracking plane made of silicon photomultipliers (SiPMs) coated with TPB. The detector in this new configuration has been used to reconstruct the characteristic signature of electrons in dense gas, demonstrating the ability to identify the MIP and "blob" regions. Moreover, the SiPM tracking plane allows for the definition of a large fiducial region in which an excellent energy resolution of 1.82% FWHM at 511 keV has been measured (a value which extrapolates to 0.83% at the xenon Q_ββ).
|
NEXT Collaboration (Renner, J., et al.), Benlloch-Rodriguez, J., Botas, A., Ferrario, P., Gomez-Cadenas, J. J., Alvarez, V., et al. (2017). Background rejection in NEXT using deep neural networks. J. Instrum., 12, T01004 (21pp).
Abstract: We investigate the potential of deep learning techniques to reject background events in searches for neutrinoless double beta decay with high-pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study outperformed previous methods based on the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
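A common way to express the trade-off the abstract describes is the background rejection factor at a fixed signal efficiency, computed from classifier output scores; a sketch with stand-in Gaussian score distributions (the scores below are simulated for illustration, not network output from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in classifier scores (assumed Gaussians, not from the paper):
# signal peaks high, background peaks low, with overlap.
sig_scores = rng.normal(0.7, 0.15, 10_000)
bkg_scores = rng.normal(0.3, 0.15, 10_000)

def rejection_at_efficiency(sig, bkg, eff):
    """Background rejection factor (1 / background acceptance) at the
    score threshold that keeps a fraction `eff` of signal events."""
    thr = np.quantile(sig, 1.0 - eff)   # cut keeping the top `eff` of signal
    return 1.0 / np.mean(bkg >= thr)

# Rejection at 90% signal efficiency for these toy distributions
print(round(rejection_at_efficiency(sig_scores, bkg_scores, 0.90), 1))
```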
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002 (13pp).
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous interactions, which result in signal pileup. The performance of classical digital filters deteriorates in these conditions because the signal pulse shape becomes distorted. In addition, such experiments produce a high rate of collisions, which requires high-throughput data acquisition systems. To cope with these demanding requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pileup. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method; the signals and FIR filter used in the ATLAS Tile Calorimeter serve as benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
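A classical FIR reconstruction is a fixed dot product of weights with the digitized samples, which is exactly what degrades when an out-of-time pulse distorts the shape; a toy illustration (the pulse shape and weights below are assumptions, not the actual TileCal filter):

```python
import numpy as np

# Toy shaped pulse sampled at 7 points (loosely pulse-like; NOT the
# actual ATLAS TileCal pulse shape).
pulse = np.array([0.0, 0.29, 0.74, 1.0, 0.74, 0.37, 0.13])

# FIR amplitude estimate: a = w . s, with least-squares weights chosen
# so that w @ pulse == 1 (unit response to an in-time pulse).  Real
# optimal filters also constrain noise correlations and the pedestal.
w = pulse / pulse.dot(pulse)

clean = 5.0 * pulse                             # isolated pulse, amplitude 5
piled = 5.0 * pulse + 2.0 * np.roll(pulse, 3)   # out-of-time overlap

print(round(float(w @ clean), 3))   # exact on the clean pulse: 5.0
print(round(float(w @ piled), 3))   # biased upward by the pileup pulse
```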
|
Renner, J., Cervera-Villanueva, A., Hernando, J. A., Izmaylov, A., Monrabal, F., Muñoz, J., et al. (2015). Improved background rejection in neutrinoless double beta decay experiments using a magnetic field in a high pressure xenon TPC. J. Instrum., 10, P12020 (19pp).
Abstract: We demonstrate that the application of an external magnetic field could lead to improved background rejection in neutrinoless double-beta (0νββ) decay experiments using a high-pressure xenon (HPXe) TPC. HPXe chambers are capable of imaging electron tracks, a feature that enhances the separation between signal events (the two electrons emitted in the 0νββ decay of ¹³⁶Xe) and background events, arising chiefly from single electrons with kinetic energy compatible with the end-point of the 0νββ decay (Q_ββ). Applying an external magnetic field of sufficiently high intensity (in the range of 0.5–1 T for operating pressures in the range of 5–15 atm) causes the electrons to follow helical tracks. Assuming the tracks can be properly reconstructed, the sign of the curvature can be determined at several points along each track. This information separates signal (0νββ) events, whose two electrons produce a track with two opposite directions of curvature, from background (single-electron) events, whose track should spiral in a single direction. Because of electron multiple scattering, this strategy is not perfectly efficient on an event-by-event basis, but a statistical estimator can be constructed that rejects background events by one order of magnitude at a moderate cost (about 30%) in signal efficiency. Combining this estimator with the excellent energy resolution and topological signature identification characteristic of the HPXe TPC, it is possible to reach a background rate of less than one count per ton-year of exposure. Such a low background rate is an essential feature of the next generation of 0νββ experiments, which aim to fully explore the inverted hierarchy of neutrino masses.
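The curvature-sign idea can be illustrated on an idealized 2-D track: the sign of the cross product of successive segment vectors gives the local direction of curvature, which stays constant for a single spiral but flips for two back-to-back arcs (a noiseless toy, ignoring the multiple scattering the abstract discusses):

```python
import numpy as np

def curvature_signs(points: np.ndarray) -> np.ndarray:
    """Sign of the local curvature at each interior point of a 2-D
    track, from the cross product of successive segment vectors."""
    a, b, c = points[:-2], points[1:-1], points[2:]
    cross = (b[:, 0] - a[:, 0]) * (c[:, 1] - b[:, 1]) \
          - (b[:, 1] - a[:, 1]) * (c[:, 0] - b[:, 0])
    return np.sign(cross)

t = np.linspace(0.0, 1.5, 40)

# Single-electron-like track: one arc, so the curvature sign is constant.
arc = np.column_stack([np.cos(t), np.sin(t)])
print(sorted({float(s) for s in curvature_signs(arc)}))  # → [1.0]

# Signal-like track: two arcs of opposite curvature joined at the vertex.
arc2 = np.column_stack([np.cos(t), -np.sin(t)]) + arc[-1] - [1.0, 0.0]
track = np.vstack([arc, arc2[1:]])
print(sorted({float(s) for s in curvature_signs(track)}))  # both signs appear
```

A per-event statistical estimator, as in the paper, would then count or weight these sign measurements along the track rather than rely on any single one.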
|