LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Identification of charm jets at LHCb. J. Instrum., 17(2), P02028 (23 pp.).
Abstract: The identification of charm jets is achieved at LHCb for data collected in 2015–2018 using a method based on the properties of displaced vertices that are reconstructed and matched to jets. The performance of this method is determined using a dijet calibration dataset recorded by the LHCb detector and selected such that the jets are unbiased in the quantities used by the tagging algorithm. The charm-tagging efficiency is reported as a function of the transverse momentum of the jet. The measured efficiencies are compared with those obtained from simulation and found to be in good agreement.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo Gimenez, V., et al. (2021). The ATLAS Fast TracKer system. J. Instrum., 16(7), P07006 (61 pp.).
Abstract: The ATLAS Fast TracKer (FTK) was designed to provide full tracking for the ATLAS high-level trigger by using pattern recognition based on Associative Memory (AM) chips and track fitting in high-speed field-programmable gate arrays (FPGAs). The tracks found by the FTK are based on inputs from all modules of the pixel and silicon microstrip trackers. The as-built FTK system and its components are described, as is the online software used to control them while running in the ATLAS data-acquisition system. Also described are the simulation of the FTK hardware and the optimization of the AM pattern banks. An optimization for long-lived particles with large impact-parameter values is included. A test of the FTK system with the data-playback facility that allowed the FTK to be commissioned during the shutdown between Run 2 and Run 3 of the LHC is reported. The resulting tracks from the part of the FTK system covering a limited η–φ region of the detector are compared with the output of the FTK simulation. FTK performance is shown to be in good agreement with the simulation.
|
LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Centrality determination in heavy-ion collisions with the LHCb detector. J. Instrum., 17(5), P05009 (31 pp.).
Abstract: The centrality of heavy-ion collisions is directly related to the medium created in these interactions. A procedure to determine the centrality of collisions with the LHCb detector is implemented for lead-lead collisions at √s_NN = 5 TeV and lead-neon fixed-target collisions at √s_NN = 69 GeV. The energy deposits in the electromagnetic calorimeter are used to define the centrality classes. The correspondence between the number of participants and the centrality for the lead-lead collisions is in good agreement with that found by other experiments, and the centrality measurements for the lead-neon collisions presented here are the first performed in fixed-target collisions at the LHC.
|
Gololo, M. G. D., Carrio Argos, F., & Mellado, B. (2022). Tile Computer-on-Module for the ATLAS Tile Calorimeter Phase-II upgrades. J. Instrum., 17(6), P06020 (14 pp.).
Abstract: The Tile PreProcessor (TilePPr) is the core element of the Tile Calorimeter (TileCal) off-detector electronics for the High-Luminosity Large Hadron Collider (HL-LHC). The TilePPr comprises FPGA-based boards to operate and read out the TileCal on-detector electronics. The Tile Computer-on-Module (TileCoM) mezzanine is embedded within the TilePPr to carry out three main functionalities: remote configuration of the on-detector electronics and the TilePPr FPGAs, interfacing the TilePPr with the ATLAS Trigger and Data Acquisition (TDAQ) system, and interfacing the TilePPr with the ATLAS Detector Control System (DCS) by providing monitoring data. The TileCoM is a 10-layer board with a Zynq UltraScale+ ZU2CG for processing data, interface components to integrate with the TilePPr, and a power supply to be connected to the Advanced Telecommunications Computing Architecture carrier. An embedded CentOS Linux is deployed on the TileCoM to implement the required functionalities for the HL-LHC. In this paper we present the hardware and firmware developments of the TileCoM system in terms of remote programming and the interfaces with the ATLAS TDAQ and DCS systems.
|
Super-Kamiokande Collaboration (Abe, K., et al.), & Molina Sedgwick, S. (2022). Neutron tagging following atmospheric neutrino events in a water Cherenkov detector. J. Instrum., 17(10), P10029 (41 pp.).
Abstract: We present the development of neutron-tagging techniques in Super-Kamiokande IV using a neural network analysis. The detection efficiency of neutron capture on hydrogen is estimated to be 26%, with a mis-tag rate of 0.016 per neutrino event. The uncertainty of the tagging efficiency is estimated to be 9.0%. Measurement of the tagging efficiency with data from an americium-beryllium calibration source agrees with this value within 10%. The tagging procedure was performed on 3,244.4 days of SK-IV atmospheric neutrino data, identifying 18,091 neutrons in 26,473 neutrino events. The fitted neutron capture lifetime was measured as 218 ± 9 μs.
|
DUNE Collaboration (Abud, A. A., et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034 (35 pp.).
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
|
CALICE Collaboration (White, A., et al.), & Irles, A. (2023). Design, construction and commissioning of a technological prototype of a highly granular SiPM-on-tile scintillator-steel hadronic calorimeter. J. Instrum., 18(11), P11018 (39 pp.).
Abstract: The CALICE collaboration is developing highly granular electromagnetic and hadronic calorimeters for detectors at future energy frontier electron-positron colliders. After successful tests of a physics prototype, a technological prototype of the Analog Hadron Calorimeter has been built, based on a design and construction techniques scalable to a collider detector. The prototype consists of a steel absorber structure and active layers of small scintillator tiles that are individually read out by directly coupled SiPMs. Each layer has an active area of 72 × 72 cm² and a tile size of 3 × 3 cm². With 38 active layers, the prototype has nearly 22,000 readout channels, and its total thickness amounts to 4.4 nuclear interaction lengths. The dedicated readout electronics provide time stamping of each hit with an expected resolution of about 1 ns. The prototype was constructed in 2017 and commissioned in beam tests at DESY. It recorded muons, hadron showers and electron showers at different energies in test beams at CERN in 2018. In this paper, the design of the prototype, its construction and commissioning are described. The methods used to calibrate the detector are detailed, and the performance achieved in terms of uniformity and stability is presented.
|
ATLAS Collaboration (Aad, G., et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., Cabrera Urban, S., et al. (2023). Tools for estimating fake/non-prompt lepton backgrounds with the ATLAS detector at the LHC. J. Instrum., 18(11), T11004 (61 pp.).
Abstract: Measurements and searches performed with the ATLAS detector at the CERN LHC often involve signatures with one or more prompt leptons. Such analyses are subject to 'fake/non-prompt' lepton backgrounds, where a hadron, a lepton from a hadron decay, or an electron from a photon conversion satisfies the prompt-lepton selection criteria. These backgrounds often arise within a hadronic jet because of particle decays in the showering process, particle misidentification or particle interactions with the detector material. As it is challenging to model these processes with high accuracy in simulation, their estimation typically uses data-driven methods. Three methods for carrying out this estimation are described, along with their implementation in ATLAS and their performance.
|
Agaras, M. N., et al., & Fiorini, L. (2023). Laser calibration of the ATLAS Tile Calorimeter during LHC Run 2. J. Instrum., 18(6), P06023 (35 pp.).
Abstract: This article reports the laser calibration of the hadronic Tile Calorimeter of the ATLAS experiment in the LHC Run 2 data campaign. The upgraded Laser II calibration system is described. The system was commissioned during the first LHC Long Shutdown, exhibiting a stability better than 0.8% for the laser light monitoring. The methods employed to derive the detector calibration factors with data from the laser calibration runs are also detailed. These made it possible to correct for the response fluctuations of the 9852 photomultiplier tubes of the Tile Calorimeter with a total uncertainty of 0.5% plus a luminosity-dependent sub-dominant term. Finally, we report the regular monitoring and performance studies using laser events in both standalone runs and during proton collisions. These studies include channel timing and quality inspection, and photomultiplier linearity and response dependence on anode current.
|
ATLAS Collaboration (Aad, G., et al.), Akiot, A., Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., et al. (2023). Fast b-tagging at the high-level trigger of the ATLAS experiment in LHC Run 3. J. Instrum., 18(11), P11006 (38 pp.).
Abstract: The ATLAS experiment relies on real-time hadronic jet reconstruction and b-tagging to record fully hadronic events containing b-jets. These algorithms require track reconstruction, which is computationally expensive and could overwhelm the high-level-trigger farm, even at the reduced event rate that passes the first-stage hardware-based ATLAS trigger. In LHC Run 3, ATLAS has mitigated these computational demands by introducing a fast neural-network-based b-tagger, which acts as a low-precision filter using input from hadronic jets and tracks. It runs after the hardware trigger and before the remaining high-level-trigger reconstruction. This design relies on the negligible cost of neural-network inference compared with track reconstruction, and on the cost reduction from limiting tracking to specific regions of the detector. In the case of Standard Model HH → bb̄bb̄, a key signature relying on b-jet triggers, the filter lowers the input rate to the remaining high-level trigger by a factor of five, at the small cost of reducing the overall signal efficiency by roughly 2%.
|