ATLAS Collaboration (Aad, G. et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Performance of the ATLAS muon triggers in Run 2. J. Instrum., 15(9), P09015, 57 pp.
Abstract: The performance of the ATLAS muon trigger system is evaluated with proton-proton (pp) and heavy-ion (HI) collision data collected in Run 2 during 2015-2018 at the Large Hadron Collider. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons, covering the intermediate momentum range between 26 GeV and 100 GeV. Overall, the efficiency of the single-muon triggers is about 68% in the barrel region and 85% in the endcap region. The p_T range for the efficiency determination is extended by using muons from decays of J/psi mesons, W bosons, and top quarks. The performance in HI collision data is measured and shows good agreement with the results obtained in pp collisions. The muon trigger shows uniform and stable performance in good agreement with the prediction of a detailed simulation. Dedicated multi-muon triggers with kinematic selections provide the backbone to beauty, quarkonia, and low-mass physics studies. The design, evolution and performance of these triggers are discussed in detail.
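The Z-boson efficiency measurement quoted above is conventionally done with the tag-and-probe technique: one well-identified muon tags the event, and the fraction of probe muons that also fired the trigger gives the efficiency. A minimal sketch of that final counting step, with invented probe counts purely for illustration:

```python
import math

def trigger_efficiency(n_pass, n_total):
    """Efficiency and binomial uncertainty from probe counts.

    n_pass  -- probes matched to a trigger-level muon
    n_total -- all selected probes
    """
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)  # simple binomial error
    return eff, err

# Hypothetical counts, e.g. probes in the barrel region:
eff, err = trigger_efficiency(6800, 10000)
print(f"{eff:.3f} +/- {err:.3f}")
```

Real analyses would additionally bin the probes in p_T and detector region and subtract non-resonant background; this sketch only shows the per-bin arithmetic.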
|
ATLAS Collaboration (Aad, G. et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Operation of the ATLAS trigger system in Run 2. J. Instrum., 15(10), P10004, 59 pp.
Abstract: The ATLAS experiment at the Large Hadron Collider employs a two-level trigger system to record data at an average rate of 1 kHz from physics collisions, starting from an initial bunch crossing rate of 40 MHz. During the LHC Run 2 (2015-2018), the ATLAS trigger system operated successfully with excellent performance and flexibility by adapting to the various run conditions encountered and has been vital for the ATLAS Run-2 physics programme. For proton-proton running, approximately 1500 individual event selections were included in a trigger menu, which specified the physics signatures and selection algorithms used for the data-taking, and the allocated event rate and bandwidth. The trigger menu must reflect the physics goals for a given data collection period, taking into account the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout, online processing farm, and offline storage. This document discusses the operation of the ATLAS trigger system during the nominal proton-proton data collection in Run 2, with examples of special data-taking runs. Aspects of software validation, evolution of the trigger selection algorithms during Run 2, monitoring of the trigger system and data quality, as well as trigger configuration are presented.
|
Pierre Auger Collaboration (Abreu, P. et al.), & Pastor, S. (2013). Techniques for measuring aerosol attenuation using the Central Laser Facility at the Pierre Auger Observatory. J. Instrum., 8, P04009, 28 pp.
Abstract: The Pierre Auger Observatory in Malargue, Argentina, is designed to study the properties of ultra-high energy cosmic rays with energies above 10^18 eV. It is a hybrid facility that employs a Fluorescence Detector (FD) to perform nearly calorimetric measurements of Extensive Air Shower energies. To obtain reliable calorimetric information from the FD, the atmospheric conditions at the observatory need to be continuously monitored during data acquisition. In particular, light attenuation due to aerosols is an important atmospheric correction. The aerosol concentration is highly variable, so that the aerosol attenuation needs to be evaluated hourly. We use light from the Central Laser Facility (CLF), located near the center of the observatory site, having an optical signature comparable to that of the highest energy showers detected by the FD. This paper presents two procedures developed to retrieve the aerosol attenuation of fluorescence light from CLF laser shots. Cross checks between the two methods demonstrate that results from both analyses are compatible, and that the uncertainties are well understood. The measurements of the aerosol attenuation provided by the two procedures are currently used at the Pierre Auger Observatory to reconstruct air shower data.
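The underlying idea of such aerosol measurements is that light observed from a distant source is attenuated as T = exp(-tau), so comparing the measured CLF laser signal with a reference signal from an aerosol-free ("clear") night yields the aerosol optical depth. A minimal sketch of that relation, with invented signal values (the paper's actual procedures are more involved):

```python
import math

def aerosol_optical_depth(signal_measured, signal_clear):
    """Aerosol optical depth tau_aer from the ratio of the measured
    laser signal to a clear-night (aerosol-free) reference signal."""
    return -math.log(signal_measured / signal_clear)

def aerosol_transmission(tau_aer):
    """Fraction of light surviving aerosol attenuation."""
    return math.exp(-tau_aer)

# Hypothetical numbers: a laser track 10% dimmer than on a clear night.
tau = aerosol_optical_depth(90.0, 100.0)
print(round(tau, 4), round(aerosol_transmission(tau), 3))
```

The two functions are exact inverses by construction; the experimental difficulty lies in obtaining the clear-night reference and the hourly monitoring, not in this arithmetic.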
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002, 13 pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which results in signal pile-up. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high pile-up particle detectors. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
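The classical FIR approach the paper benchmarks against estimates the pulse amplitude as a weighted sum of digitized samples, with weights constrained to give unit response to the nominal pulse shape. A minimal sketch of that idea, with an invented pulse shape and white-noise assumption (the actual TileCal weights also account for noise correlations and pedestal):

```python
import numpy as np

# Illustrative, peak-normalized nominal pulse shape (not the real TileCal one).
PULSE = np.array([0.0, 0.12, 0.56, 1.0, 0.61, 0.22, 0.05])

def fir_weights(pulse):
    """Least-squares FIR weights with unit response to the nominal pulse,
    assuming white noise: minimize |w|^2 subject to w . pulse = 1."""
    return pulse / np.dot(pulse, pulse)

def reconstruct_amplitude(samples, weights):
    """Estimate the pulse amplitude as a weighted sum of the samples."""
    return float(np.dot(weights, samples))

w = fir_weights(PULSE)
rng = np.random.default_rng(0)
true_amplitude = 250.0  # arbitrary ADC counts
samples = true_amplitude * PULSE + rng.normal(0.0, 2.0, PULSE.size)
print(round(reconstruct_amplitude(samples, w), 1))
```

With an undistorted pulse the estimator is unbiased by construction; the paper's point is that out-of-time pile-up distorts `samples` away from a scaled `PULSE`, which is where a learned reconstruction can outperform fixed weights.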
|
LHCb Collaboration (Aaij, R. et al.), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Helium identification with LHCb. J. Instrum., 19(2), P02010, 23 pp.
Abstract: The identification of helium nuclei at LHCb is achieved using a method based on measurements of ionisation losses in the silicon sensors and timing measurements in the Outer Tracker drift tubes. The background from photon conversions is reduced using the RICH detectors and an isolation requirement. The method is developed using pp collision data at √s = 13 TeV recorded by the LHCb experiment in the years 2016 to 2018, corresponding to an integrated luminosity of 5.5 fb^-1. A total of around 10^5 helium and antihelium candidates are identified with negligible background contamination. The helium identification efficiency is estimated to be approximately 50% with a corresponding background rejection rate of up to O(10^12). These results demonstrate the feasibility of a rich programme of measurements of QCD and astrophysics interest involving light nuclei.
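The physics behind the ionisation-based selection is that mean energy loss scales with the square of the particle charge, so a Z = 2 helium nucleus deposits roughly four times the ionisation of a singly charged particle at the same velocity. A toy sketch of such a charge-based threshold, with invented units and cut value (not the paper's calibrated selection):

```python
# Energy loss of a minimum-ionising Z = 1 particle, in arbitrary units.
MIP_DEDX = 1.0

def expected_dedx(charge_z):
    """Bethe-Bloch scaling of mean ionisation with particle charge: dE/dx ~ Z^2."""
    return MIP_DEDX * charge_z ** 2

def is_helium_candidate(dedx, threshold=2.5 * MIP_DEDX):
    """Flag tracks whose ionisation sits well above the Z = 1 band.

    The 2.5x threshold is an illustrative choice between the Z = 1
    band (~1x) and the Z = 2 band (~4x)."""
    return dedx > threshold

print(is_helium_candidate(expected_dedx(2)))  # Z = 2-like deposit
print(is_helium_candidate(expected_dedx(1)))  # Z = 1-like deposit
```

The measured method combines this dE/dx information with drift-tube timing and RICH vetoes to reach the quoted O(10^12) background rejection; a single threshold alone would not.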
|