ATLAS Collaboration (Aad, G. et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., Cabrera Urban, S., et al. (2023). Tools for estimating fake/non-prompt lepton backgrounds with the ATLAS detector at the LHC. J. Instrum., 18(11), T11004–61pp.
Abstract: Measurements and searches performed with the ATLAS detector at the CERN LHC often involve signatures with one or more prompt leptons. Such analyses are subject to 'fake/non-prompt' lepton backgrounds, where either a hadron or a lepton from a hadron decay or an electron from a photon conversion satisfies the prompt-lepton selection criteria. These backgrounds often arise within a hadronic jet because of particle decays in the showering process, particle misidentification or particle interactions with the detector material. As it is challenging to model these processes with high accuracy in simulation, their estimation typically uses data-driven methods. Three methods for carrying out this estimation are described, along with their implementation in ATLAS and their performance.
ATLAS Collaboration (Aad, G. et al.), Akiot, A., Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., et al. (2023). Fast b-tagging at the high-level trigger of the ATLAS experiment in LHC Run 3. J. Instrum., 18(11), P11006–38pp.
Abstract: The ATLAS experiment relies on real-time hadronic jet reconstruction and b-tagging to record fully hadronic events containing b-jets. These algorithms require track reconstruction, which is computationally expensive and could overwhelm the high-level-trigger farm, even at the reduced event rate that passes the ATLAS first-stage hardware-based trigger. In LHC Run 3, ATLAS has mitigated these computational demands by introducing a fast neural-network-based b-tagger, which acts as a low-precision filter using input from hadronic jets and tracks. It runs after the hardware trigger and before the remaining high-level-trigger reconstruction. This design relies on the negligible cost of neural-network inference as compared to track reconstruction, and on the cost reduction from limiting tracking to specific regions of the detector. In the case of Standard Model HH → bb̄bb̄, a key signature relying on b-jet triggers, the filter lowers the input rate to the remaining high-level trigger by a factor of five at the small cost of reducing the overall signal efficiency by roughly 2%.
ATLAS Collaboration (Aad, G. et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Operation of the ATLAS trigger system in Run 2. J. Instrum., 15(10), P10004–59pp.
Abstract: The ATLAS experiment at the Large Hadron Collider employs a two-level trigger system to record data at an average rate of 1 kHz from physics collisions, starting from an initial bunch crossing rate of 40 MHz. During the LHC Run 2 (2015-2018), the ATLAS trigger system operated successfully with excellent performance and flexibility by adapting to the various run conditions encountered, and has been vital for the ATLAS Run-2 physics programme. For proton-proton running, approximately 1500 individual event selections were included in a trigger menu, which specified the physics signatures and selection algorithms used for the data-taking, and the allocated event rate and bandwidth. The trigger menu must reflect the physics goals for a given data collection period, taking into account the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout, online processing farm, and offline storage. This document discusses the operation of the ATLAS trigger system during the nominal proton-proton data collection in Run 2 with examples of special data-taking runs. Aspects of software validation, evolution of the trigger selection algorithms during Run 2, monitoring of the trigger system and data quality, as well as trigger configuration are presented.
KM3NeT Collaboration (Aiello, S. et al.), Alves Garre, S., Calvo, D., Carretero, V., Colomer, M., Corredoira, I., et al. (2020). Event reconstruction for KM3NeT/ORCA using convolutional neural networks. J. Instrum., 15(10), P10005–39pp.
Abstract: The KM3NeT research infrastructure is currently under construction at two locations in the Mediterranean Sea. The KM3NeT/ORCA water-Cherenkov neutrino detector off the French coast will instrument several megatons of seawater with photosensors. Its main objective is the determination of the neutrino mass ordering. This work aims at demonstrating the general applicability of deep convolutional neural networks to neutrino telescopes, using simulated datasets for the KM3NeT/ORCA detector as an example. To this end, the networks are employed to achieve reconstruction and classification tasks that constitute an alternative to the analysis pipeline presented for KM3NeT/ORCA in the KM3NeT Letter of Intent. They are used to infer event reconstruction estimates for the energy, the direction, and the interaction point of incident neutrinos. The spatial distribution of Cherenkov light generated by charged particles induced in neutrino interactions is classified as shower- or track-like, and the main background processes associated with the detection of atmospheric neutrinos are recognized. Performance comparisons to machine-learning classification and maximum-likelihood reconstruction algorithms previously developed for KM3NeT/ORCA are provided. It is shown that this application of deep convolutional neural networks to simulated datasets for a large-volume neutrino telescope yields competitive reconstruction results and performance improvements with respect to classical approaches.
Super-Kamiokande Collaboration (Abe, K. et al.), & Molina Sedgwick, S. (2022). Neutron tagging following atmospheric neutrino events in a water Cherenkov detector. J. Instrum., 17(10), P10029–41pp.
Abstract: We present the development of neutron-tagging techniques in Super-Kamiokande IV using a neural network analysis. The detection efficiency of neutron capture on hydrogen is estimated to be 26%, with a mis-tag rate of 0.016 per neutrino event. The uncertainty of the tagging efficiency is estimated to be 9.0%. Measurement of the tagging efficiency with data from an Americium-Beryllium calibration source agrees with this value within 10%. The tagging procedure was performed on 3,244.4 days of SK-IV atmospheric neutrino data, identifying 18,091 neutrons in 26,473 neutrino events. The fitted neutron capture lifetime was measured as 218 ± 9 μs.
T2K Collaboration (Abe, K. et al.), Antonova, M., Cervera-Villanueva, A., Molina Bueno, L., & Novella, P. (2022). Scintillator ageing of the T2K near detectors from 2010 to 2021. J. Instrum., 17(10), P10028–36pp.
Abstract: The T2K experiment widely uses plastic scintillator as a target for neutrino interactions and an active medium for the measurement of charged particles produced in neutrino interactions at its near detector complex. Over 10 years of operation, the measured light yield recorded by the scintillator-based subsystems has been observed to degrade by 0.9-2.2% per year. Extrapolation of the degradation rate through to 2040 indicates the recorded light yield should remain above the lower threshold used by the current reconstruction algorithms for all subsystems. This will allow the near detectors to continue contributing to important physics measurements during the T2K-II and Hyper-Kamiokande eras. Additionally, work to disentangle the degradation of the plastic scintillator and wavelength shifting fibres shows that the reduction in light yield can be attributed to the ageing of the plastic scintillator. The long component of the attenuation length of the wavelength shifting fibres was observed to degrade by 1.3-5.4% per year, while the short component of the attenuation length did not show any conclusive degradation.
Ahlburg, P., et al., & Marinas, C. (2020). EUDAQ – a data acquisition software framework for common beam telescopes. J. Instrum., 15(1), P01038–30pp.
Abstract: EUDAQ is a generic data acquisition software developed for use in conjunction with common beam telescopes at charged particle beam lines. Providing high-precision reference tracks for performance studies of new sensors, beam telescopes are essential for the research and development towards future detectors for high-energy physics. As beam time is a highly limited resource, EUDAQ has been designed with reliability and ease-of-use in mind. It enables flexible integration of different independent devices under test via their specific data acquisition systems into a top-level framework. EUDAQ controls all components globally, handles the data flow centrally and synchronises and records the data streams. Over the past decade, EUDAQ has been deployed as part of a wide range of successful test beam campaigns and detector development applications.
Poley, L., Stolzenberg, U., Schwenker, B., Frey, A., Gottlicher, P., Marinas, C., et al. (2021). Mapping the material distribution of a complex structure in an electron beam. J. Instrum., 16(1), P01010–33pp.
Abstract: The simulation and analysis of High Energy Physics experiments require a realistic simulation of the detector material and its distribution. The challenge is to describe all active and passive parts of large-scale detectors like ATLAS in terms of their size, position and material composition. The common method for estimating the radiation length by weighing individual components, adding up their contributions and averaging the resulting material distribution over extended structures provides a good general estimate, but can deviate significantly from the material actually present. A method has been developed to assess the material distribution of an object under investigation with high spatial resolution, using the reconstructed scattering angles and hit positions of high-energy electron tracks traversing it. The study presented here shows measurements for an extended structure with a highly inhomogeneous material distribution. The structure under investigation is an End-of-Substructure-card prototype designed for the ATLAS Inner Tracker strip tracker – a PCB populated with components of a large range of material budgets and sizes. The measurements presented here summarise the requirements on data samples and reconstructed electron tracks for reliable image reconstruction of large-scale, inhomogeneous samples, appropriate choices of pixel size relative to the size of the features under investigation, as well as a bremsstrahlung correction for high material densities and thicknesses.
DUNE Collaboration (Abud, A. A. et al.), Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., Fernandez Menendez, P., et al. (2022). Design, construction and operation of the ProtoDUNE-SP Liquid Argon TPC. J. Instrum., 17(1), P01005–111pp.
Abstract: The ProtoDUNE-SP detector is a single-phase liquid argon time projection chamber (LArTPC) that was constructed and operated in the CERN North Area at the end of the H4 beamline. This detector is a prototype for the first far detector module of the Deep Underground Neutrino Experiment (DUNE), which will be constructed at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, U.S.A. The ProtoDUNE-SP detector incorporates full-size components as designed for DUNE and has an active volume of 7 × 6 × 7.2 m³. The H4 beam delivers incident particles with well-measured momenta and high-purity particle identification. ProtoDUNE-SP's successful operation between 2018 and 2020 demonstrates the effectiveness of the single-phase far detector design. This paper describes the design, construction, assembly and operation of the detector components.
Keywords: Noble liquid detectors (scintillation, ionization, double-phase); Photon detectors for UV, visible and IR photons (solid-state) (PIN diodes, APDs, Si-PMTs, G-APDs, CCDs, EBCCDs, EMCCDs, CMOS imagers, etc.); Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators); Time projection chambers (TPC)
ATLAS Collaboration (Aad, G. et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., et al. (2022). Operation and performance of the ATLAS semiconductor tracker in LHC Run 2. J. Instrum., 17(1), P01013–56pp.
Abstract: The semiconductor tracker (SCT) is one of the tracking systems for charged particles in the ATLAS detector. It consists of 4088 silicon strip sensor modules. During Run 2 (2015-2018) the Large Hadron Collider delivered an integrated luminosity of 156 fb⁻¹ to the ATLAS experiment at a centre-of-mass proton-proton collision energy of 13 TeV. The instantaneous luminosity and pile-up conditions were far in excess of those assumed in the original design of the SCT detector. Due to improvements to the data acquisition system, the SCT operated stably throughout Run 2. It was available for 99.9% of the integrated luminosity and achieved a data-quality efficiency of 99.85%. Detailed studies have been made of the leakage current in SCT modules and the evolution of the full depletion voltage, which are used to study the impact of radiation damage to the modules.