ANTARES Collaboration (Adrian-Martinez, S., et al.), Aguilar, J. A., Bigongiari, C., Dornic, D., Emanuele, U., Gomez-Gonzalez, J. P., et al. (2012). The positioning system of the ANTARES Neutrino Telescope. J. Instrum., 7, T08002 (20 pp).
Abstract: The ANTARES neutrino telescope, located 40 km off the coast of Toulon in the Mediterranean Sea at a mooring depth of about 2475 m, consists of twelve detection lines, each typically equipped with 25 storeys. Every storey carries three optical modules that detect Cherenkov light induced by charged secondary particles (typically muons) coming from neutrino interactions. As these lines are flexible structures fixed to the sea bed and held taut by a buoy, sea currents cause the lines to move and the storeys to rotate. Knowledge of the position of the optical modules with a precision better than 10 cm is essential for a good reconstruction of particle tracks. In this paper the ANTARES positioning system is described. It consists of an acoustic positioning system, for distance triangulation, and a compass-tiltmeter system, for the measurement of the orientation and inclination of the storeys. Necessary corrections are discussed and the results of the detector alignment procedure are described.
Keywords: Timing detectors; Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission, etc); Detector alignment and calibration methods (lasers, sources, particle-beams); Detector control systems (detector and experiment monitoring and slow-control systems, architecture, hardware, algorithms, databases)
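The distance-triangulation step described in this abstract amounts to trilateration: recovering a receiver's 3D position from acoustic travel-time distances to transceivers at known positions. A minimal sketch follows; the beacon geometry, coordinates, and function name are illustrative assumptions, not the ANTARES layout or algorithm.

```python
# Trilateration sketch: recover a 3D position from distances to known
# beacons. Beacon positions and the test point are invented for illustration.
import numpy as np

def trilaterate(beacons, distances):
    """Recover a 3D position from distances to >= 4 known beacons.

    Subtracting the first sphere equation from the others linearizes the
    problem:  2 (p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2,
    which is then solved by least squares.
    """
    p = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])                       # one row per extra beacon
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

beacons = [(0, 0, 0), (100, 0, 0), (0, 100, 0), (0, 0, 100)]
true_pos = np.array([30.0, 40.0, -20.0])
distances = [np.linalg.norm(true_pos - np.array(bc)) for bc in beacons]
est = trilaterate(beacons, distances)
```

With noise-free distances the least-squares solution recovers the point exactly; in practice sound-velocity corrections (as discussed in the paper) enter through the measured distances.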
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Modelling radiation damage to pixel sensors in the ATLAS detector. J. Instrum., 14, P06012 (52 pp).
Abstract: Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment at the LHC. Given their close proximity to the interaction point, these detectors will be exposed to an unprecedented amount of radiation over their lifetime. The current pixel detector will receive damage from non-ionizing radiation in excess of 10^15 1 MeV n_eq/cm^2, while the pixel detector designed for the high-luminosity LHC must cope with an order of magnitude larger fluence. This paper presents a digitization model incorporating effects of radiation damage to the pixel sensors. The model is described in detail and predictions for the charge collection efficiency and Lorentz angle are compared with collision data collected between 2015 and 2017 (<= 10^15 1 MeV n_eq/cm^2).
|
ATLAS Collaboration (Aad, G., et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., et al. (2022). Operation and performance of the ATLAS semiconductor tracker in LHC Run 2. J. Instrum., 17(1), P01013 (56 pp).
Abstract: The semiconductor tracker (SCT) is one of the tracking systems for charged particles in the ATLAS detector. It consists of 4088 silicon strip sensor modules. During Run 2 (2015-2018) the Large Hadron Collider delivered an integrated luminosity of 156 fb^-1 to the ATLAS experiment at a centre-of-mass proton-proton collision energy of 13 TeV. The instantaneous luminosity and pile-up conditions were far in excess of those assumed in the original design of the SCT detector. Due to improvements to the data acquisition system, the SCT operated stably throughout Run 2. It was available for 99.9% of the integrated luminosity and achieved a data-quality efficiency of 99.85%. Detailed studies have been made of the leakage current in SCT modules and the evolution of the full depletion voltage, which are used to study the impact of radiation damage to the modules.
|
ATLAS Collaboration (Aad, G., et al.), Bernabeu Verdu, J., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., et al. (2014). Operation and performance of the ATLAS semiconductor tracker. J. Instrum., 9, P08009 (73 pp).
Abstract: The semiconductor tracker is a silicon microstrip detector forming part of the inner tracking system of the ATLAS experiment at the LHC. The operation and performance of the semiconductor tracker during the first years of LHC running are described. More than 99% of the detector modules were operational during this period, with an average intrinsic hit efficiency of (99.74 +/- 0.04)%. The evolution of the noise occupancy is discussed, and measurements of the Lorentz angle, delta-ray production and energy loss are presented. The alignment of the detector is found to be stable at the few-micron level over long periods of time. Radiation damage measurements, which include the evolution of detector leakage currents, are found to be consistent with predictions and are used in the verification of radiation background simulations.
|
DUNE Collaboration (Abud, A. A., et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034 (35 pp).
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
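The pixel-parallel structure described in this abstract is what makes the simulation GPU-friendly: each pixel's induced charge can be computed independently, so one pixel maps naturally to one CUDA thread. The sketch below is a CPU-side illustration of that per-pixel loop with an invented Gaussian charge-sharing model; the paper's actual physics model and the `numba.cuda.jit` compilation step are not reproduced here.

```python
# CPU sketch of a pixel-parallel charge-induction loop. In the Numba/CUDA
# approach the abstract describes, a loop body like this one (no cross-pixel
# dependencies) is compiled into a kernel where each thread handles one pixel.
# The grid size and Gaussian charge model below are illustrative assumptions.
import numpy as np

def induce_charge(pixel_centers, cloud_xy, cloud_q, sigma=1.0):
    """Share a drifting charge cloud onto a pixel grid.

    Each pixel independently samples a Gaussian charge profile centred on
    the cloud, so iterations are fully independent (GPU-parallelizable).
    """
    out = np.zeros(len(pixel_centers))
    for i, (px, py) in enumerate(pixel_centers):   # one GPU thread per i
        r2 = (px - cloud_xy[0]) ** 2 + (py - cloud_xy[1]) ** 2
        out[i] = cloud_q * np.exp(-r2 / (2.0 * sigma ** 2))
    return out

# 4x4 pixel grid with unit pitch; a 100-unit charge cloud over pixel (1, 1).
pixels = [(x, y) for x in range(4) for y in range(4)]
q = induce_charge(pixels, cloud_xy=(1.0, 1.0), cloud_q=100.0)
```

Because each output element depends only on that pixel's coordinates and the shared cloud parameters, the loop parallelizes trivially, which is why the paper reports a four-orders-of-magnitude speed-up when the same structure runs as CUDA kernels.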
|