|
ATLAS Collaboration (Aad, G. et al.), Akiot, A., Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., et al. (2023). Fast b-tagging at the high-level trigger of the ATLAS experiment in LHC Run 3. J. Instrum., 18(11), P11006–38pp.
Abstract: The ATLAS experiment relies on real-time hadronic jet reconstruction and b-tagging to record fully hadronic events containing b-jets. These algorithms require track reconstruction, which is computationally expensive and could overwhelm the high-level-trigger farm, even at the reduced event rate that passes the ATLAS first-stage hardware-based trigger. In LHC Run 3, ATLAS has mitigated these computational demands by introducing a fast neural-network-based b-tagger, which acts as a low-precision filter using input from hadronic jets and tracks. It runs after the hardware trigger and before the remaining high-level-trigger reconstruction. This design relies on the negligible cost of neural-network inference compared to track reconstruction, and on the cost reduction from limiting tracking to specific regions of the detector. In the case of Standard Model HH → bb̄bb̄, a key signature relying on b-jet triggers, the filter lowers the input rate to the remaining high-level trigger by a factor of five at the small cost of reducing the overall signal efficiency by roughly 2%.
|
|
|
Gololo, M. G. D., Carrio Argos, F., & Mellado, B. (2022). Tile Computer-on-Module for the ATLAS Tile Calorimeter Phase-II upgrades. J. Instrum., 17(6), P06020–14pp.
Abstract: The Tile PreProcessor (TilePPr) is the core element of the Tile Calorimeter (TileCal) off-detector electronics for the High-Luminosity Large Hadron Collider (HL-LHC). The TilePPr comprises FPGA-based boards to operate and read out the TileCal on-detector electronics. The Tile Computer on Module (TileCoM) mezzanine is embedded within the TilePPr to carry out three main functionalities: remote configuration of the on-detector electronics and TilePPr FPGAs, interfacing the TilePPr with the ATLAS Trigger and Data Acquisition (TDAQ) system, and interfacing the TilePPr with the ATLAS Detector Control System (DCS) by providing monitoring data. The TileCoM is a 10-layer board with a Zynq UltraScale+ ZU2CG for processing data, interface components to integrate with the TilePPr, and a power supply connected to the Advanced Telecommunications Computing Architecture (ATCA) carrier. A CentOS-based embedded Linux is deployed on the TileCoM to implement the required functionalities for the HL-LHC. In this paper we present the hardware and firmware developments of the TileCoM system in terms of remote programming and interfacing with the ATLAS TDAQ and DCS systems.
|
|
|
Ahlburg, P. et al., & Marinas, C. (2020). EUDAQ – a data acquisition software framework for common beam telescopes. J. Instrum., 15(1), P01038–30pp.
Abstract: EUDAQ is a generic data acquisition software developed for use in conjunction with common beam telescopes at charged particle beam lines. Providing high-precision reference tracks for performance studies of new sensors, beam telescopes are essential for the research and development towards future detectors for high-energy physics. As beam time is a highly limited resource, EUDAQ has been designed with reliability and ease-of-use in mind. It enables flexible integration of different independent devices under test via their specific data acquisition systems into a top-level framework. EUDAQ controls all components globally, handles the data flow centrally and synchronises and records the data streams. Over the past decade, EUDAQ has been deployed as part of a wide range of successful test beam campaigns and detector development applications.
|
|
|
ATLAS Collaboration (Aad, G. et al.), Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., Ferrer, A., et al. (2013). Triggers for displaced decays of long-lived neutral particles in the ATLAS detector. J. Instrum., 8, P07015–35pp.
Abstract: A set of three dedicated triggers designed to detect long-lived neutral particles decaying throughout the ATLAS detector to a pair of hadronic jets is described. The efficiencies of the triggers for selecting displaced decays as a function of the decay position are presented for simulated events. The effect of pile-up interactions on the trigger efficiencies and the dependence of the trigger rate on instantaneous luminosity during the 2012 data-taking period at the LHC are discussed.
|
|
|
AGATA Collaboration (Crespi, F. C. L. et al.), & Gadea, A. (2013). Response of AGATA segmented HPGe detectors to gamma rays up to 15.1 MeV. Nucl. Instrum. Methods Phys. Res. A, 705, 47–54.
Abstract: The response of AGATA segmented HPGe detectors to gamma rays in the energy range 2-15 MeV was measured. The 15.1 MeV gamma rays were produced using the reaction d(B-11, n gamma)C-12 at E_beam = 19.1 MeV, while gamma rays between 2 and 9 MeV were produced using an Am-Be-Fe radioactive source. The energy resolution and linearity were studied, and the energy-to-pulse-height conversion was found to be linear within 0.05%. Experimental interaction multiplicity distributions are discussed and compared with the results of Geant4 simulations. It is shown that the application of gamma-ray tracking allows a suppression of background radiation caused by n-capture in Ge nuclei. Finally, the Doppler correction for the 15.1 MeV gamma line, performed using the position information extracted with pulse-shape analysis, is discussed.
|
|
|
AGATA Collaboration (Akkoyun, S. et al.), Algora, A., Barrientos, D., Domingo-Pardo, C., Egea, F. J., Gadea, A., et al. (2012). AGATA-Advanced GAmma Tracking Array. Nucl. Instrum. Methods Phys. Res. A, 668, 26–58.
Abstract: The Advanced GAmma Tracking Array (AGATA) is a European project to develop and operate the next generation gamma-ray spectrometer. AGATA is based on the technique of gamma-ray energy tracking in electrically segmented high-purity germanium crystals. This technique requires the accurate determination of the energy, time and position of every interaction as a gamma ray deposits its energy within the detector volume. Reconstruction of the full interaction path results in a detector with very high efficiency and excellent spectral response. The realisation of gamma-ray tracking and AGATA is a result of many technical advances. These include the development of encapsulated highly segmented germanium detectors assembled in a triple cluster detector cryostat, an electronics system with fast digital sampling and a data acquisition system to process the data at a high rate. The crystals were fully characterised and the measurements compared with detector-response simulations. This enabled pulse-shape analysis algorithms to be employed to extract energy, time and position. In addition, tracking algorithms for event reconstruction were developed. The first phase of AGATA is now complete and operational in its first physics campaign. In the future AGATA will be moved between laboratories in Europe and operated in a series of campaigns to take advantage of the different beams and facilities available to maximise its science output. The paper reviews all the achievements made in the AGATA project, including all the necessary infrastructure to operate and support the spectrometer.
|
|
|
NEXT Collaboration (Simon, A. et al.), Gomez-Cadenas, J. J., Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., et al. (2017). Application and performance of an ML-EM algorithm in NEXT. J. Instrum., 12, P08009–22pp.
Abstract: The goal of the NEXT experiment is the observation of neutrinoless double beta decay in Xe-136 using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of Xe-136) for events distributed over the full active volume of the TPC.
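The iterative update at the heart of ML-EM reconstruction can be sketched in a few lines. The snippet below is a minimal, generic ML-EM iteration, not the NEXT Collaboration's implementation; the system matrix `A`, the signal vector `y`, and the function name `ml_em` are illustrative assumptions:

```python
import numpy as np

def ml_em(A, y, n_iter=1000, eps=1e-12):
    """Generic ML-EM iteration.

    A : (n_sensors, n_pixels) system matrix; A[i, j] is the probability
        that light emitted in pixel j is detected by sensor i.
    y : (n_sensors,) measured, time-integrated photosensor signals.
    Returns the estimated light distribution over the pixels.
    """
    x = np.ones(A.shape[1])          # flat initial estimate
    sens = A.sum(axis=0)             # per-pixel sensitivity (normalisation)
    for _ in range(n_iter):
        proj = A @ x                 # forward projection: expected signals
        ratio = y / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```

The multiplicative update keeps the estimate non-negative and, for consistent data, converges to the maximum-likelihood solution under a Poisson noise model.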
|
|
|
ANTARES Collaboration (Adrian-Martinez, S. et al.), Aguilar, J. A., Bigongiari, C., Dornic, D., Emanuele, U., Gomez-Gonzalez, J. P., et al. (2012). The positioning system of the ANTARES Neutrino Telescope. J. Instrum., 7, T08002–20pp.
Abstract: The ANTARES neutrino telescope, located 40 km off the coast of Toulon in the Mediterranean Sea at a mooring depth of about 2475 m, consists of twelve detection lines equipped typically with 25 storeys. Every storey carries three optical modules that detect Cherenkov light induced by charged secondary particles (typically muons) coming from neutrino interactions. As these lines are flexible structures fixed to the sea bed and held taut by a buoy, sea currents cause the lines to move and the storeys to rotate. The knowledge of the position of the optical modules with a precision better than 10 cm is essential for a good reconstruction of particle tracks. In this paper the ANTARES positioning system is described. It consists of an acoustic positioning system, for distance triangulation, and a compass-tiltmeter system, for the measurement of the orientation and inclination of the storeys. Necessary corrections are discussed and the results of the detector alignment procedure are described.
Keywords: Timing detectors; Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission, etc); Detector alignment and calibration methods (lasers, sources, particle-beams); Detector control systems (detector and experiment monitoring and slow-control systems, architecture, hardware, algorithms, databases)
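Distance triangulation of the kind used in acoustic positioning can be illustrated with a short sketch. The snippet below is a generic least-squares trilateration, not the ANTARES algorithm; the anchor geometry, the function name `triangulate`, and the linearisation trick are illustrative assumptions:

```python
import numpy as np

def triangulate(anchors, distances):
    """Least-squares position from distances to known reference transceivers.

    Linearises |p - a_i|^2 = d_i^2 by subtracting the first equation,
    leaving a linear system in the unknown position p.
    """
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    a0, d0 = anchors[0], distances[0]
    # For each i > 0: 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    M = 2.0 * (anchors[1:] - a0)
    b = d0**2 - distances[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p
```

At least four non-coplanar anchors are needed for a unique 3D solution; in practice distances come from measured acoustic travel times multiplied by the sound velocity, which must itself be corrected for temperature and salinity.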
|
|
|
Aliaga, R. J. (2017). Real-Time Estimation of Zero Crossings of Sampled Signals for Timing Using Cubic Spline Interpolation. IEEE Trans. Nucl. Sci., 64(8), 2414–2422.
Abstract: A scheme is proposed for hardware estimation of the location of zero crossings of sampled signals with subsample resolution for timing applications, which consists of interpolating the signal with a cubic spline near the zero crossing and then finding the root of the resulting polynomial. An iterative algorithm based on the bisection method is presented that obtains one bit of the result per step and admits an efficient digital implementation using fixed-point representation. In particular, the root estimation iteration involves only two additions, and the initial values can be obtained from finite impulse response (FIR) filters with certain symmetry properties. It is shown that this allows online real-time estimation of timestamps in free-running sampling detector systems with improved accuracy with respect to the more common linear interpolation. The method is evaluated with simulations using ideal and real timing signals, and estimates are given for the resource usage and speed of its implementation.
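The two-stage scheme described in the abstract (cubic interpolation near the zero crossing, then a bisection root search yielding one result bit per step) can be sketched in software. The snippet below is a floating-point illustration, not the paper's fixed-point hardware design; the Catmull-Rom interpolant and the function names are assumptions:

```python
def catmull_rom_coeffs(ym1, y0, y1, y2):
    """Cubic p(t) = a t^3 + b t^2 + c t + d through four equally spaced
    samples, interpolating y0 at t = 0 and y1 at t = 1."""
    a = -0.5 * ym1 + 1.5 * y0 - 1.5 * y1 + 0.5 * y2
    b = ym1 - 2.5 * y0 + 2.0 * y1 - 0.5 * y2
    c = 0.5 * (y1 - ym1)
    d = y0
    return a, b, c, d

def zero_crossing(samples, i, n_bits=16):
    """Sub-sample zero-crossing position between samples[i] and samples[i+1].

    Bisection delivers one bit of the fractional position per step, which
    is what makes a fixed-point hardware implementation cheap.
    """
    a, b, c, d = catmull_rom_coeffs(samples[i - 1], samples[i],
                                    samples[i + 1], samples[i + 2])
    p = lambda t: ((a * t + b) * t + c) * t + d   # Horner evaluation
    lo, hi = 0.0, 1.0
    rising = samples[i] < samples[i + 1]
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        # Keep the half-interval that still brackets the sign change.
        if (p(mid) < 0) == rising:
            lo = mid
        else:
            hi = mid
    return i + 0.5 * (lo + hi)
```

Each bisection step halves the bracketing interval, so n_bits steps give a fractional position resolved to 2^-n_bits of a sample period, assuming the cubic model of the signal holds near the crossing.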
|
|
|
Bouhova-Thacker, E., Kostyukhin, V., Koffas, T., Liebig, W., Limper, M., Piacquadio, G. N., et al. (2010). Expected Performance of Vertex Reconstruction in the ATLAS Experiment at the LHC. IEEE Trans. Nucl. Sci., 57(2), 760–767.
Abstract: In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10^34 cm^-2 s^-1), efficient reconstruction of vertices is crucial for many physics analyses. Described in this paper is the expected performance of the vertex reconstruction used in the ATLAS experiment. The algorithms for the reconstruction of primary and secondary vertices, as well as for finding photon conversions and vertex reconstruction in jets, are described. The implementation of the vertex algorithms, which follows a modular design based on object-oriented C++, is presented. A user-friendly concept allows event reconstruction and physics analyses to compare and optimize their choice among different vertex reconstruction strategies. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples and results are presented.
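As a toy illustration of what a vertex fit does, the snippet below finds the point minimising the summed squared distances to a set of straight-line tracks. This is a generic unweighted least-squares fit, not any of the ATLAS algorithms described in the paper; the point-plus-direction track parametrisation and the function name `fit_vertex` are assumptions (real fits use helical tracks, per-track covariances, and outlier handling):

```python
import numpy as np

def fit_vertex(points, dirs):
    """Least-squares vertex from straight-line tracks.

    Each track is given by a reference point and a direction. Minimising
    the summed squared point-to-line distances gives the linear system
    sum_i (I - d_i d_i^T) v = sum_i (I - d_i d_i^T) p_i.
    """
    dirs = np.asarray(dirs, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(np.asarray(points, dtype=float), dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the track
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

Two non-parallel tracks already determine a vertex; additional tracks overconstrain the fit and average down the residuals.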
|
|