Gololo, M. G. D., Carrio Argos, F., & Mellado, B. (2022). Tile Computer-on-Module for the ATLAS Tile Calorimeter Phase-II upgrades. J. Instrum., 17(6), P06020–14pp.
Abstract: The Tile PreProcessor (TilePPr) is the core element of the Tile Calorimeter (TileCal) off-detector electronics for the High-Luminosity Large Hadron Collider (HL-LHC). The TilePPr comprises FPGA-based boards to operate and read out the TileCal on-detector electronics. The Tile Computer-on-Module (TileCoM) mezzanine is embedded within the TilePPr to carry out three main functionalities: remote configuration of the on-detector electronics and TilePPr FPGAs, interfacing the TilePPr with the ATLAS Trigger and Data Acquisition (TDAQ) system, and interfacing the TilePPr with the ATLAS Detector Control System (DCS) by providing monitoring data. The TileCoM is a 10-layer board with a Zynq UltraScale+ ZU2CG for data processing, interface components for integration with the TilePPr, and a power supply for connection to the Advanced Telecommunications Computing Architecture (ATCA) carrier. An embedded CentOS Linux system is deployed on the TileCoM to implement the functionalities required for the HL-LHC. In this paper we present the hardware and firmware developments of the TileCoM system in terms of remote programming and interfacing with the ATLAS TDAQ and DCS systems.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Operation of the ATLAS trigger system in Run 2. J. Instrum., 15(10), P10004–59pp.
Abstract: The ATLAS experiment at the Large Hadron Collider employs a two-level trigger system to record data at an average rate of 1 kHz from physics collisions, starting from an initial bunch crossing rate of 40 MHz. During LHC Run 2 (2015-2018), the ATLAS trigger system operated successfully with excellent performance and flexibility by adapting to the various run conditions encountered, and it has been vital for the ATLAS Run-2 physics programme. For proton-proton running, approximately 1500 individual event selections were included in a trigger menu, which specified the physics signatures and selection algorithms used for the data-taking as well as the allocated event rate and bandwidth. The trigger menu must reflect the physics goals for a given data collection period, taking into account the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout, online processing farm, and offline storage. This document discusses the operation of the ATLAS trigger system during the nominal proton-proton data collection in Run 2, with examples of special data-taking runs. Aspects of software validation, evolution of the trigger selection algorithms during Run 2, monitoring of the trigger system and data quality, as well as trigger configuration are presented.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Performance of the ATLAS muon triggers in Run 2. J. Instrum., 15(9), P09015–57pp.
Abstract: The performance of the ATLAS muon trigger system is evaluated with proton-proton (pp) and heavy-ion (HI) collision data collected in Run 2 during 2015-2018 at the Large Hadron Collider. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons to cover the intermediate momentum range between 26 GeV and 100 GeV. Overall, the efficiency of the single-muon triggers is about 68% in the barrel region and 85% in the endcap region. The pT range for efficiency determination is extended by using muons from decays of J/psi mesons, W bosons, and top quarks. The performance in HI collision data is measured and shows good agreement with the results obtained in pp collisions. The muon trigger shows uniform and stable performance in good agreement with the prediction of a detailed simulation. Dedicated multi-muon triggers with kinematic selections provide the backbone to beauty, quarkonia, and low-mass physics studies. The design, evolution and performance of these triggers are discussed in detail.
|
KM3NeT Collaboration (Aiello, S., et al.), Calvo, D., Coleiro, A., Colomer, M., Gozzini, S. R., Hernandez-Rey, J. J., et al. (2020). The Control Unit of the KM3NeT Data Acquisition System. Comput. Phys. Commun., 256, 107433–16pp.
Abstract: The KM3NeT Collaboration runs a multi-site neutrino observatory in the Mediterranean Sea. Water Cherenkov particle detectors, deep in the sea and far off the coasts of France and Italy, are already taking data while incremental construction progresses. Data acquisition control software operates the off-shore detectors as well as the testing and qualification stations for their components. The software, named Control Unit, is highly modular. It can undergo upgrades and reconfiguration while acquisition is running. Interplay with the central database of the Collaboration is handled in a way that allows data taking to continue even if Internet links fail. In order to simplify the management of computing resources in the long term, and to cope with possible hardware failures of one or more computers, the KM3NeT Control Unit software features a custom dynamic resource provisioning and failover technology, which is especially important for ensuring continuity in case of rare transient events in multi-messenger astronomy. The software architecture relies on ubiquitous tools and broadly adopted technologies and has been successfully tested on several operating systems.
|
Ahlburg, P., et al., & Marinas, C. (2020). EUDAQ – a data acquisition software framework for common beam telescopes. J. Instrum., 15(1), P01038–30pp.
Abstract: EUDAQ is a generic data acquisition software developed for use in conjunction with common beam telescopes at charged particle beam lines. Providing high-precision reference tracks for performance studies of new sensors, beam telescopes are essential for the research and development towards future detectors for high-energy physics. As beam time is a highly limited resource, EUDAQ has been designed with reliability and ease-of-use in mind. It enables flexible integration of different independent devices under test via their specific data acquisition systems into a top-level framework. EUDAQ controls all components globally, handles the data flow centrally and synchronises and records the data streams. Over the past decade, EUDAQ has been deployed as part of a wide range of successful test beam campaigns and detector development applications.
|