Carrio, F. (2022). The Data Acquisition System for the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator. IEEE Trans. Nucl. Sci., 69(4), 687–695.
Abstract: The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). In 2025, the LHC will be upgraded into the High-Luminosity LHC (HL-LHC), which will deliver an instantaneous luminosity up to seven times larger than the LHC nominal luminosity. The ATLAS Phase-II upgrade (2025-2027) will adapt the subdetectors to the HL-LHC requirements. As part of this upgrade, the majority of the TileCal on-detector and off-detector electronics will be replaced under a new readout strategy, in which the on-detector electronics will digitize the detector data and transmit them to the off-detector electronics at the bunch-crossing frequency (40 MHz). In the counting rooms, the off-detector electronics will compute reconstructed trigger objects for the first-level trigger and will store the digitized samples in pipelined buffers until the reception of a trigger acceptance signal. The off-detector electronics will also distribute the LHC clock to the on-detector electronics, embedded within the digital data stream. The TileCal Phase-II upgrade project has undertaken an extensive research and development program that includes a Demonstrator module to evaluate the performance of the new clock and readout architecture envisaged for the HL-LHC. The Demonstrator module, equipped with the latest version of the on-detector electronics, was built and inserted into the ATLAS experiment. It is operated and read out by a Tile PreProcessor (TilePPr) Demonstrator, which provides backward compatibility with the present ATLAS Trigger and Data Acquisition (TDAQ) and timing, trigger, and command (TTC) systems. This article describes in detail the main hardware and firmware components of the clock distribution and data acquisition systems for the Demonstrator module, focusing on the TilePPr Demonstrator.
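The pipelined-buffer readout described above can be illustrated with a toy model: a sample is stored every bunch crossing and released only when a trigger accept arrives after a fixed latency. This is a minimal sketch in Python, assuming an illustrative latency and readout window; the class and parameter names are hypothetical and do not reflect the TilePPr firmware interface.

```python
from collections import deque

class PipelineBuffer:
    """Toy model of a pipelined trigger buffer: samples arrive every
    bunch crossing and are held until a trigger accept, issued with a
    fixed latency (in bunch crossings), selects them. All names and
    parameters are illustrative, not the TilePPr design."""

    def __init__(self, latency_bx, window_bx=1):
        self.latency = latency_bx   # trigger latency in bunch crossings
        self.window = window_bx     # samples read out per accept
        # Older samples fall off the front automatically (overwritten).
        self.buf = deque(maxlen=latency_bx + window_bx)

    def store(self, sample):
        # Called once per 40 MHz bunch crossing with the digitized sample.
        self.buf.append(sample)

    def accept(self):
        # On a trigger accept, return the samples digitized `latency`
        # crossings ago; returns None until the pipeline has filled.
        if len(self.buf) < self.latency + self.window:
            return None
        return list(self.buf)[0:self.window]

pipe = PipelineBuffer(latency_bx=4, window_bx=2)
for s in range(10):          # samples 0..9, one per bunch crossing
    pipe.store(s)
print(pipe.accept())         # [4, 5] — the two oldest retained samples
```

The point of the sketch is the fixed-depth buffer: data not selected within the latency window is overwritten, which is why the accept signal must arrive within a bounded time.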
|
Valdes-Cortez, C., Niatsetski, Y., Perez-Calatayud, J., Ballester, F., & Vijande, J. (2022). A Monte Carlo study of the relative biological effectiveness in surface brachytherapy. Med. Phys., 49, 5576–5588.
Abstract: Purpose. This work aims to simulate clustered DNA damage from ionizing radiation and to estimate the relative biological effectiveness (RBE) of radionuclide-based (rBT) and electronic (eBT) surface brachytherapy through a hybrid Monte Carlo (MC) approach, using realistic models of the sources and applicators. Methods. Damage from ionizing radiation has been studied with the Monte Carlo Damage Simulation algorithm, using as input the primary electron fluence computed with a state-of-the-art MC code, PENELOPE-2018. Two Ir-192 rBT applicators, Valencia and Leipzig, one Co-60 source with a Freiburg Flap applicator (reference source), and two eBT systems, Esteya and INTRABEAM, have been included in this study, implementing full realizations of their geometries as disclosed by the manufacturer. The role played by filtration and tube kilovoltage has also been addressed. Results. For rBT, an RBE value of about 1.01 has been found for the applicators and phantoms considered. In the case of eBT, the Esteya system shows an almost constant RBE value of about 1.06 for all depths and materials. For INTRABEAM, variations in the range of 1.12-1.06 are reported depending on phantom composition and depth. Modifications in the Esteya system, filtration, and tube kilovoltage give rise to variations in the same range. Conclusions. Current clinical practice does not incorporate biological effects in surface brachytherapy; therefore, the same absorbed dose is administered to the patients independently of the particularities of the rBT or eBT system considered. The almost constant RBE values reported for rBT support that assumption regardless of the details of the patient geometry, the presence of a flattening filter in the applicator design, or even significant modifications in the photon energy spectra above 300 keV. That is not the case for eBT, where a clear dependence on the eBT system and on the characteristics of the patient geometry is reported. A complete study specific to each eBT system, including detailed applicator characteristics (size, shape, filtering, among others) and common anatomical locations, should be performed before adopting an existing RBE value.
|
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques to deal naturally with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding performance in the localization of sources. A U-Net discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter that then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
Keywords: astronomical databases: miscellaneous; methods: data analysis; stars: imaging; techniques: image processing
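The second stage of the method described above (pinning down a source centre from a Laplacian of Gaussian response) can be sketched in a few lines. This is a minimal, pure-Python illustration of LoG blob localization on a synthetic frame; the kernel scale, image, and function names are illustrative and do not reproduce the ASID-L pipeline or its U-Net stage.

```python
import math

def log_kernel(sigma, radius):
    # Discrete negative Laplacian-of-Gaussian kernel: responds
    # positively on blob centres of scale ~sigma.
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            r2 = x * x + y * y
            g = math.exp(-r2 / (2 * sigma ** 2))
            row.append((1 - r2 / (2 * sigma ** 2)) * g)
        k.append(row)
    return k

def strongest_response(img, k):
    # Valid-mode 2D correlation (the kernel is symmetric, so this
    # equals convolution); returns the (row, col) of the peak response.
    r = len(k) // 2
    best, best_pos = -math.inf, None
    for i in range(r, len(img) - r):
        for j in range(r, len(img[0]) - r):
            s = sum(k[a + r][b + r] * img[i + a][j + b]
                    for a in range(-r, r + 1) for b in range(-r, r + 1))
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

# A 15x15 frame with one Gaussian "star" at (7, 9) on a flat background.
img = [[10 + 100 * math.exp(-((i - 7) ** 2 + (j - 9) ** 2) / 4)
        for j in range(15)] for i in range(15)]
print(strongest_response(img, log_kernel(sigma=1.4, radius=4)))  # (7, 9)
```

The flat background contributes the same offset at every position, so only the blob term moves the peak: the LoG response is maximal where the kernel aligns with the source centre.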
|
SCiMMA and SNEWS Collaborations (Baxter, A. L., et al.), & Colomer, M. (2022). Collaborative experience between scientific software projects using Agile Scrum development. Softw.-Pract. Exp., 52, 2077–2096.
Abstract: Developing sustainable software for the scientific community requires expertise in software engineering and domain science. This can be challenging due to the unique needs of scientific software, the insufficient resources for software engineering practices in the scientific community, and the complexity of developing for evolving scientific contexts. While open-source software can partially address these concerns, it can introduce complicating dependencies and delay development. These issues can be reduced if scientists and software developers collaborate. We present a case study wherein scientists from the SuperNova Early Warning System collaborated with software developers from the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration addressed the difficulties of open-source software development, but presented additional risks to each team. For the scientists, there was a concern of relying on external systems and lacking control in the development process. For the developers, there was a risk in supporting a user-group while maintaining core development. These issues were mitigated by creating a second Agile Scrum framework in parallel with the developers' ongoing Agile Scrum process. This Agile collaboration promoted communication, ensured that the scientists had an active role in development, and allowed the developers to evaluate and implement the scientists' software requirements. The collaboration provided benefits for each group: the scientists actuated their development by using an existing platform, and the developers utilized the scientists' use-case to improve their systems. This case study suggests that scientists and software developers can avoid scientific computing issues by collaborating and that Agile Scrum methods can address emergent concerns.
|
Hueso-Gonzalez, F., Casaña Copado, J. V., Fernandez Prieto, A., Gallas Torreira, A., Lemos Cid, E., Ros Garcia, A., et al. (2022). A dead-time-free data acquisition system for prompt gamma-ray measurements during proton therapy treatments. Nucl. Instrum. Methods Phys. Res. A, 1033, 166701–9pp.
Abstract: In cancer patients undergoing proton therapy, a very intense secondary radiation is produced during the treatment, which lasts around one minute. About one billion prompt gamma-rays are emitted per second, and their detection with fast scintillation detectors is useful for monitoring a correct beam delivery. To cope with the expected count rate and pile-up, as well as the scarce statistics due to the short treatment duration, we developed an eidetic data acquisition system capable of continuously digitizing the detector signal with a high sampling rate and without any dead time. By streaming the fully unprocessed waveforms to the computer, complex pile-up decomposition algorithms can be applied and optimized offline. We describe the data acquisition architecture and the multiple experimental tests designed to verify the sustained data throughput speed and the absence of dead time. While the system is tailored for the proton therapy environment, the methodology can be deployed in any other field requiring the recording of raw waveforms at high sampling rates with zero dead time.
Keywords: Data acquisition; Dead time; Pile-up; Digital signal processing
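The dead-time-free streaming described above can be caricatured with double buffering: the digitizer fills one memory bank while the previously filled bank is shipped to the host, so acquisition never pauses and no sample is dropped. A minimal sketch, with illustrative bank size and sample counts (not the actual system parameters):

```python
# Toy model of dead-time-free streaming via double buffering
# ("ping-pong"): samples accumulate in an active bank; when it fills,
# the full bank is handed off and a fresh bank takes over immediately.

def acquire(samples, bank_size):
    banks, active = [], []
    for s in samples:
        active.append(s)                # digitizer writes continuously
        if len(active) == bank_size:
            banks.append(active)        # full bank handed to readout
            active = []                 # switch banks; no pause, no loss
    if active:
        banks.append(active)            # flush the last partial bank
    return banks

stream = list(range(1000))              # simulated ADC samples
banks = acquire(stream, bank_size=64)
shipped = [s for bank in banks for s in bank]
assert shipped == stream                # every sample reaches the host
print(len(banks))                       # 16 banks: 15 full + 1 partial
```

The invariant checked by the assert is the whole point: the reconstructed stream is identical to the input, i.e., zero dead time in this model.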
|
NEXT Collaboration (Jones, B. J. P., et al.), Carcel, S., Carrion, J. V., Diaz, J., Martin-Albo, J., Martinez, A., et al. (2022). The dynamics of ions on phased radio-frequency carpets in high pressure gases and application for barium tagging in xenon gas time projection chambers. Nucl. Instrum. Methods Phys. Res. A, 1039, 167000–19pp.
Abstract: Radio-frequency (RF) carpets with ultra-fine pitches are examined for ion transport in gases at atmospheric pressures and above. We develop new analytic and computational methods for modeling RF ion transport at densities where dynamics are strongly influenced by buffer gas collisions. An analytic description of levitating and sweeping forces from phased arrays is obtained, then thermodynamic and kinetic principles are used to calculate ion loss rates in the presence of collisions. This methodology is validated against detailed microscopic SIMION simulations. We then explore a parameter space of special interest for neutrinoless double beta decay experiments: transport of barium ions in xenon at pressures from 1 to 10 bar. Our computations account for molecular ion formation and pressure dependent mobility as well as finite temperature effects. We discuss the challenges associated with achieving suitable operating conditions, which lie beyond the capabilities of existing devices, using presently available or near-future manufacturing techniques.
|
T2K Collaboration (Abe, K., et al.), Antonova, M., Cervera-Villanueva, A., Molina Bueno, L., & Novella, P. (2022). Scintillator ageing of the T2K near detectors from 2010 to 2021. J. Instrum., 17(10), P10028–36pp.
Abstract: The T2K experiment widely uses plastic scintillator as a target for neutrino interactions and as an active medium for the measurement of charged particles produced in neutrino interactions at its near-detector complex. Over 10 years of operation, the measured light yield recorded by the scintillator-based subsystems has been observed to degrade by 0.9-2.2% per year. Extrapolation of the degradation rate through to 2040 indicates that the recorded light yield should remain above the lower threshold used by the current reconstruction algorithms for all subsystems. This will allow the near detectors to continue contributing to important physics measurements during the T2K-II and Hyper-Kamiokande eras. Additionally, work to disentangle the degradation of the plastic scintillator and the wavelength-shifting fibres shows that the reduction in light yield can be attributed to the ageing of the plastic scintillator. The long component of the attenuation length of the wavelength-shifting fibres was observed to degrade by 1.3-5.4% per year, while the short component of the attenuation length did not show any conclusive degradation.
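The extrapolation quoted above amounts to compounding a constant fractional loss per year, y(t) = y0 · (1 − r)^(t − t0). A minimal sketch using the 0.9-2.2%/yr range and the 2010-2040 span from the abstract; the function name is illustrative, and the paper's per-subsystem thresholds are not reproduced here.

```python
# Compound a constant fractional light-yield loss per year:
# relative yield after `years` at rate r is (1 - r)**years.

def relative_yield(rate_per_year, years):
    return (1.0 - rate_per_year) ** years

years = 2040 - 2010                     # 30-year extrapolation span
worst_2040 = relative_yield(0.022, years)   # 2.2%/yr worst case
best_2040 = relative_yield(0.009, years)    # 0.9%/yr best case
print(round(worst_2040, 3), round(best_2040, 3))  # 0.513 0.762
```

Even the worst-case subsystem retains about half its initial light yield after 30 years in this simple model, which is the shape of the argument behind the "remain above threshold" conclusion.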
|
Angles-Castillo, A., Perucho, M., Marti, J. M., & Laing, R. A. (2021). On the deceleration of Fanaroff-Riley Class I jets: mass loading of magnetized jets by stellar winds. Mon. Not. Roy. Astron. Soc., 500(1), 1512–1530.
Abstract: In this paper, we present steady-state relativistic magnetohydrodynamic simulations that include a mass-load term to study the process of jet deceleration. The mass load mimics the injection of a proton-electron plasma from stellar winds within the host galaxy into initially pair-plasma jets, with mean stellar mass losses ranging from 10⁻¹⁴ to 10⁻⁹ M_⊙ yr⁻¹. The spatial jet evolution covers ~500 pc from jet injection in the grid at 10 pc from the jet nozzle. Our simulations use a relativistic gas equation of state and a pressure profile for the ambient medium. We compare these simulations with previous dynamical simulations of relativistic, non-magnetized jets. Our results show that toroidal magnetic fields can prevent fast jet expansion and the subsequent embedding of further stars via magnetic tension. In this sense, magnetic fields avoid a runaway deceleration process. Furthermore, when the mass load is large enough to increase the jet density and produce fast, differential jet expansion, the conversion of magnetic energy flux into kinetic energy flux (i.e. magnetic acceleration) helps to delay the deceleration process with respect to non-magnetized jets. We conclude that the typical stellar population in elliptical galaxies cannot explain jet deceleration in classical Fanaroff-Riley type I radio galaxies. However, we observe a significant change in the jet composition, thermodynamical parameters, and energy dissipation along its evolution, even for moderate values of the mass load.
Keywords: relativistic processes; stars: winds, outflows; galaxies: active; galaxies: jets
|
Al Kharusi, S., et al., & Colomer, M. (2021). SNEWS 2.0: a next-generation supernova early warning system for multi-messenger astronomy. New J. Phys., 23(3), 031201–34pp.
Abstract: The next core-collapse supernova in the Milky Way or its satellites will represent a once-in-a-generation opportunity to obtain detailed information about the explosion of a star and provide significant scientific insight for a variety of fields because of the extreme conditions found within. Supernovae in our galaxy are not only rare on a human timescale but also happen at unscheduled times, so it is crucial to be ready and use all available instruments to capture all possible information from the event. The first indication of a potential stellar explosion will be the arrival of a bright burst of neutrinos. Its observation by multiple detectors worldwide can provide an early warning for the subsequent electromagnetic fireworks, as well as signal to other detectors with significant backgrounds so they can store their recent data. The supernova early warning system (SNEWS) has been operating as a simple coincidence between neutrino experiments in automated mode since 2005. In the current era of multi-messenger astronomy there are new opportunities for SNEWS to optimize sensitivity to science from the next galactic supernova beyond the simple early alert. This document is the product of a workshop in June 2019 towards design of SNEWS 2.0, an upgraded SNEWS with enhanced capabilities exploiting the unique advantages of prompt neutrino detection to maximize the science gained from such a valuable event.
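The multi-detector coincidence at the heart of a SNEWS-style alert can be sketched as a sliding-window test over time-stamped detector reports. The 10 s window, the 2-detector threshold, and the detector names below are illustrative choices, not the SNEWS operating configuration:

```python
# Fire an alert only when enough *distinct* detectors report a burst
# within one sliding time window, suppressing single-detector noise.

def coincident(alerts, window=10.0, n_required=2):
    # alerts: list of (detector_name, unix_time) tuples.
    times = sorted(alerts, key=lambda a: a[1])
    for i, (_, t0) in enumerate(times):
        # Distinct detectors reporting within `window` seconds of t0.
        dets = {d for d, t in times[i:] if t - t0 <= window}
        if len(dets) >= n_required:
            return True
    return False

print(coincident([("Super-K", 100.0), ("IceCube", 104.2)]))  # True
print(coincident([("Super-K", 100.0), ("Super-K", 104.2)]))  # False
print(coincident([("Super-K", 100.0), ("IceCube", 150.0)]))  # False
```

Requiring distinct detectors is what makes the coincidence powerful: uncorrelated background in a single experiment cannot trigger the alert, while a genuine galactic neutrino burst lights up several detectors nearly simultaneously.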
|
Hall, O., et al., Agramunt, J., Algora, A., Domingo-Pardo, C., Morales, A. I., Rubio, B., et al. (2021). β-delayed neutron emission of r-process nuclei at the N=82 shell closure. Phys. Lett. B, 816, 136266–7pp.
Abstract: Theoretical models of β-delayed neutron emission are used as crucial inputs in r-process calculations. Benchmarking the predictions of these models is a challenge due to a lack of currently available experimental data. In this work the β-delayed neutron emission probabilities of 33 nuclides in the important mass regions south and south-west of ¹³²Sn are presented, 16 of them for the first time. The measurements were performed at RIKEN using the Advanced Implantation Detector Array (AIDA) and the BRIKEN neutron detector array. The P₁ₙ values presented constrain the predictions of theoretical models in the region, affecting the final abundance distribution of the second r-process peak at A ≈ 130.
Keywords: β-delayed neutron emission; r-process
|