Courtoy, A., Noguera, S., & Scopetta, S. (2019). Double parton distributions in the pion in the Nambu-Jona-Lasinio model. J. High Energy Phys., 12(12), 045–26pp.
Abstract: Two-parton correlations in the pion, non-perturbative information encoded in double parton distribution functions, are investigated in the Nambu-Jona-Lasinio model. It is found that double parton distribution functions expose novel dynamical information on the structure of the pion, not accessible through one-body parton distributions, as also happens in several estimates for the proton target and in a previous light-cone evaluation for the pion. Expressions and predictions are given for double parton distributions corresponding to leading-twist Dirac operators in the quark vertices and to different regularization methods for the Nambu-Jona-Lasinio model. These results are particularly relevant in view of forthcoming lattice data.
|
LHCb Collaboration (Aaij, R., et al.), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Helium identification with LHCb. J. Instrum., 19(2), P02010–23pp.
Abstract: The identification of helium nuclei at LHCb is achieved using a method based on measurements of ionisation losses in the silicon sensors and timing measurements in the Outer Tracker drift tubes. The background from photon conversions is reduced using the RICH detectors and an isolation requirement. The method is developed using pp collision data at √s = 13 TeV recorded by the LHCb experiment in the years 2016 to 2018, corresponding to an integrated luminosity of 5.5 fb⁻¹. A total of around 10⁵ helium and antihelium candidates are identified with negligible background contamination. The helium identification efficiency is estimated to be approximately 50% with a corresponding background rejection rate of up to O(10¹²). These results demonstrate the feasibility of a rich programme of measurements of QCD and astrophysics interest involving light nuclei.
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14(9), P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provides information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which results in signal pile-up. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pile-up conditions. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are taken as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm on an FPGA are also presented.
|
Hueso-Gonzalez, F., Casaña Copado, J. V., Fernandez Prieto, A., Gallas Torreira, A., Lemos Cid, E., Ros Garcia, A., et al. (2022). A dead-time-free data acquisition system for prompt gamma-ray measurements during proton therapy treatments. Nucl. Instrum. Methods Phys. Res. A, 1033, 166701–9pp.
Abstract: In cancer patients undergoing proton therapy, a very intense secondary radiation is produced during the treatment, which lasts around one minute. About one billion prompt gamma-rays are emitted per second, and their detection with fast scintillation detectors is useful for monitoring correct beam delivery. To cope with the expected count rate and pile-up, as well as the scarce statistics due to the short treatment duration, we developed an eidetic data acquisition system capable of continuously digitizing the detector signal at a high sampling rate and without any dead time. By streaming the fully unprocessed waveforms to the computer, complex pile-up decomposition algorithms can be applied and optimized offline. We describe the data acquisition architecture and the multiple experimental tests designed to verify the sustained data throughput and the absence of dead time. While the system is tailored to the proton therapy environment, the methodology can be deployed in any other field requiring the recording of raw waveforms at high sampling rates with zero dead time.
|
Ahlburg, P., et al., & Marinas, C. (2020). EUDAQ – a data acquisition software framework for common beam telescopes. J. Instrum., 15(1), P01038–30pp.
Abstract: EUDAQ is a generic data acquisition software developed for use in conjunction with common beam telescopes at charged particle beam lines. Providing high-precision reference tracks for performance studies of new sensors, beam telescopes are essential for the research and development towards future detectors for high-energy physics. As beam time is a highly limited resource, EUDAQ has been designed with reliability and ease-of-use in mind. It enables flexible integration of different independent devices under test via their specific data acquisition systems into a top-level framework. EUDAQ controls all components globally, handles the data flow centrally and synchronises and records the data streams. Over the past decade, EUDAQ has been deployed as part of a wide range of successful test beam campaigns and detector development applications.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Operation of the ATLAS trigger system in Run 2. J. Instrum., 15(10), P10004–59pp.
Abstract: The ATLAS experiment at the Large Hadron Collider employs a two-level trigger system to record data at an average rate of 1 kHz from physics collisions, starting from an initial bunch crossing rate of 40 MHz. During LHC Run 2 (2015-2018), the ATLAS trigger system operated successfully with excellent performance and flexibility, adapting to the various run conditions encountered, and has been vital for the ATLAS Run-2 physics programme. For proton-proton running, approximately 1500 individual event selections were included in a trigger menu, which specified the physics signatures and selection algorithms used for the data-taking, as well as the allocated event rate and bandwidth. The trigger menu must reflect the physics goals for a given data collection period, taking into account the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout, online processing farm, and offline storage. This document discusses the operation of the ATLAS trigger system during nominal proton-proton data collection in Run 2, with examples of special data-taking runs. Aspects of software validation, the evolution of the trigger selection algorithms during Run 2, monitoring of the trigger system and data quality, as well as trigger configuration, are presented.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Performance of the ATLAS muon triggers in Run 2. J. Instrum., 15(9), P09015–57pp.
Abstract: The performance of the ATLAS muon trigger system is evaluated with proton-proton (pp) and heavy-ion (HI) collision data collected in Run 2 during 2015-2018 at the Large Hadron Collider. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons, covering the intermediate momentum range between 26 GeV and 100 GeV. Overall, the efficiency of the single-muon triggers is about 68% in the barrel region and 85% in the endcap region. The p_T range for the efficiency determination is extended by using muons from decays of J/ψ mesons, W bosons, and top quarks. The performance in HI collision data is measured and shows good agreement with the results obtained in pp collisions. The muon trigger shows uniform and stable performance, in good agreement with the prediction of a detailed simulation. Dedicated multi-muon triggers with kinematic selections provide the backbone of beauty, quarkonia, and low-mass physics studies. The design, evolution and performance of these triggers are discussed in detail.
|
Villanueva-Domingo, P., & Ichiki, K. (2023). 21 cm forest constraints on primordial black holes. Publ. Astron. Soc. Jpn., 75(SP1), S33–S49.
Abstract: Primordial black holes (PBHs) as part of the dark matter (DM) would modify the evolution of large-scale structures and the thermal history of the universe. Future 21 cm forest observations, sensitive to small scales and to the thermal state of the intergalactic medium (IGM), could probe the existence of such PBHs. In this article, we show that the shot-noise isocurvature mode on small scales induced by the presence of PBHs can enhance the abundance of low-mass halos, or minihalos, and thus the number of 21 cm absorption lines. However, if the mass of PBHs is as large as M_PBH ≳ 10 M_⊙, with an abundant enough fraction of PBHs as DM, f_PBH, the IGM heating due to accretion onto the PBHs counteracts the enhancement due to the isocurvature mode, reducing the number of absorption lines instead. The concurrence of both effects imprints distinctive signatures on the number of absorbers, allowing the abundance of PBHs to be bounded. We compute the prospects for constraining PBHs with future 21 cm forest observations, finding achievable competitive upper limits on the abundance as low as f_PBH ~ 10⁻³ at M_PBH = 100 M_⊙, or even lower at larger masses, in regions of the parameter space unexplored by current probes. The impact of astrophysical X-ray sources on the IGM temperature is also studied, which could potentially weaken the bounds.
|
Choi, K. Y., Gong, J. O., Joh, J., Park, W. I., & Seto, O. (2023). Light cold dark matter from non-thermal decay. Phys. Lett. B, 845, 138126–8pp.
Abstract: We investigate the mass range and the corresponding free-streaming length scale of dark matter produced non-thermally from the decay of heavy objects, which can be either dominant or sub-dominant at the moment of decay. We show that the resulting dark matter could be very light, well below the keV scale, with a free-streaming length satisfying the Lyman-alpha constraints. We demonstrate two explicit examples of such light cold dark matter.
|
Zornoza, J. D. (2021). Review on Indirect Dark Matter Searches with Neutrino Telescopes. Universe, 7(11), 415–10pp.
Abstract: The search for dark matter is one of the hottest topics in physics today. The fact that about 80% of the matter of the Universe is of unknown nature has triggered an intense experimental activity to detect this kind of matter, and a no less intense effort on the theory side to explain it. Given that the properties of dark matter are not well known, searches on different fronts are mandatory. Neutrino telescopes are part of this experimental quest and offer specific advantages. Among the targets in which to look for dark matter, the Sun and the Galactic Centre are the most promising ones. Considering models of the dark matter density in the Sun, neutrino telescopes have set the best limits on the spin-dependent WIMP-proton scattering cross-section. Moreover, they are competitive in constraining the thermally averaged annihilation cross-section for high WIMP masses when looking at the Galactic Centre. Other results are also reviewed.
|