Davesne, D., Pastore, A., & Navarro, J. (2014). Fitting N3LO pseudo-potentials through central plus tensor Landau parameters. J. Phys. G, 41(6), 065104–12pp.
Abstract: Landau parameters determined from phenomenological finite-range interactions are used to estimate the parameters of next-to-next-to-next-to-leading-order (N3LO) pseudo-potentials. The parameter sets obtained in this way are shown to lead to consistent results for saturation properties. The uniqueness of this procedure is discussed, and an estimate of the error induced by the truncation at N3LO is given.
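At fixed density the Landau parameters are linear combinations of the pseudo-potential couplings, so the fit described above reduces to a (typically over-determined) linear least-squares problem. The sketch below is purely illustrative: the matrix entries and target values are placeholders, not the relations or numbers of the paper.

```python
import numpy as np

# Hypothetical linear map M relating pseudo-potential couplings c to
# central-plus-tensor Landau parameters g (g = M @ c at fixed density).
# All numbers here are illustrative placeholders, not the paper's values.
M = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.3, 0.1, 0.8]])            # 4 Landau parameters, 3 couplings
g_target = np.array([0.9, 1.4, 0.7, 1.1])  # parameters from a finite-range force

# Over-determined system: solve in the least-squares sense.
c_fit, residuals, rank, _ = np.linalg.lstsq(M, g_target, rcond=None)

# Rough truncation-error proxy: RMS of the residual g - M @ c.
rms = np.sqrt(np.mean((g_target - M @ c_fit) ** 2))
```

A non-zero residual signals that the truncated pseudo-potential cannot reproduce all Landau parameters simultaneously, which is the kind of truncation error the paper quantifies.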
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The true amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which lead to signal pile-up. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pile-up. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural-network algorithm on an FPGA are also presented.
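The baseline FIR approach can be illustrated with a matched-filter-style amplitude estimate: for a known normalized pulse shape g and white noise, the least-squares weights are g/(g·g). The pulse-shape values and noise level below are made up for the sketch and are not the ATLAS TileCal pulse or filter.

```python
import numpy as np

# Sketch of classical FIR amplitude reconstruction, assuming a known
# normalized pulse shape g and white Gaussian noise (illustrative values).
g = np.array([0.0, 0.3, 0.7, 1.0, 0.7, 0.3, 0.0])  # normalized pulse shape
w = g / np.dot(g, g)   # least-squares FIR weights: A_hat = w . s is unbiased

rng = np.random.default_rng(0)
A_true = 250.0                                       # injected amplitude
s = A_true * g + rng.normal(0.0, 2.0, size=g.size)   # noisy digitized samples

A_hat = np.dot(w, s)                                 # reconstructed amplitude
```

Under pile-up a second pulse overlaps the samples and biases w·s, which is exactly the regime where the paper's neural network is meant to outperform the fixed-weight filter.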
|
Ferreira, M. N., & Papavassiliou, J. (2023). Gauge Sector Dynamics in QCD. Particles, 6(1), 312–363.
Abstract: The dynamics of the QCD gauge sector give rise to non-perturbative phenomena that are crucial for the internal consistency of the theory; most notably, they account for the generation of a gluon mass through the action of the Schwinger mechanism, the taming of the Landau pole, the ensuing stabilization of the gauge coupling, and the infrared suppression of the three-gluon vertex. In the present work, we review some key advances in the ongoing investigation of this sector within the framework of the continuum Schwinger function methods, supplemented by results obtained from lattice simulations.
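The gluon mass generation reviewed here can be summarized schematically, a hedged sketch rather than the paper's derivation: the Schwinger mechanism endows the Euclidean, Landau-gauge gluon propagator with a momentum-dependent mass, so it saturates in the infrared instead of diverging.

```latex
\[
  \Delta(q^2) \;=\; \frac{1}{q^2 + m^2(q^2)}\,,
  \qquad
  \Delta(0) \;=\; \frac{1}{m^2(0)} \;<\; \infty .
\]
% The same m^2(q^2) tames the Landau pole: the associated RG-invariant
% coupling freezes at a finite infrared value instead of diverging.
```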
|
LHCb Collaboration (Aaij, R., et al.), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Helium identification with LHCb. J. Instrum., 19(2), P02010–23pp.
Abstract: The identification of helium nuclei at LHCb is achieved using a method based on measurements of ionisation losses in the silicon sensors and timing measurements in the Outer Tracker drift tubes. The background from photon conversions is reduced using the RICH detectors and an isolation requirement. The method is developed using pp collision data at √s = 13 TeV recorded by the LHCb experiment in the years 2016 to 2018, corresponding to an integrated luminosity of 5.5 fb⁻¹. A total of around 10⁵ helium and antihelium candidates are identified with negligible background contamination. The helium identification efficiency is estimated to be approximately 50%, with a corresponding background rejection rate of up to O(10¹²). These results demonstrate the feasibility of a rich programme of measurements of QCD and astrophysics interest involving light nuclei.
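The ionisation-loss separation rests on dE/dx scaling with charge squared, so helium (z = 2) deposits about four times the energy of a singly charged particle. A toy sketch with Gaussian-smeared losses, using invented resolutions rather than LHCb calibration values, shows how a simple threshold already yields high efficiency and strong rejection:

```python
import numpy as np

# Toy z^2 separation in silicon dE/dx, in units of a minimum-ionizing
# z = 1 particle; widths and threshold are illustrative assumptions.
rng = np.random.default_rng(1)
dEdx_z1 = rng.normal(1.0, 0.15, size=100_000)  # singly charged background
dEdx_he = rng.normal(4.0, 0.50, size=100_000)  # helium: ~z^2 = 4x deposit

cut = 2.5                                      # selection threshold
eff_he = np.mean(dEdx_he > cut)                # helium efficiency
mistag = np.mean(dEdx_z1 > cut)                # z = 1 leakage past the cut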
|
DUNE Collaboration (Abud, A. A., et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034–35pp.
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented as an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
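The "one thread per pixel" pattern that makes this simulation GPU-friendly can be sketched with NumPy vectorization as a CPU stand-in for a Numba CUDA kernel. The Gaussian charge-cloud model, pixel pitch, and grid size below are illustrative placeholders, not the DUNE simulator's actual physics or geometry.

```python
import numpy as np

# Data-parallel sketch: the induced charge on every pixel is computed in
# one vectorized pass rather than a Python loop over the channels.
pitch = 0.4                                # pixel pitch (cm), assumed
nx = ny = 32
px = (np.arange(nx) + 0.5) * pitch         # pixel-center x coordinates
py = (np.arange(ny) + 0.5) * pitch         # pixel-center y coordinates
X, Y = np.meshgrid(px, py, indexing="ij")  # (nx, ny) grid of pixel centers

q, x0, y0, sigma = 1.0, 6.4, 6.4, 0.6      # toy charge cloud: total q, position, width

# Gaussian charge sharing across the 32*32 = 1024 channels, all at once.
Q = q * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
Q /= Q.sum()                               # normalize to total charge q
```

On a GPU the same per-pixel expression becomes the body of a CUDA kernel, with one thread per (i, j) pixel index, which is why the channel count scales so well.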
|
Cabrera, M. E., Casas, J. A., Mitsou, V. A., Ruiz de Austri, R., & Terron, J. (2012). Histogram comparison tools for the search of new physics at LHC. Application to the CMSSM. J. High Energy Phys., 04(4), 133–27pp.
Abstract: We propose a rigorous and effective way to compare experimental and theoretical histograms, incorporating the different sources of statistical and systematic uncertainty. This is a useful tool to extract as much information as possible from the comparison of experimental data with theoretical simulations, optimizing the chances of identifying new physics at the LHC. We illustrate this by showing how a search in the CMSSM parameter space, using Bayesian techniques, can effectively find the correct values of the CMSSM parameters by comparing histograms of multijet + missing-transverse-momentum events displayed in the effective-mass variable. The procedure is in fact very efficient at identifying the true supersymmetric model, should supersymmetry indeed be present and accessible at the LHC.
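The statistical core of such a comparison can be sketched with a binned Poisson likelihood-ratio (Baker-Cousins) chi-square between observed counts and a model prediction; the systematic nuisance parameters that the paper folds in are omitted here, and the counts are toy numbers.

```python
import numpy as np

# Baker-Cousins chi-square: 2 * sum[ mu - n + n * ln(n/mu) ],
# with the n = 0 bins contributing just 2 * mu (the n -> 0 limit).
def poisson_chi2(n, mu):
    n = np.asarray(n, dtype=float)
    mu = np.asarray(mu, dtype=float)
    term = mu - n
    mask = n > 0
    term[mask] += n[mask] * np.log(n[mask] / mu[mask])
    return 2.0 * term.sum()

n_obs = np.array([12, 30, 45, 20, 5])            # observed counts (toy)
mu_a = np.array([10.0, 32.0, 44.0, 18.0, 6.0])   # model A prediction
mu_b = np.array([25.0, 25.0, 25.0, 25.0, 25.0])  # model B prediction

# The better-matching model yields the smaller chi-square.
chi2_a, chi2_b = poisson_chi2(n_obs, mu_a), poisson_chi2(n_obs, mu_b)
```

Scanning such a statistic over the CMSSM parameter space, with systematics profiled or marginalized, is what lets the Bayesian fit home in on the true parameter point.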
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2015). Identification of beauty and charm quark jets at LHCb. J. Instrum., 10, P06013–29pp.
Abstract: Identification of jets originating from beauty and charm quarks is important for measuring Standard Model processes and for searching for new physics. The performance of algorithms developed to select b- and c-quark jets is measured using data recorded by LHCb from proton-proton collisions at √s = 7 TeV in 2011 and at √s = 8 TeV in 2012. The efficiency for identifying a b (c) jet is about 65% (25%), with a probability of misidentifying a light-parton jet of 0.3%, for jets with transverse momentum pT > 20 GeV and pseudorapidity 2.2 < η < 4.2. The dependence of the performance on the pT and η of the jet is also measured.
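Measuring the performance as a function of pT reduces to counting tagged jets per bin. A minimal sketch with made-up counts (bin edges and numbers are illustrative, not LHCb's), including the binomial uncertainty on each efficiency:

```python
import numpy as np

# Per-pT-bin tagging efficiency with binomial errors (toy numbers).
pt_edges = np.array([20.0, 30.0, 50.0, 100.0])  # GeV, illustrative binning
n_total = np.array([5000, 3000, 1200])          # b jets per bin (toy)
n_tagged = np.array([3100, 2010, 850])          # jets passing the tagger

eff = n_tagged / n_total                        # efficiency per bin
err = np.sqrt(eff * (1.0 - eff) / n_total)      # binomial uncertainty
```

The misidentification probability for light-parton jets is computed the same way on a light-enriched sample, giving the efficiency-versus-mistag working points quoted above.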
|
LHCb Collaboration (Aaij, R., et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Identification of charm jets at LHCb. J. Instrum., 17(2), P02028–23pp.
Abstract: The identification of charm jets is achieved at LHCb for data collected in 2015-2018 using a method based on the properties of displaced vertices reconstructed and matched with jets. The performance of this method is determined using a dijet calibration dataset recorded by the LHCb detector and selected such that the jets are unbiased in quantities used in the tagging algorithm. The charm-tagging efficiency is reported as a function of the transverse momentum of the jet. The measured efficiencies are compared to those obtained from simulation and found to be in good agreement.
|
Panes, B., Eckner, C., Hendriks, L., Caron, S., Dijkstra, K., Johannesson, G., et al. (2021). Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge. Astron. Astrophys., 656, A62–18pp.
Abstract: Context. At GeV energies, the sky is dominated by the interstellar emission from the Galaxy. With limited statistics and spatial resolution, accurately separating point sources is therefore challenging. Aims. Here we present the first application of deep-learning-based algorithms to automatically detect and classify point sources from gamma-ray data. For concreteness we refer to this approach as AutoSourceID. Methods. To detect point sources, we utilized U-shaped convolutional networks for image segmentation and k-means for source clustering and localization. We also explored the Centroid-Net algorithm, which is designed to find and count objects. Using two algorithms allows for a cross-check of the results, while a combination of their results can be used to improve performance. The training data are based on 9.5 years of exposure from the Fermi Large Area Telescope (Fermi-LAT), and we used source properties of active galactic nuclei (AGNs) and pulsars (PSRs) from the fourth Fermi-LAT source catalog in addition to several models of background interstellar emission. The results of the localization algorithm are fed into a classification neural network that is trained to separate the three general source classes (AGNs, PSRs, and FAKE sources). Results. We compared our localization algorithms qualitatively with traditional methods and find them to have similar detection thresholds. We also demonstrate the robustness of our source localization algorithms to modifications in the interstellar emission models, which presents a clear advantage over traditional methods. The classification network is able to discriminate between the three classes with a typical accuracy of ~70%, as long as balanced data sets are used in classification training. We have published our training data sets and analysis scripts online and invite the community to join the data challenge aimed at improving the localization and classification of gamma-ray point sources.
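The clustering step after segmentation can be sketched with a tiny k-means: pixels flagged by the U-Net are grouped, and the cluster means become candidate source positions. The mask below is a toy of two well-separated pixel blobs, not Fermi-LAT data, and the k-means here is a minimal hand-rolled version.

```python
import numpy as np

# Minimal Lloyd's-algorithm k-means over flagged pixel coordinates.
def kmeans(points, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center, then recompute the means
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

# Two well-separated blobs of "source" pixels in a toy 64x64 map.
rng = np.random.default_rng(1)
blob1 = rng.normal([15.0, 20.0], 1.0, size=(40, 2))
blob2 = rng.normal([45.0, 50.0], 1.0, size=(40, 2))
pts = np.vstack([blob1, blob2])

centers, labels = kmeans(pts, k=2)
```

In the paper's pipeline the number of clusters is not fixed a priori; this sketch only shows the localization idea for a known k.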
|
Renner, J., Cervera-Villanueva, A., Hernando, J. A., Izmaylov, A., Monrabal, F., Muñoz, J., et al. (2015). Improved background rejection in neutrinoless double beta decay experiments using a magnetic field in a high pressure xenon TPC. J. Instrum., 10, P12020–19pp.
Abstract: We demonstrate that the application of an external magnetic field could lead to improved background rejection in neutrinoless double-beta (0νββ) decay experiments using a high-pressure xenon (HPXe) TPC. HPXe chambers are capable of imaging electron tracks, a feature that enhances the separation between signal events (the two electrons emitted in the 0νββ decay of ¹³⁶Xe) and background events, arising chiefly from single electrons of kinetic energy compatible with the end-point of the 0νββ decay (Qββ). Applying an external magnetic field of sufficiently high intensity (in the range of 0.5–1 T for operating pressures in the range of 5–15 atmospheres) causes the electrons to produce helical tracks. Assuming the tracks can be properly reconstructed, the sign of the curvature can be determined at several points along these tracks, and such information can be used to separate signal (0νββ) events, containing two electrons producing a track with two different directions of curvature, from background (single-electron) events, producing a track that should spiral in a single direction. Due to electron multiple scattering, this strategy is not perfectly efficient on an event-by-event basis, but a statistical estimator can be constructed which rejects background events by one order of magnitude at a moderate cost (about 30%) in signal efficiency. Combining this estimator with the excellent energy resolution and topological signature identification characteristic of the HPXe TPC, it is possible to reach a background rate of less than one count per ton-year of exposure. Such a low background rate is an essential feature of the next generation of 0νββ experiments, which aim to fully explore the inverted hierarchy of neutrino masses.
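The local curvature sign that this estimator relies on can be read off from triplets of consecutive track points: the z-component of the cross product of the two segment vectors gives the sense of rotation. The sketch below uses a toy circular-arc track, not the paper's reconstruction; a single electron yields one dominant sign, while a 0νββ candidate would show both signs along its track.

```python
import numpy as np

# Sign of the local curvature from consecutive 2D track-point triplets:
# cross_z > 0 means a left (counter-clockwise) turn at that point.
def curvature_signs(points):
    p = np.asarray(points, dtype=float)
    v1 = p[1:-1] - p[:-2]   # segment from point i-1 to i
    v2 = p[2:] - p[1:-1]    # segment from point i to i+1
    cross_z = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    return np.sign(cross_z)

# Toy single-electron track: a half-circle traversed counter-clockwise.
t = np.linspace(0.0, np.pi, 50)
arc = np.column_stack([np.cos(t), np.sin(t)])

signs = curvature_signs(arc)
```

With multiple scattering the sign flips occasionally even for single electrons, which is why the paper builds a statistical estimator over many triplets rather than cutting on any single one.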
|