Tonev, D., et al., & Gadea, A. (2021). Transition probabilities in P-31 and S-31: A test for isospin symmetry. Phys. Lett. B, 821, 136603–6pp.
Abstract: Excited states in the mirror nuclei P-31 and S-31 were populated in the 1p and 1n exit channels of the reaction Ne-20 + C-12, at a beam energy of 33 MeV. The Ne-20 beam was delivered for the first time by the PIAVE-ALPI accelerator of the Laboratori Nazionali di Legnaro. Angular correlations of coincident gamma-rays and Doppler-shift attenuation lifetime measurements were performed using the multi-detector array GASP in conjunction with the EUCLIDES charged-particle detector. In the observed B(E1) strengths, the isoscalar component, amounting to 24% of the isovector one, provides strong evidence for breaking of the isospin symmetry in the A = 31 mass region. Self-consistent beyond-mean-field calculations using the Equation of Motion method, based on a chiral potential and including two- and three-body forces, reproduce the experimental B(E1) strengths well, reinforcing our conclusion. Coherent mixing from higher-lying states involving the Giant Isovector Monopole Resonance accounts well for the observed effect. The breaking of the isospin symmetry originates from the violation of charge symmetry in the two- and three-body parts of the potential, which is related only to the Coulomb interaction.
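For orientation, the isoscalar/isovector language above refers to the usual decomposition of the E1 matrix elements of analogue transitions in the two mirror partners; a schematic form (sign and normalization conventions vary, and this is not necessarily the authors' exact notation) is

\[ M_{^{31}\mathrm{P}}(E1) = M_{IS} + M_{IV}, \qquad M_{^{31}\mathrm{S}}(E1) = M_{IS} - M_{IV}. \]

In the long-wavelength limit the isoscalar E1 amplitude vanishes for isospin-pure states, so the reported M_IS of about 0.24 M_IV is a direct measure of isospin-symmetry breaking.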
|
Diklic, J., et al., & Jurado, M. (2023). Transfer reactions in 206Pb+118Sn: From quasielastic to deep-inelastic processes. Phys. Rev. C, 107(1), 014619–8pp.
Abstract: We measured multinucleon transfer reactions for the 206Pb + 118Sn system at E_lab = 1200 MeV by employing the large solid angle magnetic spectrometer PRISMA. Differential and total cross sections and Q-value distributions have been obtained for a variety of neutron and proton pick-up and stripping channels. The Q-value distributions show how the quasielastic and deep-inelastic processes depend on the mass and charge of the transfer products. The corresponding cross sections have been compared with calculations performed with the GRAZING code. An overall good agreement is found for most of the few-nucleon transfer channels. The underestimation of the data for channels involving a large number of transferred nucleons indicates that more complicated processes populate the given isotopes.
|
Hirn, J., Sanz, V., Garcia Navarro, J. E., Goberna, M., Montesinos-Navarro, A., Navarro-Cano, J. A., et al. (2024). Transfer learning of species co-occurrence patterns between plant communities. Ecol. Inform., 83, 102826–8pp.
Abstract: Aim: The use of neural networks (NNs) is spreading to all areas of life, and Ecology is no exception. However, the data-hungry nature of NNs can leave out many small, valuable datasets. Here we show how to apply transfer learning to rescue small datasets that can be invaluable in understanding patterns of species co-occurrence. Location: Semiarid plant communities in Spain and México. Time period: 2016-2022. Major taxa studied: Angiosperms. Methods: Based on a large sample of plant species co-occurrence in vegetation patches in a semi-arid area of eastern Spain, we fit a generative artificial intelligence (AI) model that correctly reproduces which species live with which in these patches. Subsequently, we train the same type of model on two communities for which we only have smaller datasets (another semi-arid community in eastern Spain, and a tropical community in Mexico). Results: When we transfer the knowledge learnt from the large dataset directly to the other two, the predictions improve for the community more similar to our reference one. For the more dissimilar community, improving the accuracy of the transfer requires further tuning of the model to the local data. In particular, the knowledge transferred relates primarily to species frequency and, to a lesser extent, to their phylogenetic relationships, which are known to be determinants of species interaction patterns. Main conclusions: This AI-based approach can be applied to communities that are similar or not so similar to the reference community, opening the door to systematic transfer learning for accurate predictions on small datasets. Interestingly, this transfer operates by matching unrelated species between the origin and target datasets, implying that arbitrary datasets can then be transferred to, or even combined with, one another in order to augment each other, irrespective of the species involved, potentially allowing such models to be applied to a wide range of plant communities in different climates.
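The transfer-learning strategy described above can be illustrated with a minimal sketch (in Python/PyTorch, with a hypothetical architecture, layer sizes and stand-in data; the paper's actual generative model is not reproduced here): a small network is pre-trained on the large co-occurrence dataset, then fine-tuned on the smaller one with the pre-trained encoder frozen.

import torch
import torch.nn as nn

N_SPECIES = 120   # assumed number of species columns (illustrative only)
LATENT = 16

class CooccurrenceModel(nn.Module):
    # Toy generative model of patch composition (which species co-occur)
    def __init__(self, n_species, latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_species, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, n_species)

    def forward(self, x):
        return self.decoder(self.encoder(x))   # logits per species

def train(model, patches, epochs=50, lr=1e-3, freeze_encoder=False):
    # For transfer, freeze the pre-trained encoder and only adapt the output layer
    if freeze_encoder:
        for p in model.encoder.parameters():
            p.requires_grad = False
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(patches), patches)  # reconstruct presence/absence
        loss.backward()
        opt.step()
    return model

# Pre-train on the large reference dataset, then transfer to a small one
large_patches = (torch.rand(5000, N_SPECIES) < 0.1).float()  # stand-in data
small_patches = (torch.rand(200, N_SPECIES) < 0.1).float()   # stand-in data
model = train(CooccurrenceModel(N_SPECIES, LATENT), large_patches)
model = train(model, small_patches, epochs=20, freeze_encoder=True)  # fine-tune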
|
LHCb Collaboration (Aaij, R., et al.), Jaimes Elles, S. J., Jashal, B. K., Libralon, S., Martinez-Vidal, F., Oyanguren, A., et al. (2024). Tracking of charged particles with nanosecond lifetimes at LHCb. Eur. Phys. J. C, 84(7), 761–16pp.
Abstract: A method is presented to reconstruct charged particles with lifetimes between 10 ps and 10 ns, which considers a combination of their decay products and the partial tracks created by the initial charged particle. Using the Xi^- baryon as a benchmark, the method is demonstrated with simulated events and proton-proton collision data at √s = 13 TeV, corresponding to an integrated luminosity of 2.0 fb^-1 collected with the LHCb detector in 2018. Significant improvements in the angular resolution and the signal purity are obtained. The method is implemented as part of the LHCb Run 3 event trigger in a set of requirements to select detached hyperons. This is the first demonstration of the applicability of this approach at the LHC, and the first to show its scaling with instantaneous luminosity.
|
Caron, S., Dobreva, N., Ferrer Sanchez, A., Martin-Guerrero, J. D., Odyurt, U., Ruiz de Austri, R., et al. (2025). Trackformers: in search of transformer-based particle tracking for the high-luminosity LHC era. Eur. Phys. J. C, 85(4), 460–20pp.
Abstract: High-Energy Physics experiments are facing a multi-fold data increase with every new iteration. This is certainly the case for the upcoming High-Luminosity LHC upgrade. Such increased data processing requirements force revisions to almost every step of the data processing pipeline. One such step in need of an overhaul is the task of particle track reconstruction, a.k.a. tracking. A Machine Learning-assisted solution is expected to provide significant improvements, since the most time-consuming step in tracking is the assignment of hits to particles or track candidates. This is the topic of this paper. We take inspiration from large language models. As such, we consider two approaches: the prediction of the next word in a sentence (next hit point in a track), as well as the one-shot prediction of all hits within an event. In an extensive design effort, we have experimented with three models based on the Transformer architecture and one model based on the U-Net architecture, performing track association predictions for collision event hit points. In our evaluation, we consider a spectrum of simple to complex representations of the problem, eliminating designs with lower metrics early on. We report extensive results, covering both prediction accuracy (score) and computational performance. We have made use of the REDVID simulation framework, as well as reductions applied to the TrackML data set, to compose five data sets from simple to complex for our experiments. The results highlight distinct advantages among different designs in terms of prediction accuracy and computational performance, demonstrating the efficiency of our methodology. Most importantly, the results show the viability of a one-shot encoder-classifier based Transformer solution as a practical approach for the task of tracking.
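As a rough illustration of the one-shot approach mentioned above (not the authors' actual Trackformers models; hit features, dimensions and the fixed class count are placeholder assumptions), a Transformer encoder can read all hits of an event at once and emit a track label for every hit in a single pass:

import torch
import torch.nn as nn

N_HITS, N_FEATURES, N_TRACK_CLASSES = 200, 3, 50   # illustrative sizes only

class HitToTrackClassifier(nn.Module):
    # Encoder attends over all hits of an event; head assigns one label per hit
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(N_FEATURES, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, N_TRACK_CLASSES)

    def forward(self, hits):                              # (batch, N_HITS, N_FEATURES)
        return self.head(self.encoder(self.embed(hits)))  # (batch, N_HITS, classes)

model = HitToTrackClassifier()
event = torch.randn(1, N_HITS, N_FEATURES)   # stand-in for hit coordinates
track_ids = model(event).argmax(dim=-1)      # predicted track label per hit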
|
Abreu, L. M., Wang, W. F., & Oset, E. (2023). Traces of the new a0(1780) resonance in the J/Psi -> phi K+ K- (K0 K̄0) reaction. Eur. Phys. J. C, 83(3), 243–11pp.
Abstract: We study the J/Psi -> phi K+ K- decay, looking for differences in the production rates of K+ K- or K0 K̄0 in the region of 1700-1800 MeV, where two resonances appear dynamically generated from the vector-vector interaction. Two resonances are known experimentally in that region, the f0(1710) and a new resonance reported by the BABAR and BESIII collaborations. The K K̄ pair should be produced with I = 0 in that reaction, but due to the different K*0 and K*+ masses some isospin violation appears. Yet, due to the large width of the K*, the violation obtained is very small and the rates of K+ K- and K0 K̄0 production are equal within 5%. However, we also find that, due to the step needed to convert two vectors into K K̄, a shape can appear in the K K̄ mass distribution that can mimic a0 production around the K* K* threshold, although it is simply a threshold effect.
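For context, the isospin argument used above follows from the standard decomposition of the kaon pairs (phase conventions aside):

\[ |K^+ K^-\rangle = \tfrac{1}{\sqrt{2}}\bigl(|I{=}0\rangle + |I{=}1, I_3{=}0\rangle\bigr), \qquad |K^0 \bar{K}^0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|I{=}0\rangle - |I{=}1, I_3{=}0\rangle\bigr), \]

so a source that is purely I = 0 yields equal K+ K- and K0 K̄0 rates; the small deviation found here is driven by the K*0-K*+ mass difference.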
|
Lerendegui-Marco, J., Balibrea-Correa, J., Babiano-Suarez, V., Ladarescu, I., & Domingo-Pardo, C. (2022). Towards machine learning aided real-time range imaging in proton therapy. Sci. Rep., 12(1), 2735–17pp.
Abstract: Compton imaging represents a promising technique for range verification in proton therapy treatments. In this work, we report on the advantageous aspects of the i-TED detector for proton-range monitoring, based on the results of the first Monte Carlo study of its applicability to this field. i-TED is an array of Compton cameras specifically designed for neutron-capture nuclear physics experiments, which are characterized by gamma-ray energies of up to 5-6 MeV, rather low gamma-ray emission yields, and very intense neutron-induced gamma-ray backgrounds. Our developments to cope with these three aspects are concomitant with those required in the field of hadron therapy, especially in terms of high efficiency for real-time monitoring, low sensitivity to neutron backgrounds and reliable performance at high gamma-ray energies. We find that signal-to-background ratios can be appreciably improved with i-TED thanks to its light-weight design and the low neutron-capture cross sections of its LaCl3 crystals, when compared to other similar systems based on LYSO, CdZnTe or LaBr3. Its high time resolution (CRT ~ 500 ps) represents an additional advantage for background suppression when operated in pulsed HT mode. Each i-TED Compton module features two detection planes of very large LaCl3 monolithic crystals, thereby achieving a high coincidence efficiency of 0.2% for a point-like 1 MeV gamma-ray source at 5 cm distance. This leads to sufficient statistics for reliable image reconstruction with an array of four i-TED detectors, assuming clinical intensities of 10^8 protons per treatment point. A two-plane design was preferred over a three-plane one owing to the higher attainable efficiency for double time coincidences than for threefold events. The loss of full-energy events for high-energy gamma-rays is compensated by means of machine-learning-based algorithms, which allow one to enhance the signal-to-total ratio by up to a factor of 2.
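The Compton-imaging principle underlying i-TED relies on standard two-plane kinematics (a textbook relation, independent of the machine-learning algorithms discussed here): for energy deposits E_1 in the scatterer and E_2 in the absorber of a full-energy event (incident energy E_1 + E_2), the source lies on a cone of half-angle theta given by

\[ \cos\theta = 1 - m_e c^2 \left( \frac{1}{E_2} - \frac{1}{E_1 + E_2} \right). \]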
|
Holz, S., Plenter, J., Xiao, C. W., Dato, T., Hanhart, C., Kubis, B., et al. (2021). Towards an improved understanding of eta -> gamma* gamma*. Eur. Phys. J. C, 81(11), 1002–15pp.
Abstract: We argue that high-quality data on the reaction e+ e- -> pi+ pi- eta will allow one to determine the doubly-virtual form factor eta -> gamma* gamma* in a model-independent way with controlled accuracy. This is an important step towards a reliable evaluation of the hadronic light-by-light scattering contribution to the anomalous magnetic moment of the muon. When analyzing the existing data for e+ e- -> pi+ pi- eta for total energies squared k^2 > 1 GeV^2, we demonstrate that the effect of the a2 meson provides a natural breaking mechanism for the commonly employed factorization ansatz in the doubly-virtual form factor F_{eta gamma* gamma*}(q^2, k^2). However, better data are needed to draw firm conclusions.
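The factorization ansatz whose breaking is quantified above is usually written as (schematically; normalization conventions vary)

\[ F_{\eta\gamma^*\gamma^*}(q^2, k^2) \;\approx\; \frac{F_{\eta\gamma^*\gamma^*}(q^2, 0)\, F_{\eta\gamma^*\gamma^*}(0, k^2)}{F_{\eta\gamma^*\gamma^*}(0, 0)}, \]

i.e. the doubly-virtual form factor is approximated by a product of singly-virtual ones; the a2 contribution identified in the analysis breaks this product structure.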
|
Baeza-Ballesteros, J., Donini, A., Molina-Terriza, G., Monrabal, F., & Simon, A. (2024). Towards a realistic setup for a dynamical measurement of deviations from Newton's 1/r^2 law: the impact of air viscosity. Eur. Phys. J. C, 84(6), 596–20pp.
Abstract: A novel experimental setup to measure deviations from the 1/r^2 distance dependence of Newtonian gravity was proposed in Donini and Marimon (Eur Phys J C 76:696, 2016). The underlying theoretical idea was to study the orbits of a microscopically-sized planetary system composed of a “Satellite”, with mass m_S ~ O(10^-9) g, and a “Planet”, with mass M_P ~ O(10^-5) g, at an initial distance of hundreds of microns. The detection of precession of the orbit in this system would be an unambiguous indication of a central potential with terms that scale with the distance differently from 1/r. This is a huge advantage with respect to the measurement of the absolute strength of the attraction between two bodies, as most electrically-induced background potentials do indeed scale as 1/r. Detection of orbit precession is unaffected by these effects, allowing for better sensitivities. In Baeza-Ballesteros et al. (Eur Phys J C 82:154, 2022), the impact of other subleading backgrounds that may induce orbit precession, such as, e.g., the electrical Casimir force or general relativity, was studied in detail. It was found that the proposed setup could test Yukawa-like corrections, alpha x exp(-r/lambda), to the 1/r potential with couplings as low as alpha ~ 10^-2 for distances as small as lambda ~ 10 μm, improving present bounds by roughly an order of magnitude. In this paper, we start to move from a theoretical study of the proposal to a more realistic implementation of the experimental setup. As a first step, we study the impact of air viscosity on the proposed setup and see how the setup should be modified in order to preserve the theoretical sensitivity achieved in Donini and Marimon (2016) and Baeza-Ballesteros et al. (2022).
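The Yukawa-like correction referred to above corresponds to the standard parametrization of deviations from Newtonian gravity used to quote such bounds,

\[ V(r) = -\frac{G\, m_S M_P}{r}\left[ 1 + \alpha\, e^{-r/\lambda} \right], \]

so the quoted sensitivity corresponds to couplings alpha ~ 10^-2 at ranges lambda ~ 10 μm.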
|
Bennett, J. J., Buldgen, G., de Salas, P. F., Drewes, M., Gariazzo, S., Pastor, S., et al. (2021). Towards a precision calculation of the effective number of neutrinos N_eff in the Standard Model. Part II. Neutrino decoupling in the presence of flavour oscillations and finite-temperature QED. J. Cosmol. Astropart. Phys., 04(4), 073–33pp.
Abstract: We present in this work a new calculation of the standard-model benchmark value for the effective number of neutrinos, N_eff^SM, that quantifies the cosmological neutrino-to-photon energy densities. The calculation takes into account neutrino flavour oscillations, finite-temperature effects in the quantum electrodynamics plasma to O(e^3), where e is the elementary electric charge, and a full evaluation of the neutrino-neutrino collision integral. We provide furthermore a detailed assessment of the uncertainties in the benchmark N_eff^SM value, through testing the value's dependence on (i) optional approximate modelling of the weak collision integrals, (ii) measurement errors in the physical parameters of the weak sector, and (iii) numerical convergence, particularly in relation to momentum discretisation. Our new, recommended standard-model benchmark is N_eff^SM = 3.0440 +/- 0.0002, where the nominal uncertainty is attributed predominantly to errors incurred in the numerical solution procedure (|delta N_eff| ~ 10^-4), augmented by measurement errors in the solar mixing angle sin^2(theta_12) (|delta N_eff| ~ 10^-4).
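For reference, N_eff is defined through the standard relation between the total radiation and photon energy densities after electron-positron annihilation,

\[ \rho_{\mathrm{rad}} = \rho_\gamma \left[ 1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\mathrm{eff}} \right], \]

so that instantaneous neutrino decoupling would give exactly N_eff = 3; the residual heating and finite-temperature QED corrections computed here shift it to 3.0440.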
|