Dalla Brida, M., Höllwieser, R., Knechtli, F., Korzec, T., Ramos, A., Sint, S., et al. (2026). High-precision calculation of the quark-gluon coupling from lattice QCD. Nature, 652(8109), 328–334.
Abstract: The outcomes of modern particle physics experiments, such as proton-proton collisions at the Large Hadron Collider at CERN (European Organization for Nuclear Research), depend crucially on the precise description of the scattering processes in terms of the fundamental forces. Among all the known forces that contribute, the limited understanding of the strong nuclear force is a key source of inaccuracy. At the fundamental level, the strong force is described by quantum chromodynamics, the theory of quarks and gluons. Their coupling, αs, becomes weaker at high energies (asymptotic freedom), enabling power-series expansions in αs, but the confinement of quarks in hadronic bound states usually requires additional model assumptions. Consequently, determinations of αs from experiment mostly remain subject to large systematic theory errors [1,2]. Here we report the model-free determination of αs with unprecedented precision from low-energy experimental input combined with large-scale numerical simulations of the first-principles formulation of quantum chromodynamics on a space-time lattice. The uncertainty, about half that of all other results combined [3], originates predominantly from the statistical Monte Carlo evaluation and has a clear probabilistic interpretation. The result for αs describes both low-energy hadronic physics with the help of lattice quantum chromodynamics and high-energy scattering using the perturbative expansion. By removing a source of theoretical uncertainty, our estimate of αs could enable markedly improved analyses of many high-energy experiments [4]. This will increase the likelihood that small effects of yet-unknown physics are uncovered, and enable stringent precision tests of the Standard Model.
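The energy dependence of αs invoked in the abstract (asymptotic freedom) follows from the renormalization group; a minimal one-loop sketch in Python, assuming a generic reference value αs(MZ) ≈ 0.118 with five active quark flavours (illustrative inputs, not the paper's quoted result):

```python
import math

def alpha_s_one_loop(Q, alpha_ref=0.118, mu_ref=91.1876, nf=5):
    """One-loop running coupling: alpha_ref is alpha_s at scale mu_ref (GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)  # one-loop beta-function coefficient
    return alpha_ref / (1 + alpha_ref * b0 * math.log(Q**2 / mu_ref**2))

print(alpha_s_one_loop(10.0))    # ≈ 0.173: stronger coupling at low energies
print(alpha_s_one_loop(1000.0))  # ≈ 0.088: weaker at high energies (asymptotic freedom)
```

The actual determination uses higher-loop running and lattice input; this sketch only illustrates why a power-series expansion in αs becomes reliable at high energies.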
LHCb Collaboration (Aaij, R., et al.), Fernandez Casani, A., Jaimes Elles, S. J., Jashal, B. K., Libralon, S., Lucio Martinez, M., et al. (2026). Search for heavy neutral leptons in B-meson decays. J. High Energy Phys., 03(3), 178 (30 pp.).
Abstract: A search for long-lived heavy neutral leptons produced in B-meson decays and decaying to a μ±π∓ final state is performed with data collected by the LHCb experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5 fb⁻¹. The results are interpreted in both lepton-number-conserving and lepton-number-violating scenarios. No significant excess is observed. Constraints are placed on the squared mixing element |UμN|² to the active muon neutrino, under the assumption that couplings to other lepton flavours are negligible, in the mass range of 1.6–5.5 GeV.
LHCb Collaboration (Aaij, R., et al.), Fernandez Casani, A., Jaimes Elles, S. J., Jashal, B. K., Libralon, S., Lucio Martinez, M., et al. (2026). Measurement of the W → μν cross-sections as a function of the muon transverse momentum in pp collisions at 5.02 TeV. J. High Energy Phys., 03(3), 148 (37 pp.).
Abstract: The pp → W± (→ μ±νμ) X cross-sections are measured at a proton-proton centre-of-mass energy √s = 5.02 TeV using a dataset corresponding to an integrated luminosity of 100 pb⁻¹ recorded by the LHCb experiment. Considering muons in the pseudorapidity range 2.2 < η < 4.4, the cross-sections are measured differentially in twelve intervals of muon transverse momentum in the range 28 < pT < 52 GeV. Integrated over pT, the measured cross-sections are σ(W⁺ → μ⁺νμ) = 300.9 ± 2.4 ± 3.8 ± 6.0 pb and σ(W⁻ → μ⁻ν̄μ) = 236.9 ± 2.1 ± 2.7 ± 4.7 pb, where the first uncertainties are statistical, the second are systematic, and the third are associated with the luminosity calibration. These integrated results are consistent with theoretical predictions. This analysis introduces a new method to determine the W-boson mass using the measured differential cross-sections corrected for detector effects. The measurement is performed on this statistically limited dataset as a proof of principle and yields mW = 80369 ± 130 ± 33 MeV, where the first uncertainty is experimental and the second is theoretical.
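When a single total error is wanted, independent uncertainties such as the statistical, systematic and luminosity components quoted above are conventionally added in quadrature; a small sketch using the σ(W⁺) numbers from the abstract:

```python
import math

def combine_in_quadrature(*uncertainties):
    """Total uncertainty from independent error sources."""
    return math.sqrt(sum(u * u for u in uncertainties))

# sigma(W+): 300.9 +/- 2.4 (stat) +/- 3.8 (syst) +/- 6.0 (lumi) pb
total = combine_in_quadrature(2.4, 3.8, 6.0)
print(f"300.9 +/- {total:.1f} pb")  # prints: 300.9 +/- 7.5 pb
```

Quadrature addition assumes the three error sources are uncorrelated, which is why the paper quotes them separately.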
ATLAS Collaboration (Aad, G., et al.), Aikot, A., Amos, K. R., Bouchhar, N., Cabrera Urban, S., Cantero, J., et al. (2025). Total Cost of Ownership and Evaluation of Google Cloud Resources for the ATLAS Experiment at the LHC. Comput. Softw. Big Sci., 9, 2 (35 pp.).
Abstract: The ATLAS Google Project was established as part of an ongoing evaluation of the use of commercial clouds by the ATLAS Collaboration, in anticipation of the potential future adoption of such resources by WLCG grid sites to fulfil or complement their computing pledges. Seamless integration of Google cloud resources into the worldwide ATLAS distributed computing infrastructure was achieved at large scale and for an extended period of time, and hence cloud resources are shown to be an effective mechanism to provide additional, flexible computing capacity to ATLAS. For the first time a total cost of ownership analysis has been performed, to identify the dominant cost drivers and explore effective mechanisms for cost control. Network usage significantly impacts the costs of certain ATLAS workflows, underscoring the importance of implementing such mechanisms. Resource bursting has been successfully demonstrated, whilst exposing the true cost of this type of activity. A follow-up to the project is underway to investigate methods for improving the integration of cloud resources in data-intensive distributed computing environments and reducing costs related to network connectivity, which represents the primary expense when extensively utilising cloud resources.
Folgado, M. G., Sanz, V., Hirn, J., Lorenzo-Saez, E., & Urchueguia, J. F. (2025). Towards Predictive Pollution Control Through Traffic Flux Forecasting With Deep Learning: A Case Study in the City of Valencia. Applied AI Lett., 6(1), e106 (15 pp.).
Abstract: Traffic congestion represents a significant urban challenge, with notable implications for public health and environmental well-being. Consequently, urban decision-makers prioritize the mitigation of congestion. This study delves into the efficacy of harnessing extensive data on urban traffic dynamics, coupled with comprehensive knowledge of road networks, to enable Artificial Intelligence (AI) in forecasting traffic flux well in advance. Such forecasts hold promise for informing emission reduction measures, particularly those aligned with Low Emission Zone policies. The investigation centers on Valencia, leveraging its robust traffic sensor infrastructure, one of the most densely deployed worldwide, encompassing approximately 3500 sensors strategically positioned across the city. Employing historical data spanning 2016 and 2017, we undertake the task of training and characterizing a Long Short-Term Memory (LSTM) Neural Network for the prediction of temporal traffic patterns. Our findings demonstrate the LSTM's efficacy in real-time forecasting of traffic flow evolution, facilitated by its ability to discern salient patterns within the dataset.
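Training an LSTM on sensor histories requires slicing each time series into fixed-length input windows paired with a target some steps ahead; a minimal NumPy sketch of this preprocessing (window length, horizon and array shapes are illustrative assumptions, not details from the paper):

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Build (X, y) pairs for a sequence model such as an LSTM.

    series:  array of shape (T, n_sensors)
    returns: X of shape (N, window, n_sensors) and y of shape (N, n_sensors),
             where y[i] is the reading `horizon` steps after window i ends.
    """
    n = len(series) - window - horizon + 1
    X = np.stack([series[i:i + window] for i in range(n)])
    y = series[window + horizon - 1:window + horizon - 1 + n]
    return X, y

# Hourly readings from 2 hypothetical sensors over 10 time steps.
readings = np.arange(20.0).reshape(10, 2)
X, y = make_windows(readings, window=3)
print(X.shape, y.shape)  # (7, 3, 2) (7, 2)
```

The resulting X tensor matches the (samples, timesteps, features) layout that LSTM layers in common deep-learning frameworks expect.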