
Albandea, D., Del Debbio, L., Hernandez, P., Kenway, R., Marsh Rossney, J., & Ramos, A. (2023). Learning trivializing flows. Eur. Phys. J. C, 83(7), 676–14pp.
Abstract: The recent introduction of machine learning techniques, especially normalizing flows, for the sampling of lattice gauge theories has raised hopes of improving the sampling efficiency of the traditional hybrid Monte Carlo (HMC) algorithm. In this work we study a modified HMC algorithm that draws on the seminal work on trivializing flows by Lüscher. Autocorrelations are reduced by sampling from a simpler action that is related to the original action by an invertible mapping, realised through normalizing flow models with a minimal set of training parameters. We test the algorithm in a φ⁴ theory in 2D, where we observe reduced autocorrelation times compared with HMC, and demonstrate that the training can be done at small unphysical volumes and used in physical conditions. We also study the scaling of the algorithm towards the continuum limit under various assumptions on the network architecture.
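The change-of-variables idea behind trivializing flows can be illustrated in one variable: sample a transformed action S̃(z) = S(f(z)) − log|f'(z)| and map the samples back through φ = f(z). The sketch below is only illustrative — the quartic toy action, the affine map and its scale are hypothetical choices, not the flow architecture of the paper.

```python
import numpy as np

def S(phi):
    # toy 1D "phi^4"-like action (illustrative, not the 2D lattice theory)
    return phi**2 + 0.5 * phi**4

s = 0.7  # scale of the affine trial map phi = f(z) = s*z (hypothetical)

def S_tilde(z):
    # transformed action: S(f(z)) - log|f'(z)|
    return S(s * z) - np.log(s)

def metropolis(action, n, step, rng):
    # plain Metropolis sampler for a density proportional to exp(-action)
    x, chain = 0.0, []
    for _ in range(n):
        xp = x + step * rng.normal()
        if rng.random() < np.exp(action(x) - action(xp)):
            x = xp
        chain.append(x)
    return np.array(chain)

# Sampling z from S_tilde and mapping back reproduces expectation values
# of the original theory (after discarding burn-in).
z = metropolis(S_tilde, 200_000, 1.0, np.random.default_rng(0))
phi_flow = s * z[50_000:]
phi_direct = metropolis(S, 200_000, 1.0, np.random.default_rng(1))[50_000:]
print(np.mean(phi_flow**2), np.mean(phi_direct**2))  # the two estimates agree
```

In the paper the map is a trained normalizing flow and the updates are HMC rather than random-walk Metropolis, but the exactness of the method rests on this same identity.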



Albandea, D., Hernandez, P., Ramos, A., & Romero-Lopez, F. (2021). Topological sampling through windings. Eur. Phys. J. C, 81(10), 873–12pp.
Abstract: We propose a modification of the Hybrid Monte Carlo (HMC) algorithm that overcomes the topological freezing of a two-dimensional U(1) gauge theory with and without fermion content. This algorithm includes reversible jumps between topological sectors – winding steps – combined with standard HMC steps. The full algorithm is referred to as winding HMC (wHMC), and it shows an improved behaviour of the autocorrelation time towards the continuum limit. We find excellent agreement between the wHMC estimates of the plaquette and topological susceptibility and the analytical predictions in the U(1) pure gauge theory, which are known even at finite β. We also study the expectation values in fixed topological sectors using both HMC and wHMC, with and without fermions. Even when topology is frozen in HMC – leading to significant deviations in topological as well as non-topological quantities – the two algorithms agree on the fixed-topology averages. Finally, we briefly compare the wHMC algorithm results to those obtained with master-field simulations of size L ~ 8 × 10³.
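The winding-step idea can be shown in miniature: local updates mix slowly between "sectors" (here, the two wells of a double-well density standing in for topological sectors), so they are interleaved with a reversible long-range jump proposal, accepted with the usual Metropolis test. The target density and step sizes are illustrative only, not the U(1) gauge theory of the paper.

```python
import numpy as np

def action(x):
    # double-well action with minima near x = +/- 2 and a barrier at x = 0,
    # standing in for two topological sectors (illustrative toy model)
    return (x**2 - 4.0)**2 / 4.0

rng = np.random.default_rng(0)
x, chain = 2.0, []
for i in range(100_000):
    if i % 10 == 0:
        xp = -x                    # "winding" step: reversible, symmetric
    else:                          # jump to the other sector
        xp = x + 0.2 * rng.normal()  # standard small local update
    if rng.random() < np.exp(action(x) - action(xp)):
        x = xp
    chain.append(x)

chain = np.array(chain)
print("fraction in each well:", np.mean(chain > 0), np.mean(chain < 0))
```

Without the jump proposal the small-step chain tunnels through the barrier only rarely; with it, both sectors are visited with the correct relative weight, which is the mechanism wHMC exploits against topological freezing.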



Bribian, E. I., Dasilva Golan, J., Garcia Perez, M., & Ramos, A. (2021). Memory efficient finite volume schemes with twisted boundary conditions. Eur. Phys. J. C, 81(10), 951–25pp.
Abstract: In this paper we explore a finite volume renormalization scheme that combines three main ingredients: a coupling based on the gradient flow, the use of twisted boundary conditions and a particular asymmetric geometry, that for SU(N) gauge theories consists of a hypercubic box of size l² × (Nl)², a choice motivated by the study of volume independence in large N gauge theories. We argue that this scheme has several advantages that make it particularly suited for precision determinations of the strong coupling, among them translational invariance, an analytic expansion in the coupling and a reduced memory footprint with respect to standard simulations on symmetric lattices, allowing for a more efficient use of current GPU clusters. We test this scheme numerically with a determination of the Λ parameter in the SU(3) pure gauge theory. We show that the use of an asymmetric geometry has no significant impact on the size of scaling violations, obtaining a value Λ_MSbar √(8t₀) = 0.603(17) in good agreement with the existing literature. The role of topology freezing, which is relevant for the determination of the coupling in this particular scheme and for large N applications, is discussed in detail.



Catumba, G., Ramos, A., & Zaldivar, B. (2025). Stochastic automatic differentiation for Monte Carlo processes. Comput. Phys. Commun., 307, 109396–13pp.
Abstract: Monte Carlo methods represent a cornerstone of computer science. They allow sampling high-dimensional distribution functions in an efficient way. In this paper we consider the extension of Automatic Differentiation (AD) techniques to Monte Carlo processes, addressing the problem of obtaining derivatives (and in general, the Taylor series) of expectation values. Borrowing ideas from the lattice field theory community, we examine two approaches. One is based on reweighting, while the other represents an extension of the Hamiltonian approach typically used by the Hybrid Monte Carlo (HMC) and similar algorithms. We show that the Hamiltonian approach can be understood as a change of variables of the reweighting approach, resulting in much reduced variances of the coefficients of the Taylor series. This work opens the door to finding other variance reduction techniques for derivatives of expectation values.
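The reweighting approach to derivatives of expectation values amounts to the familiar score-function identity, d/dη ⟨O⟩ = ⟨O ∂η log p⟩ − ⟨O⟩⟨∂η log p⟩. A minimal sketch, with a Gaussian toy distribution and observable chosen purely for illustration (the paper works with general Monte Carlo processes):

```python
import numpy as np

# Toy setup: x ~ N(mu, 1), observable O(x) = x^2, so analytically
# d/dmu <O> = 2*mu. Distribution and observable are illustrative choices.
rng = np.random.default_rng(0)
mu = 1.0
x = mu + rng.normal(size=1_000_000)

O = x**2
score = x - mu  # d/dmu log p(x | mu) for a unit-variance Gaussian
# Subtracting the sample mean of O gives the covariance form of the
# estimator, which removes the <O><score> term and reduces variance.
deriv = np.mean((O - O.mean()) * score)
print(deriv)  # close to the analytic value 2*mu = 2.0
```

The Hamiltonian approach of the paper can be read as a change of variables applied to this same estimator, trading the broad score factor for a much lower-variance one.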



Dalla Brida, M., Hollwieser, R., Knechtli, F., Korzec, T., Nada, A., Ramos, A., et al. (2022). Determination of α_s(m_Z) by the non-perturbative decoupling method. Eur. Phys. J. C, 82(12), 1092–38pp.
Abstract: We present the details and first results of a new strategy for the determination of α_s(m_Z) (ALPHA Collaboration et al. in Phys. Lett. B 807:135571, 2020). By simultaneously decoupling 3 fictitious heavy quarks we establish a relation between the Λ-parameters of three-flavor QCD and pure gauge theory. Very precise recent results in the pure gauge theory (Dalla Brida and Ramos in Eur. Phys. J. C 79(8):720, 2019; Nada and Ramos in Eur. Phys. J. C 81(1):1, 2021) can thus be leveraged to obtain the three-flavour Λ-parameter in units of a common decoupling scale. Connecting this scale to hadronic physics in 3-flavour QCD leads to our result in physical units, Λ^(3)_MSbar = 336(12) MeV, which translates to α_s(m_Z) = 0.11823(84). This is compatible with both the FLAG average (Aoki et al. in FLAG review 2021. arXiv:2111.09849 [hep-lat]) and the previous ALPHA result (ALPHA Collaboration et al., Phys. Rev. Lett. 119(10):102001, 2017), with a comparable, yet still statistics-dominated, error. This constitutes a highly non-trivial check, as the decoupling strategy is conceptually very different from the 3-flavour QCD step-scaling method, and so are their systematic errors. These include the uncertainties of the combined decoupling and continuum limits, which we discuss in some detail. We also quantify the correlation between both results, due to some common elements, such as the scale determination in physical units and the definition of the energy scale where we apply decoupling.



Del Debbio, L., & Ramos, A. (2021). Lattice determinations of the strong coupling. Phys. Rep.-Rev. Sec. Phys. Lett., 920, 1–71.
Abstract: Lattice QCD has reached a mature status. State of the art lattice computations include u, d, s (and even the c) sea quark effects, together with an estimate of electromagnetic and isospin breaking corrections for hadronic observables. This precise and first-principles description of the standard model at low energies allows the determination of multiple quantities that are essential inputs for phenomenology and not accessible to perturbation theory. One of the fundamental parameters that are determined from simulations of lattice QCD is the strong coupling constant, which plays a central role in the quest for precision at the LHC. Lattice calculations currently provide its best determinations, and will play a central role in future phenomenological studies. For this reason we believe that it is timely to provide a pedagogical introduction to the lattice determinations of the strong coupling. Rather than analysing individual studies, the emphasis will be on the methodologies and the systematic errors that arise in these determinations. We hope that these notes will help lattice practitioners, and QCD phenomenologists at large, by providing a self-contained introduction to the methodology and the possible sources of systematic error. The limiting factors in the determination of the strong coupling turn out to be different from the ones that limit other lattice precision observables. We hope to collect enough information here to allow the reader to appreciate the challenges that arise in order to improve further our knowledge of a quantity that is crucial for LHC phenomenology.



Dorigo, T., et al., Ramos, A., & Ruiz de Austri, R. (2023). Toward the end-to-end optimization of particle physics instruments with differentiable programming. Rev. Phys., 10, 100085.
Abstract: The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a superhuman task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, “experience-driven” layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized through a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters.



Feijoo, A., Gazda, D., Magas, V., & Ramos, A. (2021). The K̄N Interaction in Higher Partial Waves. Symmetry, 13(8), 1434–22pp.
Abstract: We present a chiral K̄N interaction model that has been developed and optimized in order to account for the experimental data of inelastic K̄N reaction channels that open at higher energies. In particular, we study the effect of the higher partial waves, which originate directly from the chiral Lagrangian, as they could supersede the role of high-spin resonances employed in earlier phenomenological models to describe meson-baryon cross sections in the 2 GeV region. We present a detailed derivation of the partial wave amplitudes that emerge from the chiral SU(3) meson-baryon Lagrangian up to the d-waves and next-to-leading order in the chiral expansion. We implement a non-perturbative unitarization in coupled channels and optimize the model parameters to a large pool of experimental data in the relevant energy range where these new contributions are expected to be important. The obtained results are encouraging. They indicate the ability of the chiral higher partial waves to extend the description of the scattering data to higher energies and to account for structures in the reaction cross sections that cannot be accommodated by theoretical models limited to the s-waves.



Feijoo, A., Magas, V. K., Ramos, A., & Oset, E. (2016). A hidden-charm S = −1 pentaquark from the decay of Λ_b into J/ψ η Λ states. Eur. Phys. J. C, 76(8), 446–12pp.
Abstract: The hidden-charm pentaquark Pc(4450) observed recently by the LHCb collaboration may be of molecular nature, as advocated by some unitary approaches that also predict pentaquark partners in the strangeness S = −1 sector. In this work we argue that a hidden-charm strange pentaquark could be seen from the decay of the Λ_b, just as in the case of the non-strange Pc(4450), but looking into the J/ψ η Λ decay mode and forming the invariant mass spectrum of J/ψ Λ pairs. In the model presented here, which assumes a standard weak decay topology and incorporates the hadronization process and final-state interaction effects, we find the J/ψ η Λ final states to be populated with similar strength as the J/ψ K⁻ p states employed for the observation of the non-strange pentaquark. This makes the Λ_b → J/ψ η Λ decay an interesting process to observe a possible strange partner of the Pc(4450). We study the dependence of the J/ψ Λ mass spectra on various model ingredients and on the unknown properties of the strange pentaquark.



Feijoo, A., Magas, V. K., Ramos, A., & Oset, E. (2015). Λ_b → J/ψ K Ξ decay and the higher order chiral terms of the meson-baryon interaction. Phys. Rev. D, 92(7), 076015–10pp.
Abstract: We study the weak decay of the Λ_b into J/ψ K Ξ and J/ψ η Λ states, and relate these processes to the Λ_b → J/ψ K̄N decay mode. The elementary weak transition at the quark level proceeds via the creation of a J/ψ meson and an excited sud system with I = 0, which upon hadronization leads to K̄N or η Λ pairs. These states undergo final-state interaction in coupled channels and produce a final meson-baryon pair. The K Ξ state only occurs via rescattering, hence making the Λ_b → J/ψ K Ξ process very sensitive to the details of the meson-baryon interaction in strangeness S = −1 and isospin I = 0. We show that the corresponding invariant mass distribution is dominated by the next-to-leading-order terms of the chiral interaction. The I = 0 selectivity of this decay, and its large sensitivity to the higher-order terms, makes its measurement very useful and complementary to the K⁻ p → K Ξ cross section data. The rates of the Λ_b → J/ψ K Ξ and Λ_b → J/ψ η Λ invariant mass distributions are sizable compared to those of the Λ_b → J/ψ K̄N decay, which is measured experimentally, and thus we provide arguments for an experimental determination of these decay modes that will help us understand better the chiral dynamics at higher energies.

