Feijoo, A., Gazda, D., Magas, V., & Ramos, A. (2021). The K̄N Interaction in Higher Partial Waves. Symmetry, 13(8), 1434 (22 pp).
Abstract: We present a chiral K̄N interaction model developed and optimized to account for the experimental data of inelastic K̄N reaction channels that open at higher energies. In particular, we study the effect of the higher partial waves, which originate directly from the chiral Lagrangian, as they could supersede the role of high-spin resonances employed in earlier phenomenological models to describe meson-baryon cross sections in the 2 GeV region. We present a detailed derivation of the partial-wave amplitudes that emerge from the chiral SU(3) meson-baryon Lagrangian up to the d-waves and next-to-leading order in the chiral expansion. We implement a nonperturbative unitarization in coupled channels and optimize the model parameters to a large pool of experimental data in the relevant energy range where these new contributions are expected to be important. The obtained results are encouraging: they indicate the ability of the chiral higher partial waves to extend the description of the scattering data to higher energies and to account for structures in the reaction cross sections that cannot be accommodated by theoretical models limited to s-waves.
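For orientation, the partial-wave machinery such an analysis builds on is the textbook expansion of a scattering amplitude (generic notation, not this paper's coupled-channel conventions):

```latex
% amplitude decomposed into partial waves l = 0 (s), 1 (p), 2 (d), ...
f(k,\theta) = \sum_{l=0}^{\infty} (2l+1)\, f_l(k)\, P_l(\cos\theta),
\qquad
f_l(k) = \frac{e^{2i\delta_l(k)} - 1}{2ik},
```

so that truncating the sum at l = 2 retains exactly the s-, p-, and d-waves considered here.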
Albandea, D., Hernandez, P., Ramos, A., & Romero-Lopez, F. (2021). Topological sampling through windings. Eur. Phys. J. C, 81(10), 873 (12 pp).
Abstract: We propose a modification of the Hybrid Monte Carlo (HMC) algorithm that overcomes the topological freezing of a two-dimensional U(1) gauge theory with and without fermion content. This algorithm includes reversible jumps between topological sectors – winding steps – combined with standard HMC steps. The full algorithm is referred to as winding HMC (wHMC), and it shows an improved behaviour of the autocorrelation time towards the continuum limit. We find excellent agreement between the wHMC estimates of the plaquette and topological susceptibility and the analytical predictions in the U(1) pure gauge theory, which are known even at finite β. We also study the expectation values in fixed topological sectors using both HMC and wHMC, with and without fermions. Even when topology is frozen in HMC – leading to significant deviations in topological as well as non-topological quantities – the two algorithms agree on the fixed-topology averages. Finally, we briefly compare the wHMC algorithm results to those obtained with master-field simulations of size L ∼ 8 × 10³.
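As a toy illustration of the winding-step idea (not the authors' code; the lattice size, coupling, and the particular form of the charge-q configuration are illustrative), the following sketch implements a Metropolis-accepted jump between topological sectors of a 2D U(1) lattice, where the proposal multiplies the current links by a smooth configuration of constant field strength 2πq/V:

```python
import math
import random

random.seed(1)
L = 8                 # lattice extent (L x L torus); illustrative size
V = L * L
beta = 2.0            # inverse coupling; illustrative value

# theta[mu][x][y]: angle of the U(1) link at site (x, y) in direction mu
theta = [[[0.0] * L for _ in range(L)] for _ in range(2)]

def wrap(a):
    """Map an angle to the principal interval (-pi, pi]."""
    return math.pi - (math.pi - a) % (2.0 * math.pi)

def plaq(x, y):
    """Plaquette angle based at site (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return theta[0][x][y] + theta[1][xp][y] - theta[0][x][yp] - theta[1][x][y]

def topo_charge():
    """Geometric topological charge Q = (1/2pi) * sum of wrapped plaquettes."""
    total = sum(wrap(plaq(x, y)) for x in range(L) for y in range(L))
    return round(total / (2.0 * math.pi))

def apply_winding(q):
    """Multiply the links by a smooth charge-q configuration whose
    plaquette angles all equal 2*pi*q/V, shifting Q by q."""
    for x in range(L):
        for y in range(L):
            theta[0][x][y] -= 2.0 * math.pi * q * y / V
        theta[1][x][L - 1] += 2.0 * math.pi * q * x / L

def winding_step(q):
    """Propose a jump to the sector Q + q; Metropolis accept/reject."""
    shift = 2.0 * math.pi * q / V
    dS = -beta * sum(math.cos(plaq(x, y) + shift) - math.cos(plaq(x, y))
                     for x in range(L) for y in range(L))
    if random.random() < math.exp(min(0.0, -dS)):
        apply_winding(q)
        return True
    return False

# a cold start sits in the Q = 0 sector; a winding step can leave it
print("Q =", topo_charge())
winding_step(+1)
print("Q =", topo_charge())
```

In the paper's full algorithm such steps are interleaved with ordinary HMC trajectories; here only the sector-jump move is sketched.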
Bribian, E. I., Dasilva Golan, J., Garcia Perez, M., & Ramos, A. (2021). Memory efficient finite volume schemes with twisted boundary conditions. Eur. Phys. J. C, 81(10), 951 (25 pp).
Abstract: In this paper we explore a finite-volume renormalization scheme that combines three main ingredients: a coupling based on the gradient flow, the use of twisted boundary conditions, and a particular asymmetric geometry, which for SU(N) gauge theories consists of a hypercubic box of size l² × (Nl)², a choice motivated by the study of volume independence in large-N gauge theories. We argue that this scheme has several advantages that make it particularly suited for precision determinations of the strong coupling, among them translational invariance, an analytic expansion in the coupling, and a reduced memory footprint with respect to standard simulations on symmetric lattices, allowing for a more efficient use of current GPU clusters. We test this scheme numerically with a determination of the Λ parameter in the SU(3) pure gauge theory. We show that the use of an asymmetric geometry has no significant impact on the size of scaling violations, obtaining a value Λ_MS̄ √(8t₀) = 0.603(17), in good agreement with the existing literature. The role of topology freezing, which is relevant for the determination of the coupling in this particular scheme and for large-N applications, is discussed in detail.
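The reference scale t₀ entering the quoted combination Λ√(8t₀) is the standard gradient-flow scale, defined implicitly from the flowed action density E(t) (the usual convention, not specific to the twisted setup of this paper):

```latex
% t0 is the flow time at which the dimensionless combination reaches 0.3
t^2 \, \langle E(t) \rangle \big|_{t = t_0} = 0.3
```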
Flavour Lattice Averaging Group (Aoki, Y., et al.), Hernandez, P., & Ramos, A. (2022). FLAG Review 2021. Eur. Phys. J. C, 82(10), 869 (296 pp).
Abstract: We review lattice results related to pion, kaon, D-meson, B-meson, and nucleon physics with the aim of making them easily accessible to the nuclear and particle physics communities. More specifically, we report on the determination of the light-quark masses, the form factor f₊(0) arising in the semileptonic K → π transition at zero momentum transfer, as well as the decay constant ratio f_K/f_π and its consequences for the CKM matrix elements V_us and V_ud. Furthermore, we describe the results obtained on the lattice for some of the low-energy constants of SU(2)_L × SU(2)_R and SU(3)_L × SU(3)_R Chiral Perturbation Theory. We review the determination of the B_K parameter of neutral kaon mixing as well as the additional four B parameters that arise in theories of physics beyond the Standard Model. For the heavy-quark sector, we provide results for m_c and m_b as well as those for the decay constants, form factors, and mixing parameters of charmed and bottom mesons and baryons. These are the heavy-quark quantities most relevant for the determination of CKM matrix elements and the global CKM unitarity-triangle fit. We review the status of lattice determinations of the strong coupling constant α_s. We consider nucleon matrix elements and review the determinations of the axial, scalar, and tensor bilinears, both isovector and flavor-diagonal. Finally, in this review we have added a new section reviewing determinations of scale-setting quantities.
Dalla Brida, M., Hollwieser, R., Knechtli, F., Korzec, T., Nada, A., Ramos, A., et al. (2022). Determination of α_s(m_Z) by the non-perturbative decoupling method. Eur. Phys. J. C, 82(12), 1092 (38 pp).
Abstract: We present the details and first results of a new strategy for the determination of α_s(m_Z) (ALPHA Collaboration et al. in Phys. Lett. B 807:135571, 2020). By simultaneously decoupling 3 fictitious heavy quarks we establish a relation between the Λ-parameters of three-flavor QCD and pure gauge theory. Very precise recent results in the pure gauge theory (Dalla Brida and Ramos in Eur. Phys. J. C 79(8):720, 2019; Nada and Ramos in Eur. Phys. J. C 81(1):1, 2021) can thus be leveraged to obtain the three-flavor Λ-parameter in units of a common decoupling scale. Connecting this scale to hadronic physics in 3-flavor QCD leads to our result in physical units, Λ_MS̄^(3) = 336(12) MeV, which translates to α_s(m_Z) = 0.11823(84). This is compatible with both the FLAG average (Aoki et al. in FLAG Review 2021. arXiv:2111.09849 [hep-lat]) and the previous ALPHA result (ALPHA Collaboration et al., Phys. Rev. Lett. 119(10):102001, 2017), with a comparable, yet still statistics-dominated, error. This constitutes a highly non-trivial check, as the decoupling strategy is conceptually very different from the 3-flavor QCD step-scaling method, and so are their systematic errors. These include the uncertainties of the combined decoupling and continuum limits, which we discuss in some detail. We also quantify the correlation between both results, due to some common elements, such as the scale determination in physical units and the definition of the energy scale where we apply decoupling.
Albandea, D., Del Debbio, L., Hernandez, P., Kenway, R., Marsh Rossney, J., & Ramos, A. (2023). Learning trivializing flows. Eur. Phys. J. C, 83(7), 676 (14 pp).
Abstract: The recent introduction of machine learning techniques, especially normalizing flows, for the sampling of lattice gauge theories has raised hopes of improving the sampling efficiency of the traditional hybrid Monte Carlo (HMC) algorithm. In this work we study a modified HMC algorithm that draws on the seminal work on trivializing flows by Lüscher. Autocorrelations are reduced by sampling from a simpler action that is related to the original action by an invertible mapping, realised through normalizing-flow models with a minimal set of training parameters. We test the algorithm in a φ⁴ theory in 2D, where we observe reduced autocorrelation times compared with HMC, and demonstrate that the training can be done at small, unphysical volumes and used in physical conditions. We also study the scaling of the algorithm towards the continuum limit under various assumptions on the network architecture.
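A minimal sketch of the underlying change-of-variables trick (a one-dimensional toy with a fixed, hand-picked map rather than a trained flow, and random-walk Metropolis standing in for HMC): sampling z from the pulled-back action S̃(z) = S(f(z)) − log|f′(z)| and pushing the samples through f yields draws from exp(−S):

```python
import math
import random

random.seed(7)

def S(phi):
    """Target 'action': exp(-S) is a standard Gaussian."""
    return 0.5 * phi * phi

def f(z):
    """Invertible map playing the role of the trained flow."""
    return math.sinh(z)

def S_tilde(z):
    """Pulled-back action S(f(z)) - log|f'(z)|, with f'(z) = cosh(z)."""
    return S(f(z)) - math.log(math.cosh(z))

# random-walk Metropolis on z (a stand-in for HMC on the simpler action)
z, samples = 0.0, []
for _ in range(20000):
    zp = z + random.gauss(0.0, 1.0)
    if random.random() < math.exp(min(0.0, S_tilde(z) - S_tilde(zp))):
        z = zp
    samples.append(f(z))          # push each z through the flow to get phi

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean = {mean:.3f}, variance = {var:.3f}")   # should be near 0 and 1
```

The Jacobian term in S̃ is what makes the mapped samples exact; in the paper this map is a normalizing flow trained so that S̃ is cheap to decorrelate.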
Gross, F., et al., Ramos, A., & Vos, M. (2023). 50 Years of quantum chromodynamics. Eur. Phys. J. C, 83(12), 1125 (636 pp).
Abstract: Quantum Chromodynamics, the theory of quarks and gluons, whose interactions can be described by a local SU(3) gauge symmetry with charges called "color quantum numbers", is reviewed; the goal of this review is to provide advanced Ph.D. students a comprehensive handbook, helpful for their research. When QCD was "discovered" 50 years ago, the idea that quarks could exist, but not be observed, left most physicists unconvinced. Then, with the discovery of charmonium in 1974 and the explanation of its excited states using the Cornell potential, consisting of the sum of a Coulomb-like attraction and a long-range linear confining potential, the theory was suddenly widely accepted. This paradigm shift is now referred to as the November revolution. It had been anticipated by the observation of scaling in deep inelastic scattering, and was followed by the discovery of gluons in three-jet events. The parameters of QCD include the running coupling constant, α_s(Q²), which varies with the energy scale Q² characterising the interaction, and six quark masses. QCD cannot be solved analytically, at least not yet, and the large value of α_s at low momentum transfers limits perturbative calculations to the high-energy region where Q² ≫ Λ²_QCD ≃ (250 MeV)². Lattice QCD (LQCD), numerical calculations on a discretized space-time lattice, is discussed in detail, the dynamics of the QCD vacuum is visualized, and the expected spectra of mesons and baryons are displayed. Progress in lattice calculations of the structure of nucleons and of quantities related to the phase diagram of dense and hot (or cold) hadronic matter is reviewed. Methods and examples of how to calculate hadronic corrections to weak matrix elements on a lattice are outlined. The wide variety of analytical approximations currently in use, and the accuracy of these approximations, are reviewed.
These methods range from the Bethe-Salpeter and Dyson-Schwinger coupled relativistic equations, which are formulated in both Minkowski and Euclidean spaces, to expansions of multi-quark states in a set of basis functions using light-front coordinates, to the AdS/QCD method that embeds 4-dimensional QCD in a 5-dimensional anti-de Sitter space, allowing confinement and spontaneous chiral symmetry breaking to be described in a novel way. Models that assume the number of colors is very large, i.e. make use of the large-N_c limit, give unique insights. Many other techniques that are tailored to specific problems, such as perturbative expansions for high-energy scattering or approximate calculations using the operator product expansion, are discussed. The very powerful effective field theory techniques that are successful for low-energy nuclear systems (chiral effective theory), for non-relativistic systems involving heavy quarks, or for the treatment of gluon exchanges between energetic, collinear partons encountered in jets are discussed. The spectroscopy of mesons and baryons has played an important historical role in the development of QCD. The famous X, Y, Z states – and the discovery of pentaquarks – have revolutionized hadron spectroscopy; their status and interpretation are reviewed, as well as recent progress in the identification of glueballs and hybrids in light-meson spectroscopy. These exotic states add to the spectrum of expected q̄q mesons and qqq baryons. The progress in understanding excitations of light and heavy baryons is discussed. The nucleon, as the lightest baryon, is discussed extensively: its form factors, its partonic structure, and the status of the attempt to determine a three-dimensional picture of the parton distributions. An experimental program to study the phase diagram of QCD at high temperature and density started with fixed-target experiments in various laboratories in the second half of the 1980s, and then, in this century, with colliders.
QCD thermodynamics at high temperature became accessible to LQCD, and numerical results on chiral and deconfinement transitions and on properties of the deconfined and chirally restored form of strongly interacting matter, called the Quark-Gluon Plasma (QGP), have become very precise by now. These results can now be confronted with experimental data that are sensitive to the nature of the phase transition. There is clear evidence that the QGP phase is created. This phase of QCD matter can already be characterized by some properties that indicate that, within a temperature range of a few times the pseudocritical temperature, the medium behaves like a near-ideal liquid. Experimental observables are presented that demonstrate deconfinement. High- and ultrahigh-density QCD matter at moderate and low temperatures shows interesting features and new phases that are of astrophysical relevance. They are reviewed here, and some of the astrophysical implications are discussed. Perturbative QCD and methods to describe the different aspects of scattering processes are discussed. The primary parton-parton scattering in a collision is calculated in perturbative QCD with increasing complexity. The radiation of soft gluons can spoil the perturbative convergence; this can be cured by resummation techniques, which are also described here. Realistic descriptions of QCD scattering events need to model the cascade of quark and gluon splittings until hadron formation sets in, which is done by parton showers. The full event simulation can be performed with Monte Carlo event generators.
Dorigo, T., et al., Ramos, A., & Ruiz de Austri, R. (2023). Toward the end-to-end optimization of particle physics instruments with differentiable programming. Rev. Phys., 10, 100085.
Abstract: The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized through a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters.
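The key ingredient, a pipeline through which gradients of an objective can propagate to design parameters, can be illustrated with a few lines of forward-mode automatic differentiation (everything here is a hypothetical toy: the Dual class, the loss, and its reading as a detector figure of merit are illustrative, not from the paper):

```python
class Dual:
    """Minimal forward-mode autodiff number: a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

    def __rtruediv__(self, c):
        # c / x with constant c: d(c/x) = -c * dx / x^2
        return Dual(c / self.val, -c * self.dot / self.val ** 2)

def loss(t):
    # toy figure of merit for an absorber of thickness t (arbitrary units):
    # a resolution term ~ 1/t competing with a cost/leakage term ~ t
    return 1.0 / t + 0.04 * t

# gradient descent on the design parameter via the derivative carried by Dual
t = 2.0
for _ in range(200):
    grad = loss(Dual(t, 1.0)).dot   # seed dt/dt = 1
    t -= 20.0 * grad

print(f"optimized thickness: {t:.3f}")  # analytic optimum: sqrt(1/0.04) = 5
```

In realistic applications the loss would be a differentiable surrogate of a full simulation and the parameter vector high-dimensional, but the gradient flow through the pipeline works the same way.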
Mantovani Sarti, V., Feijoo, A., Vidana, I., Ramos, A., Giacosa, F., Hyodo, T., et al. (2024). Constraining the low-energy S = -2 meson-baryon interaction with two-particle correlations. Phys. Rev. D, 110(1), L011505 (8 pp).
Abstract: In this paper we present a novel method to extract information on hadron-hadron interactions, using for the first time femtoscopic data to constrain the low-energy constants of a QCD effective Lagrangian. This method offers a new way to investigate the nonperturbative regime of QCD in sectors where scattering experiments are not feasible, such as the multistrange and charm ones. As an example of its application, we use the very precise K⁻Λ correlation function data, recently measured in pp collisions at the LHC, to constrain the strangeness S = -2 meson-baryon interaction. The model obtained delivers new insights into the molecular nature of the Ξ(1620) and Ξ(1690) states.
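The femtoscopic observable referred to here is conventionally written via the Koonin–Pratt relation, which folds an emission source S(r) with the two-particle relative wave function (generic notation, not taken from this paper):

```latex
% correlation function: source folded with the pair relative wave function
C(k^*) = \int d^3 r \; S(\vec r)\, \bigl| \psi(\vec r, \vec k^*) \bigr|^2
```

Since C(k*) → 1 in the absence of final-state interactions, deviations from unity encode the interaction that the low-energy constants are fitted to reproduce.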