Dorigo, T., et al., Ramos, A., & Ruiz de Austri, R. (2023). Toward the end-to-end optimization of particle physics instruments with differentiable programming. Rev. Phys., 10, 100085.
Abstract: The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, “experience-driven” layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized through a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters.
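The gradient-based design loop the abstract alludes to can be illustrated with a toy differentiable surrogate. All names, the objective, and its optimum are illustrative assumptions, not the paper's actual pipeline:

```python
# Toy sketch of gradient-based instrument design: a differentiable
# surrogate maps one design parameter (say, absorber thickness) to an
# objective, and we ascend its gradient instead of scanning by hand.

def objective(thickness):
    # Hypothetical smooth surrogate with a single optimum at
    # thickness = 2.0 (arbitrary units).
    return -(thickness - 2.0) ** 2

def grad_objective(thickness):
    # Analytic gradient of the surrogate above; in a real pipeline
    # automatic differentiation would supply this.
    return -2.0 * (thickness - 2.0)

def optimize(theta=0.5, lr=0.1, steps=200):
    # Plain gradient ascent on the design parameter.
    for _ in range(steps):
        theta += lr * grad_objective(theta)
    return theta
```

In the paper's setting the surrogate would be a differentiable simulation of the full detector response, and theta a vector of geometry, material, and readout parameters optimized simultaneously.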
Ferrer-Sanchez, A., Martin-Guerrero, J., Ruiz de Austri, R., Torres-Forne, A., & Font, J. A. (2024). Gradient-annihilated PINNs for solving Riemann problems: Application to relativistic hydrodynamics. Comput. Meth. Appl. Mech. Eng., 424, 116906–18pp.
Abstract: We present a novel methodology based on Physics-Informed Neural Networks (PINNs) for solving systems of partial differential equations admitting discontinuous solutions. Our method, called Gradient-Annihilated PINNs (GA-PINNs), introduces a modified loss function that forces the model to partially ignore high gradients in the physical variables, achieved by introducing a suitable weighting function. The method relies on a set of hyperparameters that control how gradients are treated in the physical loss. The performance of our methodology is demonstrated by solving Riemann problems in special relativistic hydrodynamics, extending earlier studies with PINNs in the context of the classical Euler equations. The solutions obtained with the GA-PINN model correctly describe the propagation speeds of discontinuities and sharply capture the associated jumps. We use the relative l2 error to compare our results with the exact solution of special relativistic Riemann problems, used as the reference "ground truth", and with the corresponding error obtained with a second-order, central, shock-capturing scheme. In all problems investigated, the accuracy reached by the GA-PINN model is comparable to that obtained with a shock-capturing scheme, achieving a performance superior to that of the baseline PINN algorithm in general. An additional benefit worth stressing is that our PINN-based approach sidesteps the costly recovery of the primitive variables from the state vector of conserved variables, a well-known drawback of grid-based solutions of the relativistic hydrodynamics equations. Due to its inherent generality and its ability to handle steep gradients, the GA-PINN methodology discussed in this paper could be a valuable tool to model relativistic flows in astrophysics and particle physics, characterized by the prevalence of discontinuous solutions.
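A minimal sketch of the gradient-annihilation idea follows; the functional form of the weight and its hyperparameters (beta, p) are assumptions for illustration, not the weighting function defined in the paper:

```python
def ga_weight(grad_norm, beta=1.0, p=2.0):
    # Hypothetical weighting function: close to 1 in smooth regions,
    # tending to 0 where the physical variables have steep gradients,
    # so those collocation points contribute little to the PDE loss.
    return 1.0 / (1.0 + beta * grad_norm ** p)

def weighted_residual_loss(residuals, grad_norms):
    # PDE residual loss with each collocation point damped by its
    # gradient-dependent weight.
    terms = [ga_weight(g) * r ** 2 for r, g in zip(residuals, grad_norms)]
    return sum(terms) / len(terms)
```

The effect is that the network is not penalized for failing to fit the residual exactly at a discontinuity, which is what lets it capture sharp jumps instead of smearing them.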
Roszkowski, L., Ruiz de Austri, R., & Trotta, R. (2010). Efficient reconstruction of constrained MSSM parameters from LHC data: A case study. Phys. Rev. D, 82(5), 055003–12pp.
Abstract: We present an efficient method of reconstructing the parameters of the constrained MSSM from assumed future LHC data, applied both in their own right and in combination with the cosmological determination of the relic dark matter abundance. Focusing on the ATLAS SU3 benchmark point, we demonstrate that our simple Gaussian approximation can recover the values of its parameters remarkably well. We examine two popular noninformative priors and obtain very similar results, although when we use an informative, naturalness-motivated prior, we find some sizeable differences. We show that a further strong improvement in reconstructing the SU3 parameters can be achieved by applying additional information about the relic abundance at the level of WMAP accuracy, although the expected data from Planck will have only a very limited additional impact. Further external data may be required to break some remaining degeneracies. We argue that the method presented here is applicable to a wide class of low-energy effective supersymmetric models, as it does not require one to deal with purely experimental issues, e.g., detector performance, and has the additional advantage of computational efficiency. Furthermore, our approach allows one to distinguish the effect of the model's internal structure and of the external data on the final parameter constraints.
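A Gaussian approximation of this kind amounts to an independent chi-square likelihood over the assumed measurements. A generic sketch (the observables, central values, and errors would come from the assumed LHC data; nothing here is taken from the paper):

```python
import math

def gaussian_loglike(observed, predicted, sigmas):
    # Independent-Gaussian approximation: each observable contributes
    # a chi-square term plus its normalization constant.
    return sum(
        -0.5 * ((o - p) / s) ** 2 - math.log(s * math.sqrt(2.0 * math.pi))
        for o, p, s in zip(observed, predicted, sigmas)
    )
```

Because the likelihood is analytic in the observables, scanning model parameters only requires the (fast) mapping from parameters to predicted observables, which is the source of the computational efficiency claimed above.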
Bertone, G., Cerdeño, D. G., Fornasa, M., Ruiz de Austri, R., & Trotta, R. (2010). Identification of dark matter particles with LHC and direct detection data. Phys. Rev. D, 82(5), 055008–7pp.
Abstract: Dark matter (DM) is currently searched for with a variety of detection strategies. Accelerator searches are particularly promising, but even if weakly interacting massive particles are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the DM in the Universe, Ω_DM. We show that a significantly better reconstruction of the DM properties can be obtained with a combined analysis of LHC and direct detection data, by making a simple ansatz on the local density of the weakly interacting massive particle, ρ₀(χ̃₁), i.e., by assuming that the local density scales with the cosmological relic abundance, ρ₀(χ̃₁)/ρ_DM = Ω(χ̃₁)/Ω_DM. We demonstrate this method in an explicit example in the context of a 24-parameter supersymmetric model, with a neutralino lightest supersymmetric particle in the stau coannihilation region. Our results show that future ton-scale direct detection experiments will make it possible to break degeneracies in the supersymmetric parameter space and achieve a significantly better reconstruction of the neutralino composition and its relic density than with LHC data alone.
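The scaling ansatz is a simple proportionality; a one-line sketch, where the reference values for Ω_DM and the canonical local density are assumptions for illustration only:

```python
def local_density(omega_chi, omega_dm=0.26, rho_dm=0.4):
    # Ansatz rho_chi / rho_DM = Omega_chi / Omega_DM: a candidate that
    # accounts for half the relic abundance is assigned half the
    # canonical local density (rho in GeV/cm^3; numbers assumed).
    return rho_dm * omega_chi / omega_dm
```

This is what couples the two datasets: the LHC constrains the particle's couplings and relic abundance, and through the ansatz that abundance rescales the direct-detection rate.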
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043–48pp.
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of the fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M_Z is considered. This makes it possible to scan the whole parameter space, allowing arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. Then we incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e., the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect gets reinforced if the Higgs mass is not close to its present experimental limit and persists when dark matter constraints are included. Only when the g-2 constraint (based on e⁺e⁻ data) is considered, the preferred region (for μ > 0) is well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
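The prior-robustness and model-comparison statements rest on the Bayesian evidence Z = ∫ L(θ) π(θ) dθ, computed under each prior (or each sign of μ) and compared. A crude one-dimensional sketch, where the grid integration and toy likelihood are assumptions, not the paper's multi-dimensional machinery:

```python
import math

def evidence(loglike, prior_pdf, grid):
    # Trapezoidal estimate of Z = integral of L(theta) * pi(theta);
    # evaluating Z under flat vs. logarithmic priors (or mu > 0 vs.
    # mu < 0) and taking ratios gives the Bayes factors used for
    # the comparisons described above.
    vals = [math.exp(loglike(t)) * prior_pdf(t) for t in grid]
    h = grid[1] - grid[0]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

In realistic scans the integral is over many parameters and is estimated with nested sampling or similar Monte Carlo methods rather than a grid, but the quantity being compared is the same.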