MoEDAL Collaboration (Acharya, B., et al.), Bernabeu, J., Mamuzic, J., Mitsou, V. A., Papavassiliou, J., Ruiz de Austri, R., et al. (2019). Magnetic Monopole Search with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions Interpreted in Photon-Fusion and Drell-Yan Production. Phys. Rev. Lett., 123(2), 021802–7pp.
Abstract: MoEDAL is designed to identify new physics in the form of stable or pseudostable highly ionizing particles produced in high-energy Large Hadron Collider (LHC) collisions. Here we update our previous search for magnetic monopoles in Run 2 using the full trapping detector, with almost four times more material and almost twice the integrated luminosity. For the first time at the LHC, the data were interpreted in terms of photon-fusion monopole direct production in addition to the Drell-Yan-like mechanism. The MoEDAL trapping detector, consisting of 794 kg of aluminum samples installed in the forward and lateral regions, was exposed to 4.0 fb(-1) of 13 TeV proton-proton collisions at the LHCb interaction point and analyzed by searching for induced persistent currents after passage of the samples through a superconducting magnetometer. Magnetic charges equal to or above the Dirac charge are excluded in all samples. Monopole spins 0, 1/2, and 1 are considered, and both velocity-independent and velocity-dependent couplings are assumed. This search provides the best current laboratory constraints for monopoles with magnetic charges ranging from two to five times the Dirac charge.
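Note (illustrative aside, not part of the abstract): the Dirac charge referred to above follows from the Dirac quantization condition; in natural units (hbar = c = 1),
\[ e\,g = \frac{n}{2}, \quad n \in \mathbb{Z}, \qquad g_D = \frac{1}{2e} = \frac{e}{2\alpha_{\rm em}} \simeq 68.5\,e , \]
i.e., the minimal magnetic charge is roughly 68.5 times the elementary electric charge, which is why monopole-photon couplings are intrinsically large.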
|
Otten, S., Caron, S., de Swart, W., van Beekveld, M., Hendriks, L., van Leeuwen, C., et al. (2021). Event generation and statistical sampling for physics with deep generative models and a density information buffer. Nat. Commun., 12(1), 2985–16pp.
Abstract: Simulating nature, and in particular processes in particle physics, requires expensive computations and sometimes would take much longer than scientists can afford. Here, we explore ways towards a solution of this problem by investigating recent advances in generative modeling and present a study of the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only the production of physical events, but also that these events occur with the correct frequencies. We investigate the feasibility of learning the event generation and the frequency of occurrence with several generative machine learning models to produce events like Monte Carlo generators. We study three processes: a simple two-body decay, the process e(+)e(-) -> Z -> l(+)l(-), and pp -> ttbar including the decay of the top quarks and a simulation of the detector response. By buffering density information of Monte Carlo events encoded with the encoder of a Variational Autoencoder, we are able to construct a prior for the sampling of new events from the decoder that yields distributions in very good agreement with real Monte Carlo events and that are generated several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground-truth data, anomaly detection and more efficient importance sampling, e.g., for the phase-space integration of matrix elements in quantum field theories. Here, the authors report buffered-density variational autoencoders for the generation of physical events. This method is computationally less expensive than traditional methods and, beyond accelerating the data generation process, it can help to steer the generation and to detect anomalies.
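Note (illustration, not from the paper): the core of the buffered-density idea can be sketched as follows, assuming a trained VAE encoder/decoder pair (replaced here by toy linear stand-ins so the snippet runs end to end) and a Gaussian mixture as the learned latent prior; all names and numbers are illustrative.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
W = np.array([[1.0, 0.3], [0.0, 1.0]])

def encode(events):
    # stand-in for the trained VAE encoder (returns latent codes)
    return events @ W

def decode(latents):
    # stand-in for the trained VAE decoder
    return latents @ np.linalg.inv(W)

# 1) Encode a sample of Monte Carlo events and buffer the latent codes.
mc_events = rng.normal(size=(10_000, 2))
buffered_latents = encode(mc_events)

# 2) Fit a density model to the buffer; this plays the role of the sampling prior.
prior = GaussianMixture(n_components=8, random_state=0).fit(buffered_latents)

# 3) Draw from the learned prior and decode to obtain new synthetic events.
new_latents, _ = prior.sample(100_000)
new_events = decode(new_latents)
print(new_events.mean(axis=0), new_events.std(axis=0))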
|
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) -> q qbar', photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results however have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and which are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
|
Domingo, F., Kim, J. S., Martin Lozano, V., Martin-Ramiro, P., & Ruiz de Austri, R. (2020). Confronting the neutralino and chargino sector of the NMSSM with the multilepton searches at the LHC. Phys. Rev. D, 101(7), 075010–29pp.
Abstract: We test the impact of the ATLAS and CMS multilepton searches performed at the LHC with 8 as well as 13 TeV center-of-mass energy (using only the pre-2018 results) on the chargino and neutralino sector of the next-to-minimal supersymmetric Standard Model (NMSSM). Our purpose is to analyze the actual reach of these searches for a full model and to emphasize effects beyond the minimal supersymmetric Standard Model (MSSM) that affect the performance of current (MSSM-inspired) electroweakino searches. To this end, we consider several scenarios characterizing specific features of the NMSSM electroweakino sector. We then perform a detailed collider study, generating Monte Carlo events through PYTHIA and testing against current LHC constraints implemented in the public tool CheckMATE. We find, e.g., that supersymmetric decay chains involving intermediate singlino or Higgs-singlet states can modify the naive MSSM-like picture of the constraints by inducing final states with softer or less easily identifiable SM particles; conversely, a compressed configuration with a singlino next-to-lightest supersymmetric particle occasionally induces final states rich in photons, which could provide complementary search channels.
|
Otten, S., Rolbiecki, K., Caron, S., Kim, J. S., Ruiz de Austri, R., & Tattersall, J. (2020). DeepXS: fast approximation of MSSM electroweak cross sections at NLO. Eur. Phys. J. C, 80(1), 12–9pp.
Abstract: We present a deep learning solution to the prediction of particle production cross sections over a complicated, high-dimensional parameter space. We demonstrate the applicability by providing state-of-the-art predictions for the production of charginos and neutralinos at the Large Hadron Collider (LHC) at next-to-leading order in the phenomenological MSSM-19, and explicitly demonstrate the performance for pp -> chi~(+)(1) chi~(-)(1), chi~(0)(2) chi~(0)(2) and chi~(0)(2) chi~(+/-)(1) as a proof of concept, which will be extended to all SUSY electroweak pairs. We obtain errors that are lower than the uncertainty from scale and parton distribution functions, with mean absolute percentage errors of well below 0.5%, allowing a safe inference at next-to-leading order with inference times that improve on the Monte Carlo integration procedures available so far by a factor of O(10(7)), from O(minutes) to O(microseconds) per evaluation.
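Note (illustration, not from the paper): the surrogate-regression idea can be sketched with a toy example, replacing the actual NLO cross-section data and the DeepXS architecture with synthetic values and a small scikit-learn network; every feature and target below is made up for demonstration.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for (parameter point -> log10 cross section) pairs.
params = rng.uniform(-1.0, 1.0, size=(20_000, 5))   # e.g. masses and mixing angles
log_xsec = (-2.0 * params[:, 0] + np.sin(3.0 * params[:, 1])
            + 0.5 * params[:, 2] * params[:, 3]
            + 0.05 * rng.normal(size=20_000))

X_train, X_test, y_train, y_test = train_test_split(params, log_xsec, random_state=0)

# Fit a small neural-network regressor as a fast surrogate for the slow calculation.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Evaluate the mean absolute percentage error on the predicted cross sections.
pred = model.predict(X_test)
mape = np.mean(np.abs(10.0**pred - 10.0**y_test) / 10.0**y_test) * 100.0
print(f"MAPE on toy data: {mape:.2f}%")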
|
Bertone, G., Deisenroth, M. P., Kim, J. S., Liem, S., Ruiz de Austri, R., & Welling, M. (2019). Accelerating the BSM interpretation of LHC data with machine learning. Phys. Dark Universe, 24, 100293–5pp.
Abstract: The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators. Performing statistically convergent scans of high-dimensional BSM theories is consequently challenging, and in practice unfeasible for very high-dimensional BSM theories. We present here a new machine learning method that accelerates the interpretation of LHC data, by learning the relationship between BSM theory parameters and data. As a proof-of-concept, we demonstrate that this technique accurately predicts natural SUSY signal events in two signal regions at the High Luminosity LHC, up to four orders of magnitude faster than standard techniques. The new approach makes it possible to rapidly and accurately reconstruct the theory parameters of complex BSM theories, should an excess in the data be discovered at the LHC.
|
Bertone, G., Cerdeño, D. G., Fornasa, M., Ruiz de Austri, R., & Trotta, R. (2010). Identification of dark matter particles with LHC and direct detection data. Phys. Rev. D, 82(5), 055008–7pp.
Abstract: Dark matter (DM) is currently searched for with a variety of detection strategies. Accelerator searches are particularly promising, but even if weakly interacting massive particles (WIMPs) are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the DM in the Universe, Omega(DM). We show that a significantly better reconstruction of the DM properties can be obtained with a combined analysis of LHC and direct detection data, by making a simple ansatz on the local density rho(0) of the lightest neutralino chi~(1), i.e., by assuming that the local density scales with the cosmological relic abundance, rho(0)(chi~1)/rho(DM) = Omega(chi~1)/Omega(DM). We demonstrate this method in an explicit example in the context of a 24-parameter supersymmetric model, with a neutralino lightest supersymmetric particle in the stau coannihilation region. Our results show that future ton-scale direct detection experiments will allow us to break degeneracies in the supersymmetric parameter space and achieve a significantly better reconstruction of the neutralino composition and its relic density than with LHC data alone.
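In LaTeX form, the scaling ansatz quoted above (reconstructed from the abstract) reads
\[ \frac{\rho_0^{\tilde{\chi}_1}}{\rho_{\rm DM}} = \frac{\Omega_{\tilde{\chi}_1}}{\Omega_{\rm DM}} , \]
so that predicted direct-detection rates are rescaled by the fraction of the total dark matter abundance carried by the neutralino.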
|
Boubekeur, L., Choi, K. Y., Ruiz de Austri, R., & Vives, O. (2010). The degenerate gravitino scenario. J. Cosmol. Astropart. Phys., 04(4), 005–26pp.
Abstract: In this work, we explore the “degenerate gravitino” scenario, where the mass difference between the gravitino and the lightest MSSM particle is much smaller than the gravitino mass itself. In this case, the energy released in the decay of the next-to-lightest supersymmetric particle (NLSP) is reduced. Consequently the cosmological and astrophysical constraints on the gravitino abundance, and hence on the reheating temperature, become softer than in the usual case. On the other hand, such small mass splittings generically imply a much longer lifetime for the NLSP. We find that, in the constrained MSSM (CMSSM), for a neutralino LSP or NLSP, reheating temperatures compatible with thermal leptogenesis are reached for small splittings of order 10(-2) GeV, while for a stau NLSP, temperatures of T(RH) ≃ 4 x 10(9) GeV can be obtained even for splittings of order of tens of GeV. This “degenerate gravitino” scenario offers a possible way out of the gravitino problem for thermal leptogenesis in supersymmetric theories.
|
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043–48pp.
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M(Z) is considered. This allows us to scan the whole parameter space, allowing arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. Then we incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect gets reinforced if the Higgs mass is not close to its present experimental limit, and persists when dark matter constraints are included. Only when the g-2 constraint (based on e(+)e(-) data) is considered is the preferred region (for μ > 0) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
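For orientation (a generic reminder, not a formula from the paper), the Bayesian machinery used here and in the following entry amounts to mapping the posterior density over the model parameters theta given data D,
\[ p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta) , \]
with the naturalness penalization entering automatically through the effective prior obtained once the measured M(Z) is imposed, rather than through an ad hoc fine-tuning measure.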
|
Roszkowski, L., Ruiz de Austri, R., & Trotta, R. (2010). Efficient reconstruction of constrained MSSM parameters from LHC data: A case study. Phys. Rev. D, 82(5), 055003–12pp.
Abstract: We present an efficient method of reconstructing the parameters of the constrained MSSM from assumed future LHC data, applied both in their own right and in combination with the cosmological determination of the relic dark matter abundance. Focusing on the ATLAS SU3 benchmark point, we demonstrate that our simple Gaussian approximation can recover the values of its parameters remarkably well. We examine two popular noninformative priors and obtain very similar results, although when we use an informative, naturalness-motivated prior, we find some sizeable differences. We show that a further strong improvement in reconstructing the SU3 parameters can be achieved by applying additional information about the relic abundance at the level of WMAP accuracy, although the expected data from Planck will have only a very limited additional impact. Further external data may be required to break some remaining degeneracies. We argue that the method presented here is applicable to a wide class of low-energy effective supersymmetric models, as it does not require one to deal with purely experimental issues, e.g., detector performance, and has the additional advantage of computational efficiency. Furthermore, our approach allows one to distinguish the effect of the model's internal structure and of the external data on the final parameter constraints.
|