van Beekveld, M., Beenakker, W., Caron, S., Peeters, R., & Ruiz de Austri, R. (2017). Supersymmetry with dark matter is still natural. Phys. Rev. D, 96(3), 035015–7pp.
Abstract: We identify the parameter regions of the phenomenological minimal supersymmetric standard model (pMSSM) with the minimal possible fine-tuning. We show that the fine-tuning of the pMSSM is not large, nor is it under pressure from LHC searches. Low sbottom, stop and gluino masses turn out to be less relevant for low fine-tuning than commonly assumed. We show a link between low fine-tuning and the dark matter relic density. Fine-tuning arguments point to models with a dark matter candidate yielding the correct dark matter relic density: a bino-higgsino particle with a mass of 35-155 GeV. Some of these candidates are compatible with recent hints seen in astrophysics experiments such as Fermi-LAT and AMS-02. We argue that upcoming direct-search experiments, such as XENON1T, will test all of the most natural solutions in the next few years owing to the sensitivity of these experiments to the spin-dependent WIMP-nucleon cross section.
Lara, I., Lopez-Fogliani, D. E., Muñoz, C., Nagata, N., Otono, H., & Ruiz de Austri, R. (2018). Looking for the left sneutrino LSP with displaced-vertex searches. Phys. Rev. D, 98(7), 075004–17pp.
Abstract: We analyze a displaced dilepton signal expected at the LHC for a tau left sneutrino as the lightest supersymmetric particle with a mass in the range 45-100 GeV. The sneutrinos are pair produced via a virtual W, Z or gamma in the s channel and, given the large value of the tau Yukawa coupling, their decays into two dileptons or a dilepton plus missing transverse energy from neutrinos can be significant. The discussion is carried out in the framework of the μνSSM, where the presence of R-parity violating couplings involving right-handed neutrinos solves the μ problem and can reproduce the neutrino data. To probe the tau left sneutrinos we compare the predictions of this scenario with the ATLAS search for long-lived particles using displaced lepton pairs in pp collisions at √s = 8 TeV, allowing us to constrain the parameter space of the model. We also consider an optimization of the trigger requirements used in existing displaced-vertex searches by means of a high-level trigger that exploits tracker information. This optimization is generically useful for a light metastable particle decaying into soft charged leptons. With it, the constraints on the sneutrino turn out to be more stringent. We finally discuss the prospects for the 13 TeV LHC searches as well as further potential optimizations.
Domingo, F., Kim, J. S., Martin Lozano, V., Martin-Ramiro, P., & Ruiz de Austri, R. (2020). Confronting the neutralino and chargino sector of the NMSSM with the multilepton searches at the LHC. Phys. Rev. D, 101(7), 075010–29pp.
Abstract: We test the impact of the ATLAS and CMS multilepton searches performed at the LHC with 8 as well as 13 TeV center-of-mass energy (using only the pre-2018 results) on the chargino and neutralino sector of the next-to-minimal supersymmetric Standard Model (NMSSM). Our purpose is to analyze the actual reach of these searches for a full model and to emphasize effects beyond the minimal supersymmetric Standard Model (MSSM) that affect the performance of current (MSSM-inspired) electroweakino searches. To this end, we consider several scenarios characterizing specific features of the NMSSM electroweakino sector. We then perform a detailed collider study, generating Monte Carlo events with PYTHIA and testing against current LHC constraints implemented in the public tool CheckMATE. We find, for example, that supersymmetric decay chains involving intermediate singlino or Higgs-singlet states can modify the naive MSSM-like picture of the constraints by inducing final states with softer or less easily identifiable SM particles. Conversely, a compressed configuration with a singlino next-to-lightest supersymmetric particle occasionally induces final states rich in photons, which could provide complementary search channels.
Kim, J. S., Lopez-Fogliani, D. E., Perez, A. D., & Ruiz de Austri, R. (2022). The new (g-2)_mu and right-handed sneutrino dark matter. Nucl. Phys. B, 974, 115637–23pp.
Abstract: In this paper we investigate the (g-2)_mu discrepancy in the context of the R-parity-conserving next-to-minimal supersymmetric Standard Model extended with right-handed neutrino superfields. The model can reproduce neutrino physics data and includes the interesting possibility of having the right-handed sneutrino as the lightest supersymmetric particle and a viable dark matter candidate. Since right-handed sneutrinos are singlets, there are no new contributions to delta a_mu with respect to the MSSM and NMSSM. However, the possibility of having the right-handed sneutrino as the lightest supersymmetric particle opens new ways to escape Large Hadron Collider and direct detection constraints. In particular, we find that dark matter masses within 10 GeV ≲ m(ν̃_R) ≲ 600 GeV are fully compatible with current experimental constraints. Remarkably, spectra with light sleptons are not the only option: we obtain solutions with m(μ̃) ≳ 600 GeV over the entire dark matter mass range that could be probed by new (g-2)_mu data in the near future. In addition, dark matter direct detection experiments will be able to explore a sizable portion of the allowed parameter space with m(ν̃_R) < 300 GeV, while indirect detection experiments will be able to probe a much smaller fraction within 200 GeV ≲ m(ν̃_R) ≲ 350 GeV.
Otten, S., Caron, S., de Swart, W., van Beekveld, M., Hendriks, L., van Leeuwen, C., et al. (2021). Event generation and statistical sampling for physics with deep generative models and a density information buffer. Nat. Commun., 12(1), 2985–16pp.
Abstract: Simulating nature, and in particular processes in particle physics, requires expensive computations that sometimes take much longer than scientists can afford. Here, we explore ways to address this problem by investigating recent advances in generative modeling and present a study of the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only the production of physical events, but also that these events occur with the correct frequencies. We investigate the feasibility of learning the event generation and the frequency of occurrence with several generative machine learning models in order to produce events like Monte Carlo generators do. We study three processes: a simple two-body decay, the process e+e- -> Z -> l+l-, and pp -> ttbar including the decay of the top quarks and a simulation of the detector response. By buffering the density information of Monte Carlo events encoded by the encoder of a variational autoencoder, we are able to construct a prior for sampling new events from the decoder that yields distributions in very good agreement with real Monte Carlo events, generated several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground-truth data, anomaly detection and more efficient importance sampling, e.g., for the phase-space integration of matrix elements in quantum field theories.
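The buffered-density idea described in this abstract (sampling the decoder from a density fitted to the encoder outputs, rather than from the standard normal prior of a plain VAE) can be illustrated with a minimal numpy sketch. This is an invented stand-in, not the paper's model: the "encoded" latents are simulated here, and the buffer is approximated by a simple kernel-density resampler.

```python
import numpy as np

def buffered_density_sampler(encoded, bandwidth=0.1, rng=None):
    """Sample latent codes from a kernel density fitted to the buffered
    encoder outputs, instead of an N(0, 1) prior."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = encoded.shape
    def sample(m):
        idx = rng.integers(0, n, size=m)  # draw buffered codes at random
        # add small Gaussian jitter around each code (KDE-style smoothing)
        return encoded[idx] + bandwidth * rng.standard_normal((m, d))
    return sample

# Toy stand-in for encoded Monte Carlo events: a bimodal latent density
# that a standard normal prior would sample poorly.
rng = np.random.default_rng(0)
encoded = np.concatenate([rng.normal(-2.0, 0.3, (500, 2)),
                          rng.normal(+2.0, 0.3, (500, 2))])
sample = buffered_density_sampler(encoded, rng=rng)
z = sample(2000)
# z concentrates near the two modes at x = -2 and x = +2,
# unlike draws from N(0, 1).
```

Decoding such samples (rather than samples from the fixed prior) is what lets the generated event distributions track the training density closely.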
MoEDAL Collaboration (Acharya, B., et al.), Mitsou, V. A., Papavassiliou, J., Ruiz de Austri, R., Santra, A., Vento, V., et al. (2022). Search for magnetic monopoles produced via the Schwinger mechanism. Nature, 602(7895), 63–67.
Abstract: Electrically charged particles can be created by the decay of strong enough electric fields, a phenomenon known as the Schwinger mechanism [1]. By electromagnetic duality, a sufficiently strong magnetic field would similarly produce magnetic monopoles, if they exist [2]. Magnetic monopoles are hypothetical fundamental particles that are predicted by several theories beyond the standard model [3-7] but have never been experimentally detected. Searching for the existence of magnetic monopoles via the Schwinger mechanism has not yet been attempted, but it is advantageous owing to the possibility of calculating its rate through semi-classical techniques without perturbation theory, as well as the fact that the production of the magnetic monopoles should be enhanced by their finite size [8,9] and strong coupling to photons [2,10]. Here we present a search for magnetic monopole production by the Schwinger mechanism in Pb-Pb heavy-ion collisions at the Large Hadron Collider, which produce the strongest known magnetic fields in the current Universe [11]. It was conducted by the MoEDAL experiment, whose trapping detectors were exposed to 0.235 nb^-1, or approximately 1.8 × 10^9, Pb-Pb collisions with 5.02-teraelectronvolt center-of-mass energy per collision in November 2018. A superconducting quantum interference device (SQUID) magnetometer scanned the trapping detectors of MoEDAL for the presence of magnetic charge, which would induce a persistent current in the SQUID. Magnetic monopoles with integer Dirac charges of 1, 2 and 3 and masses up to 75 gigaelectronvolts per speed of light squared were excluded by the analysis at the 95% confidence level. This provides a lower mass limit for finite-size magnetic monopoles from a collider search and greatly extends previous mass bounds.
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043–48pp.
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M_Z is considered. This allows us to scan the whole parameter space, with arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable under the choice of flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. Then we incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect is reinforced if the Higgs mass is not close to its present experimental limit, and it persists when dark matter constraints are included. Only when the g-2 constraint (based on e+e- data) is considered is the preferred region (for μ > 0) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
Bridges, M., Cranmer, K., Feroz, F., Hobson, M., Ruiz de Austri, R., & Trotta, R. (2011). A coverage study of the CMSSM based on ATLAS sensitivity using fast neural networks techniques. J. High Energy Phys., 03(3), 012–23pp.
Abstract: We assess the coverage properties of confidence and credible intervals on the CMSSM parameter space, inferred from a Bayesian posterior and the profile likelihood based on an ATLAS sensitivity study. To make those calculations feasible, we introduce a new method based on neural networks to approximate the mapping between CMSSM parameters and weak-scale particle masses. Our method reduces the computational effort needed to sample the CMSSM parameter space by a factor of ~10^4 with respect to conventional techniques. We find that both the Bayesian posterior and the profile likelihood intervals can significantly over-cover, and we trace the origin of this effect to physical boundaries in the parameter space. Finally, we point out that the effects intrinsic to the statistical procedure are conflated with simplifications to the likelihood functions from the experiments themselves.
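The surrogate-model trick this abstract relies on (replacing an expensive parameter-to-observable mapping with a cheap fitted regressor to accelerate scans) can be sketched with a toy numpy example. Everything below is an invented stand-in: the paper trains neural networks on the actual CMSSM spectrum calculation, whereas here a polynomial least-squares fit stands in for the network and an analytic function stands in for the expensive code.

```python
import numpy as np

# Hypothetical expensive mapping from model parameters to an observable
# (stand-in for the CMSSM-parameters -> weak-scale-masses spectrum code).
def expensive_mapping(theta):
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, (400, 2))
y_train = expensive_mapping(X_train)

# Cheap surrogate: least-squares fit over low-order polynomial features,
# playing the role the neural network plays in the paper.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2,
                            x1**2, x2**2, x1 * x2, x1**3])

coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)
surrogate = lambda X: features(X) @ coef

# Once fitted, the surrogate is evaluated instead of the expensive code
# inside the sampler, which is where the ~10^4 speed-up comes from.
X_test = rng.uniform(-1, 1, (200, 2))
err = np.max(np.abs(surrogate(X_test) - expensive_mapping(X_test)))
# err stays well below 0.01 on this toy problem.
```

For coverage studies, which require refitting intervals over thousands of pseudo-experiments, this kind of amortized surrogate is what makes the computation tractable.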
Casas, J. A., Moreno, J. M., Rius, N., Ruiz de Austri, R., & Zaldivar, B. (2011). Fair scans of the seesaw. Consequences for predictions on LFV processes. J. High Energy Phys., 03(3), 034–22pp.
Abstract: We give a straightforward procedure to scan the seesaw parameter space, using the common "R-parametrization", in a complete way. This includes a very simple rule for incorporating the perturbativity requirement as a condition on the entries of the R-matrix. As a relevant application, we show that the somewhat widespread belief that BR(μ -> e γ) in supersymmetric seesaw models depends strongly on the value of θ_13 is an "optical effect" produced by incomplete scans, and does not hold after a careful analytical and numerical study. When the complete scan is done, BR(μ -> e γ) becomes very insensitive to θ_13. This holds even if the right-handed neutrino masses are kept constant or under control (as is required for successful leptogenesis). In most cases the values of BR(μ -> e γ) are larger than the experimental upper bound. Including (unflavoured) leptogenesis does not introduce any further dependence on θ_13, although it decreases the typical value of BR(μ -> e γ).
Feroz, F., Cranmer, K., Hobson, M., Ruiz de Austri, R., & Trotta, R. (2011). Challenges of profile likelihood evaluation in multi-dimensional SUSY scans. J. High Energy Phys., 06(6), 042–23pp.
Abstract: Statistical inference of the fundamental parameters of supersymmetric theories is a challenging and active endeavor. Several sophisticated algorithms have been employed to this end. While Markov-Chain Monte Carlo (MCMC) and nested sampling techniques are geared towards Bayesian inference, they have also been used to estimate frequentist confidence intervals based on the profile likelihood ratio. We investigate the performance and appropriate configuration of MULTINEST, a nested-sampling-based algorithm, when used for profile likelihood-based analyses, both on toy models and on the parameter space of the Constrained MSSM. We find that, while the standard configuration previously used in the literature is appropriate for an accurate reconstruction of the Bayesian posterior, the profile likelihood is poorly approximated. We identify a more appropriate MULTINEST configuration for profile likelihood analyses, which gives an excellent exploration of the profile likelihood (albeit at a larger computational cost), including the identification of the global maximum-likelihood value. We conclude that with the appropriate configuration MULTINEST is a suitable tool for profile likelihood studies, indicating that previous claims to the contrary are not well founded.
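The distinction this abstract draws — that posterior estimation and profile-likelihood estimation place different demands on a sampler — can be illustrated by profiling a toy likelihood over binned samples. Everything below is an invented stand-in: a dense uniform scan replaces a MULTINEST run, and a simple two-parameter likelihood replaces the Constrained MSSM.

```python
import numpy as np

# Toy 2-parameter log-likelihood with a curved degeneracy, standing in
# for the samples a nested-sampling run would return.
def loglike(theta):
    x, y = theta[:, 0], theta[:, 1]
    return -0.5 * ((x - 1.0) ** 2 + (y - x ** 2) ** 2 / 0.1)

rng = np.random.default_rng(2)
samples = rng.uniform(-3, 3, (200_000, 2))  # dense scan of the space
logL = loglike(samples)

# Profile likelihood in x: maximize logL over the other parameter within
# each x bin. The max (unlike a posterior average) is only accurate if
# the sampler has explored the high-likelihood tails densely -- the
# configuration issue the paper investigates.
edges = np.linspace(-3, 3, 61)
bins = np.digitize(samples[:, 0], edges) - 1
profile = np.full(60, -np.inf)
np.maximum.at(profile, bins, logL)

i = np.argmax(profile)
x_best = 0.5 * (edges[i] + edges[i + 1])
# The profile peaks near x = 1, the global maximum-likelihood point.
```

With too few samples per bin, the binned maximum undershoots the true profile; that is the sampling-density failure mode motivating the reconfiguration described in the abstract.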