Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043–48pp.
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of the fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M_Z is considered. This allows us to scan the whole parameter space, admitting arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. We then incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect is reinforced if the Higgs mass is not close to its present experimental limit, and persists when dark matter constraints are included. Only when the g-2 constraint (based on e+e- data) is considered is the preferred region (for μ > 0) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
Aguilar-Saavedra, J. A., Casas, J. A., Quilis, J., & Ruiz de Austri, R. (2020). Multilepton dark matter signals. J. High Energy Phys., 04(4), 069–24pp.
Abstract: The signatures of dark matter at the LHC commonly involve, in simplified scenarios, the production of a single particle plus large missing energy from the undetected dark matter. However, in Z'-portal scenarios anomaly cancellation requires the presence of extra dark leptons in the dark sector. We investigate the signatures of the minimal scenarios of this kind, which involve cascade decays of the extra Z' boson into the dark leptons, identifying a four-lepton signal as the most promising one. We estimate the sensitivity to this signal at the LHC, the high-luminosity LHC upgrade, a possible high-energy upgrade, and a future circular collider. For Z' couplings compatible with current dijet constraints, the multilepton signals can reach the 5σ level already at Run 2 of the LHC. At future colliders, couplings two orders of magnitude smaller than the electroweak coupling can be probed with 5σ sensitivity.
Liem, S., Bertone, G., Calore, F., Ruiz de Austri, R., Tait, T. M. P., Trotta, R., et al. (2016). Effective field theory of dark matter: a global analysis. J. High Energy Phys., 09(9), 077–22pp.
Abstract: We present global fits of an effective field theory description of real and complex scalar dark matter candidates. We simultaneously take into account all possible dimension-6 operators consisting of dark matter bilinears and gauge-invariant combinations of quark and gluon fields. We derive constraints on the free model parameters for both the real (five parameters) and complex (seven parameters) scalar dark matter models, obtained by combining Planck data on the cosmic microwave background, direct detection limits from LUX, and indirect detection limits from the Fermi Large Area Telescope. We find that for real scalars indirect dark matter searches disfavour a dark matter particle mass below 100 GeV. For the complex scalar dark matter particle, current data have a limited impact due to the presence of operators that lead to p-wave annihilation and that do not contribute to the spin-independent scattering cross section. Although current data are not informative enough to strongly constrain the theory parameter space, we demonstrate the power of our formalism to reconstruct the theoretical parameters compatible with an actual dark matter detection, by assuming that the excess of gamma rays observed by the Fermi Large Area Telescope towards the Galactic centre is entirely due to dark matter annihilations. Note that the excess may well be due to astrophysical sources such as millisecond pulsars. We find that scalar dark matter interacting via effective field theory operators can in principle explain the Galactic centre excess, but that such an interpretation is in strong tension with the non-detection of gamma rays from dwarf galaxies in the real scalar case. In the complex scalar case there is enough freedom to relieve the tension.
Caron, S., Casas, J. A., Quilis, J., & Ruiz de Austri, R. (2018). Anomaly-free dark matter with harmless direct detection constraints. J. High Energy Phys., 12(12), 126–24pp.
Abstract: Dark matter (DM) interacting with the SM fields via a Z' boson (a 'Z'-portal') remains one of the most attractive WIMP scenarios, both from the theoretical and the phenomenological points of view. In order to avoid the strong constraints from direct detection and dilepton production, it is highly convenient that the Z' has an axial coupling to DM and leptophobic couplings to the SM particles, respectively. The latter implies that the associated U(1) coincides with baryon number in the SM sector. In this paper we completely classify the possible anomaly-free leptophobic Z' models with a minimal dark sector, including the cases where the coupling to DM is axial. The resulting scenario is very predictive and perfectly viable given the present constraints from DM detection, EW observables, and LHC data (dilepton, dijet, and monojet production). We analyze all these constraints, obtaining the allowed areas in the parameter space, which generically prefer m_Z' ≲ 500 GeV, apart from resonant regions. The best chances to test these viable areas come from future LHC measurements.
Trotta, R., Johannesson, G., Moskalenko, I. V., Porter, T. A., Ruiz de Austri, R., & Strong, A. W. (2011). Constraints on Cosmic-Ray Propagation Models from a Global Bayesian Analysis. Astrophys. J., 729(2), 106–16pp.
Abstract: Research in many areas of modern physics, such as indirect searches for dark matter and particle acceleration in supernova remnant shocks, relies heavily on studies of cosmic rays (CRs) and associated diffuse emissions (radio, microwave, X-rays, gamma-rays). While very detailed numerical models of CR propagation exist, a quantitative statistical analysis of such models has so far been hampered by the large computational effort that those models require. Although statistical analyses have been carried out before using semi-analytical models (where the computation is much faster), the evaluation of the results obtained from such models is difficult, as they necessarily suffer from many simplifying assumptions. The main objective of this paper is to present a working method for full Bayesian parameter estimation with a numerical CR propagation model. For this study, we use the GALPROP code, the most advanced of its kind, which uses astrophysical information and nuclear and particle data as inputs to self-consistently predict CRs, gamma rays, synchrotron emission, and other observables. We demonstrate that a full Bayesian analysis is possible using nested sampling and Markov chain Monte Carlo methods (implemented in the SuperBayeS code) despite the heavy computational demands of a numerical propagation code. The best-fit values of the parameters found in this analysis are in agreement with previous, significantly simpler, studies also based on GALPROP.
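The setting described in this abstract, where each likelihood evaluation requires a full run of a numerical propagation code, is exactly where random-walk Markov chain Monte Carlo earns its keep: it needs only pointwise posterior evaluations. A minimal sketch, with a toy Gaussian posterior standing in for GALPROP; the function names, step size, and chain length are illustrative assumptions, not the actual SuperBayeS algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    """Stand-in for the expensive forward model plus likelihood; a unit
    Gaussian here, so the sketch runs in milliseconds rather than the
    hours a real propagation run would take per evaluation."""
    return -0.5 * np.sum(theta ** 2)

def metropolis(log_post, theta0, n_steps=5000, step=0.5):
    """Random-walk Metropolis: propose a Gaussian jump, accept with
    probability min(1, posterior ratio), otherwise repeat the point."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis(log_posterior, np.zeros(2))
print(chain.mean(axis=0))  # close to [0, 0] for this toy posterior
```

Nested sampling (as in MultiNest) additionally yields the Bayesian evidence, which plain Metropolis does not provide.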
Johannesson, G., Ruiz de Austri, R., Vincent, A. C., Moskalenko, I. V., Orlando, E., Porter, T. A., et al. (2016). Bayesian analysis of cosmic-ray propagation: evidence against homogeneous diffusion. Astrophys. J., 824(1), 16–19pp.
Abstract: We present the results of the most complete scan of the parameter space for cosmic-ray (CR) injection and propagation. We perform a Bayesian search of the main GALPROP parameters, using the MultiNest nested sampling algorithm, augmented by the BAMBI neural-network machine-learning package. This is the first study to separate out low-mass isotopes (p, p̄, and He) from the usual light elements (Be, B, C, N, and O). We find that the propagation parameters that best fit the p, p̄, and He data are significantly different from those that fit the light elements, including the B/C and Be-10/Be-9 secondary-to-primary ratios normally used to calibrate propagation parameters. This suggests that each set of species is probing a very different interstellar medium, and that the standard approach of calibrating propagation parameters using B/C can lead to incorrect results. We present posterior distributions and best-fit parameters for the propagation of both sets of nuclei, as well as for the injection abundances of elements from H to Si. The input GALDEF files with these new parameters will be included in an upcoming public GALPROP update.
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients when they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques that can naturally deal with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm based on U-shaped networks and enhanced with a Laplacian of Gaussian filter provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter that then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
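The Laplacian-of-Gaussian step that follows the U-Net can be illustrated in isolation. A minimal SciPy sketch, in which the function name, threshold rule, and parameter values are illustrative assumptions rather than the actual ASID-L code:

```python
import numpy as np
from scipy import ndimage

def log_localize(image, sigma=2.0, threshold=5.0):
    """Locate point sources as peaks of a Laplacian-of-Gaussian response.

    A LoG filter responds strongly (and negatively) at the centre of a
    blob whose scale matches `sigma`; negating the response turns bright
    blobs into positive peaks, which are then thresholded.
    """
    response = -ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A pixel is a detection if it is the maximum of its 3x3 neighbourhood
    # and exceeds `threshold` response standard deviations.
    peaks = response == ndimage.maximum_filter(response, size=3)
    peaks &= response > threshold * response.std()
    ys, xs = np.nonzero(peaks)
    return [(int(y), int(x)) for y, x in zip(ys, xs)]

# Synthetic 64x64 frame with one Gaussian source injected at (20, 40).
yy, xx = np.mgrid[:64, :64]
frame = 100.0 * np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 2.0 ** 2))
print(log_localize(frame))  # a single detection at the injected position
```

In ASID-L the U-Net first suppresses artifacts, so the filter runs on a cleaned probability map rather than the raw image.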
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108–14pp.
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximate locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties, without the need for additional information such as the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features than similar codes such as SourceExtractor, and that the two-step method can estimate well-calibrated uncertainties that are better behaved than those of similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer-learning abilities.
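The objective behind mean variance estimation networks is a Gaussian negative log-likelihood with a predicted mean and (log-)variance per target; the two-step scheme fits the mean first and only then the variance. A minimal NumPy sketch of that loss, with the function and the toy numbers illustrative rather than the actual ASID-FE implementation:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)),
    dropping the constant 0.5*log(2*pi) term. Predicting the
    log-variance keeps the variance positive and the loss stable."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

# Two-step idea on a toy regression target:
y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])           # step 1: mean fitted first (e.g. by MSE)
residual_var = np.mean((y - mu) ** 2)    # step 2: variance head, here
log_var = np.full_like(y, np.log(residual_var))  # initialised from residuals
loss = gaussian_nll(y, mu, log_var).mean()
```

Matching the variance to the actual residual scatter minimises this loss, which is why a badly mis-specified variance (e.g. log_var = 0 here) scores worse.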
Roszkowski, L., Ruiz de Austri, R., & Trotta, R. (2010). Efficient reconstruction of constrained MSSM parameters from LHC data: A case study. Phys. Rev. D, 82(5), 055003–12pp.
Abstract: We present an efficient method of reconstructing the parameters of the constrained MSSM from assumed future LHC data, applied both in their own right and in combination with the cosmological determination of the relic dark matter abundance. Focusing on the ATLAS SU3 benchmark point, we demonstrate that our simple Gaussian approximation can recover the values of its parameters remarkably well. We examine two popular noninformative priors and obtain very similar results, although when we use an informative, naturalness-motivated prior, we find some sizeable differences. We show that a further strong improvement in reconstructing the SU3 parameters can be achieved by applying additional information about the relic abundance at the level of WMAP accuracy, although the expected data from Planck will have only a very limited additional impact. Further external data may be required to break some remaining degeneracies. We argue that the method presented here is applicable to a wide class of low-energy effective supersymmetric models, as it does not require one to deal with purely experimental issues, e.g., detector performance, and has the additional advantage of computational efficiency. Furthermore, our approach allows one to distinguish the effects of the model's internal structure and of the external data on the final parameter constraints.
Bertone, G., Cerdeño, D. G., Fornasa, M., Ruiz de Austri, R., & Trotta, R. (2010). Identification of dark matter particles with LHC and direct detection data. Phys. Rev. D, 82(5), 055008–7pp.
Abstract: Dark matter (DM) is currently searched for with a variety of detection strategies. Accelerator searches are particularly promising, but even if weakly interacting massive particles are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the DM in the Universe, Ω_DM. We show that a significantly better reconstruction of the DM properties can be obtained with a combined analysis of LHC and direct detection data, by making a simple ansatz on the local density ρ_0(χ̃_1) of weakly interacting massive particles, i.e., by assuming that the local density scales with the cosmological relic abundance, ρ_0(χ̃_1)/ρ_DM = Ω_χ̃_1/Ω_DM. We demonstrate this method in an explicit example in the context of a 24-parameter supersymmetric model, with a neutralino lightest supersymmetric particle in the stau coannihilation region. Our results show that future ton-scale direct detection experiments will allow one to break degeneracies in the supersymmetric parameter space and achieve a significantly better reconstruction of the neutralino composition and its relic density than with LHC data alone.
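The scaling ansatz in this abstract amounts to a one-line rescaling of the local density. A sketch, assuming the illustrative fiducial values ρ_DM = 0.3 GeV/cm³ and Ω_DM = 0.26 (the paper's exact inputs may differ):

```python
def local_density(omega_chi, omega_dm=0.26, rho_dm=0.3):
    """Rescale the canonical local DM density (GeV/cm^3) by the fraction
    of the relic abundance the candidate accounts for, following the
    ansatz rho_0(chi)/rho_DM = Omega_chi/Omega_DM."""
    return rho_dm * omega_chi / omega_dm

# A candidate supplying half of Omega_DM gets half the local density:
print(local_density(0.13))  # ≈ 0.15 GeV/cm^3
```

This couples the collider-inferred relic abundance to the direct detection rate, which scales linearly with the local density, and is what lets the two datasets constrain each other.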