van Beekveld, M., Beenakker, W., Caron, S., Kip, J., Ruiz de Austri, R., & Zhang, Z. Y. (2023). Non-standard neutrino spectra from annihilating neutralino dark matter. SciPost Phys. Core, 6(1), 006–23pp.
Abstract: Neutrino telescope experiments are rapidly becoming more competitive in indirect detection searches for dark matter. Neutrino signals arising from dark matter annihilations are typically assumed to originate from the hadronisation and decay of Standard Model particles. Here we showcase a supersymmetric model, the BLSSMIS, that can simultaneously obey current experimental limits while still providing a potentially observable non-standard neutrino spectrum from dark matter annihilation.
|
van Beekveld, M., Caron, S., Hendriks, L., Jackson, P., Leinweber, A., Otten, S., et al. (2021). Combining outlier analysis algorithms to identify new physics at the LHC. J. High Energy Phys., 09(9), 024–33pp.
Abstract: The lack of evidence for new physics at the Large Hadron Collider so far has prompted the development of model-independent search techniques. In this study, we compare the anomaly scores of a variety of anomaly detection techniques: an isolation forest, a Gaussian mixture model, a static autoencoder, and a β-variational autoencoder (β-VAE), where we define the reconstruction loss of the latter as a weighted combination of regression and classification terms. We apply these algorithms to the 4-vectors of simulated LHC data, but also investigate the performance when the non-VAE algorithms are applied to the latent-space variables created by the β-VAE. In addition, we assess the performance when the anomaly scores of these algorithms are combined in various ways. Using supersymmetric benchmark points, we find that the logical AND combination of the anomaly scores yielded by algorithms trained in the latent space of the β-VAE is the most effective discriminator of all methods tested.
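The logical AND combination described in this abstract can be sketched as follows. This is a minimal illustration with toy Gaussian scores, not the paper's implementation: the per-detector quantile threshold and the two-detector setup are assumptions made for the example.

```python
import numpy as np

def and_combine(scores, quantile=0.99):
    """Flag an event as anomalous only if *every* detector's score
    exceeds its own quantile threshold (logical AND combination)."""
    scores = np.asarray(scores, dtype=float)            # shape (n_detectors, n_events)
    thresholds = np.quantile(scores, quantile, axis=1, keepdims=True)
    flags = scores > thresholds                         # per-detector anomaly flags
    return np.all(flags, axis=0)                        # AND across detectors

# Toy example: two detectors, one event scored high by both.
rng = np.random.default_rng(0)
s = rng.normal(size=(2, 1000))
s[:, 0] = 10.0                                          # a clear outlier for both detectors
combined = and_combine(s)
```

The AND combination trades signal efficiency for purity: an event survives only if every algorithm independently considers it anomalous, which suppresses the background events that a single detector flags by chance.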
|
van Beekveld, M., Caron, S., & Ruiz de Austri, R. (2020). The current status of fine-tuning in supersymmetry. J. High Energy Phys., 01(1), 147–41pp.
Abstract: In this paper, we minimize and compare two different fine-tuning measures in four high-scale supersymmetric models that are embedded in the MSSM. In addition, we determine the impact of current and future dark matter direct detection and collider experiments on the fine-tuning. We then compare the low-scale electroweak measure with the high-scale Barbieri-Giudice measure. We find that they reduce to the same value when the higgsino parameter drives the degree of fine-tuning. We also find spectra where the high-scale measure turns out to be lower than the low-scale measure. Depending on the high-scale model and fine-tuning definition, we find a minimal fine-tuning of 3–38 (corresponding to O(10⁻¹)%) for the low-scale measure, and 63–571 (corresponding to O(1–0.1)%) for the high-scale measure. We stress that it is too early to decide the fate of supersymmetry based only on the fine-tuning paradigm.
|
van Beekveld, M., Beenakker, W., Caron, S., & Ruiz de Austri, R. (2016). The case for 100 GeV bino dark matter: a dedicated LHC tri-lepton search. J. High Energy Phys., 04(4), 154–26pp.
Abstract: Global fit studies performed in the pMSSM and the photon excess signal originating from the Galactic Center seem to suggest compressed electroweak supersymmetric spectra with a ∼100 GeV bino-like dark matter particle. We find that these scenarios are not probed by traditional electroweak supersymmetry searches at the LHC. We propose to extend the ATLAS and CMS electroweak supersymmetry searches with an improved strategy for bino-like dark matter, focusing on chargino plus next-to-lightest neutralino production, with a subsequent decay into a tri-lepton final state. We explore the sensitivity for pMSSM scenarios with Δm = m(NLSP) − m(LSP) ∼ (5–50) GeV in the √s = 14 TeV run of the LHC. Counterintuitively, we find that the requirement of low missing transverse energy increases the sensitivity compared to the current ATLAS and CMS searches. With 300 fb⁻¹ of data we expect the LHC experiments to be able to discover these supersymmetric spectra with mass gaps down to Δm ∼ 9 GeV for DM masses between 40 and 140 GeV. We stress the importance of a dedicated search strategy that targets precisely these favored pMSSM spectra.
|
van Beekveld, M., Beenakker, W., Caron, S., Peeters, R., & Ruiz de Austri, R. (2017). Supersymmetry with dark matter is still natural. Phys. Rev. D, 96(3), 035015–7pp.
Abstract: We identify the parameter regions of the phenomenological minimal supersymmetric standard model (pMSSM) with the minimal possible fine-tuning. We show that the fine-tuning of the pMSSM is not large, nor under pressure by LHC searches. Low sbottom, stop and gluino masses turn out to be less relevant for low fine-tuning than commonly assumed. We show a link between low fine-tuning and the dark matter relic density. Fine-tuning arguments point to models with a dark matter candidate yielding the correct dark matter relic density: a bino-higgsino particle with a mass of 35–155 GeV. Some of these candidates are compatible with recent hints seen in astrophysics experiments such as Fermi-LAT and AMS-02. We argue that upcoming direct search experiments, such as XENON1T, will test all of the most natural solutions in the next few years due to the sensitivity of these experiments to the spin-dependent WIMP-nucleon cross section.
|
Trotta, R., Johannesson, G., Moskalenko, I. V., Porter, T. A., Ruiz de Austri, R., & Strong, A. W. (2011). Constraints on Cosmic-Ray Propagation Models from a Global Bayesian Analysis. Astrophys. J., 729(2), 106–16pp.
Abstract: Research in many areas of modern physics, such as indirect searches for dark matter and particle acceleration in supernova remnant shocks, relies heavily on studies of cosmic rays (CRs) and associated diffuse emissions (radio, microwave, X-rays, gamma-rays). While very detailed numerical models of CR propagation exist, a quantitative statistical analysis of such models has been so far hampered by the large computational effort that those models require. Although statistical analyses have been carried out before using semi-analytical models (where the computation is much faster), the evaluation of the results obtained from such models is difficult, as they necessarily suffer from many simplifying assumptions. The main objective of this paper is to present a working method for a full Bayesian parameter estimation for a numerical CR propagation model. For this study, we use the GALPROP code, the most advanced of its kind, which uses astrophysical information, and nuclear and particle data as inputs to self-consistently predict CRs, gamma-rays, synchrotron, and other observables. We demonstrate that a full Bayesian analysis is possible using nested sampling and Markov Chain Monte Carlo methods (implemented in the SuperBayeS code) despite the heavy computational demands of a numerical propagation code. The best-fit values of parameters found in this analysis are in agreement with previous, significantly simpler, studies also based on GALPROP.
|
Strege, C., Bertone, G., Besjes, G. J., Caron, S., Ruiz de Austri, R., Strubig, A., et al. (2014). Profile likelihood maps of a 15-dimensional MSSM. J. High Energy Phys., 09(9), 081–59pp.
Abstract: We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter, the Planck relic density measurement and data from accelerator searches. We provide a detailed analysis of the rich phenomenology of this model, and determine the SUSY mass spectrum and dark matter properties that are preferred by current experimental constraints. We evaluate the impact of the measurement of the anomalous magnetic moment of the muon (g − 2) on our results, and provide an analysis of scenarios in which the lightest neutralino is a subdominant component of the dark matter. The MSSM-15 parameters are relatively weakly constrained by current data sets, with the exception of the parameters related to dark matter phenomenology (M₁, M₂, μ), which are restricted to the sub-TeV regime, mainly due to the relic density constraint. The mass of the lightest neutralino is found to be < 1.5 TeV at 99% C.L., but can extend up to 3 TeV when excluding the g − 2 constraint from the analysis. Low-mass bino-like neutralinos are strongly favoured, with spin-independent scattering cross-sections extending to very small values, ∼10⁻²⁰ pb. ATLAS SUSY null searches strongly impact this mass range, and thus rule out a region of parameter space that is outside the reach of any current or future direct detection experiment. The best-fit point obtained after inclusion of all data corresponds to a squark mass of 2.3 TeV, a gluino mass of 2.1 TeV and a 130 GeV neutralino with a spin-independent cross-section of 2.4 × 10⁻¹⁰ pb, which is within the reach of future multi-ton scale direct detection experiments and of the upcoming LHC run at increased centre-of-mass energy.
|
Strege, C., Bertone, G., Cerdeño, D. G., Fornasa, M., Ruiz de Austri, R., & Trotta, R. (2012). Updated global fits of the cMSSM including the latest LHC SUSY and Higgs searches and XENON100 data. J. Cosmol. Astropart. Phys., 03(3), 030–22pp.
Abstract: We present new global fits of the constrained Minimal Supersymmetric Standard Model (cMSSM), including LHC SUSY exclusion limits from 1 fb⁻¹ of integrated luminosity, recent LHC constraints on the mass of the Higgs boson from 5 fb⁻¹, and XENON100 direct detection data. Our analysis fully takes into account astrophysical and hadronic uncertainties that enter the analysis when translating direct detection limits into constraints on the cMSSM parameter space. We provide results for both a Bayesian and a Frequentist statistical analysis. We find that LHC 2011 constraints in combination with XENON100 data can rule out a significant portion of the cMSSM parameter space. Our results further emphasise the complementarity of collider experiments and direct detection searches in constraining extensions of Standard Model physics. The LHC 2011 exclusion limit strongly impacts low-mass regions of cMSSM parameter space, such as the stau co-annihilation region, while direct detection data can rule out regions of high SUSY masses, such as the Focus-Point region, which is unreachable for the LHC in the near future. We show that, in addition to XENON100 data, the experimental constraint on the anomalous magnetic moment of the muon plays a dominant role in disfavouring large scalar and gaugino masses. We find that, should the LHC 2011 excess hinting towards a Higgs boson at 126 GeV be confirmed, currently favoured regions of the cMSSM parameter space will be robustly ruled out from both a Bayesian and a profile likelihood statistical perspective.
|
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients when they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques to deal naturally with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter that then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
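The Laplacian of Gaussian localization step described in this abstract can be sketched as follows. This is a toy illustration on a synthetic single-source image, not the ASID-L pipeline: the image size, source position, and filter width are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic 64x64 image with a single Gaussian point source centred at (20, 40).
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((y - 20) ** 2 + (x - 40) ** 2) / (2 * 1.5 ** 2))

# LoG response: a bright blob gives a strong negative response at its centre,
# so the minimum of the filtered image marks the source location.
log_resp = gaussian_laplace(img, sigma=1.5)
peak = np.unravel_index(np.argmin(log_resp), log_resp.shape)  # (row, col) of the source
```

In the paper's framework, the U-Net first suppresses artifacts and isolates source-like structure; a blob filter of this kind then refines each candidate into an exact pixel location.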
|
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108–14pp.
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximated locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features with respect to similar codes such as SourceExtractor and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
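The two-step mean variance estimation idea in this abstract can be sketched on a toy regression problem. This substitutes linear models for the paper's neural networks (an assumption made to keep the example self-contained); the principle is the same: fit the mean first, then, with the mean frozen, fit the variance of the residuals.

```python
import numpy as np

# Toy heteroscedastic-free data: true mean 2x, Gaussian noise with sigma = 0.3.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)

# Step 1: fit the mean alone (plain least squares stands in for the mean network).
X = np.column_stack([x, np.ones_like(x)])
w_mean, *_ = np.linalg.lstsq(X, y, rcond=None)
mu = X @ w_mean

# Step 2: with the mean frozen, fit the variance by maximising the Gaussian
# likelihood of the residuals; for a constant variance this has the closed
# form of the mean squared residual.
sigma2 = np.mean((y - mu) ** 2)
```

Decoupling the two fits avoids the known failure mode of joint mean-variance training, where an over-inflated variance estimate early in training masks errors in the mean.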
|