|
van Beekveld, M., Beenakker, W., Caron, S., Peeters, R., & Ruiz de Austri, R. (2017). Supersymmetry with dark matter is still natural. Phys. Rev. D, 96(3), 035015–7pp.
Abstract: We identify the parameter regions of the phenomenological minimal supersymmetric standard model (pMSSM) with the minimal possible fine-tuning. We show that the fine-tuning of the pMSSM is neither large nor under pressure from LHC searches. Low sbottom, stop and gluino masses turn out to be less relevant for low fine-tuning than commonly assumed. We show a link between low fine-tuning and the dark matter relic density. Fine-tuning arguments point to models with a dark matter candidate yielding the correct dark matter relic density: a bino-higgsino particle with a mass of 35-155 GeV. Some of these candidates are compatible with recent hints seen in astrophysics experiments such as Fermi-LAT and AMS-02. We argue that upcoming direct search experiments, such as XENON1T, will test all of the most natural solutions in the next few years due to the sensitivity of these experiments to the spin-dependent WIMP-nucleon cross section.
|
|
|
Otten, S., Caron, S., de Swart, W., van Beekveld, M., Hendriks, L., van Leeuwen, C., et al. (2021). Event generation and statistical sampling for physics with deep generative models and a density information buffer. Nat. Commun., 12(1), 2985–16pp.
Abstract: Simulating nature, and in particular processes in particle physics, requires expensive computations that sometimes take much longer than scientists can afford. Here, we explore possible solutions to this problem by investigating recent advances in generative modeling, and present a study of the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only the production of physical events, but also ensuring that these events occur with the correct frequencies. We investigate the feasibility of learning the event generation and the frequency of occurrence with several generative machine learning models to produce events like Monte Carlo generators. We study three processes: a simple two-body decay, the processes e(+)e(-) -> Z -> l(+)l(-) and pp -> t t-bar including the decay of the top quarks, and a simulation of the detector response. By buffering density information of encoded Monte Carlo events, given the encoder of a Variational Autoencoder, we are able to construct a prior for the sampling of new events from the decoder that yields distributions in very good agreement with real Monte Carlo events, generated several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground truth data, anomaly detection and more efficient importance sampling, e.g., for the phase space integration of matrix elements in quantum field theories. Here, the authors report buffered-density variational autoencoders for the generation of physical events. This method is computationally less expensive than other traditional methods and, beyond accelerating the data generation process, it can help to steer the generation and to detect anomalies.
|
|
|
Caron, S., Kim, J. S., Rolbiecki, K., Ruiz de Austri, R., & Stienen, B. (2017). The BSM-AI project: SUSY-AI – generalizing LHC limits on supersymmetry with machine learning. Eur. Phys. J. C, 77(4), 257–25pp.
Abstract: A key research question at the Large Hadron Collider is the test of models of new physics. Testing if a particular parameter set of such a model is excluded by LHC data is a challenge: it requires time-consuming generation of scattering events, simulation of the detector response, event reconstruction, cross section calculations and analysis code to test against several hundred signal regions defined by the ATLAS and CMS experiments. In the BSM-AI project we approach this challenge with a new idea. A machine learning tool is devised to predict, within a fraction of a millisecond, if a model is excluded or not directly from the model parameters. A first example is SUSY-AI, trained on the phenomenological supersymmetric standard model (pMSSM). About 300,000 pMSSM model sets – each tested against 200 signal regions by ATLAS – have been used to train and validate SUSY-AI. The code is currently able to reproduce the ATLAS exclusion regions in 19 dimensions with an accuracy of at least 93%. It has been validated further within the constrained MSSM and the minimal natural supersymmetric model, again showing high accuracy. SUSY-AI and its future BSM derivatives will help to solve the problem of recasting LHC results for any model of new physics. SUSY-AI can be downloaded from http://susyai.hepforge.org/. An on-line interface to the program for quick testing purposes can be found at http://www.susy-ai.org/.
|
|
|
van Beekveld, M., Beenakker, W., Caron, S., & Ruiz de Austri, R. (2016). The case for 100 GeV bino dark matter: a dedicated LHC tri-lepton search. J. High Energy Phys., 04(4), 154–26pp.
Abstract: Global fit studies performed in the pMSSM and the photon excess signal originating from the Galactic Center seem to suggest compressed electroweak supersymmetric spectra with a ∼100 GeV bino-like dark matter particle. We find that these scenarios are not probed by traditional electroweak supersymmetry searches at the LHC. We propose to extend the ATLAS and CMS electroweak supersymmetry searches with an improved strategy for bino-like dark matter, focusing on chargino plus next-to-lightest neutralino production, with a subsequent decay into a tri-lepton final state. We explore the sensitivity for pMSSM scenarios with Δm = m(NLSP) – m(LSP) ∼ (5–50) GeV in the √s = 14 TeV run of the LHC. Counterintuitively, we find that the requirement of low missing transverse energy increases the sensitivity compared to the current ATLAS and CMS searches. With 300 fb(-1) of data we expect the LHC experiments to be able to discover these supersymmetric spectra with mass gaps down to Δm ∼ 9 GeV for DM masses between 40 and 140 GeV. We stress the importance of a dedicated search strategy that targets precisely these favored pMSSM spectra.
|
|
|
van Beekveld, M., Caron, S., & Ruiz de Austri, R. (2020). The current status of fine-tuning in supersymmetry. J. High Energy Phys., 01(1), 147–41pp.
Abstract: In this paper, we minimize and compare two different fine-tuning measures in four high-scale supersymmetric models that are embedded in the MSSM. In addition, we determine the impact of current and future dark matter direct detection and collider experiments on the fine-tuning. We then compare the low-scale electroweak measure with the high-scale Barbieri-Giudice measure. We find that they reduce to the same value when the higgsino parameter drives the degree of fine-tuning. We also find spectra where the high-scale measure turns out to be lower than the low-scale measure. Depending on the high-scale model and fine-tuning definition, we find a minimal fine-tuning of 3-38 (corresponding to O(10-1)%) for the low-scale measure, and 63-571 (corresponding to O(1-0.1)%) for the high-scale measure. We stress that it is too early to draw conclusions about the fate of supersymmetry based on the fine-tuning paradigm alone.
|
|
|
Caron, S., Casas, J. A., Quilis, J., & Ruiz de Austri, R. (2018). Anomaly-free dark matter with harmless direct detection constraints. J. High Energy Phys., 12(12), 126–24pp.
Abstract: Dark matter (DM) interacting with the SM fields via a Z′-boson ('Z′-portal') remains one of the most attractive WIMP scenarios, both from the theoretical and the phenomenological points of view. In order to avoid the strong constraints from direct detection and dilepton production, it is highly convenient that the Z′ has an axial coupling to DM and leptophobic couplings to the SM particles, respectively. The latter implies that the associated U(1) coincides with baryon number in the SM sector. In this paper we completely classify the possible anomaly-free leptophobic Z′ models with a minimal dark sector, including the cases where the coupling to DM is axial. The resulting scenario is very predictive and perfectly viable given the present constraints from DM detection, EW observables and LHC data (di-lepton, di-jet and mono-jet production). We analyze all these constraints, obtaining the allowed areas in the parameter space, which generically prefer m(Z′) ≲ 500 GeV, apart from resonant regions. The best chances to test these viable areas come from future LHC measurements.
|
|
|
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques to naturally deal with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter, which then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used tool. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
|
|
|
Stoppa, F., Bhattacharyya, S., Ruiz de Austri, R., Vreeswijk, P., Caron, S., Zaharijas, G., et al. (2023). AutoSourceID-Classifier: Star-galaxy classification using a convolutional neural network with spatial information. Astron. Astrophys., 680, A109–16pp.
Abstract: Aims. Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification's reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. Methods. The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNN) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated, delivering precise and reliable results. Results. We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier and outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C's direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding in transient host identification and advancing deep-sky astronomy.
|
|
|
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108–14pp.
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximated locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features with respect to similar codes such as SourceExtractor, and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
|
|
|
Strege, C., Bertone, G., Besjes, G. J., Caron, S., Ruiz de Austri, R., Strubig, A., et al. (2014). Profile likelihood maps of a 15-dimensional MSSM. J. High Energy Phys., 09(9), 081–59pp.
Abstract: We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter, the Planck relic density measurement and data from accelerator searches. We provide a detailed analysis of the rich phenomenology of this model, and determine the SUSY mass spectrum and dark matter properties that are preferred by current experimental constraints. We evaluate the impact of the measurement of the anomalous magnetic moment of the muon (g – 2) on our results, and provide an analysis of scenarios in which the lightest neutralino is a subdominant component of the dark matter. The MSSM-15 parameters are relatively weakly constrained by current data sets, with the exception of the parameters related to dark matter phenomenology (M-1, M-2, mu), which are restricted to the sub-TeV regime, mainly due to the relic density constraint. The mass of the lightest neutralino is found to be < 1.5 TeV at 99% C.L., but can extend up to 3 TeV when excluding the g – 2 constraint from the analysis. Low-mass bino-like neutralinos are strongly favoured, with spin-independent scattering cross-sections extending to very small values, ∼10(-20) pb. ATLAS SUSY null searches strongly impact this mass range, and thus rule out a region of parameter space that is outside the reach of any current or future direct detection experiment. The best-fit point obtained after inclusion of all data corresponds to a squark mass of 2.3 TeV, a gluino mass of 2.1 TeV and a 130 GeV neutralino with a spin-independent cross-section of 2.4 x 10(-10) pb, which is within the reach of future multi-ton scale direct detection experiments and of the upcoming LHC run at increased centre-of-mass energy.
|
|