MoEDAL Collaboration (Acharya, B., et al.), Bernabeu, J., Mamuzic, J., Mitsou, V. A., Papavassiliou, J., Ruiz de Austri, R., et al. (2021). First Search for Dyons with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions. Phys. Rev. Lett., 126(7), 071801–7pp.
Abstract: The MoEDAL trapping detector consists of approximately 800 kg of aluminum volumes. It was exposed during Run 2 of the LHC program to 6.46 fb^-1 of 13 TeV proton-proton collisions at the LHCb interaction point. Evidence for dyons (particles carrying both electric and magnetic charge) captured in the trapping detector was sought by passing the aluminum volumes comprising the detector through a superconducting quantum interference device (SQUID) magnetometer. The presence of a trapped dyon would be signaled by a persistent current induced in the SQUID magnetometer. On the basis of a Drell-Yan production model, we exclude dyons with a magnetic charge of up to five Dirac charges (5g_D) and an electric charge of up to 200 times the fundamental electric charge, for mass limits in the range 870-3120 GeV, and also monopoles with magnetic charge up to and including 5g_D, with mass limits in the range 870-2040 GeV.
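The Dirac charge g_D that sets the units in this search follows from the standard Dirac quantization condition; a minimal illustrative computation (textbook physics, not code from the paper):

```python
# Illustrative computation of the Dirac charge unit g_D used in the
# abstract above. Dirac quantization fixes the minimal magnetic charge at
# g_D = e / (2 * alpha_em), i.e. about 68.5 elementary charges, which is
# why even a few g_D couple very strongly to photons.

ALPHA_EM = 1 / 137.035999  # fine-structure constant (dimensionless)

def dirac_charge_in_e(n=1):
    """Magnetic charge n * g_D expressed in units of the elementary charge e."""
    return n / (2 * ALPHA_EM)

print(f"1 g_D = {dirac_charge_in_e(1):.1f} e")  # about 68.5 e
print(f"5 g_D = {dirac_charge_in_e(5):.1f} e")  # about 342.6 e
```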
Allanach, B. C., Bednyakov, A., & Ruiz de Austri, R. (2015). Higher order corrections and unification in the minimal supersymmetric standard model: SOFTSUSY3.5. Comput. Phys. Commun., 189, 192–206.
Abstract: We explore the effects of three-loop minimal supersymmetric standard model renormalisation group equation terms and some leading two-loop threshold corrections on gauge and Yukawa unification, each being one loop higher order than current public spectrum calculators. We also explore the effect of the higher order terms (often 2-3 GeV) on the lightest CP-even Higgs mass prediction. We illustrate our results in the constrained minimal supersymmetric standard model. Neglecting threshold corrections at the grand unified scale, the discrepancy between the unification scale alpha_s and the other two unified gauge couplings changes by 0.1% due to the higher order corrections, and the difference between unification scale bottom-tau Yukawa couplings (neglecting unification scale threshold corrections) changes by up to 1%. The difference between unification scale bottom and top Yukawa couplings changes by a few percent. Differences due to the higher order corrections also give an estimate of the size of theoretical uncertainties in the minimal supersymmetric standard model spectrum. We use these to provide estimates of theoretical uncertainties in predictions of the dark matter relic density (which can be of order one due to its strong dependence on sparticle masses) and the LHC sparticle production cross-section (often around 30%). The additional higher order corrections have been incorporated into SOFTSUSY, and we provide details on how to compile and use the program. We also provide a summary of the approximations used in the higher order corrections.
Program Summary
Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry breaking parameters, as well as the weak-scale boundary condition on gauge couplings, Yukawa couplings and the Higgs potential parameters.
Program title: SOFTSUSY
Catalogue identifier: ADPMv50
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPMv50.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 240528
No. of bytes in distributed program, including test data, etc.: 2597933
Distribution format: tar.gz
Programming language: C++, Fortran.
Computer: Personal computer.
Operating system: Tested on Linux 3.4.6.
Word size: 64 bits.
Classification: 11.1, 11.6.
External routines: At least GiNaC1.3.5 [1] and CLN1.3.1 (both freely obtainable from http://www.ginac.de).
Does the new version supersede the previous version?: Yes.
Catalogue identifier of previous version: ADPMv40
Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 2322
Solution method: Nested iterative algorithm.
Reasons for new version: Extension to include additional two- and three-loop terms.
Summary of revisions: All quantities in the minimal supersymmetric standard model are extended to have three-loop renormalisation group equations (including 3-family mixing) in the limit of real parameters, and some leading two-loop threshold corrections are incorporated into the third family Yukawa couplings and the strong gauge coupling.
Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message. The higher order corrections included are for the real R-parity conserving minimal supersymmetric standard model (MSSM) only.
Running time: A minute per parameter point. The tests provided with the package only take a few seconds to run.
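The "nested iterative algorithm" named as the solution method can be sketched schematically; the map below (x -> cos x) is a toy stand-in, whereas SOFTSUSY iterates full RGE running between high-scale and weak-scale boundary conditions:

```python
import math

# Toy sketch of a fixed-point iteration: repeatedly apply an update map
# until the output stops changing, i.e. until the boundary conditions are
# mutually consistent. Not SOFTSUSY code; purely illustrative.

def fixed_point(f, x0, tol=1e-12, max_iter=10000):
    """Iterate x -> f(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy 'consistency condition': find x satisfying x = cos(x).
solution = fixed_point(math.cos, 1.0)
print(solution)  # converges to x with x == cos(x), about 0.739085
```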
Allanach, B. C., Martin, S. P., Robertson, D. G., & Ruiz de Austri, R. (2017). The inclusion of two-loop SUSYQCD corrections to gluino and squark pole masses in the minimal and next-to-minimal supersymmetric standard model: SOFTSUSY3.7. Comput. Phys. Commun., 219, 339–345.
Abstract: We describe an extension of the SOFTSUSY spectrum calculator to include two-loop supersymmetric QCD (SUSYQCD) corrections of order O(alpha_s^2) to gluino and squark pole masses, either in the minimal supersymmetric standard model (MSSM) or the next-to-minimal supersymmetric standard model (NMSSM). This document provides an overview of the program and acts as a manual for the new version of SOFTSUSY, which includes the increase in accuracy in squark and gluino pole mass predictions.
Program Summary
Program title: SOFTSUSY
Program Files doi: http://dx.doi.org/10.17632/sh77x9j7hs.1
Licensing provisions: GNU GPLv3
Programming language: C++, Fortran, C
Nature of problem: Calculating the supersymmetric particle spectrum, mixing parameters and couplings in the MSSM or the NMSSM. The solution to the renormalization group equations must be consistent with theoretical boundary conditions on supersymmetry breaking parameters, as well as a weak-scale boundary condition on gauge couplings, Yukawa couplings and the Higgs potential parameters.
Solution method: Nested fixed point iteration.
Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message. The higher order corrections included are for the MSSM (R-parity conserving or violating) or the real R-parity conserving NMSSM only.
Journal reference of previous version: Comput. Phys. Comm. 189 (2015) 192.
Does the new version supersede the previous version?: Yes.
Reasons for the new version: It is desirable to improve the accuracy of the squark and gluino mass predictions, since they strongly affect supersymmetric particle production cross-sections at colliders.
Summary of revisions: The calculation of the squark and gluino pole masses is extended to next-to-next-to-leading order in SUSYQCD, i.e. including terms up to O(g_s^4/(16 pi^2)^2).
Additional comments: Program obtainable from http://softsusy.hepforge.org/
Ferrer-Sanchez, A., Martin-Guerrero, J., Ruiz de Austri, R., Torres-Forne, A., & Font, J. A. (2024). Gradient-annihilated PINNs for solving Riemann problems: Application to relativistic hydrodynamics. Comput. Meth. Appl. Mech. Eng., 424, 116906–18pp.
Abstract: We present a novel methodology based on Physics-Informed Neural Networks (PINNs) for solving systems of partial differential equations admitting discontinuous solutions. Our method, called Gradient-Annihilated PINNs (GA-PINNs), introduces a modified loss function that forces the model to partially ignore high gradients in the physical variables, achieved by introducing a suitable weighting function. The method relies on a set of hyperparameters that control how gradients are treated in the physical loss. The performance of our methodology is demonstrated by solving Riemann problems in special relativistic hydrodynamics, extending earlier studies with PINNs in the context of the classical Euler equations. The solutions obtained with the GA-PINN model correctly describe the propagation speeds of discontinuities and sharply capture the associated jumps. We use the relative l_2 error to compare our results with the exact solution of special relativistic Riemann problems, used as the reference "ground truth", and with the corresponding error obtained with a second-order, central, shock-capturing scheme. In all problems investigated, the accuracy reached by the GA-PINN model is comparable to that obtained with a shock-capturing scheme, and its performance is generally superior to that of the baseline PINN algorithm. An additional benefit worth stressing is that our PINN-based approach sidesteps the costly recovery of the primitive variables from the state vector of conserved variables, a well-known drawback of grid-based solutions of the relativistic hydrodynamics equations. Due to its inherent generality and its ability to handle steep gradients, the GA-PINN methodology discussed in this paper could be a valuable tool to model relativistic flows in astrophysics and particle physics, characterized by the prevalence of discontinuous solutions.
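The gradient-annihilation idea can be sketched in a few lines; the weighting form and the hyperparameters (k, p) below are illustrative assumptions rather than the paper's exact choice:

```python
# Down-weight the PDE-residual loss where the solution gradient is steep,
# so the network is not penalized at genuine discontinuities. The specific
# weight 1 / (1 + (k * |du/dx|)^p) is an assumed, illustrative form.

def ga_weight(grad, k=1.0, p=2.0):
    """-> 1 in smooth regions, -> 0 across steep fronts."""
    return 1.0 / (1.0 + (k * abs(grad)) ** p)

# Piecewise-constant profile with a jump between grid points 4 and 5.
dx = 0.1
u = [0.0] * 5 + [1.0] * 5
grads = [(u[i + 1] - u[i]) / dx for i in range(len(u) - 1)]
weights = [ga_weight(g) for g in grads]

print(weights[0])  # 1.0: full residual weight in the smooth region
print(weights[4])  # ~0.0099: residual nearly annihilated at the jump
```

In a full GA-PINN loss these weights would multiply the per-point PDE residuals before summation, leaving the data-fitting terms untouched.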
MoEDAL Collaboration (Acharya, B., et al.), Mitsou, V. A., Papavassiliou, J., Ruiz de Austri, R., Santra, A., Vento, V., et al. (2022). Search for magnetic monopoles produced via the Schwinger mechanism. Nature, 602(7895), 63–67.
Abstract: Electrically charged particles can be created by the decay of strong enough electric fields, a phenomenon known as the Schwinger mechanism [1]. By electromagnetic duality, a sufficiently strong magnetic field would similarly produce magnetic monopoles, if they exist [2]. Magnetic monopoles are hypothetical fundamental particles that are predicted by several theories beyond the standard model [3-7] but have never been experimentally detected. Searching for magnetic monopoles via the Schwinger mechanism has not yet been attempted, but it is advantageous because the production rate can be calculated with semi-classical techniques without perturbation theory, and because the production of magnetic monopoles should be enhanced by their finite size [8,9] and strong coupling to photons [2,10]. Here we present a search for magnetic monopole production by the Schwinger mechanism in Pb-Pb heavy-ion collisions at the Large Hadron Collider, which produce the strongest known magnetic fields in the current Universe [11]. It was conducted by the MoEDAL experiment, whose trapping detectors were exposed to 0.235 per nanobarn, or approximately 1.8 × 10^9, Pb-Pb collisions with 5.02-teraelectronvolt center-of-mass energy per collision in November 2018. A superconducting quantum interference device (SQUID) magnetometer scanned the trapping detectors of MoEDAL for the presence of magnetic charge, which would induce a persistent current in the SQUID. Magnetic monopoles with integer Dirac charges of 1, 2 and 3 and masses up to 75 gigaelectronvolts per speed of light squared were excluded by the analysis at the 95% confidence level. This provides a lower mass limit for finite-size magnetic monopoles from a collider search and greatly extends previous mass bounds.
Panes, B., Eckner, C., Hendriks, L., Caron, S., Dijkstra, K., Johannesson, G., et al. (2021). Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge. Astron. Astrophys., 656, A62–18pp.
Abstract: Context. At GeV energies, the sky is dominated by the interstellar emission from the Galaxy. With limited statistics and spatial resolution, accurately separating point sources is therefore challenging. Aims. Here we present the first application of deep-learning-based algorithms to automatically detect and classify point sources from gamma-ray data. For concreteness we refer to this approach as AutoSourceID. Methods. To detect point sources, we utilized U-shaped convolutional networks for image segmentation and k-means for source clustering and localization. We also explored the Centroid-Net algorithm, which is designed to find and count objects. Using two algorithms allows for a cross-check of the results, while a combination of their results can be used to improve performance. The training data are based on 9.5 years of exposure from the Fermi Large Area Telescope (Fermi-LAT), and we used source properties of active galactic nuclei (AGNs) and pulsars (PSRs) from the fourth Fermi-LAT source catalog in addition to several models of background interstellar emission. The results of the localization algorithm are fed into a classification neural network that is trained to separate the three general source classes (AGNs, PSRs, and FAKE sources). Results. We compared our localization algorithms qualitatively with traditional methods and find them to have similar detection thresholds. We also demonstrate the robustness of our source localization algorithms to modifications in the interstellar emission models, which presents a clear advantage over traditional methods. The classification network is able to discriminate between the three classes with a typical accuracy of ~70%, as long as balanced data sets are used in classification training. We have published our training data sets and analysis scripts online and invite the community to join the data challenge aimed at improving the localization and classification of gamma-ray point sources.
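The clustering-and-localization step (U-Net segmentation followed by k-means on candidate pixels) can be illustrated with a generic textbook k-means; this is a schematic with deliberately simplistic initialization, not the AutoSourceID code:

```python
# Generic k-means over 'candidate source pixels': the cluster centroids
# play the role of the localized sources. Initialization (first and last
# pixel) is chosen only to keep this toy example deterministic.

def kmeans(points, k, iters=20):
    centers = [points[0], points[-1]] if k == 2 else points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

# Two blobs of segmented pixels around (2, 2) and (8, 8).
pixels = [(2 + dx, 2 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
pixels += [(8 + dx, 8 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
sources = sorted(kmeans(pixels, k=2))
print(sources)  # [(2.0, 2.0), (8.0, 8.0)]
```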
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients when they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques that can naturally deal with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm based on U-shaped networks and enhanced with a Laplacian of Gaussian filter provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter that then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
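A minimal sketch of the Laplacian-of-Gaussian localization step described above: the filter response of a bright blob-like source is most negative at its centre, so the arg-extremum gives the location. The synthetic image and the 3x3 Laplacian stencil are illustrative assumptions (the Gaussian smoothing is implicit in the synthetic source):

```python
import math

# Synthetic 21x21 image with one Gaussian 'source' at (row=7, col=12).
N, cx, cy, sigma = 21, 12, 7, 2.0
img = [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        for x in range(N)] for y in range(N)]

def laplacian(im, y, x):
    """3x3 discrete Laplacian stencil at pixel (y, x)."""
    return (im[y - 1][x] + im[y + 1][x] + im[y][x - 1] + im[y][x + 1]
            - 4 * im[y][x])

# The LoG response of a bright blob is most negative at the blob centre.
resp = {(y, x): laplacian(img, y, x)
        for y in range(1, N - 1) for x in range(1, N - 1)}
peak = min(resp, key=resp.get)
print(peak)  # (7, 12): the injected source position
```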
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108–14pp.
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximate locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, predicts features more accurately than similar codes such as SourceExtractor, and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
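The two-step mean variance estimation split can be shown with closed-form sample statistics standing in for the two network fits; this is a schematic of the training split itself, not the ASID-FE network:

```python
# Step 1: fit only the mean of the target (here: the sample mean of a flux).
# Step 2: freeze the mean and fit the variance on the squared residuals.
# In ASID-FE both steps are neural-network fits; the split is what yields
# better-calibrated uncertainties than a joint single-step fit.

noisy_flux = [10.2, 9.7, 10.5, 9.9, 10.1, 9.6, 10.4, 10.0]

mu = sum(noisy_flux) / len(noisy_flux)                          # step 1
var = sum((f - mu) ** 2 for f in noisy_flux) / len(noisy_flux)  # step 2

print(round(mu, 2), round(var, 4))  # 10.05 0.0875
```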
Stoppa, F., Bhattacharyya, S., Ruiz de Austri, R., Vreeswijk, P., Caron, S., Zaharijas, G., et al. (2023). AutoSourceID-Classifier: Star-galaxy classification using a convolutional neural network with spatial information. Astron. Astrophys., 680, A109–16pp.
Abstract: Aims. Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification's reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. Methods. The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNN) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated, delivering precise and reliable results. Results. We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier and outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C's direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding in transient host identification and advancing deep-sky astronomy.
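Standard Platt scaling, the base of the modified calibration mentioned above, fits a logistic map from raw classifier scores to probabilities; the learning rate and iteration count below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def platt_fit(scores, labels, lr=0.5, iters=2000):
    """Fit p = sigmoid(a*s + b) by gradient descent on the log loss."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(iters):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            err = sigmoid(a * s + b) - y  # d(log loss)/d(logit)
            ga += err * s / n
            gb += err / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Toy star/galaxy scores with labels; calibrate scores to probabilities.
scores = [-2.0, -1.5, -0.5, 0.5, 1.5, 2.0]
labels = [0, 0, 0, 1, 1, 1]
a, b = platt_fit(scores, labels)
probs = [sigmoid(a * s + b) for s in scores]
print([round(p, 2) for p in probs])  # monotonically increasing in score
```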
Trotta, R., Johannesson, G., Moskalenko, I. V., Porter, T. A., Ruiz de Austri, R., & Strong, A. W. (2011). Constraints on Cosmic-Ray Propagation Models from a Global Bayesian Analysis. Astrophys. J., 729(2), 106–16pp.
Abstract: Research in many areas of modern physics such as, e.g., indirect searches for dark matter and particle acceleration in supernova remnant shocks relies heavily on studies of cosmic rays (CRs) and associated diffuse emissions (radio, microwave, X-rays, gamma-rays). While very detailed numerical models of CR propagation exist, a quantitative statistical analysis of such models has so far been hampered by the large computational effort that those models require. Although statistical analyses have been carried out before using semi-analytical models (where the computation is much faster), the evaluation of the results obtained from such models is difficult, as they necessarily suffer from many simplifying assumptions. The main objective of this paper is to present a working method for a full Bayesian parameter estimation for a numerical CR propagation model. For this study, we use the GALPROP code, the most advanced of its kind, which uses astrophysical information, and nuclear and particle data as inputs to self-consistently predict CRs, gamma-rays, synchrotron, and other observables. We demonstrate that a full Bayesian analysis is possible using nested sampling and Markov Chain Monte Carlo methods (implemented in the SuperBayeS code) despite the heavy computational demands of a numerical propagation code. The best-fit values of parameters found in this analysis are in agreement with previous, significantly simpler, studies also based on GALPROP.
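The kind of MCMC machinery behind such a scan can be sketched with a textbook Metropolis sampler on a one-dimensional toy posterior; the actual SuperBayeS/GALPROP pipeline is of course vastly more involved:

```python
import math
import random

def log_post(x, mu=3.0, sigma=0.5):
    """Unnormalized Gaussian log-posterior standing in for the real model."""
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis(n_steps, x0=0.0, step=0.5, seed=42):
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, post(proposal) / post(x)).
        if math.log(rng.random()) < log_post(proposal) - log_post(x):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis(20000)
posterior_mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
print(round(posterior_mean, 1))  # close to the true mean of 3.0
```

Each parameter-point evaluation here is a cheap function call; the paper's contribution is making the same machinery feasible when each evaluation is a full numerical propagation run.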