Panes, B., Eckner, C., Hendriks, L., Caron, S., Dijkstra, K., Johannesson, G., et al. (2021). Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge. Astron. Astrophys., 656, A62–18pp.
Abstract: Context. At GeV energies, the sky is dominated by the interstellar emission from the Galaxy. With limited statistics and spatial resolution, accurately separating point sources is therefore challenging. Aims. Here we present the first application of deep-learning-based algorithms to automatically detect and classify point sources from gamma-ray data. For concreteness we refer to this approach as AutoSourceID. Methods. To detect point sources, we utilized U-shaped convolutional networks for image segmentation and k-means for source clustering and localization. We also explored the Centroid-Net algorithm, which is designed to find and count objects. Using two algorithms allows for a cross-check of the results, while a combination of their results can be used to improve performance. The training data are based on 9.5 years of exposure from the Fermi Large Area Telescope (Fermi-LAT); we used source properties of active galactic nuclei (AGNs) and pulsars (PSRs) from the fourth Fermi-LAT source catalog, in addition to several models of background interstellar emission. The results of the localization algorithm are fed into a classification neural network that is trained to separate the three general source classes (AGNs, PSRs, and FAKE sources). Results. We compared our localization algorithms qualitatively with traditional methods and find them to have similar detection thresholds. We also demonstrate the robustness of our source localization algorithms to modifications in the interstellar emission models, which presents a clear advantage over traditional methods. The classification network is able to discriminate between the three classes with a typical accuracy of ~70%, as long as balanced data sets are used in classification training. We have published our training data sets and analysis scripts online and invite the community to join the data challenge aimed at improving the localization and classification of gamma-ray point sources.
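The localization step described above (segmenting an image with a U-Net, then clustering the activated pixels with k-means) can be illustrated with a minimal sketch. This is not the published AutoSourceID pipeline: the segmentation map below is a hypothetical stand-in for a U-Net output, and a hand-rolled k-means replaces a library implementation.

```python
import numpy as np

def localize_sources(seg_map, n_sources, threshold=0.5, n_iter=20, seed=0):
    """Cluster the above-threshold pixels of a segmentation map with k-means
    and return the cluster centroids as candidate source positions."""
    ys, xs = np.nonzero(seg_map > threshold)
    pts = np.column_stack([ys, xs]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), n_sources, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest centroid
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned pixels,
        # keeping the old centroid if a cluster ends up empty
        centers = np.array([pts[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(n_sources)])
    return centers
```

On a map with well-separated blobs the centroids converge to the blob centers; in practice the number of clusters itself has to be estimated from the segmentation output.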
|
Pallis, C., & Shafi, Q. (2015). Gravity waves from non-minimal quadratic inflation. J. Cosmol. Astropart. Phys., 03(3), 023–31pp.
Abstract: We discuss non-minimal quadratic inflation in supersymmetric (SUSY) and non-SUSY models which entails a linear coupling of the inflaton to gravity. Imposing a lower bound on the parameter c_R, involved in the coupling between the inflaton and the Ricci scalar curvature, inflation can be attained even for subplanckian values of the inflaton while the corresponding effective theory respects perturbative unitarity up to the Planck scale. Working in the non-SUSY context we also consider radiative corrections to the inflationary potential due to a possible coupling of the inflaton to bosons or fermions. We find ranges of the parameters, depending mildly on the renormalization scale, with adjustable values of the spectral index n_s, a tensor-to-scalar ratio r ≃ (2–4) × 10^-3, and an inflaton mass close to 3 × 10^13 GeV. In the SUSY framework we employ two gauge-singlet chiral superfields and a logarithmic Kähler potential including all the allowed terms up to fourth order in powers of the various fields, and determine the superpotential uniquely by applying a continuous R symmetry and a global U(1) symmetry. When the Kähler manifold exhibits a no-scale-type symmetry, the model predicts n_s ≃ 0.963 and r ≃ 0.004. Beyond no-scale SUGRA, n_s and r depend crucially on the coefficient of the fourth-order term, which mixes the inflaton with the accompanying non-inflaton field in the Kähler potential, and on its prefactor. Slightly increasing the latter above -3 efficiently enhances the resulting r, putting it in the observable range. The inflaton mass in the last case is confined to the range (5–9) × 10^13 GeV.
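For reference, the observables n_s and r quoted in these abstracts are the standard slow-roll quantities. With M_P the reduced Planck mass and V the inflationary potential, the textbook definitions (not specific to this paper) read:

```latex
\epsilon \simeq \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^{2}, \qquad
\eta \simeq M_P^{2}\,\frac{V''}{V}, \qquad
n_s \simeq 1 - 6\epsilon + 2\eta, \qquad
r \simeq 16\,\epsilon .
```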
|
Pallis, C. (2014). Linking Starobinsky-type inflation in no-scale supergravity to MSSM. J. Cosmol. Astropart. Phys., 04(4), 024–31pp.
Abstract: A novel realization of the Starobinsky inflationary model within a moderate extension of the Minimal Supersymmetric Standard Model (MSSM) is presented. The proposed superpotential is uniquely determined by applying a continuous R symmetry and a discrete Z2 symmetry, whereas the Kähler potential is associated with a no-scale-type SU(54, 1)/SU(54) × U(1) R × Z2 Kähler manifold. The inflaton is identified with a Higgs-like modulus whose vacuum expectation value controls the gravitational strength. Thanks to a strong enough coupling (with a parameter c_T involved) between the inflaton and the Ricci scalar curvature, inflation can be attained even for subplanckian values of the inflaton, with c_T ≥ 76 and the corresponding effective theory valid up to the Planck scale. The inflationary observables turn out to be in agreement with the current data, and the inflaton mass is predicted to be 3 × 10^13 GeV. At the cost of a relatively small superpotential coupling constant, the model also offers a resolution of the μ problem of MSSM for c_T ≤ 4500 and a gravitino heavier than about 10^4 GeV. Supplementing the MSSM by three right-handed neutrinos, we show that spontaneously arising couplings between the inflaton and the particle content of the MSSM not only ensure a sufficiently low reheating temperature but also support a scenario of non-thermal leptogenesis consistent with the neutrino oscillation parameters.
|
Oset, E., Chen, H. X., Feijoo, A., Geng, L. S., Liang, W. H., Li, D. M., et al. (2016). Study of reactions disclosing hidden charm pentaquarks with or without strangeness. Nucl. Phys. A, 954, 371–392.
Abstract: We present results for five reactions, Λb → J/ψ K⁻p, Λb → J/ψ η Λ, Λb → J/ψ π⁻p, Λb → J/ψ K̄⁰Λ and Ξ⁻b → J/ψ K⁻Λ, where, combining information on the meson-baryon interaction from the chiral unitary approach with predictions made for molecular states of hidden charm, with or without strangeness, we evaluate invariant mass distributions for the light meson-baryon states and for those of J/ψ p or J/ψ Λ. We show that, with the presently available information, in all of these reactions one finds peaks where the pentaquark states show up. For the Λb → J/ψ K⁻p and Λb → J/ψ π⁻p reactions we show that the results obtained from our study are compatible with present experimental observations.
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The amplitude of the analog signal is then obtained using digital filters, which provides information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which leads to signal pile-up. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal-reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pile-up. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage, and performance of the proposed neural network algorithm on an FPGA are also presented.
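The FIR baseline the paper compares against can be sketched as a fixed weighted sum of the digitized samples. The pulse shape and numbers below are toy assumptions, not the actual TileCal pulse or optimal-filter coefficients; the point is only that a fixed FIR estimate is exact for an isolated in-time pulse and biased once an out-of-time pile-up pulse overlaps the samples.

```python
import numpy as np

def pulse(t, t0):
    """Toy unit-amplitude pulse shape (a stand-in for the real detector pulse)."""
    x = t - t0
    return np.where(x > 0, (x / 20.0) * np.exp(1.0 - x / 20.0), 0.0)

t_samples = np.arange(7) * 25.0        # seven ADC samples, 25 ns apart
g = pulse(t_samples, t0=30.0)          # nominal shape at the sampling times
w = g / (g @ g)                        # least-squares FIR weights: w @ g == 1

def fir_amplitude(samples):
    """Amplitude estimate as a fixed weighted sum of the samples."""
    return float(w @ samples)

clean = 500.0 * pulse(t_samples, t0=30.0)              # isolated in-time pulse
piled = clean + 200.0 * pulse(t_samples, t0=80.0)      # plus out-of-time pile-up
```

Here fir_amplitude(clean) returns 500 exactly by construction, while fir_amplitude(piled) overshoots; that pile-up bias is the degradation a learned estimator is meant to mitigate.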
|
Ortega, P. G., Torres-Espallardo, I., Cerutti, F., Ferrari, A., Gillam, J. E., Lacasta, C., et al. (2015). Noise evaluation of Compton camera imaging for proton therapy. Phys. Med. Biol., 60(5), 1845–1863.
Abstract: Compton cameras have emerged as an alternative for real-time dose monitoring in particle therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed; however, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, i.e., events which gather measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is then estimated from the reconstructed image obtained with two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and the random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, as it affects the rate of particles entering the detector.
Keywords: proton therapy; Compton camera; Monte Carlo methods; FLUKA; prompt gamma; range verification; MLEM
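The Maximum Likelihood Expectation Maximization update behind the reconstruction fits in a few lines. This is a generic MLEM sketch for a linear emission model y ≈ A x, not the spectral two- and three-event reconstruction of the paper; A is a hypothetical system matrix.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM iterations for a non-negative emission image x with data y ≈ A @ x.
    A[i, j] ~ probability that an emission in voxel j is recorded in bin i."""
    x = np.ones(A.shape[1])              # flat, strictly positive start image
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = y / np.maximum(proj, 1e-12)        # data / model
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative update preserves non-negativity automatically, which is why MLEM is a natural choice when the unknown image is an emission density.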
|
Olmo, G. J., Rubiera-Garcia, D., & Wojnar, A. (2020). Stellar structure models in modified theories of gravity: Lessons and challenges. Phys. Rep., 876, 1–75.
Abstract: The understanding of stellar structure represents the crossroads of our theories of the nuclear force and the gravitational interaction under the most extreme conditions observably accessible. It provides a powerful probe of the strong-field regime of General Relativity, and opens fruitful avenues for the exploration of new gravitational physics. The latter can be captured via modified theories of gravity, which modify the Einstein-Hilbert action of General Relativity and/or some of its principles. These theories typically change the Tolman-Oppenheimer-Volkoff equations of stellar hydrostatic equilibrium, thus having a large impact on the astrophysical properties of the corresponding stars and opening a new window to constrain these theories with present and future observations of different types of stars. For relativistic stars, such as neutron stars, the uncertainty in the equation of state of matter at supranuclear densities intertwines with the new parameters coming from the modified gravity side, providing a whole new phenomenology for the typical predictions of stellar structure models, such as mass-radius relations, maximum masses, or moments of inertia. For non-relativistic stars, such as white, brown, and red dwarfs, the weakening/strengthening of the gravitational force inside astrophysical bodies via the modified Newtonian (Poisson) equation may induce changes in the star's mass, radius, central density, or luminosity, having an impact, for instance, on the Chandrasekhar limit for white dwarfs, or on the minimum mass for stable hydrogen burning in high-mass brown dwarfs. This work aims to provide a broad overview of the main results achieved in the recent literature for many such modified theories of gravity, combining the results and constraints obtained from the analysis of relativistic and non-relativistic stars in different scenarios.
Moreover, we build a bridge between the efforts of the communities working on different theories, formulations, types of stars, theoretical modelings, and observational aspects, highlighting some of the most promising opportunities in the field.
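The Tolman-Oppenheimer-Volkoff equations mentioned above can be integrated numerically in a few lines. The sketch below is plain GR in geometrized units (G = c = 1) with a toy polytropic equation of state and hypothetical parameters; a modified theory of gravity would replace the dP/dr line with its own structure equation.

```python
import numpy as np

K, Gamma = 100.0, 2.0                       # toy polytrope: P = K * rho**Gamma

def rho_of_P(P):
    """Invert the polytropic equation of state."""
    return (max(P, 0.0) / K) ** (1.0 / Gamma)

def tov_mass_radius(P_c, dr=1e-3, r_max=50.0):
    """Euler-integrate the GR TOV equations outward from the center
    until the pressure vanishes; return the stellar radius and mass."""
    r, m, P = dr, 0.0, P_c
    while P > 1e-12 * P_c and r < r_max:
        rho = rho_of_P(P)
        # GR hydrostatic equilibrium; modified gravity alters this line
        dP = -(rho + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        m += 4.0 * np.pi * r**2 * rho * dr      # enclosed mass
        P += dP * dr
        r += dr
    return r, m
```

For a central pressure P_c = 1e-4 this yields a star whose compactness 2M/R lies well below the black-hole bound; repeating the integration with a modified dP/dr is precisely what shifts the mass-radius curves discussed in the review.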
|
Olmo, G. J., Rubiera-Garcia, D., & Sanchez-Puente, A. (2018). Accelerated observers and the notion of singular spacetime. Class. Quantum Gravity, 35(5), 055010–18pp.
Abstract: Geodesic completeness is typically regarded as a basic criterion to determine whether a given spacetime is regular or singular. However, the principle of general covariance does not privilege any family of observers over the others and, therefore, observers with arbitrary motions should be able to provide a complete physical description of the world. This suggests that in a regular spacetime, all physically acceptable observers should have complete paths. In this work we explore this idea by studying the motion of accelerated observers in spherically symmetric spacetimes and illustrate it by considering two geodesically complete black hole spacetimes recently described in the literature. We show that for bound and locally unbound accelerations, the paths of accelerated test particles are complete, providing further support to the regularity of such spacetimes.
|
Olmo, G. J., & Rubiera-Garcia, D. (2012). Nonsingular charged black holes à la Palatini. Int. J. Mod. Phys. D, 21(8), 1250067–6pp.
Abstract: We argue that the quantum nature of matter and gravity should lead to a discretization of the allowed states of the matter confined in the interior of black holes. To support and illustrate this idea, we consider a quadratic extension of general relativity (GR) formulated à la Palatini and show that nonrotating, electrically charged black holes develop a compact core at the Planck density which is nonsingular if the mass spectrum satisfies a certain discreteness condition. We also find that the area of the core is proportional to the number of charges times the Planck area.
|
Olmo, G. J., & Rubiera-Garcia, D. (2015). Brane-world and loop cosmology from a gravity-matter coupling perspective. Phys. Lett. B, 740, 73–79.
Abstract: We show that the effective brane-world and loop quantum cosmology background expansion histories can be reproduced from a modified gravity perspective in terms of an f(R) gravity action plus a g(R) term non-minimally coupled to the matter Lagrangian. The reconstruction algorithm that we provide depends on a free function of the matter density that must be specified in each case, and always allows analytical solutions to be obtained. In the simplest cases, the function f(R) is quadratic in the Ricci scalar, R, whereas g(R) is linear. Our approach is compared with recent results in the literature. We show that, working in the Palatini formalism, there is no need to impose any constraint to keep the equations second order, which is a key requirement for the successful implementation of the reconstruction algorithm.
|