HAWC Collaboration (Alfaro, R., et al.), & Salesa Greus, F. (2022). Validation of standardized data formats and tools for ground-level particle-based gamma-ray observatories. Astron. Astrophys., 667, A36 (12 pp.).
Abstract: Context. Ground-based gamma-ray astronomy is still a rather young field of research, with strong historical connections to particle physics. This is why most observations are conducted by experiments with proprietary data and analysis software, as is usual in the particle physics field. In recent years, however, this paradigm has been slowly shifting toward the development and use of open-source data formats and tools, driven by upcoming observatories such as the Cherenkov Telescope Array (CTA). In this context, a community-driven, shared data format (the gamma-astro-data-format, or GADF) and analysis tools such as Gammapy and ctools have been developed. So far, these efforts have been led by the Imaging Atmospheric Cherenkov Telescope community, leaving out other types of ground-based gamma-ray instruments. Aims. We aim to show that the data from ground particle arrays, such as the High-Altitude Water Cherenkov (HAWC) observatory, are also compatible with the GADF and can thus be fully analyzed using the related tools, in this case, Gammapy. Methods. We reproduced several published HAWC results using Gammapy and data products compliant with the GADF standard. We also illustrate the capabilities of the shared format and tools by producing a joint fit of the Crab spectrum that includes data from six different gamma-ray experiments. Results. We find excellent agreement with the reference results, a powerful confirmation of both the published results and the tools involved. Conclusions. The data from particle detector arrays such as the HAWC observatory can be adapted to the GADF and thus analyzed with Gammapy. A common data format and shared analysis tools allow multi-instrument joint analyses and effective data sharing. To emphasize this, a sample of Crab Nebula event lists is made public with this paper. Because of the complementary nature of pointing and wide-field instruments, this synergy will be distinctly beneficial for the joint scientific exploitation of future observatories such as the Southern Wide-field Gamma-ray Observatory and CTA.
Strege, C., Bertone, G., Cerdeño, D. G., Fornasa, M., Ruiz de Austri, R., & Trotta, R. (2012). Updated global fits of the cMSSM including the latest LHC SUSY and Higgs searches and XENON100 data. J. Cosmol. Astropart. Phys., 03(3), 030 (22 pp.).
Abstract: We present new global fits of the constrained Minimal Supersymmetric Standard Model (cMSSM), including LHC 1/fb integrated luminosity SUSY exclusion limits, recent LHC 5/fb constraints on the mass of the Higgs boson, and XENON100 direct detection data. Our analysis fully takes into account the astrophysical and hadronic uncertainties that enter when translating direct detection limits into constraints on the cMSSM parameter space. We provide results for both a Bayesian and a Frequentist statistical analysis. We find that LHC 2011 constraints in combination with XENON100 data can rule out a significant portion of the cMSSM parameter space. Our results further emphasise the complementarity of collider experiments and direct detection searches in constraining extensions of Standard Model physics. The LHC 2011 exclusion limit strongly impacts low-mass regions of cMSSM parameter space, such as the stau co-annihilation region, while direct detection data can rule out regions of high SUSY masses, such as the Focus-Point region, which is unreachable for the LHC in the near future. We show that, in addition to XENON100 data, the experimental constraint on the anomalous magnetic moment of the muon plays a dominant role in disfavouring large scalar and gaugino masses. We find that, should the LHC 2011 excess hinting towards a Higgs boson at 126 GeV be confirmed, currently favoured regions of the cMSSM parameter space will be robustly ruled out from both a Bayesian and a profile likelihood statistical perspective.
Escudero, M., Hooper, D., & Witte, S. J. (2017). Updated collider and direct detection constraints on Dark Matter models for the Galactic Center gamma-ray excess. J. Cosmol. Astropart. Phys., 02(2), 038 (21 pp.).
Abstract: Utilizing an exhaustive set of simplified models, we revisit dark matter scenarios potentially capable of generating the observed Galactic Center gamma-ray excess, updating constraints from the LUX and PandaX-II experiments, as well as from the LHC and other colliders. We identify a variety of pseudoscalar mediated models that remain consistent with all constraints. In contrast, dark matter candidates which annihilate through a spin-1 mediator are ruled out by direct detection constraints unless the mass of the mediator is near an annihilation resonance, or the mediator has a purely vector coupling to the dark matter and a purely axial coupling to Standard Model fermions. All scenarios in which the dark matter annihilates through t-channel processes are now ruled out by a combination of the constraints from LUX/PandaX-II and the LHC.
Mangano, G., Miele, G., Pastor, S., Pisanti, O., & Sarikas, S. (2012). Updated BBN bounds on the cosmological lepton asymmetry for non-zero θ₁₃. Phys. Lett. B, 708(1-2), 1–5.
Abstract: We discuss the bounds on the cosmological lepton number from Big Bang Nucleosynthesis (BBN), in light of recent evidence for a large value of the neutrino mixing angle θ₁₃, sin²θ₁₃ ≳ 0.01 at 2σ. The largest asymmetries for electron and mu, tau neutrinos compatible with the He-4 and H-2 primordial yields are computed versus the neutrino mass hierarchy and mixing angles. The flavour oscillation dynamics is traced until the beginning of BBN, and the neutrino distributions after decoupling are computed numerically. The latter contain, in general, non-thermal distortions due to the onset of flavour oscillations driven by the solar squared mass difference in the temperature range where neutrino scatterings become inefficient at enforcing thermodynamical equilibrium. Depending on the value of θ₁₃, this translates into a larger value for the effective number of neutrinos, N_eff. Upper bounds on this parameter are discussed for both neutrino mass hierarchies. Values of N_eff large enough to be detectable by the Planck experiment are found only for the (presently disfavoured) range sin²θ₁₃ ≤ 0.01.
Bhattacharya, A., Esmaili, A., Palomares-Ruiz, S., & Sarcevic, I. (2019). Update on decaying and annihilating heavy dark matter with the 6-year IceCube HESE data. J. Cosmol. Astropart. Phys., 03(5), 051 (30 pp.).
Abstract: In view of the IceCube's 6-year high-energy starting events (HESE) sample, we revisit the possibility that the updated data may be better explained by a combination of neutrino fluxes from dark matter decay and an isotropic astrophysical power-law than purely by the latter. We find that the combined two-component flux qualitatively improves the fit to the observed data over a purely astrophysical one, and discuss how these updated fits compare against a similar analysis done with the 4-year HESE data. We also update fits involving dark matter decay via multiple channels, without any contribution from the astrophysical flux. We find that a DM-only explanation is not excluded by neutrino data alone. Finally, we also consider the possibility of a signal from dark matter annihilations and perform analogous analyses to the case of decays, commenting on its implications.
Borsato, M., et al., Zurita, J., Henry, L., Jashal, B. K., & Oyanguren, A. (2022). Unleashing the full power of LHCb to probe stealth new physics. Rep. Prog. Phys., 85(2), 024201 (45 pp.).
Abstract: In this paper, we describe the potential of the LHCb experiment to detect stealth physics. This refers to dynamics beyond the standard model that would elude searches that focus on energetic objects or precision measurements of known processes. Stealth signatures include long-lived particles and light resonances that are produced very rarely or together with overwhelming backgrounds. We will discuss why LHCb is equipped to discover this kind of physics at the Large Hadron Collider and provide examples of well-motivated theoretical models that can be probed with great detail at the experiment.
Ramirez-Uribe, S., Hernandez-Pinto, R. J., Rodrigo, G., Sborlini, G. F. R., & Torres Bobadilla, W. J. (2021). Universal opening of four-loop scattering amplitudes to trees. J. High Energy Phys., 04(4), 129 (22 pp.).
Abstract: The perturbative approach to quantum field theories has made it possible to obtain incredibly accurate theoretical predictions in high-energy physics. Although various techniques have been developed to boost the efficiency of these calculations, some ingredients remain especially challenging. This is the case for multiloop scattering amplitudes, which constitute a hard bottleneck. In this paper, we delve into the application of a disruptive technique based on the loop-tree duality theorem, which is aimed at an efficient computation of such objects by opening the loops to nondisjoint trees. We study the multiloop topologies that first appear at four loops and assemble them in a clever and general expression, the N⁴MLT universal topology. This general expression makes it possible to open any scattering amplitude of up to four loops, and also describes a subset of higher-order configurations to all orders. These results confirm the conjecture of a factorized opening in terms of simpler known subtopologies, which also determines how the causal structure of the entire loop amplitude is characterized by the causal structure of its subtopologies. In addition, we confirm that the loop-tree duality representation of the N⁴MLT universal topology is manifestly free of noncausal thresholds, thus pointing towards a remarkably more stable numerical implementation of multiloop scattering amplitudes.
Casas, F., Oteo, J. A., & Ros, J. (2012). Unitary transformations depending on a small parameter. Proc. R. Soc. A, 468(2139), 685–700.
Abstract: We formulate a unitary perturbation theory for quantum mechanics inspired by the Lie-Deprit formulation of canonical transformations. The original Hamiltonian is converted into a solvable one by a transformation obtained through a Magnus expansion. This ensures unitarity at every order in a small parameter. A comparison with the standard perturbation theory is provided. We work out the scheme up to order ten with some simple examples.
Di Bari, P., Ludl, P. O., & Palomares-Ruiz, S. (2016). Unifying leptogenesis, dark matter and high-energy neutrinos with right-handed neutrino mixing via Higgs portal. J. Cosmol. Astropart. Phys., 11(11), 044 (41 pp.).
Abstract: We revisit a model in which neutrino masses and mixing are described by a two right-handed (RH) neutrino seesaw scenario, implying a strictly hierarchical light neutrino spectrum. A third, decoupled RH neutrino, N_DM with mass M_DM, plays the role of cold dark matter (DM) and is produced by mixing with a source RH neutrino, N_S with mass M_S, induced by Higgs portal interactions. The same interactions are also responsible for N_DM decays. We discuss in detail the constraints coming from the DM abundance and stability conditions, showing that in the hierarchical case, for M_DM >> M_S, there is an allowed window of M_DM values that necessarily implies a contribution, from DM decays, to the high-energy neutrino flux recently detected by IceCube. We also show how the model can explain the matter-antimatter asymmetry of the Universe via leptogenesis in the quasi-degenerate limit. In this case, the DM mass should be within the range 300 GeV ≲ M_S < M_DM < 10 PeV. We discuss the specific properties of this high-energy neutrino flux and show the predicted event spectrum for two exemplary cases. Although DM decays, with a relatively hard spectrum, cannot account for all the IceCube high-energy data, we illustrate how this extra source of high-energy neutrinos could reasonably explain some potential features in the observed spectrum. In this way, this represents a unified scenario for leptogenesis and DM that could be tested in the coming years with more high-energy neutrino events.
Gelmini, G. B., Huh, J. H., & Witte, S. J. (2017). Unified halo-independent formalism from convex hulls for direct dark matter searches. J. Cosmol. Astropart. Phys., 12(12), 039 (33 pp.).
Abstract: Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u⃗), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N-1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (which is an integral of the speed distribution) with at most (N-1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u⃗), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min; t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min; t) function (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from the best-fit halo functions obtained without the assumption of isotropy.