Doncel, M., Cederwall, B., Martin, S., Quintana, B., Gadea, A., Farnea, E., et al. (2015). Conceptual design of a high resolution Ge array with tracking and imaging capabilities for the DESPEC (FAIR) experiment. J. Instrum., 10, P06010–15pp.
Abstract: We present results of Monte Carlo simulations for the conceptual design of the high-resolution DESPEC Germanium Array Spectrometer (DEGAS) proposed for the Facility for Antiproton and Ion Research (FAIR) under construction at Darmstadt, Germany. The project is carried out in three phases, although only results for the first two phases are addressed in this work. The first phase will consist of a re-arrangement of the EUROBALL cluster detectors previously used in the RISING campaign at GSI. The second phase is based on coupling AGATA-type triple-cluster detectors with EUROBALL cluster detectors in a compact geometry around the active ion implantation target of DESPEC.
|
Solevi, P., Magrin, G., Moro, D., & Mayer, R. (2015). Monte Carlo study of microdosimetric diamond detectors. Phys. Med. Biol., 60(18), 7069–7083.
Abstract: Ion-beam therapy provides high dose conformity and increased radiobiological effectiveness with respect to conventional radiation therapy. Strict constraints on the maximum uncertainty on the biologically weighted dose, and consequently on the biological weighting factor, require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However, the experimental determination of radiation quality, in particular for an internal target, is not simple, and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently, chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However, the sensitive volume of solid-state detectors deviates significantly from that of conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a careful study of the secondary-to-primary radiation contributions, of the wall effects, and of the impact of the orientation of the detector with respect to the radiation field. The present work studies, through Monte Carlo simulations, the impact of the detector geometry on the determination of radiation-quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y is investigated for different detector cross sections, varying the particle type (carbon ions and protons) and its energy.
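The abstract's point about flat detector geometry can be made concrete with the standard microdosimetric definitions: the lineal energy is y = ε / l̄, where ε is the energy imparted in a single event and l̄ is the mean chord length of the sensitive volume, which for any convex body under isotropic irradiation is l̄ = 4V/S (Cauchy's formula). The sketch below, a minimal illustration not taken from the paper, applies this to a cylindrical volume and shows why a diamond detector with diameter ~1000 times its height behaves like a thin slab with l̄ ≈ 2 × height:

```python
import math

def mean_chord_length(diameter_um: float, height_um: float) -> float:
    """Mean chord length (in um) of a convex cylinder under isotropic
    uniform irradiation, via Cauchy's formula l_bar = 4 * V / S."""
    volume = math.pi * diameter_um**2 * height_um / 4.0
    surface = math.pi * diameter_um * height_um + math.pi * diameter_um**2 / 2.0
    return 4.0 * volume / surface

def lineal_energy(energy_imparted_keV: float,
                  diameter_um: float, height_um: float) -> float:
    """Lineal energy y = epsilon / l_bar, in keV/um (ICRU definition)."""
    return energy_imparted_keV / mean_chord_length(diameter_um, height_um)

# A "conventional" microdosimeter shape (diameter ~ height) vs. a flat
# diamond-like geometry (diameter = 1000 x height):
round_chord = mean_chord_length(2.0, 2.0)      # equal diameter and height
flat_chord = mean_chord_length(1000.0, 1.0)    # slab-like: l_bar -> 2 * height
```

For the flat geometry the formula reduces to l̄ = 2dh/(2h + d) ≈ 2h when d ≫ h, which is why the paper argues the notion of "sensitive thickness" must be redefined for such detectors.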
|
Beltran Jimenez, J., Heisenberg, L., & Olmo, G. J. (2015). Tensor perturbations in a general class of Palatini theories. J. Cosmol. Astropart. Phys., 06(6), 026–16pp.
Abstract: We study a general class of gravitational theories formulated in the Palatini approach and derive the equations governing the evolution of tensor perturbations. In the absence of torsion, the connection can be solved as the Christoffel symbols of an auxiliary metric which is non-trivially related to the spacetime metric. We then consider background solutions corresponding to a perfect fluid and show that the tensor perturbation equations (including anisotropic stresses) for the auxiliary metric around such a background take an Einstein-like form. This facilitates the study in a homogeneous and isotropic cosmological scenario, where we explicitly establish the relation between the auxiliary metric and the spacetime metric tensor perturbations. As a general result, we show that both tensor perturbations coincide in the absence of anisotropic stresses.
|
Moline, A., Ibarra, A., & Palomares-Ruiz, S. (2015). Future sensitivity of neutrino telescopes to dark matter annihilations from the cosmic diffuse neutrino signal. J. Cosmol. Astropart. Phys., 06(6), 005–34pp.
Abstract: Cosmological observations and cold dark matter N-body simulations indicate that our Universe is populated by numerous halos, where dark matter particles annihilate, potentially producing Standard Model particles. In this paper we calculate the contribution to the diffuse neutrino background from dark matter annihilations in halos at all redshifts, and we estimate the future sensitivity to the annihilation cross section of neutrino telescopes such as IceCube or ANTARES. We consider various parametrizations for the internal halo properties and for the halo mass function in order to bracket the theoretical uncertainty in the limits arising from the modeling of the cosmological annihilation flux. We find that observations of the cosmic diffuse neutrino flux at large angular distances from the galactic center lead to constraints on the dark matter annihilation cross section which are complementary to (and, for some extrapolations of the astrophysical parameters, better than) those stemming from observations of the Milky Way halo, especially for neutrino telescopes not pointing directly at the Milky Way center, as is the case for IceCube.
|
ATLAS Collaboration (Aad, G., et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z -> ττ processes in ATLAS with τ-embedded Z -> μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z -> ττ decays. In Z -> μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z -> ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons, as well as the detector response to the τ decay products, are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z -> ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H -> ττ analysis of the full dataset recorded during 2011 and 2012.
|
Achterberg, A., Amoroso, S., Caron, S., Hendriks, L., Ruiz de Austri, R., & Weniger, C. (2015). A description of the Galactic Center excess in the Minimal Supersymmetric Standard Model. J. Cosmol. Astropart. Phys., 08(8), 006–27pp.
Abstract: Observations with the Fermi Large Area Telescope (LAT) indicate an excess in gamma rays originating from the center of our Galaxy. A possible explanation for this excess is the annihilation of Dark Matter particles. We have investigated the annihilation of neutralinos as Dark Matter candidates within the phenomenological Minimal Supersymmetric Standard Model (pMSSM). An iterative particle filter approach was used to search for solutions within the pMSSM. We found solutions that are consistent with astroparticle physics and collider experiments, and provide a fit to the energy spectrum of the excess. The neutralino is a Bino/Higgsino or Bino/Wino/Higgsino mixture with a mass in the range 84-92 GeV or 87-97 GeV, annihilating into W bosons. A third solution is found for a neutralino of mass 174-187 GeV annihilating into top quarks. The best solutions yield a Dark Matter relic density 0.06 < Ωh² < 0.13. These pMSSM solutions make clear forecasts for the LHC and for direct and indirect DM detection experiments. If the pMSSM explanation of the excess seen by Fermi-LAT is correct, a DM signal might be discovered soon.
|
Aoki, M., Toma, T., & Vicente, A. (2015). Non-thermal production of minimal dark matter via right-handed neutrino decay. J. Cosmol. Astropart. Phys., 09(9), 063–19pp.
Abstract: Minimal Dark Matter (MDM) stands as one of the simplest dark matter scenarios. In MDM models, annihilation and co-annihilation processes among the members of the MDM multiplet are usually very efficient, pushing the dark matter mass above O(10) TeV in order to reproduce the observed dark matter relic density. Motivated by this drawback, in this paper we consider an extension of the MDM scenario by three right-handed neutrinos. Two specific choices for the MDM multiplet are studied: a fermionic SU(2)_L quintuplet and a scalar SU(2)_L septuplet. The lightest right-handed neutrino, with tiny Yukawa couplings, never reaches thermal equilibrium in the early universe and is produced by freeze-in. This creates a link between dark matter and neutrino physics: dark matter can be non-thermally produced by the decay of the lightest right-handed neutrino after freeze-out, allowing the dark matter mass to be lowered significantly. We discuss the phenomenology of the non-thermally produced MDM and, taking into account significant Sommerfeld corrections, we find that the dark matter mass must take specific values in order not to conflict with the current bounds from gamma-ray observations.
|
Escudero, M., Mena, O., Vincent, A. C., Wilkinson, R. J., & Boehm, C. (2015). Exploring dark matter microphysics with galaxy surveys. J. Cosmol. Astropart. Phys., 09(9), 034–16pp.
Abstract: We use present cosmological observations and forecasts of future experiments to illustrate the power of large-scale structure (LSS) surveys in probing dark matter (DM) microphysics and unveiling potential deviations from the standard ΛCDM scenario. To quantify this statement, we focus on an extension of ΛCDM with DM-neutrino scattering, which leaves a distinctive imprint on the angular and matter power spectra. After finding that future CMB experiments (such as COrE+) will not significantly improve the constraints set by the Planck satellite, we show that the next generation of galaxy clustering surveys (such as DESI) could play a leading role in constraining alternative cosmologies and even have the potential to make a discovery. Typically, we find that DESI would be an order of magnitude more sensitive to DM interactions than Planck, thus probing effects that until now have only been accessible via N-body simulations.
|
Davesne, D., Meyer, J., Pastore, A., & Navarro, J. (2015). Partial wave decomposition of the N3LO equation of state. Phys. Scr., 90(11), 114002–6pp.
Abstract: By means of a partial wave decomposition, we separate the individual channel contributions to the equation of state (EoS) of symmetric nuclear matter for the N3LO pseudo-potential. In particular, we show that although neither the tensor nor the spin-orbit term contributes to the EoS, each gives a non-vanishing contribution to the separate (JLS) channels.
|
LHCb Collaboration (Aaij, R., et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2015). B flavour tagging using charm decays at the LHCb experiment. J. Instrum., 10, P10005–16pp.
Abstract: An algorithm is described for tagging the flavour content at production of neutral B mesons in the LHCb experiment. The algorithm exploits the correlation of the flavour of a B meson with the charge of a reconstructed secondary charm hadron from the decay of the other b hadron produced in the proton-proton collision. Charm hadron candidates are identified in a number of fully or partially reconstructed Cabibbo-favoured decay modes. The algorithm is calibrated on the self-tagged decay modes B+ -> J/psi K+ and B0 -> J/psi K*0 using 3.0 fb⁻¹ of data collected by the LHCb experiment at pp centre-of-mass energies of 7 TeV and 8 TeV. Its tagging power on these samples of B -> J/psi X decays is (0.30 +/- 0.01 +/- 0.01)%.
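The "tagging power" quoted in the abstract is the standard figure of merit for flavour-tagging algorithms: P = ε_tag (1 − 2ω)², where ε_tag is the fraction of events with a tag decision and ω is the mistag probability. It measures the effective statistical reduction of the tagged sample. A minimal sketch of this textbook formula (the example numbers are illustrative, not from the paper):

```python
def tagging_power(eff_tag: float, mistag: float) -> float:
    """Effective tagging power P = eps_tag * (1 - 2*omega)^2.

    eff_tag : fraction of events for which the tagger gives a decision
    mistag  : probability omega that the tag decision is wrong
    The factor D = 1 - 2*omega is the dilution of the measured asymmetry;
    at omega = 0.5 the tag carries no information and P vanishes.
    """
    dilution = 1.0 - 2.0 * mistag
    return eff_tag * dilution**2

# Illustrative: a tagger firing on 10% of events with a 40% mistag rate
# retains the statistical power of only 0.4% perfectly tagged events.
p = tagging_power(0.10, 0.40)
```

A tagging power of 0.3%, as quoted for this charm tagger, means the tagged sample constrains a CP asymmetry as well as a perfectly tagged sample 0.3% of its size, which is why even small gains in P are valuable.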
|