|
Bellomo, N., Bellini, E., Hu, B., Jimenez, R., Peña-Garay, C., & Verde, L. (2017). Hiding neutrino mass in modified gravity cosmologies. J. Cosmol. Astropart. Phys., 02(2), 043–12pp.
Abstract: Cosmological observables depend on the neutrino mass, a dependence that is partially degenerate with the parameters of extended models of gravity. We explore this degeneracy in Horndeski generalized scalar-tensor theories of gravity. Using forecasted cosmic microwave background and galaxy power spectrum datasets, we find that a single parameter in the linear regime of the effective theory dominates the correlation with the total neutrino mass. For any given mass, a particular value of this parameter approximately cancels the power suppression due to the neutrino mass at a given redshift. The extent of this cancellation depends on the cosmological large-scale structure data used at different redshifts. We constrain the parameters and functions of the effective gravity theory and determine the influence of gravity on the determination of the neutrino mass from present and future surveys.
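A minimal sketch of the degeneracy the abstract describes, using the standard linear-theory rule of thumb ΔP/P ≈ -8 f_ν for the neutrino-induced power suppression (the paper's Horndeski parameter would offset a shift of this size); the fiducial Ω_m and h below are illustrative assumptions, not the paper's values:

```python
# Illustrative sketch (not the paper's Horndeski analysis): the standard
# linear-theory estimate of the small-scale power suppression from massive
# neutrinos, Delta P / P ~ -8 * f_nu, which a modified-gravity parameter
# could partially cancel.  Omega_nu h^2 = sum(m_nu) / 93.14 eV is the usual
# relation; omega_m and h are assumed fiducial values.
def power_suppression(sum_mnu_eV, omega_m=0.31, h=0.67):
    omega_nu = sum_mnu_eV / 93.14 / h**2   # neutrino density parameter
    f_nu = omega_nu / omega_m              # neutrino fraction of matter
    return -8.0 * f_nu                     # fractional suppression of P(k)

for mnu in (0.06, 0.10, 0.30):             # eV, from the minimal NH upward
    print(f"sum m_nu = {mnu:.2f} eV -> Delta P/P ~ {power_suppression(mnu):+.3f}")
```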
|
|
|
de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2017). Calculation of the local density of relic neutrinos. J. Cosmol. Astropart. Phys., 09(9), 034–24pp.
Abstract: Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV. We consider the gravitational clustering of relic neutrinos within the Milky Way, and use the N-one-body simulation technique to compute their density enhancement factor in the neighbourhood of the Earth with respect to the average cosmic density. Compared to previous similar studies, we push the simulation down to smaller neutrino masses and include an improved treatment of the baryonic and dark matter distributions in the Milky Way. Our results are important for future experiments aiming at detecting the cosmic neutrino background, such as the Princeton Tritium Observatory for Light, Early-universe, Massive-neutrino Yield (PTOLEMY) proposal. We calculate the impact of neutrino clustering in the Milky Way on the expected event rate for a PTOLEMY-like experiment. We find that the effect of clustering remains negligible for the minimal normal hierarchy scenario, while it enhances the event rate by 10 to 20% (resp. a factor 1.7 to 2.5) for the minimal inverted hierarchy scenario (resp. a degenerate scenario with 150 meV masses). Finally, we compute the impact on the event rate of a possible fourth sterile neutrino with a mass of 1.3 eV.
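As a rough illustration of how the quoted clustering factors propagate to a PTOLEMY-like event rate, the sketch below rescales a hypothetical unclustered rate by a local enhancement factor; GAMMA0_PER_YEAR and the specific factors are assumptions paraphrasing the abstract, not the paper's results:

```python
# Minimal sketch: the capture rate scales linearly with the local relic
# neutrino density, so clustering multiplies an unclustered baseline rate.
GAMMA0_PER_YEAR = 4.0        # hypothetical unclustered event rate (events/yr)

scenarios = {
    "minimal normal hierarchy":   1.0,    # clustering negligible
    "minimal inverted hierarchy": 1.15,   # 10-20% enhancement (abstract)
    "degenerate, 150 meV masses": 2.1,    # factor 1.7-2.5 (abstract)
}
for name, f_clustering in scenarios.items():
    print(f"{name}: ~{GAMMA0_PER_YEAR * f_clustering:.1f} events/yr")
```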
|
|
|
Adhikari, R., et al., Pastor, S., & Valle, J. W. F. (2017). A White Paper on keV sterile neutrino Dark Matter. J. Cosmol. Astropart. Phys., 01(1), 025–247pp.
Abstract: We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved – cosmology, astrophysics, nuclear, and particle physics – in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
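One concrete relation behind the X-ray constraints reviewed here is the radiative decay ν_s → ν + γ, which produces a line at E = m_s/2 with a width scaling as sin²(2θ)·m_s⁵. A hedged back-of-envelope sketch: the prefactor below is the commonly quoted one and should be checked against the review for precision work, and the 7.1 keV example mass is an illustrative assumption:

```python
# Radiative decay nu_s -> nu + gamma underlying keV sterile neutrino
# X-ray searches: line energy m_s/2, width ~ sin^2(2 theta) * m_s^5.
def decay_rate_per_s(m_s_keV, sin2_2theta):
    # commonly quoted prefactor; treat as an approximation
    return 1.38e-29 * (sin2_2theta / 1e-7) * m_s_keV**5

m_s = 7.1                      # keV: mass giving a ~3.5 keV candidate line
print(f"E_gamma = {m_s / 2:.2f} keV")
print(f"Gamma   = {decay_rate_per_s(m_s, 7e-11):.2e} 1/s")
```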
|
|
|
Barenboim, G., & Park, W. I. (2017). A full picture of large lepton number asymmetries of the Universe. J. Cosmol. Astropart. Phys., 04(4), 048–10pp.
Abstract: A large lepton number asymmetry of O(0.1-1) in the present Universe may be not only allowed but even required for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 30. Therefore, a mild entropy release causing an O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10⁻²-10²) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ₀ ≳ O(10¹⁴) GeV, respectively.
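The dilution logic in the abstract reduces to simple arithmetic: an entropy release by a factor δ = s_after/s_before suppresses a pre-existing asymmetry as L_today = L_high/δ. A sketch with illustrative numbers picked inside the quoted ranges, not the paper's fit:

```python
# Entropy release dilutes a comoving asymmetry by the entropy ratio delta.
L_high = 30.0                       # minimal high-scale asymmetry quoted above
for delta in (10.0, 30.0, 100.0):   # mild entropy release, O(10-100)
    print(f"dilution {delta:5.0f}: L_today ~ {L_high / delta:.2f}")
```

With δ in the quoted O(10-100) window, L_today lands in the O(0.1-1) range the abstract calls experimentally consistent.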
|
|
|
Double Chooz collaboration (Abrahao, T., et al.), & Novella, P. (2017). Cosmic-muon characterization and annual modulation measurement with Double Chooz detectors. J. Cosmol. Astropart. Phys., 02(2), 017–20pp.
Abstract: A study of cosmic muons has been performed for the two identical near and far neutrino detectors of the Double Chooz experiment, located at ~120 and ~300 m.w.e. underground respectively, including the corresponding simulations with the MUSIC simulation package. This characterization has allowed us to measure the muon flux reaching both detectors: (3.64 ± 0.04) × 10⁻⁴ cm⁻² s⁻¹ for the near detector and (7.00 ± 0.05) × 10⁻⁵ cm⁻² s⁻¹ for the far one. The seasonal modulation of the signal has also been studied, observing a positive correlation with the atmospheric temperature and leading to effective temperature coefficients of α_T = 0.212 ± 0.024 and 0.355 ± 0.019 for the near and far detectors, respectively. These measurements, in good agreement with expectations based on theoretical models, represent one of the first determinations of this coefficient in shallow-depth installations.
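The effective temperature coefficient relates the rate and temperature modulations as ΔR/⟨R⟩ = α_T · ΔT_eff/⟨T_eff⟩. The sketch below applies the measured coefficients to an assumed seasonal temperature swing; the 2% figure is an illustrative placeholder, not a Double Chooz number:

```python
# Standard seasonal-modulation relation for underground muon rates:
# Delta R / <R> = alpha_T * Delta T_eff / <T_eff>.
def rate_modulation(alpha_T, dT_over_T):
    return alpha_T * dT_over_T

dT_over_T = 0.02                       # assumed ~2% seasonal T_eff swing
for label, alpha in (("near", 0.212), ("far", 0.355)):
    print(f"{label}: Delta R/R ~ {rate_modulation(alpha, dT_over_T) * 100:.2f}%")
```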
|
|
|
Albiol, F., Corbi, A., & Albiol, A. (2017). Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings. Radiol. Phys. Technol., 10(1), 68–81.
Abstract: We explore three different alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical background differences for these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics consists of the calculation of the detector pixel density, which can also be included as part of the quality control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences achieve accurate focal lengths.
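For reference, the DLT step the abstract favors can be written in a few lines: stack two equations per 3D-2D correspondence and take the SVD null vector as the flattened projection matrix. This is a generic textbook sketch, not the authors' implementation, and it omits the coordinate normalization a production calibration would add:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """X: (n, 3) world points; x: (n, 2) image points, n >= 6; returns P (3, 4)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Q = np.array([Xw, Yw, Zw, 1.0])          # homogeneous world point
        rows.append(np.concatenate([Q, np.zeros(4), -u * Q]))
        rows.append(np.concatenate([np.zeros(4), Q, -v * Q]))
    A = np.vstack(rows)                          # (2n, 12) design matrix
    _, _, Vt = np.linalg.svd(A)                  # solve A p = 0 in least squares
    return Vt[-1].reshape(3, 4)                  # right null vector = flattened P
```

Intrinsics and extrinsics then follow from an RQ decomposition of the left 3x3 block of P.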
|
|
|
Albiol, A., Corbi, A., & Albiol, F. (2017). Automatic intensity windowing of mammographic images based on a perceptual metric. Med. Phys., 44(4), 1369–1378.
Abstract: Purpose: Initial auto-adjustment of the window level (WL) and window width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion, and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. Methods: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high-contrast, wide-dynamic-range 12-bit data and then maximizes the graphical information presented on ordinary 8-bit displays. Tests have been carried out with several mammogram databases, comprising correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available online. Results: Auto-leveled images show superior quality, both perceptually and objectively, compared to their full intensity range and to other common methods such as global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain microcalcifications and/or foreign X-ray-opaque elements, and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. Conclusions: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the outset, an optimal and customized windowing setting for each mammogram.
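A hedged sketch of the kind of objective GRAIL maximizes: histogram-based mutual information between the 12-bit source and its windowed 8-bit rendering, searched over a WL/WW grid. The paper's perceptual Gabor decomposition is not reproduced here, so the helpers below (`window`, `mutual_information`, `auto_window`) illustrate generic MI-driven windowing, not GRAIL itself:

```python
import numpy as np

def window(img12, wl, ww):
    """Map a 12-bit image to 8 bits with window level wl and width ww."""
    lo = wl - ww / 2.0
    out = np.clip((img12.astype(float) - lo) / max(ww, 1e-9), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

def mutual_information(a, b, bins=64):
    """Histogram-based MI between two images, in nats."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                          # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def auto_window(img12, wls, wws):
    """Brute-force WL/WW grid search maximizing MI; returns (mi, wl, ww)."""
    return max((mutual_information(img12, window(img12, wl, ww)), wl, ww)
               for wl in wls for ww in wws)
```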
|
|
|
Muñoz, E., Barrio, J., Etxebeste, A., Ortega, P. G., Lacasta, C., Oliver, J. F., et al. (2017). Performance evaluation of MACACO: a multilayer Compton camera. Phys. Med. Biol., 62(18), 7321–7341.
Abstract: Compton imaging devices have been proposed and studied for a wide range of applications. We have developed a Compton camera prototype which can be operated with two or three detector layers based on monolithic lanthanum bromide (LaBr3) crystals coupled to silicon photomultipliers (SiPMs), to be used for proton range verification in hadron therapy. In this work, we present the results obtained with our prototype in laboratory tests with radioactive sources and in simulation studies. Images of Na-22 and Y-88 radioactive sources have been successfully reconstructed. The full width at half maximum of the reconstructed images is below 4 mm for a Na-22 source at a distance of 5 cm.
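Compton imaging of the kind MACACO performs rests on one kinematic relation: the energy E1 deposited in the first layer and the total photon energy E0 fix the opening angle of the event cone on which the source must lie. A self-contained sketch; the example energies are illustrative:

```python
import math

ME_C2_KEV = 511.0   # electron rest energy, keV

def cone_angle_deg(E0_keV, E1_keV):
    """Compton cone half-angle: cos(theta) = 1 - m_e c^2 (1/(E0-E1) - 1/E0)."""
    cos_t = 1.0 - ME_C2_KEV * (1.0 / (E0_keV - E1_keV) - 1.0 / E0_keV)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# e.g. the 1274.5 keV Na-22 line with an assumed 300 keV first-layer deposit
print(f"{cone_angle_deg(1274.5, 300.0):.1f} deg")
```

Intersecting many such cones in 3D yields the reconstructed source image.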
|
|
|
Albaladejo, M., Daub, J. T., Hanhart, C., Kubis, B., & Moussallam, B. (2017). How to employ B̄⁰_d → J/ψ(πη, K̄K) decays to extract information on πη scattering. J. High Energy Phys., 04(4), 010–28pp.
Abstract: We demonstrate that dispersion theory allows one to deduce crucial information on πη scattering from the final-state interactions of the light mesons visible in the spectral distributions of the decays B̄⁰_d → J/ψ(π⁰η, K⁺K⁻, K⁰K̄⁰). High-quality measurements of these differential observables are therefore highly desirable. The corresponding rates are predicted to be of the same order of magnitude as those for B̄⁰_d → J/ψπ⁺π⁻ measured recently at LHCb, making the corresponding measurement appear feasible.
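The dispersive machinery invoked here typically enters through an Omnès representation of the πη and K̄K form factors. As a hedged sketch of the standard object (the paper's precise subtraction scheme may differ), the Omnès function built from a scattering phase shift δ(s) is

```latex
\Omega(s) = \exp\!\left[ \frac{s}{\pi} \int_{s_{\mathrm{th}}}^{\infty}
            \frac{\mathrm{d}s'}{s'}\,
            \frac{\delta(s')}{s' - s - i\epsilon} \right],
```

so measured spectral distributions of the light-meson pair constrain δ(s), and hence πη scattering, through this representation.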
|
|
|
Gomez-Cadenas, J. J., Benlloch-Rodriguez, J. M., & Ferrario, P. (2017). Monte Carlo study of the coincidence resolving time of a liquid xenon PET scanner, using Cherenkov radiation. J. Instrum., 12, P08023–13pp.
Abstract: In this paper we use detailed Monte Carlo simulations to demonstrate that liquid xenon (LXe) can be used to build a Cherenkov-based TOF-PET scanner with an intrinsic coincidence resolving time (CRT) in the vicinity of 10 ps. This extraordinary performance is due to three facts: (a) the abundant emission of Cherenkov photons by liquid xenon; (b) LXe is transparent to Cherenkov light; and (c) the fastest photons in LXe have wavelengths above 300 nm, making it possible to separate the detection of scintillation and Cherenkov light. The CRT in a Cherenkov LXe TOF-PET detector is therefore dominated by the resolution (time jitter) introduced by the photosensors and the electronics. However, we show that for sufficiently fast photosensors (e.g., an overall 40 ps jitter, which can be achieved by current micro-channel plate photomultipliers) the overall CRT varies between 30 and 55 ps, depending on the detection efficiency. This is still one order of magnitude better than the CRT of commercial devices and improves by a factor of 3 the best CRT obtained with small laboratory prototypes.
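The abstract's timing budget can be checked with one line of quadrature arithmetic: independent Gaussian timing contributions add as the root sum of squares. A hedged sketch, where the quadrature model is an assumption for illustration and the numbers follow the abstract:

```python
import math

def total_crt_ps(intrinsic_ps, overall_jitter_ps):
    # independent Gaussian contributions add in quadrature
    return math.hypot(intrinsic_ps, overall_jitter_ps)

# ~10 ps intrinsic CRT plus an overall 40 ps sensor/electronics jitter
print(f"{total_crt_ps(10.0, 40.0):.0f} ps")   # ~41 ps, inside the quoted 30-55 ps
```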
|
|