|
Villaescusa-Navarro, F., Vogelsberger, M., Viel, M., & Loeb, A. (2013). Neutrino signatures on the high-transmission regions of the Lyman alpha forest. Mon. Not. Roy. Astron. Soc., 431(4), 3670–3677.
Abstract: We quantify the impact of massive neutrinos on the statistics of low-density regions in the intergalactic medium as probed by the Lyman alpha forest at redshifts z = 2.2-4. Based on mock but realistic quasar (QSO) spectra extracted from hydrodynamic simulations with cold dark matter, baryons and neutrinos, we find that the probability distribution of weak Lyman alpha absorption features, as sampled by Lyman alpha flux regions at high transmissivity, is strongly affected by the presence of massive neutrinos. We show that systematic errors affecting the Lyman alpha forest reduce but do not erase the neutrino signal. Using the Fisher matrix formalism, we conclude that the sum of the neutrino masses can be measured, using the method proposed in this paper, with a precision better than 0.4 eV using a catalogue of 200 high-resolution (signal-to-noise ratio similar to 100) QSO spectra. This number reduces to 0.27 eV by making use of reasonable priors on the other parameters that also affect the statistics of the high-transmissivity regions of the Lyman alpha forest. The constraints obtained with this method can be combined with independent bounds from the cosmic microwave background, large-scale structure and measurements of the matter power spectrum from the Lyman alpha forest to produce tighter upper limits on the sum of the masses of the neutrinos.
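The Fisher-matrix forecast mentioned above can be illustrated with a toy sketch. All numbers, parameters and derivatives below are illustrative assumptions, not the paper's actual data; the point is only the mechanics: the marginalized 1-sigma error on parameter i is sqrt((F^-1)_ii), and a Gaussian prior on another parameter is added as 1/sigma_prior^2 on the corresponding diagonal element of F.

```python
import numpy as np

# Toy Fisher forecast: F_ij = sum_k (dmu_k/dp_i)(dmu_k/dp_j) / sigma_k^2
# Hypothetical parameter vector: [sum of neutrino masses, sigma_8]
dmu = np.array([[0.8, 1.5],   # derivatives of 3 observables w.r.t. params
                [0.3, 2.0],
                [1.1, 0.4]])
sigma = np.array([0.5, 0.7, 0.6])          # observable uncertainties

F = (dmu / sigma[:, None]**2).T @ dmu      # Fisher matrix
err_marg = np.sqrt(np.linalg.inv(F)[0, 0])  # marginalized error, param 0

# A prior on the second parameter tightens the first parameter's error:
F_prior = F.copy()
F_prior[1, 1] += 1.0 / 0.05**2             # Gaussian prior, sigma = 0.05
err_with_prior = np.sqrt(np.linalg.inv(F_prior)[0, 0])
```

This reproduces the qualitative behaviour reported in the abstract: adding priors on the nuisance parameters shrinks the marginalized error on the neutrino-mass sum.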
|
|
|
Egea, F. J., Gadea, A., Barrientos, D., Huyuk, T., et al. (2013). Design and Test of a High-Speed Flash ADC Mezzanine Card for High-Resolution and Timing Performance in Nuclear Structure Experiments. IEEE Trans. Nucl. Sci., 60(5), 3526–3531.
Abstract: This work describes new electronics for the EXOGAM2 (HPGe detector array) and NEDA (BC501A-based neutron detector array). A new digitizing card with high resolution has been designed for gamma-ray and neutron spectroscopy experiments. The higher bandwidth requirement of the NEDA signals, together with the necessity for accuracy, requires a high sampling rate in order to preserve the pulse shape for real-time Pulse Shape Analysis (PSA). The PSA is of paramount importance for NEDA in order to discriminate between neutron and gamma-ray signals. High resolution and high sampling speed are often difficult to achieve in a single electronic unit. These constraints, together with the need to build new digitizing electronics to improve the performance and flexibility of signal analysis in nuclear physics experiments, led to the development of a new FADC mezzanine card. In this work, the design and development are described, including the characterization procedure and the preliminary measurement results.
|
|
|
Barrientos, D., Gonzalez, V., Bellato, M., Gadea, A., Bazzacco, D., Blasco, J. M., et al. (2013). Multiple Register Synchronization With a High-Speed Serial Link Using the Aurora Protocol. IEEE Trans. Nucl. Sci., 60(5), 3521–3525.
Abstract: In this work, the development and characterization of a multiple-register synchronization interface communicating over a high-speed serial link using the Aurora protocol is presented. A detailed description of the development process, the characterization methods and the hardware test benches is also included. This interface will implement the slow-control buses of the digitizer cards for the second generation of electronics for the Advanced GAmma Tracking Array (AGATA).
|
|
|
Oliver, J. F., Fuster-Garcia, E., Cabello, J., Tortajada, S., & Rafecas, M. (2013). Application of Artificial Neural Network for Reducing Random Coincidences in PET. IEEE Trans. Nucl. Sci., 60(5), 3399–3409.
Abstract: Positron Emission Tomography (PET) is based on the detection in coincidence of the two photons created in a positron annihilation. In conventional PET, this coincidence identification is usually carried out by a coincidence electronic unit. An accidental coincidence occurs when two photons arising from different annihilations are classified as a coincidence. Accidental coincidences are one of the main sources of image degradation in PET. Some novel systems allow coincidences to be selected post-acquisition in software, or in real time through a digital coincidence engine in an FPGA. These approaches provide the user with extra flexibility in the sorting process and allow the application of alternative coincidence sorting procedures. In this work a novel sorting procedure based on Artificial Neural Network (ANN) techniques has been developed. It has been compared to a conventional coincidence sorting algorithm based on a time coincidence window. The data have been obtained from Monte-Carlo simulations, in which a small-animal PET scanner was modelled to this end. The efficiency (the ratio of correct identifications) can be selected for both methods: in one case by changing the width of the coincidence window used, and in the other by changing a threshold at the output of the neural network. At matched efficiencies, the ANN-based method always produces a sorted output with a smaller random fraction. In addition, two differential trends are found: the conventional method presents a maximum achievable efficiency, while the ANN-based method is able to increase the efficiency up to unity, the ideal value, at the cost of increasing the random fraction. Images reconstructed using ANN-sorted data (no compensation for randoms) present better contrast, and those image features which are more affected by randoms are enhanced. For the image quality phantom used in the paper, the ANN method decreases the spill-over ratio by 18%.
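The conventional baseline the ANN method is compared against can be sketched in a few lines. The event format, window width and event ids below are illustrative assumptions, not the paper's actual data model: singles sharing an annihilation id form a true coincidence, while paired singles with different ids are accidentals ("randoms").

```python
# Conventional time-window coincidence sorting (illustrative sketch):
# pair consecutive singles whose arrival times differ by less than a
# fixed coincidence window.
def sort_coincidences(singles, window_ns=4.0):
    """singles: time-sorted list of (time_ns, annihilation_id) tuples.
    Returns a list of (i, j) index pairs classified as coincidences."""
    pairs = []
    for i in range(len(singles) - 1):
        t_i, _ = singles[i]
        t_j, _ = singles[i + 1]
        if t_j - t_i < window_ns:
            pairs.append((i, i + 1))
    return pairs

singles = [(0.0, 1), (1.2, 1), (50.0, 2), (53.0, 3), (120.0, 4)]
pairs = sort_coincidences(singles)
# Pairs with matching ids are true coincidences; the rest are randoms.
trues = [p for p in pairs if singles[p[0]][1] == singles[p[1]][1]]
```

Widening `window_ns` raises the efficiency but admits more randoms, which is exactly the trade-off the ANN-based sorter is designed to improve on.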
|
|
|
Cabello, J., Torres-Espallardo, I., Gillam, J. E., & Rafecas, M. (2013). PET Reconstruction From Truncated Projections Using Total-Variation Regularization for Hadron Therapy Monitoring. IEEE Trans. Nucl. Sci., 60(5), 3364–3372.
Abstract: Hadron therapy exploits the properties of ion beams to treat tumors by maximizing the dose released to the target and sparing healthy tissue. With hadron beams, the dose distribution shows a relatively low entrance dose which rises sharply at the end of the range, providing the characteristic Bragg peak that drops quickly thereafter. To avoid damaging surrounding healthy tissue and to prevent target underdosage, it is of critical importance to know where the delivered dose profile ends, i.e. the location of the Bragg peak. During hadron therapy, short-lived beta(+)-emitters are produced along the beam path, their distribution being correlated with the delivered dose. Following positron annihilation, two photons are emitted, which can be detected using a positron emission tomography (PET) scanner. The low yield of emitters, their short half-life, and the wash-out from the target region make the use of PET, even only a few minutes after hadron irradiation, a challenging application. In-beam PET represents a potential candidate to estimate the distribution of beta(+)-emitters during or immediately after irradiation, at the cost of truncation effects and degraded image quality due to the partial-ring geometry required for the PET scanner. Time-of-flight (ToF) information can potentially be used to compensate for truncation effects and to enhance image contrast. However, the highly demanding timing performance required in ToF-PET makes this option costly. Alternatively, the use of maximum-a-posteriori expectation-maximization (MAP-EM), including total variation (TV) in the cost function, produces images with low noise, while preserving spatial resolution.
In this paper, we compare data reconstructed with maximum-likelihood expectation-maximization (ML-EM) and MAP-EM using TV as prior, and the impact of including ToF information, from data acquired with a complete and a partial-ring PET scanner, of simulated hadron beams interacting with a polymethyl methacrylate (PMMA) target. The results show that MAP-EM, in the absence of ToF information, produces lower-noise images that agree better with the simulated beta(+) distributions than ML-EM does with ToF information of the order of 200-600 ps. The investigation is extended to the combination of MAP-EM and ToF information to study the limit of performance using both approaches.
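The multiplicative ML-EM update at the core of both reconstruction schemes can be sketched on a 1-D toy problem. The system matrix, image size and iteration count below are illustrative assumptions, not the paper's setup; the MAP-EM variant used in the paper additionally folds the TV penalty into this update, which is omitted here.

```python
import numpy as np

# Toy 1-D ML-EM reconstruction (illustrative, noiseless data).
rng = np.random.default_rng(0)
A = rng.random((20, 8))            # system matrix: 20 LORs x 8 voxels
x_true = np.zeros(8)
x_true[3:5] = 10.0                 # two hot voxels (stand-in activity)
y = A @ x_true                     # forward-projected measurements

x = np.ones(8)                     # uniform non-negative start image
sens = A.sum(axis=0)               # sensitivity image, A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)   # measured / estimated
    x *= (A.T @ ratio) / sens              # multiplicative EM update
```

The update preserves non-negativity by construction, which is one reason EM-type algorithms are standard in emission tomography; the TV prior in MAP-EM then suppresses the noise amplification ML-EM exhibits at high iteration numbers.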
|
|
|
Brown, J. M. C., Gillam, J. E., Paganin, D. M., & Dimmock, M. R. (2013). Laplacian Erosion: An Image Deblurring Technique for Multi-Plane Gamma-Cameras. IEEE Trans. Nucl. Sci., 60(5), 3333–3342.
Abstract: Laplacian Erosion, an image deblurring technique for multi-plane Gamma-cameras, has been developed and tested for planar imaging using a GEANT4 Monte Carlo model of the Pixelated Emission Detector for RadioisOtopes (PEDRO) as a test platform. A contrast phantom and a Derenzo-like phantom, both containing I-125, were employed to investigate the dependence of the performance of Laplacian Erosion on the detection-plane offset and the pinhole geometry. Three different pinhole geometries were tested. It was found that, for the test system, the performance of Laplacian Erosion was inversely proportional to the detection-plane offset, and directly proportional to the pinhole diameter. All tested pinhole geometries showed a reduction in the level of image blurring associated with the pinhole geometry. However, the reduction in image blurring came at the cost of signal-to-noise ratio in the image. The application of Laplacian Erosion was shown to reduce the level of image blurring associated with pinhole geometry and improve recovered image quality in multi-plane Gamma-cameras for the targeted radiotracer I-125.
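Laplacian Erosion itself operates on multi-plane gamma-camera data and is specific to the paper; as a generic illustration of the underlying idea of Laplacian-based deblurring, the sketch below applies plain unsharp masking via the discrete 5-point Laplacian. This is an assumed stand-in, not the paper's algorithm, and shows the same trade-off the abstract reports: edge blur decreases while noise is amplified.

```python
import numpy as np

# Generic Laplacian sharpening as an illustration of Laplacian-based
# deblurring; NOT the paper's Laplacian Erosion algorithm.
def laplacian_sharpen(img, strength=0.5):
    """Subtract the discrete 5-point Laplacian to sharpen edges."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return img - strength * lap

# Smooth test image: a blurred bump, zero at the borders.
blurred = np.outer(np.hanning(16), np.hanning(16))
sharp = laplacian_sharpen(blurred)
```

Because the Laplacian is negative at the peak of a smooth bump, subtracting it raises the peak and steepens the flanks; any pixel noise, however, is amplified by the same operator, mirroring the signal-to-noise cost noted above.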
|
|
|
Vincent, A. C., Scott, P., & Trampedach, R. (2013). Light bosons in the photosphere and the solar abundance problem. Mon. Not. Roy. Astron. Soc., 432(4), 3332–3339.
Abstract: Spectroscopy is used to measure the elemental abundances in the outer layers of the Sun, whereas helioseismology probes the interior. It is well known that current spectroscopic determinations of the chemical composition are starkly at odds with the metallicity implied by helioseismology. We investigate whether the discrepancy may be due to conversion of photons to a new light boson in the solar photosphere. We examine the impact of particles with axion-like interactions with the photon on the inferred photospheric abundances, showing that resonant axion-photon conversion is not possible in the region of the solar atmosphere in which line formation occurs. Although non-resonant conversion in the line-forming regions can in principle impact derived abundances, constraints from axion-photon conversion experiments rule out the couplings necessary for these effects to be detectable. We show that this extends to hidden photons and chameleons (which would exhibit similar phenomenological behaviour), ruling out known theories of new light bosons as photospheric solutions to the solar abundance problem.
|
|
|
KM3NeT Collaboration (Adrian-Martinez, S., et al.), Aguilar, J. A., Bigongiari, C., Calvo Diaz-Aldagalan, D., Emanuele, U., Gomez-Gonzalez, J. P., et al. (2013). Expansion cone for the 3-inch PMTs of the KM3NeT optical modules. J. Instrum., 8, T03006–20pp.
Abstract: Detection of high-energy neutrinos from distant astrophysical sources will open a new window on the Universe. The detection principle exploits the measurement of Cherenkov light emitted by charged particles resulting from neutrino interactions in the matter containing the telescope. A novel multi-PMT digital optical module (DOM) was developed to contain 31 3-inch photomultiplier tubes (PMTs). In order to maximize the detector sensitivity, each PMT will be surrounded by an expansion cone which collects photons that would otherwise miss the photocathode. Results for various angles of incidence with respect to the PMT surface indicate an increase in collection efficiency of 30% on average for angles up to 45 degrees with respect to the perpendicular. Ray-tracing calculations reproduce the measurements, allowing the increase in the overall photocathode sensitivity, integrated over all angles of incidence, to be estimated at 27% (for a single PMT). Prototype DOMs, being built by the KM3NeT consortium, will be equipped with these expansion cones.
|
|
|
Robert, C., Dedes, G., Battistoni, G., Bohlen, T. T., Buvat, I., Cerutti, F., et al. (2013). Distributions of secondary particles in proton and carbon-ion therapy: a comparison between GATE/Geant4 and FLUKA Monte Carlo codes. Phys. Med. Biol., 58(9), 2879–2899.
Abstract: Monte Carlo simulations play a crucial role for in-vivo treatment monitoring based on PET and prompt gamma imaging in proton and carbon-ion therapies. The accuracy of the nuclear fragmentation models implemented in these codes might affect the quality of the treatment verification. In this paper, we investigate the nuclear models implemented in GATE/Geant4 and FLUKA by comparing the angular and energy distributions of secondary particles exiting a homogeneous target of PMMA. Comparison results were restricted to fragmentation of O-16 and C-12. Despite the very simple target and set-up, substantial discrepancies were observed between the two codes. For instance, the number of high-energy (>1 MeV) prompt gammas exiting the target was about twice as large with GATE/Geant4 as with FLUKA, for both proton and carbon-ion beams. Such differences were not observed for the predicted annihilation photon production yields, for which ratios of 1.09 and 1.20 were obtained between GATE and FLUKA for the proton beam and the carbon-ion beam, respectively. For neutrons and protons, discrepancies from 14% (exiting protons, carbon-ion beam) to 57% (exiting neutrons, proton beam) have been identified in the production yields as well as in the energy spectra.
|
|
|
D'Ambrosio, G., Greynat, D., & Vulvert, G. (2013). Standard Model and New Physics contributions to K_L and K_S into four leptons. Eur. Phys. J. C, 73(12), 2678–10pp.
Abstract: We study the K_L and K_S decays into four leptons, where we use a form factor motivated by vector meson dominance, and show the dependence of the branching ratios and spectra on the slopes. A precise determination of the short-distance contribution to K_L -> mu(+) mu(-) is affected by our ignorance of the sign of the amplitude, but we show a possibility to measure the sign of this amplitude by studying K_L and K_S decays into four leptons. We also investigate the effect of New Physics contributions to these decays.
|
|