de Salas, P. F., Gariazzo, S., Mena, O., Ternes, C. A., & Tortola, M. (2018). Neutrino Mass Ordering From Oscillations and Beyond: 2018 Status and Future Prospects. Front. Astron. Space Sci., 5, 36–50pp.
Abstract: The ordering of the neutrino masses is a crucial input for a deep understanding of flavor physics, and its determination may provide the key to establishing the relationship between the lepton masses and mixings and their analogous properties in the quark sector. The extraction of the neutrino mass ordering is a data-driven field expected to evolve very rapidly in the next decade. In this review, we both analyse the present status and describe the physics of future prospects. Firstly, the different tools currently available to measure the neutrino mass ordering are described: reactor and long-baseline (accelerator and atmospheric) neutrino beams, laboratory searches for beta and neutrinoless double beta decays, and observations of the cosmic background radiation and the large-scale structure of the universe are carefully reviewed. Secondly, the results from an up-to-date comprehensive global fit are reported: the Bayesian analysis of the 2018 publicly available oscillation and cosmological data sets provides strong evidence for the normal neutrino mass ordering versus the inverted scenario, with a significance of 3.5 standard deviations. This preference for the normal neutrino mass ordering is mostly due to neutrino oscillation measurements. Finally, we also emphasize the future perspectives for unveiling the neutrino mass ordering. In this regard, apart from describing the expectations from the aforementioned probes, we also focus on those arising from alternative and novel methods, such as 21 cm cosmology, core-collapse supernova neutrinos, and the direct detection of relic neutrinos.
De La Torre Luque, P., Gaggero, D., Grasso, D., & Marinelli, A. (2022). Prospects for detection of a galactic diffuse neutrino flux. Front. Astron. Space Sci., 9, 1041838–9pp.
Abstract: A Galactic cosmic-ray transport model featuring non-homogeneous transport has been developed in recent years. This setup is aimed at reproducing gamma-ray observations in different regions of the Galaxy (with particular focus on the progressive hardening of the hadronic spectrum in the inner Galaxy) and was shown to be compatible with the very-high-energy gamma-ray diffuse emission recently detected up to PeV energies. In this work, we extend the results previously presented to test the reliability of that model throughout the whole sky. To this aim, we compare our predictions with detailed longitude and latitude profiles of the diffuse gamma-ray emission measured by Fermi-LAT at different energies, and compute the expected Galactic diffuse neutrino emission, comparing it with current limits from the ANTARES collaboration. We emphasize that the possible detection of a Galactic neutrino component would allow us to break the degeneracy between our model and other scenarios featuring prominent contributions from unresolved sources and TeV halos.
Colonna, N., Belloni, F., Berthoumieux, E., Calviani, M., Domingo-Pardo, C., Guerrero, C., et al. (2010). Advanced nuclear energy systems and the need of accurate nuclear data: the n_TOF project at CERN. Energy Environ. Sci., 3(12), 1910–1917.
Abstract: To satisfy the world's constantly increasing demand for energy, a suitable mix of different energy sources has to be devised. In this scenario, an important role could be played by nuclear energy, provided that the major safety, waste, and proliferation issues affecting current nuclear reactors are satisfactorily addressed. To this end, a large effort has been under way for several years towards the development of advanced nuclear systems with the aim of closing the fuel cycle. Generation IV reactors, with full or partial waste recycling capability, accelerator-driven systems, as well as new fuel cycles are the main options being investigated. The design of advanced systems requires improvements in basic nuclear data, such as cross-sections for neutron-induced reactions on actinides. In this paper, the main concepts of advanced reactor systems are described, together with the related needs for new and accurate nuclear data. The present activity in this field at the n_TOF neutron facility at CERN is discussed.
Clausse, A., Soto, L., & Tarifeño-Saldivia, A. (2015). Influence of the Anode Length on the Neutron Emission of a 50 J Plasma Focus: Modeling and Experiment. IEEE Trans. Plasma Sci., 43(2), 629–636.
Abstract: A comprehensive set of electrical data measured in a small plasma focus (PF) device of 50 J, correlated with the corresponding neutron emissions, is taken as the basis for developing a semiempirical model of the current sheet dynamics and the neutron yield. The model is able to explain the dependence of the neutron yield on the pressure and anode length with good accuracy, and suggests a physical interpretation of the drive parameter commonly used in PF design.
Carrio, F., Castillo Gimenez, V., Ferrer, A., Gonzalez, V., Higon-Rodriguez, E., Marin, C., et al. (2011). Optical Link Card Design for the Phase II Upgrade of TileCal Experiment. IEEE Trans. Nucl. Sci., 58(4), 1657–1663.
Abstract: This paper presents the design of an optical link card developed in the frame of the R&D activities for the Phase 2 upgrade of the TileCal experiment. This board, which is part of the evaluation of different technologies for the final choice in the coming years, is designed as a mezzanine that can work independently or be plugged into the optical multiplexer board of the TileCal back-end electronics. It includes two SNAP12 optical connectors able to transmit and receive up to 75 Gb/s, and one SFP optical connector for lower speeds and compatibility with existing hardware such as the read-out driver. All processing is done in a Stratix II GX field-programmable gate array (FPGA). Details are given on the hardware design, including the signal and power integrity analysis needed when working at these high data rates, and on the firmware development to obtain the best performance from the FPGA signal transceivers and for the use of the GBT protocol.
Carrio, F. (2022). The Data Acquisition System for the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator. IEEE Trans. Nucl. Sci., 69(4), 687–695.
Abstract: The tile calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). In 2025, the LHC will be upgraded, leading to the high-luminosity LHC (HL-LHC). The HL-LHC will deliver an instantaneous luminosity up to seven times larger than the LHC nominal luminosity. The ATLAS Phase-II upgrade (2025-2027) will adapt the subdetectors to the HL-LHC requirements. As part of this upgrade, the majority of the TileCal on-detector and off-detector electronics will be replaced using a new readout strategy, where the on-detector electronics will digitize the detector data and transmit them to the off-detector electronics at the bunch crossing frequency (40 MHz). In the counting rooms, the off-detector electronics will compute reconstructed trigger objects for the first-level trigger and will store the digitized samples in pipelined buffers until the reception of a trigger acceptance signal. The off-detector electronics will also distribute the LHC clock to the on-detector electronics, embedded within the digital data stream. The TileCal Phase-II upgrade project has undertaken an extensive research and development program that includes the development of a Demonstrator module to evaluate the performance of the new clock and readout architecture envisaged for the HL-LHC. The Demonstrator module, equipped with the latest version of the on-detector electronics, was built and inserted into the ATLAS experiment. The Demonstrator module is operated and read out using a Tile PreProcessor (TilePPr) Demonstrator, which enables backward compatibility with the present ATLAS Trigger and Data AcQuisition (TDAQ) and the timing, trigger, and command (TTC) systems. This article describes in detail the main hardware and firmware components of the clock distribution and data acquisition systems for the Demonstrator module, focusing on the TilePPr Demonstrator.
Cabello, J., Torres-Espallardo, I., Gillam, J. E., & Rafecas, M. (2013). PET Reconstruction From Truncated Projections Using Total-Variation Regularization for Hadron Therapy Monitoring. IEEE Trans. Nucl. Sci., 60(5), 3364–3372.
Abstract: Hadron therapy exploits the properties of ion beams to treat tumors by maximizing the dose released to the target and sparing healthy tissue. With hadron beams, the dose distribution shows a relatively low entrance dose which rises sharply at the end of the range, producing the characteristic Bragg peak that drops quickly thereafter. In order not to damage surrounding healthy tissues and/or to avoid target underdosage, it is of critical importance to know where the delivered dose profile ends: the location of the Bragg peak. During hadron therapy, short-lived beta(+)-emitters are produced along the beam path, their distribution being correlated with the delivered dose. Following positron annihilation, two photons are emitted, which can be detected using a positron emission tomography (PET) scanner. The low yield of emitters, their short half-life, and the washout from the target region make the use of PET, even only a few minutes after hadron irradiation, a challenging application. In-beam PET represents a potential candidate to estimate the distribution of beta(+)-emitters during or immediately after irradiation, at the cost of truncation effects and degraded image quality due to the partial rings required of the PET scanner. Time-of-flight (ToF) information can potentially be used to compensate for truncation effects and to enhance image contrast. However, the highly demanding timing performance required in ToF-PET makes this option costly. Alternatively, the use of maximum-a-posteriori expectation-maximization (MAP-EM), including total variation (TV) in the cost function, produces images with low noise while preserving spatial resolution. In this paper, we compare data reconstructed with maximum-likelihood expectation-maximization (ML-EM) and MAP-EM using TV as a prior, and the impact of including ToF information, from data acquired with a complete and a partial-ring PET scanner for simulated hadron beams interacting with a polymethyl methacrylate (PMMA) target. The results show that MAP-EM, in the absence of ToF information, produces images with lower noise and closer agreement with the simulated beta(+) distributions than ML-EM with ToF information of the order of 200-600 ps. The investigation is extended to the combination of MAP-EM and ToF information to study the performance limit of using both approaches.
Brown, J. M. C., Gillam, J. E., Paganin, D. M., & Dimmock, M. R. (2013). Laplacian Erosion: An Image Deblurring Technique for Multi-Plane Gamma-Cameras. IEEE Trans. Nucl. Sci., 60(5), 3333–3342.
Abstract: Laplacian Erosion, an image deblurring technique for multi-plane Gamma-cameras, has been developed and tested for planar imaging using a GEANT4 Monte Carlo model of the Pixelated Emission Detector for RadioisOtopes (PEDRO) as a test platform. A contrast phantom and a Derenzo-like phantom, both composed of I-125, were employed to investigate the dependence of the performance of Laplacian Erosion on the detection-plane offset and the pinhole geometry. Three different pinhole geometries were tested. It was found that, for the test system, the performance of Laplacian Erosion was inversely proportional to the detection-plane offset and directly proportional to the pinhole diameter. All tested pinhole geometries saw a reduction in the level of image blurring associated with the pinhole geometry. However, this reduction came at the cost of signal-to-noise ratio in the image. The application of Laplacian Erosion was shown to reduce the level of image blurring associated with the pinhole geometry and to improve recovered image quality in multi-plane Gamma-cameras for the targeted radiotracer I-125.
Briz, J. A., Nerio, A. N., Ballesteros, C., Borge, M. J. G., Martinez, P., Perea, A., et al. (2022). Proton Radiographs Using Position-Sensitive Silicon Detectors and High-Resolution Scintillators. IEEE Trans. Nucl. Sci., 69(4), 696–702.
Abstract: Proton therapy is a growing cancer treatment technique, since it offers advantages with respect to conventional X-ray and gamma-ray radiotherapy: in particular, better control of the dose deposition, allowing higher conformity in the treatments and fewer secondary effects. However, in order to take full advantage of its potential, improvements in treatment planning and dose verification are required. A new prototype of a proton computed tomography scanner is proposed to design more accurate and precise treatment plans for proton therapy. Our prototype is formed by double-sided silicon strip detectors and LaBr3(Ce) scintillators with high energy resolution and fast response. Here, the results obtained from an experiment performed using a 100-MeV proton beam are presented. Proton radiographs of polymethyl methacrylate (PMMA) samples of 50-mm thickness with spatial patterns in aluminum were taken. Their properties were studied, including reproduction of the dimensions, spatial resolution, and sensitivity to different materials. Structures as small as 2 mm are well resolved, and the sensitivity of the system was sufficient to distinguish thicknesses of 10 mm of aluminum or PMMA. The spatial resolution of the images was 0.3 line pairs per mm (MTF-10%). This constitutes the first step towards validating the device as a proton radiography scanner.
Bouhova-Thacker, E., Kostyukhin, V., Koffas, T., Liebig, W., Limper, M., Piacquadio, G. N., et al. (2010). Expected Performance of Vertex Reconstruction in the ATLAS Experiment at the LHC. IEEE Trans. Nucl. Sci., 57(2), 760–767.
Abstract: In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10^34 cm^-2 s^-1), efficient reconstruction of vertices is crucial for many physics analyses. Described in this paper is the expected performance of the vertex reconstruction used in the ATLAS experiment. The algorithms for the reconstruction of primary and secondary vertices, as well as for finding photon conversions and reconstructing vertices in jets, are described. The implementation of the vertex algorithms, which follows a very modular design based on object-oriented C++, is presented. A user-friendly concept allows event reconstruction and physics analyses to compare and optimize their choice among different vertex reconstruction strategies. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples, and results are presented.