|
Borja-Lloret, M., Barrientos, L., Bernabeu, J., Lacasta, C., Muñoz, E., Ros, A., et al. (2023). Influence of the background in Compton camera images for proton therapy treatment monitoring. Phys. Med. Biol., 68(14), 144001–16pp.
Abstract: Objective. Background events are one of the most relevant contributions to image degradation in Compton camera imaging for hadron therapy treatment monitoring. A study of the background and its contribution to image degradation is important to define future strategies to reduce the background in the system. Approach. In this simulation study, the percentage of different kinds of events and their contribution to the reconstructed image in a two-layer Compton camera have been evaluated. To this end, GATE v8.2 simulations of a proton beam impinging on a PMMA phantom have been carried out for different proton beam energies and at different beam intensities. Main results. For a simulated Compton camera made of lanthanum(III) bromide monolithic crystals, coincidences caused by neutrons arriving from the phantom are the most common type of background produced by secondary radiation in the Compton camera, causing between 13% and 33% of the detected coincidences, depending on the beam energy. Results also show that random coincidences are a significant cause of image degradation at high beam intensities, and their influence on the reconstructed images is studied for time coincidence window values from 500 ps to 100 ns. Significance. The results indicate the timing capabilities required to retrieve the fall-off position with good precision. Still, the noise observed in the image when no randoms are considered makes us consider further background rejection methods.
|
|
|
de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2017). Calculation of the local density of relic neutrinos. J. Cosmol. Astropart. Phys., 09(9), 034–24pp.
Abstract: Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV. We consider the gravitational clustering of relic neutrinos within the Milky Way, and use the N-one-body simulation technique to compute their density enhancement factor in the neighbourhood of the Earth with respect to the average cosmic density. Compared to previous similar studies, we pushed the simulation down to smaller neutrino masses, and included an improved treatment of the baryonic and dark matter distributions in the Milky Way. Our results are important for future experiments aiming at detecting the cosmic neutrino background, such as the Princeton Tritium Observatory for Light, Early-universe, Massive-neutrino Yield (PTOLEMY) proposal. We calculate the impact of neutrino clustering in the Milky Way on the expected event rate for a PTOLEMY-like experiment. We find that the effect of clustering remains negligible for the minimal normal hierarchy scenario, while it enhances the event rate by 10 to 20% (resp. a factor 1.7 to 2.5) for the minimal inverted hierarchy scenario (resp. a degenerate scenario with 150 meV masses). Finally, we compute the impact on the event rate of a possible fourth sterile neutrino with a mass of 1.3 eV.
|
|
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo, F. L., et al. (2021). Measurements of sensor radiation damage in the ATLAS inner detector using leakage currents. J. Instrum., 16(8), P08025–46pp.
Abstract: Non-ionizing energy loss causes bulk damage to the silicon sensors of the ATLAS pixel and strip detectors. This damage has important implications for data-taking operations, charged-particle track reconstruction, detector simulations, and physics analysis. This paper presents simulations and measurements of the leakage current in the ATLAS pixel detector and semiconductor tracker as a function of location in the detector and time, using data collected in Run 1 (2010-2012) and Run 2 (2015-2018) of the Large Hadron Collider. The extracted fluence shows a much stronger |z|-dependence in the innermost layers than is seen in simulation. Furthermore, the overall fluence on the second innermost layer is significantly higher than in simulation, with better agreement in layers at higher radii. These measurements are important for validating the simulation models and can be used in part to justify safety factors for future detector designs and interventions.
|
|
|
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) -> q q̄', photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results, however, carry intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
|
|
|
Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065–12pp.
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their usage in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, and this restricts their applicability. To help solve these current limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework code based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It has been implemented in the C++ programming language and takes advantage of modern object-oriented technologies. In addition, PenRed offers the capability to read and process DICOM images as well as to construct and simulate image-based voxelized geometries, so as to facilitate its usage in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in the simulation time without any loss in precision of results.
Program summary. Program title: PenRed: Parallel Engine for Radiation Energy Deposition. CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1. Licensing provisions: GNU Affero General Public License (AGPL). Programming language: C++ standard 2011. Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications necessitate particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure which allows users to easily customise their code to suit their needs without an in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customizable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its unique physics models and tracking algorithms for charged particles. PenRed has been coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a double approach: on the one hand, by using standard C++ threads for shared memory, improving the access and usage of the memory, and, on the other hand, via the MPI standard for distributed-memory infrastructures. Note that both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise the performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt our engine to their needs without changing the original modules. Furthermore, user extensions will take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming. Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ of GCC versions 4.8.5, 7.3.1, 8.3.1 and 9; clang version 3.4.2; and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler that supports C++11. In addition, if the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution.
Finally, to provide DICOM processing support, PenRed can optionally be compiled with the DICOM toolkit (DCMTK) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the DCMTK library.
|
|
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., Cerda Alberich, L., et al. (2016). A measurement of material in the ATLAS tracker using secondary hadronic interactions in 7 TeV pp collisions. J. Instrum., 11, P11020–41pp.
Abstract: Knowledge of the material in the ATLAS inner tracking detector is crucial to understanding the reconstruction of charged-particle tracks and the performance of algorithms that identify jets containing b-hadrons, and is also essential to reduce background in searches for exotic particles that can decay within the inner detector volume. Interactions of primary hadrons produced in pp collisions with the material in the inner detector are used to map the location and amount of this material. The hadronic interactions of primary particles may result in secondary vertices, which in this analysis are reconstructed by an inclusive vertex-finding algorithm. Data were collected using minimum-bias triggers by the ATLAS detector operating at the LHC during 2010 at a centre-of-mass energy of √s = 7 TeV, and correspond to an integrated luminosity of 19 nb⁻¹. Kinematic properties of these secondary vertices are used to study the validity of the modelling of hadronic interactions in simulation. Secondary-vertex yields are compared between data and simulation over a volume of about 0.7 m³ around the interaction point, and agreement is found within overall uncertainties.
|
|
|
Guadilla, V., Algora, A., Tain, J. L., Agramunt, J., Jordan, D., Monserrate, M., et al. (2017). Characterization of a cylindrical plastic beta-detector with Monte Carlo simulations of optical photons. Nucl. Instrum. Methods Phys. Res. A, 854, 134–138.
Abstract: In this work we report on the Monte Carlo study performed to understand and reproduce experimental measurements of a new plastic beta-detector with cylindrical geometry. Since energy-deposition simulations differ from the experimental measurements for such a geometry, we show how simulating the production and transport of optical photons allows one to obtain the shapes of the experimental spectra. Moreover, taking into account the computational effort associated with this kind of simulation, we develop a method to convert simulated deposited energy into collected light, depending only on the interaction point in the detector. This method represents a useful solution when extensive simulations have to be done, as in the case of the calculation of the response function of the spectrometer in a total absorption gamma-ray spectroscopy analysis.
|
|
|
Zhang, X., Xiao, Y. T., & Gimeno, B. (2020). Multipactor Suppression by a Resonant Static Magnetic Field on a Dielectric Surface. IEEE Trans. Electron Devices, 67(12), 5723–5728.
Abstract: In this article, we study the suppression of the multipactor phenomenon on a dielectric surface by a resonant static magnetic field. A homemade Monte Carlo algorithm is developed for multipactor simulations on a dielectric surface driven by two orthogonal radio frequency (RF) electric field components. When the static magnetic field is perpendicular to the tangential and normal RF electric fields, it is shown that if the normal electric field lags the tangential electric field by π/2, the superposition of the normal and tangential electric fields will trigger a gyro-acceleration of the electron cloud and restrain the multipactor discharge effectively. By contrast, when the normal electric field is in advance of the tangential electric field by π/2, the difference between the normal and tangential electric fields drives gyro-motion of the electron cloud. Consequently, two enhanced discharge zones are inevitable. The suppression effects of the resonant static magnetic field that is parallel to the tangential RF electric field or to the normal RF electric field are also presented.
|
|
|
Campanario, F., & Kubocz, M. (2014). Higgs boson CP-properties of the gluonic contributions in Higgs plus three jet production via gluon fusion at the LHC. J. High Energy Phys., 10(10), 173–16pp.
Abstract: In high-energy hadronic collisions, a general CP-violating Higgs boson Φ with accompanying jets can be efficiently produced via gluon fusion, which is mediated by heavy-quark loops. In this article, we study the dominant sub-channel gg -> gggΦ of the gluon fusion production process with triple real emission corrections at order α_s⁵. We go beyond the heavy top-quark approximation and include the full mass dependence of the top- and bottom-quark contributions. Furthermore, in a specific model we demonstrate the features of our program and show the impact of bottom-quark loop contributions in combination with large values of tan β on differential distributions sensitive to CP-measurements of the Higgs boson.
|
|
|
Muñoz, E., Barrio, J., Bernabeu, J., Etxebeste, A., Lacasta, C., Llosa, G., et al. (2018). Study and comparison of different sensitivity models for a two-plane Compton camera. Phys. Med. Biol., 63(13), 135004–19pp.
Abstract: Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with Na-22 sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance.
Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
|
|