Ros Garcia, A., Barrio, J., Etxebeste, A., Garcia-Lopez, J., Jimenez-Ramos, M. C., Lacasta, C., et al. (2020). MACACO II test-beam with high energy photons. Phys. Med. Biol., 65(24), 245027–12pp.
Abstract: The IRIS group at IFIC Valencia is developing a three-layer Compton camera for treatment monitoring in proton therapy. The system is composed of three detector planes, each made of a LaBr3 monolithic crystal coupled to a SiPM array. Having obtained successful results with the first prototype (MACACO), which demonstrated the feasibility of the proposed technology, a second prototype (MACACO II) with improved performance has been developed and is the subject of this work. The new system has enhanced detector energy resolution, which translates into a higher spatial resolution of the telescope. The image reconstruction method has also been improved with an accurate model of the sensitivity matrix. The device has been tested with high-energy photons at the National Accelerator Centre (CNA, Seville). The tests involved an 18 MeV proton beam impinging on a graphite target to produce 4.4 MeV photons. Data were taken at different telescope positions, with the first detector at 65 and 160 mm from the target, and at different beam intensities. The measurements allowed successful reconstruction of the photon emission distribution at two target positions separated by 5 mm in different telescope configurations. This result was obtained both with data recorded in the first and second telescope planes (two-interaction events) and, for the first time in beam experiments, with data recorded in all three planes (three-interaction events).
Keywords: Compton imaging; Compton camera; proton therapy; LaBr3; test-beam; image reconstruction
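The reconstruction described in the abstract rests on standard Compton scattering kinematics: the scattering angle in the first plane follows from the energies deposited in successive planes. A minimal sketch for two-interaction events (illustrative code, not from the paper), assuming full absorption of the scattered photon in the second plane:

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_cos_theta(e_dep1, e_dep2):
    """Cosine of the Compton scattering angle for a two-interaction event.

    e_dep1: energy (MeV) deposited by the recoil electron in the first plane.
    e_dep2: energy (MeV) deposited in the second plane, assumed here to be
            the full absorption of the scattered photon.
    """
    e0 = e_dep1 + e_dep2  # incident photon energy under the full-absorption assumption
    return 1.0 - ME_C2 * (1.0 / e_dep2 - 1.0 / e0)

# Cone opening angle for a 4.4 MeV photon depositing 0.5 MeV in the first plane
cos_t = compton_cos_theta(0.5, 3.9)
theta_deg = math.degrees(math.acos(cos_t))
```

Each such event defines a cone of possible source directions around the axis joining the two interaction points; the image is reconstructed from the intersection of many cones. Three-interaction events additionally constrain the incident energy without requiring full absorption.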
|
Gonzalez-Iglesias, D., Esperante, D., Gimeno, B., Boronat, M., Blanch, C., Fuster-Martinez, N., et al. (2021). Analytical RF Pulse Heating Analysis for High Gradient Accelerating Structures. IEEE Trans. Nucl. Sci., 68(2), 78–91.
Abstract: The main aim of this work is to present a simple method, based on analytical expressions, for obtaining the temperature increase due to the Joule effect inside the metallic walls of an RF accelerating component. This technique relies on solving the 1-D heat-transfer equation for a thick wall, considering that the heat sources inside the wall are the ohmic losses produced by the RF electromagnetic fields penetrating the metal with finite electrical conductivity. Furthermore, it is discussed how the theoretical expressions of this method can be applied to approximate the temperature increase in realistic 3-D RF accelerating structures, taking as examples the cavity of an RF electron photoinjector and a traveling-wave linac cavity. These theoretical results have been benchmarked against numerical simulations carried out with commercial finite-element method (FEM) software, finding good agreement between them. Moreover, the advantage of the analytical method over numerical simulations is demonstrated. In particular, the model could be very useful during the design and optimization phase of RF accelerating structures, where many different combinations of parameters must be analyzed in order to obtain the proper working point of the device, saving time and speeding up the process. However, it must be mentioned that the method described in this article is intended to provide a quick approximation to the temperature increase in the device, which is not as accurate as full 3-D numerical simulations of the component.
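The physical setup described above can be illustrated numerically: a 1-D wall heated by an ohmic source that decays exponentially into the metal over the skin depth. The sketch below uses an explicit finite-difference scheme with illustrative copper-like parameters (all values are assumptions for demonstration, not the paper's analytical expressions):

```python
import math

# Illustrative copper-like parameters (assumptions, not from the paper)
alpha = 1.1e-4   # thermal diffusivity, m^2/s
rho_c = 3.45e6   # volumetric heat capacity, J/(m^3 K)
delta = 1.0e-6   # RF skin depth, m
q0 = 1.0e13      # peak ohmic power density at the surface, W/m^3

L = 50e-6                     # modelled wall depth, m
nx = 200
dx = L / nx
dt = 0.2 * dx * dx / alpha    # below the explicit stability limit
pulse = 1.0e-6                # RF pulse length, s

# Ohmic source per node: exponential decay over the skin depth
src = [q0 * math.exp(-i * dx / delta) / rho_c for i in range(nx + 1)]

T = [0.0] * (nx + 1)          # temperature rise above ambient, K
t = 0.0
while t < pulse:
    Tn = T[:]
    for i in range(1, nx):
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / (dx * dx)
        Tn[i] = T[i] + dt * (alpha * lap + src[i])
    Tn[0] = Tn[1]             # adiabatic boundaries
    Tn[nx] = Tn[nx - 1]
    T = Tn
    t += dt

surface_rise = T[0]           # pulse heating at the vacuum-metal interface
```

For these parameters the deposited flux is about 10 MW/m² over 1 µs, giving a surface temperature rise of a few tenths of a kelvin, consistent with the semi-infinite constant-flux estimate ΔT ≈ (2q″/k)·√(αt/π).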
|
Balibrea-Correa, J., Lerendegui-Marco, J., Babiano-Suarez, V., Caballero, L., Calvo, D., Ladarescu, I., et al. (2021). Machine Learning aided 3D-position reconstruction in large LaCl3 crystals. Nucl. Instrum. Methods Phys. Res. A, 1001, 165249–17pp.
Abstract: We investigate five different models to reconstruct the 3D gamma-ray hit coordinates in five large LaCl3(Ce) monolithic crystals optically coupled to pixelated silicon photomultipliers. These scintillators have a base surface of 50 x 50 mm(2) and five different thicknesses, from 10 mm to 30 mm. Four of these models are analytical prescriptions and one is based on a Convolutional Neural Network. Average resolutions close to 1-2 mm fwhm are obtained in the transverse crystal plane for crystal thicknesses between 10 mm and 20 mm using analytical models. For thicker crystals average resolutions of about 3-5 mm fwhm are obtained. Depth of interaction resolutions between 1 mm and 4 mm are achieved depending on the distance of the interaction point to the photosensor surface. We propose a Machine Learning algorithm to correct for linearity distortions and pin-cushion effects. The latter allows one to keep a large field of view of about 70%-80% of the crystal surface, regardless of crystal thickness. This work is aimed at optimizing the performance of the so-called Total Energy Detector with Compton imaging capability (i-TED) for time-of-flight neutron capture cross-section measurements.
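The simplest analytical prescription for transverse position in a monolithic crystal is a charge-weighted centroid over the photosensor pixels (Anger logic); whether this is one of the paper's four models is an assumption here. A minimal sketch:

```python
def centroid_position(charges, pitch=6.0):
    """Estimate the transverse (x, y) hit position in a monolithic crystal
    from an n x n pixelated photosensor response via a charge-weighted
    centroid (Anger logic).

    charges: 2D list of per-pixel integrated charges (rows = y, cols = x).
    pitch:   pixel pitch in mm (illustrative value, an assumption).
    Returns (x, y) in mm relative to the crystal centre.
    """
    n = len(charges)
    total = sum(sum(row) for row in charges)
    # pixel-centre coordinates, centred on the crystal
    coord = [(i - (n - 1) / 2.0) * pitch for i in range(n)]
    x = sum(charges[r][c] * coord[c] for r in range(n) for c in range(n)) / total
    y = sum(charges[r][c] * coord[r] for r in range(n) for c in range(n)) / total
    return x, y
```

Near the crystal edges the truncated light distribution biases this estimator toward the centre, producing exactly the pin-cushion compression that the abstract's Machine Learning correction addresses.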
|
Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065–12pp.
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their use in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, which restricts their applicability. To help overcome these limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework code based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It has been implemented in the C++ programming language and takes advantage of modern object-oriented technologies. In addition, PenRed offers the capability to read and process DICOM images as well as to construct and simulate image-based voxelized geometries, so as to facilitate its usage in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in simulation time without any loss of precision in the results.
Program summary
Program title: PenRed: Parallel Engine for Radiation Energy Deposition.
CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1
Licensing provisions: GNU Affero General Public License (AGPL).
Programming language: C++ (2011 standard).
Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications necessitate particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure which allows users to easily customise the code to suit their needs without in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customizable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its physics models and tracking algorithms for charged particles. PenRed has been coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a double approach: on the one hand, standard C++ threads for shared memory, improving memory access and usage; on the other hand, the MPI standard for distributed-memory infrastructures. Both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt the engine to their needs without changing the original modules. Furthermore, user extensions take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming.
Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ (GCC versions 4.8.5, 7.3.1, 8.3.1 and 9), clang version 3.4.2 and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler that supports C++11. If the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution. Finally, to provide DICOM processing support, PenRed can optionally be compiled with the DICOM toolkit (dcmtk) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the dcmtk library.
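The double parallelism described above follows the usual pattern for parallel Monte Carlo codes: independent workers with decorrelated random streams accumulate partial tallies that are merged, with uncertainties, at the end. A schematic stdlib-only sketch of that pattern (an illustration of the general idea, not PenRed's actual API):

```python
import random
from concurrent.futures import ThreadPoolExecutor

MU = 0.2  # illustrative attenuation coefficient, 1/cm (an assumption)

def run_histories(seed, n):
    """One worker: simulate n photon free paths with its own RNG stream
    and return partial tallies (sum, sum of squares, count)."""
    rng = random.Random(seed)     # per-worker stream, decorrelated by seed
    s = s2 = 0.0
    for _ in range(n):
        depth = rng.expovariate(MU)   # exponentially distributed free path, cm
        s += depth
        s2 += depth * depth
    return s, s2, n

def parallel_mc(n_workers=4, n_per_worker=50_000):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda w: run_histories(1000 + w, n_per_worker),
                              range(n_workers)))
    # Merge partial tallies from all workers
    s = sum(p[0] for p in parts)
    s2 = sum(p[1] for p in parts)
    n = sum(p[2] for p in parts)
    mean = s / n
    var = s2 / n - mean * mean
    sigma_mean = (var / n) ** 0.5     # statistical uncertainty of the mean
    return mean, sigma_mean
```

The merged mean free path should approach 1/MU = 5 cm, with the uncertainty shrinking as 1/√N; the same tally-merging logic applies whether workers are threads, MPI ranks, or both.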
|
Carrasco-Ribelles, L. A., Pardo-Mas, J. R., Tortajada, S., Saez, C., Valdivieso, B., & Garcia-Gomez, J. M. (2021). Predicting morbidity by local similarities in multi-scale patient trajectories. J. Biomed. Inform., 120, 103837–9pp.
Abstract: Patient Trajectories (PTs) are a method of representing the temporal evolution of patients. They can include information from different sources and be used in socio-medical or clinical domains. PTs have generally been used to generate and study the most common trajectories in, for instance, the development of a disease. Healthcare predictive models, on the other hand, generally rely on static snapshots of patient information. Only a few works on prediction in healthcare have been found that use PTs and therefore benefit from their temporal dimension, and all of them used PTs created from single-source information. Therefore, the use of longitudinal multi-scale data to build PTs and obtain predictions about health conditions is yet to be explored. Our hypothesis is that local similarities on small chunks of PTs can identify patients that are similar with respect to their future morbidities. The objectives of this work are (1) to develop a methodology to identify local similarities between PTs before the occurrence of morbidities in order to predict these on new query individuals; and (2) to validate this methodology on risk prediction of cardiovascular disease (CVD) occurrence in patients with diabetes. We propose a novel formal definition of PTs based on sequences of longitudinal multi-scale data, together with a dynamic programming methodology to identify local alignments on PTs for predicting future morbidities. Both the proposed PT definition and the alignment algorithm are generic and can be applied in any clinical domain. We validated this solution for predicting CVD in patients with diabetes and achieved a precision of 0.33, a recall of 0.72 and a specificity of 0.38. The proposed solution can therefore be of great utility for secondary screening in the diabetes use case.
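The local-alignment idea can be illustrated with a Smith-Waterman-style dynamic program over event sequences. The scoring and the toy event codes below are invented for illustration; the paper's actual scoring over multi-scale PTs is richer:

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between two patient trajectories,
    each given as a sequence of hashable events (Smith-Waterman style).
    Scores never drop below zero, so the alignment is local: it picks
    out the best-matching chunk rather than forcing a global match."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Two toy trajectories sharing a local run of events before a morbidity
q = ["dx:T2D", "rx:metformin", "lab:HbA1c_high", "visit:cardio"]
r = ["visit:gp", "rx:metformin", "lab:HbA1c_high", "visit:cardio", "dx:CVD"]
```

A query patient whose recent trajectory aligns strongly with the pre-CVD chunk of a historical patient would, under the paper's hypothesis, inherit an elevated risk estimate.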
|
Fernandez Casani, A., Orduña, J. M., Sanchez, J., & Gonzalez de la Hoz, S. (2021). A Reliable Large Distributed Object Store Based Platform for Collecting Event Metadata. J. Grid Comput., 19(3), 39–19pp.
Abstract: The Large Hadron Collider (LHC) is about to enter its third run at unprecedented energies. The experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users. The ATLAS EventIndex project, currently running in production, builds a complete catalogue of particle collisions, or events, for the ATLAS experiment at the LHC. The distributed nature of the experiment's data model is exploited by running jobs at over one hundred Grid data centers worldwide. Millions of files holding petabytes of data are indexed, extracting a small quantity of metadata per event that is conveyed in real time by a data collection system to a central Hadoop instance at CERN. After a successful first implementation based on a messaging system, several issues pointed to performance bottlenecks at the higher rates expected in the next runs of the experiment. In this work we characterize the weaknesses of the previous messaging system regarding complexity, scalability, performance and resource consumption. A new approach based on an object-based storage method was designed and implemented, taking into account the lessons learned and leveraging the ATLAS experience with this kind of system. We present the evaluation that we ran for three months in the real worldwide production scenario in order to compare the messaging and object store approaches. The results show that the new object-based storage method can efficiently support large-scale data collection for big-data environments like the next runs of the ATLAS experiment at the LHC.
Keywords: Grid computing; Hadoop file system; Object-Based storage
|
Reig, M. (2021). The stochastic axiverse. J. High Energy Phys., 09(9), 207–40pp.
Abstract: In addition to spectacular signatures such as black hole superradiance and the rotation of CMB polarization, the plenitude of axions appearing in the string axiverse may have potentially dangerous implications. An example is the cosmological overproduction of relic axions and moduli by the misalignment mechanism, more pronounced in the regions where the signals mentioned above may be observable, that is, for large axion decay constant. In this work, we study the minimal requirements to soften this problem and show that the fundamental requirement is a long period of low-scale inflation. However, in this case, if the inflationary Hubble scale is lower than around O(100) eV, no relic DM axion is produced in the early Universe. Cosmological production of some axions may be activated, via the misalignment mechanism, if their potential minimum changes between inflation and today. As a particular example, we study in detail how the maximal-misalignment mechanism dilutes the effect of dangerous axions and allows the production of axion DM in a controlled way. In this case, the potential of the axion that realises the mechanism shifts by Δθ = π between the inflationary epoch and today, and the axion starts to oscillate from the top of its potential. We also show that axions with masses m_a ~ O(1–100) H_0 realising the maximal-misalignment mechanism generically behave as dark energy, with a decay constant that can take values well below the Planck scale, avoiding problems associated with super-Planckian scales. Finally, we briefly study the basic phenomenological implications of the mechanism and comment on the compatibility of this type of maximally-misaligned quintessence with the swampland criteria.
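For reference, the misalignment dynamics behind this discussion are governed by the standard equation of motion for the axion angle (textbook background, not the paper's specific derivation):

```latex
\ddot{\theta} + 3H(t)\,\dot{\theta} + m_a^2 \sin\theta = 0,
\qquad V(\theta) = m_a^2 f_a^2 \left(1 - \cos\theta\right).
```

Oscillations about the minimum begin roughly when 3H ≈ m_a, and for small initial angle the relic abundance scales as Ω_a ∝ f_a² θ_i². The maximal-misalignment case corresponds to the initial condition θ_i = π, the field starting at the top of its potential, which delays the onset of oscillations relative to the small-angle case.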
|
Coves, A., Maestre, H., Archiles, R., Andres, M. V., & Gimeno, B. (2022). Surface-Impedance Formulation for Hollow-Core Waveguides Based on Subwavelength Gratings. IEEE Access, 10, 18843–18854.
Abstract: A rigorous Surface Impedance (SI) formulation for planar waveguides is presented. This modal technique splits the modal analysis of the waveguide into two steps: first, we obtain the characteristic equations of the modes as a function of the SI; second, we obtain the surface impedance values using either analytical or numerical methods. We validate the technique by comparison with well-known analytical cases: the parallel-plate waveguide with losses and the dielectric slab waveguide. Then, we analyze an optical hollow-core waveguide defined by two high-contrast subwavelength gratings, validating our results against reported values. Finally, we show the potential of our formulation with the analysis of a THz hollow-core waveguide defined by two surface-relief subwavelength gratings, including material losses in the formulation.
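The surface-impedance boundary condition underlying such formulations relates the tangential fields at the guiding walls (the standard Leontovich definition, given here as background):

```latex
\mathbf{E}_t = Z_s \,\hat{\mathbf{n}} \times \mathbf{H},
\qquad Z_s = (1+j)\sqrt{\frac{\omega\mu}{2\sigma}} = \frac{1+j}{\sigma\delta},
\qquad \delta = \sqrt{\frac{2}{\omega\mu\sigma}},
```

where n̂ is the normal into the wall. The closed form for Z_s holds for a homogeneous good conductor of conductivity σ; for structured boundaries such as subwavelength gratings, Z_s must instead be obtained analytically or numerically, which is precisely the second step of the two-step technique described in the abstract.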
|
Carrio, F. (2022). The Data Acquisition System for the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator. IEEE Trans. Nucl. Sci., 69(4), 687–695.
Abstract: The tile calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the large hadron collider (LHC). In 2025, the LHC will be upgraded, leading to the high luminosity LHC (HL-LHC). The HL-LHC will deliver an instantaneous luminosity up to seven times larger than the LHC nominal luminosity. The ATLAS Phase-II upgrade (2025-2027) will adapt the subdetectors to the HL-LHC requirements. As part of this upgrade, the majority of the TileCal on-detector and off-detector electronics will be replaced using a new readout strategy, where the on-detector electronics will digitize the detector data and transmit it to the off-detector electronics at the bunch crossing frequency (40 MHz). In the counting rooms, the off-detector electronics will compute reconstructed trigger objects for the first-level trigger and will store the digitized samples in pipelined buffers until the reception of a trigger acceptance signal. The off-detector electronics will also distribute the LHC clock to the on-detector electronics, embedded within the digital data stream. The TileCal Phase-II upgrade project has undertaken an extensive research and development program that includes the development of a Demonstrator module to evaluate the performance of the new clock and readout architecture envisaged for the HL-LHC. The Demonstrator module, equipped with the latest version of the on-detector electronics, was built and inserted into the ATLAS experiment. The Demonstrator module is operated and read out using a Tile PreProcessor (TilePPr) Demonstrator, which enables backward compatibility with the present ATLAS Trigger and Data AcQuisition (TDAQ) and the timing, trigger, and command (TTC) systems. This article describes in detail the main hardware and firmware components of the clock distribution and data acquisition systems for the Demonstrator module, focusing on the TilePPr Demonstrator.
|
Roser, J., Barrientos, L., Bernabeu, J., Borja-Lloret, M., Muñoz, E., Ros, A., et al. (2022). Joint image reconstruction algorithm in Compton cameras. Phys. Med. Biol., 67(15), 155009–15pp.
Abstract: Objective. To demonstrate the benefits of using a joint image reconstruction algorithm, based on List-Mode Maximum Likelihood Expectation Maximization, that combines events measured in different channels of information of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm performance. Main results. The obtained joint images present improved image quality and yield better estimates of displacements of high-energy gamma-ray emitting sources. The algorithm also provides images that are more stable than any individual channel against the noisy convergence that characterizes Maximum Likelihood based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It also has high versatility, as it can be easily adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
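At the heart of such reconstructions is the list-mode MLEM update, which multiplies each image voxel by backprojected per-event ratios. A minimal single-channel sketch (the paper's joint multi-channel weighting is not reproduced here, and the dict-based system-matrix representation is an implementation assumption):

```python
def lm_mlem(events, n_voxels, sensitivity, n_iter=10):
    """List-mode MLEM image reconstruction.

    events:      list of measured events, each a dict {voxel_index: a_ij}
                 holding the non-zero system-matrix elements (e.g. the
                 cone-voxel intersection weights of a Compton event).
    sensitivity: per-voxel sensitivity s_j (detection probability).
    Returns the reconstructed voxel intensities lambda_j.
    """
    lam = [1.0] * n_voxels                 # uniform initial image
    for _ in range(n_iter):
        back = [0.0] * n_voxels
        for event in events:
            # forward projection of the current image along this event
            fp = sum(a * lam[j] for j, a in event.items())
            if fp > 0:
                for j, a in event.items():
                    back[j] += a / fp      # backproject the event ratio
        # multiplicative update, normalized by the sensitivity
        lam = [lam[j] * back[j] / sensitivity[j] if sensitivity[j] > 0 else 0.0
               for j in range(n_voxels)]
    return lam
```

A joint reconstruction in this spirit would pool events from several channels (two- and three-interaction data) into one list, each with its own system-matrix model, so that all channels drive a single image update.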
|