Valdes-Cortez, C., Mansour, I., Rivard, M. J., Ballester, F., Mainegra-Hing, E., Thomson, R. M., et al. (2021). A study of Type B uncertainties associated with the photoelectric effect in low-energy Monte Carlo simulations. Phys. Med. Biol., 66(10), 105014–14pp.
Abstract: Purpose. To estimate Type B uncertainties in absorbed-dose calculations arising from the different implementations in current state-of-the-art Monte Carlo (MC) codes of low-energy photon cross-sections (<200 keV). Methods. MC simulations are carried out using three codes widely used in the low-energy domain: PENELOPE-2018, EGSnrc, and MCNP. Three dosimetry-relevant quantities are considered: mass energy-absorption coefficients for water, air, graphite, and their respective ratios; absorbed dose; and photon-fluence spectra. The absorbed dose and the photon-fluence spectra are scored in a spherical water phantom of 15 cm radius. Benchmark simulations using similar cross-sections have been performed. The differences observed between these quantities when different cross-sections are considered are taken to be a good estimator for the corresponding Type B uncertainties. Results. A conservative Type B uncertainty for the absorbed dose (k = 2) of 1.2%-1.7% (<50 keV), 0.6%-1.2% (50-100 keV), and 0.3% (100-200 keV) is estimated. The photon-fluence spectrum does not present clinically relevant differences that merit considering additional Type B uncertainties except for energies below 25 keV, where a Type B uncertainty of 0.5% is obtained. Below 30 keV, mass energy-absorption coefficients show Type B uncertainties (k = 2) of about 1.5% (water and air) and 2% (graphite), diminishing in all materials for larger energies and reaching values of about 1% (40-50 keV) and 0.5% (50-75 keV). With respect to their ratios, the only significant Type B uncertainties are observed in the case of the water-to-graphite ratio for energies below 30 keV, being about 0.7% (k = 2). Conclusions. In contrast with the intermediate (about 500 keV) and high (about 1 MeV) energy domains, Type B uncertainties due to the different cross-section implementations cannot be considered subdominant with respect to Type A uncertainties, or even to other sources of Type B uncertainty (tally volume averaging, manufacturing tolerances, etc.). Therefore, the values reported here should be accommodated within the uncertainty budget in low-energy photon dosimetry studies.
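The role such cross-section uncertainties play in an overall budget can be illustrated with a minimal sketch, assuming the standard GUM-style quadrature combination of standard (k = 1) uncertainties; the component values below are illustrative, not taken from the paper:

```python
import math

def combined_standard_uncertainty(type_a, type_b_components):
    """Combine a Type A standard uncertainty with a list of Type B
    standard uncertainties (all at k = 1) in quadrature."""
    return math.sqrt(type_a**2 + sum(u**2 for u in type_b_components))

# Illustrative numbers only: a 0.2% Type A (MC statistics) combined with a
# 1.5% (k = 2, i.e. 0.75% at k = 1) cross-section Type B component and a
# hypothetical 0.5% (k = 1) tally-volume-averaging component.
u_c = combined_standard_uncertainty(0.2, [0.75, 0.5])
expanded = 2.0 * u_c  # expanded uncertainty at coverage factor k = 2
```

Note that the k = 2 values quoted in the abstract must be halved before entering the quadrature sum, then the combined result re-expanded.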
|
Aguiar, P., Rafecas, M., Ortuño, J. E., Kontaxakis, G., Santos, A., Pavia, J., et al. (2010). Geometrical and Monte Carlo projectors in 3D PET reconstruction. Med. Phys., 37(11), 5691–5702.
Abstract: Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of each image voxel to the line of response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under consideration involves an extensive model of the system response matrix based on Monte Carlo simulations, which is computed off-line and stored on disk. Methods: Comparisons were performed using simulated and experimental data from the commercial small-animal PET scanner rPET. Results: The results demonstrate that the orthogonal distance-based ray-tracer and the Siddon ray-tracer using PSF image-space convolutions yield better images in terms of contrast and spatial resolution than those obtained with the conventional method and the multiray-based S-RT. Furthermore, the Monte Carlo-based method yields slight improvements in terms of contrast and spatial resolution with respect to these geometrical projectors. Conclusions: The orthogonal distance-based ray-tracer and the Siddon ray-tracer using PSF image-space convolutions represent satisfactory alternatives to factorizing the system matrix or to the conventional on-the-fly ray-tracing methods for list-mode reconstruction, where extensive modeling based on Monte Carlo simulations is unfeasible.
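The orthogonal distance on which an OD-RT style projector is built is straightforward to compute; the following NumPy sketch is illustrative, with the Gaussian PSF kernel and its sigma being assumptions rather than the paper's model:

```python
import numpy as np

def orthogonal_distance(voxel_center, lor_p1, lor_p2):
    """Perpendicular distance from a voxel center to the line of
    response (LOR) through detector points p1 and p2 -- the geometric
    quantity underlying an OD-RT style projector."""
    p1, p2, c = map(np.asarray, (lor_p1, lor_p2, voxel_center))
    u = (p2 - p1) / np.linalg.norm(p2 - p1)     # unit direction of the LOR
    return np.linalg.norm(np.cross(c - p1, u))  # |(c - p1) x u|

# A voxel weight can then be taken from a PSF kernel of that distance,
# e.g. a Gaussian (sigma is a hypothetical resolution parameter):
def gaussian_weight(d, sigma=1.0):
    return np.exp(-0.5 * (d / sigma) ** 2)
```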
|
Baran, J., et al., & Brzezinski, K. (2024). Feasibility of the J-PET to monitor the range of therapeutic proton beams. Phys. Medica, 118, 103301–9pp.
Abstract: Purpose: The aim of this work is to investigate the feasibility of the Jagiellonian Positron Emission Tomography (J-PET) scanner for intra-treatment proton beam range monitoring. Methods: Monte Carlo simulation studies with GATE and PET image reconstruction with CASToR were performed in order to compare six J-PET scanner geometries. We simulated proton irradiation of a PMMA phantom with a Single Pencil Beam (SPB) and Spread-Out Bragg Peak (SOBP) of various ranges. The sensitivity and precision of each scanner were calculated, and considering the setup's cost-effectiveness, we indicated potentially optimal geometries for a J-PET scanner prototype dedicated to proton beam range assessment. Results: The investigations indicate that the double-layer cylindrical and triple-layer double-head configurations are the most promising for clinical application. We found that the scanner sensitivity is of the order of 10^-5 coincidences per primary proton, while the precision of the range assessment for both SPB and SOBP irradiation plans was below 1 mm. Among the scanners with the same number of detector modules, the best results are found for the triple-layer dual-head geometry. Conclusions: We performed simulation studies demonstrating the feasibility of the J-PET detector for PET-based proton beam therapy range monitoring, with sensitivity and precision sufficient to enable pre-clinical tests in a clinical proton therapy environment. Considering sensitivity, precision and cost-effectiveness, the double-layer cylindrical and triple-layer dual-head J-PET geometry configurations seem promising for future clinical application.
Keywords: PET; Range monitoring; J-PET; Monte Carlo simulations; Proton radiotherapy
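As an illustration of the kind of range observable such a monitor provides, a minimal sketch of extracting the distal 50% falloff position from a 1D activity-depth profile might look as follows; the function name and the linear-interpolation scheme are assumptions, not part of the J-PET analysis chain:

```python
import numpy as np

def distal_falloff_position(depth, activity, level=0.5):
    """Depth at which the activity profile drops to `level` x its maximum
    on the distal (far) side -- a common surrogate for beam range in
    PET-based monitoring. Uses linear interpolation between samples."""
    depth = np.asarray(depth, float)
    activity = np.asarray(activity, float)
    i_max = int(np.argmax(activity))
    target = level * activity[i_max]
    for i in range(i_max, len(activity) - 1):
        if activity[i] >= target > activity[i + 1]:
            # interpolate between samples i and i+1
            f = (activity[i] - target) / (activity[i] - activity[i + 1])
            return depth[i] + f * (depth[i + 1] - depth[i])
    return depth[-1]  # profile never fell below target within the array
```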
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., et al. (2018). Comparison between simulated and observed LHC beam backgrounds in the ATLAS experiment at E-beam = 4 TeV. J. Instrum., 13, P12006–41pp.
Abstract: Results of dedicated Monte Carlo simulations of beam-induced background (BIB) in the ATLAS experiment at the Large Hadron Collider (LHC) are presented and compared with data recorded in 2012. During normal physics operation this background arises mainly from scattering of the 4 TeV protons on residual gas in the beam pipe. Methods of reconstructing the BIB signals in the ATLAS detector, developed and implemented in the simulation chain based on the FLUKA Monte Carlo simulation package, are described. The interaction rates are determined from the residual gas pressure distribution in the LHC ring in order to set an absolute scale on the predicted rates of BIB so that they can be compared quantitatively with data. Through these comparisons the origins of the BIB leading to different observables in the ATLAS detectors are analysed. The level of agreement between simulation results and BIB measurements by ATLAS in 2012 demonstrates that a good understanding of the origin of BIB has been reached.
|
Etxebeste, A., Barrio, J., Bernabeu, J., Lacasta, C., Llosa, G., Muñoz, E., et al. (2019). Study of sensitivity and resolution for full ring PET prototypes based on continuous crystals and analytical modeling of the light distribution. Phys. Med. Biol., 64(3), 035015–17pp.
Abstract: Sensitivity and spatial resolution are the main parameters to maximize in the performance of a PET scanner. For this purpose, detectors consisting of a combination of continuous crystals optically coupled to segmented photodetectors have been employed. With the use of continuous crystals the sensitivity is increased with respect to pixelated crystals. In addition, spatial resolution is no longer limited by the crystal size. The main drawback is the difficulty in determining the interaction position. In this work, we present the characterization of the performance of a full ring based on cuboid continuous crystals coupled to SiPMs. To this end, we have employed the simulations developed in a previous work for our experimental detector head. Sensitivity could be further enhanced by using tapered crystals. This enhancement is obtained by increasing the solid angle coverage, reducing the wedge-shaped gaps between contiguous detectors. The performance of the scanners based on both crystal geometries was characterized following the NEMA NU 4-2008 standardized protocol in order to compare them. An average sensitivity gain over the entire axial field of view of 13.63% has been obtained with the tapered geometry, while similar spatial resolution has been demonstrated with both scanners. The activity at which the NECR and true peaks occur is smaller, and the peak value greater, for tapered crystals than for cuboid crystals. Moreover, a higher degree of homogeneity was obtained in the sensitivity map due to the tighter packing of the crystals, which reduces the gaps and results in a better recovery of homogeneous regions than for the cuboid configuration. Some of the results obtained, such as spatial resolution, depend on the interaction position estimation and may vary if another method is employed.
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Modelling radiation damage to pixel sensors in the ATLAS detector. J. Instrum., 14, P06012–52pp.
Abstract: Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment at the LHC. Given their close proximity to the interaction point, these detectors will be exposed to an unprecedented amount of radiation over their lifetime. The current pixel detector will receive damage from non-ionizing radiation in excess of 10^15 1 MeV n_eq/cm^2, while the pixel detector designed for the high-luminosity LHC must cope with an order of magnitude larger fluence. This paper presents a digitization model incorporating the effects of radiation damage to the pixel sensors. The model is described in detail, and predictions for the charge collection efficiency and Lorentz angle are compared with collision data collected between 2015 and 2017 (<= 10^15 1 MeV n_eq/cm^2).
|
AGATA Collaboration (Akkoyun, S., et al.), Algora, A., Barrientos, D., Domingo-Pardo, C., Egea, F. J., Gadea, A., et al. (2012). AGATA-Advanced GAmma Tracking Array. Nucl. Instrum. Methods Phys. Res. A, 668, 26–58.
Abstract: The Advanced GAmma Tracking Array (AGATA) is a European project to develop and operate the next generation gamma-ray spectrometer. AGATA is based on the technique of gamma-ray energy tracking in electrically segmented high-purity germanium crystals. This technique requires the accurate determination of the energy, time and position of every interaction as a gamma ray deposits its energy within the detector volume. Reconstruction of the full interaction path results in a detector with very high efficiency and excellent spectral response. The realisation of gamma-ray tracking and AGATA is a result of many technical advances. These include the development of encapsulated highly segmented germanium detectors assembled in a triple cluster detector cryostat, an electronics system with fast digital sampling and a data acquisition system to process the data at a high rate. The crystals were fully characterised and the measurements compared with detector-response simulations. This enabled pulse-shape analysis algorithms to be employed to extract the energy, time and position of each interaction. In addition, tracking algorithms for event reconstruction were developed. The first phase of AGATA is now complete and operational in its first physics campaign. In the future, AGATA will be moved between laboratories in Europe and operated in a series of campaigns to take advantage of the different beams and facilities available, to maximise its science output. The paper reviews all the achievements made in the AGATA project, including all the necessary infrastructure to operate and support the spectrometer.
|
Pierre Auger Collaboration (Abreu, P., et al.), & Pastor, S. (2011). Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory. Nucl. Instrum. Methods Phys. Res. A, 635(1), 92–102.
Abstract: The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs “radio-hybrid” measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
Keywords: Cosmic rays; Radio detection; Analysis software; Detector simulation
|
Nzongani, U., Zylberman, J., Doncecchi, C. E., Perez, A., Debbasch, F., & Arnault, P. (2023). Quantum circuits for discrete-time quantum walks with position-dependent coin operator. Quantum Inf. Process., 22(7), 270–46pp.
Abstract: The aim of this paper is to build quantum circuits that implement discrete-time quantum walks having an arbitrary position-dependent coin operator. The position of the walker is encoded in base 2: with n wires, each corresponding to one qubit, we encode 2^n position states. The data necessary to define an arbitrary position-dependent coin operator is therefore exponential in n. Hence, the exponentiality will necessarily appear somewhere in our circuits. We first propose a circuit implementing the position-dependent coin operator that is naive, in the sense that it has exponential depth and implements sequentially all appropriate position-dependent coin operators. We then propose a circuit that “transfers” all the depth into ancillae, yielding a final depth that is linear in n at the cost of an exponential number of ancillae. The main idea of this linear-depth circuit is to implement in parallel all coin operators at the different positions. Reducing the depth exponentially at the cost of having an exponential number of ancillae is a goal which has already been achieved for the problem of loading classical data on a quantum circuit (Araujo in Sci Rep 11:6329, 2021) (notice that such a circuit can be used to load the initial state of the walker). Here, we achieve this goal for the problem of applying a position-dependent coin operator in a discrete-time quantum walk. Finally, we extend the result of Welch (New J Phys 16:033040, 2014) from position-dependent unitaries which are diagonal in the position basis to position-dependent 2 x 2-block-diagonal unitaries: indeed, we show that for a position dependence of the coin operator (the block-diagonal unitary) which is smooth enough, one can find an efficient quantum-circuit implementation approximating the coin operator up to an error epsilon (in terms of the spectral norm), the depth and size of which scale as O(1/epsilon). A typical application of the efficient implementation would be the quantum simulation of a relativistic spin-1/2 particle on a lattice, coupled to a smooth external gauge field; notice that recently, quantum spatial-search schemes have been developed which use gauge fields as the oracle, to mark the vertex to be found (Zylberman in Entropy 23:1441, 2021), (Fredon arXiv:2210.13920). A typical application of the linear-depth circuit would be when there is spatial noise on the coin operator (and hence a non-smooth dependence in the position).
Keywords: Quantum walks; Quantum circuits; Quantum simulation
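Independently of the circuit constructions studied in the paper, the walk itself is easy to simulate directly with a state vector, which clarifies what a position-dependent coin does; this NumPy sketch (with a uniform Hadamard coin chosen only as an example) is illustrative and is not the authors' circuit implementation:

```python
import numpy as np

def dtqw_step(psi, coins):
    """One step of a 1D discrete-time quantum walk with a position-dependent
    coin. psi has shape (N, 2): amplitudes at each of N positions for coin
    states |0> and |1>. coins has shape (N, 2, 2): one unitary per position.
    Shift: the |0> component moves left, |1> moves right (periodic boundary)."""
    psi = np.einsum('nij,nj->ni', coins, psi)  # apply each local coin
    out = np.zeros_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)  # |0> component shifts left
    out[:, 1] = np.roll(psi[:, 1], +1)  # |1> component shifts right
    return out

N = 8
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coins = np.broadcast_to(hadamard, (N, 2, 2)).copy()  # uniform coin here;
# any per-position unitaries could be substituted to make it position-dependent
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2, 0] = 1.0  # walker localized at the center
for _ in range(3):
    psi = dtqw_step(psi, coins)
```

Since each coin is unitary and the shift is a permutation, the total probability is conserved at every step.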
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provides information about the energy deposited in the detector. Classical digital filters have a good performance in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which result in signal pile-up. The performance of classical digital filters deteriorates in these conditions since the signal pulse shape gets distorted. In addition, experiments of this type produce a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the utilization of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high pile-up particle detectors. The performance of the new method has been studied using simulated data and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
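For context, the classical FIR baseline amounts to a weighted sum of the digitized samples; a minimal sketch with illustrative least-squares weights (not the actual TileCal coefficients, which also account for the noise autocorrelation) might look like:

```python
import numpy as np

def fir_amplitude(samples, weights):
    """Classical FIR-style amplitude estimate: a weighted sum of the
    digitized samples. In an optimal-filtering scheme the weights are
    derived from the known pulse shape and noise properties."""
    return float(np.dot(weights, samples))

# Toy pulse shape (unit amplitude) sampled at 7 points; these weights make
# the estimator exact for an undistorted pulse of this shape, but they
# degrade under pile-up distortion, which motivates the NN approach.
pulse = np.array([0.0, 0.1, 0.6, 1.0, 0.6, 0.1, 0.0])
weights = pulse / np.dot(pulse, pulse)  # least-squares weights for this shape
amp = fir_amplitude(3.5 * pulse, weights)  # a clean pulse of amplitude 3.5
```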
|