Robert, C., Dedes, G., Battistoni, G., Bohlen, T. T., Buvat, I., Cerutti, F., et al. (2013). Distributions of secondary particles in proton and carbon-ion therapy: a comparison between GATE/Geant4 and FLUKA Monte Carlo codes. Phys. Med. Biol., 58(9), 2879–2899.
Abstract: Monte Carlo simulations play a crucial role in in-vivo treatment monitoring based on PET and prompt-gamma imaging in proton and carbon-ion therapy. The accuracy of the nuclear fragmentation models implemented in these codes may affect the quality of the treatment verification. In this paper, we investigate the nuclear models implemented in GATE/Geant4 and FLUKA by comparing the angular and energy distributions of secondary particles exiting a homogeneous PMMA target. The comparison was restricted to fragmentation of O-16 and C-12. Despite the very simple target and set-up, substantial discrepancies were observed between the two codes. For instance, the number of high-energy (>1 MeV) prompt gammas exiting the target was about twice as large with GATE/Geant4 as with FLUKA, for both proton and carbon-ion beams. Such differences were not observed for the predicted annihilation-photon production yields, for which GATE-to-FLUKA ratios of 1.09 and 1.20 were obtained for the proton beam and the carbon-ion beam, respectively. For neutrons and protons, discrepancies ranging from 14% (exiting protons, carbon-ion beam) to 57% (exiting neutrons, proton beam) were identified in the production yields, as well as in the neutron energy spectra.
Solevi, P., Magrin, G., Moro, D., & Mayer, R. (2015). Monte Carlo study of microdosimetric diamond detectors. Phys. Med. Biol., 60(18), 7069–7083.
Abstract: Ion-beam therapy provides high dose conformity and increased radiobiological effectiveness with respect to conventional radiation therapy. Strict constraints on the maximum uncertainty of the biologically weighted dose, and consequently on the biological weighting factor, require determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However, the experimental determination of radiation quality, in particular for an internal target, is not simple, and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently, chemical vapor deposition (CVD) diamond detectors have been suggested as microdosimeters for ion-beam therapy. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However, the sensitive volume of solid-state detectors deviates significantly from that of conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a thorough study of the secondary-to-primary radiation ratio, of wall effects, and of the impact of the orientation of the detector with respect to the radiation field. The present work studies, through Monte Carlo simulations, the impact of the detector geometry on the determination of radiation-quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy transfer L and the lineal energy y is investigated for different detector cross sections, varying the particle type (carbon ions and protons) and its energy.
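The microdosimetric quantities mentioned in this abstract can be made concrete with the Cauchy relation for the mean chord length of a convex body in an isotropic field, l̄ = 4V/S. A minimal sketch of the lineal energy for a flat, cylinder-shaped sensitive volume (the dimensions and the 10 keV deposit are illustrative choices, not taken from the paper):

```python
import math

def mean_chord_cylinder(radius_um, height_um):
    """Cauchy mean chord length 4V/S of a right cylinder exposed to an
    isotropic field; algebraically this reduces to 2*r*h/(r + h)."""
    volume = math.pi * radius_um ** 2 * height_um
    surface = 2.0 * math.pi * radius_um * (radius_um + height_um)
    return 4.0 * volume / surface

def lineal_energy(energy_keV, mean_chord_um):
    """Lineal energy y: energy imparted in a single event divided by
    the mean chord length of the sensitive volume (keV/um)."""
    return energy_keV / mean_chord_um

# a flat solid-state sensitive volume with diameter ~ 1000x its height
# (illustrative numbers, not the paper's detector)
l_flat = mean_chord_cylinder(radius_um=1000.0, height_um=2.0)
print(f"mean chord: {l_flat:.3f} um")
print(f"y for a 10 keV deposit: {lineal_energy(10.0, l_flat):.2f} keV/um")
```

Note how the extreme aspect ratio makes l̄ approach twice the height rather than the diameter, which is why the notion of sensitive thickness needs re-examining for such detectors.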
Gillam, J. E., Solevi, P., Oliver, J. F., & Rafecas, M. (2013). Simulated one-pass list-mode: an approach to on-the-fly system matrix calculation. Phys. Med. Biol., 58(7), 2377–2394.
Abstract: In the development of prototype systems for positron emission tomography, a valid and robust image reconstruction algorithm is required. However, prototypes often employ novel detector and system geometries which may change rapidly under optimization. In addition, developing systems generally produce highly granular, or possibly continuous, detection domains which require some level of on-the-fly calculation to retain measurement precision. In this investigation, a new method of on-the-fly system matrix calculation is proposed that offers advantages for such list-mode systems in terms of flexibility in system modeling. The new method is easily adaptable to complicated system geometries and to the available computational resources. Detection-uncertainty models are used as random number generators to produce, at image reconstruction time, ensembles of possible photon trajectories for each datum in the measurement list. A consequence of this approach is that the system matrix elements change at each iteration in a non-repetitive manner. The resulting algorithm is termed simulated one-pass list-mode (SOPL): the list is generated, and traversed, during image reconstruction. Because SOPL alters the system matrix in use at each iteration, its behavior within the maximum-likelihood expectation-maximization (MLEM) algorithm was investigated. A two-pixel system and a small two-dimensional imaging model are used to illustrate the process and quantify aspects of the algorithm. The two-dimensional imaging system showed that, while incurring a penalty in image resolution compared to a non-random equal-computation counterpart, SOPL provides much enhanced noise properties. In addition, enhancing the system matrix quality is straightforward (by increasing the number of samples in the ensemble), so that the resolution penalty can be recovered when desired while retaining the improved noise properties.
Finally, the approach is tested and validated against a standard (highly accurate) system matrix using experimental data from a prototype system, the AX-PET.
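The SOPL idea, re-sampling a detection-uncertainty model to build system-matrix rows on the fly so that the matrix differs at every MLEM iteration, can be sketched in a toy 1D list-mode setting. Everything here is an illustrative assumption (Gaussian uncertainty, uniform sensitivity, dimensions), not the AX-PET model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_row(event_pos, n_pix, sigma, n_samples):
    """Monte Carlo system-matrix row for one list-mode event: draw an
    ensemble of possible true positions from the detection-uncertainty
    model and histogram it over the image pixels."""
    samples = rng.normal(event_pos, sigma, n_samples)
    row, _ = np.histogram(samples, bins=n_pix, range=(0, n_pix))
    return row / n_samples

def sopl_mlem(events, n_pix, sigma, n_iter=20, n_samples=50):
    """List-mode MLEM in which every row is re-sampled at each
    iteration, so the system matrix changes non-repetitively."""
    img = np.ones(n_pix)
    for _ in range(n_iter):
        update = np.zeros(n_pix)
        for e in events:
            a = sampled_row(e, n_pix, sigma, n_samples)
            fwd = a @ img                 # forward projection for this event
            if fwd > 0:
                update += a / fwd         # backproject the data/model ratio
        img *= update / len(events)       # uniform sensitivity assumed
    return img

# toy 1D example: 200 events from a point-like source near pixel 12
events = rng.normal(12.5, 1.0, 200)
img = sopl_mlem(events, n_pix=32, sigma=1.0)
print("peak pixel:", int(np.argmax(img)))
```

Increasing `n_samples` sharpens each row toward its expectation, which mirrors the paper's observation that the resolution penalty can be traded away at extra computational cost.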
Solevi, P., Muñoz, E., Solaz, C., Trovato, M., Dendooven, P., Gillam, J. E., et al. (2016). Performance of MACACO Compton telescope for ion-beam therapy monitoring: first test with proton beams. Phys. Med. Biol., 61(14), 5149–5165.
Abstract: In order to exploit the advantages of ion-beam therapy in a clinical setting, delivery verification techniques are necessary to detect deviations from the planned treatment. Efforts are currently oriented towards the development of devices for real-time range monitoring. Among the different detector concepts proposed, Compton cameras detect prompt gammas and represent a valid candidate for real-time range verification. We present the first on-beam test of MACACO, a Compton telescope (multi-layer Compton camera) based on lanthanum bromide crystals and silicon photomultipliers. The Compton telescope was first characterized through measurements and Monte Carlo simulations. The detector linearity was measured employing Na-22 and Am-Be sources, yielding about 10% deviation from linearity at 3.44 MeV. A spectral image reconstruction algorithm was tested on synthetic data: point-like sources emitting gamma rays with energies between 2 and 7 MeV were reconstructed with 3-5 mm resolution. The two-layer Compton telescope was then employed to measure the radiation emitted from a beam of 150 MeV protons impinging on a cylindrical PMMA target. Bragg-peak shifts were produced by adjusting the PMMA target location, and the resulting measurements were used for image reconstruction. The reconstructed Bragg-peak profiles proved sufficient to observe peak-location differences within 10 mm, demonstrating the potential of the MACACO Compton telescope as a monitoring device for ion-beam therapy.
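The cone on which each prompt-gamma origin is constrained follows from standard Compton kinematics, cos θ = 1 − m_e c² (1/E′ − 1/E). A minimal sketch assuming the total photon energy is known (the 4.44 MeV line and the 1.0 MeV deposit are illustrative choices, not measured values from the paper):

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_angle_deg(e_incident_MeV, e_deposited_MeV):
    """Opening angle of the Compton cone, from the energy deposited in
    the scatterer when the total photon energy is known."""
    if not 0.0 < e_deposited_MeV < e_incident_MeV:
        raise ValueError("deposit must be positive and below the total energy")
    e_scattered = e_incident_MeV - e_deposited_MeV
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident_MeV)
    if cos_theta < -1.0:
        raise ValueError("kinematically forbidden energy combination")
    return math.degrees(math.acos(cos_theta))

# e.g. a 4.44 MeV prompt gamma (C-12 de-excitation line) depositing
# 1.0 MeV in the first detector layer
print(f"cone half-angle: {compton_angle_deg(4.44, 1.0):.1f} deg")
```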
Hueso-Gonzalez, F., Vijande, J., Ballester, F., Perez-Calatayud, J., & Siebert, F. A. (2015). A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy. Phys. Med. Biol., 60(14), 5455–5469.
Abstract: In low-energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between the treatment plan and the delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue, compare it with Monte Carlo computations, and assess its suitability for integration in clinical treatment planning systems. The algorithm, named RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter in addition having a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets, improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable to any type of heterogeneity.
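The classic equivalent path length idea on which RayStretch builds can be sketched as follows. This is a generic EPL illustration, not the authors' implementation: the segment scaling factors and the toy radial dose function are invented placeholders, and a real TG-43 calculation would use the tabulated consensus g(r) and anisotropy data.

```python
import math

def equivalent_path_length(segments):
    """Water-equivalent depth of a source-to-point ray: sum of the
    geometric segment lengths (cm), each scaled by that material's
    relative attenuation with respect to water."""
    return sum(length * rel for length, rel in segments)

def dose_rate_tg43_like(sk, lam, r_cm, r_eff_cm, g):
    """Point-source TG-43-style dose rate: inverse square at the true
    geometric distance, radial dose function evaluated at the
    stretched (water-equivalent) depth."""
    r0 = 1.0  # TG-43 reference distance, cm
    return sk * lam * (r0 / r_cm) ** 2 * g(r_eff_cm)

# ray crossing 1.5 cm of soft tissue (~water) and 0.5 cm of
# calcification with a relative attenuation of 2.0 (invented value)
segments = [(1.5, 1.0), (0.5, 2.0)]
r_eff = equivalent_path_length(segments)      # 2.5 cm water-equivalent
g = lambda r: math.exp(-0.5 * (r - 1.0))      # toy radial dose function
print(f"dose scale at 2 cm: {dose_rate_tg43_like(1.0, 1.0, 2.0, r_eff, g):.4f}")
```

The "stretch" is exactly the substitution of r_eff for r inside the radial dose function while the inverse-square term keeps the true geometric distance.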
Cabello, J., & Rafecas, M. (2012). Comparison of basis functions for 3D PET reconstruction using a Monte Carlo system matrix. Phys. Med. Biol., 57(7), 1759–1777.
Abstract: In emission tomography, iterative statistical methods are accepted as the reconstruction algorithms that achieve the best image quality. The accuracy of these methods relies partly on the quality of the system response matrix (SRM) that characterizes the scanner. The more physical phenomena included in the SRM, the higher its quality, and therefore the higher the image quality obtained from the reconstruction process. High-resolution small-animal scanners contain as many as 10^3-10^4 small crystal pairs, while the field of view (FOV) is divided into hundreds of thousands of small voxels. These two characteristics have a significant impact on the number of elements to be calculated in the SRM. Monte Carlo (MC) methods have gained popularity as a way of calculating the SRM due to the increased accuracy achievable, at the cost of introducing some statistical noise and long simulation times. In the work presented here, the SRM is calculated using MC methods exploiting the cylindrical symmetries of the scanner, significantly reducing both the simulation time necessary to calculate an SRM of high statistical quality and the required storage space. The use of cylindrical symmetries makes polar voxels a convenient basis function. Alternatively, spherically symmetric basis functions offer improved noise properties compared to cubic and polar basis functions. The quality of images reconstructed using polar voxels, spherically symmetric basis functions on a polar grid, cubic voxels, and post-reconstruction-filtered polar and cubic voxels is compared from a noise and spatial-resolution perspective. This study demonstrates that polar voxels perform as well as cubic voxels, reducing the simulation time necessary to calculate the SRM and the disk space necessary to store it.
Results showed that spherically symmetric functions outperform polar and cubic basis functions in terms of noise properties, at the cost of slightly degraded spatial resolution, a larger SRM file size, and longer reconstruction times. However, we demonstrate that post-reconstruction smoothing, usually applied in emission imaging to reduce the noise level, can produce a spatial-resolution degradation of ~50%, whereas spherically symmetric basis functions produce a degradation of only ~6% compared to polar and cubic voxels at the same noise level. The image-quality trade-off obtained with blobs is therefore more favorable than that obtained with cubic or polar voxels.
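The spherically symmetric basis functions ("blobs") compared above are conventionally Kaiser-Bessel windows. A minimal sketch of the radial profile, using order m = 0 so that numpy's built-in I0 suffices (published blob reconstructions typically use m = 2 with tuned support radius a and shape parameter alpha; those values here are illustrative):

```python
import numpy as np

def kb_blob(r, a=2.0, alpha=10.4):
    """Kaiser-Bessel 'blob' of order m = 0: spherically symmetric,
    smooth, and with compact support of radius a (in grid units)."""
    r = np.asarray(r, dtype=float)
    u = np.clip(1.0 - (r / a) ** 2, 0.0, None)
    return np.where(r <= a, np.i0(alpha * np.sqrt(u)) / np.i0(alpha), 0.0)

# the profile falls smoothly from 1 at the centre to ~0 at the
# support edge, and is identically zero beyond it
radii = np.array([0.0, 1.0, 2.0, 3.0])
print(np.round(kb_blob(radii), 4))
```

The smooth, bounded support is what gives blobs their favorable noise behavior relative to sharp-edged cubic or polar voxels, at the price of overlapping contributions that enlarge the SRM.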
Muñoz, E., Barrio, J., Bernabeu, J., Etxebeste, A., Lacasta, C., Llosa, G., et al. (2018). Study and comparison of different sensitivity models for a two-plane Compton camera. Phys. Med. Biol., 63(13), 135004 (19pp).
Abstract: Given the strong variations in the sensitivity of Compton cameras to events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage of the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain expressions for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended activity distributions, and also from experimental data measured with Na-22 sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model makes it possible to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage of the scatterer.
Cabello, J., Etxebeste, A., Llosa, G., & Ziegler, S. I. (2015). Simulation study of PET detector limitations using continuous crystals. Phys. Med. Biol., 60(9), 3673–3694.
Abstract: Continuous crystals can potentially achieve better intrinsic detector spatial resolution than pixelated crystals, additionally providing depth-of-interaction (DoI) information from the light distribution. Achieving high performance requires sophisticated interaction-position estimation algorithms. There are a number of algorithms in the literature applied to different crystal dimensions and different photodetectors. However, the crystal properties and photodetector-array geometries have an impact on algorithm performance. In this work we analysed, through Monte Carlo simulations, different combinations of realistic crystal and photodetector parameters to better understand their influence on the interaction-position estimation accuracy, with special emphasis on the DoI. We used an interaction-position estimation based on an analytical model. Different photodetector granulation schemes were investigated. The impact of the number of crystal faces read out by photodetectors was studied by simulating scenarios with one and two photodetectors. In addition, crystals with different levels of reflection and aspect ratios (AR) were analysed. Results showed that the impact of photodetector granularity appears mainly near the edges and especially in the corners of the crystal. The resulting intrinsic spatial resolution near the centre of a 12 x 12 x 10 mm^3 LYSO crystal was 0.7-0.9 mm, while the average spatial resolution over the entire crystal was 0.77 ± 0.18 mm for all the simulated geometries with one and two photodetectors. Having front and back photodetectors reduced the DoI bias (Euclidean distance between estimated and real DoI) and improved the transversal resolution near the corners. In scenarios with one photodetector, a small AR resulted in DoI inaccuracies for events absorbed at the entrance of the crystal.
These inaccuracies were slightly reduced either by increasing the AR or by reducing the amount of reflected light, and were largely mitigated by using two photodetectors. Using one photodetector, we obtained a piecewise DoI error model with a DoI resolution of 0.4-0.9 mm for a crystal with an AR of 1.2, and we observed that including a second photodetector or reducing the amount of reflections reduced the DoI bias but did not significantly improve the DoI resolution. Translating the piecewise DoI error model obtained in this study to image reconstruction, we obtained a spatial-resolution variability of 0.39 mm over 85% of the FoV, compared to 2.59 mm and 1.87 mm without DoI correction or with a dual-layer system, respectively.
Muñoz, E., Barrio, J., Etxebeste, A., Ortega, P. G., Lacasta, C., Oliver, J. F., et al. (2017). Performance evaluation of MACACO: a multilayer Compton camera. Phys. Med. Biol., 62(18), 7321–7341.
Abstract: Compton imaging devices have been proposed and studied for a wide range of applications. We have developed a Compton camera prototype, to be used for proton range verification in hadron therapy, which can be operated with two or three detector layers based on monolithic lanthanum bromide (LaBr3) crystals coupled to silicon photomultipliers (SiPMs). In this work, we present the results obtained with our prototype in laboratory tests with radioactive sources and in simulation studies. Images of Na-22 and Y-88 radioactive sources have been successfully reconstructed. The full width at half maximum of the reconstructed images is below 4 mm for a Na-22 source at a distance of 5 cm.
Ortega, P. G., Torres-Espallardo, I., Cerutti, F., Ferrari, A., Gillam, J. E., Lacasta, C., et al. (2015). Noise evaluation of Compton camera imaging for proton therapy. Phys. Med. Biol., 60(5), 1845–1863.
Abstract: Compton cameras have emerged as an alternative for real-time dose-monitoring techniques in particle therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option may not be adequate, as the photon is in general not absorbed; however, the second option is less efficient. This is the reason to resort to spectral reconstructions, where the incoming gamma energy is treated as a variable in the reconstruction inverse problem. Along with prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. In addition, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e. events that combine measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam-particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range is then estimated from the image reconstructed with a two- and three-event algorithm based on maximum-likelihood expectation maximization. The neutron background and the random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations.
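The two routes to the Compton cone mentioned above differ in how the photon energy is obtained: with two interactions ending in photoabsorption the energy is simply the deposited sum, while with three interactions the geometric scattering angle at the second interaction fixes the photon energy after the first scatter, so full absorption is not required. A sketch of the three-event estimate, checked against forward Compton kinematics (the 4.44 MeV photon and the deposits are illustrative values):

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def incident_energy_three_event(dE1, dE2, cos_theta2):
    """Incoming photon energy from a three-interaction Compton event:
    solving the Compton relation at the second interaction for the
    post-first-scatter energy gives a quadratic whose positive root is
    dE2/2 + sqrt(dE2^2/4 + dE2*me*c^2/(1 - cos(theta2)))."""
    e_after_first = dE2 / 2.0 + math.sqrt(
        dE2 ** 2 / 4.0 + dE2 * ME_C2 / (1.0 - cos_theta2)
    )
    return dE1 + e_after_first

# consistency check: a 4.44 MeV photon depositing 1.0 MeV, then 1.2 MeV
e1, dE1, dE2 = 4.44, 1.0, 1.2
e2 = e1 - dE1                                   # energy after first scatter
e3 = e2 - dE2                                   # energy after second scatter
cos_t2 = 1.0 - ME_C2 * (1.0 / e3 - 1.0 / e2)    # angle at second interaction
print(f"recovered: {incident_energy_three_event(dE1, dE2, cos_t2):.2f} MeV")
```

Random coincidences are damaging precisely because they feed this formula energies and angles from unrelated particles, producing cones that intersect the image volume far from any true emission point.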