Aliaga, R. J. (2017). Real-Time Estimation of Zero Crossings of Sampled Signals for Timing Using Cubic Spline Interpolation. IEEE Trans. Nucl. Sci., 64(8), 2414–2422.
Abstract: A scheme is proposed for hardware estimation of the location of zero crossings of sampled signals with subsample resolution for timing applications, which consists of interpolating the signal with a cubic spline near the zero crossing and then finding the root of the resulting polynomial. An iterative algorithm based on the bisection method is presented that obtains one bit of the result per step and admits an efficient digital implementation using fixed-point representation. In particular, the root estimation iteration involves only two additions, and the initial values can be obtained from finite impulse response (FIR) filters with certain symmetry properties. It is shown that this allows online real-time estimation of timestamps in free-running sampling detector systems with improved accuracy with respect to the more common linear interpolation. The method is evaluated with simulations using ideal and real timing signals, and estimates are given for the resource usage and speed of its implementation.
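The root-finding iteration described above can be illustrated with a minimal Python sketch: plain bisection of a cubic polynomial on the unit interval, which yields one bit of the subsample crossing position per step. The sketch evaluates the cubic directly with Horner's rule; it does not reproduce the paper's optimized fixed-point update, which needs only two additions per step, and the coefficient names are illustrative.

```python
def cubic_root_bisect(c3, c2, c1, c0, n_bits=12):
    """Locate the zero crossing of p(t) = c3*t^3 + c2*t^2 + c1*t + c0
    on [0, 1) by bisection, producing one bit of the fractional
    position per step. Assumes p(0) and p(1) have opposite signs,
    i.e. a single zero crossing inside the interval."""
    p = lambda t: ((c3 * t + c2) * t + c1) * t + c0  # Horner evaluation
    lo, hi = 0.0, 1.0
    sign_lo = p(lo) < 0          # sign on the low side never changes
    result = 0                   # fixed-point accumulator, MSB first
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        result <<= 1
        if (p(mid) < 0) == sign_lo:  # crossing lies in the upper half
            lo = mid
            result |= 1
        else:
            hi = mid
    return result / (1 << n_bits)    # fractional crossing position
```

In a hardware implementation the result would stay in fixed-point form; the final division is only for readability here.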
Egea Canet, F. J., Gadea, A., Huyuk, T., et al. (2015). A New Front-End High-Resolution Sampling Board for the New-Generation Electronics of EXOGAM2 and NEDA Detectors. IEEE Trans. Nucl. Sci., 62(3), 1056–1062.
Abstract: This paper presents the final design and results of the FADC Mezzanine for the EXOGAM (EXOtic GAMma array spectrometer) and NEDA (Neutron Detector Array) detectors. The measurements performed include the effective number of bits, the energy resolution with HP-Ge detectors, timing histograms, and discrimination performance. Finally, the conclusion shows how a common digitizing device has been integrated into the experimental environments of two very different detectors, combining low-noise acquisition and fast sampling rates. Not only did the integration fulfill the expected specifications on both systems, but it also showed how a study of synergies between detectors can reduce resources and time by applying a common strategy.
Egea Canet, F. J., Gadea, A., Huyuk, T., et al. (2015). Digital Front-End Electronics for the Neutron Detector NEDA. IEEE Trans. Nucl. Sci., 62(3), 1063–1069.
Abstract: This paper presents the design of the NEDA (Neutron Detector Array) electronics, a first attempt at using digital electronics in large neutron detector arrays. Starting from the front-end modules attached to the PMTs (PhotoMultiplier Tubes) and ending with the data-processing workstations, a comprehensive electronic system capable of handling the acquisition and pre-processing of the neutron array is detailed. Among the electronic modules required, we emphasize the front-end analog processing, the digitization, the digital pre-processing and communications firmware, and the integration of the GTS (Global Trigger and Synchronization) system, already used successfully in AGATA (Advanced Gamma Tracking Array). The NEDA array will be available for measurements in 2016.
Egea, F. J., Gadea, A., Barrientos, D., Huyuk, T., et al. (2013). Design and Test of a High-Speed Flash ADC Mezzanine Card for High-Resolution and Timing Performance in Nuclear Structure Experiments. IEEE Trans. Nucl. Sci., 60(5), 3526–3531.
Abstract: This work describes new electronics for EXOGAM2 (an HP-Ge detector array) and NEDA (a BC501A-based neutron detector array). A new high-resolution digitizing card has been designed for gamma-ray and neutron spectroscopy experiments. The higher bandwidth of the NEDA signals, together with the need for accuracy, requires a high sampling rate in order to preserve the pulse shape for real-time Pulse Shape Analysis (PSA). PSA is of paramount importance for NEDA to discriminate between neutron and gamma-ray signals. High resolution and high speed are often difficult to achieve in a single electronic unit. These constraints, together with the need for new digitizing electronics to improve the performance and flexibility of signal analysis in nuclear physics experiments, led to the development of a new FADC mezzanine card. The design and development are described, including the characterization procedure and preliminary measurement results.
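For context on the pulse shape analysis mentioned above: a common neutron-gamma discrimination method for BC501A-type liquid scintillators is charge comparison, i.e. the ratio of the slow "tail" charge to the total charge. The sketch below illustrates that generic idea only, not the NEDA firmware; the gate offsets are illustrative values that would have to be tuned for a real detector.

```python
def tail_to_total(samples, peak_idx, tail_start=10, gate_end=60):
    """Charge-comparison pulse-shape discrimination: neutron pulses in a
    liquid scintillator carry relatively more charge in the slow decay
    component, so a larger tail/total ratio suggests a neutron.
    tail_start and gate_end are sample offsets after the pulse peak;
    the values here are illustrative, not tuned for any real detector."""
    total = sum(samples[peak_idx:peak_idx + gate_end])
    tail = sum(samples[peak_idx + tail_start:peak_idx + gate_end])
    return tail / total if total else 0.0
```

In practice a threshold on this ratio, or a cut in the ratio-versus-energy plane, separates the neutron and gamma populations.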
Cabello, J., Torres-Espallardo, I., Gillam, J. E., & Rafecas, M. (2013). PET Reconstruction From Truncated Projections Using Total-Variation Regularization for Hadron Therapy Monitoring. IEEE Trans. Nucl. Sci., 60(5), 3364–3372.
Abstract: Hadron therapy exploits the properties of ion beams to treat tumors by maximizing the dose delivered to the target while sparing healthy tissue. With hadron beams, the dose distribution shows a relatively low entrance dose that rises sharply at the end of the range, producing the characteristic Bragg peak, and drops quickly thereafter. Knowing where the delivered dose profile ends, i.e., the location of the Bragg peak, is of critical importance in order not to damage surrounding healthy tissue and to avoid underdosing the target. During hadron therapy, short-lived beta(+) emitters are produced along the beam path, their distribution being correlated with the delivered dose. Following positron annihilation, two photons are emitted, which can be detected with a positron emission tomography (PET) scanner. The low yield of emitters, their short half-life, and the washout from the target region make the use of PET, even only a few minutes after hadron irradiation, a challenging application. In-beam PET is a potential candidate for estimating the distribution of beta(+) emitters during or immediately after irradiation, at the cost of truncation effects and degraded image quality due to the partial-ring geometry required of the PET scanner. Time-of-flight (ToF) information can potentially compensate for truncation effects and enhance image contrast, but the highly demanding timing performance required for ToF-PET makes this option costly. Alternatively, maximum-a-posteriori expectation-maximization (MAP-EM), including total variation (TV) in the cost function, produces images with low noise while preserving spatial resolution. In this paper, we compare data reconstructed with maximum-likelihood expectation-maximization (ML-EM) and with MAP-EM using a TV prior, and study the impact of including ToF information, for data acquired with complete and partial-ring PET scanners from simulated hadron beams interacting with a polymethyl methacrylate (PMMA) target. The results show that MAP-EM without ToF information produces lower-noise images that are more similar to the simulated beta(+) distributions than those from ML-EM with ToF information on the order of 200–600 ps. The investigation is extended to the combination of MAP-EM and ToF information to study the performance limit of combining both approaches.
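The ML-EM reconstruction used as the baseline in this comparison follows the standard emission-tomography update, sketched below with a dense system matrix for illustration. Function and variable names are assumptions, and the paper's MAP-EM variant would additionally fold a TV penalty into this update.

```python
import numpy as np

def ml_em(A, y, n_iters=50):
    """Standard ML-EM update for emission tomography:
    lam_j <- (lam_j / s_j) * sum_i A_ij * y_i / (A @ lam)_i,
    where A is the system matrix (measurement bins x image voxels),
    y the measured counts, and s_j = sum_i A_ij the sensitivity."""
    lam = np.ones(A.shape[1])                 # uniform initial image
    sens = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(n_iters):
        proj = A @ lam                        # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated
        lam = lam / sens * (A.T @ ratio)      # multiplicative update
    return lam
```

The multiplicative form keeps the image nonnegative by construction, which is one reason EM-type updates are standard in PET.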
Oliver, J. F., Fuster-Garcia, E., Cabello, J., Tortajada, S., & Rafecas, M. (2013). Application of Artificial Neural Network for Reducing Random Coincidences in PET. IEEE Trans. Nucl. Sci., 60(5), 3399–3409.
Abstract: Positron Emission Tomography (PET) is based on the detection in coincidence of the two photons created in a positron annihilation. In conventional PET, this coincidence identification is usually carried out by a coincidence electronics unit. An accidental coincidence occurs when two photons arising from different annihilations are classified as a coincidence; accidental coincidences are one of the main sources of image degradation in PET. Some novel systems allow coincidences to be selected post-acquisition in software, or in real time through a digital coincidence engine in an FPGA. These approaches give the user extra flexibility in the sorting process and allow alternative coincidence sorting procedures to be applied. In this work, a novel sorting procedure based on Artificial Neural Network (ANN) techniques has been developed and compared to a conventional coincidence sorting algorithm based on a time coincidence window. The data were obtained from Monte Carlo simulations of a small-animal PET scanner modeled for this purpose. The efficiency (the ratio of correct identifications) can be selected for both methods: in the first case by changing the width of the coincidence window, and in the second by changing a threshold at the output of the neural network. At matched efficiencies, the ANN-based method always produces a sorted output with a smaller random fraction. In addition, two diverging trends are found: the conventional method presents a maximum achievable efficiency, while the ANN-based method can increase the efficiency up to unity, the ideal value, at the cost of an increased random fraction. Images reconstructed from ANN-sorted data (without compensation for randoms) present better contrast, and the image features most affected by randoms are enhanced. For the image quality phantom used in the paper, the ANN method decreases the spill-over ratio by 18%.
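The conventional baseline, a fixed time coincidence window, can be sketched as a greedy pairing over time-sorted single events. This is an illustrative stand-in, not the authors' coincidence engine; the ANN-based method replaces the fixed window with a learned accept/reject decision.

```python
def sort_coincidences(singles, window):
    """Conventional coincidence sorting with a fixed time window:
    greedily pair consecutive time-sorted singles whose timestamps
    differ by at most `window`. `singles` is a list of
    (timestamp, event_id) tuples; returns the list of paired ids."""
    singles = sorted(singles)
    pairs = []
    i = 0
    while i < len(singles) - 1:
        (t0, id0), (t1, id1) = singles[i], singles[i + 1]
        if t1 - t0 <= window:
            pairs.append((id0, id1))
            i += 2        # both singles consumed by this coincidence
        else:
            i += 1        # lone single: no partner inside the window
    return pairs
```

Widening `window` raises efficiency but also the random fraction, which is the trade-off the abstract describes.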
Barrientos, D., Gonzalez, V., Bellato, M., Gadea, A., Bazzacco, D., Blasco, J. M., et al. (2013). Multiple Register Synchronization With a High-Speed Serial Link Using the Aurora Protocol. IEEE Trans. Nucl. Sci., 60(5), 3521–3525.
Abstract: In this work, the development and characterization of an interface that synchronizes multiple registers over a high-speed serial link using the Aurora protocol are presented. A detailed description of the development process, the characterization methods, and the hardware test benches is also included. This interface will implement the slow-control buses of the digitizer cards for the second generation of electronics for the Advanced GAmma Tracking Array (AGATA).
Brown, J. M. C., Gillam, J. E., Paganin, D. M., & Dimmock, M. R. (2013). Laplacian Erosion: An Image Deblurring Technique for Multi-Plane Gamma-Cameras. IEEE Trans. Nucl. Sci., 60(5), 3333–3342.
Abstract: Laplacian Erosion, an image deblurring technique for multi-plane Gamma-cameras, has been developed and tested for planar imaging using a GEANT4 Monte Carlo model of the Pixelated Emission Detector for RadioisOtopes (PEDRO) as a test platform. A contrast phantom and a Derenzo-like phantom, both containing I-125, were employed to investigate how the detection-plane offset and the pinhole geometry affect the performance of Laplacian Erosion; three different pinhole geometries were tested. It was found that, for the test system, the performance of Laplacian Erosion was inversely proportional to the detection-plane offset and directly proportional to the pinhole diameter. All tested pinhole geometries saw a reduction in the level of image blurring associated with the pinhole geometry; however, this reduction came at the cost of the signal-to-noise ratio in the image. The application of Laplacian Erosion was shown to reduce the image blurring associated with the pinhole geometry and to improve recovered image quality in multi-plane Gamma-cameras for the targeted radiotracer I-125.
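The paper's Laplacian Erosion algorithm is not spelled out in the abstract, so it is not reproduced here. As background, a classical Laplacian-based sharpening step (subtracting a scaled discrete Laplacian to counteract blur) can be sketched as follows, with illustrative names and parameters.

```python
def laplacian_sharpen(img, strength=0.5):
    """Classical Laplacian sharpening: out = img - strength * lap(img),
    where lap is the 4-neighbour discrete Laplacian. Subtracting the
    Laplacian (negative at peaks) steepens edges blurred by the imaging
    system. Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            out[y][x] = img[y][x] - strength * lap
    return out
```

As in the paper's findings, any such high-pass correction trades reduced blur for amplified noise, since the Laplacian boosts high spatial frequencies.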
Dimmock, M. R., Nikulin, D. A., Gillam, J. E., & Nguyen, C. V. (2012). An OpenCL Implementation of Pinhole Image Reconstruction. IEEE Trans. Nucl. Sci., 59(4), 1738–1749.
Abstract: A C++/OpenCL software platform for emission image reconstruction of data from pinhole cameras has been developed. The software incorporates a new, accurate but computationally costly, probability distribution function for operating on list-mode data from detector stacks. The platform architecture is more general than in previous works, supporting advanced models such as arbitrary probability distributions, collimation geometries, and detector-stack geometries. The software was implemented such that all performance-critical operations occur on OpenCL devices, generally GPUs. The performance of the software is tested on several commodity CPU and GPU devices.
Carrio, F., Castillo Gimenez, V., Ferrer, A., Gonzalez, V., Higon-Rodriguez, E., Marin, C., et al. (2011). Optical Link Card Design for the Phase II Upgrade of TileCal Experiment. IEEE Trans. Nucl. Sci., 58(4), 1657–1663.
Abstract: This paper presents the design of an optical link card developed within the R&D activities for the phase 2 upgrade of the TileCal experiment. This board, which is part of the evaluation of different technologies for the final choice in the coming years, is designed as a mezzanine that can work independently or be plugged into the optical multiplexer board of the TileCal back-end electronics. It includes two SNAP 12 optical connectors able to transmit and receive up to 75 Gb/s, and one SFP optical connector for lower speeds and for compatibility with existing hardware such as the read-out driver. All processing is done in a Stratix II GX field-programmable gate array (FPGA). Details are given on the hardware design, including the signal- and power-integrity analysis needed when working at these high data rates, and on the firmware development carried out to obtain the best performance from the FPGA transceivers and to use the GBT protocol.