|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provide information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse-shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which cause signal pile-up. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in high pile-up particle detectors. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural-network algorithm in an FPGA are also presented.
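As a rough illustration of the classical baseline described in this abstract, the sketch below reconstructs a pulse amplitude with a matched FIR filter and shows how an out-of-time pile-up pulse biases the estimate. The pulse shape and weights are hypothetical, not the actual ATLAS TileCal values.

```python
import numpy as np

# Hypothetical normalized pulse shape sampled at 7 points (peak = 1.0);
# not the actual TileCal pulse shape.
g = np.array([0.0, 0.12, 0.56, 1.0, 0.68, 0.28, 0.08])

# For white Gaussian noise, a simple matched-filter choice of FIR weights
# recovers the amplitude as a dot product with the samples.
w = g / np.dot(g, g)

def fir_amplitude(samples: np.ndarray) -> float:
    """Estimate the pulse amplitude as a weighted sum of ADC samples."""
    return float(np.dot(w, samples))

# An undistorted pulse of amplitude 5.0 is reconstructed exactly...
clean = 5.0 * g
print(fir_amplitude(clean))  # -> 5.0

# ...but an out-of-time pile-up pulse shifted by two samples biases the
# estimate, which is the failure mode that motivates the neural network.
pileup = np.roll(3.0 * g, 2)
print(fir_amplitude(clean + pileup))
```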
|
|
|
Poley, L., Stolzenberg, U., Schwenker, B., Frey, A., Göttlicher, P., Mariñas, C., et al. (2021). Mapping the material distribution of a complex structure in an electron beam. J. Instrum., 16(1), P01010–33pp.
Abstract: The simulation and analysis of High Energy Physics experiments require a realistic simulation of the detector material and its distribution. The challenge is to describe all active and passive parts of large-scale detectors like ATLAS in terms of their size, position and material composition. The common method of estimating the radiation length by weighing individual components, adding up their contributions and averaging the resulting material distribution over extended structures provides a good general estimate, but can deviate significantly from the material actually present. A method has been developed to assess the material distribution of an object under investigation with high spatial resolution, using the reconstructed scattering angles and hit positions of high-energy electron tracks traversing it. The study presented here shows measurements for an extended structure with a highly inhomogeneous material distribution: an End-of-Substructure-card prototype designed for the ATLAS Inner Tracker strip tracker – a PCB populated with components spanning a large range of material budgets and sizes. The measurements presented here summarise the requirements on data samples and reconstructed electron tracks for reliable image reconstruction of large-scale, inhomogeneous samples, the choice of pixel size compared to the size of the features under investigation, as well as a bremsstrahlung correction for high material densities and thicknesses.
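The track-based material mapping rests on the standard relation between the RMS multiple-scattering angle and the traversed material budget x/X0. A minimal sketch using the well-known Highland parameterization (the paper's actual reconstruction chain is more involved), with illustrative beam parameters:

```python
import math

def highland_theta0(p_mev: float, beta: float, x_over_X0: float, z: int = 1) -> float:
    """RMS projected multiple-scattering angle in radians (Highland formula)."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0)
    )

# A few-GeV electron (beta ~ 1) crossing 1% of a radiation length
# (illustrative values, not the beam used in the paper):
theta0 = highland_theta0(p_mev=4000.0, beta=1.0, x_over_X0=0.01)
print(f"{theta0 * 1e3:.3f} mrad")
```

Inverting this relation pixel by pixel, from the measured width of the scattering-angle distribution, yields the x/X0 map the abstract describes.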
|
|
|
De Romeri, V., Majumdar, A., Papoulias, D. K., & Srivastava, R. (2024). XENONnT and LUX-ZEPLIN constraints on DSNB-boosted dark matter. J. Cosmol. Astropart. Phys., 03(3), 028–34pp.
Abstract: We consider a scenario in which dark matter particles are accelerated to semirelativistic velocities through their scattering with the Diffuse Supernova Neutrino Background. Such a subdominant, but more energetic, dark matter component can then be detected via its scattering on the electrons and nucleons inside direct detection experiments. This opens up the possibility to probe the sub-GeV mass range, a region of parameter space that is usually not accessible at such facilities. We analyze current data from the XENONnT and LUX-ZEPLIN experiments and obtain novel constraints on the scattering cross sections of sub-GeV boosted dark matter with both nucleons and electrons. We also highlight the importance of carefully taking into account Earth's attenuation effects as well as the finite nuclear size in the analysis. By comparing our results to other existing constraints, we show that these effects lead to improved and more robust constraints.
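For orientation, the kinematic reach the abstract alludes to follows from generic two-body scattering: a boosted sub-GeV particle can deposit far more recoil energy than a galactic-velocity one. A sketch under that generic kinematics, with illustrative masses and energies (not the paper's fluxes or cross sections):

```python
import math

# Generic relativistic two-body kinematics (natural units, energies and
# masses in MeV). Not the paper's detailed calculation.

def max_recoil_energy(m_chi: float, T_chi: float, m_target: float) -> float:
    """Maximum recoil kinetic energy of a target at rest hit by a particle
    of mass m_chi and kinetic energy T_chi."""
    p2 = T_chi**2 + 2.0 * m_chi * T_chi  # incoming momentum squared
    return 2.0 * m_target * p2 / ((m_chi + m_target)**2 + 2.0 * m_target * T_chi)

# Illustrative: a 10 MeV dark matter particle boosted to T = 20 MeV hitting
# a xenon nucleus (~122000 MeV); recoil reach in MeV.
print(max_recoil_energy(10.0, 20.0, 122000.0))
```

In the nonrelativistic limit this reduces to the familiar 4·m·M·T/(m+M)², which is why unboosted sub-GeV dark matter falls below typical detection thresholds.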
|
|
|
Roser, J., Barrientos, L., Bernabeu, J., Borja-Lloret, M., Muñoz, E., Ros, A., et al. (2022). Joint image reconstruction algorithm in Compton cameras. Phys. Med. Biol., 67(15), 155009–15pp.
Abstract: Objective. To demonstrate the benefits of using a joint image reconstruction algorithm, based on List-Mode Maximum Likelihood Expectation Maximization, that combines events measured in different channels of information of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm performance. Main results. The obtained joint images present improved image quality and yield better estimates of the displacements of high-energy gamma-ray emitting sources. The algorithm also provides images that are more stable than those from any individual channel against the noisy convergence that characterizes Maximum Likelihood based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It also has high versatility, as it can be easily adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
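The MLEM family the abstract builds on can be illustrated with the basic binned MLEM update on a toy system matrix; the joint algorithm of the paper combines several such channels, which is not reproduced here.

```python
import numpy as np

def mlem_step(lam, A, y, sens):
    """One MLEM update: lam_j <- (lam_j / s_j) * sum_i A_ij * y_i / (A lam)_i."""
    forward = A @ lam                 # expected counts per measurement
    return (lam / sens) * (A.T @ (y / forward))

# Toy 3-measurement / 2-pixel system, not a Compton camera model.
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
sens = A.sum(axis=0)                  # sensitivity s_j = sum_i A_ij
true_lam = np.array([2.0, 6.0])
y = A @ true_lam                      # noiseless data

lam = np.ones(2)
for _ in range(1000):
    lam = mlem_step(lam, A, y, sens)
print(lam)                            # converges toward [2.0, 6.0]
```

With noisy data the iterates eventually degrade (the "noisy convergence" the abstract mentions), which is what the joint reconstruction is shown to stabilize.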
|
|
|
Roser, J., Muñoz, E., Barrientos, L., Barrio, J., Bernabeu, J., Borja-Lloret, M., et al. (2020). Image reconstruction for a multi-layer Compton telescope: an analytical model for three interaction events. Phys. Med. Biol., 65(14), 145005–17pp.
Abstract: Compton Cameras are electronically collimated photon imagers suitable for sub-MeV to few-MeV gamma-ray detection. Such features are desirable to enable in vivo range verification in hadron therapy, through the detection of secondary Prompt Gammas. A major concern with this technique is the poor image quality obtained when the incoming gamma-ray energy is unknown. Compton Cameras with more than two detector planes (multi-layer Compton Cameras) have been proposed as a solution, given that these devices add further sequences of interactions to the conventional two-interaction events. In particular, three-interaction events convey more spectral information, as they allow the incident gamma-ray energy to be inferred directly. A three-layer Compton Telescope based on continuous Lanthanum(III) Bromide crystals coupled to Silicon Photomultipliers is being developed at the IRIS group of IFIC-Valencia. In a previous work we proposed a spectral reconstruction algorithm for two-interaction events based on an analytical model for the formation of the signal. To fully exploit the capabilities of our prototype, we present here an extension of the model to three-interaction events. Analytical expressions for the sensitivity and the System Matrix are derived and validated against Monte Carlo simulations. Implemented in a List-Mode Maximum Likelihood Expectation Maximization algorithm, the proposed model allows us to obtain four-dimensional (energy and position) images by using exclusively three-interaction events. We are able to recover the correct spectrum and spatial distribution of gamma-ray sources when ideal data are employed. However, the uncertainties associated with experimental measurements result in a degradation when real data from complex structures are employed.
Incorrect estimation of the incident gamma-ray interaction positions, and missing deposited energy associated with escaping secondaries, have been identified as the causes of such degradation by means of a detailed Monte Carlo study. As expected, our current experimental resolution and efficiency for three-interaction events prevent us from correctly recovering complex structures of radioactive sources. However, given the better spectral information conveyed by three-interaction events, we expect an improvement of the image quality of conventional Compton imaging when such events are included. In this regard, future development includes incorporating the model assessed in this work into the two-interaction events model, so that two- and three-interaction events can be used simultaneously in the image reconstruction.
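The extra spectral information carried by three-interaction events can be illustrated with textbook Compton kinematics (not necessarily the authors' exact analytical model): the energy E2 deposited at the second interaction and the scattering angle there, fixed by the three hit positions, determine the photon energy after the first scatter and hence the incident energy.

```python
import math

ME_C2 = 510.999  # electron rest energy in keV

def incident_energy(e1: float, e2: float, cos_theta2: float) -> float:
    """Incident gamma-ray energy (keV) from the deposits e1, e2 and the
    scattering angle at the second interaction, i.e. the angle between
    (x2 - x1) and (x3 - x2)."""
    e_prime = e2 / 2.0 + math.sqrt(e2**2 / 4.0 + e2 * ME_C2 / (1.0 - cos_theta2))
    return e1 + e_prime

# Self-consistent example: a 1000 keV photon deposits 300 keV, then 250 keV;
# the Compton formula fixes cos(theta2) for these deposits.
e0, e1, e2 = 1000.0, 300.0, 250.0
e_prime = e0 - e1                  # photon energy after the first scatter
cos_t2 = 1.0 - ME_C2 * (1.0 / (e_prime - e2) - 1.0 / e_prime)
print(incident_energy(e1, e2, cos_t2))  # recovers ~1000 keV
```

Note that no full absorption is needed: the third interaction only supplies a direction, which is why these events let the energy be inferred directly.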
|
|
|
AGATA Collaboration (Söderström, P. A., et al.), & Gadea, A. (2011). Interaction position resolution simulations and in-beam measurements of the AGATA HPGe detectors. Nucl. Instrum. Methods Phys. Res. A, 638(1), 96–109.
Abstract: The interaction position resolution of the segmented HPGe detectors of an AGATA triple cluster detector has been studied through Monte Carlo simulations and in an in-beam experiment. A new method, based on measuring the energy resolution of Doppler-corrected gamma-ray spectra at two different target-to-detector distances, is described. This gives the two-dimensional position resolution in the plane perpendicular to the direction of the emitted gamma-ray. Gamma-ray tracking was used to determine the full energy of the gamma-rays and the first interaction point, which is needed for the Doppler correction. Five different heavy-ion induced fusion-evaporation reactions and a reference reaction were selected for the simulations. The results of the simulations show that the method works very well and gives a systematic deviation of <1 mm in the FWHM of the interaction position resolution for the gamma-ray energy range from 60 keV to 5 MeV. The method was tested with real data from an in-beam measurement using a Si-30 beam at 64 MeV on a thin C-12 target. Pulse-shape analysis of the digitized detector waveforms and gamma-ray tracking were performed to determine the position of the first interaction point, which was used for the Doppler corrections. Results on the dependency of the interaction position resolution on the gamma-ray energy, and on the energy, axial location and type of the first interaction point, are presented. The FWHM of the interaction position resolution varies roughly linearly as a function of gamma-ray energy, from 8.5 mm at 250 keV to 4 mm at 1.5 MeV, and has an approximately constant value of about 4 mm in the gamma-ray energy range from 1.5 to 4 MeV.
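The Doppler correction at the heart of the two-distance method is the standard relativistic formula; a minimal round-trip sketch with illustrative numbers (the actual analysis additionally involves pulse-shape analysis and tracking):

```python
import math

def doppler_correct(e_lab: float, beta: float, cos_theta: float) -> float:
    """Rest-frame gamma-ray energy from the lab-frame energy, the emitter
    velocity beta and the emission angle theta (relativistic Doppler)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return e_lab * gamma * (1.0 - beta * cos_theta)

# Round trip with illustrative values: emit 1332 keV in the ion frame,
# boost to the lab at 30 degrees, then correct back.
beta, cos_t = 0.05, math.cos(math.radians(30.0))
gamma = 1.0 / math.sqrt(1.0 - beta**2)
e_lab = 1332.0 / (gamma * (1.0 - beta * cos_t))
print(doppler_correct(e_lab, beta, cos_t))  # ~1332 keV
```

An error in the first-interaction position shifts cos θ and hence broadens the corrected line, which is exactly what makes the Doppler-corrected energy resolution a probe of the position resolution.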
|
|
|
Tain, J. L., Agramunt, J., Algora, A., Aprahamian, A., Cano-Ott, D., Fraile, L. M., et al. (2015). The sensitivity of LaBr3:Ce scintillation detectors to low energy neutrons: Measurement and Monte Carlo simulation. Nucl. Instrum. Methods Phys. Res. A, 774, 17–24.
Abstract: The neutron sensitivity of a cylindrical ⌀1.5 in × 1.5 in LaBr3:Ce scintillation detector was measured using quasi-monoenergetic neutron beams in the energy range from 40 keV to 2.5 MeV. In this energy range the detector is sensitive to gamma-rays generated in neutron inelastic and capture processes. The experimental energy response was compared with Monte Carlo simulations performed with the Geant4 simulation toolkit using the so-called High Precision Neutron Models. These models rely on relevant information stored in evaluated nuclear data libraries. The performance of the Geant4 Neutron Data Library as well as several standard nuclear data libraries was investigated. In the latter case this was made possible by the use of a conversion tool that allowed the direct use of the data from other libraries in Geant4. Overall it was found that there was good agreement with experiment for some of the neutron databases, like ENDF/B-VII.0 or JENDL-3.3, but not for others, such as ENDF/B-VI.8 or JEFF-3.1.
|
|
|
BRIKEN Collaboration (Tarifeño-Saldivia, A., et al.), Tain, J. L., Domingo-Pardo, C., Agramunt, J., Algora, A., Morales, A. I., et al. (2017). Conceptual design of a hybrid neutron-gamma detector for study of beta-delayed neutrons at the RIB facility of RIKEN. J. Instrum., 12, P04006–22pp.
Abstract: BRIKEN is a complex detection system to be installed at the RIB facility of the RIKEN Nishina Center. It is aimed at the detection of heavy-ion implants, β-particles, γ-rays and β-delayed neutrons. The whole detection setup involves the Advanced Implantation Detection Array (AIDA), two HPGe Clover detectors and a large set of 166 3He counters embedded in a high-density polyethylene matrix. This article reports on a novel methodology developed for the conceptual design and optimisation of the 3He-tube array, aiming at the best possible neutron detection performance. The algorithm is based on a geometric representation of two selected parameters of merit, namely average neutron detection efficiency and efficiency flatness, as a function of a reduced number of geometric variables. The response of the detection system itself, for each configuration, is obtained from a systematic MC simulation implemented realistically in Geant4. This approach has been found to be particularly useful, on the one hand due to the different types and large number of 3He tubes involved and, on the other hand, due to the additional constraints introduced by the ancillary detectors for charged particles and gamma-rays. Empowered by the robustness of the algorithm, we have been able to design a versatile detection system, which can be easily re-arranged into a compact mode in order to maximize the neutron detection performance at the cost of the gamma-ray sensitivity. In summary, we have designed a system which shows, for neutron energies up to 1 (5) MeV, a rather flat and high average efficiency of 68.6% (64%) and 75.7% (71%) for the hybrid and compact modes, respectively. The performance of the BRIKEN system has also been quantified realistically by means of MC simulations with different neutron energy distributions.
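The two figures of merit named in the abstract can be sketched generically: an average efficiency over an energy range and a flatness measure over the same range. The efficiency curve below is invented for illustration; in the paper each candidate geometry's curve comes from a Geant4 simulation.

```python
import numpy as np

# Hypothetical efficiency curve eps(E) for one candidate geometry;
# a real curve would come from a Geant4 simulation of the 3He-tube array.
energies = np.linspace(0.01, 1.0, 50)        # MeV
efficiency = 0.70 - 0.04 * energies          # made-up, mildly falling eps(E)

avg_eff = float(np.mean(efficiency))         # first parameter of merit
flatness = float(np.max(efficiency) - np.min(efficiency))  # smaller = flatter
print(f"average efficiency {avg_eff:.3f}, flatness {flatness:.3f}")
```

Ranking candidate geometries in this two-number plane is what lets the optimisation run over a reduced set of geometric variables instead of raw simulation output.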
|
|
|
Valdes-Cortez, C., Mansour, I., Rivard, M. J., Ballester, F., Mainegra-Hing, E., Thomson, R. M., et al. (2021). A study of Type B uncertainties associated with the photoelectric effect in low-energy Monte Carlo simulations. Phys. Med. Biol., 66(10), 105014–14pp.
Abstract: Purpose. To estimate Type B uncertainties in absorbed-dose calculations arising from the different implementations of low-energy photon cross-sections (<200 keV) in current state-of-the-art Monte Carlo (MC) codes. Methods. MC simulations are carried out using three codes widely used in the low-energy domain: PENELOPE-2018, EGSnrc, and MCNP. Three dosimetry-relevant quantities are considered: mass energy-absorption coefficients for water, air, graphite, and their respective ratios; absorbed dose; and photon-fluence spectra. The absorbed dose and the photon-fluence spectra are scored in a spherical water phantom of 15 cm radius. Benchmark simulations using similar cross-sections have been performed. The differences observed between these quantities when different cross-sections are considered are taken to be a good estimator of the corresponding Type B uncertainties. Results. A conservative Type B uncertainty for the absorbed dose (k = 2) of 1.2%-1.7% (<50 keV), 0.6%-1.2% (50-100 keV), and 0.3% (100-200 keV) is estimated. The photon-fluence spectrum does not present clinically relevant differences that merit considering additional Type B uncertainties, except for energies below 25 keV, where a Type B uncertainty of 0.5% is obtained. Below 30 keV, mass energy-absorption coefficients show Type B uncertainties (k = 2) of about 1.5% (water and air) and 2% (graphite), diminishing in all materials for larger energies and reaching values of about 1% (40-50 keV) and 0.5% (50-75 keV). With respect to their ratios, the only significant Type B uncertainties are observed in the case of the water-to-graphite ratio for energies below 30 keV, being about 0.7% (k = 2). Conclusions.
In contrast to the intermediate (about 500 keV) or high (about 1 MeV) energy domains, Type B uncertainties due to the different cross-section implementations cannot be considered subdominant with respect to Type A uncertainties, or even to other sources of Type B uncertainty (tally volume averaging, manufacturing tolerances, etc.). Therefore, the values reported here should be accommodated within the uncertainty budget of low-energy photon dosimetry studies.
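As a worked example of the conclusion, the quoted cross-section Type B component can be folded into a budget by quadrature: components are combined at k = 1 and the coverage factor applied afterwards. Only the 1.5% (k = 2) figure is taken from the abstract's <50 keV range; the other components are hypothetical.

```python
import math

# From the abstract's <50 keV dose range (1.2%-1.7% at k = 2), taken as 1.5%.
type_b_xs_k2 = 1.5       # %, cross-section Type B component at k = 2
# Hypothetical additional components for illustration only:
type_a_k1 = 0.3          # %, statistical (Type A)
type_b_other_k1 = 0.5    # %, other Type B (volume averaging, tolerances, ...)

# Combine in quadrature at k = 1, then apply the coverage factor k = 2.
combined_k1 = math.sqrt((type_b_xs_k2 / 2.0)**2 + type_a_k1**2 + type_b_other_k1**2)
print(f"combined uncertainty (k=2): {2.0 * combined_k1:.2f}%")
```

With these illustrative numbers the cross-section term dominates the budget, which is the paper's point: it cannot be treated as subdominant at low energies.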
|
|
|
Villaescusa-Navarro, F., et al., & Villanueva-Domingo, P. (2023). The CAMELS Project: Public Data Release. Astrophys. J. Suppl. Ser., 265(2), 54–14pp.
Abstract: The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project was developed to combine cosmology with astrophysics through thousands of cosmological hydrodynamic simulations and machine learning. CAMELS contains 4233 cosmological simulations: 2049 N-body simulations and 2184 state-of-the-art hydrodynamic simulations that sample a vast volume in parameter space. In this paper, we present the CAMELS public data release, describing the characteristics of the CAMELS simulations and a variety of data products generated from them, including halo, subhalo, galaxy, and void catalogs, power spectra, bispectra, Lyα spectra, probability distribution functions, halo radial profiles, and X-ray photon lists. We also release over 1000 catalogs that contain billions of galaxies from CAMELS-SAM: a large collection of N-body simulations that have been combined with the Santa Cruz semianalytic model. We release all the data, comprising more than 350 terabytes and containing 143,922 snapshots, millions of halos and galaxies, and summary statistics. We provide further technical details on how to access, download, read, and process the data at .
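As a minimal illustration of one of the released data products, a power spectrum can be estimated from a gridded density field with an FFT; the field below is white noise on a toy grid, whereas CAMELS provides the real fields and precomputed spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
delta = rng.normal(size=(n, n, n))           # toy overdensity field

delta_k = np.fft.fftn(delta)
kfreq = np.fft.fftfreq(n) * n                # integer wavenumbers per axis
kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

power = (np.abs(delta_k)**2 / n**3).ravel()  # per-mode P(k) estimate
kbins = np.arange(0.5, n // 2, 1.0)          # spherical shells in k
which = np.digitize(kmag, kbins)
pk = np.array([power[which == i].mean() for i in range(1, len(kbins))])
print(pk[:4])                                # roughly flat for white noise
```

Real spectra use a physical box length and mass assignment corrections; this sketch only shows the shell-averaging step common to all of them.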
|
|