Muñoz, E., Barrio, J., Bernabeu, J., Etxebeste, A., Lacasta, C., Llosa, G., et al. (2018). Study and comparison of different sensitivity models for a two-plane Compton camera. Phys. Med. Biol., 63(13), 135004–19pp.
Abstract: Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with Na-22 sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
Natochii, A., Marinas, C., et al. (2023). Measured and projected beam backgrounds in the Belle II experiment at the SuperKEKB collider. Nucl. Instrum. Methods Phys. Res. A, 1055, 168550–21pp.
Abstract: The Belle II experiment at the SuperKEKB electron-positron collider aims to collect an unprecedented data set of 50 ab^-1 to study CP violation in the B-meson system and to search for physics beyond the Standard Model. SuperKEKB is already the world's highest-luminosity collider. In order to collect the planned data set within approximately one decade, the target is to reach a peak luminosity of 6 x 10^35 cm^-2 s^-1 by further increasing the beam currents and reducing the beam size at the interaction point by squeezing the betatron function down to beta_y* = 0.3 mm. To ensure detector longevity and maintain good reconstruction performance, beam backgrounds must remain well controlled. We report on current background rates in Belle II and compare these against simulation. We find that a number of recent refinements have significantly improved the background simulation accuracy. Finally, we estimate the safety margins going forward. We predict that backgrounds should remain high but acceptable until a luminosity of at least 2.8 x 10^35 cm^-2 s^-1 is reached for beta_y* = 0.6 mm. At this point, the most vulnerable Belle II detectors, the Time-of-Propagation (TOP) particle identification system and the Central Drift Chamber (CDC), have predicted background hit rates from single-beam and luminosity backgrounds that add up to approximately half of the maximum acceptable rates.
NEXT Collaboration (Azevedo, C. D. R., et al.), Gomez-Cadenas, J. J., Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., et al. (2018). Microscopic simulation of xenon-based optical TPCs in the presence of molecular additives. Nucl. Instrum. Methods Phys. Res. A, 877, 157–172.
Abstract: We introduce a simulation framework for the transport of high and low energy electrons in xenon-based optical time projection chambers (OTPCs). The simulation relies on elementary cross sections (electron-atom and electron-molecule) and incorporates, in order to compute the gas scintillation, the reaction/quenching rates (atom-atom and atom-molecule) of the first 41 excited states of xenon and the relevant associated excimers, together with their radiative cascade. The results compare favorably with observations made in pure xenon and its mixtures with CO2 and CF4 in a range of pressures from 0.1 to 10 bar. This work sheds some light on the elementary processes responsible for the primary and secondary xenon-scintillation mechanisms in the presence of additives, which are of interest to the OTPC technology.
Nzongani, U., Zylberman, J., Doncecchi, C. E., Perez, A., Debbasch, F., & Arnault, P. (2023). Quantum circuits for discrete-time quantum walks with position-dependent coin operator. Quantum Inf. Process., 22(7), 270–46pp.
Abstract: The aim of this paper is to build quantum circuits that implement discrete-time quantum walks having an arbitrary position-dependent coin operator. The position of the walker is encoded in base 2: with n wires, each corresponding to one qubit, we encode 2^n position states. The data necessary to define an arbitrary position-dependent coin operator is therefore exponential in n. Hence, the exponentiality will necessarily appear somewhere in our circuits. We first propose a circuit implementing the position-dependent coin operator which is naive, in the sense that it has exponential depth and implements sequentially all appropriate position-dependent coin operators. We then propose a circuit that “transfers” all the depth into ancillae, yielding a final depth that is linear in n at the cost of an exponential number of ancillae. The main idea of this linear-depth circuit is to implement in parallel all coin operators at the different positions. Reducing the depth exponentially at the cost of having an exponential number of ancillae is a goal which has already been achieved for the problem of loading classical data on a quantum circuit (Araujo in Sci Rep 11:6329, 2021) (notice that such a circuit can be used to load the initial state of the walker). Here, we achieve this goal for the problem of applying a position-dependent coin operator in a discrete-time quantum walk. Finally, we extend the result of Welch (New J Phys 16:033040, 2014) from position-dependent unitaries which are diagonal in the position basis to position-dependent 2 x 2-block-diagonal unitaries: indeed, we show that for a position dependence of the coin operator (the block-diagonal unitary) which is smooth enough, one can find an efficient quantum-circuit implementation approximating the coin operator up to an error epsilon (in terms of the spectral norm), the depth and size of which scale as O(1/epsilon).
A typical application of the efficient implementation would be the quantum simulation of a relativistic spin-1/2 particle on a lattice, coupled to a smooth external gauge field; notice that recently, quantum spatial-search schemes have been developed which use gauge fields as the oracle, to mark the vertex to be found (Zylberman in Entropy 23:1441, 2021; Fredon in arXiv:2210.13920). A typical application of the linear-depth circuit would be when there is spatial noise on the coin operator (and hence a non-smooth dependence in the position).
n_TOF Collaboration (Alcayne, V., et al.), Balibrea-Correa, J., Domingo-Pardo, C., Lerendegui-Marco, J., Babiano-Suarez, V., & Ladarescu, I. (2024). A Segmented Total Energy Detector (sTED) optimized for (n,γ) cross-section measurements at n_TOF EAR2. Radiat. Phys. Chem., 217, 11pp.
Abstract: The neutron time-of-flight facility n_TOF at CERN is a spallation source dedicated to measurements of neutron-induced reaction cross-sections of interest in nuclear technologies, astrophysics, and other applications. Since 2014, Experimental Area 2 (EAR2) has been operational and delivers a neutron fluence of ~4 x 10^7 neutrons per nominal proton pulse, which is ~50 times higher than that of Experimental Area 1 (EAR1), at ~8 x 10^5 neutrons per pulse. The high neutron flux at EAR2 results in high counting rates in the detectors that challenged the previously existing capture detection systems. For this reason, a Segmented Total Energy Detector (sTED) has been developed to overcome the limitations in the detector's response, by reducing the active volume per module and by using a photomultiplier tube (PMT) optimized for high counting rates. This paper presents the main characteristics of the sTED, including energy and time resolution and response to gamma-rays, and provides details of the use of the Pulse Height Weighting Technique (PHWT) with this detector. The sTED has been validated for neutron-capture cross-section measurements in EAR2 in the neutron energy range from thermal up to at least 400 keV. The detector has already been successfully used in several measurements at n_TOF EAR2.
n_TOF Collaboration (Mendoza, E., et al.), Giubrone, G., & Tain, J. L. (2011). Improved Neutron Capture Cross Section Measurements with the n_TOF Total Absorption Calorimeter. J. Korean Phys. Soc., 59(2), 1813–1816.
Abstract: The n_TOF collaboration operates a Total Absorption Calorimeter (TAC) [1] for measuring neutron capture cross-sections of low-mass and/or radioactive samples. The results obtained with the TAC have led to a substantial improvement of the capture cross sections of Np-237 and Pu-240 [2]. The experience acquired during the first measurements has allowed us to optimize the performance of the TAC and to improve the capture signal to background ratio, thus opening the way to more complex and demanding measurements on rare radioactive materials. The new design has been reached by a series of detailed Monte Carlo simulations of complete experiments and dedicated test measurements. The new capture setup will be presented and the main achievements highlighted.
n_TOF Collaboration (Zugec, P., et al.), Domingo-Pardo, C., Giubrone, G., & Tain, J. L. (2014). GEANT4 simulation of the neutron background of the C6D6 set-up for capture studies at n_TOF. Nucl. Instrum. Methods Phys. Res. A, 760, 57–67.
Abstract: The neutron sensitivity of the C6D6 detector setup used at the n_TOF facility for capture measurements has been studied by means of detailed GEANT4 simulations. A realistic software replica of the entire n_TOF experimental hall, including the neutron beam line, sample, detector supports and the walls of the experimental area, has been implemented in the simulations. The simulations have been analyzed in the same manner as experimental data, in particular by applying the Pulse Height Weighting Technique. The simulations have been validated against a measurement of the neutron background performed with a nat-C sample, showing an excellent agreement above 1 keV. At lower energies, an additional component in the measured nat-C yield has been discovered, which prevents the use of nat-C data for neutron background estimates at neutron energies below a few hundred eV. The origin and time structure of the neutron background have been derived from the simulations. Examples of the neutron background for two different samples demonstrate the important role of accurate simulations of the neutron background in capture cross-section measurements.
Olivares Herrador, J., Latina, A., Aksoy, A., Fuster Martinez, N., Gimeno, B., & Esperante, D. (2024). Implementation of the beam-loading effect in the tracking code RF-track based on a power-diffusive model. Front. Phys., 12, 1348042–11pp.
Abstract: The need to achieve high energies in particle accelerators has led to the development of new accelerator technologies, resulting in higher beam intensities and more compact devices with stronger accelerating fields. In such scenarios, beam-loading effects occur, and intensity-dependent gradient reduction affects the accelerated beam as a consequence of its interaction with the surrounding cavity. In this study, a power-diffusive partial differential equation is derived to account for this effect. Its numerical resolution has been implemented in the tracking code RF-Track, allowing the simulation of apparatuses where transient beam loading plays an important role. Finally, measurements of this effect have been carried out at the CERN Linear Electron Accelerator for Research (CLEAR) facility, finding good agreement with the RF-Track simulations.
Oliveira, C. A. B., Sorel, M., Martin-Albo, J., Gomez-Cadenas, J. J., Ferreira, A. L., & Veloso, J. F. C. A. (2011). Energy resolution studies for NEXT. J. Instrum., 6, P05007–13pp.
Abstract: This work presents the current state of simulations of electroluminescence (EL) produced in gas-based detectors, with special interest for NEXT, the Neutrino Experiment with a Xenon TPC. NEXT is a neutrinoless double beta decay experiment and thus needs outstanding energy resolution, which can be achieved by using electroluminescence. The process of light production is reviewed, and properties such as the EL yield and its associated fluctuations, the excitation and electroluminescence efficiencies, and the energy resolution are calculated. An EL production region with a 5 mm gap between two infinite parallel planes is considered, where a uniform electric field is produced. The pressure and temperature considered are 10 bar and 293 K, respectively. The results show that, even for low values of the VUV photon detection efficiency, good energy resolution can be achieved: below 0.4% (FWHM) at Q(ββ) = 2.458 MeV.
Oliver, S., Rodriguez Bosca, S., & Gimenez-Alventosa, V. (2024). Enabling particle transport on CAD-based geometries for radiation simulations with penRed. Comput. Phys. Commun., 298, 109091–11pp.
Abstract: Geometry construction is a fundamental aspect of any radiation transport simulation, regardless of the Monte Carlo code being used. Typically, this process is tedious, time-consuming, and error-prone. The conventional approach involves defining geometries using mathematical objects or surfaces. However, this method comes with several limitations, especially when dealing with complex models, particularly those with organic shapes. Furthermore, since each code employs its own format and methodology for defining geometries, sharing and reproducing simulations among researchers becomes a challenging task. Consequently, many codes have implemented support for simulating over geometries constructed via Computer-Aided Design (CAD) tools. Unfortunately, this feature is lacking in penRed and other PENELOPE physics-based codes. Therefore, the objective of this work is to implement such support within the penRed framework.
New version program summary
Program Title: Parallel Engine for Radiation Energy Deposition (penRed)
CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.2
Developer's repository link: https://github.com/PenRed/PenRed
Code Ocean capsule: https://codeocean.com/capsule/1041417/tree
Licensing provisions: GNU Affero General Public License v3
Programming language: C++ (2011 standard)
Journal reference of previous version: V. Gimenez-Alventosa, V. Gimenez Gomez, S. Oliver, PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE, Computer Physics Communications 267 (2021) 108065. https://doi.org/10.1016/j.cpc.2021.108065
Does the new version supersede the previous version?: Yes
Reasons for the new version: Implements the capability to simulate on CAD-constructed geometries, among many other features and fixes.
Summary of revisions: All changes applied through the code versions are summarized in the file CHANGELOG.md in the repository package.
Nature of problem: While Monte Carlo codes have proven valuable in simulating complex radiation scenarios, they rely heavily on accurate geometrical representations. Like many other Monte Carlo codes, penRed employs simple quadric surfaces such as planes, spheres and cylinders to define geometries. Although these geometric models offer a certain level of flexibility, such representations have limitations when it comes to simulating highly intricate and irregular shapes. Anatomic structures, for example, require detailed representations of organs, tissues and bones, which are difficult to achieve using basic geometric objects. Similarly, complex devices or intricate mechanical systems may have designs that cannot be accurately represented within the constraints of such geometric models. Moreover, as the complexity of the model increases, the geometry construction process becomes more difficult, tedious, time-consuming and error-prone [2]. Also, as each Monte Carlo geometry library uses its own format and construction method, reproducing the same geometry among different codes is a challenging task.
Solution method: To address the problems stated above, the objective of this work is to implement the capability to simulate using irregular and adaptable meshed geometries in the penRed framework. Such meshes can be constructed using Computer-Aided Design (CAD) tools, whose use is very widespread and streamlines the design process. This feature has been implemented in a new geometry module named “MESH_BODY”, specific to this kind of geometry. The module is freely available and usable within the official penRed package, from penRed version 1.9.3b onwards.