XENON100 Collaboration (Aprile, E., et al.), & Orrigo, S. E. A. (2014). Observation and applications of single-electron charge signals in the XENON100 experiment. J. Phys. G, 41(3), 035201 (13pp).
Abstract: The XENON100 dark matter experiment uses liquid xenon in a time projection chamber (TPC) to measure xenon nuclear recoils resulting from the scattering of dark matter weakly interacting massive particles (WIMPs). In this paper, we report the observation of single-electron charge signals which are not related to WIMP interactions. These signals, which show the excellent sensitivity of the detector to small charge signals, are explained as being due to the photoionization of impurities in the liquid xenon and of the metal components inside the TPC. They are used as a unique calibration source to characterize the detector. We explain how we can infer crucial parameters for the XENON100 experiment: the secondary-scintillation gain, the extraction yield from the liquid to the gas phase and the electron drift velocity.
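The drift-velocity determination mentioned in the abstract reduces to dividing the known TPC drift length by the maximum observed electron drift time. A minimal sketch of that arithmetic, using an approximate XENON100 drift length and a placeholder drift-time value (not the paper's measured numbers):

```python
# Sketch: inferring the electron drift velocity from the maximum drift time
# observed for charge signals reaching the liquid-gas interface.
# The drift length is approximate; the drift time is an assumed example value.
drift_length_cm = 30.5        # XENON100 TPC drift length (approximate)
max_drift_time_us = 176.0     # illustrative maximum drift time in microseconds

drift_velocity_mm_per_us = drift_length_cm * 10.0 / max_drift_time_us
print(f"drift velocity ~ {drift_velocity_mm_per_us:.2f} mm/us")
```

With these example numbers the result is about 1.73 mm/µs, of the order expected for liquid xenon at fields near 0.5 kV/cm; the actual XENON100 value comes from the fit described in the paper.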
Andringa, S., et al., Capozzi, F., & Sorel, M. (2023). Low-energy physics in neutrino LArTPCs. J. Phys. G, 50(3), 033001 (60pp).
Abstract: In this paper, we review scientific opportunities and challenges related to detection and reconstruction of low-energy (less than 100 MeV) signatures in liquid argon time-projection chamber (LArTPC) neutrino detectors. LArTPC neutrino detectors designed for performing precise long-baseline oscillation measurements with GeV-scale accelerator neutrino beams also have unique sensitivity to a range of physics and astrophysics signatures via detection of event features at and below the few tens of MeV range. In addition, low-energy signatures are an integral part of GeV-scale accelerator neutrino interaction final states, and their reconstruction can enhance the oscillation physics sensitivities of LArTPC experiments. New physics signals from accelerator and natural sources also generate diverse signatures in the low-energy range, and reconstruction of these signatures can increase the breadth of Beyond the Standard Model scenarios accessible in LArTPC-based searches. A variety of experimental and theory-related challenges remain to realizing this full range of potential benefits. Neutrino interaction cross-sections and other nuclear physics processes in argon relevant to sub-hundred-MeV LArTPC signatures are poorly understood, and improved theory and experimental measurements are needed; pion decay-at-rest sources and charged particle and neutron test beams are ideal facilities for improving this understanding. There are specific calibration needs in the low-energy range, as well as specific needs for control and understanding of radiological and cosmogenic backgrounds. Low-energy signatures, whether steady-state or part of a supernova burst or larger GeV-scale event topology, have specific triggering, DAQ and reconstruction requirements that must be addressed outside the scope of conventional GeV-scale data collection and analysis pathways. Novel concepts for future LArTPC technology that enhance low-energy capabilities should also be explored to help address these challenges.
DUNE Collaboration (Abud, A. A., et al.), Amedo, P., Antonova, M., Barenboim, G., Cervera-Villanueva, A., De Romeri, V., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. J. Instrum., 18(4), P04034 (35pp).
Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
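The per-pixel work described above (each drifting charge deposit contributing current to the pixel it lands on) is the kind of independent, channel-parallel operation that maps naturally onto GPU threads. A CPU-side NumPy sketch of the scatter-add pattern involved, with made-up grid dimensions and cluster data (the paper's actual kernels are Numba-compiled CUDA, not shown here):

```python
import numpy as np

# Toy analogue of parallelized per-pixel charge accumulation: each electron
# cluster deposits its charge on the pixel beneath its landing position.
# On a GPU this scatter-add would typically be done with atomic adds,
# one thread per cluster; np.add.at is the vectorized CPU counterpart.
rng = np.random.default_rng(0)
n_x, n_y = 32, 32                        # assumed pixel-grid shape
n_clusters = 10_000                      # illustrative number of clusters

x = rng.uniform(0, n_x, n_clusters)      # cluster landing x positions
y = rng.uniform(0, n_y, n_clusters)      # cluster landing y positions
q = np.ones(n_clusters)                  # unit charge per cluster

charge = np.zeros((n_x, n_y))
np.add.at(charge, (x.astype(int), y.astype(int)), q)  # unbuffered scatter-add

print(charge.sum())  # total deposited charge equals the total injected charge
```

The design point is that every cluster's contribution is independent, so the only cross-thread interaction is the accumulation itself, which is exactly what GPU atomics (or `np.add.at` on the CPU) make safe.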