Jaworski, G., Palacz, M., Nyberg, J., de Angelis, G., de France, G., Di Nitto, A., et al. (2012). Monte Carlo simulation of a single detector unit for the neutron detector array NEDA. Nucl. Instrum. Methods Phys. Res. A, 673, 64–72.
Abstract: A study of the dimensions and performance of a single detector of the future neutron detector array NEDA was performed by means of Monte Carlo simulations using GEANT4. Two different liquid scintillators were evaluated: the hydrogen-based BC501A and the deuterated BC537. The efficiency and the probability that one neutron triggers a signal in more than one detector were investigated as a function of the detector size. The simulations were validated by comparing the results to experimental measurements performed with two existing neutron detectors of different geometries, based on the liquid scintillator BC501.
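The efficiency-versus-size trade-off studied in this entry can be illustrated with a toy Monte Carlo, not the paper's GEANT4 simulation: assuming a single exponential interaction law with a hypothetical mean free path, the detection efficiency of a scintillator cell grows with depth but saturates.

```python
import random

def detection_efficiency(length_cm, mean_free_path_cm, n_events=100_000, seed=1):
    """Fraction of neutrons interacting within a scintillator of given depth,
    assuming a single exponential interaction law (a crude stand-in for the
    full particle transport used in the paper)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_events)
               if rng.expovariate(1.0 / mean_free_path_cm) < length_cm)
    return hits / n_events

# Efficiency grows with cell depth but saturates: doubling the length
# does not double the efficiency.
for length in (5, 10, 20):
    print(length, round(detection_efficiency(length, mean_free_path_cm=10.0), 3))
```

For a depth equal to one mean free path the analytic value is 1 - 1/e ≈ 0.632, which the sampled estimate reproduces within statistical noise.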
|
ATLAS Collaboration (Aad, G., et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z -> ττ processes in ATLAS with τ-embedded Z -> μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z -> ττ decays. In Z -> μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z -> ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons, as well as the detector response to the τ decay products, are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z -> ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H -> ττ analysis of the full dataset recorded during 2011 and 2012.
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Barranco Navarro, L., Cabrera Urban, S., et al. (2019). Modelling radiation damage to pixel sensors in the ATLAS detector. J. Instrum., 14, P06012–52pp.
Abstract: Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment at the LHC. Given their close proximity to the interaction point, these detectors will be exposed to an unprecedented amount of radiation over their lifetime. The current pixel detector will receive damage from non-ionizing radiation in excess of 10^15 1 MeV n_eq/cm^2, while the pixel detector designed for the high-luminosity LHC must cope with an order of magnitude larger fluence. This paper presents a digitization model incorporating effects of radiation damage to the pixel sensors. The model is described in detail and predictions for the charge collection efficiency and Lorentz angle are compared with collision data collected between 2015 and 2017 (<= 10^15 1 MeV n_eq/cm^2).
|
Caron, S., Eckner, C., Hendriks, L., Johannesson, G., Ruiz de Austri, R., & Zaharijas, G. (2023). Mind the gap: the discrepancy between simulation and reality drives interpretations of the Galactic Center Excess. J. Cosmol. Astropart. Phys., 06(6), 013–56pp.
Abstract: The Galactic Center Excess (GCE) in GeV gamma rays has been debated for over a decade, with the possibility that it might be due to dark matter annihilation or undetected point sources such as millisecond pulsars (MSPs). This study investigates how the gamma-ray emission model (γEM) used in Galactic center analyses affects the interpretation of the GCE's nature. To address this issue, we construct an ultra-fast and powerful inference pipeline based on convolutional Deep Ensemble Networks. We explore the two main competing hypotheses for the GCE using a set of γEMs with increasing parametric freedom. We calculate the fractional contribution (f_src) of a dim population of MSPs to the total luminosity of the GCE and analyze its dependence on the complexity of the γEM. For the simplest γEM, we obtain f_src = 0.10 ± 0.07, while the most complex model yields f_src = 0.79 ± 0.24. In conclusion, we find that the statement about the nature of the GCE (dark matter or not) strongly depends on the assumed γEM. The quoted results for f_src do not account for the additional uncertainty arising from the fact that the observed gamma-ray sky is out-of-distribution with respect to the investigated γEM iterations. We quantify the reality gap between our γEMs using deep-learning-based One-Class Deep Support Vector Data Description networks, revealing that all employed γEMs have gaps to reality. Our study casts doubt on the validity of previous conclusions regarding the GCE and dark matter, and underscores the urgent need to account for the reality gap and consider previously overlooked "out of domain" uncertainties in future interpretations.
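The deep-ensemble aggregation behind quoted values such as f_src = 0.10 ± 0.07 can be sketched as follows. The member predictions below are hypothetical numbers, and reporting the ensemble mean with the member spread as the uncertainty is one common convention, not necessarily the paper's exact prescription.

```python
import numpy as np

def ensemble_estimate(member_predictions):
    """Combine per-member predictions of the MSP flux fraction f_src into a
    single estimate: ensemble mean, with the member spread as a (crude)
    uncertainty, mimicking how deep-ensemble outputs are often reported."""
    preds = np.asarray(member_predictions, dtype=float)
    return preds.mean(), preds.std(ddof=1)

# Hypothetical outputs of 10 independently trained network members
# evaluated on the same sky map:
members = [0.08, 0.12, 0.05, 0.15, 0.10, 0.09, 0.13, 0.07, 0.11, 0.10]
mean, spread = ensemble_estimate(members)
print(f"f_src = {mean:.2f} +/- {spread:.2f}")  # prints: f_src = 0.10 +/- 0.03
```

The member-to-member spread captures only model variance; it does not cover the out-of-distribution ("reality gap") uncertainty the abstract emphasizes.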
|
NEXT Collaboration (Azevedo, C. D. R., et al.), Gomez-Cadenas, J. J., Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., et al. (2018). Microscopic simulation of xenon-based optical TPCs in the presence of molecular additives. Nucl. Instrum. Methods Phys. Res. A, 877, 157–172.
Abstract: We introduce a simulation framework for the transport of high and low energy electrons in xenon-based optical time projection chambers (OTPCs). The simulation relies on elementary cross sections (electron-atom and electron-molecule) and incorporates, in order to compute the gas scintillation, the reaction/quenching rates (atom-atom and atom-molecule) of the first 41 excited states of xenon and the relevant associated excimers, together with their radiative cascade. The results compare favourably with observations made in pure xenon and its mixtures with CO2 and CF4 in a range of pressures from 0.1 to 10 bar. This work sheds some light on the elementary processes responsible for the primary and secondary xenon-scintillation mechanisms in the presence of additives, which are of interest to the OTPC technology.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo, F. L., et al. (2021). Measurements of sensor radiation damage in the ATLAS inner detector using leakage currents. J. Instrum., 16(8), P08025–46pp.
Abstract: Non-ionizing energy loss causes bulk damage to the silicon sensors of the ATLAS pixel and strip detectors. This damage has important implications for data-taking operations, charged-particle track reconstruction, detector simulations, and physics analysis. This paper presents simulations and measurements of the leakage current in the ATLAS pixel detector and semiconductor tracker as a function of location in the detector and time, using data collected in Run 1 (2010-2012) and Run 2 (2015-2018) of the Large Hadron Collider. The extracted fluence shows a much stronger |z|-dependence in the innermost layers than is seen in simulation. Furthermore, the overall fluence on the second innermost layer is significantly higher than in simulation, with better agreement in layers at higher radii. These measurements are important for validating the simulation models and can be used in part to justify safety factors for future detector designs and interventions.
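The fluence extraction in this entry rests on the standard linear scaling of bulk leakage current with fluence, ΔI = α · Φ_eq · V. The sketch below uses that relation with an illustrative damage rate α and sensor volume; these are assumed reference numbers, not the paper's measured values.

```python
def leakage_current_increase(fluence_neq_cm2, depleted_volume_cm3, alpha=4e-17):
    """Bulk leakage-current increase from non-ionizing radiation damage,
    using the linear 'current-related damage rate' scaling
    Delta_I = alpha * Phi_eq * V. The value of alpha depends on annealing
    history and temperature; 4e-17 A/cm is a typical room-temperature
    reference value, assumed here for illustration."""
    return alpha * fluence_neq_cm2 * depleted_volume_cm3

# A hypothetical sensor volume at the 10^15 n_eq/cm^2 fluence scale
# quoted for the current pixel detector:
print(leakage_current_increase(1e15, 0.02), "A")
```

Inverting this relation, a measured current increase at known volume and annealing conditions yields the fluence estimates that the paper compares against simulation.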
|
ATLAS Collaboration (Adragna, P., et al.), Castelo, J., Castillo Gimenez, V., Cuenca, C., Ferrer, A., Fullana, E., et al. (2010). Measurement of pion and proton response and longitudinal shower profiles up to 20 nuclear interaction lengths with the ATLAS Tile calorimeter. Nucl. Instrum. Methods Phys. Res. A, 615(2), 158–181.
Abstract: The response of the ATLAS iron-scintillator Tile hadron calorimeter to pions and protons in the energy range of 20-180 GeV, produced at CERN's SPS H8 test-beam line, has been measured. The test-beam configuration allowed the measurement of the longitudinal shower development for pions and protons up to 20 nuclear interaction lengths. It was found that pions penetrate deeper into the calorimeter than protons. However, protons induce showers that are wider in the direction lateral to that of the impinging particle. Together with the measured total energy response, the pion-to-proton energy ratio and the resolution, all observations are consistent with a higher electromagnetic energy fraction in pion-induced showers. The data are compared with GEANT4 simulations using several hadronic physics lists. The measured longitudinal shower profiles are described by an analytical shower parametrization within an accuracy of 5-10%. The amount of energy leaking out behind the calorimeter is determined and parametrized as a function of the beam energy and the calorimeter depth. This allows for a leakage correction of test-beam results in the standard projective geometry.
|
Natochii, A., et al., & Marinas, C. (2023). Measured and projected beam backgrounds in the Belle II experiment at the SuperKEKB collider. Nucl. Instrum. Methods Phys. Res. A, 1055, 168550–21pp.
Abstract: The Belle II experiment at the SuperKEKB electron-positron collider aims to collect an unprecedented data set of 50 ab^-1 to study CP-violation in the B-meson system and to search for physics beyond the Standard Model. SuperKEKB is already the world's highest-luminosity collider. In order to collect the planned data set within approximately one decade, the target is to reach a peak luminosity of 6 x 10^35 cm^-2 s^-1 by further increasing the beam currents and reducing the beam size at the interaction point by squeezing the betatron function down to β_y* = 0.3 mm. To ensure detector longevity and maintain good reconstruction performance, beam backgrounds must remain well controlled. We report on current background rates in Belle II and compare these against simulation. We find that a number of recent refinements have significantly improved the background simulation accuracy. Finally, we estimate the safety margins going forward. We predict that backgrounds should remain high but acceptable until a luminosity of at least 2.8 x 10^35 cm^-2 s^-1 is reached for β_y* = 0.6 mm. At this point, the most vulnerable Belle II detectors, the Time-of-Propagation (TOP) particle identification system and the Central Drift Chamber (CDC), have predicted background hit rates from single-beam and luminosity backgrounds that add up to approximately half of the maximum acceptable rates.
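The kind of projection quoted here, a total hit rate built from a luminosity-independent single-beam term plus a term scaling with luminosity, can be sketched as a simple budget check. All numbers below are illustrative placeholders, not the paper's measured rates or limits.

```python
def projected_rate(lumi, single_beam_rate, rate_per_lumi):
    """Total detector hit rate modelled as a single-beam background plus a
    term scaling linearly with luminosity -- a simplified version of the
    scaling used for such projections (illustrative numbers only)."""
    return single_beam_rate + rate_per_lumi * lumi

# Hypothetical TOP-like budget: stay below a maximum acceptable rate.
max_rate = 10.0     # arbitrary units
single_beam = 2.0   # luminosity-independent term
per_lumi = 1.0      # rate per 10^35 cm^-2 s^-1
lumi = 2.8          # in units of 10^35 cm^-2 s^-1
total = projected_rate(lumi, single_beam, per_lumi)
print(total, total <= max_rate)  # prints: 4.8 True -> roughly half the budget
```

Running the same check across the planned luminosity ramp gives the safety-margin statements the abstract summarizes.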
|
Poley, L., Stolzenberg, U., Schwenker, B., Frey, A., Gottlicher, P., Marinas, C., et al. (2021). Mapping the material distribution of a complex structure in an electron beam. J. Instrum., 16(1), P01010–33pp.
Abstract: The simulation and analysis of High Energy Physics experiments require a realistic description of the detector material and its distribution. The challenge is to describe all active and passive parts of large-scale detectors like ATLAS in terms of their size, position and material composition. The common method of estimating the radiation length by weighing individual components, adding up their contributions and averaging the resulting material distribution over extended structures provides a good general estimate, but can deviate significantly from the material actually present. A method has been developed to assess the material distribution of an object under investigation with high spatial resolution, using the reconstructed scattering angles and hit positions of high-energy electron tracks traversing it. The study presented here shows measurements for an extended structure with a highly inhomogeneous material distribution: an End-of-Substructure-card prototype designed for the ATLAS Inner Tracker strip tracker, a PCB populated with components covering a large range of material budgets and sizes. The measurements summarise the requirements on data samples and reconstructed electron tracks for reliable image reconstruction of large-scale, inhomogeneous samples, the choice of pixel size relative to the size of the features under investigation, as well as a bremsstrahlung correction for high material densities and thicknesses.
|
Roser, J., Barrientos, L., Bernabeu, J., Borja-Lloret, M., Muñoz, E., Ros, A., et al. (2022). Joint image reconstruction algorithm in Compton cameras. Phys. Med. Biol., 67(15), 155009–15pp.
Abstract: Objective. To demonstrate the benefits of a joint image reconstruction algorithm, based on List-Mode Maximum Likelihood Expectation Maximization (LM-MLEM), that combines events measured in different information channels of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm performance. Main results. The obtained joint images present improved image quality and yield better estimates of displacements of high-energy gamma-ray emitting sources. The algorithm also provides images that are more stable than any individual channel against the noisy convergence that characterizes Maximum Likelihood based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It also has high versatility, as it can be easily adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
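The channel pooling in a list-mode MLEM update can be sketched with toy system-model matrices: events from different channels are simply concatenated before iterating, each event carrying its own response row. Real Compton-camera responses, and the paper's exact algorithm, are far more involved; this is only a structural illustration.

```python
import numpy as np

def joint_lm_mlem(event_rows, sensitivity, n_iter=20):
    """List-mode MLEM over events pooled from several Compton-camera
    channels. Each row of `event_rows` holds the system-model probabilities
    t_ij of event i for every image voxel j (toy values, not a real camera
    response). The multiplicative update is
    lam_j <- lam_j / s_j * sum_i t_ij / (sum_k t_ik lam_k)."""
    t = np.asarray(event_rows, dtype=float)
    lam = np.ones(t.shape[1])                # flat initial image
    for _ in range(n_iter):
        proj = t @ lam                       # expected counts per event
        lam *= (t.T @ (1.0 / proj)) / sensitivity
    return lam

# Two channels viewing a 3-voxel image; pooling the second channel's
# events reinforces the estimate around voxel 0.
ch1 = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]
ch2 = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1]]
img = joint_lm_mlem(ch1 + ch2, sensitivity=np.array([1.0, 1.0, 1.0]))
print(np.round(img / img.sum(), 3))
```

With unit sensitivity the update conserves the total counts (the image sums to the number of pooled events), a standard sanity check for MLEM implementations.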
|