ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Barranco Navarro, L., Cabrera Urban, S., Castillo Gimenez, V., Cerda Alberich, L., et al. (2016). A measurement of material in the ATLAS tracker using secondary hadronic interactions in 7 TeV pp collisions. J. Instrum., 11, P11020–41pp.
Abstract: Knowledge of the material in the ATLAS inner tracking detector is crucial to understanding the reconstruction of charged-particle tracks and the performance of algorithms that identify jets containing b-hadrons, and is also essential to reduce background in searches for exotic particles that can decay within the inner detector volume. Interactions of primary hadrons produced in pp collisions with the material in the inner detector are used to map the location and amount of this material. The hadronic interactions of primary particles may result in secondary vertices, which in this analysis are reconstructed by an inclusive vertex-finding algorithm. Data were collected using minimum-bias triggers by the ATLAS detector operating at the LHC during 2010 at a centre-of-mass energy of √s = 7 TeV, and correspond to an integrated luminosity of 19 nb⁻¹. Kinematic properties of these secondary vertices are used to study the validity of the modelling of hadronic interactions in simulation. Secondary-vertex yields are compared between data and simulation over a volume of about 0.7 m³ around the interaction point, and agreement is found within overall uncertainties.
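The yield comparison the abstract describes boils down to histogramming reconstructed secondary-vertex positions and taking data/simulation ratios bin by bin; material excesses or deficits show up as ratios away from unity. Below is a minimal sketch of that idea, with an assumed vertex structure and an illustrative radial binning; none of this is the ATLAS analysis code.

```cpp
// Toy sketch: bin secondary-vertex radii and compare data/simulation yields.
// The Vertex struct, sample values, and binning are illustrative assumptions.
#include <vector>
#include <cmath>
#include <cstdio>

struct Vertex { double x, y, z; };  // hypothetical vertex record, in mm

int main() {
    std::vector<Vertex> data = {{30.0, 5.0, -50.0}, {50.5, 2.0, 120.0}};
    std::vector<Vertex> sim  = {{29.0, 6.0, -48.0}, {51.0, 1.0, 118.0}};

    const int nR = 10;           // radial bins
    const double rMax = 400.0;   // mm, roughly the pixel + SCT region (assumed)
    std::vector<double> nData(nR, 0.0), nSim(nR, 0.0);

    auto fill = [&](const std::vector<Vertex>& vtx, std::vector<double>& h) {
        for (const auto& v : vtx) {
            double r = std::hypot(v.x, v.y);       // transverse radius
            if (r < rMax) h[static_cast<int>(r / rMax * nR)] += 1.0;
        }
    };
    fill(data, nData);
    fill(sim, nSim);

    // A ratio far from 1 in some bin would flag mismodelled material there.
    for (int i = 0; i < nR; ++i)
        if (nSim[i] > 0.0)
            std::printf("r bin %d: data/sim = %.2f\n", i, nData[i] / nSim[i]);
}
```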
Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065–12pp.
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their usage in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, which restricts their applicability. To help overcome these limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework code based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It has been implemented in the C++ programming language and takes advantage of modern object-oriented technologies. In addition, PenRed offers the capability to read and process DICOM images, as well as to construct and simulate image-based voxelized geometries, so as to facilitate its usage in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in simulation time without any loss of precision in the results.
Program summary. Program title: PenRed: Parallel Engine for Radiation Energy Deposition. CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1. Licensing provisions: GNU Affero General Public License (AGPL). Programming language: C++ (2011 standard).
Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications necessitate particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure that allows users to easily customise the code to suit their needs without in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customisable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its physics models and tracking algorithms for charged particles. PenRed has been coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a twofold approach: standard C++ threads for shared memory, improving memory access and usage, and the MPI standard for distributed-memory infrastructures. Both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt our engine to their needs without changing the original modules. Furthermore, user extensions take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming.
Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ (GCC versions 4.8.5, 7.3.1, 8.3.1 and 9), clang version 3.4.2 and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler that supports C++11. In addition, if the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution. Finally, to provide DICOM processing support, PenRed can optionally be compiled with the DICOM toolkit (dcmtk) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the dcmtk library.
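As a rough illustration of the shared-memory half of that twofold parallelism, the sketch below runs independent Monte Carlo histories on C++11 threads, each with its own random-number generator, and reduces the per-thread scores after joining. All names here are hypothetical stand-ins, not the PenRed API.

```cpp
// Minimal C++11 thread-parallel Monte Carlo loop (illustrative only).
// simulateHistory() is a placeholder for tracking one particle history.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

double simulateHistory(std::mt19937_64& rng) {
    // Stand-in for real transport physics: return a scored quantity,
    // e.g. energy deposited by one history.
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng);
}

int main() {
    const std::uint64_t nHistories = 1000000;
    const unsigned nThreads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(nThreads, 0.0);  // one slot per thread
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            std::mt19937_64 rng(12345u + t);     // independent seed per thread
            double sum = 0.0;
            for (std::uint64_t i = t; i < nHistories; i += nThreads)
                sum += simulateHistory(rng);
            partial[t] = sum;                    // no shared writes until here
        });
    }
    for (auto& w : workers) w.join();

    double total = 0.0;
    for (double s : partial) total += s;
    std::cout << "mean score = " << total / nHistories << "\n";
}
```

Because histories are statistically independent, this pattern scales with the thread count and needs no locking; an MPI layer on top would simply partition histories across processes the same way.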
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) → qq̄′, photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results, however, have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time, we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
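Tabulated spectra plus tune variations lend themselves to a simple envelope construction: for each energy bin, the spread of the varied spectra around the central tune gives an uncertainty band. The toy sketch below shows only that bookkeeping, with made-up bin contents standing in for the paper's tables.

```cpp
// Toy envelope of gamma-ray spectra dN/dx over tune variations.
// Bin contents are invented for illustration, not taken from the paper.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // dN/dx in bins of x = E_gamma / m_DM: central tune + two variations.
    std::vector<double> central = {0.10, 0.40, 0.90, 0.60, 0.20};
    std::vector<std::vector<double>> variations = {
        {0.11, 0.43, 0.95, 0.63, 0.22},
        {0.09, 0.37, 0.86, 0.57, 0.18}};

    for (size_t i = 0; i < central.size(); ++i) {
        double lo = central[i], hi = central[i];
        for (const auto& v : variations) {
            lo = std::min(lo, v[i]);
            hi = std::max(hi, v[i]);
        }
        // Envelope of the variations as a conservative per-bin band.
        std::printf("bin %zu: %.2f  [-%.2f, +%.2f]\n",
                    i, central[i], central[i] - lo, hi - central[i]);
    }
}
```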
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo, F. L., et al. (2021). Measurements of sensor radiation damage in the ATLAS inner detector using leakage currents. J. Instrum., 16(8), P08025–46pp.
Abstract: Non-ionizing energy loss causes bulk damage to the silicon sensors of the ATLAS pixel and strip detectors. This damage has important implications for data-taking operations, charged-particle track reconstruction, detector simulations, and physics analysis. This paper presents simulations and measurements of the leakage current in the ATLAS pixel detector and semiconductor tracker as a function of location in the detector and time, using data collected in Run 1 (2010-2012) and Run 2 (2015-2018) of the Large Hadron Collider. The extracted fluence shows a much stronger |z|-dependence in the innermost layers than is seen in simulation. Furthermore, the overall fluence on the second innermost layer is significantly higher than in simulation, with better agreement in layers at higher radii. These measurements are important for validating the simulation models and can be used in part to justify safety factors for future detector designs and interventions.
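Fluence extractions of this kind commonly rest on the linear bulk-damage relation, in which the leakage-current increase scales with the 1 MeV-neutron-equivalent fluence and the depleted volume, ΔI = α Φ_eq V. A toy inversion of that relation is sketched below; the damage constant, volume, and current are illustrative assumptions rather than ATLAS numbers, and α itself depends on temperature and annealing history.

```cpp
// Toy inversion of Delta_I = alpha * Phi_eq * V to estimate fluence
// from a measured leakage-current increase. All values are assumptions.
#include <cstdio>

int main() {
    const double alpha  = 4.0e-17;  // A/cm, typical current-related damage rate (assumed)
    const double volume = 0.25;     // cm^3, depleted sensor volume (assumed)
    const double deltaI = 1.0e-4;   // A, measured leakage-current increase (assumed)

    const double fluence = deltaI / (alpha * volume);  // n_eq / cm^2
    std::printf("Phi_eq ~ %.2e n_eq/cm^2\n", fluence);
}
```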
de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2017). Calculation of the local density of relic neutrinos. J. Cosmol. Astropart. Phys., 09(9), 034–24pp.
Abstract: Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV. We consider the gravitational clustering of relic neutrinos within the Milky Way, and used the N – one-body simulation technique to compute their density enhancement factor in the neighbourhood of the Earth with respect to the average cosmic density. Compared to previous similar studies, we pushed the simulation down to smaller neutrino masses, and included an improved treatment of the baryonic and dark matter distributions in the Milky Way. Our results are important for future experiments aiming at detecting the cosmic neutrino background, such as the Princeton Tritium Observatory for Light, Early-universe, Massive-neutrino Yield (PTOLEMY) proposal. We calculate the impact of neutrino clustering in the Milky Way on the expected event rate for a PTOLEMY-like experiment. We find that the effect of clustering remains negligible for the minimal normal hierarchy scenario, while it enhances the event rate by 10 to 20% (resp. a factor 1.7 to 2.5) for the minimal inverted hierarchy scenario (resp. a degenerate scenario with 150 meV masses). Finally we compute the impact on the event rate of a possible fourth sterile neutrino with a mass of 1.3 eV.
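The quoted enhancements translate directly into event-rate scalings, Γ = f_c Γ₀, where f_c is the local clustering factor. The sketch below simply applies the abstract's factors to a placeholder unclustered rate Γ₀; the paper's actual rate calculation is far more detailed.

```cpp
// Toy scaling of a PTOLEMY-like capture rate by the clustering factors
// quoted in the abstract. gamma0 is a placeholder, not a value from the paper.
#include <cstdio>

int main() {
    const double gamma0 = 1.0;  // arbitrary unclustered event rate (events/yr)

    struct Case { const char* label; double fLo, fHi; };
    const Case cases[] = {
        {"normal hierarchy (minimal)",   1.0, 1.0},   // clustering negligible
        {"inverted hierarchy (minimal)", 1.1, 1.2},   // +10% to +20%
        {"degenerate, 150 meV masses",   1.7, 2.5}};  // factor 1.7 to 2.5

    for (const auto& c : cases)
        std::printf("%-30s: %.2f - %.2f events/yr\n",
                    c.label, gamma0 * c.fLo, gamma0 * c.fHi);
}
```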