Wagner, C., Verde, L., & Boubekeur, L. (2010). N-body simulations with generic non-Gaussian initial conditions I: power spectrum and halo mass function. J. Cosmol. Astropart. Phys., 10(10), 022–24pp.
Abstract: We address the issue of setting up generic non-Gaussian initial conditions for N-body simulations. We consider inflationary-motivated primordial non-Gaussianity, where the perturbations in the Bardeen potential are given by a dominant Gaussian part plus a non-Gaussian part specified by its bispectrum. The approach we explore here is suitable for any bispectrum, i.e. it does not have to be of the so-called separable or factorizable form. The procedure of generating a non-Gaussian field with a given bispectrum (and a given power spectrum for the Gaussian component) is not univocal, and care must be taken so that higher-order corrections do not leave too large a signature on the power spectrum. This is so far a limiting factor of our approach. We then run N-body simulations for the most popular inflationary-motivated non-Gaussian shapes. The halo mass function and the non-linear power spectrum agree with theoretical analytical approximations proposed in the literature, even though these were so far developed and tested only for a particular shape (the local one). We plan to make the simulation outputs available to the community via the non-Gaussian simulations comparison project web site http://icc.ub.edu/~liciaverde/NGSCP.html.
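The local shape is the simplest separable example of such a bispectrum, obtained by squaring the Gaussian potential. A minimal sketch of that special case follows; the field amplitude and f_NL value are toy numbers, and the paper's actual pipeline handles arbitrary non-separable shapes, which this does not:

```python
import numpy as np

def local_ng_potential(phi_gauss, f_nl):
    """Add a local-type non-Gaussian term to a Gaussian Bardeen potential.

    Phi = phi_G + f_NL * (phi_G**2 - <phi_G**2>), the simplest separable
    bispectrum shape; subtracting the mean square keeps <Phi> = <phi_G>.
    """
    return phi_gauss + f_nl * (phi_gauss**2 - np.mean(phi_gauss**2))

# Toy Gaussian realization (white noise; a real setup would impose a
# primordial power spectrum in Fourier space first).
rng = np.random.default_rng(0)
phi = rng.normal(0.0, 1e-5, size=(64, 64, 64))
phi_ng = local_ng_potential(phi, f_nl=100.0)
```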
|
Arnault, P., Macquet, A., Angles-Castillo, A., Marquez-Martin, I., Pina-Canelles, V., Perez, A., et al. (2020). Quantum simulation of quantum relativistic diffusion via quantum walks. J. Phys. A, 53(20), 205303–39pp.
Abstract: Two models are first presented, of a one-dimensional discrete-time quantum walk (DTQW) with temporal noise on the internal degree of freedom (i.e., the coin): (i) a model with both a coin-flip and a phase-flip channel, and (ii) a model with random coin unitaries. It is then shown that both these models admit a common limit in the spacetime continuum, namely, a Lindblad equation with a Dirac-fermion Hamiltonian part and, as Lindblad jumps, a chirality flip and a chirality-dependent phase flip, which are two of the three standard error channels for a two-level quantum system. This equation, which one may call the Dirac Lindblad equation, provides a model of quantum relativistic spatial diffusion, which is evidenced both analytically and numerically. This model of spatial diffusion has the intriguing specificity of making sense only for underlying unitary models that are relativistic in the sense that they have a chirality, on which the noise is introduced: the diffusion arises via the built-in (quantum) coupling of chirality to position. For a particle with vanishing mass, the model of quantum relativistic diffusion introduced in the present work reduces to the well-known telegraph equation, which yields propagation at short times and diffusion at long times, and exhibits no quantumness. Finally, the results are extended to temporal noises which depend smoothly on position.
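A toy version of model (ii) fits in a few lines: a 1D DTQW whose coin angle is jittered independently at every time step. This is a sketch under simplified assumptions (a single real rotation angle with Gaussian jitter), not the paper's exact noise model:

```python
import numpy as np

def dtqw_random_coin(steps=100, size=401, noise=0.3, seed=1):
    """1D discrete-time quantum walk with a randomly jittered coin angle.

    Returns the position probability distribution after `steps` steps.
    The state psi[x, c] carries position x and chirality c in {0, 1}.
    """
    rng = np.random.default_rng(seed)
    psi = np.zeros((size, 2), dtype=complex)
    psi[size // 2, 0] = 1.0  # start localized, chirality 'up'
    for _ in range(steps):
        theta = np.pi / 4 + noise * rng.normal()  # noisy coin angle
        c, s = np.cos(theta), np.sin(theta)
        coin = np.array([[c, s], [s, -c]])        # unitary coin operator
        psi = psi @ coin.T                        # mix chiralities at each site
        psi[:, 0] = np.roll(psi[:, 0], 1)         # 'up' chirality moves right
        psi[:, 1] = np.roll(psi[:, 1], -1)        # 'down' chirality moves left
    return (np.abs(psi) ** 2).sum(axis=1)
```

Each realization is unitary; the noise-averaged dynamics over many coin-angle draws is what becomes diffusive in the continuum limit.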
|
Hornillos, M. B. G., Gorlychev, V., Caballero, R., Cortes, G., Poch, A., Pretel, C., et al. (2011). Monte Carlo Simulations for the Study of a Moderated Neutron Detector. J. Korean Phys. Soc., 59(2), 1573–1576.
Abstract: This work presents the Monte Carlo simulations performed with the MCNPX and GEANT4 codes for the design of a BEta deLayEd Neutron detector, BELEN-20. This detector will be used for the study of beta-delayed neutron emission and consists of a polyethylene block with dimensions 90 × 90 × 80 cm³ and 20 cylindrical ³He gas counters. The results of these simulations have been validated experimentally with a ²⁵²Cf source in the laboratory at UPC, Barcelona. The first experiment with this detector was carried out in November 2009 at JYFL, Finland. In this experiment the neutron emission probability after beta decay of the fission products ⁸⁸Br, ⁹⁴,⁹⁵Rb, and ¹³⁸I was measured; these data are still under analysis. Simulations with MCNPX and GEANT4 have been performed in order to obtain the efficiency of the BELEN-20 detector for each of the above nuclei, using the neutron energy distribution corresponding to each nucleus.
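The efficiency extraction at the end of such a simulation is simple counting. A sketch of the estimate and its binomial uncertainty; the function name and the counts in the usage line are illustrative, not the BELEN-20 numbers:

```python
import math

def mc_efficiency(n_detected, n_simulated):
    """Detection efficiency from a Monte Carlo run.

    Treats each simulated neutron as an independent detect/no-detect
    trial, so the uncertainty is the usual binomial eps*(1-eps)/N.
    """
    eps = n_detected / n_simulated
    sigma = math.sqrt(eps * (1.0 - eps) / n_simulated)
    return eps, sigma

# Illustrative usage: 40k detected out of 100k simulated neutrons.
eps, sigma = mc_efficiency(40_000, 100_000)
```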
|
Oliveira, C. A. B., Sorel, M., Martin-Albo, J., Gomez-Cadenas, J. J., Ferreira, A. L., & Veloso, J. F. C. A. (2011). Energy resolution studies for NEXT. J. Instrum., 6, P05007–13pp.
Abstract: This work aims to present the current state of simulations of electroluminescence (EL) produced in gas-based detectors, with special interest for NEXT (Neutrino Experiment with a Xenon TPC). NEXT is a neutrinoless double beta decay experiment and thus requires excellent energy resolution, which can be achieved by using electroluminescence. The process of light production is reviewed, and properties such as the EL yield and its associated fluctuations, the excitation and electroluminescence efficiencies, and the energy resolution are calculated. An EL production region consisting of a 5 mm gap between two infinite parallel planes, where a uniform electric field is produced, is considered. The pressure and temperature considered are 10 bar and 293 K, respectively. The results show that, even for low values of the VUV photon detection efficiency, good energy resolution can be achieved: below 0.4% FWHM at Qββ = 2.458 MeV.
Keywords: Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators); Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission etc); Large detector systems for particle and astroparticle physics; Time projection chambers
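For context, the Fano-limited resolution of xenon sets the floor such an EL readout tries to approach. A back-of-the-envelope evaluation at the ¹³⁶Xe Q-value; the W-value and Fano factor below are typical literature numbers quoted as assumptions, not inputs taken from the paper:

```python
import math

# Fano-limited fractional energy resolution of gaseous xenon at the
# 136Xe double-beta Q-value. Assumed values, not the paper's inputs:
W_EV = 21.9     # average energy per electron-ion pair in xenon [eV]
FANO = 0.15     # Fano factor for gaseous xenon
Q_BB = 2.458e6  # Q-value of 136Xe double beta decay [eV]

n_pairs = Q_BB / W_EV                      # mean number of ionization electrons
fwhm = 2.355 * math.sqrt(FANO / n_pairs)   # fractional FWHM resolution
print(f"Fano-limited resolution: {100 * fwhm:.2f}% FWHM")
```

The result is a fraction of a percent, consistent with the sub-0.4% figure quoted once EL and photon-detection fluctuations are added on top.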
|
ATLAS Collaboration(Aad, G. et al), Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., Ferrer, A., et al. (2013). Characterisation and mitigation of beam-induced backgrounds observed in the ATLAS detector during the 2011 proton-proton run. J. Instrum., 8, P07004–72pp.
Abstract: This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background contaminated events in data. Trigger-rate based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches is used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which then are exploited to develop dedicated background tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event cleaning techniques for some new physics searches.
|
ATLAS Collaboration(Aad, G. et al), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z -> ττ processes in ATLAS with τ-embedded Z -> μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z -> ττ decays. In Z -> μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by tau leptons from simulated Z -> ττ decays at the level of reconstructed tracks and calorimeter cells. The tau lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and tau leptons as well as the detector response to the tau decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called tau-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z -> ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H -> ττ analysis of the full dataset recorded during 2011 and 2012.
|
Figueroa, D. G., Florio, A., Torrenti, F., & Valkenburg, W. (2023). CosmoLattice: A modern code for lattice simulations of scalar and gauge field dynamics in an expanding universe. Comput. Phys. Commun., 283, 108586–13pp.
Abstract: This paper describes CosmoLattice, a modern package for lattice simulations of the dynamics of interacting scalar and gauge fields in an expanding universe. CosmoLattice incorporates a series of features that make it very versatile and powerful: i) it is written in C++, fully exploiting the object-oriented programming paradigm, with a modular structure and a clear separation between the physics and the technical details; ii) it is MPI-based and uses a discrete Fourier transform parallelized in multiple spatial dimensions, which makes it especially appropriate for probing scenarios with well-separated scales, running very high resolution simulations, or simply very long ones; iii) it introduces its own symbolic language, defining field variables and operations over them, so that one can introduce differential equations and operators in a manner as close as possible to the continuum; iv) it includes a library of numerical algorithms, ranging from O(δt²) to O(δt¹⁰) methods, suitable for simulating global and gauge theories in an expanding grid, including the case of 'self-consistent' expansion sourced by the fields themselves. Relevant observables are provided for each algorithm (e.g. energy densities, field spectra, lattice snapshots) and we note that, remarkably, all our algorithms for gauge theories (Abelian or non-Abelian) always respect the Gauss constraint to machine precision.
Program summary
Program Title: CosmoLattice
CPC Library link to program files: https://doi.org/10.17632/44vr5xssc6.1
Developer's repository link: http://github.com/cosmolattice/cosmolattice
Licensing provisions: MIT
Programming language: C++, MPI
Nature of problem: The phenomenology of high-energy physics in the early universe is typically characterized by non-linear dynamics, which cannot be captured accurately with analytical techniques. In order to fully understand the non-linearities developed in a given scenario, one needs to carry out lattice simulations. A number of public packages for lattice simulations have appeared over the years, but most of them are only capable of simulating scalar fields. However, realistic models of particle physics do contain other kinds of field species, such as (Abelian or non-Abelian) gauge fields, whose non-linear dynamics can also play a relevant role in the early universe. Tensor modes representing gravitational waves are also naturally expected in many scenarios.
Solution method: CosmoLattice represents a modern code for lattice simulations of scalar-gauge field theories in an expanding universe. It allows for the simulation of the evolution of interacting (singlet) scalar fields, of scalar fields charged under U(1) and/or SU(2) gauge groups, and of the corresponding Abelian and/or non-Abelian gauge fields. From version 1.1 onward, CosmoLattice also allows one to simulate the production of gravitational waves. Simulations can be done either in a flat space-time background or in a homogeneous and isotropic (spatially flat) expanding FLRW background. CosmoLattice provides symplectic integrators, with accuracy ranging from O(δt²) up to O(δt¹⁰), to simulate the non-linear dynamics of the appropriate fields on comoving three-dimensional lattices. The code is parallelized with MPI and uses a discrete Fourier transform parallelized in multiple spatial dimensions, which makes it a very powerful code for probing physical problems with well-separated scales. Moreover, the code has been designed as a 'platform' on which to implement any system of dynamical equations suitable for discretization on a lattice.
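As a rough illustration of the lowest-order member of such an integrator family, here is a minimal O(δt²) symplectic (leapfrog) step for a λφ⁴ scalar field on a periodic lattice. This is a toy flat-spacetime sketch, not CosmoLattice's C++ implementation: it ignores expansion and gauge fields entirely.

```python
import numpy as np

def laplacian(f, dx):
    """Nearest-neighbour Laplacian on a periodic 3D lattice."""
    out = -6.0 * f
    for axis in range(3):
        out += np.roll(f, 1, axis) + np.roll(f, -1, axis)
    return out / dx**2

def leapfrog_step(phi, pi, dt, dx, lam):
    """One O(dt^2) symplectic (leapfrog) step for a lambda*phi^4 scalar.

    Kick-drift-kick: half-step the momentum, full-step the field,
    half-step the momentum again. Being symplectic, the energy error
    stays bounded instead of drifting.
    """
    pi = pi + 0.5 * dt * (laplacian(phi, dx) - lam * phi**3)
    phi = phi + dt * pi
    pi = pi + 0.5 * dt * (laplacian(phi, dx) - lam * phi**3)
    return phi, pi
```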
|
Jordan, D., Algora, A., & Tain, J. L. (2016). An event generator for simulations of complex beta-decay experiments. Nucl. Instrum. Methods Phys. Res. A, 828, 52–57.
Abstract: This article describes a Monte Carlo event generator for the design, optimization and performance characterization of beta-decay spectroscopy experimental set-ups. The event generator has been developed within the Geant4 simulation architecture and provides new features and greater flexibility in comparison with the currently available decay generator.
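One ingredient any such generator must supply is the electron energy spectrum of the decay. A toy rejection sampler for an allowed beta shape follows (Fermi function and forbiddenness corrections omitted); it is an illustration of the sampling step only, not the paper's Geant4-based generator:

```python
import numpy as np

ME = 0.511  # electron rest mass [MeV]

def sample_beta_energies(q_value, n, seed=0):
    """Rejection-sample electron kinetic energies T [MeV] from the
    allowed beta spectrum shape p * E * (Q - T)^2, Fermi function omitted."""
    rng = np.random.default_rng(seed)

    def shape(t):
        e = t + ME                    # total energy
        p = np.sqrt(e**2 - ME**2)     # momentum
        return p * e * (q_value - t) ** 2

    t_grid = np.linspace(0.0, q_value, 1000)
    ceiling = 1.05 * shape(t_grid).max()  # envelope with a safety margin
    out = []
    while len(out) < n:
        t = rng.uniform(0.0, q_value, n)
        u = rng.uniform(0.0, ceiling, n)
        out.extend(t[u < shape(t)])
    return np.array(out[:n])
```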
|
XENON Collaboration(Aprile, E. et al), & Orrigo, S. E. A. (2016). Physics reach of the XENON1T dark matter experiment. J. Cosmol. Astropart. Phys., 04(4), 027–37pp.
Abstract: The XENON1T experiment is currently in the commissioning phase at the Laboratori Nazionali del Gran Sasso, Italy. In this article we study the experiment's expected sensitivity to the spin-independent WIMP-nucleon interaction cross section, based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds. The total electronic recoil background in 1 tonne fiducial volume and in the (1, 12) keV electronic recoil equivalent energy region, before applying any selection to discriminate between electronic and nuclear recoils, is (1.80 ± 0.15) × 10⁻⁴ (kg·day·keV)⁻¹, mainly due to the decay of ²²²Rn daughters inside the xenon target. The nuclear recoil background in the corresponding nuclear recoil equivalent energy region, (4, 50) keV, is composed of (0.6 ± 0.1) (t·y)⁻¹ from radiogenic neutrons, (1.8 ± 0.3) × 10⁻² (t·y)⁻¹ from coherent scattering of neutrinos, and less than 0.01 (t·y)⁻¹ from muon-induced neutrons. The sensitivity of XENON1T is calculated with the Profile Likelihood Ratio method, after converting the deposited energy of electronic and nuclear recoils into the scintillation and ionization signals seen in the detector. We take into account the systematic uncertainties on the photon and electron emission model, and on the estimation of the backgrounds, treated as nuisance parameters. The main contribution comes from the relative scintillation efficiency L_eff, which affects both the signal from WIMPs and the nuclear recoil backgrounds. After a 2 y measurement in 1 tonne fiducial volume, the sensitivity reaches a minimum cross section of 1.6 × 10⁻⁴⁷ cm² at m_χ = 50 GeV/c².
Keywords: dark matter simulations; dark matter experiments
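The full analysis profiles nuisance parameters, but the basic scaling can be seen with a zero-background Poisson counting argument: a 90% CL upper limit corresponds to roughly 2.3 expected signal events, so the reachable cross section falls inversely with exposure. A sketch with illustrative numbers, none of them taken from the paper:

```python
def zero_bg_sensitivity(exposure_ty, events_per_ty_at_ref, ref_sigma_cm2,
                        cl_events=2.3):
    """Cross section at which `cl_events` signal events are expected.

    Zero-background Poisson toy: the expected signal count scales
    linearly with both cross section and exposure, so the limit is
    sigma_ref * cl_events / (rate_at_ref * exposure). A hypothetical
    stand-in for a profile-likelihood sensitivity calculation.
    """
    return ref_sigma_cm2 * cl_events / (events_per_ty_at_ref * exposure_ty)

# Illustrative usage: 1 event per tonne-year at a reference cross
# section of 1e-45 cm^2, with a 2 tonne-year exposure.
limit = zero_bg_sensitivity(2.0, 1.0, 1e-45)
```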
|
Mertsch, P., Parimbelli, G., de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2020). Neutrino clustering in the Milky Way and beyond. J. Cosmol. Astropart. Phys., 01(1), 015–23pp.
Abstract: The standard cosmological model predicts the existence of a Cosmic Neutrino Background, which has not yet been observed directly. Some experiments aiming at its detection are currently under development, despite the tiny kinetic energy of the cosmological relic neutrinos, which makes this task incredibly challenging. Since massive neutrinos are attracted by the gravitational potential of our Galaxy, they can cluster locally. Neutrinos should therefore be more abundant at the Earth's position than at an average point in the Universe. This fact may enhance the expected event rate in any future experiment. Past calculations of the local neutrino clustering factor only considered a spherical distribution of matter in the Milky Way and neglected the influence of other nearby objects like the Virgo cluster, although recent N-body simulations suggest that the latter may actually be important. In this paper, we adopt a back-tracking technique, well established in the calculation of cosmic-ray fluxes, to perform the first three-dimensional calculation of the number density of relic neutrinos at the Solar System, taking into account not only the matter composition of the Milky Way, but also the contribution of the Andromeda galaxy and the Virgo cluster. The effect of Virgo is indeed found to be relevant and to depend non-trivially on the value of the neutrino mass. Our results show that the local neutrino density is enhanced by 0.53% for a neutrino mass of 10 meV, by 12% for 50 meV, by 50% for 100 meV, and by 500% for 300 meV.
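The back-tracking idea itself is backward orbit integration: trajectories are traced from the Solar System into the past, and Liouville's theorem then relates the local phase-space density to that of the unclustered background. A minimal sketch in a point-mass potential, a stand-in for the paper's full Milky Way + Andromeda + Virgo mass model, using a time-reversible leapfrog with negative dt:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def back_track(x0, v0, mass, dt=-1e-4, steps=2000):
    """Trace a test particle backwards in time (dt < 0) in the potential
    of a point mass at the origin, with a leapfrog integrator.

    Positions in kpc, velocities in km/s, mass in Msun; the time unit
    is then kpc/(km/s) ~ 0.98 Gyr.
    """
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)

    def acc(x):
        r = np.linalg.norm(x)
        return -G * mass * x / r**3

    for _ in range(steps):            # kick-drift-kick, time-reversible
        v += 0.5 * dt * acc(x)
        x += dt * v
        v += 0.5 * dt * acc(x)
    return x, v
```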
|