Zhang, X., Xiao, Y. T., & Gimeno, B. (2020). Multipactor Suppression by a Resonant Static Magnetic Field on a Dielectric Surface. IEEE Trans. Electron Devices, 67(12), 5723–5728.
Abstract: In this article, we study the suppression of the multipactor phenomenon on a dielectric surface by a resonant static magnetic field. A homemade Monte Carlo algorithm is developed for multipactor simulations on a dielectric surface driven by two orthogonal radio frequency (RF) electric field components. When the static magnetic field is perpendicular to the tangential and normal RF electric fields, it is shown that if the normal electric field lags the tangential electric field by π/2, the superposition of the normal and tangential electric fields triggers a gyro-acceleration of the electron cloud and effectively restrains the multipactor discharge. By contrast, when the normal electric field leads the tangential electric field by π/2, the difference between the normal and tangential electric fields drives gyro-motion of the electron cloud, and two enhanced discharge zones are inevitable. The suppression effects of a resonant static magnetic field parallel to the tangential RF electric field, or to the normal RF electric field, are also presented.
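The resonant mechanism described here, an electron gyrating when the cyclotron frequency of the static magnetic field matches the RF drive, is easy to reproduce in a single-particle sketch. Below is a minimal Boris-type velocity update for a non-relativistic electron in combined E and B fields (illustrative only, with hypothetical names; the authors' homemade Monte Carlo additionally models secondary electron emission and dielectric surface charging):

```python
import math

Q_OVER_M = -1.758820e11  # electron charge-to-mass ratio q/m [C/kg]

def boris_step(v, e_field, b_field, dt):
    """One Boris velocity update: half electric kick, magnetic
    rotation, half electric kick. v, e_field, b_field are 3-vectors."""
    # First half of the electric-field acceleration.
    half = [vi + 0.5 * Q_OVER_M * ei * dt for vi, ei in zip(v, e_field)]
    # Magnetic rotation (exactly norm-preserving in exact arithmetic).
    t = [0.5 * Q_OVER_M * bi * dt for bi in b_field]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    vp = [half[0] + half[1] * t[2] - half[2] * t[1],
          half[1] + half[2] * t[0] - half[0] * t[2],
          half[2] + half[0] * t[1] - half[1] * t[0]]
    rot = [half[0] + vp[1] * s[2] - vp[2] * s[1],
           half[1] + vp[2] * s[0] - vp[0] * s[2],
           half[2] + vp[0] * s[1] - vp[1] * s[0]]
    # Second half of the electric-field acceleration.
    return [vi + 0.5 * Q_OVER_M * ei * dt for vi, ei in zip(rot, e_field)]
```

At resonance the RF angular frequency equals the cyclotron frequency ω_c = |q|B/m, so the phase between the two orthogonal RF components decides whether the rotating fields pump energy into or out of the gyro-motion: the π/2 lag versus lead asymmetry discussed in the abstract.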
|
XENON Collaboration (Aprile, E., et al.), & Orrigo, S. E. A. (2014). Conceptual design and simulation of a water Cherenkov muon veto for the XENON1T experiment. J. Instrum., 9, P11006–20pp.
Abstract: XENON is a dark matter direct detection project, consisting of a time projection chamber (TPC) filled with liquid xenon as detection medium. The construction of the next-generation detector, XENON1T, is presently taking place at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. It aims at a sensitivity to spin-independent cross sections of 2×10⁻⁴⁷ cm² for WIMP masses around 50 GeV/c², which requires a background reduction by two orders of magnitude compared to XENON100, the current-generation detector. An active system that is able to tag muons and muon-induced backgrounds is critical for this goal. A water Cherenkov detector of ~10 m height and diameter has therefore been developed, equipped with 8-inch photomultipliers and clad with a reflective foil. We present the design and optimization study for this detector, which has been carried out with a series of Monte Carlo simulations. The muon veto will reach very high detection efficiencies for muons (>99.5%) and for showers of secondary particles from muon interactions in the rock (>70%). Similar efficiencies will be obtained for XENONnT, the upgrade of XENON1T, which will later improve the WIMP sensitivity by another order of magnitude. With the Cherenkov water shield studied here, the background from muon-induced neutrons in XENON1T is negligible.
|
XENON Collaboration (Aprile, E., et al.), & Orrigo, S. E. A. (2016). Physics reach of the XENON1T dark matter experiment. J. Cosmol. Astropart. Phys., 04(4), 027–37pp.
Abstract: The XENON1T experiment is currently in the commissioning phase at the Laboratori Nazionali del Gran Sasso, Italy. In this article we study the experiment's expected sensitivity to the spin-independent WIMP-nucleon interaction cross section, based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds. The total electronic recoil background in 1 tonne fiducial volume and (1, 12) keV electronic recoil equivalent energy region, before applying any selection to discriminate between electronic and nuclear recoils, is (1.80 ± 0.15)×10⁻⁴ (kg·day·keV)⁻¹, mainly due to the decay of Rn-222 daughters inside the xenon target. The nuclear recoil background in the corresponding nuclear recoil equivalent energy region, (4, 50) keV, is composed of (0.6 ± 0.1) (t·y)⁻¹ from radiogenic neutrons, (1.8 ± 0.3)×10⁻² (t·y)⁻¹ from coherent scattering of neutrinos, and less than 0.01 (t·y)⁻¹ from muon-induced neutrons. The sensitivity of XENON1T is calculated with the Profile Likelihood Ratio method, after converting the deposited energy of electronic and nuclear recoils into the scintillation and ionization signals seen in the detector. We take into account the systematic uncertainties on the photon and electron emission model, and on the estimation of the backgrounds, treated as nuisance parameters. The main contribution comes from the relative scintillation efficiency L_eff, which affects both the signal from WIMPs and the nuclear recoil backgrounds. After a 2 y measurement in 1 tonne fiducial volume, the sensitivity reaches a minimum cross section of 1.6×10⁻⁴⁷ cm² at m_χ = 50 GeV/c².
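The Profile Likelihood Ratio method can be illustrated on the simplest counting experiment. The sketch below (hypothetical names; a single Poisson bin with no nuisance parameters, far simpler than the actual XENON1T likelihood) scans the test statistic until it crosses the asymptotic one-sided 90% CL threshold:

```python
import math

def nll(s, b, n):
    """Poisson negative log-likelihood for expectation s + b and
    observed count n (constant n! term dropped)."""
    mu = s + b
    return mu - (n * math.log(mu) if n > 0 else 0.0)

def q_stat(s, b, n):
    """Likelihood-ratio test statistic q(s) = 2 [NLL(s) - NLL(s_hat)].
    With no nuisance parameters, 'profiling' reduces to the global
    best fit s_hat = max(n - b, 0)."""
    s_hat = max(n - b, 0.0)
    return 2.0 * (nll(s, b, n) - nll(s_hat, b, n))

def upper_limit(b, n, q_crit=2.71, s_max=50.0, tol=1e-6):
    """Bisect for the signal strength where q(s) crosses q_crit
    (2.71 is the asymptotic one-sided 90% CL threshold)."""
    lo, hi = max(n - b, 0.0), s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if q_stat(mid, b, n) < q_crit:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For b = 0 and n = 0 the statistic is q(s) = 2s, giving an upper limit of about 1.36 expected signal events. The real analysis instead profiles nuisance parameters such as L_eff at each tested cross section.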
Keywords: dark matter simulations; dark matter experiments
|
Wagner, C., Verde, L., & Boubekeur, L. (2010). N-body simulations with generic non-Gaussian initial conditions I: power spectrum and halo mass function. J. Cosmol. Astropart. Phys., 10(10), 022–24pp.
Abstract: We address the issue of setting up generic non-Gaussian initial conditions for N-body simulations. We consider inflationary-motivated primordial non-Gaussianity where the perturbations in the Bardeen potential are given by a dominant Gaussian part plus a non-Gaussian part specified by its bispectrum. The approach we explore here is suitable for any bispectrum, i.e. it does not have to be of the so-called separable or factorizable form. The procedure of generating a non-Gaussian field with a given bispectrum (and a given power spectrum for the Gaussian component) is not unique, and care must be taken so that higher-order corrections do not leave too large a signature on the power spectrum. This is so far a limiting factor of our approach. We then run N-body simulations for the most popular inflationary-motivated non-Gaussian shapes. The halo mass function and the non-linear power spectrum agree with theoretical analytical approximations proposed in the literature, even though these had so far been developed and tested only for a particular shape (the local one). We plan to make the simulation outputs available to the community via the non-Gaussian simulations comparison project web site http://icc.ub.edu/~liciaverde/NGSCP.html.
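For the special case of local-type non-Gaussianity, the construction reduces to a pointwise transform of the Gaussian potential, Φ = φ + f_NL (φ² − ⟨φ²⟩); the generic-bispectrum case treated in the paper instead works on Fourier modes. A toy one-dimensional sketch of the local transform (hypothetical names; unit-variance field):

```python
import random

def local_ng_field(n, f_nl, seed=0):
    """Draw a Gaussian field phi and apply the local-type map
    Phi = phi + f_NL * (phi**2 - <phi**2>), which makes the
    quadratic correction zero-mean by construction."""
    rng = random.Random(seed)
    phi = [rng.gauss(0.0, 1.0) for _ in range(n)]
    var = sum(p * p for p in phi) / n  # sample <phi^2>
    return [p + f_nl * (p * p - var) for p in phi]
```

Subtracting ⟨φ²⟩ keeps the mean of Φ equal to the mean of φ, but the quadratic term still contributes at second order to the power spectrum, which is the "too large a signature" issue the abstract warns about.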
|
Villaescusa-Navarro, F., & Dalal, N. (2011). Cores and cusps in warm dark matter halos. J. Cosmol. Astropart. Phys., 03(3), 024–16pp.
Abstract: The apparent presence of large core radii in Low Surface Brightness galaxies has been claimed as evidence in favor of warm dark matter. Here we show that WDM halos do not have cores that are large fractions of the halo size: typically, r_core/r_200 ≲ 10⁻³. This suggests an astrophysical origin for the large cores observed in these galaxies, as has been argued by other authors.
|
Villaescusa-Navarro, F., et al., & Villanueva-Domingo, P. (2023). The CAMELS Project: Public Data Release. Astrophys. J. Suppl. Ser., 265(2), 54–14pp.
Abstract: The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project was developed to combine cosmology with astrophysics through thousands of cosmological hydrodynamic simulations and machine learning. CAMELS contains 4233 cosmological simulations: 2049 N-body simulations and 2184 state-of-the-art hydrodynamic simulations that sample a vast volume in parameter space. In this paper, we present the CAMELS public data release, describing the characteristics of the CAMELS simulations and a variety of data products generated from them, including halo, subhalo, galaxy, and void catalogs, power spectra, bispectra, Lyα spectra, probability distribution functions, halo radial profiles, and X-ray photon lists. We also release over 1000 catalogs that contain billions of galaxies from CAMELS-SAM: a large collection of N-body simulations that have been combined with the Santa Cruz semianalytic model. We release all the data, comprising more than 350 terabytes and containing 143,922 snapshots, millions of halos and galaxies, and summary statistics. We provide further technical details on how to access, download, read, and process the data at .
|
Valdes-Cortez, C., Mansour, I., Rivard, M. J., Ballester, F., Mainegra-Hing, E., Thomson, R. M., et al. (2021). A study of Type B uncertainties associated with the photoelectric effect in low-energy Monte Carlo simulations. Phys. Med. Biol., 66(10), 105014–14pp.
Abstract: Purpose. To estimate Type B uncertainties in absorbed-dose calculations arising from the different implementations of low-energy photon cross-sections (<200 keV) in current state-of-the-art Monte Carlo (MC) codes. Methods. MC simulations are carried out using three codes widely used in the low-energy domain: PENELOPE-2018, EGSnrc, and MCNP. Three dosimetry-relevant quantities are considered: mass energy-absorption coefficients for water, air, graphite, and their respective ratios; absorbed dose; and photon-fluence spectra. The absorbed dose and the photon-fluence spectra are scored in a spherical water phantom of 15 cm radius. Benchmark simulations using similar cross-sections have been performed. The differences observed between these quantities when different cross-sections are considered are taken to be a good estimator for the corresponding Type B uncertainties. Results. A conservative Type B uncertainty for the absorbed dose (k = 2) of 1.2%-1.7% (<50 keV), 0.6%-1.2% (50-100 keV), and 0.3% (100-200 keV) is estimated. The photon-fluence spectrum does not present clinically relevant differences that merit considering additional Type B uncertainties, except for energies below 25 keV, where a Type B uncertainty of 0.5% is obtained. Below 30 keV, mass energy-absorption coefficients show Type B uncertainties (k = 2) of about 1.5% (water and air) and 2% (graphite), diminishing in all materials at larger energies and reaching values of about 1% (40-50 keV) and 0.5% (50-75 keV). With respect to their ratios, the only significant Type B uncertainties are observed for the water-to-graphite ratio at energies below 30 keV, being about 0.7% (k = 2). Conclusions. In contrast with the intermediate (about 500 keV) or high (about 1 MeV) energy domains, Type B uncertainties due to the different cross-section implementations cannot be considered subdominant with respect to Type A uncertainties, or even to other sources of Type B uncertainties (tally volume averaging, manufacturing tolerances, etc.). Therefore, the values reported here should be accommodated within the uncertainty budget in low-energy photon dosimetry studies.
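A practical reading of these numbers: the quoted k = 2 Type B components enter a dosimetry uncertainty budget through a GUM-style quadrature sum with the Type A (statistical) uncertainty. A minimal sketch (hypothetical names; the paper's k = 2 values must first be rescaled to standard, k = 1, uncertainties):

```python
import math

def expanded_uncertainty(u_a, u_b_list, k=2.0):
    """Combine one Type A standard uncertainty with a list of Type B
    standard uncertainties in quadrature, then expand by coverage
    factor k. All inputs are relative standard (k = 1) uncertainties."""
    u_c = math.sqrt(u_a ** 2 + sum(u ** 2 for u in u_b_list))
    return k * u_c

# Example: 0.3% Type A, plus the ~1.2% (k = 2) cross-section Type B
# component from this study rescaled to k = 1 (i.e. 0.6%).
budget = expanded_uncertainty(0.3, [1.2 / 2.0])
```

Here `budget` is the expanded (k = 2) relative uncertainty in percent; the paper's point is that the cross-section term is no longer negligible next to the Type A term at low energies.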
|
Tain, J. L., Agramunt, J., Algora, A., Aprahamian, A., Cano-Ott, D., Fraile, L. M., et al. (2015). The sensitivity of LaBr3:Ce scintillation detectors to low energy neutrons: Measurement and Monte Carlo simulation. Nucl. Instrum. Methods Phys. Res. A, 774, 17–24.
Abstract: The neutron sensitivity of a cylindrical ⌀1.5 in × 1.5 in LaBr3:Ce scintillation detector was measured using quasi-monoenergetic neutron beams in the energy range from 40 keV to 2.5 MeV. In this energy range the detector is sensitive to gamma-rays generated in neutron inelastic and capture processes. The experimental energy response was compared with Monte Carlo simulations performed with the Geant4 simulation toolkit using the so-called High Precision Neutron Models. These models rely on relevant information stored in evaluated nuclear data libraries. The performance of the Geant4 Neutron Data Library, as well as of several standard nuclear data libraries, was investigated. In the latter case this was made possible by a conversion tool that allows the direct use of data from other libraries in Geant4. Overall, good agreement with experiment was found for some of the neutron databases, such as ENDF/B-VII.0 and JENDL-3.3, but not for others, such as ENDF/B-VI.8 and JEFF-3.1.
|
Roser, J., Muñoz, E., Barrientos, L., Barrio, J., Bernabeu, J., Borja-Lloret, M., et al. (2020). Image reconstruction for a multi-layer Compton telescope: an analytical model for three interaction events. Phys. Med. Biol., 65(14), 145005–17pp.
Abstract: Compton Cameras are electronically collimated photon imagers suitable for sub-MeV to few-MeV gamma-ray detection. Such features are desirable to enable in vivo range verification in hadron therapy, through the detection of secondary Prompt Gammas. A major concern with this technique is the poor image quality obtained when the incoming gamma-ray energy is unknown. Compton Cameras with more than two detector planes (multi-layer Compton Cameras) have been proposed as a solution, given that these devices add more interaction sequences to the conventional two-interaction events. In particular, three-interaction events convey more spectral information, as they allow inferring the incident gamma-ray energy directly. A three-layer Compton Telescope based on continuous Lanthanum(III) Bromide crystals coupled to Silicon Photomultipliers is being developed at the IRIS group of IFIC-Valencia. In a previous work we proposed a spectral reconstruction algorithm for two-interaction events based on an analytical model for the formation of the signal. To fully exploit the capabilities of our prototype, we present here an extension of the model to three-interaction events. Analytical expressions for the sensitivity and the System Matrix are derived and validated against Monte Carlo simulations. Implemented in a List Mode Maximum Likelihood Expectation Maximization algorithm, the proposed model allows us to obtain four-dimensional (energy and position) images using exclusively three-interaction events. We are able to recover the correct spectrum and spatial distribution of gamma-ray sources when ideal data are employed. However, the uncertainties associated with experimental measurements result in a degradation when real data from complex structures are employed. Incorrect estimation of the incident gamma-ray interaction positions, and missing deposited energy associated with escaping secondaries, have been identified as the causes of this degradation by means of a detailed Monte Carlo study. As expected, our current experimental resolution and efficiency for three-interaction events prevent us from correctly recovering complex structures of radioactive sources. However, given the better spectral information conveyed by three-interaction events, we expect an improvement in the image quality of conventional Compton imaging when such events are included. In this regard, future development includes combining the model assessed in this work with the two-interaction events model, in order to use two- and three-interaction events simultaneously in the image reconstruction.
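The way a three-interaction event fixes the incident energy can be made explicit with Compton kinematics: the deposit at the second interaction, together with the scattering angle there (determined by the three interaction positions), yields the energy of the photon escaping toward the third interaction, so full absorption is not required. A minimal sketch under those assumptions (hypothetical names; not the IRIS prototype code, which also folds in detector response):

```python
import math

ME_C2 = 0.511  # electron rest energy [MeV]

def scatter_angle(p1, p2, p3):
    """Angle at p2 between the incoming (p1 -> p2) and outgoing
    (p2 -> p3) photon directions."""
    a = [p2[i] - p1[i] for i in range(3)]
    b = [p3[i] - p2[i] for i in range(3)]
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def incident_energy(e1, e2, p1, p2, p3):
    """Incident gamma-ray energy from a three-interaction event:
    deposits e1, e2 at p1, p2; the Compton relation at the second
    scatter gives the energy of the photon leaving toward p3."""
    kappa = (1.0 - math.cos(scatter_angle(p1, p2, p3))) / ME_C2
    # Solve kappa*E^2 + kappa*e2*E - e2 = 0 for the outgoing energy E.
    e_out = 0.5 * (-e2 + math.sqrt(e2 * e2 + 4.0 * e2 / kappa))
    return e1 + e2 + e_out
```

Two-interaction events lack this handle: without the reconstructed outgoing energy, the incident energy is unknown unless the photon is fully absorbed.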
|
Roser, J., Barrientos, L., Bernabeu, J., Borja-Lloret, M., Muñoz, E., Ros, A., et al. (2022). Joint image reconstruction algorithm in Compton cameras. Phys. Med. Biol., 67(15), 155009–15pp.
Abstract: Objective. To demonstrate the benefits of a joint image reconstruction algorithm, based on List Mode Maximum Likelihood Expectation Maximization, that combines events measured in different channels of information of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm's performance. Main results. The obtained joint images present improved image quality and yield better estimates of displacements of high-energy gamma-ray emitting sources. The algorithm also provides images that are more stable than any individual channel against the noisy convergence that characterizes Maximum Likelihood based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It is also highly versatile, as it can easily be adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
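The List Mode MLEM iteration underlying the joint algorithm can be sketched in a few lines (hypothetical names; a toy dense system matrix stands in for the camera's analytical model). Loosely speaking, joint reconstruction pools the event lists of the different channels into one backprojection sum while the channels' per-voxel sensitivities are combined:

```python
def mlem_update(lam, sysmat, sens):
    """One List Mode MLEM iteration.

    lam    : current image estimate (list of J voxel intensities)
    sysmat : one row per detected event; sysmat[i][j] is the
             probability that event i originated in voxel j
    sens   : per-voxel sensitivity (expected detections per unit
             intensity), e.g. summed over all pooled channels
    """
    J = len(lam)
    back = [0.0] * J
    for row in sysmat:
        denom = sum(row[j] * lam[j] for j in range(J))
        if denom <= 0.0:
            continue  # event incompatible with current image
        for j in range(J):
            back[j] += row[j] / denom
    return [lam[j] * back[j] / sens[j] if sens[j] > 0.0 else 0.0
            for j in range(J)]
```

Each channel contributes rows built from its own model (e.g. two- versus three-interaction events), so pooling them can stabilize voxels that a single channel constrains poorly.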
|