Mertsch, P., Parimbelli, G., de Salas, P. F., Gariazzo, S., Lesgourgues, J., & Pastor, S. (2020). Neutrino clustering in the Milky Way and beyond. J. Cosmol. Astropart. Phys., 01(1), 015–23pp.
Abstract: The standard cosmological model predicts the existence of a Cosmic Neutrino Background, which has not yet been observed directly. Some experiments aiming at its detection are currently under development, despite the tiny kinetic energy of the cosmological relic neutrinos, which makes this task incredibly challenging. Since massive neutrinos are attracted by the gravitational potential of our Galaxy, they can cluster locally, so neutrinos should be more abundant at the Earth's position than at an average point in the Universe. This fact may enhance the expected event rate in any future experiment. Past calculations of the local neutrino clustering factor only considered a spherical distribution of matter in the Milky Way and neglected the influence of other nearby objects such as the Virgo cluster, although recent N-body simulations suggest that the latter may actually be important. In this paper, we adopt a back-tracking technique, well established in the calculation of cosmic-ray fluxes, to perform the first three-dimensional calculation of the number density of relic neutrinos at the Solar System, taking into account not only the matter composition of the Milky Way but also the contributions of the Andromeda galaxy and the Virgo cluster. The effect of Virgo is indeed found to be relevant and to depend non-trivially on the value of the neutrino mass. Our results show that the local neutrino density is enhanced by 0.53% for a neutrino mass of 10 meV, 12% for 50 meV, 50% for 100 meV, and 500% for 300 meV.
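The back-tracking idea summarized above can be sketched in a few lines. The toy below uses a spherical Milky Way potential only, with illustrative NFW-style parameters, and none of the paper's Andromeda/Virgo contributions: it integrates a neutrino trajectory backwards from the Solar position with SciPy; by Liouville's theorem, today's phase-space density equals the homogeneous Fermi-Dirac distribution evaluated at the back-tracked momentum, so integrating over today's momenta yields the local clustering factor.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative NFW-style Milky Way parameters (toy values, spherical only)
G = 4.30091e-6                        # kpc (km/s)^2 / Msun
M_VIR, R_S, C = 1.0e12, 20.0, 10.0    # virial mass [Msun], scale radius [kpc], concentration

def accel(r_vec):
    """Gravitational acceleration from the enclosed NFW mass (toy model)."""
    r = np.linalg.norm(r_vec)
    m_enc = M_VIR * (np.log(1 + r / R_S) - (r / R_S) / (1 + r / R_S)) \
                  / (np.log(1 + C) - C / (1 + C))
    return -G * m_enc / r**3 * r_vec

def rhs(t, y):
    # y = (position [kpc], velocity [km/s]); time unit is kpc/(km/s) ~ 0.98 Gyr
    return np.concatenate([y[3:], accel(y[:3])])

def back_track(v_today, r0=(8.0, 0.0, 0.0), t_end=-10.0):
    """Integrate a neutrino trajectory backwards from the Solar position.

    By Liouville's theorem, today's phase-space density at (r0, v_today)
    equals the homogeneous Fermi-Dirac value at the returned early-time
    velocity; averaging over today's momenta gives the clustering factor.
    """
    y0 = np.concatenate([np.asarray(r0, float), np.asarray(v_today, float)])
    sol = solve_ivp(rhs, (0.0, t_end), y0, rtol=1e-6, atol=1e-6)
    return sol.y[3:, -1]

v_early = back_track([0.0, 200.0, 0.0])   # 200 km/s tangential neutrino today
```

The paper's full calculation replaces this single spherical profile with a three-dimensional mass model and repeats the back-tracking over a grid of present-day momenta.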
Moline, A., Schewtschenko, J. A., Palomares-Ruiz, S., Boehm, C., & Baugh, C. M. (2016). Isotropic extragalactic flux from dark matter annihilations: lessons from interacting dark matter scenarios. J. Cosmol. Astropart. Phys., 08(8), 069–23pp.
Abstract: The extragalactic gamma-ray and neutrino emission may have a contribution from dark matter (DM) annihilations. In the case of discrepancies between observations and standard predictions, one could infer the DM pair annihilation cross section into cosmic rays by studying the shape of the energy spectrum. So far all analyses of the extragalactic DM signal have assumed the standard cosmological model (ΛCDM) as the underlying theory. However, there are alternative DM scenarios where the number of low-mass objects is significantly suppressed. As a result, the characteristics of the gamma-ray and neutrino emission in these models may differ from ΛCDM. Here we show that the extragalactic isotropic signal in these alternative models has a similar energy dependence to that in ΛCDM, but the overall normalisation is reduced. The similarities between the energy spectra combined with the flux suppression could lead one to misinterpret possible evidence for models beyond ΛCDM as being due to CDM particles annihilating with a much weaker cross section than expected.
Moline, A., Ibarra, A., & Palomares-Ruiz, S. (2015). Future sensitivity of neutrino telescopes to dark matter annihilations from the cosmic diffuse neutrino signal. J. Cosmol. Astropart. Phys., 06(6), 005–34pp.
Abstract: Cosmological observations and cold dark matter N-body simulations indicate that our Universe is populated by numerous halos, where dark matter particles annihilate, potentially producing Standard Model particles. In this paper we calculate the contribution to the diffuse neutrino background from dark matter annihilations in halos at all redshifts, and we estimate the future sensitivity to the annihilation cross section of neutrino telescopes such as IceCube or ANTARES. We consider various parametrizations of the internal halo properties and of the halo mass function in order to bracket the theoretical uncertainty in the limits arising from the modeling of the cosmological annihilation flux. We find that observations of the cosmic diffuse neutrino flux at large angular distances from the galactic center lead to constraints on the dark matter annihilation cross section which are complementary to (and, for some extrapolations of the astrophysical parameters, better than) those stemming from observations of the Milky Way halo, especially for neutrino telescopes not pointing directly at the Milky Way center, as is the case for IceCube.
Muñoz, E., Barrio, J., Bernabeu, J., Etxebeste, A., Lacasta, C., Llosa, G., et al. (2018). Study and comparison of different sensitivity models for a two-plane Compton camera. Phys. Med. Biol., 63(13), 135004–19pp.
Abstract: Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with Na-22 sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
Keywords: Compton camera imaging; MLEM; Monte Carlo simulations; image quality
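The sensitivity matrix enters iterative reconstruction through the standard ML-EM update, which the sketch below illustrates generically in numpy; the paper's analytical two-plane sensitivity model itself is not reproduced here, and the tiny system matrix is made up for illustration.

```python
import numpy as np

def mlem(A, y, s, n_iter=50):
    """ML-EM image reconstruction with sensitivity correction.

    A : (n_meas, n_vox) system matrix, A[i, j] = P(event i | emission in voxel j)
    y : measured counts per event bin
    s : per-voxel sensitivity, s[j] = sum_i A[i, j] (probability that an
        emission from voxel j is detected at all) -- the quantity a
        Compton-camera sensitivity model has to supply
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12          # guard against division by zero
        x *= (A.T @ (y / proj)) / s      # sensitivity-corrected multiplicative update
    return x

# Tiny made-up example: 2 voxels, 3 measurement bins, noiseless data
A = np.array([[0.5, 0.1], [0.3, 0.3], [0.1, 0.5]])
s = A.sum(axis=0)
x_true = np.array([2.0, 1.0])
y = A @ x_true
x_rec = mlem(A, y, s, n_iter=500)
```

With noiseless, consistent data the iteration converges to the true activity; a biased sensitivity s distorts the recovered intensities, which is why the choice of sensitivity model matters.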
Natochii, A., et al., & Marinas, C. (2023). Measured and projected beam backgrounds in the Belle II experiment at the SuperKEKB collider. Nucl. Instrum. Methods Phys. Res. A, 1055, 168550–21pp.
Abstract: The Belle II experiment at the SuperKEKB electron-positron collider aims to collect an unprecedented data set of 50 ab^-1 to study CP-violation in the B-meson system and to search for physics beyond the Standard Model. SuperKEKB is already the world's highest-luminosity collider. In order to collect the planned data set within approximately one decade, the target is to reach a peak luminosity of 6 x 10^35 cm^-2 s^-1 by further increasing the beam currents and reducing the beam size at the interaction point by squeezing the betatron function down to beta_y* = 0.3 mm. To ensure detector longevity and maintain good reconstruction performance, beam backgrounds must remain well controlled. We report on current background rates in Belle II and compare these against simulation. We find that a number of recent refinements have significantly improved the background simulation accuracy. Finally, we estimate the safety margins going forward. We predict that backgrounds should remain high but acceptable until a luminosity of at least 2.8 x 10^35 cm^-2 s^-1 is reached for beta_y* = 0.6 mm. At this point, the most vulnerable Belle II detectors, the Time-of-Propagation (TOP) particle identification system and the Central Drift Chamber (CDC), have predicted background hit rates from single-beam and luminosity backgrounds that add up to approximately half of the maximum acceptable rates.
Keywords: Detector background; Lepton collider; Monte-Carlo simulation
Nzongani, U., Zylberman, J., Doncecchi, C. E., Perez, A., Debbasch, F., & Arnault, P. (2023). Quantum circuits for discrete-time quantum walks with position-dependent coin operator. Quantum Inf. Process., 22(7), 270–46pp.
Abstract: The aim of this paper is to build quantum circuits that implement discrete-time quantum walks having an arbitrary position-dependent coin operator. The position of the walker is encoded in base 2: with n wires, each corresponding to one qubit, we encode 2^n position states. The data necessary to define an arbitrary position-dependent coin operator is therefore exponential in n. Hence, the exponentiality will necessarily appear somewhere in our circuits. We first propose a circuit implementing the position-dependent coin operator that is naive, in the sense that it has exponential depth and implements sequentially all appropriate position-dependent coin operators. We then propose a circuit that “transfers” all the depth into ancillae, yielding a final depth that is linear in n at the cost of an exponential number of ancillae. The main idea of this linear-depth circuit is to implement in parallel all coin operators at the different positions. Reducing the depth exponentially at the cost of having an exponential number of ancillae is a goal which has already been achieved for the problem of loading classical data on a quantum circuit (Araujo in Sci Rep 11:6329, 2021) (notice that such a circuit can be used to load the initial state of the walker). Here, we achieve this goal for the problem of applying a position-dependent coin operator in a discrete-time quantum walk. Finally, we extend the result of Welch (New J Phys 16:033040, 2014) from position-dependent unitaries which are diagonal in the position basis to position-dependent 2 x 2-block-diagonal unitaries: indeed, we show that for a position dependence of the coin operator (the block-diagonal unitary) which is smooth enough, one can find an efficient quantum-circuit implementation approximating the coin operator up to an error epsilon (in terms of the spectral norm), the depth and size of which scale as O(1/epsilon).
A typical application of the efficient implementation would be the quantum simulation of a relativistic spin-1/2 particle on a lattice, coupled to a smooth external gauge field; notice that recently, quantum spatial-search schemes have been developed which use gauge fields as the oracle, to mark the vertex to be found (Zylberman in Entropy 23:1441, 2021), (Fredon arXiv:2210.13920). A typical application of the linear-depth circuit would be when there is spatial noise on the coin operator (and hence a non-smooth dependence in the position).
Keywords: Quantum walks; Quantum circuits; Quantum simulation
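The walk that these circuits implement is cheap to simulate classically with state vectors. Below is a minimal numpy sketch of a discrete-time quantum walk with a position-dependent coin (illustrative coin angles and a periodic lattice; this is a direct state-vector simulation, not the paper's qubit encoding or circuits):

```python
import numpy as np

def step(psi, coins):
    """One DTQW step: position-dependent coin, then coin-conditioned shift.

    psi   : (N, 2) complex amplitudes, psi[x, c] for position x, coin state c
    coins : (N, 2, 2) one unitary per position (the position-dependent coin)
    """
    psi = np.einsum('xab,xb->xa', coins, psi)   # apply the local coin at each site
    out = np.zeros_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)          # coin state 0 moves one site left
    out[:, 1] = np.roll(psi[:, 1], +1)          # coin state 1 moves one site right
    return out

N = 64
theta = 2 * np.pi * np.arange(N) / N            # smooth position dependence (illustrative)
coins = np.array([[[np.cos(t), np.sin(t)],
                   [np.sin(t), -np.cos(t)]] for t in theta])

psi = np.zeros((N, 2), dtype=complex)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # localized, balanced initial state
for _ in range(20):
    psi = step(psi, coins)
```

Since each local coin is unitary and the shift is a permutation, the total probability is conserved exactly; a smooth theta(x) is precisely the regime where the paper's O(1/epsilon) approximate circuit applies.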
Olivares Herrador, J., Latina, A., Aksoy, A., Fuster Martinez, N., Gimeno, B., & Esperante, D. (2024). Implementation of the beam-loading effect in the tracking code RF-track based on a power-diffusive model. Front. Physics, 12, 1348042–11pp.
Abstract: The need to achieve high energies in particle accelerators has led to the development of new accelerator technologies, resulting in higher beam intensities and more compact devices with stronger accelerating fields. In such scenarios, beam-loading effects occur, and intensity-dependent gradient reduction affects the accelerated beam as a consequence of its interaction with the surrounding cavity. In this study, a power-diffusive partial differential equation is derived to account for this effect. Its numerical resolution has been implemented in the tracking code RF-Track, allowing the simulation of apparatuses where transient beam loading plays an important role. Finally, measurements of this effect have been carried out in the CERN Linear Electron Accelerator for Research (CLEAR) facility at CERN, finding good agreement with the RF-Track simulations.
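The abstract does not state the explicit form of the power-diffusive equation, so the sketch below is only a generic illustration of how a diffusion-type PDE can be advanced numerically: a standard explicit (FTCS) finite-difference scheme with made-up coefficients and units, not the RF-Track implementation.

```python
import numpy as np

def diffuse(u, D, dz, dt, steps):
    """Explicit FTCS update for du/dt = D * d2u/dz2 (toy coefficients).

    Stability of the explicit scheme requires dt <= dz**2 / (2 * D).
    """
    u = u.copy()
    r = D * dt / dz**2
    assert r <= 0.5, "FTCS unstable: reduce dt"
    for _ in range(steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        lap[0] = lap[-1] = 0.0        # hold the two ends fixed (Dirichlet-like)
        u += r * lap
    return u

z = np.linspace(0.0, 1.0, 101)                 # normalized structure coordinate
u0 = np.exp(-((z - 0.5) / 0.05) ** 2)          # illustrative initial power profile
u = diffuse(u0, D=1.0, dz=z[1] - z[0], dt=4e-5, steps=500)
```

Diffusion smears and lowers the initial peak while keeping the profile non-negative, which is the qualitative behaviour any such scheme must reproduce before being coupled to beam tracking.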
Oliveira, C. A. B., Sorel, M., Martin-Albo, J., Gomez-Cadenas, J. J., Ferreira, A. L., & Veloso, J. F. C. A. (2011). Energy resolution studies for NEXT. J. Instrum., 6, P05007–13pp.
Abstract: This work presents the current state of simulations of electroluminescence (EL) produced in gas-based detectors, with special interest for NEXT, the Neutrino Experiment with a Xenon TPC. NEXT is a neutrinoless double beta decay experiment and thus needs outstanding energy resolution, which can be achieved by using electroluminescence. The process of light production is reviewed, and properties such as the EL yield and its associated fluctuations, excitation and electroluminescence efficiencies, and energy resolution are calculated. An EL production region with a 5 mm width gap between two infinite parallel planes is considered, where a uniform electric field is produced. The pressure and temperature considered are 10 bar and 293 K, respectively. The results show that, even for low values of the VUV photon detection efficiency, good energy resolution can be achieved: below 0.4% (FWHM) at Q(beta beta) = 2.458 MeV.
Keywords: Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators); Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission etc); Large detector systems for particle and astroparticle physics; Time projection chambers
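The quoted resolution can be cross-checked with a back-of-the-envelope estimate. The sketch below uses approximate literature values for xenon's W-value and Fano factor (assumptions, not numbers from the paper) and adds a simple photoelectron-statistics term; the paper's simulation models EL yield fluctuations and photon detection in much more detail.

```python
import math

# Approximate literature values for gaseous xenon (illustrative assumptions)
W_I = 21.9     # eV per electron-ion pair
FANO = 0.15    # Fano factor

def intrinsic_fwhm(E_eV, fano=FANO, w=W_I):
    """Intrinsic relative resolution (FWHM/E) from primary-charge fluctuations."""
    n_e = E_eV / w                       # mean number of primary electrons
    return 2.355 * math.sqrt(fano / n_e)

def fwhm_with_el(E_eV, n_pe_per_e, fano=FANO, w=W_I):
    """Add detected-photoelectron statistics on top of the Fano term.

    n_pe_per_e: detected photoelectrons per primary electron; the EL gain
    makes this large enough that the Fano term dominates.
    """
    n_e = E_eV / w
    return 2.355 * math.sqrt((fano + 1.0 / n_pe_per_e) / n_e)

Q_BB = 2.458e6                           # Q-value of Xe-136 double beta decay, eV
res = intrinsic_fwhm(Q_BB)               # ~0.3% FWHM, consistent with "below 0.4%"
```

Even a modest light collection (a few detected photoelectrons per primary electron) keeps the total close to the Fano-limited value, which is the abstract's point about low VUV detection efficiency.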
Oliver, S., Rodriguez Bosca, S., & Gimenez-Alventosa, V. (2024). Enabling particle transport on CAD-based geometries for radiation simulations with penRed. Comput. Phys. Commun., 298, 109091–11pp.
Abstract: Geometry construction is a fundamental aspect of any radiation transport simulation, regardless of the Monte Carlo code being used. Typically, this process is tedious, time-consuming, and error-prone. The conventional approach involves defining geometries using mathematical objects or surfaces. However, this method comes with several limitations, especially when dealing with complex models, particularly those with organic shapes. Furthermore, since each code employs its own format and methodology for defining geometries, sharing and reproducing simulations among researchers becomes a challenging task. Consequently, many codes have implemented support for simulating over geometries constructed via Computer-Aided Design (CAD) tools. Unfortunately, this feature is lacking in penRed and other PENELOPE physics-based codes. Therefore, the objective of this work is to implement such support within the penRed framework.
New version program summary
Program Title: Parallel Engine for Radiation Energy Deposition (penRed)
CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.2
Developer's repository link: https://github.com/PenRed/PenRed
Code Ocean capsule: https://codeocean.com/capsule/1041417/tree
Licensing provisions: GNU Affero General Public License v3
Programming language: C++ (2011 standard)
Journal reference of previous version: V. Gimenez-Alventosa, V. Gimenez Gomez, S. Oliver, PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE, Comput. Phys. Commun. 267 (2021) 108065, https://doi.org/10.1016/j.cpc.2021.108065
Does the new version supersede the previous version?: Yes
Reasons for the new version: Implements the capability to simulate on CAD-constructed geometries, among many other features and fixes.
Summary of revisions: All changes applied through the code versions are summarized in the file CHANGELOG.md in the repository package.
Nature of problem: While Monte Carlo codes have proven valuable in simulating complex radiation scenarios, they rely heavily on accurate geometrical representations. Like many other Monte Carlo codes, penRed employs simple quadric surfaces such as planes, spheres and cylinders to define geometries. Although these geometric models offer a certain level of flexibility, they have limitations when it comes to simulating highly intricate and irregular shapes. Anatomic structures, for example, require detailed representations of organs, tissues and bones, which are difficult to achieve using basic geometric objects. Similarly, complex devices or intricate mechanical systems may have designs that cannot be accurately represented within the constraints of such geometric models. Moreover, as the complexity of the model increases, the geometry construction process becomes more difficult, tedious, time-consuming and error-prone [2]. Also, as each Monte Carlo geometry library uses its own format and construction method, reproducing the same geometry among different codes is a challenging task.
Solution method: To address the problems stated above, the objective of this work is to implement the capability to simulate using irregular and adaptable meshed geometries in the penRed framework. Such meshes can be constructed using Computer-Aided Design (CAD) tools, which are in widespread use and streamline the design process. This feature has been implemented in a new geometry module named “MESH_BODY”, specific to this kind of geometry. The module is freely available within the official penRed package, from version 1.9.3b onwards.
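Tracking particles through a CAD-style triangle mesh ultimately reduces to ray-facet intersection tests: at each step, the transport code finds the nearest facet crossed by the particle's ray. Below is a self-contained sketch of the standard Moller-Trumbore ray-triangle test (illustrative only, not penRed's actual MESH_BODY implementation):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection.

    Returns the distance t along the ray to the hit point, or None if the
    ray misses the triangle. Tracking through a triangulated CAD surface
    amounts to taking the minimum t over all candidate facets.
    """
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                  # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None     # only hits in front of the origin

# Ray along +z hitting the unit triangle lying in the z = 1 plane
t_hit = ray_triangle(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 1.0]))
```

Production codes accelerate the per-facet loop with bounding-volume hierarchies or voxel grids, but the per-triangle test is the same.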
Olleros, P., Caballero, L., Domingo-Pardo, C., Babiano, V., Ladarescu, I., Calvo, D., et al. (2018). On the performance of large monolithic LaCl3(Ce) crystals coupled to pixelated silicon photosensors. J. Instrum., 13, P03014–17pp.
Abstract: We investigate the performance of large area radiation detectors, with high energy and spatial resolution, intended for the development of a Total Energy Detector with gamma-ray imaging capability, so-called i-TED. This new development aims for an enhancement in detection sensitivity in time-of-flight neutron capture measurements versus the commonly used C6D6 liquid scintillation total-energy detectors. In this work, we study in detail the impact of the readout photosensor on the energy response of large area (50 x 50 mm^2) monolithic LaCl3(Ce) crystals, in particular when replacing a conventional mono-cathode photomultiplier tube by an 8 x 8 pixelated silicon photomultiplier. Using the largest commercially available monolithic SiPM array (25 cm^2), with a pixel size of 6 x 6 mm^2, we have measured an average energy resolution of 3.92% FWHM at 662 keV for crystal thicknesses of 10, 20 and 30 mm. The results are confronted with detailed Monte Carlo (MC) calculations, where optical processes and properties have been included for the reliable tracking of the scintillation photons. After the experimental validation of the MC model, we use our MC code to explore the impact of a smaller photosensor segmentation on the energy resolution. Our optical MC simulations predict only a marginal deterioration of the spectroscopic performance for pixels of 3 x 3 mm^2.
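For an 8 x 8 pixelated readout, a first-order estimate of the gamma-ray interaction position over the array is a simple charge centroid. The sketch below is illustrative only (the paper validates an optical Monte Carlo model rather than using a centroid, which is known to be biased near the edges of a monolithic crystal):

```python
import numpy as np

def centroid(pixel_charges, pitch=6.0):
    """Center-of-gravity position estimate on an 8x8 SiPM array.

    pixel_charges : (8, 8) array of integrated pixel signals
    pitch         : pixel pitch in mm (6 mm, as for the array in the paper)
    Returns (x, y) in mm relative to the array centre. A plain centroid
    pulls edge events toward the centre; real reconstructions fit the
    expected scintillation-light distribution instead.
    """
    idx = (np.arange(8) - 3.5) * pitch          # pixel centre coordinates [mm]
    q = pixel_charges.sum()
    x = (pixel_charges.sum(axis=0) * idx).sum() / q   # column sums -> x
    y = (pixel_charges.sum(axis=1) * idx).sum() / q   # row sums    -> y
    return x, y

# Symmetric light spot centred on the array -> centroid at (0, 0)
xx, yy = np.meshgrid(np.arange(8) - 3.5, np.arange(8) - 3.5)
spot = np.exp(-(xx**2 + yy**2) / 4.0)
x, y = centroid(spot)
```

The total charge q from the same sum is what enters the energy spectrum, so finer segmentation trades spatial information against per-pixel photostatistics, which is the trade-off the paper's optical simulations quantify.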