Mendez, V., Amoros, G., Garcia, F., & Salt, J. (2010). Emergent algorithms for replica location and selection in data grid. Futur. Gener. Comp. Syst., 26(7), 934–946.
Abstract: Grid infrastructures for e-Science projects are growing in magnitude. Improvements in data Grid replication algorithms may be critical in many of these infrastructures. This paper presents a decentralized replica optimization service, providing a general Emergent Artificial Intelligence (EAI) algorithm for the problem definition. Our aim is to set up a theoretical framework for emergent heuristics in Grid environments. We further describe two EAI approaches, the Particle Swarm Optimization (PSO-Grid) Multiswarm Federation and the Ant Colony Optimization (ACO-Grid) Asynchronous Colonies Optimization replica optimization algorithms, with some examples. We also present extended results showing the best performance and scalability features for PSO-Grid Multiswarm Federation.
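The abstract does not reproduce the Multiswarm Federation algorithm itself; as a rough illustration of the underlying idea, the following is a minimal single-swarm PSO sketch minimizing a hypothetical replica-access cost. All site positions, request rates and PSO hyperparameters are invented for the example.

```python
import random

def pso_minimize(cost, lo, hi, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Generic single-swarm PSO minimizing `cost` on the interval [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                                 # velocities
    pbest = xs[:]                                            # per-particle bests
    pbest_f = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]                    # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) \
                              + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))          # clamp to bounds
            f = cost(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i], f
    return gbest, gbest_f

# Toy replica-placement cost: distance-weighted access latency from
# three client sites, each with a (position, request rate) pair.
clients = [(0.1, 5.0), (0.6, 2.0), (0.9, 1.0)]
cost = lambda x: sum(rate * abs(x - pos) for pos, rate in clients)
best_x, best_f = pso_minimize(cost, 0.0, 1.0)
```

The swarm pulls each particle toward its own best position and the global best, so the replica location drifts toward the client with the heaviest request rate.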
|
Norena, J., Verde, L., Jimenez, R., Pena-Garay, C., & Gomez, C. (2012). Cancelling out systematic uncertainties. Mon. Not. Roy. Astron. Soc., 419(2), 1040–1050.
Abstract: We present a method to minimize, or even cancel out, the nuisance parameters affecting a measurement. Our approach is general and can be applied to any experiment or observation where systematic errors are a concern, e.g. larger than statistical errors. We compare it with the standard Bayesian technique for dealing with nuisance parameters, marginalization, and show how our method improves on it by avoiding biases. We illustrate the method with several examples taken from astrophysics and cosmology: baryonic acoustic oscillations (BAOs), cosmic clocks, Type Ia supernova (SNIa) luminosity distances, neutrino oscillations and dark matter detection. By applying the method we not only recover some known results but also find some interesting new ones. For BAO experiments we show how to combine radial and angular BAO measurements in order to completely eliminate the dependence on the sound horizon at radiation drag. In the case of exploiting SNIa as standard candles, we show how the uncertainty in the luminosity distance induced by a second parameter, modelled as a metallicity dependence, can be eliminated or greatly reduced. When using cosmic clocks to measure the expansion rate of the universe, we demonstrate how a particular combination of observables nearly removes the dependence on galaxy metallicity in determining differential ages, thus removing the age-metallicity degeneracy in stellar populations. We hope that these findings will be useful in future surveys to obtain robust constraints on the dark energy equation of state.
Keywords: methods: statistical; cosmology: theory
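The radial/angular BAO combination mentioned in the abstract can be made concrete: the radial feature measures r_d H(z)/c while the angular one measures r_d/D_M(z), so their ratio is independent of the sound horizon r_d. A small sketch under an assumed flat LCDM background (the H0 and Omega_m values are illustrative, not the paper's):

```python
import math

C = 299792.458  # speed of light, km/s

def hubble(z, H0=70.0, om=0.3):
    """Flat LCDM expansion rate H(z) in km/s/Mpc."""
    return H0 * math.sqrt(om * (1 + z) ** 3 + (1 - om))

def comoving_distance(z, H0=70.0, om=0.3, n=2000):
    """Trapezoid integration of c/H(z') from 0 to z, in Mpc."""
    dz = z / n
    s = 0.5 * (C / hubble(0, H0, om) + C / hubble(z, H0, om))
    for i in range(1, n):
        s += C / hubble(i * dz, H0, om)
    return s * dz

def radial_bao(z, r_d):
    """Redshift extent of the BAO feature: dz = r_d * H(z) / c."""
    return r_d * hubble(z) / C

def angular_bao(z, r_d):
    """Angular size of the BAO feature: theta = r_d / D_M(z)."""
    return r_d / comoving_distance(z)

z = 0.57
combo = lambda r_d: radial_bao(z, r_d) / angular_bao(z, r_d)
# dz / theta = H(z) * D_M(z) / c: the sound horizon r_d cancels out.
```

Evaluating `combo` for two different sound-horizon values returns the same number, which is the cancellation the paper exploits.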
|
Yepes, H. (2012). The ANTARES neutrino detector instrumentation. J. Instrum., 7, C01022–9pp.
Abstract: ANTARES is currently the largest fully operational neutrino telescope in the Northern hemisphere. Located in the Mediterranean Sea, it consists of a 3D array of 885 photomultiplier tubes (PMTs) arranged in 12 detection lines (25 storeys each), able to detect the Cherenkov light induced by upgoing relativistic muons produced in the interaction of high-energy cosmic neutrinos with the detector surroundings. Among its physics goals, the search for astrophysical neutrino sources and the indirect detection of dark matter particles coming from the Sun are of particular interest. To reach these goals, good accuracy in track reconstruction is mandatory, so several calibration systems for timing and positioning have been developed. In this contribution we present the design of the detector, the calibration systems and associated equipment, and their performance in track reconstruction.
|
Cabrera, M. E., Casas, J. A., Mitsou, V. A., Ruiz de Austri, R., & Terron, J. (2012). Histogram comparison tools for the search of new physics at LHC. Application to the CMSSM. J. High Energy Phys., 04(4), 133–27pp.
Abstract: We propose a rigorous and effective way to compare experimental and theoretical histograms, incorporating the different sources of statistical and systematic uncertainties. This is a useful tool to extract as much information as possible from the comparison of experimental data with theoretical simulations, optimizing the chances of identifying New Physics at the LHC. We illustrate this by showing how a search in the CMSSM parameter space, using Bayesian techniques, can effectively find the correct values of the CMSSM parameters by comparing histograms of events with multijets + missing transverse momentum displayed in the effective-mass variable. The procedure is in fact very efficient at identifying the true supersymmetric model, in case supersymmetry is really there and accessible to the LHC.
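As a rough illustration of the kind of comparison the paper formalizes (not the authors' Bayesian construction), here is a per-bin chi-square between a data histogram and a model histogram, with Poisson statistical errors and a fractional systematic added in quadrature. The bin contents and the 5% systematic are made up for the example.

```python
def hist_chi2(data, model, syst_frac=0.05):
    """Chi-square between two histograms: Poisson statistical variance
    (var = model count) plus a fractional systematic, added in quadrature."""
    chi2 = 0.0
    for d, m in zip(data, model):
        sigma2 = m + (syst_frac * m) ** 2   # stat + syst, in quadrature
        if sigma2 > 0:
            chi2 += (d - m) ** 2 / sigma2
    return chi2

# Hypothetical effective-mass histograms (events per bin).
data  = [98, 120, 80, 45, 22, 9]
model = [100, 115, 85, 40, 20, 10]
chi2 = hist_chi2(data, model)
ndof = len(data)
```

A chi2 comparable to the number of bins signals agreement; inflating `syst_frac` shows how systematic uncertainties dilute the discriminating power of the comparison.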
|
Olleros, P., Caballero, L., Domingo-Pardo, C., Babiano, V., Ladarescu, I., Calvo, D., et al. (2018). On the performance of large monolithic LaCl3(Ce) crystals coupled to pixelated silicon photosensors. J. Instrum., 13, P03014–17pp.
Abstract: We investigate the performance of large-area radiation detectors with high energy and spatial resolution, intended for the development of a Total Energy Detector with gamma-ray imaging capability, the so-called i-TED. This new development aims at an enhancement in detection sensitivity in time-of-flight neutron capture measurements versus the commonly used C6D6 liquid scintillation total-energy detectors. In this work, we study in detail the impact of the readout photosensor on the energy response of large-area (50 x 50 mm^2) monolithic LaCl3(Ce) crystals, in particular when replacing a conventional mono-cathode photomultiplier tube by an 8 x 8 pixelated silicon photomultiplier. Using the largest commercially available monolithic SiPM array (25 cm^2), with a pixel size of 6 x 6 mm^2, we have measured an average energy resolution of 3.92% FWHM at 662 keV for crystal thicknesses of 10, 20 and 30 mm. The results are confronted with detailed Monte Carlo (MC) calculations, where optical processes and properties have been included for the reliable tracking of the scintillation photons. After the experimental validation of the MC model, we use our MC code to explore the impact of a smaller photosensor segmentation on the energy resolution. Our optical MC simulations predict only a marginal deterioration of the spectroscopic performance for pixels of 3 x 3 mm^2.
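For a pixelated photosensor like the 8 x 8 SiPM array described above, a simple position estimate is the centroid of the scintillation light over the pixels, with the pixel sum giving the energy-proportional charge. A minimal sketch, assuming the 6 mm pixel pitch quoted in the abstract and a toy single-pixel light distribution (this is not the authors' reconstruction algorithm):

```python
def centroid_and_sum(pixels, pitch=6.0):
    """Light centroid (mm) and total charge on an 8x8 photosensor array.
    `pixels` is an 8x8 list of per-pixel charges; `pitch` is the pixel
    pitch in mm (6 mm for the array in the abstract)."""
    total = sum(sum(row) for row in pixels)
    # Weight each pixel centre, at (index + 0.5) * pitch, by its charge.
    x = sum(q * (j + 0.5) * pitch
            for row in pixels for j, q in enumerate(row)) / total
    y = sum(q * (i + 0.5) * pitch
            for i, row in enumerate(pixels) for q in row) / total
    return x, y, total

# Toy event: all light collected on a single pixel near the array centre.
pixels = [[0.0] * 8 for _ in range(8)]
pixels[3][4] = 100.0
x, y, q = centroid_and_sum(pixels)
```

For this toy event the centroid lands at the centre of pixel (row 3, column 4), i.e. at (27.0 mm, 21.0 mm), and the summed charge is the single pixel's 100 units.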
|
Ortiz Arciniega, J. L., Carrio, F., & Valero, A. (2019). FPGA implementation of a deep learning algorithm for real-time signal reconstruction in particle detectors under high pile-up conditions. J. Instrum., 14, P09002–13pp.
Abstract: The analog signals generated in the read-out electronics of particle detectors are shaped prior to digitization in order to improve the signal-to-noise ratio (SNR). The real amplitude of the analog signal is then obtained using digital filters, which provides information about the energy deposited in the detector. Classical digital filters perform well in ideal situations with Gaussian electronic noise and no pulse shape distortion. However, high-energy particle colliders, such as the Large Hadron Collider (LHC) at CERN, can produce multiple simultaneous events, which result in signal pileup. The performance of classical digital filters deteriorates in these conditions, since the signal pulse shape gets distorted. In addition, this type of experiment produces a high rate of collisions, which requires high-throughput data acquisition systems. In order to cope with these harsh requirements, new read-out electronics systems are based on high-performance FPGAs, which permit the use of more advanced real-time signal reconstruction algorithms. In this paper, a deep learning method is proposed for real-time signal reconstruction in particle detectors under high pileup conditions. The performance of the new method has been studied using simulated data, and the results are compared with a classical FIR filter method. In particular, the signals and FIR filter used in the ATLAS Tile Calorimeter are used as a benchmark. The implementation, resource usage and performance of the proposed neural network algorithm in an FPGA are also presented.
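The classical approach the paper benchmarks against can be sketched as a weighted sum of digitized samples: with least-squares weights derived from the reference pulse shape, a clean pulse is reconstructed exactly, while an overlapping out-of-time pulse biases the estimate, which is the failure mode motivating the neural network. The pulse-shape samples below are invented, not the actual Tile Calorimeter shape.

```python
def fir_amplitude(samples, weights):
    """Amplitude estimate as a weighted sum of digitized samples (FIR)."""
    return sum(w * s for w, s in zip(weights, samples))

# Reference pulse shape (unit amplitude) sampled at 7 points.
g = [0.0, 0.29, 0.74, 1.0, 0.74, 0.43, 0.21]
norm = sum(x * x for x in g)
weights = [x / norm for x in g]   # least-squares weights for this shape

# Clean pulse of amplitude 2.5, and the same pulse with a delayed
# out-of-time pulse (amplitude 0.8) piling up on its tail.
pulse = [2.5 * x for x in g]
late = [0, 0, 0, 0.29, 0.74, 1.0, 0.74]
pileup = [p + 0.8 * q for p, q in zip(pulse, late)]

a_clean = fir_amplitude(pulse, weights)
a_pileup = fir_amplitude(pileup, weights)
```

`a_clean` recovers the true amplitude 2.5 exactly, while `a_pileup` overshoots it: the linear filter cannot separate the overlapping pulses, which is where a learned nonlinear reconstruction can do better.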
|
Gammaldi, V., Zaldivar, B., Sanchez-Conde, M. A., & Coronado-Blazquez, J. (2023). A search for dark matter among Fermi-LAT unidentified sources with systematic features in machine learning. Mon. Not. Roy. Astron. Soc., 520(1), 1348–1361.
Abstract: Around one-third of the point-like sources in the Fermi-LAT catalogues remain as unidentified sources (unIDs) today. Indeed, these unIDs lack a clear, univocal association with a known astrophysical source. If dark matter (DM) is composed of weakly interacting massive particles (WIMPs), there is the exciting possibility that some of these unIDs may actually be DM sources, emitting gamma-rays from WIMP annihilation. We propose a new approach to solve the standard machine learning (ML) binary classification problem of disentangling prospective DM sources (simulated data) from astrophysical sources (observed data) among the unIDs of the 4FGL Fermi-LAT catalogue. We artificially build two systematic features for the DM data which are originally inherent to observed data: the detection significance and the uncertainty on the spectral curvature. We do so by sampling from the observed population of unIDs, assuming that the DM distributions would, if any, follow the latter. We consider different ML models: Logistic Regression, Neural Network (NN), Naive Bayes, and Gaussian Process, out of which the best, in terms of classification accuracy, is the NN, achieving around 93.3 ± 0.7 per cent performance. Other ML evaluation parameters, such as the True Negative and True Positive rates, are discussed in our work. Applying the NN to the unIDs sample, we find that the degeneracy between some astrophysical and DM sources can be partially solved within this methodology. Nonetheless, we conclude that there are no DM source candidates among the pool of 4FGL Fermi-LAT unIDs.
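As a toy stand-in for the classifiers compared in the paper (this is not their pipeline, features, or data), here is a minimal logistic-regression binary classifier on a single invented feature, trained with batch gradient descent; the two classes are drawn from well-separated Gaussians so the decision boundary is easy to learn.

```python
import math
import random

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Minimal one-feature logistic regression via batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid
            gw += (p - y) * x                           # gradient of log-loss
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

rng = random.Random(0)
# Invented feature (think: spectral curvature): class 1 clusters high, class 0 low.
xs = [rng.gauss(1.0, 0.3) for _ in range(100)] + \
     [rng.gauss(-1.0, 0.3) for _ in range(100)]
ys = [1] * 100 + [0] * 100

w, b = train_logistic(xs, ys)
preds = [1 if w * x + b > 0 else 0 for x in xs]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
```

Training accuracy on this separable toy sample is near 100 per cent; the paper's harder, realistic version of this problem is what makes the quoted ~93 per cent figure meaningful.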
|
Ferreira, M. N., & Papavassiliou, J. (2023). Gauge Sector Dynamics in QCD. Particles, 6(1), 312–363.
Abstract: The dynamics of the QCD gauge sector give rise to non-perturbative phenomena that are crucial for the internal consistency of the theory; most notably, they account for the generation of a gluon mass through the action of the Schwinger mechanism, the taming of the Landau pole, the ensuing stabilization of the gauge coupling, and the infrared suppression of the three-gluon vertex. In the present work, we review some key advances in the ongoing investigation of this sector within the framework of the continuum Schwinger function methods, supplemented by results obtained from lattice simulations.
|
Trotta, R., Johannesson, G., Moskalenko, I. V., Porter, T. A., Ruiz de Austri, R., & Strong, A. W. (2011). Constraints on Cosmic-Ray Propagation Models from a Global Bayesian Analysis. Astrophys. J., 729(2), 106–16pp.
Abstract: Research in many areas of modern physics, such as indirect searches for dark matter and particle acceleration in supernova remnant shocks, relies heavily on studies of cosmic rays (CRs) and associated diffuse emissions (radio, microwave, X-rays, gamma-rays). While very detailed numerical models of CR propagation exist, a quantitative statistical analysis of such models has so far been hampered by the large computational effort that those models require. Although statistical analyses have been carried out before using semi-analytical models (where the computation is much faster), the evaluation of the results obtained from such models is difficult, as they necessarily suffer from many simplifying assumptions. The main objective of this paper is to present a working method for full Bayesian parameter estimation for a numerical CR propagation model. For this study, we use the GALPROP code, the most advanced of its kind, which uses astrophysical information and nuclear and particle data as inputs to self-consistently predict CRs, gamma-rays, synchrotron radiation, and other observables. We demonstrate that a full Bayesian analysis is possible using nested sampling and Markov Chain Monte Carlo methods (implemented in the SuperBayeS code) despite the heavy computational demands of a numerical propagation code. The best-fit values of the parameters found in this analysis are in agreement with previous, significantly simpler, studies also based on GALPROP.
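The sampling machinery referred to in the abstract (nested sampling and MCMC in SuperBayeS, wrapped around GALPROP) can be illustrated at a vastly smaller scale by a one-parameter Metropolis sampler on a toy Gaussian posterior; the target's mean and width are arbitrary stand-ins for a propagation parameter's posterior.

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.5, seed=3):
    """Minimal one-parameter Metropolis MCMC sampler."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)        # Gaussian proposal
        lpp = log_post(xp)
        # Accept with probability min(1, post(xp)/post(x)).
        if lpp - lp > math.log(rng.random()):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 4.0 and sigma 0.5 (arbitrary values).
log_post = lambda x: -0.5 * ((x - 4.0) / 0.5) ** 2
chain = metropolis(log_post, x0=0.0)
burn = chain[5000:]                          # discard burn-in
mean = sum(burn) / len(burn)
```

After discarding burn-in, the chain's sample mean approximates the posterior mean; in the paper the same logic runs with each likelihood evaluation costing a full GALPROP propagation, which is the computational challenge being addressed.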
|
Rivard, M. J., Granero, D., Perez-Calatayud, J., & Ballester, F. (2010). Influence of photon energy spectra from brachytherapy sources on Monte Carlo simulations of kerma and dose rates in water and air. Med. Phys., 37(2), 869–876.
Abstract: Methods: For Ir-192, I-125, and Pd-103, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of Ir-192, I-125, and Pd-103 spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for Ir-192, I-125, and Pd-103, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
Keywords: biomedical materials; brachytherapy; dosimetry; iodine; iridium; Monte Carlo methods; palladium; radioisotopes
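The cancellation described in the conclusions, with spectrum-induced differences dropping out of the dose-rate constant, is simple arithmetic: a change in photon yield per Bq rescales the water-kerma and air-kerma rates by the same factor, leaving their ratio unchanged. The numbers below are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical kerma rates (arbitrary units) for spectrum A.
water_a, air_a = 1.000, 0.910

# Spectrum B differs by a +2% photon yield per Bq, scaling both rates.
water_b, air_b = water_a * 1.02, air_a * 1.02

# The dose-rate constant divides dose rate by air-kerma strength,
# so the common yield factor cancels.
ratio_a = water_a / air_a
ratio_b = water_b / air_b
```

The ~2% spectrum-choice differences reported for the individual kerma rates thus have no significant effect on the dose-rate constant, as the abstract states.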
|