Gimenez-Alventosa, V., Gimenez, V., & Oliver, S. (2021). PenRed: An extensible and parallel Monte-Carlo framework for radiation transport based on PENELOPE. Comput. Phys. Commun., 267, 108065–12pp.
Abstract: Monte Carlo methods provide detailed and accurate results for radiation transport simulations. Unfortunately, the high computational cost of these methods limits their usage in real-time applications. Moreover, existing computer codes do not provide a methodology for adapting these kinds of simulations to specific problems without advanced knowledge of the corresponding code system, and this restricts their applicability. To help overcome these limitations, we present PenRed, a general-purpose, standalone, extensible and modular framework code based on PENELOPE for parallel Monte Carlo simulations of electron-photon transport through matter. It has been implemented in the C++ programming language and takes advantage of modern object-oriented technologies. In addition, PenRed offers the capability to read and process DICOM images as well as to construct and simulate image-based voxelized geometries, so as to facilitate its usage in medical applications. Our framework has been successfully verified against the original PENELOPE Fortran code. Furthermore, the implemented parallelism has been tested, showing a significant improvement in simulation time without any loss of precision in the results. Program summary: Program title: PenRed: Parallel Engine for Radiation Energy Deposition. CPC Library link to program files: https://doi.org/10.17632/rkw6tvtngy.1. Licensing provisions: GNU Affero General Public License (AGPL). Programming language: C++ (2011 standard). Nature of problem: Monte Carlo simulations usually require a huge amount of computation time to achieve low statistical uncertainties. In addition, many applications necessitate particular characteristics or the extraction of specific quantities from the simulation. However, most available Monte Carlo codes do not provide an efficient parallel and truly modular structure that allows users to easily customise the code to suit their needs without an in-depth knowledge of the code system.
Solution method: PenRed is a fully parallel, modular and customizable framework for Monte Carlo simulations of the passage of radiation through matter. It is based on the PENELOPE [1] code system, from which it inherits its physics models and tracking algorithms for charged particles. PenRed has been coded in C++ following an object-oriented programming paradigm restricted to the C++11 standard. Our engine implements parallelism via a twofold approach: on the one hand, standard C++ threads for shared memory, improving memory access and usage; on the other hand, the MPI standard for distributed-memory infrastructures. Both kinds of parallelism can be combined in the same simulation. Moreover, both threads and MPI processes can be balanced using the built-in load-balancing system (RUPER-LB [30]) to maximise performance on heterogeneous infrastructures. In addition, PenRed provides a modular structure with methods designed to easily extend its functionality. Thus, users can create their own independent modules to adapt the engine to their needs without changing the original modules. Furthermore, user extensions take advantage of the built-in parallelism without any extra effort or knowledge of parallel programming. Additional comments including restrictions and unusual features: PenRed has been compiled on Linux systems with g++ (GCC) versions 4.8.5, 7.3.1, 8.3.1 and 9; clang version 3.4.2; and the Intel C++ compiler (icc) version 19.0.5.281. Since it is a C++11-standard-compliant code, PenRed should compile with any compiler that supports C++11. In addition, if the code is compiled without MPI support, it does not require any non-standard library. To enable MPI capabilities, the user needs to install any available MPI implementation, such as OpenMPI [24] or MPICH [25], which can be found in the repositories of any Linux distribution.
Finally, to provide DICOM processing support, PenRed can optionally be compiled with the DICOM ToolKit (DCMTK) [32] library. Thus, PenRed has only two optional dependencies: an MPI implementation and the DCMTK library.
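The shared-memory side of the dual parallelism described in the summary, standard C++11 threads each tracking an independent batch of particle histories, can be sketched as below. This is a generic illustration under assumptions, not PenRed's actual API: the function names (`simulateHistory`, `runParallel`), the per-thread seeding scheme, and the scalar tally are all hypothetical stand-ins for full electron-photon transport and scoring.

```cpp
#include <cstddef>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Hypothetical per-history kernel: returns a random "deposited energy"
// as a stand-in for tracking one full particle shower. Each thread owns
// its RNG, so the hot loop needs no locking.
double simulateHistory(std::mt19937_64& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng);
}

// Split nHistories across nThreads; each thread accumulates a private
// tally that is reduced once, after all threads have joined.
double runParallel(std::size_t nHistories, unsigned nThreads) {
    std::vector<double> tallies(nThreads, 0.0);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) {
        pool.emplace_back([&tallies, t, nHistories, nThreads] {
            std::mt19937_64 rng(1234u + t);  // independent seed per thread
            std::size_t n = nHistories / nThreads
                          + (t < nHistories % nThreads ? 1 : 0);
            for (std::size_t i = 0; i < n; ++i)
                tallies[t] += simulateHistory(rng);
        });
    }
    for (auto& th : pool) th.join();
    return std::accumulate(tallies.begin(), tallies.end(), 0.0);
}
```

Because each thread draws from its own independently seeded stream and the reduction order is fixed, repeated runs with the same arguments are reproducible, which is the property that lets a framework like this parallelise without any loss of precision in the tallied results.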
|
Plaza, J., Martinez, T., Becares, V., Cano-Ott, D., Villamarin, D., de Rada, A. P., et al. (2023). Thermal neutron background at Laboratorio Subterraneo de Canfranc (LSC). Astropart. Phys., 146, 102793–9pp.
Abstract: The thermal neutron background at the Laboratorio Subterraneo de Canfranc (LSC) has been determined using several He-3 proportional counter detectors. Bare and Cd-shielded counters were used in a series of long measurements. Pulse-shape discrimination techniques were applied to discriminate between neutron and gamma signals as well as other intrinsic contributions. Monte Carlo simulations allowed us to estimate the sensitivity of the detectors and calculate values for the background flux of thermal neutrons inside Hall A of the LSC. The obtained value is (3.5 ± 0.8) × 10^-6 n/(cm^2 s), within an order of magnitude of the values reported for similar facilities.
|
Gomez-Cadenas, J. J., Martin-Albo, J., Menendez, J., Mezzetto, M., Monrabal, F., & Sorel, M. (2024). The search for neutrinoless double-beta decay. Riv. Nuovo Cimento, 46, 619–692.
Abstract: Neutrinos are the only particles in the Standard Model that could be Majorana fermions, that is, completely neutral fermions that are their own antiparticles. The most sensitive known experimental method to verify whether neutrinos are Majorana particles is the search for neutrinoless double-beta decay. The last two decades have witnessed the development of a vigorous program of neutrinoless double-beta decay experiments, spanning several isotopes and developing different strategies to handle the backgrounds masking a possible signal. In addition, remarkable progress has been made in the understanding of the nuclear matrix elements of neutrinoless double-beta decay, thus reducing a substantial part of the theoretical uncertainties affecting the particle-physics interpretation of the process. On the other hand, the negative results by several experiments, combined with the hints that the neutrino mass ordering could be normal, may imply very long lifetimes for the neutrinoless double-beta decay process. In this report, we review the main aspects of this process, the recent progress on theoretical ideas and the experimental state of the art. We then consider the experimental challenges to be addressed to increase the sensitivity to the process in the likely case that lifetimes are much longer than currently explored, and discuss a selection of the most promising experimental efforts.
Keywords: Neutrinos; Majorana; Double-beta decay; Nuclear matrix elements
|
Garcia Canal, C. A., Tarutina, T., & Vento, V. (2023). Analysis of Nuclear Effects in Structure Functions and Their Connection with the Binding Energy of Nuclei. Braz. J. Phys., 53(6), 161–8pp.
Abstract: We describe nuclear effects in structure functions of nuclei in deep inelastic scattering (DIS) by means of a multiplicative factor β_A(x) which differentiates the structure function of the bound nucleons from that of the free nucleons. Our analysis determines that β_A(x) establishes a relation between the quark-gluon dynamics expressed by the bound nucleon structure functions and the nuclear dynamics as described by the well-known semi-empirical Bethe-Weizsacker mass formula. This relation corroborates a connection between the underlying quark-gluon dynamics and the phenomenological nuclear dynamics.
|
Fanchiotti, H., Garcia Canal, C. A., Mayosky, M., Veiga, A., & Vento, V. (2023). The Geometric Phase in Classical Systems and in the Equivalent Quantum Hermitian and Non-Hermitian PT-Symmetric Systems. Braz. J. Phys., 53(6), 143–11pp.
Abstract: The decomplexification procedure allows one to show mathematically (stricto sensu) the equivalence (isomorphism) between the quantum dynamics of a system with a finite number of basis states and a classical dynamics system. This unique way of connecting different dynamics was used in the past to analyze the relationship between the well-known geometric phase present in the quantum evolution discovered by Berry, and its generalizations, and their analogs, the Hannay phases, in the classical domain. Here, this analysis is carried out for several quantum Hermitian and non-Hermitian PT-symmetric Hamiltonians and compared with the Hannay phase analysis in their classically isomorphic equivalent systems. As the equivalence ends in the classical domain with oscillator dynamics, we exploit the analogy to propose resonant electric circuits coupled with a gyrator to reproduce the geometric phase coming from the theoretical solutions, in simulated laboratory experiments.
Keywords: Geometrical phases; Decomplexification; Resonant circuit; Gyrator
|
Fernandez Casani, A., Orduña, J. M., Sanchez, J., & Gonzalez de la Hoz, S. (2021). A Reliable Large Distributed Object Store Based Platform for Collecting Event Metadata. J. Grid Comput., 19(3), 39–19pp.
Abstract: The Large Hadron Collider (LHC) is about to enter its third run at unprecedented energies. The experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users. The ATLAS EventIndex project, currently running in production, builds a complete catalogue of particle collisions, or events, for the ATLAS experiment at the LHC. The distributed nature of the experiment data model is exploited by running jobs at over one hundred Grid data centers worldwide. Millions of files with petabytes of data are indexed, extracting a small quantity of metadata per event that is conveyed in real time by a data collection system to a central Hadoop instance at CERN. After a successful first implementation based on a messaging system, some issues suggested performance bottlenecks for the challenging higher rates expected in the next runs of the experiment. In this work we characterize the weaknesses of the previous messaging system regarding complexity, scalability, performance and resource consumption. A new approach based on an object-based storage method was designed and implemented, taking into account the lessons learned and leveraging the ATLAS experience with systems of this kind. We present the experiment that we ran for three months in the real worldwide production scenario in order to evaluate the messaging and object-store approaches. The results of the experiment show that the new object-based storage method can efficiently support large-scale data collection for big data environments such as the next runs of the ATLAS experiment at the LHC.
Keywords: Grid computing; Hadoop file system; Object-Based storage
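The design choice the abstract describes, replacing one message per event record with fewer, larger writes to an object store, can be sketched as a simple batching layer. Everything here is an assumption for illustration: the `EventRecord` fields are not the actual EventIndex schema, and `ObjectBatcher` merely counts flushes in place of a real object-store upload.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical per-event metadata record: a few identifiers extracted
// per collision event (fields are illustrative only).
struct EventRecord {
    std::uint32_t runNumber;
    std::uint64_t eventNumber;
    std::string   fileGuid;  // GUID of the file the event came from
};

// Instead of sending one message per record (the reported bottleneck),
// records accumulate in a buffer that is flushed as one large object
// once it reaches the target batch size.
class ObjectBatcher {
public:
    explicit ObjectBatcher(std::size_t batchSize) : batchSize_(batchSize) {}

    // Buffer one record; flush when the batch is full.
    // Returns the number of objects "uploaded" so far.
    std::size_t add(const EventRecord& r) {
        buffer_.push_back(r);
        if (buffer_.size() >= batchSize_) {
            ++objectsUploaded_;  // stand-in for a PUT to the object store
            buffer_.clear();
        }
        return objectsUploaded_;
    }

    std::size_t pending() const { return buffer_.size(); }

private:
    std::size_t batchSize_;
    std::size_t objectsUploaded_ = 0;
    std::vector<EventRecord> buffer_;
};
```

The point of the sketch is the amortisation: with a batch size of thousands of records, per-write overhead (connections, acknowledgements, small-object metadata) is paid once per object rather than once per event, which is why object-based batching scales to the higher indexing rates mentioned above.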
|
Ikeno, N., Toledo, G., Liang, W. H., & Oset, E. (2023). Consistency of the Molecular Picture of Omega(2012) with the Latest Belle Results. Few-Body Syst., 64(3), 55–6pp.
Abstract: We report the results of research on the Ω(2012) state based on the molecular picture and discuss the consistency of this picture with the Belle experimental results. We study the interaction of the K̄Ξ* and ηΩ (s-wave) and K̄Ξ (d-wave) channels within a coupled-channel unitary approach, and obtain the mass and width of the Ω(2012) state and the decay ratio R(K̄Ξπ/K̄Ξ). We also present a mechanism for Ω_c → π+ Ω(2012) production through an external-emission Cabibbo-favored weak decay mode, where the Ω(2012) is dynamically generated from the above interaction. We find that the results obtained in the molecular picture are consistent with all Belle experimental data.
|
Blanton, T. D., Romero-Lopez, F., & Sharpe, S. R. (2019). Implementing the three-particle quantization condition including higher partial waves. J. High Energy Phys., 03(3), 106–56pp.
Abstract: We present an implementation of the relativistic three-particle quantization condition including both s- and d-wave two-particle channels. For this, we develop a systematic expansion of the three-particle K matrix, K_df,3, about threshold, which is the generalization of the effective range expansion of the two-particle K matrix, K_2. Relativistic invariance plays an important role in this expansion. We find that d-wave two-particle channels first enter at quadratic order. We explain how to implement the resulting multichannel quantization condition, and present several examples of its application. We derive the leading dependence of the threshold three-particle state on the two-particle d-wave scattering amplitude, and use this to test our implementation. We show how strong two-particle d-wave interactions can lead to significant effects on the finite-volume three-particle spectrum, including the possibility of a generalized three-particle Efimov-like bound state. We also explore the application to the 3π+ system, which is accessible to lattice QCD simulations, where we study the sensitivity of the spectrum to the components of K_df,3. Finally, we investigate the circumstances under which the quantization condition has unphysical solutions.
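For context, the two-particle quantity that the threshold expansion of K_df,3 generalizes is the standard effective range expansion of s-wave scattering, written here in the usual form (a_0 is the scattering length, r_0 the effective range, q the relative momentum):

```latex
q \cot \delta_0(q) \;=\; -\frac{1}{a_0} \;+\; \frac{1}{2}\, r_0\, q^2 \;+\; \mathcal{O}(q^4)
```

The three-particle analogue replaces the single kinematic variable q with the full set of threshold-expansion variables of K_df,3, which is why, as the abstract notes, relativistic invariance constrains the allowed terms order by order.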
|
LHCb Collaboration(Aaij, R. et al), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Study of Bc+ → χc π+ decays. J. High Energy Phys., 02(2), 173–30pp.
Abstract: A study of Bc+ → χc π+ decays is reported using proton-proton collision data collected with the LHCb detector at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to an integrated luminosity of 9 fb^-1. The decay Bc+ → χc2 π+ is observed for the first time, with a significance exceeding seven standard deviations. The relative branching fraction with respect to the Bc+ → J/ψ π+ decay is measured to be B(Bc+ → χc2 π+)/B(Bc+ → J/ψ π+) = 0.37 ± 0.06 ± 0.02 ± 0.01, where the first uncertainty is statistical, the second is systematic, and the third is due to the knowledge of the χc2 → J/ψ γ branching fraction. No significant Bc+ → χc1 π+ signal is observed, and an upper limit on the relative branching fraction of the Bc+ → χc1 π+ and Bc+ → χc2 π+ decays of B(Bc+ → χc1 π+)/B(Bc+ → χc2 π+) < 0.49 is set at the 90% confidence level.
Keywords: B Physics; Branching fraction; Hadron-Hadron Scattering
|
SCiMMA and SNEWS Collaborations(Baxter, A. L. et al), & Colomer, M. (2022). Collaborative experience between scientific software projects using Agile Scrum development. Softw.-Pract. Exp., 52, 2077–2096.
Abstract: Developing sustainable software for the scientific community requires expertise in software engineering and domain science. This can be challenging due to the unique needs of scientific software, the insufficient resources for software engineering practices in the scientific community, and the complexity of developing for evolving scientific contexts. While open-source software can partially address these concerns, it can introduce complicating dependencies and delay development. These issues can be reduced if scientists and software developers collaborate. We present a case study wherein scientists from the SuperNova Early Warning System collaborated with software developers from the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration addressed the difficulties of open-source software development, but presented additional risks to each team. For the scientists, there was a concern of relying on external systems and lacking control in the development process. For the developers, there was a risk in supporting a user group while maintaining core development. These issues were mitigated by creating a second Agile Scrum framework in parallel with the developers' ongoing Agile Scrum process. This Agile collaboration promoted communication, ensured that the scientists had an active role in development, and allowed the developers to evaluate and implement the scientists' software requirements. The collaboration provided benefits for each group: the scientists advanced their development by using an existing platform, and the developers utilized the scientists' use case to improve their systems. This case study suggests that scientists and software developers can avoid scientific computing issues by collaborating, and that Agile Scrum methods can address emergent concerns.
|