Bordes, J., Hong-Mo, C., & Tsun, T. S. (2022). Resolving an ambiguity of Higgs couplings in the FSM, greatly improving thereby the model's predictive range and prospects. Int. J. Mod. Phys. A, 37(27), 2250167–10pp.
Abstract: We show that, after resolving what was thought to be an ambiguity in the Higgs coupling, the FSM gives, apart from two extra terms (i) and (ii) to be specified below, an effective action in the standard sector which has the same form as the SM action, the two differing only in the values of the mass and mixing parameters of quarks and leptons, which the SM takes as inputs from experiment while the FSM obtains as a result of a fit with a few parameters. Hence, to the accuracy that these two sets of parameters agree in value, and they do to a good extent as shown in earlier work,(1) the FSM should give the same result as the SM in all the circumstances where the latter has been successfully applied, except for the noted modifications due to (i) and (ii). If so, it would be a big step forward for the FSM. The correction terms are: (i) a mixing of the SM gamma-Z with a new vector boson in the hidden sector, (ii) a mixing of the standard Higgs with a new scalar boson also in the hidden sector. These have been shown a few years back to lead to (i') an enhancement of the W mass over the SM value,(2) and (ii') effects consistent with the g-2 and some other anomalies,(3) precisely the two deviations from the SM reported by experiments(4,5) recently much in the news.
Flores, M. M., Kim, J. S., Rolbiecki, K., & Ruiz de Austri, R. (2023). Updated LHC bounds on MUED after run 2. Int. J. Mod. Phys. A, 38(1), 2350002–14pp.
Abstract: We present updated LHC limits on the minimal universal extra dimensions (MUED) model from Run 2 searches. We scan the parameter space against a number of searches implemented in the public code CheckMATE and derive up-to-date limits on the MUED parameter space from 13 TeV searches. The strongest constraints come from a search dedicated to squarks and gluinos with one isolated lepton, jets and missing transverse energy. In the procedure, we take into account initial state radiation and stress its importance in MUED searches, which is not always appreciated.
Araujo Filho, A. A., Reis, J. A. A. S., & Ghosh, S. (2023). Quantum gases on a torus. Int. J. Geom. Methods Mod. Phys., 20(10), 2350178–19pp.
Abstract: This paper studies the thermodynamic properties of quantum gases confined to a torus. To do that, we consider noninteracting gases within the grand canonical ensemble formalism. In this context, fermions and bosons are taken into account, and the calculations are carried out both analytically and numerically. In particular, the system turns out to be sensitive to the topological parameter under consideration: the winding number. Furthermore, we also derive a model in order to take into account interacting quantum gases. To corroborate our results, we implement such a method for two different scenarios: a ring and a torus.
Araujo Filho, A. A. (2023). Thermodynamics of massless particles in curved spacetime. Int. J. Geom. Methods Mod. Phys., 20(13), 2350226–40pp.
Abstract: This work is devoted to studying the behavior of massless particles within the context of curved spacetime. In essence, we investigate the consequences of the scale factor C(?) of the Friedmann-Robertson-Walker metric in the Einstein-aether formalism for photon-like particles. To do so, we consider the system within the canonical ensemble formalism in order to derive the following thermodynamic state quantities: spectral radiance, Helmholtz free energy, pressure, entropy, mean energy and heat capacity. Moreover, the correction to the Stefan-Boltzmann law and the equations of state are also provided. In particular, we separate our study into three distinct cases, i.e. s = 0, p = 0; s = 1, p = 1; s = 2, p = 1. In the first case, the results are derived numerically. For the remaining cases, all the calculations are accomplished analytically, showing explicitly the dependence on the scale factor C(?) and the Riemann zeta function ζ(s). Furthermore, our analyses take into account three different temperature regimes of the universe, i.e. the inflationary era (T = 10^13 GeV), the electroweak epoch (T = 10^3 GeV) and the cosmic microwave background (T = 10^-13 GeV).
Bordes, J., Chan, H. M., & Tsou, S. T. (2023). A vacuum transition in the FSM with a possible new take on the horizon problem in cosmology. Int. J. Mod. Phys. A, 38(25), 2350124–32pp.
Abstract: The framed standard model (FSM), constructed to explain the empirical mass and mixing patterns (including CP phases) of quarks and leptons, in which it has done quite well, gives otherwise the same result as the standard model (SM) in almost all areas in particle physics where the SM has been successfully applied, except for a few specified deviations such as the W mass and the g-2 of muons, that is, just where experiment is showing departures from what SM predicts. It predicts further the existence of a hidden sector of particles, some of which may function as dark matter. In this paper, we first note that the above results involve, surprisingly, the FSM undergoing a vacuum transition (VTR1) at a scale of around 17 MeV, where the vacuum expectation values of the colour framons (framed vectors promoted into fields), which are all nonzero above that scale, acquire some vanishing components below it. This implies that the metric pertaining to these vanishing components would vanish also. Important consequences should then ensue, but these occur mostly in the unknown hidden sector where empirical confirmation is hard at present to come by, but they give small reflections in the standard sector, some of which may have already been seen. However, one notes that if, going off at a tangent, one imagines colour to be embedded, Kaluza-Klein (KK) fashion, into a higher-dimensional space-time, then this VTR1 would cause two of the compactified dimensions to collapse. This might mean then that when the universe cooled to the corresponding temperature of 10^11 K, when it was about 10^-3 s old, this VTR1 collapse would cause the three spatial dimensions of the universe to expand to compensate. The resultant expansion is estimated, using FSM parameters previously determined from particle physics, to be capable, when extrapolated backwards in time, of bringing the present universe back inside the then horizon, solving thus formally the horizon problem.
Besides, VTR1 being a global phenomenon in the FSM, it would switch on and off automatically and simultaneously over all space, thus requiring seemingly no additional strategy for a graceful exit. However, this scenario has not been checked for consistency with other properties of the universe and is to be taken thus not as a candidate solution of the horizon problem but only as an observation from particle physics which might be of interest to cosmologists and experts in the early universe. For particle physicists also, it might serve as an indicator for how relevant this VTR1 can be, even if the KK assumption is not made.
Bordes, J., Chan, H. M., & Tsou, S. T. (2023). Search for new physics in semileptonic decays of K and B as implied by the g-2 anomaly in FSM. Int. J. Mod. Phys. A, 38, 2350177–24pp.
Abstract: The framed standard model (FSM), constructed to explain, with some success, why there should be three and apparently only three generations of quarks and leptons in nature falling into a hierarchical mass and mixing pattern,(10) suggests also, among other things, a scalar boson U, with mass around 17 MeV and small couplings to quarks and leptons,(11) which might explain(9) the g-2 anomaly reported in experiment.(12) The U arises in the FSM initially as a state in the predicted "hidden sector" with mass around 17 MeV, which mixes with the standard model (SM) Higgs h(W), acquiring thereby a coupling to quarks and leptons and a mass just below 17 MeV. The initial purpose of this paper is to check whether this proposal is compatible with experiment on semileptonic decays of Ks and Bs, where the U can also appear. The answer we find is affirmative, in that the contribution of U to new physics as calculated in the FSM remains within the experimental bounds, but only if m(U) lies within a narrow range just below the unmixed mass. As a result, one has an estimate m(U) ~ 15-17 MeV for the mass of U, and from some further considerations the estimate Γ(U) ~ 0.02 eV for its width, both of which may be useful for an eventual search for it in experiment. If found, it will be, for the FSM, not just the discovery of a predicted new particle, but the opening of a window into a whole "hidden sector" containing at least some, perhaps even the bulk, of the dark matter in the universe.
Ahyoune, S., Gimeno, B., Reina-Valero, J., et al. (2023). A proposal for a low-frequency axion search in the 1-2 μeV range and below with the BabyIAXO magnet. Ann. Phys., 535(12), 2300326–23pp.
Abstract: In the near future, BabyIAXO will be the most powerful axion helioscope, relying on a custom-made magnet with two bores of 70 cm diameter and 10 m length, with a total available magnetic volume of more than 7 m^3. In this document, we propose and describe the implementation of low-frequency axion haloscope setups suitable for operation inside the BabyIAXO magnet. The RADES proposal has a potential sensitivity to the axion-photon coupling g_αγ down to values corresponding to the KSVZ model, in the (currently unexplored) mass range between 1 and 2 μeV, after a total effective exposure of 440 days. This mass range is covered by the use of four differently dimensioned 5-meter-long cavities, equipped with a tuning mechanism based on inner turning plates. A setup like the one proposed will also allow an exploration of the same mass range for hidden photons coupled to photons. An additional complementary apparatus is proposed using LC circuits and exploring the low-energy range (approximately 10^-4-10^-1 μeV). The setup includes a cryostat and cooling system to cool the BabyIAXO bore down to about 5 K, as well as an appropriate low-noise signal amplification and detection chain.
Fernandez Casani, A., Garcia Montoro, C., Gonzalez de la Hoz, S., Salt, J., Sanchez, J., & Villaplana Perez, M. (2023). Big Data Analytics for the ATLAS EventIndex Project with Apache Spark. Comput. Math. Methods, 2023, 6900908–19pp.
Abstract: The ATLAS EventIndex was designed to provide a global event catalogue and limited event-level metadata for the ATLAS experiment at the Large Hadron Collider (LHC) and its analysis groups and users during Run 2 (2015-2018), and has been running in production since. LHC Run 3, started in 2022, has brought increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. A new core storage service is being developed in HBase/Phoenix, with work in progress to provide at least the same functionality as the current one at increased data ingestion and search rates and with increasing volumes of stored data. In addition, new tools are being developed for the needed access cases within the new storage. This paper describes a new tool, using Spark and implemented in Scala, for accessing the big data quantities of the EventIndex project stored in HBase/Phoenix. With this tool, we can offer data discovery capabilities at different granularities, providing Spark DataFrames that can be used or refined within the same framework. Data analytic cases of the EventIndex project are implemented, like the search for duplicated events from the same or different datasets. An algorithm and implementation for the calculation of overlap matrices of events across different datasets are presented. Our approach can be used by other higher-level tools and users to ease access to the data in a performant and standard way using Spark abstractions. The provided tools decouple data access from the actual data schema, which makes it convenient to hide complexity and possible changes in the backend storage.
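The overlap-matrix calculation described in this abstract can be illustrated with a minimal sketch. This is plain Python over in-memory sets, not the project's Scala/Spark implementation; the dataset names and (run, event-number) pairs are invented for illustration only.

```python
# Sketch: for each pair of datasets, count the events they share,
# identifying an event by its (run, event-number) pair.
from itertools import combinations

def overlap_matrix(datasets):
    """Return {(name_a, name_b): number of shared events} for every pair."""
    names = sorted(datasets)
    return {
        (a, b): len(datasets[a] & datasets[b])
        for a, b in combinations(names, 2)
    }

# Hypothetical datasets with made-up event identifiers.
datasets = {
    "data18_13TeV.A": {(358031, 101), (358031, 102), (358300, 7)},
    "data18_13TeV.B": {(358031, 102), (358300, 7), (358300, 8)},
    "mc16.ttbar":     {(358031, 999)},
}

print(overlap_matrix(datasets))  # pair (A, B) shares 2 events; mc16 shares none
```

In the real tool the same pairwise-intersection idea is expressed over Spark DataFrames, so the counting distributes across the cluster instead of running on in-memory sets.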
Ma, Y. Z., Vijande, J., Ballester, F., Tedgren, A. C., Granero, D., Haworth, A., et al. (2017). A generic TG-186 shielded applicator for commissioning model-based dose calculation algorithms for high-dose-rate Ir-192 brachytherapy. Med. Phys., 44(11), 5961–5976.
Abstract: Purpose: A joint working group was created by the American Association of Physicists in Medicine (AAPM), the European Society for Radiotherapy and Oncology (ESTRO), and the Australasian Brachytherapy Group (ABG) with the charge, among others, to develop a set of well-defined test case plans and perform calculations and comparisons with model-based dose calculation algorithms (MBDCAs). Its main goal is to facilitate a smooth transition from the AAPM Task Group No. 43 (TG-43) dose calculation formalism, widely used in clinical practice for brachytherapy, to the one proposed by Task Group No. 186 (TG-186) for MBDCAs. To do so, in this work a hypothetical, generic high-dose-rate (HDR) Ir-192 shielded applicator has been designed and benchmarked. Methods: A generic HDR Ir-192 shielded applicator was designed based on three commercially available gynecological applicators, as well as a virtual cubic water phantom that can be imported into any DICOM-RT compatible treatment planning system (TPS). The absorbed dose distribution around the applicator, with the TG-186 Ir-192 source located at one dwell position at its center, was computed using two commercial TPSs incorporating MBDCAs (Oncentra® Brachy with Advanced Collapsed-cone Engine, ACE, and BrachyVision ACUROS) and state-of-the-art Monte Carlo (MC) codes, including ALGEBRA, BrachyDose, egs_brachy, Geant4, MCNP6, and Penelope2008. TPS-based volumetric dose distributions for the previously reported source-centered-in-water and source-displaced test cases, and the new source-centered-in-applicator test case, were analyzed here using the MCNP6 dose distribution as a reference. Volumetric dose comparisons of TPS results against results for the other MC codes were also performed. Distributions of local and global dose difference ratios are reported.
Results: The local dose differences among MC codes are comparable to the statistical uncertainties of the reference datasets for the source-centered-in-water and source-displaced test cases, and for the clinically relevant part of the unshielded volume in the source-centered-in-applicator case. Larger local differences appear in the shielded volume or at large distances. Considering clinically relevant regions, global dose differences are smaller than the local ones. The most disadvantageous case for the MBDCAs is the one including the shielded applicator. In this case, ACUROS agrees with MC within [-4.2%, +4.2%] for the majority of voxels (95%) while presenting dose differences within [-0.12%, +0.12%] of the dose at a clinically relevant reference point. For ACE, 95% of the total volume presents differences with respect to MC in the range [-1.7%, +0.4%] of the dose at the reference point. Conclusions: The combination of the generic source and generic shielded applicator, together with the previously developed test cases and reference datasets (available in the Brachytherapy Source Registry), lays a solid foundation in supporting uniform commissioning procedures and direct comparisons among treatment planning systems for HDR Ir-192 brachytherapy.
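The distinction this abstract draws between local and global dose-difference ratios can be sketched numerically. This is a hedged illustration with invented voxel doses, not the working group's analysis code: a local difference normalizes each voxel by the reference dose in that same voxel, while a global difference normalizes every voxel by the dose at a single clinically relevant reference point.

```python
# Sketch of the two comparison metrics, in percent.
def local_differences(d_tps, d_ref):
    """Per-voxel difference normalized by the reference dose in that voxel."""
    return [(t - r) / r * 100.0 for t, r in zip(d_tps, d_ref)]

def global_differences(d_tps, d_ref, d_ref_point):
    """Per-voxel difference normalized by the dose at one reference point."""
    return [(t - r) / d_ref_point * 100.0 for t, r in zip(d_tps, d_ref)]

d_ref = [10.0, 1.0, 0.1]   # reference (MC) dose per voxel, arbitrary units
d_tps = [10.2, 1.05, 0.09] # TPS dose per voxel, invented for illustration
d_point = 10.0             # dose at the clinical reference point

print(local_differences(d_tps, d_ref))
print(global_differences(d_tps, d_ref, d_point))
```

The sketch makes the abstract's observation concrete: in low-dose voxels (such as the shielded region) a small absolute discrepancy produces a large local difference, while the global difference stays small, which is why global differences in clinically relevant regions come out smaller than local ones.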
Hirn, J., Garcia, J. E., Montesinos-Navarro, A., Sanchez-Martin, R., Sanz, V., & Verdu, M. (2022). A deep Generative Artificial Intelligence system to predict species coexistence patterns. Methods Ecol. Evol., 13, 1052–1061.
Abstract: Predicting coexistence patterns is a current challenge to understand diversity maintenance, especially in rich communities where these patterns' complexity is magnified through indirect interactions that prevent their approximation with classical experimental approaches. We explore cutting-edge machine learning techniques called Generative Artificial Intelligence (GenAI) to predict species coexistence patterns in vegetation patches, training generative adversarial networks (GAN) and variational autoencoders (VAE) that are then used to unravel some of the mechanisms behind community assemblage. The GAN accurately reproduces real patches' species composition and plant species' affinity to different soil types, and the VAE also reaches a high level of accuracy, above 99%. Using the artificially generated patches, we found that high-order interactions tend to suppress the positive effects of low-order interactions. Finally, by reconstructing successional trajectories, we could identify the pioneer species with the largest potential to generate a high diversity of distinct patches in terms of species composition. Understanding the complexity of species coexistence patterns in diverse ecological communities requires new approaches beyond heuristic rules. Generative Artificial Intelligence can be a powerful tool to this end, as it allows us to overcome the inherent dimensionality of this challenge.