|
Phong, V. H., Agramunt, J., Algora, A., Domingo-Pardo, C., Morales, A. I., Rubio, B., et al. (2022). Beta-Delayed One and Two Neutron Emission Probabilities South-East of Sn-132 and the Odd-Even Systematics in r-Process Nuclide Abundances. Phys. Rev. Lett., 129(18), 172701–7pp.
Abstract: The beta-delayed one- and two-neutron emission probabilities (P-1n and P-2n) of 20 neutron-rich nuclei with N >= 82 have been measured at the RIBF facility of the RIKEN Nishina Center. P-1n of Ag-130;131, Cd-133;134, In-135;136, and Sn-138;139 were determined for the first time, and stringent upper limits were placed on P-2n for nearly all cases. Beta-delayed two-neutron emission (beta-2n) was unambiguously identified in Cd-133 and In-135;136, and their P-2n were measured. Weak beta-2n was also detected from Sn-137;138. Our results highlight the effect of the N = 82 and Z = 50 shell closures on beta-delayed neutron emission probability and provide stringent benchmarks for newly developed macroscopic-microscopic and self-consistent global models that include a statistical treatment of neutron and gamma emission. The impact of our measurements on r-process nucleosynthesis was studied in a neutron star merger scenario. Our P-1n and P-2n have a direct impact on the ...
|
|
|
Bombacigno, F., Boudet, S., Olmo, G. J., & Montani, G. (2021). Big bounce and future time singularity resolution in Bianchi I cosmologies: The projective invariant Nieh-Yan case. Phys. Rev. D, 103(12), 124031.
Abstract: We extend the notion of the Nieh-Yan invariant to generic metric-affine geometries, where both torsion and nonmetricity are taken into account. Notably, we show that the properties of projective invariance and topologicity can be independently accommodated by a suitable choice of the parameters featuring this new Nieh-Yan term. We then consider a special class of modified theories of gravity able to promote the Immirzi parameter to a dynamical scalar field coupled to the Nieh-Yan form, and we discuss in more detail the dynamics of the effective scalar-tensor theory stemming from such a revised theoretical framework. We focus, in particular, on cosmological Bianchi I models, and we derive classical solutions where the initial singularity is safely removed in favor of a big bounce, which is ultimately driven by the nonminimal coupling with the Immirzi field. These solutions, moreover, turn out to be characterized by finite-time singularities, but we show that such critical points do not spoil the geodesic completeness and wave regularity of these spacetimes.
|
|
|
Fernandez Casani, A., Garcia Montoro, C., Gonzalez de la Hoz, S., Salt, J., Sanchez, J., & Villaplana Perez, M. (2023). Big Data Analytics for the ATLAS EventIndex Project with Apache Spark. Comput. Math. Methods, 2023, 6900908–19pp.
Abstract: The ATLAS EventIndex was designed to provide a global event catalogue and limited event-level metadata for the ATLAS experiment at the Large Hadron Collider (LHC) and its analysis groups and users during Run 2 (2015-2018), and it has been running in production since. The LHC Run 3, started in 2022, has seen increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. A new core storage service is being developed in HBase/Phoenix, and there is work in progress to provide at least the same functionality as the current one for increased data ingestion and search rates and with increasing volumes of stored data. In addition, new tools are being developed to cover the required access cases within the new storage. This paper describes a new tool, using Spark and implemented in Scala, for accessing the big data quantities of the EventIndex project stored in HBase/Phoenix. With this tool, we can offer data discovery capabilities at different granularities, providing Spark Dataframes that can be used or refined within the same framework. Data analytic cases of the EventIndex project are implemented, such as the search for duplicates of events within the same or different datasets. An algorithm and implementation for the calculation of overlap matrices of events across different datasets are presented. Our approach can be used by other higher-level tools and users to ease access to the data in a performant and standard way using Spark abstractions. The provided tools decouple data access from the actual data schema, which makes it convenient to hide complexity and possible changes in the backend storage.
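The overlap-matrix and duplicate-search analytics described in this abstract reduce, at their core, to set operations over event identifiers. A minimal plain-Python sketch of that logic follows; the actual tool is implemented in Scala on Spark Dataframes over HBase/Phoenix, and all names and data here are illustrative, not the project's API:

```python
from itertools import combinations

def overlap_matrix(datasets):
    """Count events shared between every pair of datasets.

    `datasets` maps a dataset name to the set of its event identifiers
    (in the EventIndex these would be e.g. (run_number, event_number) pairs).
    Returns a dict {(name_a, name_b): number_of_shared_events}.
    """
    names = sorted(datasets)
    matrix = {}
    for a, b in combinations(names, 2):
        # Set intersection gives the events present in both datasets.
        matrix[(a, b)] = len(datasets[a] & datasets[b])
    return matrix

def find_duplicates(events):
    """Return the event identifiers that occur more than once in one dataset."""
    seen, dupes = set(), set()
    for ev in events:
        if ev in seen:
            dupes.add(ev)
        seen.add(ev)
    return dupes
```

In the Spark version the same idea would be expressed with Dataframe joins and `groupBy`/`count` aggregations, so the computation distributes over the full EventIndex volume instead of in-memory sets.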
|
|
|
Maso-Ferrando, A., Sanchis-Gual, N., Font, J. A., & Olmo, G. J. (2023). Birth of baby universes from gravitational collapse in a modified-gravity scenario. J. Cosmol. Astropart. Phys., 06(6), 028–19pp.
Abstract: We consider equilibrium models of spherical boson stars in Palatini f(R) = R + CR^2 gravity and study their collapse when perturbed. The Einstein-Klein-Gordon system is solved using a recently established correspondence in an Einstein frame representation. We find that, in that frame, the endpoint is a nonrotating black hole surrounded by a quasi-stationary cloud of scalar field. However, the dynamics in the f(R) frame is dramatically different. The innermost region of the collapsing object exhibits the formation of a finite-size, exponentially-expanding baby universe connected with the outer (parent) universe via a minimal area surface (a throat or umbilical cord). Our simulations indicate that this surface is at all times hidden inside a horizon, causally disconnecting the baby universe from observers above the horizon. The implications of our findings in other areas of gravitational physics are also discussed.
|
|
|
Navarro-Salas, J. (2024). Black holes, conformal symmetry, and fundamental fields. Class. Quantum Gravity, 41(8), 085003–14pp.
Abstract: Cosmic censorship protects the outside world from black hole singularities and paves the way for assigning entropy to gravity at the event horizons. We point out a tension between cosmic censorship and the quantum backreacted geometry of Schwarzschild black holes, induced by vacuum polarization and driven by the conformal anomaly. A similar tension appears for the Weyl curvature hypothesis at the Big Bang singularity. We argue that the requirement of exact conformal symmetry resolves both conflicts and has major implications for constraining the set of fundamental constituents of the Standard Model.
|
|
|
NEXT Collaboration (Simon, A., et al.), Carcel, S., Carrion, J. V., Diaz, J., Felkai, R., Lopez-March, N., et al. (2021). Boosting background suppression in the NEXT experiment through Richardson-Lucy deconvolution. J. High Energy Phys., 07(7), 146–38pp.
Abstract: Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of ~10^27 yr, requiring suppressing backgrounds to < 1 count/tonne/yr. For this, any extra background rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two Bragg-peak dense ionization regions at the opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of the topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ~5 when reconstructing electron-positron pairs in the Tl-208 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterraneo de Canfranc, with 72% signal efficiency. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ~10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which allows reversing the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV e-e+ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.
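Richardson-Lucy deconvolution itself is a simple iterative scheme: each pass blurs the current estimate with the point-spread function (PSF), compares it to the observed data, and applies a multiplicative correction. A minimal one-dimensional sketch in plain Python follows; the NEXT reconstruction applies the same update to 3D hit maps with a measured detector PSF, and the toy signal and function names here are illustrative only:

```python
def convolve(signal, kernel):
    """Same-size convolution of a list with a centered, symmetric kernel."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out[i] = acc
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively sharpen `observed`, undoing blurring by `psf`.

    Each iteration: blur the estimate, divide the data by the blurred
    estimate, convolve that ratio with the mirrored PSF, and use the
    result as a multiplicative correction. Positivity is preserved.
    """
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)  # flat, positive initial guess
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        # Guard against division by zero where the blurred estimate vanishes.
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Run on a blurred pair of spikes, the iterations progressively re-concentrate the smeared charge back into narrow peaks, which is exactly what sharpens the two Bragg-peak blobs at the track ends in the NEXT topological analysis.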
|
|
|
Beltran Jimenez, J., Delhom, A., Olmo, G. J., & Orazi, E. (2021). Born-Infeld gravity: Constraints from light-by-light scattering and an effective field theory perspective. Phys. Lett. B, 820, 136479–6pp.
Abstract: By using a novel technique that establishes a correspondence between general relativity and metric-affine theories based on the Ricci tensor, we are able to set stringent constraints on the free parameter of Born-Infeld gravity from the ones recently obtained for Born-Infeld electrodynamics by using light-by-light scattering data from ATLAS. We also discuss how these gravity theories plus matter fit within an effective field theory framework.
|
|
|
Maso-Ferrando, A., Sanchis-Gual, N., Font, J. A., & Olmo, G. J. (2021). Boson stars in Palatini f(R) gravity. Class. Quantum Gravity, 38(19), 194003–25pp.
Abstract: We explore equilibrium solutions of spherically symmetric boson stars in the Palatini formulation of f(R) gravity. We account for the modifications introduced in the gravitational sector by using a recently established correspondence between modified gravity with scalar matter and general relativity with modified scalar matter. We focus on the quadratic theory f(R) = R + xi R^2 and compare its solutions with those found in general relativity, exploring both positive and negative values of the coupling parameter xi. As matter source, a complex, massive scalar field with and without self-interaction terms is considered. Our results show that the existence curves of boson stars in Palatini f(R) gravity are fairly similar to those found in general relativity. Major differences are observed for negative values of the coupling parameter, which result in a repulsive gravitational component for high enough scalar field density distributions. Adding self-interactions makes the degeneracy between f(R) and general relativity even more pronounced, leaving very little room for observational discrimination between the two theories.
|
|
|
Bodenstein, S., Bordes, J., Dominguez, C. A., Peñarrocha, J., & Schilcher, K. (2012). Bottom-quark mass from finite energy QCD sum rules. Phys. Rev. D, 85(3), 034003–5pp.
Abstract: Finite energy QCD sum rules involving both inverse- and positive-moment integration kernels are employed to determine the bottom-quark mass. The result obtained in the MS-bar scheme at a reference scale of 10 GeV is m_b(10 GeV) = 3623(9) MeV. This value translates into a scale-invariant mass m_b(m_b) = 4171(9) MeV. This result has the lowest total uncertainty of any method and is less sensitive to a number of systematic uncertainties that affect other QCD sum rule determinations.
|
|
|
Pich, A., Rosell, I., & Sanz-Cillero, J. J. (2020). Bottom-up approach within the electroweak effective theory: Constraining heavy resonances. Phys. Rev. D, 102(3), 035012–12pp.
Abstract: The LHC has confirmed the existence of a mass gap between the known particles and possible new states. Effective field theory is then the appropriate tool to search for low-energy signals of physics beyond the Standard Model. We adopt the general formalism of the electroweak effective theory, with a nonlinear realization of the electroweak symmetry breaking, where the Higgs is a singlet with independent couplings. At higher energies we consider a generic resonance Lagrangian which follows the above-mentioned nonlinear realization and couples the light particles to bosonic heavy resonances with J^P = 0^(+/-) and J^P = 1^(+/-). Integrating out the resonances and assuming a proper short-distance behavior, it is possible to determine or to constrain most of the bosonic low-energy constants in terms of resonance masses. Therefore, the current experimental bounds on these bosonic low-energy constants allow us to constrain the resonance masses above the TeV scale, following a typical bottom-up approach: fitting the low-energy constants to precise experimental data enables us to learn about the high-energy scales of the underlying theory behind the Standard Model.
|
|