Albiol, A., Albiol, F., Paredes, R., Plasencia-Martinez, J. M., Blanco Barrio, A., Garcia Santos, J. M., et al. (2022). A comparison of Covid-19 early detection between convolutional neural networks and radiologists. Insights Imaging, 13(1), 122–12pp.
Abstract: Background The role of chest radiography in COVID-19 disease has changed since the beginning of the pandemic, from a diagnostic tool when microbiological resources were scarce to one focused on detecting and monitoring COVID-19 lung involvement. Early detection of the disease with chest radiographs is still helpful in resource-poor environments. However, the sensitivity of a chest radiograph for diagnosing COVID-19 is modest, even for expert radiologists. In this paper, the performance of a deep learning algorithm on the first clinical encounter is evaluated and compared with that of a group of radiologists with different years of experience. Methods The algorithm uses an ensemble of four deep convolutional networks, Ensemble4Covid, trained to detect COVID-19 on frontal chest radiographs. The algorithm was tested using images from the first clinical encounter of positive and negative cases. Its performance was compared with that of five radiologists on a smaller test subset of patients. The algorithm's performance was also validated on the public dataset COVIDx. Results Compared to the consensus of five radiologists, the Ensemble4Covid model achieved an AUC of 0.85, whereas the radiologists achieved an AUC of 0.71. Compared with other state-of-the-art models, a single model of our ensemble achieved nonsignificant differences on the public dataset COVIDx. Conclusion The results show that the use of images from the first clinical encounter significantly degrades COVID-19 detection performance. The performance of our Ensemble4Covid under these challenging conditions is considerably higher than that of a consensus of five radiologists. Artificial intelligence can be used for the fast diagnosis of COVID-19.
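Note: the ensemble approach described in this abstract — combining four convolutional networks into a single Ensemble4Covid score — can be sketched minimally as probability averaging; the array values and the 0.5 decision threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ensemble_predict(probabilities):
    """Average per-model positive-class probabilities into one ensemble score.

    probabilities: shape (n_models, n_images), each row one model's
    predicted probability that a radiograph is COVID-19 positive.
    """
    return np.mean(probabilities, axis=0)

# Four hypothetical single-model outputs for three radiographs.
p = np.array([
    [0.90, 0.20, 0.55],
    [0.80, 0.10, 0.60],
    [0.85, 0.30, 0.50],
    [0.95, 0.15, 0.45],
])
scores = ensemble_predict(p)          # [0.875, 0.1875, 0.525]
labels = (scores >= 0.5).astype(int)  # [1, 0, 1]
```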
|
Albiol, F., Corbi, A., & Albiol, A. (2019). Densitometric Radiographic Imaging With Contour Sensors. IEEE Access, 7, 18902–18914.
Abstract: We present the technical/physical foundations of a new imaging technique that combines ordinary radiographic information (generated by conventional X-ray settings) with the patient's volume to derive densitometric images. Traditionally, these images provide quantitative information about tissue densities. In our approach, they graphically enhance either soft or bony regions. After measuring the patient's volume with contour recognition devices, the physical traversed lengths within it (as the Roentgen beam intersects the patient) are calculated and pixel-wise associated with the original radiograph (X). In order to derive this map of lengths (L), the camera equations of the X-ray system and the contour sensor are determined. The patient's surface is also translated to the point of view of the X-ray beam and all its entrance/exit points are sought with the help of ray-casting methods. The derived L is applied to X as a physical operation (subtraction), obtaining soft-tissue-enhanced (D_S) or bone-enhanced (D_B) images. In the D_S type, the contained graphical information can be linearly mapped to the average electronic density (traversed by the X-ray beam). This feature represents an interesting proof of concept of associating density data to radiographs, but most importantly, the intensity histogram is objectively compressed, i.e., the dynamic range is narrower than that of the corresponding X. This leads to other advantages: improved visibility of border/edge areas (high gradient), extended manual window level/width manipulations during screening, and immediate correction of underexposed X instances. In the D_B type, high-density elements are highlighted and easier to discern. All these results can be achieved with low-energy beam exposures, saving costs and dose. Future work will deepen this clinical side of our research. In contrast with other image-based modifiers, the proposed method is grounded on the measurement of a physical entity: the span of the X-ray beam within a body during a radiographic examination.
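Note: the central quantities of the method — the pixel-wise length map L obtained by tracing the beam through the measured contour, and the subtraction producing a soft-tissue-enhanced image D_S — can be sketched in a parallel-beam toy setup; the voxel grid, the stand-in radiograph X, and the calibration factor alpha are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

voxel_mm = 1.0
body = np.zeros((8, 4, 4))   # (depth, rows, cols) binary voxel mask of the contour
body[2:6, 1:3, 1:3] = 1.0    # a block of "tissue", 4 voxels thick along the beam

# Length map L: for each detector pixel, integrate the traversed material
# along the beam direction (axis 0 in this parallel-beam toy model).
L = voxel_mm * body.sum(axis=0)   # 4.0 mm behind the block, 0.0 elsewhere

X = np.full((4, 4), 10.0)    # stand-in radiograph (log-attenuation values)
alpha = 0.5                  # illustrative calibration constant
D_S = X - alpha * L          # soft-tissue-enhanced image: 8.0 behind the block
```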
|
Alcaide, J., Banerjee, S., Chala, M., & Titov, A. (2019). Probes of the Standard Model effective field theory extended with a right-handed neutrino. J. High Energy Phys., 08(8), 031–18pp.
Abstract: If neutrinos are Dirac particles and, as suggested by the so far null LHC results, any new physics lies at energies well above the electroweak scale, the Standard Model effective field theory has to be extended with operators involving the right-handed neutrinos. In this paper, we study this effective field theory and set constraints on the different dimension-six interactions. To that aim, we use LHC searches for associated production of light (and tau) leptons with missing energy, monojet searches, as well as pion and tau decays. Our bounds are generally above the TeV scale for order-one couplings. One particular exception is given by operators involving top quarks. These provide new signals in top decays not yet studied at colliders. Thus, we also design an LHC analysis to explore these signatures in tt̄ production. Our results are also valid if the right-handed neutrinos are Majorana and long-lived.
|
Alcaide, J., & Mileo, N. I. (2020). LHC sensitivity to singly charged scalars decaying into electrons and muons. Phys. Rev. D, 102(7), 075030–11pp.
Abstract: Current LHC searches for nonsupersymmetric singly charged scalars, based on two-Higgs-doublet models, generally focus on third-generation fermions in the final state. However, singly charged scalars in alternative extensions of the scalar sector involve Yukawa couplings not proportional to the mass of the fermions. If the scalar decays into electrons and muons, it can produce cleaner experimental signatures. In this paper, we suggest that a singly charged scalar singlet, with electroweak production, can start to be probed in the near future with dedicated search strategies. Depending on the strength of the Yukawa couplings, two independent scenarios are considered: direct pair production (small couplings) and single production via a virtual neutrino exchange (large couplings). We show that, up to a mass as large as 500 GeV, most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC. Our results also apply to other frameworks, provided the singly charged scalar exhibits similar production patterns and dominant decay modes.
|
n_TOF Collaboration (Alcayne, V. et al.), Balibrea-Correa, J., Domingo-Pardo, C., Lerendegui-Marco, J., Babiano-Suarez, V., & Ladarescu, I. (2024). A Segmented Total Energy Detector (sTED) optimized for (n,γ) cross-section measurements at n_TOF EAR2. Radiat. Phys. Chem., 217, 11pp.
Abstract: The neutron time-of-flight facility n_TOF at CERN is a spallation source dedicated to measurements of neutron-induced reaction cross-sections of interest in nuclear technologies, astrophysics, and other applications. Since 2014, Experimental Area 2 (EAR2) is operational and delivers a neutron fluence of ~4 × 10⁷ neutrons per nominal proton pulse, which is ~50 times higher than that of Experimental Area 1 (EAR1), of ~8 × 10⁵ neutrons per pulse. The high neutron flux at EAR2 results in high counting rates in the detectors that challenged the previously existing capture detection systems. For this reason, a Segmented Total Energy Detector (sTED) has been developed to overcome the limitations in the detector's response, by reducing the active volume per module and by using a photomultiplier (PMT) optimized for high counting rates. This paper presents the main characteristics of the sTED, including energy and time resolution and response to gamma rays, and provides details of the use of the Pulse Height Weighting Technique (PHWT) with this detector. The sTED has been validated for neutron-capture cross-section measurements in EAR2 in the neutron energy range from thermal up to at least 400 keV. The detector has already been successfully used in several measurements at n_TOF EAR2.
|
Aldana, M., & Lledo, M. A. (2023). The Fuzzy Bit. Symmetry-Basel, 15(12), 2103–25pp.
Abstract: In this paper, the formulation of Quantum Mechanics in terms of fuzzy logic and fuzzy sets is explored. A result by Pykacz, which establishes a correspondence between (quantum) logics (lattices with certain properties) and certain families of fuzzy sets, is applied to the Birkhoff-von Neumann logic, the lattice of projectors of a Hilbert space. Three cases are considered: the qubit, two qubits entangled, and a qutrit 'nested' inside the two entangled qubits. The membership functions of the fuzzy sets are explicitly computed and all the connectives of the fuzzy sets are interpreted as operations with these particular membership functions. In this way, a complete picture of the standard quantum logic in terms of fuzzy sets is obtained for the systems considered.
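Note: the correspondence used in this paper — each projector of the Birkhoff-von Neumann lattice defining a membership function on quantum states — can be illustrated numerically for the single-qubit case; this is a minimal sketch of the general rule μ_P(ψ) = ⟨ψ|P|ψ⟩, with an arbitrarily chosen state, not the paper's explicit computations.

```python
import numpy as np

def membership(P, psi):
    """Degree to which state psi belongs to the fuzzy set of projector P:
    mu_P(psi) = <psi|P|psi>."""
    return float(np.real(np.vdot(psi, P @ psi)))

ket0 = np.array([1.0, 0.0])
P0 = np.outer(ket0, ket0.conj())         # projector onto |0>

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition (|0> + |1>)/sqrt(2)
mu = membership(P0, psi)                 # 0.5: half-membership in the |0> set
```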
|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., et al. (2020). Search for direct production of electroweakinos in final states with one lepton, missing transverse momentum and a Higgs boson decaying into two b-jets in pp collisions at √s = 13 TeV with the ATLAS detector. Eur. Phys. J. C, 80(8), 691–29pp.
Abstract: The results of a search for electroweakino pair production pp → χ̃₁± χ̃₂⁰, in which the chargino (χ̃₁±) decays into a W boson and the lightest neutralino (χ̃₁⁰), while the heavier neutralino (χ̃₂⁰) decays into the Standard Model 125 GeV Higgs boson and a second χ̃₁⁰, are presented. The signal selection requires a pair of b-tagged jets consistent with those from a Higgs boson decay, and either an electron or a muon from the W boson decay, together with missing transverse momentum from the corresponding neutrino and the stable neutralinos. The analysis is based on data corresponding to 139 fb⁻¹ of √s = 13 TeV pp collisions provided by the Large Hadron Collider and recorded by the ATLAS detector. No statistically significant evidence of an excess of events above the Standard Model expectation is found. Limits are set on the direct production of the electroweakinos in simplified models, assuming pure wino cross-sections. Masses of χ̃₁±/χ̃₂⁰ up to 740 GeV are excluded at 95% confidence level for a massless χ̃₁⁰.
|
Alencar, G., Estrada, M., Muniz, C. R., & Olmo, G. J. (2023). Dymnikova GUP-corrected black holes. J. Cosmol. Astropart. Phys., 11(11), 100–23pp.
Abstract: We consider the impact of Generalized Uncertainty Principle (GUP) effects on the Dymnikova regular black hole. The minimum length scale introduced by the GUP modifies the energy density associated with the gravitational source, referred to as the Dymnikova vacuum, based on its analogy with the gravitational counterpart of the Schwinger effect. We present an approximated analytical solution (together with exact numerical results for comparison) that encompasses a wide range of black hole sizes, whose properties crucially depend on the ratio between the de Sitter core radius and the GUP scale. The emergence of a wormhole inside the de Sitter core in the innermost region of the object is one of the most relevant features of this family of solutions. Our findings demonstrate that these solutions remain singularity-free, confirming the robustness of the Dymnikova regular black hole under GUP corrections. Regarding energy conditions, we find that the violation of the strong, weak, and null energy conditions which is characteristic of the pure Dymnikova case does not occur at Planckian scales in the GUP-corrected solution. This contrast suggests a departure from conventional expectations and highlights the influence of quantum corrections and the GUP in modifying the energy conditions near the Planck scale.
|
Alexandre, J., Mavromatos, N. E., Mitsou, V. A., & Musumeci, E. (2024). Resummation schemes for high-electric-charge objects leading to improved experimental mass limits. Phys. Rev. D, 109(3), 036026–20pp.
Abstract: High-electric-charge compact objects (HECOs) appear in several theoretical particle physics models beyond the Standard Model, and are actively searched for in current colliders, such as the Large Hadron Collider at CERN. In such searches, mass bounds of these objects have been placed, using Drell-Yan and photon-fusion processes at tree level so far. However, such mass-bound estimates are not reliable, given that, as a result of the large values of the electric charge of the HECO, perturbative quantum electrodynamics calculations break down. In this work, we perform a Dyson-Schwinger resummation scheme (as opposed to lattice strong-coupling approach), which makes the computation of the pertinent HECO-production cross sections reliable, thus allowing us to extract improved mass bounds for such objects from ATLAS and MoEDAL searches.
|
HAWC Collaboration (Alfaro, R. et al.), & Salesa Greus, F. (2022). Gamma/hadron separation with the HAWC observatory. Nucl. Instrum. Methods Phys. Res. A, 1039, 166984–13pp.
Abstract: The High Altitude Water Cherenkov (HAWC) gamma-ray observatory observes atmospheric showers produced by incident gamma rays and cosmic rays with energy from 300 GeV to more than 100 TeV. A crucial phase in analyzing gamma-ray sources using ground-based gamma-ray detectors like HAWC is to identify the showers produced by gamma rays or hadrons. The HAWC observatory records roughly 25,000 events per second, with hadrons representing the vast majority (> 99.9%) of these events. The standard gamma/hadron separation technique in HAWC uses a simple rectangular cut involving only two parameters. This work describes the implementation of more sophisticated gamma/hadron separation techniques, via machine learning methods (boosted decision trees and neural networks), and summarizes the resulting improvements in gamma/hadron separation obtained in HAWC.
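Note: the baseline technique this abstract improves upon — a rectangular cut on two separation parameters — amounts to requiring each parameter to fall inside a fixed interval; the parameter names and thresholds below are illustrative assumptions, not HAWC's tuned values.

```python
import numpy as np

def rectangular_cut(param_a, param_b, a_min=10.0, b_max=1.5):
    """Keep an event as gamma-like only if both parameters pass their cut."""
    return (param_a > a_min) & (param_b < b_max)

# Three hypothetical events: only the first passes both cuts.
param_a = np.array([25.0, 3.0, 12.0])
param_b = np.array([0.8, 2.5, 1.9])
is_gamma = rectangular_cut(param_a, param_b)  # [True, False, False]
```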
|