|
Albiol, F., Corbi, A., & Albiol, A. (2019). Densitometric Radiographic Imaging With Contour Sensors. IEEE Access, 7, 18902–18914.
Abstract: We present the technical/physical foundations of a new imaging technique that combines ordinary radiographic information (generated by conventional X-ray settings) with the patient's volume to derive densitometric images. Traditionally, these images provide quantitative information about tissue densities. In our approach, they graphically enhance either soft or bony regions. After measuring the patient's volume with contour recognition devices, the physical lengths traversed within it (as the Roentgen beam intersects the patient) are calculated and pixel-wise associated with the original radiograph (X). In order to derive this map of lengths (L), the camera equations of the X-ray system and the contour sensor are determined. The patient's surface is also translated to the point-of-view of the X-ray beam and all its entrance/exit points are sought with the help of ray-casting methods. The derived L is applied to X as a physical operation (subtraction), obtaining soft-tissue-enhanced (D(S)) or bone-enhanced (D(B)) figures. In the D(S) type, the contained graphical information can be linearly mapped to the average electronic density (traversed by the X-ray beam). This feature represents an interesting proof-of-concept of associating density data with radiographs, but most importantly, their intensity histogram is objectively compressed, i.e., the dynamic range is narrower (compared with the corresponding X). This leads to other advantages: improvement in the visibility of border/edge areas (high gradient), extended manual window level/width manipulations during screening, and immediate correction of underexposed X instances. In the D(B) type, high-density elements are highlighted and easier to discern. All these results can be achieved with low-energy beam exposures, saving costs and dose. Future work will deepen the clinical side of our research.
In contrast with other image-based modifiers, the proposed method is grounded on the measurement of a physical entity: the span of the X-ray beam within a body while undertaking a radiographic examination.
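The core geometric step of the abstract above (casting the X-ray beam through the measured patient volume to obtain the entry/exit points and the traversed length per pixel) can be illustrated with a minimal sketch. This is not the authors' code: the patient surface is replaced here by a sphere as a toy stand-in for the contour-sensor mesh, and a standard ray-sphere intersection plays the role of the ray-casting step.

```python
# Toy sketch of the length map L: for each detector pixel, cast a ray from the
# X-ray focal spot and measure how far it travels inside the patient volume.
# The "patient" here is a sphere standing in for the contour-sensor mesh.
import math

def traversed_length(source, direction, center, radius):
    """Chord length of the ray (source + t*direction, t >= 0) inside a sphere."""
    # Normalise the ray direction.
    norm = math.sqrt(sum(d * d for d in direction))
    d = [c / norm for c in direction]
    # Solve |source + t*d - center|^2 = radius^2 for t (a quadratic in t).
    oc = [s - c for s, c in zip(source, center)]
    b = 2.0 * sum(o * di for o, di in zip(oc, d))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc <= 0.0:
        return 0.0  # ray misses the volume entirely
    t1 = (-b - math.sqrt(disc)) / 2.0
    t2 = (-b + math.sqrt(disc)) / 2.0
    t1, t2 = max(t1, 0.0), max(t2, 0.0)
    return t2 - t1  # length of the segment inside the sphere

# A central ray through a unit sphere placed 10 units away traverses its diameter:
L = traversed_length((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 10.0), 1.0)
print(round(L, 6))  # 2.0
```

Repeating this per pixel yields the map L, which the paper then combines pixel-wise with the radiograph X as a physical subtraction.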
|
|
|
Albiol, F., Corbi, A., & Albiol, A. (2017). 3D measurements in conventional X-ray imaging with RGB-D sensors. Med. Eng. Phys., 42, 73–79.
Abstract: A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient and it avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third one is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with a millimetric level of precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. It also contributes to reducing the invasiveness of ordinary X-ray environments and can replace other types of clinical explorations that are mainly aimed at measuring or geometrically relating elements that are present inside the patient's body.
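The epipolar step described above (recovering a 3D internal position from matched points in two radiographs taken at different angles) can be sketched with the standard linear triangulation method. The projection matrices and points below are synthetic placeholders, not the paper's calibrated X-ray views.

```python
# Hedged sketch of two-view triangulation: given the projection matrices of two
# radiographs (synthetic 3x4 matrices here, standing in for the calibrated
# X-ray views at different rotation angles) and a matched point in each image,
# recover the 3D position by solving the homogeneous system with an SVD.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """3D point from two views: solve A X = 0, rows built from x_i ~ P_i X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # null-space vector = homogeneous 3D point
    return X[:3] / X[3]  # de-homogenise

# Two toy views looking down the z-axis, the second shifted along x.
P1 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With the RGB-D tracking supplying the relative rotation between the two exposures, the same construction yields the internal coordinates and distances reported in the paper.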
|
|
|
Albiol, F., Corbi, A., & Albiol, A. (2017). Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings. Radiol. Phys. Technol., 10(1), 68–81.
Abstract: We explore three different alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical background differences for these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics consists of the calculation of the detector pixel density, which can also be included as part of the quality control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences achieve accurate focal lengths.
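The DLT method this paper finds superior admits a compact illustration: a 3x4 projection matrix P is recovered from 3D-to-2D correspondences by solving a homogeneous linear system with an SVD. The matrix and point set below are synthetic; the real calibration uses the tin cross-shaped fiducials of the authors' frame.

```python
# Minimal DLT sketch: estimate P (3x4) from >= 6 correspondences X_i -> x_i
# with x_i ~ P X_i, by stacking two linear equations per point and taking the
# SVD null-space vector. Synthetic data only, for illustration.
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate the 3x4 projection matrix from 3D-2D point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        rows.append([0, 0, 0, 0] + [-w for w in Xh] + [v * w for w in Xh])
        rows.append(Xh + [0, 0, 0, 0] + [-u * w for w in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # null-space vector = flattened P (up to scale)

# Synthetic check: project known 3D points with a known P, then recover it.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 500.0]])
pts3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 0), (1, 0, 1), (0, 1, 1)]  # non-coplanar fiducial positions
pts2d = []
for X in pts3d:
    x = P_true @ np.array([*X, 1.0])
    pts2d.append((x[0] / x[2], x[1] / x[2]))
P_est = dlt_projection_matrix(pts3d, pts2d)
P_est *= P_true[2, 3] / P_est[2, 3]  # fix the overall scale
print(np.allclose(P_est, P_true, atol=1e-4))  # True
```

The intrinsic and extrinsic parameters the paper compares across methods can then be read off P by a standard RQ decomposition of its left 3x3 block.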
|
|
|
Alcaide, J., Banerjee, S., Chala, M., & Titov, A. (2019). Probes of the Standard Model effective field theory extended with a right-handed neutrino. J. High Energy Phys., 08(8), 031–18pp.
Abstract: If neutrinos are Dirac particles and, as suggested by the so far null LHC results, any new physics lies at energies well above the electroweak scale, the Standard Model effective field theory has to be extended with operators involving the right-handed neutrinos. In this paper, we study this effective field theory and set constraints on the different dimension-six interactions. To that aim, we use LHC searches for associated production of light (and tau) leptons with missing energy, monojet searches, as well as pion and tau decays. Our bounds are generally above the TeV scale for order-one couplings. One particular exception is given by operators involving top quarks. These provide new signals in top decays not yet studied at colliders. Thus, we also design an LHC analysis to explore these signatures in tt production. Our results are also valid if the right-handed neutrinos are Majorana and long-lived.
|
|
|
Alcaide, J., Chala, M., & Santamaria, A. (2018). LHC signals of radiatively-induced neutrino masses and implications for the Zee-Babu model. Phys. Lett. B, 779, 107–116.
Abstract: Contrary to see-saw models, extended Higgs sectors leading to radiatively-induced neutrino masses do require the extra particles to be at the TeV scale. However, these new states often have exotic decays, to which the experimental LHC searches performed so far, focused on scalars decaying into pairs of same-sign leptons, are not sensitive. In this paper we show that their experimental signatures can start to be tested with current LHC data if dedicated multi-region analyses correlating different observables are used. We also provide high-accuracy estimations of the complicated Standard Model backgrounds involved. For the case of the Zee-Babu model, we show that regions not yet constrained by neutrino data and low-energy experiments can already be probed, while most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC.
|
|
|
Alcaide, J., Das, D., & Santamaria, A. (2017). A model of neutrino mass and dark matter with large neutrinoless double beta decay. J. High Energy Phys., 04(4), 049–21pp.
Abstract: We propose a model where neutrino masses are generated at three-loop order but neutrinoless double beta decay occurs at one loop. Thus we can have large neutrinoless double beta decay observable in future experiments even when the neutrino masses are very small. The model receives strong constraints from the neutrino data and lepton flavor violating decays, which substantially reduces the number of free parameters. Our model also opens up the possibility of having several new scalars below the TeV regime, which can be explored at collider experiments. Additionally, our model has an unbroken Z(2) symmetry which allows us to identify a viable Dark Matter candidate.
|
|
|
Alcaide, J., & Mileo, N. I. (2020). LHC sensitivity to singly charged scalars decaying into electrons and muons. Phys. Rev. D, 102(7), 075030–11pp.
Abstract: Current LHC searches for nonsupersymmetric singly charged scalars, based on two-Higgs-doublet models, in general focus the analysis on third-generation fermions in the final state. However, singly charged scalars in alternative extensions of the scalar sector involve Yukawa couplings not proportional to the mass of the fermions. Assuming the scalar decays into electrons and muons, it can manifest cleaner experimental signatures. In this paper, we suggest that a singly charged scalar singlet, with electroweak production, can start to be probed in the near future with dedicated search strategies. Depending on the strength of the Yukawa couplings, two independent scenarios are considered: direct pair production (small couplings) and single production via a virtual neutrino exchange (large couplings). We show that, up to a mass as large as 500 GeV, most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC. Our results also apply to other frameworks, provided the singly charged scalar exhibits similar production patterns and dominant decay modes.
|
|
|
Alcaide, J., Salvado, J., & Santamaria, A. (2018). Fitting flavour symmetries: the case of two-zero neutrino mass textures. J. High Energy Phys., 07(7), 164–18pp.
Abstract: We present a numeric method for the analysis of the fermion mass matrices predicted in flavour models. The method does not require any previous algebraic work, it offers a chi(2) comparison test and an easy estimate of confidence intervals. It can also be used to study the stability of the results when the predictions are disturbed by small perturbations. We have applied the method to the case of two-zero neutrino mass textures using the latest available fits on neutrino oscillations, derived the available parameter space for each texture and compared them. Textures A(1) and A(2) seem favoured because they give a small chi(2), allow for large regions in parameter space and give neutrino masses compatible with Cosmology limits. The other "allowed" textures remain allowed although with a very constrained parameter space, which, in some cases, could be in conflict with Cosmology. We have also revisited the "forbidden" textures and studied the stability of the results when the texture zeroes are not exact. Most of the forbidden textures remain forbidden, but textures F(1) and F(3) are particularly sensitive to small perturbations and could become allowed.
|
|
|
Aldana, M., & Lledo, M. A. (2023). The Fuzzy Bit. Symmetry-Basel, 15(12), 2103–25pp.
Abstract: In this paper, the formulation of Quantum Mechanics in terms of fuzzy logic and fuzzy sets is explored. A result by Pykacz, which establishes a correspondence between (quantum) logics (lattices with certain properties) and certain families of fuzzy sets, is applied to the Birkhoff-von Neumann logic, the lattice of projectors of a Hilbert space. Three cases are considered: the qubit, two qubits entangled, and a qutrit 'nested' inside the two entangled qubits. The membership functions of the fuzzy sets are explicitly computed and all the connectives of the fuzzy sets are interpreted as operations with these particular membership functions. In this way, a complete picture of the standard quantum logic in terms of fuzzy sets is obtained for the systems considered.
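The membership functions the abstract mentions can be illustrated for the simplest case it treats, the qubit. Under Pykacz's correspondence, each projector P of the Birkhoff-von Neumann lattice defines a fuzzy set on the quantum states, with membership value tr(P rho); this small sketch is my illustration, not the paper's code.

```python
# Fuzzy membership attached to a projector: the degree to which a state rho
# "belongs" to the fuzzy set of projector P is the Born probability tr(P rho).
import numpy as np

def membership(P, rho):
    """Fuzzy membership of the state rho in the set attached to projector P."""
    return float(np.real(np.trace(P @ rho)))

# Projector onto |0>, evaluated on the state |+> = (|0> + |1>)/sqrt(2).
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus.conj())
print(membership(P0, rho))  # 0.5
```

The identity projector gives membership 1 for every state, and the lattice meet/join of projectors induces the fuzzy-set connectives the paper interprets.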
|
|
|
Alencar, G., Estrada, M., Muniz, C. R., & Olmo, G. J. (2023). Dymnikova GUP-corrected black holes. J. Cosmol. Astropart. Phys., 11(11), 100–23pp.
Abstract: We consider the impact of Generalized Uncertainty Principle (GUP) effects on the Dymnikova regular black hole. The minimum length scale introduced by the GUP modifies the energy density associated with the gravitational source, referred to as the Dymnikova vacuum, based on its analogy with the gravitational counterpart of the Schwinger effect. We present an approximated analytical solution (together with exact numerical results for comparison) that encompasses a wide range of black hole sizes, whose properties crucially depend on the ratio between the de Sitter core radius and the GUP scale. The emergence of a wormhole inside the de Sitter core in the innermost region of the object is one of the most relevant features of this family of solutions. Our findings demonstrate that these solutions remain singularity free, confirming the robustness of the Dymnikova regular black hole under GUP corrections. Regarding energy conditions, we find that the violation of the strong, weak, and null energy conditions, which is characteristic of the pure Dymnikova case, does not occur at Planckian scales in the GUP-corrected solution. This contrast suggests a departure from conventional expectations and highlights the influence of quantum corrections and the GUP in modifying the energy conditions near the Planck scale.
|
|