Albiol, A., Albiol, F., Paredes, R., Plasencia-Martinez, J. M., Blanco Barrio, A., Garcia Santos, J. M., et al. (2022). A comparison of COVID-19 early detection between convolutional neural networks and radiologists. Insights Imaging, 13(1), 122 (12 pp.).
Abstract: Background: The role of chest radiography in COVID-19 disease has changed since the beginning of the pandemic, from a diagnostic tool used when microbiological resources were scarce to one focused on detecting and monitoring COVID-19 lung involvement. Early detection of the disease from chest radiographs is still helpful in resource-poor environments. However, the sensitivity of a chest radiograph for diagnosing COVID-19 is modest, even for expert radiologists. In this paper, the performance of a deep learning algorithm on the first clinical encounter is evaluated and compared with that of a group of radiologists with different years of experience. Methods: The algorithm uses an ensemble of four deep convolutional networks, Ensemble4Covid, trained to detect COVID-19 on frontal chest radiographs. The algorithm was tested using images from the first clinical encounter of positive and negative cases. Its performance was compared with that of five radiologists on a smaller test subset of patients and was also validated on the public COVIDx dataset. Results: Compared to the consensus of five radiologists, the Ensemble4Covid model achieved an AUC of 0.85, whereas the radiologists achieved an AUC of 0.71. On the public COVIDx dataset, a single model of our ensemble showed no significant performance difference from other state-of-the-art models. Conclusion: The results show that using images from the first clinical encounter significantly lowers COVID-19 detection performance. Under these challenging conditions, our Ensemble4Covid performs considerably better than a consensus of five radiologists. Artificial intelligence can be used for the fast diagnosis of COVID-19.
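The abstract does not spell out how the four networks are fused, so the following is a minimal sketch of one plausible reading: averaging per-image probabilities across an ensemble of CNN classifiers. The ResNet-18 backbone and the plain-mean fusion rule are illustrative assumptions, not the authors' Ensemble4Covid implementation.

```python
# Minimal sketch of probability averaging over an ensemble of four CNNs.
# The ResNet-18 backbone and mean fusion are assumptions for illustration;
# the paper's Ensemble4Covid architecture and fusion rule may differ.
import torch
import torchvision.models as models

def build_ensemble(num_models=4):
    nets = []
    for _ in range(num_models):
        net = models.resnet18(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, 1)  # single COVID-19 logit
        nets.append(net.eval())
    return nets

@torch.no_grad()
def ensemble_probability(nets, x):
    """x: batch of frontal chest radiographs, shape (N, 3, H, W)."""
    probs = [torch.sigmoid(net(x)) for net in nets]
    return torch.stack(probs).mean(dim=0)  # average the per-network probabilities
```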
Albiol, A., Corbi, A., & Albiol, F. (2017). Automatic intensity windowing of mammographic images based on a perceptual metric. Med. Phys., 44(4), 1369–1378.
Abstract: Purpose: To automatically adjust the initial window level (WL) and window width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on maximizing the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion, and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and on the time involved in the process. Methods: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high-contrast, wide-dynamic-range 12-bit data and then maximizes the graphical information presented on ordinary 8-bit displays. Tests were carried out on several mammogram databases, comprising correlations and an ANOVA analysis against the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at . Results: Auto-leveled images show superior quality, both perceptually and objectively, compared to their full-intensity-range counterparts and to common methods such as global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain microcalcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. Conclusions: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the outset, an optimal and customized windowing setting for each mammogram.
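As a rough illustration of the MI-maximization idea behind GRAIL, the sketch below scores candidate WL/WW pairs by the mutual information between a 12-bit image and its windowed 8-bit display, then keeps the best pair. The coarse grid search and the plain histogram-based MI estimate are assumptions for brevity; GRAIL itself works on a perceptual, Gabor-filtered decomposition of the image.

```python
# Sketch: choose WL/WW by maximizing MI between a 12-bit mammogram and its
# windowed 8-bit display. Grid search and histogram MI are simplifications;
# GRAIL additionally uses Gabor-filtered derivations of the image.
import numpy as np

def apply_window(img12, wl, ww):
    img = img12.astype(np.float64)
    lo, hi = wl - ww / 2, wl + ww / 2
    out = np.clip((img - lo) / max(hi - lo, 1), 0, 1)
    return (out * 255).astype(np.uint8)

def mutual_information(a, b, bins=64):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_window(img12):
    candidates = [(wl, ww)
                  for wl in range(256, 4096, 256)
                  for ww in range(256, 4096, 256)]
    return max(candidates,
               key=lambda p: mutual_information(img12, apply_window(img12, *p)))
```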
Albiol, F., Corbi, A., & Albiol, A. (2019). Densitometric Radiographic Imaging With Contour Sensors. IEEE Access, 7, 18902–18914.
Abstract: We present the technical and physical foundations of a new imaging technique that combines ordinary radiographic information (generated by conventional X-ray settings) with the patient's volume to derive densitometric images. Traditionally, these images provide quantitative information about tissue densities. In our approach, they graphically enhance either soft or bony regions. After measuring the patient's volume with contour recognition devices, the physical traversed lengths within it (as the Roentgen beam intersects the patient) are calculated and pixel-wise associated with the original radiograph (X). To derive this map of lengths (L), the camera equations of the X-ray system and the contour sensor are determined. The patient's surface is also translated to the point of view of the X-ray beam, and all its entrance/exit points are found with the help of ray-casting methods. The derived L is applied to X as a physical operation (subtraction), obtaining soft-tissue-enhanced (D_S) or bone-enhanced (D_B') figures. In the D_S type, the contained graphical information can be linearly mapped to the average electronic density traversed by the X-ray beam. This feature represents an interesting proof of concept of associating density data with radiographs; more importantly, the intensity histogram is objectively compressed, i.e., the dynamic range is narrower than that of the corresponding X. This leads to other advantages: improved visibility of border/edge (high-gradient) areas, extended manual window level/width manipulations during screening, and immediate correction of underexposed X instances. In the D_B' type, high-density elements are highlighted and easier to discern. All these results can be achieved with low-energy beam exposures, saving cost and dose. Future work will deepen the clinical side of this research. In contrast with other image-based modifiers, the proposed method is grounded in the measurement of a physical entity: the span of the X-ray beam within a body during a radiographic examination.
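To make the X/L combination concrete, here is a toy sketch of a pixel-wise operation of the kind the abstract describes. The log-attenuation domain and the reference coefficient mu_ref are assumptions for illustration; the paper's exact subtraction and the ray-casting step used to obtain L are not reproduced here.

```python
# Toy sketch of combining a radiograph X with a map L of beam path lengths
# through the patient. Working in log-attenuation space and the constant
# mu_ref are illustrative assumptions, not the paper's exact operation.
import numpy as np

def densitometric_images(X, L, mu_ref=0.2, eps=1e-6):
    """X: raw intensity radiograph; L: traversed-length map (same shape, cm)."""
    attenuation = -np.log(np.clip(X / X.max(), eps, 1.0))  # ~ line integral of mu
    D_soft = attenuation - mu_ref * L                 # suppress length-driven contrast
    D_bone = attenuation / np.clip(L, eps, None)      # average attenuation per cm
    return D_soft, D_bone
```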
Albiol, F., Corbi, A., & Albiol, A. (2017). 3D measurements in conventional X-ray imaging with RGB-D sensors. Med. Eng. Phys., 42, 73–79.
Abstract: A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on combining a pair of radiographs from a patient and avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with millimetric precision. With the proposed approach, internally reconstructed 3D coordinates and distances can be provided to the physician. It also reduces the invasiveness of ordinary X-ray environments and can replace other types of clinical exploration mainly aimed at measuring or geometrically relating elements inside the patient's body.
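The 3D-from-two-radiographs step can be illustrated with standard linear triangulation from epipolar geometry. The sketch assumes the two 3x4 projection matrices P1 and P2 of the calibrated X-ray system are already known; in the paper, the second view's pose comes from the RGB-D rotation tracking.

```python
# Sketch of recovering a 3D point from its projections in two radiographs
# taken at different patient rotations, given known 3x4 projection matrices.
# This is the textbook linear (DLT) triangulation step, not the full pipeline.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """uv1, uv2: pixel coordinates (u, v) of the same landmark in each view."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean, in calibration units
```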
Albiol, F., Corbi, A., & Albiol, A. (2017). Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings. Radiol. Phys. Technol., 10(1), 68–81.
Abstract: We explore three different alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical differences between these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics consists of calculating the detector pixel density, which can also be included as part of the quality control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and in operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences achieve accurate focal lengths.
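For reference, here is a minimal sketch of the DLT step the paper found most accurate: estimating the 3x4 projection matrix from known 3D fiducial positions and their 2D detector projections. Point normalization and the paper's quality metrics are omitted; this is the textbook linear solution, not the authors' exact pipeline.

```python
# Sketch of the DLT: solve for the 3x4 projection matrix P from >= 6
# 3D fiducial positions and their 2D detector pixels via homogeneous
# least squares (smallest singular vector of the stacked constraints).
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """pts3d: (N, 3) fiducial positions; pts2d: (N, 2) detector pixels; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)
```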
Alcaide, J., Banerjee, S., Chala, M., & Titov, A. (2019). Probes of the Standard Model effective field theory extended with a right-handed neutrino. J. High Energy Phys., 08(8), 031 (18 pp.).
Abstract: If neutrinos are Dirac particles and, as suggested by the null LHC results so far, any new physics lies at energies well above the electroweak scale, the Standard Model effective field theory has to be extended with operators involving the right-handed neutrinos. In this paper, we study this effective field theory and set constraints on the different dimension-six interactions. To that aim, we use LHC searches for associated production of light (and tau) leptons with missing energy, monojet searches, as well as pion and tau decays. Our bounds are generally above the TeV scale for order-one couplings. One notable exception is given by operators involving top quarks, which provide new signals in top decays not yet studied at colliders. Thus, we also design an LHC analysis to explore these signatures in top quark pair production. Our results remain valid if the right-handed neutrinos are Majorana and long-lived.
Alcaide, J., Chala, M., & Santamaria, A. (2018). LHC signals of radiatively-induced neutrino masses and implications for the Zee-Babu model. Phys. Lett. B, 779, 107–116.
Abstract: Contrary to see-saw models, extended Higgs sectors leading to radiatively-induced neutrino masses do require the extra particles to be at the TeV scale. However, these new states often have exotic decays, to which the experimental LHC searches performed so far, focused on scalars decaying into pairs of same-sign leptons, are not sensitive. In this paper we show that their experimental signatures can start to be tested with current LHC data if dedicated multi-region analyses correlating different observables are used. We also provide high-accuracy estimates of the complicated Standard Model backgrounds involved. For the case of the Zee-Babu model, we show that regions not yet constrained by neutrino data and low-energy experiments can already be probed, while most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC.
Alcaide, J., Das, D., & Santamaria, A. (2017). A model of neutrino mass and dark matter with large neutrinoless double beta decay. J. High Energy Phys., 04(4), 049 (21 pp.).
Abstract: We propose a model where neutrino masses are generated at three-loop order but neutrinoless double beta decay occurs at one loop. Thus, we can have a large neutrinoless double beta decay rate, observable in future experiments, even when the neutrino masses are very small. The model receives strong constraints from neutrino data and lepton-flavor-violating decays, which substantially reduces the number of free parameters. Our model also opens up the possibility of having several new scalars below the TeV regime, which can be explored at collider experiments. Additionally, the model has an unbroken Z2 symmetry, which allows us to identify a viable dark matter candidate.
Alcaide, J., & Mileo, N. I. (2020). LHC sensitivity to singly charged scalars decaying into electrons and muons. Phys. Rev. D, 102(7), 075030 (11 pp.).
Abstract: Current LHC searches for nonsupersymmetric singly charged scalars, based on two-Higgs-doublet models, generally focus the analysis on third-generation fermions in the final state. However, singly charged scalars in alternative extensions of the scalar sector involve Yukawa couplings that are not proportional to the mass of the fermions. If the scalar decays into electrons and muons, it can manifest cleaner experimental signatures. In this paper, we suggest that a singly charged scalar singlet, with electroweak production, can start to be probed in the near future with dedicated search strategies. Depending on the strength of the Yukawa couplings, two independent scenarios are considered: direct pair production (small couplings) and single production via virtual neutrino exchange (large couplings). We show that, up to a mass as large as 500 GeV, most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC. Our results also apply to other frameworks, provided the singly charged scalar exhibits similar production patterns and dominant decay modes.
Alcaide, J., Salvado, J., & Santamaria, A. (2018). Fitting flavour symmetries: the case of two-zero neutrino mass textures. J. High Energy Phys., 07(7), 164 (18 pp.).
Abstract: We present a numerical method for the analysis of the fermion mass matrices predicted in flavour models. The method does not require any previous algebraic work, offers a chi-squared comparison test, and gives an easy estimate of confidence intervals. It can also be used to study the stability of the results when the predictions are disturbed by small perturbations. We have applied the method to the case of two-zero neutrino mass textures using the latest available fits of neutrino oscillation data, derived the available parameter space for each texture, and compared them. Textures A1 and A2 seem favoured because they give a small chi-squared, allow for large regions in parameter space, and give neutrino masses compatible with cosmology limits. The other “allowed” textures remain allowed, although with a very constrained parameter space, which, in some cases, could be in conflict with cosmology. We have also revisited the “forbidden” textures and studied the stability of the results when the texture zeroes are not exact. Most of the forbidden textures remain forbidden, but textures F1 and F3 are particularly sensitive to small perturbations and could become allowed.
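To illustrate the flavor of such a numerical method, the sketch below minimizes a chi-squared built from a texture's predicted observables against fitted data. The two-observable toy model, central values, and errors are invented for illustration and stand in for the paper's full neutrino-oscillation parameterization.

```python
# Toy sketch of a chi-squared comparison test: map texture parameters to
# predicted observables, score them against fitted data with Gaussian
# errors, and minimize. All numbers here are illustrative, not the paper's.
import numpy as np
from scipy.optimize import minimize

data = np.array([7.4e-5, 2.5e-3])     # (dm21^2, dm31^2) toy central values, eV^2
sigma = np.array([0.2e-5, 0.03e-3])   # toy 1-sigma errors

def predict(params):
    """Stand-in for the texture's prediction of the observables."""
    m1, r = params
    m2 = np.hypot(m1, np.sqrt(data[0]))   # toy relation mimicking a zero texture
    m3 = r * m2
    return np.array([m2**2 - m1**2, m3**2 - m1**2])

def chi2(params):
    return float(np.sum(((predict(params) - data) / sigma) ** 2))

best = minimize(chi2, x0=[0.01, 7.0], method="Nelder-Mead")
print(best.x, best.fun)   # best-fit parameters and minimum chi-squared
```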