|
Garcilazo, H., Valcarce, A., & Vijande, J. (2020). Neutral baryonic systems with strangeness. Int. J. Mod. Phys. E, 29(1), 1930009–22pp.
Abstract: We review the status regarding the existence of three- and four-body bound states made of neutrons and Λ hyperons. For the interesting cases, the coupling to neutral baryonic systems made of charged particles of different strangeness has been addressed. There are strong arguments showing that the Λnn system has no bound states. ΛΛnn strong-stable states are not favored by our current knowledge of the strangeness −1 and −2 baryon-baryon interactions. However, a possible Ξ⁻t quasibound state decaying to ΛΛnn might exist in nature. Similarly, there is broad agreement about the nonexistence of ΛΛn bound states. However, the coupling to ΞNN states opens the door to a resonance above the ΛΛn threshold.
|
|
|
Wang, D. (2023). Finslerian Universe May Reconcile Tensions Between High and Low Redshift Probes. Int. J. Theor. Phys., 62(8), 184–11pp.
Abstract: To reconcile the current tensions between high- and low-redshift observations, we perform the first constraints on Finslerian cosmological models including effective dark matter and dark energy components. We find that all four Finslerian models can effectively alleviate the Hubble constant (H0) tension and the amplitude of the root-mean-square density fluctuations (σ8) tension between the Planck measurements and local-Universe observations at the 68% confidence level. The addition of a massless sterile neutrino and of a varying total mass of active neutrinos to the base Finslerian two-parameter model, respectively, reduces the H0 tension from 3.4σ to 1.9σ and alleviates the σ8 tension better than the other three Finslerian models. Computing the Bayesian evidence with respect to the ΛCDM model, our analysis shows a weak preference for the base Finslerian model and moderate preferences for its three one-parameter extensions. Based on model-independent Gaussian Processes, we propose a new linear relation that describes the current redshift-space-distortion data very well. Using the most stringent constraints we can provide, we also obtain limits on typical model parameters for the three one-parameter extended models.
|
|
|
Schaffter, T., et al., Albiol, F., & Caballero, L. (2020). Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open, 3(3), e200265–15pp.
Abstract: Importance Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1,100 participants comprising 126 teams from 44 countries took part. Analysis began November 18, 2016. Main Outcomes and Measures Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2), and output a score translating to a cancer yes/no determination within 12 months. Algorithm accuracy for breast cancer detection was evaluated using the area under the curve, and algorithm specificity was compared with radiologists' specificity, with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessments was developed and evaluated. Results Overall, 144,231 screening mammograms from 85,580 US women (952 cancer positive ≤ 12 months from screening) were used for algorithm training and validation. A second, independent validation cohort included 166,578 examinations from 68,008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden), and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods to enhance mammography screening interpretation. Question How do deep learning algorithms perform compared with radiologists in screening mammography interpretation? Findings In this diagnostic accuracy study using 144,231 screening mammograms from 85,580 women in the United States and 166,578 screening mammograms from 68,008 women in Sweden, no single artificial intelligence algorithm outperformed US community radiologist benchmarks; including clinical data and prior mammograms did not improve artificial intelligence performance. However, combining the best-performing artificial intelligence algorithms with single-radiologist assessment demonstrated increased specificity. Meaning Integrating artificial intelligence into mammography interpretation in single-radiologist settings could yield significant performance improvements, with the potential to reduce health care system expenditures and address the resource scarcity experienced in population-based screening programs.
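The ensemble idea above can be sketched in a few lines. The study's actual aggregation method is not described in the abstract, so the blending scheme, function name, and weight below are purely illustrative:

```python
# Hypothetical sketch of combining AI algorithm scores with a radiologist's
# binary recall decision. The weight and the simple average are invented
# for illustration; they are not the challenge's actual ensemble method.

def ensemble_score(algorithm_scores, radiologist_recall, weight=0.5):
    """Blend the mean of several AI scores (each in [0, 1]) with a
    radiologist's binary recall decision (0 = no recall, 1 = recall)."""
    ai_score = sum(algorithm_scores) / len(algorithm_scores)
    return weight * ai_score + (1 - weight) * radiologist_recall

# Example: three algorithms lean positive and the radiologist recalls
# the examination, so the combined score is pushed well above 0.5.
score = ensemble_score([0.7, 0.8, 0.6], radiologist_recall=1)
```

In a real screening setting the blend weight would be tuned on a validation cohort so that specificity is maximized at a fixed sensitivity, which is the operating point the study reports.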
|
|
|
KM3NeT Collaboration (Aiello, S., et al.), Calvo, D., Coleiro, A., Colomer, M., Gozzini, S. R., Hernandez-Rey, J. J., et al. (2019). KM3NeT front-end and readout electronics system: hardware, firmware, and software. J. Astron. Telesc. Instrum. Syst., 5(4), 046001–15pp.
Abstract: The KM3NeT research infrastructure being built at the bottom of the Mediterranean Sea will host water-Cherenkov telescopes for the detection of cosmic neutrinos. The neutrino telescopes will consist of large-volume three-dimensional grids of optical modules that detect the Cherenkov light from charged particles produced by neutrino-induced interactions. Each optical module houses 31 3-in. photomultiplier tubes, instrumentation for calibration of the photomultiplier signal and positioning of the optical module, and all associated electronics boards. By design, the total electrical power consumption of an optical module is capped at seven watts. We present an overview of the front-end and readout electronics system inside the optical module, which has been designed for 1-ns synchronization between the clocks of all optical modules in the grid over a lifetime of at least 20 years. (C) 2019 Society of Photo-Optical Instrumentation Engineers (SPIE)
|
|
|
KM3NeT Collaboration (Aiello, S., et al.), Alves Garre, S., Calvo, D., Carretero, V., Colomer, M., Corredoira, I., et al. (2021). Architecture and performance of the KM3NeT front-end firmware. J. Astron. Telesc. Instrum. Syst., 7(1), 016001–24pp.
Abstract: The KM3NeT infrastructure consists of two deep-sea neutrino telescopes being deployed in the Mediterranean Sea. The telescopes will detect extraterrestrial and atmospheric neutrinos by means of the photons induced by the passage of relativistic charged particles through the seawater as a consequence of a neutrino interaction. The telescopes are configured in a three-dimensional grid of digital optical modules, each hosting 31 photomultipliers. The photomultiplier signals produced by the incident Cherenkov photons are converted into digital information consisting of the integrated pulse duration and the time at which the signal surpasses a chosen threshold. The digitization is done by means of time-to-digital converters (TDCs) embedded in the field-programmable gate array of the central logic board. Subsequently, a state machine formats the acquired data for transmission to shore. We present the architecture and performance of the front-end firmware, consisting of the TDCs and the state machine.
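The time-over-threshold digitization described above can be illustrated with a small software sketch. The real digitization runs inside FPGA firmware; the sampling period, threshold, and waveform below are invented for illustration only:

```python
# Illustrative sketch of time-over-threshold digitization: the hit time is
# when the sampled pulse first crosses a threshold, and the time over
# threshold (ToT) is how long it stays above it. All numbers are made up.

def digitize(samples, threshold, period_ns=1.0):
    """Return (hit_time_ns, tot_ns) for the first threshold crossing,
    or None if the pulse never crosses the threshold."""
    above = [i for i, v in enumerate(samples) if v > threshold]
    if not above:
        return None
    start = above[0]
    end = start
    # Extend the hit while consecutive samples stay above threshold.
    while end + 1 < len(samples) and samples[end + 1] > threshold:
        end += 1
    return (start * period_ns, (end - start + 1) * period_ns)

# A toy PMT pulse sampled at 1 ns: crosses threshold 3 at sample 2 and
# stays above it for three samples.
hit = digitize([0, 1, 5, 8, 6, 2, 0], threshold=3)
```

The (time, ToT) pair is exactly the per-hit information the abstract says is shipped to shore, which is why the front-end data volume stays small compared with transmitting full waveforms.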
|
|
|
Carrasco-Ribelles, L. A., Pardo-Mas, J. R., Tortajada, S., Saez, C., Valdivieso, B., & Garcia-Gomez, J. M. (2021). Predicting morbidity by local similarities in multi-scale patient trajectories. J. Biomed. Inform., 120, 103837–9pp.
Abstract: Patient Trajectories (PTs) are a method of representing the temporal evolution of patients. They can include information from different sources and be used in socio-medical or clinical domains. PTs have generally been used to generate and study the most common trajectories in, for instance, the development of a disease. Healthcare predictive models, on the other hand, generally rely on static snapshots of patient information. Only a few works on prediction in healthcare have been found that use PTs, and therefore benefit from their temporal dimension. All of them, however, have used PTs created from single-source information. Therefore, the use of longitudinal multi-scale data to build PTs and obtain predictions about health conditions is yet to be explored. Our hypothesis is that local similarities in small chunks of PTs can identify similar patients with respect to their future morbidities. The objectives of this work are (1) to develop a methodology to identify local similarities between PTs before the occurrence of morbidities in order to predict these morbidities in new query individuals; and (2) to validate this methodology on risk prediction of cardiovascular disease (CVD) occurrence in patients with diabetes. We propose a novel formal definition of PTs based on sequences of longitudinal multi-scale data, as well as a dynamic programming methodology to identify local alignments on PTs for predicting future morbidities. Both the proposed PT definition and the alignment algorithm are generic and can be applied to any clinical domain. We validated this solution for predicting CVD in patients with diabetes and achieved a precision of 0.33, a recall of 0.72, and a specificity of 0.38. The proposed solution can therefore be of utmost utility in secondary screening for the diabetes use case.
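The local-alignment idea above follows the classic dynamic programming pattern. As a minimal sketch, here is a Smith-Waterman-style local alignment score between two plain event sequences; the paper's algorithm operates on multi-scale trajectories with its own scoring, so the match/mismatch/gap values and the toy event strings are illustrative only:

```python
# Smith-Waterman-style local alignment: the DP cell h[i][j] holds the best
# score of any alignment ending at a[i-1], b[j-1], floored at zero so that
# alignments can start anywhere (this is what makes the similarity "local").

def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
            best = max(best, h[i][j])
    return best

# Toy "trajectories": the shared chunk "HBA1C-HIGH" aligns locally even
# though the surrounding events differ.
score = local_alignment_score("HBA1C-HIGH-BP", "XHBA1C-HIGHY")
```

With a traceback over `h`, the same table also yields which chunks of the two trajectories matched, which is what lets similar patients be identified before a morbidity occurs.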
|
|
|
Hinarejos, M., Bañuls, M. C., & Perez, A. (2013). A Study of Wigner Functions for Discrete-Time Quantum Walks. J. Comput. Theor. Nanosci., 10(7), 1626–1633.
Abstract: We perform a systematic study of the discrete-time quantum walk in one dimension using Wigner functions, which are generalized to include the chirality (or coin) degree of freedom. In particular, we analyze the evolution of the negative volume in phase space, as a function of time, for different initial states. This negativity can be used to quantify the degree of departure of the system from a classical state. We also relate this quantity to the entanglement between the coin and walker subspaces.
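A minimal sketch of the underlying system, assuming a standard Hadamard coin and a symmetric initial coin state: the walk below tracks the coin-walker entanglement entropy that the paper relates to Wigner-function negativity. (The generalized Wigner-function construction itself is more involved and is not reproduced here.)

```python
import numpy as np

def walk(steps, psi0=(1 / np.sqrt(2), 1j / np.sqrt(2))):
    """One-dimensional discrete-time quantum walk with a Hadamard coin."""
    n = 2 * steps + 1                       # positions -steps .. +steps
    psi = np.zeros((n, 2), dtype=complex)   # amplitudes (position, coin)
    psi[steps] = psi0                       # walker starts at the origin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                     # coin toss
        up = np.roll(psi[:, 0], -1)         # coin-up component shifts left
        down = np.roll(psi[:, 1], 1)        # coin-down component shifts right
        psi = np.stack([up, down], axis=1)  # conditional shift
    return psi

def coin_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced coin state."""
    rho = psi.T @ psi.conj()                # 2x2 reduced density matrix
    evals = np.linalg.eigvalsh(rho).clip(1e-12)
    return float(-(evals * np.log2(evals)).sum())

S = coin_entropy(walk(50))  # between 0 (product state) and 1 bit
```

For the Hadamard walk the coin entropy settles near a constant value at long times, which is the kind of coin-walker entanglement the negative phase-space volume is compared against.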
|
|
|
Folgado, M. G., & Sanz, V. (2022). Exploring the political pulse of a country using data science tools. J. Comput. Soc. Sci., 5, 987–1000.
Abstract: In this paper we illustrate the use of Data Science techniques to analyse complex human communication. In particular, we consider tweets from leaders of political parties as a dynamical proxy to political programmes and ideas. We also study the temporal evolution of their contents as a reaction to specific events. We analyse levels of positive and negative sentiment in the tweets using new tools adapted to social media. We also train a Fully-Connected Neural Network (FCNN) to recognise the political affiliation of a tweet. The FCNN is able to predict the origin of the tweet with a precision in the range of 71-75%, and the political leaning (left or right) with a precision of around 90%. This study is meant to be viewed as an example of how to use Twitter data and different types of Data Science tools for a political analysis.
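As a toy sketch of the classification task above, here is a bag-of-words logistic regression standing in for the paper's fully-connected neural network; the tweets, vocabulary, and labels are invented, whereas the real training used the tweets of party leaders:

```python
# Toy left/right tweet classifier: bag-of-words features plus a logistic
# unit trained by stochastic gradient descent. All data below is invented.
import math

def featurize(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def train(samples, vocab, lr=0.5, epochs=200):
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in samples:
            x = featurize(text, vocab)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1 / (1 + math.exp(-z)) - y        # gradient of log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(text, vocab, w, b):
    z = sum(wi * xi for wi, xi in zip(w, featurize(text, vocab))) + b
    return 1 if z > 0 else 0                      # 1 = "right", 0 = "left"

vocab = ["taxes", "workers", "freedom", "rights"]
data = [("lower taxes more freedom", 1), ("workers rights now", 0),
        ("freedom and lower taxes", 1), ("defend workers and rights", 0)]
w, b = train(data, vocab)
label = predict("cut taxes", vocab, w, b)
```

A real pipeline would replace the hand-built vocabulary with learned embeddings and the single logistic unit with the paper's fully-connected network, but the train/predict structure is the same.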
|
|
|
Choi, K. Y., Lopez-Fogliani, D. E., Muñoz, C., & Ruiz de Austri, R. (2010). Gamma-ray detection from gravitino dark matter decay in the μνSSM. J. Cosmol. Astropart. Phys., 03(3), 028–14pp.
Abstract: The μνSSM provides a solution to the μ-problem of the MSSM and explains the origin of neutrino masses by simply using right-handed neutrino superfields. Given that R-parity is broken in this model, the gravitino is a natural candidate for dark matter, since its lifetime becomes much longer than the age of the Universe. We consider the implications of gravitino dark matter in the μνSSM, analyzing in particular the prospects for detecting gamma rays from decaying gravitinos. If the gravitino makes up the whole dark matter component, a gravitino mass larger than 20 GeV is disfavored by the isotropic diffuse photon background measurements. On the other hand, a gravitino with a mass in the range 0.1–20 GeV gives rise to a signal that might be observed by the Fermi satellite. In this way, important regions of the parameter space of the μνSSM can be checked.
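The detectability argument rests on a scaling that is generic to any decaying dark matter candidate, not specific to the μνSSM analysis: the photon flux from the halo goes as 1/(mass × lifetime) times the line-of-sight integral of the dark matter density. A hedged sketch, with placeholder numbers that are not values from the paper:

```python
import math

def decay_flux(mass_gev, lifetime_s, los_density_gev_cm2):
    """Photon flux (cm^-2 s^-1 sr^-1) from two-body dark matter decay
    along one line of sight, assuming one photon per decay:
    flux = (1 / (4*pi * m * tau)) * integral of rho along the sight line."""
    return los_density_gev_cm2 / (4 * math.pi * mass_gev * lifetime_s)

# Doubling the gravitino mass at fixed lifetime halves the flux, which is
# one reason heavier gravitinos run into the diffuse-background bound only
# if their lifetime is correspondingly longer.
f1 = decay_flux(10.0, 1e27, 1e22)
f2 = decay_flux(20.0, 1e27, 1e22)
```

In the μνSSM the lifetime itself depends on the gravitino mass and the R-parity-violating parameters, which is what turns this scaling into the mass bounds quoted in the abstract.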
|
|
|
Gomez-Cadenas, J. J., Martin-Albo, J., Sorel, M., Ferrario, P., Monrabal, F., Muñoz, J., et al. (2011). Sense and sensitivity of double beta decay experiments. J. Cosmol. Astropart. Phys., 06(6), 007–30pp.
Abstract: The search for neutrinoless double beta decay is a very active field in which the number of proposals for next-generation experiments has proliferated. In this paper we attempt to address both the sense and the sensitivity of such proposals. Sensitivity comes first, by means of a simple and unambiguous statistical recipe to derive the sensitivity to a putative Majorana neutrino mass, mββ. In order to make sense of how the different experimental approaches compare, we apply this recipe to a selection of proposals, comparing the resulting sensitivities. We also propose a "physics-motivated range" (PMR) of the nuclear matrix elements as a unifying criterion between the different nuclear models. The expected performance of the proposals is parametrized in terms of only four numbers: energy resolution, background rate (per unit time, isotope mass, and energy), detection efficiency, and ββ isotope mass. For each proposal, both a reference and an optimistic scenario for the experimental performance are studied. In the reference scenario we find that all the proposals will be able to partially explore the degenerate spectrum, without fully covering it, although four of them (KamLAND-Zen, CUORE, NEXT and EXO) will approach the 50 meV boundary. In the optimistic scenario, we find that CUORE and the xenon-based proposals (KamLAND-Zen, EXO and NEXT) will explore a significant fraction of the inverse hierarchy, with NEXT covering it almost fully. For the long-term future, we argue that Xe-136-based experiments may provide the best case for a 1-ton-scale experiment, given the potentially very low backgrounds achievable and the expected scalability to large isotope masses.
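The four performance numbers named above enter the standard background-limited scaling: the half-life reach grows as efficiency × sqrt(exposure / (background rate × energy resolution)), and the Majorana-mass reach scales as the inverse square root of the half-life. The sketch below shows only these scalings; constants of proportionality (nuclear matrix elements, phase space) are omitted, so only ratios between scenarios are meaningful:

```python
import math

def t_half_reach(efficiency, mass_kg, time_yr, bkg_rate, resolution_kev):
    """Relative half-life sensitivity in the background-dominated regime:
    T_1/2 ~ eps * sqrt(M*t / (b * dE)). Units cancel in ratios."""
    exposure = mass_kg * time_yr
    return efficiency * math.sqrt(exposure / (bkg_rate * resolution_kev))

def mbb_reach(t_half):
    """Relative Majorana-mass sensitivity: m_bb ~ 1 / sqrt(T_1/2)."""
    return 1.0 / math.sqrt(t_half)

# Quadrupling the exposure at fixed background doubles the half-life
# reach, but improves the mass reach only by sqrt(2) -- the diminishing
# return that motivates lowering backgrounds rather than just scaling mass.
r1 = t_half_reach(0.3, 100, 5, 1e-3, 10)
r2 = t_half_reach(0.3, 400, 5, 1e-3, 10)
```

This fourth-root behavior of mββ with exposure is why the abstract emphasizes very low achievable backgrounds as the key advantage of Xe-136 at the ton scale.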
|
|