Wang, D. (2023). Finslerian Universe May Reconcile Tensions Between High and Low Redshift Probes. Int. J. Theor. Phys., 62(8), 184–11pp.
Abstract: To reconcile the current tensions between high and low redshift observations, we perform the first constraints on Finslerian cosmological models including effective dark matter and dark energy components. We find that all four Finslerian models can effectively alleviate the Hubble constant (H0) tension and the tension in the amplitude of root-mean-square density fluctuations (σ8) between the Planck measurements and local Universe observations at the 68% confidence level. Adding a massless sterile neutrino to the base two-parameter Finslerian model reduces the H0 tension from 3.4σ to 1.9σ, while adding a varying total mass of active neutrinos alleviates the σ8 tension better than the other three Finslerian models. Computing the Bayesian evidence with respect to the ΛCDM model, our analysis shows a weak preference for the base Finslerian model and moderate preferences for its three one-parameter extensions. Based on model-independent Gaussian Processes, we propose a new linear relation that describes the current redshift space distortion data very well. Using the most stringent constraints we can provide, we also obtain limits on the typical model parameters of the three one-parameter extended models.
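The "linear relation" fit to redshift space distortion data can be illustrated with a small least-squares sketch. The straight-line form fσ8(z) = a + b·z, the toy data points, and all variable names below are illustrative assumptions, not the paper's actual relation or measurements.

```python
import numpy as np

# Toy fsigma_8(z) values (invented, roughly RSD-like) and a generic
# linear fit fsigma_8(z) = a + b*z by ordinary least squares.
z = np.array([0.02, 0.10, 0.30, 0.50, 0.80, 1.20])
fs8 = np.array([0.428, 0.422, 0.410, 0.398, 0.380, 0.356])

b, a = np.polyfit(z, fs8, 1)          # np.polyfit returns (slope, intercept)
pred = a + b * z                       # values along the fitted line
rms = np.sqrt(np.mean((pred - fs8) ** 2))  # scatter about the line
```

For data this close to linear, the residual scatter `rms` is tiny and the slope `b` is negative, reflecting the falling growth amplitude toward higher redshift.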
Schaffter, T., Albiol, F., Caballero, L., et al. (2020). Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open, 3(3), e200265–15pp.
Abstract: Importance Mammography screening currently relies on subjective human interpretation. Advances in artificial intelligence (AI) could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measures Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2), and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using the area under the curve, and algorithm specificity was compared with radiologists' specificity, with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessments was developed and evaluated. Results Overall, 144,231 screening mammograms from 85,580 US women (952 cancer positive ≤ 12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166,578 examinations from 68,008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and a specificity of 66.2% (United States) and 81.2% (Sweden) at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining the top-performing algorithms with US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods to enhance mammography screening interpretation. Question How do deep learning algorithms perform compared with radiologists in screening mammography interpretation? Findings In this diagnostic accuracy study using 144,231 screening mammograms from 85,580 women from the United States and 166,578 screening mammograms from 68,008 women from Sweden, no single artificial intelligence algorithm outperformed US community radiologist benchmarks; including clinical data and prior mammograms did not improve artificial intelligence performance. However, combining the best-performing artificial intelligence algorithms with single-radiologist assessment increased specificity. Meaning Integrating artificial intelligence into mammography interpretation in single-radiologist settings could yield significant performance improvements, with the potential to reduce health care system expenditures and address the resource scarcity experienced in population-based screening programs. This diagnostic accuracy study evaluates whether artificial intelligence can overcome the limits of human mammography interpretation with a rigorous, unbiased evaluation of machine learning algorithms.
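The comparison at a fixed radiologist sensitivity (85.9% in the United States, 83.9% in Sweden) amounts to choosing the score threshold that recalls that fraction of cancers and reading off the specificity there. A minimal sketch of that evaluation, on toy scores rather than the study's data:

```python
import numpy as np

def specificity_at_sensitivity(y_true, scores, target_sensitivity):
    """Specificity at the most permissive threshold that still achieves
    at least the target sensitivity on the positive cases."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos_scores = np.sort(scores[y_true == 1])[::-1]   # positives, descending
    k = int(np.ceil(target_sensitivity * len(pos_scores)))
    threshold = pos_scores[k - 1]                     # score of the k-th positive
    neg_scores = scores[y_true == 0]
    return np.mean(neg_scores < threshold)            # negatives correctly rejected

# Toy cohort: 100 cancers, 900 normals, with overlapping score distributions.
rng = np.random.default_rng(0)
y = np.r_[np.ones(100), np.zeros(900)]
s = np.r_[rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 900)]
spec = specificity_at_sensitivity(y, s, 0.859)
```

Classifying every score at or above the threshold as positive guarantees the target sensitivity by construction; the quantity of interest is then how many negatives fall below it.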
KM3NeT Collaboration (Aiello, S., et al.), Calvo, D., Coleiro, A., Colomer, M., Gozzini, S. R., Hernandez-Rey, J. J., et al. (2019). KM3NeT front-end and readout electronics system: hardware, firmware, and software. J. Astron. Telesc. Instrum. Syst., 5(4), 046001–15pp.
Abstract: The KM3NeT research infrastructure being built at the bottom of the Mediterranean Sea will host water-Cherenkov telescopes for the detection of cosmic neutrinos. The neutrino telescopes will consist of large-volume three-dimensional grids of optical modules that detect the Cherenkov light from charged particles produced by neutrino-induced interactions. Each optical module houses 31 3-in. photomultiplier tubes, instrumentation for calibration of the photomultiplier signal and positioning of the optical module, and all associated electronics boards. By design, the total electrical power consumption of an optical module is capped at seven watts. We present an overview of the front-end and readout electronics system inside the optical module, which has been designed to maintain 1-ns synchronization between the clocks of all optical modules in the grid over a lifetime of at least 20 years. © 2019 Society of Photo-Optical Instrumentation Engineers (SPIE)
KM3NeT Collaboration (Aiello, S., et al.), Alves Garre, S., Calvo, D., Carretero, V., Colomer, M., Corredoira, I., et al. (2021). Architecture and performance of the KM3NeT front-end firmware. J. Astron. Telesc. Instrum. Syst., 7(1), 016001–24pp.
Abstract: The KM3NeT infrastructure consists of two deep-sea neutrino telescopes being deployed in the Mediterranean Sea. The telescopes will detect extraterrestrial and atmospheric neutrinos by means of the Cherenkov photons induced by the passage of relativistic charged particles through the seawater as a consequence of a neutrino interaction. The telescopes are configured as a three-dimensional grid of digital optical modules, each hosting 31 photomultipliers. The photomultiplier signals produced by the incident Cherenkov photons are converted into digital information consisting of the integrated pulse duration and the time at which it surpasses a chosen threshold. The digitization is performed by time-to-digital converters (TDCs) embedded in the field-programmable gate array of the central logic board. Subsequently, a state machine formats the acquired data for transmission to shore. We present the architecture and performance of the front-end firmware, which consists of the TDCs and the state machine.
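The time-over-threshold digitization described here can be illustrated with a toy digitiser: record when a sampled pulse first crosses a threshold and how long it stays above it. This is a simplified sketch under assumed units (1-ns samples, arbitrary amplitudes), not the KM3NeT firmware logic:

```python
def time_over_threshold(samples, threshold, dt_ns=1.0):
    """Return (crossing_time_ns, tot_ns) for the first pulse exceeding
    the threshold, or None if the waveform never crosses it."""
    start = None
    for i, v in enumerate(samples):
        if start is None and v > threshold:
            start = i                      # first sample above threshold
        elif start is not None and v <= threshold:
            return (start * dt_ns, (i - start) * dt_ns)
    if start is not None:                  # still above threshold at window end
        return (start * dt_ns, (len(samples) - start) * dt_ns)
    return None

pulse = [0, 0, 1, 4, 9, 12, 8, 3, 1, 0]    # arbitrary ADC-like samples
hit = time_over_threshold(pulse, threshold=2)
# hit -> (3.0, 5.0): crosses at sample 3, stays above for 5 samples
```

A hardware TDC measures the analog crossing times directly rather than sampling, but the (time, duration) pair it produces has the same shape as this sketch's output.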
Carrasco-Ribelles, L. A., Pardo-Mas, J. R., Tortajada, S., Saez, C., Valdivieso, B., & Garcia-Gomez, J. M. (2021). Predicting morbidity by local similarities in multi-scale patient trajectories. J. Biomed. Inform., 120, 103837–9pp.
Abstract: Patient Trajectories (PTs) are a way of representing the temporal evolution of patients. They can include information from different sources and be used in socio-medical or clinical domains. PTs have generally been used to generate and study the most common trajectories in, for instance, the development of a disease. Healthcare predictive models, on the other hand, generally rely on static snapshots of patient information. Only a few works on prediction in healthcare have been found that use PTs and therefore benefit from their temporal dimension, and all of them used PTs created from single-source information. The use of longitudinal multi-scale data to build PTs and obtain predictions about health conditions is therefore yet to be explored. Our hypothesis is that local similarities in small chunks of PTs can identify patients who are similar with respect to their future morbidities. The objectives of this work are (1) to develop a methodology that identifies local similarities between PTs before the occurrence of morbidities in order to predict these on new query individuals; and (2) to validate this methodology on risk prediction of cardiovascular disease (CVD) occurrence in patients with diabetes. We propose a novel formal definition of PTs based on sequences of longitudinal multi-scale data, together with a dynamic programming methodology that identifies local alignments on PTs for predicting future morbidities. Both the proposed PT definition and the alignment algorithm are generic and can be applied in any clinical domain. We validated this solution for predicting CVD in patients with diabetes and achieved a precision of 0.33, a recall of 0.72 and a specificity of 0.38. The proposed solution can therefore be of great utility for secondary screening in the diabetes use case.
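The local-alignment idea can be sketched with a classic Smith-Waterman-style dynamic program over two event sequences. The scoring parameters and the toy trajectories below are illustrative assumptions, not the paper's clinical codes or tuned values:

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b
    (Smith-Waterman recurrence, score only, no traceback)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores never drop below zero
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Two toy trajectories of coded clinical events (invented labels).
p1 = ["dx:T2D", "lab:HbA1c_high", "rx:metformin", "visit", "lab:LDL_high"]
p2 = ["visit", "dx:T2D", "lab:HbA1c_high", "rx:metformin", "rx:statin"]
score = local_alignment_score(p1, p2)   # shared dx/lab/rx run scores 3*2 = 6
```

The zero floor in the recurrence is what makes the alignment local: similarity is scored on the best-matching chunk of the two trajectories rather than forcing an end-to-end alignment, which matches the paper's emphasis on local similarities.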
Folgado, M. G., & Sanz, V. (2022). Exploring the political pulse of a country using data science tools. J. Comput. Soc. Sci., 5, 987–1000.
Abstract: In this paper we illustrate the use of Data Science techniques to analyse complex human communication. In particular, we consider tweets from leaders of political parties as a dynamical proxy to political programmes and ideas. We also study the temporal evolution of their contents as a reaction to specific events. We analyse levels of positive and negative sentiment in the tweets using new tools adapted to social media. We also train a Fully-Connected Neural Network (FCNN) to recognise the political affiliation of a tweet. The FCNN is able to predict the origin of the tweet with a precision in the range of 71-75%, and the political leaning (left or right) with a precision of around 90%. This study is meant to be viewed as an example of how to use Twitter data and different types of Data Science tools for a political analysis.
Kuo, J. L., Lattanzi, M., Cheung, K., & Valle, J. W. F. (2018). Decaying warm dark matter and structure formation. J. Cosmol. Astropart. Phys., 12(12), 026–24pp.
Abstract: We examine the cosmology of warm dark matter (WDM), both stable and decaying, from the point of view of structure formation. We compare the matter power spectrum associated with WDM masses of 1.5 keV and 0.158 keV with that expected in the stable cold dark matter (ΛCDM) paradigm, taken as our reference model. We scrutinize the effects associated with the warm nature of dark matter, as well as with the fact that it decays. The decaying warm dark matter (DWDM) scenario is well motivated, emerging in a broad class of particle physics theories in which neutrino masses arise from the spontaneous breaking of a continuous global lepton number symmetry. The majoron arises as a Nambu-Goldstone boson and picks up a mass from gravitational effects that explicitly violate global symmetries. The majoron necessarily decays to neutrinos, with an amplitude proportional to their tiny mass, which typically gives it cosmologically long lifetimes. Using N-body simulations we show that our DWDM picture leads to a viable alternative to the ΛCDM scenario, with predictions that can differ substantially on small scales.
Caputo, A., Regis, M., Taoso, M., & Witte, S. J. (2019). Detecting the stimulated decay of axions at radio frequencies. J. Cosmol. Astropart. Phys., 03(3), 027–22pp.
Abstract: Assuming axion-like particles account for the entirety of the dark matter in the Universe, we study the possibility of detecting their decay into photons at radio frequencies. We discuss different astrophysical targets, such as dwarf spheroidal galaxies, the Galactic Center and halo, and galaxy clusters. The presence of an ambient radiation field leads to a stimulated enhancement of the decay rate; depending on the environment and the mass of the axion, the effect of stimulated emission may amplify the photon flux by several orders of magnitude. For axion-photon couplings allowed by astrophysical and laboratory constraints (and possibly favored by stellar cooling), we find the signal to be within the reach of next-generation radio telescopes such as the Square Kilometre Array.
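The scale of the stimulated enhancement can be estimated from the ambient photon occupation number. The sketch below assumes the commonly quoted (1 + 2 f_γ) enhancement form with a blackbody-like radio background; the frequency and temperature are illustrative choices, not values from the paper:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K

def occupation_number(nu_hz, T_K):
    """Bose-Einstein photon occupation number at frequency nu, temperature T."""
    x = h * nu_hz / (k * T_K)
    return 1.0 / math.expm1(x)   # expm1 keeps precision for small x

# At radio frequencies h*nu << k*T, so f_gamma ~ kT/(h*nu) is large.
f_gamma = occupation_number(1.4e9, 10.0)   # ~1.4 GHz in a ~10 K radiation field
enhancement = 1.0 + 2.0 * f_gamma          # assumed stimulated-decay boost
```

In this Rayleigh-Jeans regime the occupation number is of order a hundred, which is how stimulated emission can lift the decay flux by orders of magnitude in photon-rich environments.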
Oldengott, I. M., Barenboim, G., Kahlen, S., Salvado, J., & Schwarz, D. J. (2019). How to relax the cosmological neutrino mass bound. J. Cosmol. Astropart. Phys., 04(4), 049–18pp.
Abstract: We study the impact of non-standard momentum distributions of cosmic neutrinos on the anisotropy spectrum of the cosmic microwave background and the matter power spectrum of the large-scale structure. We show that the neutrino distribution has almost no unique observable imprint, as it is almost entirely degenerate with the effective number of neutrino flavours, N_eff, and the neutrino mass, m_ν. Performing a Markov chain Monte Carlo analysis with current cosmological data, we demonstrate that the neutrino mass bound depends heavily on the assumed momentum distribution of relic neutrinos. The message of this work is simple and, to our knowledge, has not been pointed out clearly before: cosmology allows neutrinos to have larger masses if their average momentum is larger than that of a perfectly thermal distribution. Here we provide an example in which the mass limits are relaxed by a factor of two.
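The degeneracy the authors exploit can be made concrete with standard definitions (a sketch in conventional notation, not equations taken from the paper). In the relativistic regime a momentum distribution f(p) is probed only through the radiation density, i.e. through N_eff, while after the non-relativistic transition the neutrino energy density is set by the number density times the mass:

```latex
% relativistic era: only the third moment of f(p) matters
N_\mathrm{eff} \;\propto\; \int \mathrm{d}p \; p^{3} f(p) ,
% non-relativistic era: energy density is mass times number density
\rho_\nu \;\simeq\; m_\nu \, n_\nu \;\propto\; m_\nu \int \mathrm{d}p \; p^{2} f(p) .
```

Data mainly constrain these two integrals, so a distribution with a larger mean momentum, ⟨p⟩ ∝ ∫dp p³f / ∫dp p²f, mimics a thermal spectrum with the same N_eff but a smaller m_ν, which is how the mass bound relaxes.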
Amoroso, S., Caron, S., Jueid, A., Ruiz de Austri, R., & Skands, P. (2019). Estimating QCD uncertainties in Monte Carlo event generators for gamma-ray dark matter searches. J. Cosmol. Astropart. Phys., 05(5), 007–44pp.
Abstract: Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as W(*) → qq̄', photons are produced in a complex sequence of showering, hadronisation and hadron decays. In phenomenological studies their energy spectra are typically computed using Monte Carlo event generators. These results, however, have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the PYTHIA 8.2 Monte Carlo generator from a fit to LEP and SLD data at the Z peak. For the first time we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
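A conservative uncertainty of this kind is typically used as a per-bin envelope over spectra generated with varied tune parameters. A minimal sketch of such an envelope on toy spectra (the numbers and binning are invented for illustration, not the paper's tabulated results):

```python
import numpy as np

# Toy photon spectra dN/dx for a central tune and two parameter variations,
# binned in x = E_gamma / m_DM (all values invented for illustration).
x = np.logspace(-3, 0, 5)
central = np.array([120.0, 80.0, 30.0, 5.0, 0.1])
variations = np.array([
    [110.0, 85.0, 28.0, 4.5, 0.09],
    [132.0, 78.0, 33.0, 5.6, 0.12],
])

# Per-bin envelope: the band spans the extremes of central and variations.
lo = np.minimum(central, variations.min(axis=0))
hi = np.maximum(central, variations.max(axis=0))
rel_unc = (hi - lo) / central   # fractional width of the uncertainty band
```

Taking the envelope rather than, say, a quadrature sum is the conservative choice: every varied spectrum lies inside the quoted band by construction.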