Anderson, P. R., Siahmazgi, S. G., Clark, R. D., & Fabbri, A. (2020). Method to compute the stress-energy tensor for a quantized scalar field when a black hole forms from the collapse of a null shell. Phys. Rev. D, 102(12), 125035–26pp.
Abstract: A method is given to compute the stress-energy tensor for a massless minimally coupled scalar field in a spacetime where a black hole forms from the collapse of a spherically symmetric null shell in four dimensions. Part of the method involves matching the modes for the in vacuum state to a complete set of modes in Schwarzschild spacetime. The other part involves subtracting, from the unrenormalized expression for the stress-energy tensor when the field is in the in vacuum state, the corresponding expression when the field is in the Unruh state, and then adding the renormalized stress-energy tensor for the field in the Unruh state. The method is shown to work in the two-dimensional case, where the results are known.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., Castillo Gimenez, V., et al. (2020). Alignment of the ATLAS Inner Detector in Run 2. Eur. Phys. J. C, 80(12), 1194–41pp.
Abstract: The performance of the ATLAS Inner Detector alignment has been studied using pp collision data at √s = 13 TeV collected by the ATLAS experiment during Run 2 (2015-2018) of the Large Hadron Collider (LHC). The goal of the detector alignment is to determine the detector geometry as accurately as possible and correct for time-dependent movements. The Inner Detector alignment is based on the minimization of track-hit residuals in a sequence of hierarchical levels, from global mechanical assembly structures to local sensors. Subsequent levels have increasing numbers of degrees of freedom; in total there are almost 750,000. The alignment determines detector geometry on both short and long timescales, where short timescales describe movements within an LHC fill. The performance and possible track parameter biases originating from systematic detector deformations are evaluated. Momentum biases are studied using resonances decaying to muons or to electrons. The residual sagitta bias and momentum scale bias after alignment are reduced to less than ∼0.1 TeV⁻¹ and 0.9 × 10⁻³, respectively. Impact parameter biases are also evaluated using tracks within jets.
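The residual-minimization idea behind this alignment can be sketched in a deliberately simplified one-dimensional toy: if the track parameters were known exactly, the least-squares estimate of each sensor's misalignment would simply be its mean track-hit residual. This is only an illustration, not the ATLAS procedure (which fits almost 750,000 degrees of freedom hierarchically and determines track and alignment parameters simultaneously); all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_tracks = 8, 2000
z = np.linspace(0.0, 7.0, n_sensors)            # sensor planes along the beam axis
true_offsets = rng.normal(0.0, 0.1, n_sensors)  # unknown misalignments (toy units)

# Straight-line tracks x(z) = a + b*z with parameters assumed known (idealization).
a = rng.normal(0.0, 1.0, n_tracks)
b = rng.normal(0.0, 0.1, n_tracks)
prediction = a[:, None] + b[:, None] * z[None, :]
hits = prediction + true_offsets + rng.normal(0.0, 0.02, (n_tracks, n_sensors))

# Track-hit residuals; minimizing sum_i (r_ij - d_j)^2 over the offset d_j
# gives d_j = mean_i r_ij, i.e. the per-sensor mean residual.
residuals = hits - prediction
est_offsets = residuals.mean(axis=0)

print(np.abs(est_offsets - true_offsets).max())  # residual misalignment after "alignment"
```

In the real detector the track parameters are not known a priori, so track and alignment parameters must be fitted together (a global least-squares problem), and weakly constrained "weak modes" such as sagitta distortions survive the fit, which is why the abstract quotes residual sagitta and momentum-scale biases after alignment.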
|
Nada, A., & Ramos, A. (2021). An analysis of systematic effects in finite size scaling studies using the gradient flow. Eur. Phys. J. C, 81(1), 1–19pp.
Abstract: We propose a new strategy for the determination of the step scaling function σ(u) in finite size scaling studies using the gradient flow. In this approach the determination of σ(u) is broken into two pieces: a change of the flow time at fixed physical size, and a change of the size of the system at fixed flow time. Using both perturbative arguments and a set of simulations in the pure gauge theory, we show that this approach leads to better control over the continuum extrapolations. Following this new proposal we determine the running coupling at high energies in the pure gauge theory and re-examine the determination of the Λ-parameter, with special care on the perturbative truncation uncertainties.
|
AMON Team, HAWC and IceCube Collaborations (Ayala Solares, H. A., et al.), & Salesa Greus, F. (2021). Multimessenger Gamma-Ray and Neutrino Coincidence Alerts Using HAWC and IceCube Subthreshold Data. Astrophys. J., 906(1), 63–10pp.
Abstract: The High Altitude Water Cherenkov (HAWC) and IceCube observatories, through the Astrophysical Multimessenger Observatory Network (AMON) framework, have developed a multimessenger joint search for extragalactic astrophysical sources. This analysis looks for sources that emit both cosmic neutrinos and gamma rays that are produced in photohadronic or hadronic interactions. The AMON system is running continuously, receiving subthreshold data (i.e., data that are not suited on their own to do astrophysical searches) from HAWC and IceCube, and combining them in real time. Here we present the analysis algorithm, as well as results from archival data collected between 2015 June and 2018 August, with a total live time of 3.0 yr. During this period we found two coincident events that have a false-alarm rate (FAR) of <1 coincidence yr⁻¹, consistent with the background expectations. The real-time implementation of the analysis in the AMON system began on 2019 November 20 and issues alerts to the community through the Gamma-ray Coordinates Network with an FAR threshold of <4 coincidences yr⁻¹.
|
Coloma, P., Huber, P., & Schwetz, T. (2021). Statistical interpretation of sterile neutrino oscillation searches at reactors. Eur. Phys. J. C, 81(1), 2–13pp.
Abstract: A considerable experimental effort is currently under way to test the persistent hints for oscillations due to an eV-scale sterile neutrino in the data of various reactor neutrino experiments. The assessment of the statistical significance of these hints is usually based on Wilks' theorem, whereby the assumption is made that the log-likelihood is χ²-distributed. However, it is well known that the preconditions for the validity of Wilks' theorem are not fulfilled for neutrino oscillation experiments. In this work we derive a simple asymptotic form of the actual distribution of the log-likelihood based on reinterpreting the problem as fitting white Gaussian noise. From this formalism we show that, even in the absence of a sterile neutrino, the expectation value for the maximum likelihood estimate of the mixing angle remains non-zero with attendant large values of the log-likelihood. Our analytical results are then confirmed by numerical simulations of a toy reactor experiment. Finally, we apply this framework to the data of the Neutrino-4 experiment and show that the null hypothesis of no oscillation is rejected at the 2.6σ level, compared to 3.2σ obtained under the assumption that Wilks' theorem applies.
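The failure mode described here, that profiling a fit over many oscillation hypotheses inflates the best-fit test statistic well beyond the naive χ² expectation, can be demonstrated with a short toy Monte Carlo. This is a generic illustration of the effect, not the paper's Gaussian-noise formalism; all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_templates, n_toys = 64, 20, 500

# Unit-norm oscillation-like templates with different frequencies,
# standing in for different Delta m^2 hypotheses.
x = np.linspace(0.0, 1.0, n_bins)
templates = np.array([np.sin(2 * np.pi * f * x) for f in range(1, n_templates + 1)])
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

ts = np.empty(n_toys)
for i in range(n_toys):
    noise = rng.standard_normal(n_bins)  # null hypothesis: pure Gaussian noise
    # Best-fit amplitude of a unit-norm template is its projection onto the
    # data, and the resulting improvement in -2 log L is the squared projection.
    proj = templates @ noise
    ts[i] = np.max(proj ** 2)            # profile over the frequency hypotheses

print(ts.mean())
```

Wilks' theorem for one fitted amplitude would predict a mean test statistic of 1, but maximizing over many frequencies pushes the average far above that, which is the qualitative reason significances quoted under Wilks' theorem (such as the 3.2σ for Neutrino-4) come out too strong.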
|
Schaffter, T., et al., Albiol, F., & Caballero, L. (2020). Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open, 3(3), e200265–15pp.
Abstract: Importance Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measures Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve, and algorithm specificity was compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. Results Overall, 144,231 screening mammograms from 85,580 US women (952 cancer positive ≤ 12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166,578 examinations from 68,008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden).
Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods for enhancing mammography screening interpretation. Question How do deep learning algorithms perform compared with radiologists in screening mammography interpretation? Findings In this diagnostic accuracy study using 144,231 screening mammograms from 85,580 women from the United States and 166,578 screening mammograms from 68,008 women from Sweden, no single artificial intelligence algorithm outperformed US community radiologist benchmarks; including clinical data and prior mammograms did not improve artificial intelligence performance. However, combining best-performing artificial intelligence algorithms with single-radiologist assessment demonstrated increased specificity. Meaning Integrating artificial intelligence into mammography interpretation in single-radiologist settings could yield significant performance improvements, with the potential to reduce health care system expenditures and address resource scarcity experienced in population-based screening programs. This diagnostic accuracy study evaluates whether artificial intelligence can overcome human mammography interpretation limits with a rigorous, unbiased evaluation of machine learning algorithms.
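The qualitative effect behind the ensemble result, that fusing a continuous AI score with an independent binary reader decision raises the area under the curve, can be reproduced on synthetic data. This is a toy sketch, not the study's ensemble method (which aggregated multiple top-performing algorithms); the prevalence, effect sizes, and weights below are invented, with the reader's sensitivity loosely echoing the 85.9% quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def auc(score, y):
    # Rank-based (Mann-Whitney) estimate of the area under the ROC curve.
    r = np.argsort(np.argsort(score)) + 1       # 1-based ranks of each score
    n_pos, n_neg = y.sum(), (1 - y).sum()
    return (r[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

n = 20_000
y = (rng.random(n) < 0.05).astype(int)          # toy 5% cancer prevalence
algo = 1.2 * y + rng.standard_normal(n)         # continuous AI score, correlated with truth

# Binary radiologist recall: ~86% sensitivity, ~90% specificity (toy numbers).
recall = np.where(y == 1, rng.random(n) < 0.859, rng.random(n) < 0.095).astype(int)

combined = algo + 1.5 * recall                  # simple score-level fusion

print(auc(algo, y), auc(combined, y))
```

Because the reader's errors are (here, by construction) independent of the algorithm's, the fused score separates the classes better than either input alone, mirroring the study's finding that the ensemble beat both the algorithms and single-reader assessment individually.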
|
LHCb Collaboration (Aaij, R., et al.), Henry, L., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., et al. (2021). Observation of a new Ξb⁰ state. Phys. Rev. D, 103(1), 012004–17pp.
Abstract: Using a proton-proton collision data sample collected by the LHCb experiment, corresponding to an integrated luminosity of 8.5 fb⁻¹, the observation of a new excited Ξb⁰ resonance decaying to the Ξb⁻π⁺ final state is presented. The state, referred to as Ξb(6227)⁰, has a measured mass and natural width of m(Ξb(6227)⁰) = 6227.1 (+1.4/−1.5) ± 0.5 MeV and Γ(Ξb(6227)⁰) = 18.6 (+5.0/−4.1) ± 1.4 MeV, where the uncertainties are statistical and systematic. The production rate of the Ξb(6227)⁰ state relative to that of the Ξb⁻ baryon in the kinematic region 2 < η < 5 and pT < 30 GeV is measured to be f(Ξb(6227)⁰)/f(Ξb⁻) × B(Ξb(6227)⁰ → Ξb⁻π⁺) = 0.045 ± 0.008 ± 0.004, where B(Ξb(6227)⁰ → Ξb⁻π⁺) is the branching fraction of the decay, and f(Ξb(6227)⁰) and f(Ξb⁻) represent fragmentation fractions. Improved measurements of the mass and natural width of the previously observed Ξb(6227)⁻ state, along with the mass of the Ξb⁻ baryon, are also reported. Both measurements are significantly more precise than, and consistent with, previously reported values.
|
Linster, M., Lopez-Pavon, J., & Ziegler, R. (2021). Neutrino observables from a U(2) flavor symmetry. Phys. Rev. D, 103(1), 015020–9pp.
Abstract: We study the predictions for CP phases and the absolute neutrino mass scale for broad classes of models with a U(2)-like flavor symmetry. For this purpose we consider the same special textures in the neutrino and charged lepton mass matrices that are successful in the quark sector. While in the neutrino sector the U(2) structure enforces two texture zeros, the contribution of the charged lepton sector to the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix can be parametrized by two rotation angles. Restricting to the cases where at least one of these angles is small, we obtain three representative scenarios. In all scenarios we obtain a narrow prediction for the sum of neutrino masses in the range of 60–75 meV, possibly in the reach of upcoming galaxy survey experiments. All scenarios can be excluded if near-future experimental data provide evidence for either neutrinoless double-beta decay or inverted neutrino mass ordering.
|
ATLAS Collaboration (Aad, G., et al.), Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Cardillo, F., Castillo, F. L., et al. (2021). Search for phenomena beyond the Standard Model in events with large b-jet multiplicity using the ATLAS detector at the LHC. Eur. Phys. J. C, 81(1), 11–29pp.
Abstract: A search is presented for new phenomena in events characterised by high jet multiplicity, no leptons (electrons or muons), and four or more jets originating from the fragmentation of b-quarks (b-jets). The search uses 139 fb⁻¹ of √s = 13 TeV proton-proton collision data collected by the ATLAS experiment at the Large Hadron Collider during Run 2. The dominant Standard Model background originates from multijet production and is estimated using a data-driven technique based on an extrapolation from events with low b-jet multiplicity to the high b-jet multiplicities used in the search. No significant excess over the Standard Model expectation is observed, and 95% confidence-level limits that constrain simplified models of R-parity-violating supersymmetry are determined. The exclusion limits reach 950 GeV in top-squark mass in the models considered.
|
ATLAS Collaboration (Aaboud, M., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., et al. (2021). Measurement of the jet mass in high transverse momentum Z(→ bb̄)γ production at √s = 13 TeV using the ATLAS detector. Phys. Lett. B, 812, 135991–23pp.
Abstract: The integrated fiducial cross-section and unfolded differential jet mass spectrum of high transverse momentum Z → bb̄ decays are measured in Zγ events in proton-proton collisions at √s = 13 TeV. The data analysed were collected between 2015 and 2016 with the ATLAS detector at the Large Hadron Collider and correspond to an integrated luminosity of 36.1 fb⁻¹. Photons are required to have a transverse momentum pT > 175 GeV. The Z → bb̄ decay is reconstructed using a jet with pT > 200 GeV, found with the anti-kt R = 1.0 jet algorithm, and groomed to remove soft and wide-angle radiation and to mitigate contributions from the underlying event and additional proton-proton collisions. Two different but related measurements are performed using two jet grooming definitions for reconstructing the Z → bb̄ decay: trimming and soft drop. These algorithms differ in their experimental and phenomenological implications regarding jet mass reconstruction and theoretical precision. To identify Z bosons, b-tagged R = 0.2 track-jets matched to the groomed large-R calorimeter jet are used as a proxy for the b-quarks. The signal yield is determined from fits of the data-driven background templates to the different jet mass distributions for the two grooming methods. Integrated fiducial cross-sections and unfolded jet mass spectra for each grooming method are compared with leading-order theoretical predictions. The results are found to be in good agreement with Standard Model expectations within the current statistical and systematic uncertainties.
|