Babeluk, M. et al, & Marinas, C. (2023). CMOS MAPS upgrade for the Belle II Vertex Detector. Nucl. Instrum. Methods Phys. Res. A, 1048, 168015–5pp.
Abstract: The success of the Belle II experiment in Japan relies on the very high instantaneous luminosity, close to 6 x 10(35) cm(-2) s(-1), expected from the SuperKEKB collider. The corresponding beam conditions at such luminosity levels generate large rates of background particles and create stringent constraints on the vertex detector, adding to the physics requirements. Current prospects for the occupancy rates in the present vertex detector (VXD) at full luminosity fall close to the acceptable limits and bear large uncertainties. In this context, the Belle II collaboration is considering the possibility of installing an upgraded VXD system around 2027 to provide a sufficient safety margin with respect to the expected background rate and possibly enhance tracking and vertexing performance. The VTX collaboration has started the design of a fully pixelated VXD, called VTX, based on fast and highly granular Depleted Monolithic Active Pixel Sensors (DMAPS) integrated on light support structures. The two main technical features of the VTX proposal are the usage of a single sensor type over all the layers of the system and an overall material budget below 2% of radiation length, compared to the current VXD, which has two different sensor technologies and about 3% of radiation length. A dedicated sensor (OBELIX), tailored to the specific needs of Belle II, is under development, evolving from the existing TJ-Monopix2 sensor. The time-stamping precision below 100 ns will allow all VTX layers to take part in the track finding strategy, contrary to the current situation. The first two detection layers are designed according to a self-supported all-silicon ladder concept, where 4 contiguous sensors are diced out of a wafer, thinned and interconnected with post-processed redistribution layers. The outermost detection layers follow a more conventional approach with a cold plate and carbon fibre support structure, and light flex cables interconnecting the sensors.
This document will review the context, technical details and development status of the proposed Belle II VTX.
|
Baxter, D., Collar, J. I., Coloma, P., Dahl, C. E., Esteban, I., Ferrario, P., et al. (2020). Coherent elastic neutrino-nucleus scattering at the European Spallation Source. J. High Energy Phys., 02(2), 123–38pp.
Abstract: The European Spallation Source (ESS), presently well on its way to completion, will soon provide the most intense neutron beams for multi-disciplinary science. Fortuitously, it will also generate the largest pulsed neutrino flux suitable for the detection of Coherent Elastic Neutrino-Nucleus Scattering (CEvNS), a process recently measured for the first time at ORNL's Spallation Neutron Source. We describe innovative detector technologies maximally able to profit from the order-of-magnitude increase in neutrino flux provided by the ESS, along with their sensitivity to a rich particle physics phenomenology accessible through high-statistics, precision CEvNS measurements.
|
SCiMMA and SNEWS Collaborations (Baxter, A. L. et al), & Colomer, M. (2022). Collaborative experience between scientific software projects using Agile Scrum development. Softw.-Pract. Exp., 52, 2077–2096.
Abstract: Developing sustainable software for the scientific community requires expertise in software engineering and domain science. This can be challenging due to the unique needs of scientific software, the insufficient resources for software engineering practices in the scientific community, and the complexity of developing for evolving scientific contexts. While open-source software can partially address these concerns, it can introduce complicating dependencies and delay development. These issues can be reduced if scientists and software developers collaborate. We present a case study wherein scientists from the SuperNova Early Warning System collaborated with software developers from the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration addressed the difficulties of open-source software development, but presented additional risks to each team. For the scientists, there was a concern of relying on external systems and lacking control in the development process. For the developers, there was a risk in supporting a user-group while maintaining core development. These issues were mitigated by creating a second Agile Scrum framework in parallel with the developers' ongoing Agile Scrum process. This Agile collaboration promoted communication, ensured that the scientists had an active role in development, and allowed the developers to evaluate and implement the scientists' software requirements. The collaboration provided benefits for each group: the scientists actuated their development by using an existing platform, and the developers utilized the scientists' use-case to improve their systems. This case study suggests that scientists and software developers can avoid scientific computing issues by collaborating and that Agile Scrum methods can address emergent concerns.
|
Pakarinen, J. et al, & Algora, A. (2017). Collectivity in Pb-196, Pb-198 isotopes probed in Coulomb-excitation experiments at REX-ISOLDE. J. Phys. G, 44(6), 064009–10pp.
Abstract: The neutron-deficient Pb-196, Pb-198 isotopes have been studied in Coulomb-excitation experiments employing the Miniball gamma-ray spectrometer and radioactive ion beams from the REX-ISOLDE post-accelerator at CERN. The reduced transition probabilities of the first excited 2(+) states in Pb-196 and Pb-198 nuclei have been measured for the first time. Values of B(E2) = 18.2(-4.1)(+4.8) W.u. and B(E2) = 13.1(-3.5)(+4.9) W.u. were obtained, respectively. The experiment sheds light on the development of collectivity when moving from the regime governed by the generalised seniority scheme to a region where intruding structures, associated with different deformed shapes, start to come down in energy and approach the spherical ground state.
|
Gimenez-Alventosa, V., Antunes, P. C. G., Vijande, J., Ballester, F., Perez-Calatayud, J., & Andreo, P. (2017). Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy. Phys. Med. Biol., 62(1), 146–164.
Abstract: The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissues are practically identical, so that the absorbed dose in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), which in all cases include a realistic modelling of low-energy brachytherapy sources in order to benchmark the formalism proposed. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).
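A schematic form of the relation discussed in this abstract (notation here is illustrative, not necessarily the paper's own): under CPE the absorbed dose equals the collision kerma, which weights the photon energy fluence Psi_E by the mass energy-absorption coefficient, and the proposed correction replaces the usual equal-fluence assumption by the water-to-tissue energy-fluence ratio.

```latex
% Collision kerma in medium m from the photon energy fluence \Psi_E^{m}:
K_{\mathrm{col}}^{m} = \int \Psi_E^{m}(E)\,
    \left(\frac{\mu_{\mathrm{en}}(E)}{\rho}\right)_{m}\,\mathrm{d}E .
% The conventional conversion assumes \Psi_E^{w} \simeq \Psi_E^{t}, so that
\frac{D_w}{D_t} \approx
    \frac{\left(\overline{\mu_{\mathrm{en}}/\rho}\right)_{w}}
         {\left(\overline{\mu_{\mathrm{en}}/\rho}\right)_{t}} ;
% the correction proposed in the paper additionally multiplies this ratio
% by the water-to-tissue photon energy-fluence ratio \Psi^{w}/\Psi^{t}.
```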
|
Lloret, E., Picouet, P. A., Trbojevich, R., & Fernandez, A. (2016). Colour stability of cooked ham packed under modified atmospheres in polyamide nanocomposite blends. LWT-Food Sci. Technol., 66, 582–589.
Abstract: Two novel blends combining a low-density polyethylene (LDPE) layer with either a neat polyamide (PA) or a polyamide nanocomposite (PAN) layer were fabricated, and their technological potential was evaluated during refrigerated storage of cooked ham under modified atmosphere packaging (MAP). Nanoclays were homogeneously distributed and nearly exfoliated, and they significantly lowered the oxygen transmission rate (OTR) of the PAN films. Due to the lower OTR, the headspace oxygen level in PAN pouches did not rise above 0.26%, whereas it approached 2% in PA pouches at day 20. The residual oxygen levels were key to colour change during MAP storage of cooked ham. Cooked ham redness and reflectivity were stable for 27 days in PAN pouches, while a strong colour deterioration took place after day 7 in PA pouches. Other parameters such as moisture content and water activity remained unaltered, and pH development was related to microbial growth and independent of the packaging polymer. The evolution of cooked ham colour in PAN was comparable to that in a high-barrier commercial polymer and was acceptable for commercial sale for 27 days, showing excellent prospects for polyamide nanocomposites in the storage of cooked ham.
|
de Gouvea, A., De Romeri, V., & Ternes, C. A. (2021). Combined analysis of neutrino decoherence at reactor experiments. J. High Energy Phys., 06(6), 042–12pp.
Abstract: Reactor experiments are well suited to probe the possible loss of coherence of neutrino oscillations due to wave-packet separation. We combine data from the short-baseline experiments Daya Bay and the Reactor Experiment for Neutrino Oscillation (RENO) and from the long-baseline reactor experiment KamLAND to obtain the best current limit on the reactor antineutrino wave-packet width, sigma > 2.1 x 10(-4) nm at 90% CL. We also find that the determination of standard oscillation parameters is robust, i.e., it is mostly insensitive to the presence of hypothetical decoherence effects once one combines the results of the different reactor neutrino experiments.
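For orientation, wave-packet decoherence of this kind is commonly parametrized by Gaussian damping of the oscillatory terms of the survival probability (a textbook parametrization; the exact conventions of the cited analysis may differ), with sigma_x playing the role of the wave-packet width constrained above:

```latex
% Oscillatory term damped by wave-packet separation over baseline L:
P_{\bar\nu_e \to \bar\nu_e} \;\supset\;
  \cos\!\left(\frac{\Delta m^{2}_{jk} L}{2E}\right)
  \exp\!\left[-\left(\frac{L}{L^{\mathrm{coh}}_{jk}}\right)^{2}\right],
\qquad
L^{\mathrm{coh}}_{jk} = \frac{4\sqrt{2}\,E^{2}}{|\Delta m^{2}_{jk}|}\,\sigma_x .
```

A larger sigma_x pushes the coherence length beyond the baseline, which is why short- and long-baseline data together bound it from below.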
|
KM3NeT Collaboration (Aiello, S. et al), Alves Garre, S., Calvo, D., Carretero, V., Colomer, M., Garcia Soto, A., et al. (2022). Combined sensitivity of JUNO and KM3NeT/ORCA to the neutrino mass ordering. J. High Energy Phys., 03(3), 055–31pp.
Abstract: This article presents the potential of a combined analysis of the JUNO and KM3NeT/ORCA experiments to determine the neutrino mass ordering. This combination is particularly interesting as it significantly boosts the potential of either detector, beyond simply adding their neutrino mass ordering sensitivities, by removing a degeneracy in the determination of Delta M-31(2) between the two experiments when assuming the wrong ordering. The study is based on the latest projected performances for JUNO, and on simulation tools using a full Monte Carlo approach to the KM3NeT/ORCA response with a careful assessment of its energy systematics. From this analysis, a 5 sigma determination of the neutrino mass ordering is expected after 6 years of joint data taking for any value of the oscillation parameters. This sensitivity would be achieved after only 2 years of joint data taking assuming the current global best-fit values for those parameters for normal ordering.
|
van Beekveld, M., Caron, S., Hendriks, L., Jackson, P., Leinweber, A., Otten, S., et al. (2021). Combining outlier analysis algorithms to identify new physics at the LHC. J. High Energy Phys., 09(9), 024–33pp.
Abstract: The lack of evidence for new physics at the Large Hadron Collider so far has prompted the development of model-independent search techniques. In this study, we compare the anomaly scores of a variety of anomaly detection techniques: an isolation forest, a Gaussian mixture model, a static autoencoder, and a beta-variational autoencoder (VAE), where we define the reconstruction loss of the latter as a weighted combination of regression and classification terms. We apply these algorithms to the 4-vectors of simulated LHC data, but also investigate the performance when the non-VAE algorithms are applied to the latent space variables created by the VAE. In addition, we assess the performance when the anomaly scores of these algorithms are combined in various ways. Using supersymmetric benchmark points, we find that the logical AND combination of the anomaly scores yielded from algorithms trained in the latent space of the VAE is the most effective discriminator of all methods tested.
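The logical-AND combination of anomaly scores described in this abstract can be illustrated with a toy NumPy sketch (the two scores below are illustrative stand-ins, not the paper's isolation forest, Gaussian mixture, or VAE losses): each algorithm flags its top-scoring events, and only events flagged by every algorithm survive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 500 "background" events plus 5 injected "signal" outliers.
background = rng.normal(0.0, 1.0, size=(500, 2))
outliers = rng.normal(5.0, 0.5, size=(5, 2))
events = np.vstack([background, outliers])

def gaussian_nll_score(x, train):
    """Anomaly score 1: negative log-likelihood under a diagonal Gaussian fit."""
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    z = (x - mu) / sigma
    return 0.5 * np.sum(z ** 2, axis=1)

def knn_distance_score(x, train, k=10):
    """Anomaly score 2: distance to the k-th nearest training event."""
    d = np.linalg.norm(x[:, None, :] - train[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k]

def and_combine(scores, quantile=0.98):
    """Flag an event only if EVERY algorithm puts it above its own threshold."""
    flags = [s > np.quantile(s, quantile) for s in scores]
    return np.logical_and.reduce(flags)

s1 = gaussian_nll_score(events, background)
s2 = knn_distance_score(events, background)
flagged = and_combine([s1, s2])
print(flagged[-5:])  # the five injected outliers survive the AND
```

The AND combination trades efficiency for purity: an event must look anomalous to every algorithm at once, which suppresses the uncorrelated false positives of any single score.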
|
Autieri, A., Cieri, L., Ferrera, G., & Sborlini, G. F. R. (2023). Combining QED and QCD transverse-momentum resummation for W and Z boson production at hadron colliders. J. High Energy Phys., 07(7), 104–30pp.
Abstract: In this article, we consider the transverse momentum (qT) distribution of W and Z bosons produced in hadronic collisions. We combine the qT resummation for QED and QCD radiation including the QED soft emissions from the W boson in the final state. In particular, we perform the resummation of enhanced logarithmic contributions due to soft and collinear emissions at next-to-leading accuracy in QED, leading-order accuracy for mixed QED-QCD and next-to-next-to-leading accuracy in QCD. In the small-qT region we consistently include in our results the next-to-next-to-leading order (i.e. two loops) QCD corrections and the next-to-leading order (i.e. one loop) electroweak corrections. The matching with the fixed-order calculation at large qT has been performed at next-to-leading order in QCD (i.e. at O(alpha(2)(S))) and at leading order in QED. We show numerical results for W and Z production at the Tevatron and the LHC. Finally, we consider the effect of combined QCD and QED resummation for the ratio of W and Z qT distributions, and we study the impact of the QED corrections providing an estimate of the corresponding perturbative uncertainties.
|