BRIKEN Collaboration (Tolosa-Delgado, A. et al.), Agramunt, J., Tain, J. L., Algora, A., Domingo-Pardo, C., Morales, A. I., et al. (2019). Commissioning of the BRIKEN detector for the measurement of very exotic beta-delayed neutron emitters. Nucl. Instrum. Methods Phys. Res. A, 925, 133–147.
Abstract: A new detection system has been installed at the RIKEN Nishina Center (Japan) to investigate decay properties of very neutron-rich nuclei. The setup consists of three main parts: a moderated neutron counter, a detection system sensitive to the implantation and decay of radioactive ions, and gamma-ray detectors. We describe here the setup, the commissioning experiment and some selected results demonstrating its performance for the measurement of half-lives and beta-delayed neutron emission probabilities. The methodology followed in the analysis of the data is described in detail. Particular emphasis is placed on the correction of the accidental neutron background.
|
Candela-Juan, C., Vijande, J., Garcia-Martinez, T., Niatsetski, Y., Nauta, G., Schuurman, J., et al. (2015). Comparison and uncertainty evaluation of different calibration protocols and ionization chambers for low-energy surface brachytherapy dosimetry. Med. Phys., 42(8), 4954–4964.
Abstract: Purpose: A surface electronic brachytherapy (EBT) device is in fact an x-ray source collimated with specific applicators. Low-energy (<100 kVp) x-ray beam dosimetry faces several challenges that need to be addressed. A number of calibration protocols have been published for x-ray beam dosimetry. The media in which measurements are performed are the fundamental difference between them. The aim of this study was to evaluate the surface dose rate of a low-energy x-ray source with small field applicators using different calibration standards and different small-volume ionization chambers, comparing the values and uncertainties of each methodology. Methods: The surface dose rate of the EBT unit Esteya (Elekta Brachytherapy, The Netherlands), a 69.5 kVp x-ray source with applicators of 10, 15, 20, 25, and 30 mm diameter, was evaluated using the AAPM TG-61 (based on air kerma) and International Atomic Energy Agency (IAEA) TRS-398 (based on absorbed dose to water) dosimetry protocols for low-energy photon beams. A plane parallel T34013 ionization chamber (PTW Freiburg, Germany) calibrated in terms of both absorbed dose to water and air kerma was used to compare the two dosimetry protocols. Another PTW chamber of the same model was used to evaluate the reproducibility between these chambers. Measurements were also performed with two different Exradin A20 (Standard Imaging, Inc., Middleton, WI) chambers calibrated in terms of air kerma. Results: Differences between surface dose rates measured in air and in water using the T34013 chamber range from 1.6% to 3.3%. No field size dependence has been observed. Differences are below 3.7% when measurements with the A20 and the T34013 chambers calibrated in air are compared. Estimated uncertainty (with coverage factor k = 1) for the T34013 chamber calibrated in water is 2.2%-2.4%, whereas it increases to 2.5% and 2.7% for the A20 and T34013 chambers calibrated in air, respectively. 
The output factors measured with the PTW chambers differ by less than 1.1% for any applicator size from those measured with the A20 chamber. Conclusions: Measurements using the two dosimetry protocols are consistent once the overall uncertainties are considered. There is also consistency between measurements performed with both chambers calibrated in air. Both the T34013 and A20 chambers have negligible stem effect. Any x-ray surface brachytherapy system, including Esteya, can be characterized using either of these calibration protocols and ionization chambers. With fewer correction factors, lower uncertainty, and measurements performed under conditions closer to clinical ones, the TRS-398 protocol seems to be the preferred option.
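The two calibration chains compared in this entry reduce to the standard dose equations of each protocol; the following is a sketch of their generic forms (symbols follow the usual TG-61/TRS-398 conventions, not values specific to the Esteya measurements):

```latex
% TG-61 in-air method (air-kerma based): surface dose to water
%   M            : corrected chamber reading
%   N_K          : air-kerma calibration coefficient
%   B_w          : backscatter factor
%   P_{stem,air} : chamber stem correction
%   [(mu_en/rho)]: water-to-air mass energy-absorption coefficient ratio
D_{w,z=0} = M \, N_K \, B_w \, P_{\mathrm{stem,air}}
            \left[ \left( \bar{\mu}_{en}/\rho \right)^{w}_{\mathrm{air}} \right]_{\mathrm{air}}

% TRS-398 (absorbed-dose-to-water based)
%   N_{D,w,Q_0} : absorbed-dose-to-water calibration coefficient
%   k_{Q,Q_0}   : beam-quality correction factor
D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0}
```

The shorter TRS-398 chain, with fewer multiplicative correction factors, is what underlies the lower combined uncertainty reported for the water-calibrated T34013 chamber.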
|
Cabello, J., & Rafecas, M. (2012). Comparison of basis functions for 3D PET reconstruction using a Monte Carlo system matrix. Phys. Med. Biol., 57(7), 1759–1777.
Abstract: In emission tomography, iterative statistical methods are accepted as the reconstruction algorithms that achieve the best image quality. The accuracy of these methods relies partly on the quality of the system response matrix (SRM) that characterizes the scanner. The more physical phenomena included in the SRM, the higher the SRM quality, and therefore higher image quality is obtained from the reconstruction process. High-resolution small animal scanners contain as many as 10³–10⁴ small crystal pairs, while the field of view (FOV) is divided into hundreds of thousands of small voxels. These two characteristics have a significant impact on the number of elements to be calculated in the SRM. Monte Carlo (MC) methods have gained popularity as a way of calculating the SRM, due to the increased accuracy achievable, at the cost of introducing some statistical noise and long simulation times. In the work presented here the SRM is calculated using MC methods exploiting the cylindrical symmetries of the scanner, significantly reducing the simulation time necessary to calculate a high statistical quality SRM and the storage space necessary. The use of cylindrical symmetries makes polar voxels a convenient basis function. Alternatively, spherically symmetric basis functions result in improved noise properties compared to cubic and polar basis functions. The quality of reconstructed images using polar voxels, spherically symmetric basis functions on a polar grid, cubic voxels and post-reconstruction filtered polar and cubic voxels is compared from a noise and spatial resolution perspective. This study demonstrates that polar voxels perform as well as cubic voxels, reducing the simulation time necessary to calculate the SRM and the disk space necessary to store it.
Results showed that spherically symmetric functions outperform polar and cubic basis functions in terms of noise properties, at the cost of slightly degraded spatial resolution, larger SRM file size and longer reconstruction times. However, we demonstrate that post-reconstruction smoothing, usually applied in emission imaging to reduce the level of noise, can produce a spatial resolution degradation of ~50%, while spherically symmetric basis functions produce a degradation of only ~6%, compared to polar and cubic voxels, at the same noise level. Therefore, the image quality trade-off obtained with blobs (the spherically symmetric basis functions) is higher than that obtained with cubic or polar voxels.
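The iterative statistical reconstructions compared in this study are driven by the SRM through a multiplicative update. A minimal MLEM sketch (a generic stand-in, not the authors' code, with a tiny dense matrix in place of the Monte Carlo SRM) shows how the matrix enters:

```python
import numpy as np

# Toy system response matrix (SRM): rows = lines of response (LORs),
# columns = image basis functions (voxels/blobs). In practice this is a
# large sparse matrix estimated by Monte Carlo; here it is a dense stand-in.
srm = np.array([
    [0.8, 0.2],
    [0.3, 0.7],
    [0.5, 0.5],
])

def mlem(srm, counts, n_iter=200):
    """Basic MLEM: x <- x * A^T(y / Ax) / (A^T 1)."""
    x = np.ones(srm.shape[1])        # uniform initial image
    sens = srm.sum(axis=0)           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = srm @ x               # forward projection Ax
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (srm.T @ ratio) / sens  # multiplicative update
    return x

true_img = np.array([4.0, 1.0])
counts = srm @ true_img              # noiseless "measured" coincidences
est = mlem(srm, counts)
```

Each MLEM iteration preserves the total measured counts, and on noiseless consistent data the estimate converges to the true image; the basis-function choice in the paper changes what the columns of the SRM represent, not this update rule.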
|
ATLAS Collaboration (Aad, G. et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., Cabrera Urban, S., et al. (2023). Comparison of inclusive and photon-tagged jet suppression in 5.02 TeV Pb+Pb collisions with ATLAS. Phys. Lett. B, 846, 138154–27pp.
Abstract: Parton energy loss in the quark-gluon plasma (QGP) is studied with a measurement of photon-tagged jet production in 1.7 nb⁻¹ of Pb+Pb data and 260 pb⁻¹ of pp data, both at √s_NN = 5.02 TeV, with the ATLAS detector. The process pp → γ + jet + X and its analogue in Pb+Pb collisions are measured in events containing an isolated photon with transverse momentum (p_T) above 50 GeV and reported as a function of jet p_T. This selection results in a sample of jets with a steeply falling p_T distribution that are mostly initiated by the showering of quarks. The pp and Pb+Pb measurements are used to report the nuclear modification factor, R_AA, and the fractional energy loss, S_loss, for photon-tagged jets. In addition, the results are compared with the analogous ones for inclusive jets, which have a significantly smaller quark-initiated fraction. The R_AA and S_loss values are found to differ significantly between photon-tagged jets and inclusive jets, demonstrating that energy loss in the QGP is sensitive to the colour charge of the initiating parton. The results are also compared with a variety of theoretical models of colour-charge-dependent energy loss.
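For reference, the nuclear modification factor quoted above has the standard form: the per-event jet yield in Pb+Pb divided by the pp cross section scaled by the mean nuclear thickness function ⟨T_AA⟩,

```latex
R_{AA} =
\frac{\left. \frac{1}{N_{\mathrm{evt}}}\,\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}} \right|_{\mathrm{Pb+Pb}}}
     {\langle T_{AA} \rangle \left. \frac{\mathrm{d}\sigma_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}} \right|_{pp}}
```

S_loss is commonly defined from the horizontal shift between the two spectra at a fixed quantile, S_loss = 1 − p_T^{Pb+Pb}/p_T^{pp}; the paper should be consulted for the exact convention used in this measurement.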
|
Buchalla, G., Cata, O., Celis, A., Knecht, M., & Krause, C. (2018). Complete one-loop renormalization of the Higgs-electroweak chiral Lagrangian. Nucl. Phys. B, 928, 93–106.
Abstract: Employing the background-field method and the super-heat-kernel expansion, we compute the complete one-loop renormalization of the electroweak chiral Lagrangian with a light Higgs boson. Earlier results from purely scalar fluctuations are confirmed as a special case. We also recover the one-loop renormalization of the conventional Standard Model in the appropriate limit.
|
Campanario, F., Czyz, H., Gluza, J., Gunia, M., Riemann, T., Rodrigo, G., et al. (2014). Complete QED NLO contributions to the reaction e⁺e⁻ → μ⁺μ⁻γ and their implementation in the event generator PHOKHARA. J. High Energy Phys., 02(2), 114–27pp.
Abstract: The KLOE and BaBar measurements of the invariant pion pair production cross section show a discrepancy of 2% to 5%. These measurements are based on approximate NLO μ⁺μ⁻γ cross section predictions of the Monte Carlo event generator PHOKHARA7.0. In this article, the complete NLO radiative corrections to μ⁺μ⁻γ production are calculated and implemented in the Monte Carlo event generator PHOKHARA9.0. Numerical reliability is guaranteed by two independent approaches to the real and the virtual corrections. The novel features include the contribution of pentagon diagrams in the virtual corrections, which form a gauge-invariant set when combined with their box diagram partners. They may contribute to certain distributions at the percent level. The real emission was also complemented with two-photon final-state emission contributions not included in the generator PHOKHARA7.0. We demonstrate that the numerical influence reaches, for realistic charge-averaged experimental setups, not more than 0.1% at KLOE and 0.3% at BaBar energies. As a result, we exclude the approximations in earlier versions of PHOKHARA as the origin of the observed experimental discrepancy.
|
Falkowski, A., Gonzalez-Alonso, M., & Naviliat-Cuncic, O. (2021). Comprehensive analysis of beta decays within and beyond the Standard Model. J. High Energy Phys., 04(4), 126–36pp.
Abstract: Precision measurements in allowed nuclear beta decays and neutron decay are reviewed and analyzed both within the Standard Model and looking for new physics. The analysis incorporates the most recent experimental and theoretical developments. The results are interpreted in terms of Wilson coefficients describing the effective interactions between leptons and nucleons (or quarks) that are responsible for beta decay. New global fits are performed incorporating a comprehensive list of precision measurements in neutron decay, superallowed 0⁺ → 0⁺ transitions, and other nuclear decays that include, for the first time, data from mirror beta transitions. The results confirm the V−A character of the interaction and translate into updated values for V_ud and g_A at the 10⁻⁴ level. We also place new stringent limits on exotic couplings involving left-handed and right-handed neutrinos, which benefit significantly from the inclusion of mirror decays in the analysis.
|
Stadler, J., Boehm, C., & Mena, O. (2019). Comprehensive study of neutrino-dark matter mixed damping. J. Cosmol. Astropart. Phys., 08(8), 014–23pp.
Abstract: Mixed damping is a physical effect that occurs when a heavy species is coupled to a relativistic fluid which is itself free streaming. As a cross-case between collisional damping and free streaming, it is crucial in the context of neutrino-dark matter interactions. In this work, we establish the parameter space relevant for mixed damping, and we derive an analytical approximation for the evolution of dark matter perturbations in the mixed damping regime to illustrate the physical processes responsible for the suppression of cosmological perturbations. Although extended Boltzmann codes implementing neutrino-dark matter scattering terms automatically include mixed damping, this effect has not been systematically studied. In order to obtain reliable numerical results, it is mandatory to reconsider several aspects of neutrino-dark matter interactions, such as the initial conditions, the ultra-relativistic fluid approximation and high-order multipole moments in the neutrino distribution. Such a precise treatment ensures the correct assessment of the relevance of mixed damping in neutrino-dark matter interactions.
|
AGATA and PRISMA Collaborations (Gadea, A. et al.). (2011). Conceptual design and infrastructure for the installation of the first AGATA sub-array at LNL. Nucl. Instrum. Methods Phys. Res. A, 654(1), 88–96.
Abstract: The first implementation of the AGATA spectrometer, consisting of five triple germanium detector clusters, has been installed at Laboratori Nazionali di Legnaro, INFN. This setup has two major goals: the first is to validate the gamma-tracking concept, and the second is to perform an experimental physics program using the stable beams delivered by the Tandem-PIAVE-ALPI accelerator complex. A large variety of physics topics will be addressed during this campaign, aiming to investigate both neutron-rich and proton-rich nuclei. The setup has been designed to be coupled with the large-acceptance magnetic spectrometer PRISMA. Therefore, the in-beam prompt gamma rays detected with AGATA will be measured in coincidence with the products of multinucleon-transfer and deep-inelastic reactions measured by PRISMA. Moreover, the setup is versatile enough to host ancillary detectors, including the heavy-ion detector DANTE, the gamma-ray detector array HELENA, the Cologne plunger for lifetime measurements and the Si-pad telescope TRACE. In this paper, the design, characteristics and performance figures of the setup are described.
|
AGATA Collaboration, Farnea, E., Recchia, F., Bazzacco, D., Kroll, T., Podolyak, Z., et al. (2010). Conceptual design and Monte Carlo simulations of the AGATA array. Nucl. Instrum. Methods Phys. Res. A, 621(1-3), 331–343.
Abstract: The aim of the Advanced GAmma Tracking Array (AGATA) project is the construction of an array based on the novel concepts of pulse shape analysis and gamma-ray tracking with highly segmented Ge semiconductor detectors. The conceptual design of AGATA and its performance evaluation under different experimental conditions have required the development of a suitable Monte Carlo code. In this article, the code is described and simulation results relevant for AGATA are presented.
|