CALICE Collaboration (Lai, S. et al.), & Irles, A. (2024). Software compensation for highly granular calorimeters using machine learning. J. Instrum., 19(4), P04037 (28 pp.).
Abstract: A neural network for software compensation was developed for the highly granular CALICE Analogue Hadronic Calorimeter (AHCAL). The network uses spatial, temporal, and energy information from the AHCAL, which is expected to improve sensitivity to shower development and to the neutron fraction of the hadron shower. The method produced a depth-dependent energy weighting and a time-dependent threshold that enhances energy deposits consistent with the timescale of evaporation neutrons. It was also observed to learn an energy weighting indicative of longitudinal leakage correction. Finally, the method produced a linear detector response and outperformed a published control method in resolution at every particle energy studied.
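The paper's compensation is a neural network; the principle it learns can be illustrated more simply. A minimal numpy sketch of software compensation (all names, bins, and weights here are illustrative, not the CALICE values): hits are weighted by local energy density, so dense, electromagnetic-like deposits are down-weighted relative to hadronic ones.

```python
import numpy as np

def compensated_energy(hit_energies, hit_densities, weights, density_bins):
    """Toy software compensation: reconstruct the shower energy as a
    weighted sum of hit energies, with one weight per local
    energy-density bin (dense, EM-like hits get smaller weights)."""
    bin_idx = np.digitize(hit_densities, density_bins)  # 0..len(density_bins)
    return float(np.sum(weights[bin_idx] * hit_energies))

# Illustrative binning: three density thresholds, four weights
weights = np.array([1.2, 1.0, 0.8, 0.6])     # one weight per density bin
density_bins = np.array([0.5, 2.0, 10.0])    # MIP-like, mixed, EM-like
e = compensated_energy(
    hit_energies=np.array([1.0, 1.0, 4.0]),
    hit_densities=np.array([0.3, 1.0, 20.0]),
    weights=weights, density_bins=density_bins)
```

The neural network generalises this by letting the weighting depend on depth and hit time as well, rather than on a fixed density binning.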
LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of B_s^0 mesons at LHCb. J. Instrum., 11, P05010 (23 pp.).
Abstract: A new algorithm for the determination of the initial flavour of B_s^0 mesons is presented. The algorithm is based on two neural networks and exploits the b hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the B_s^0 meson. The second network combines the kaon charges to assign the B_s^0 flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb^-1 collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the B_s^0–B̄_s^0 flavour oscillations in B_s^0 -> D_s^- π^+ decays, and by analysing flavour-specific B_{s2}^*(5840)^0 -> B^+ K^- decays. The tagging power measured in B_s^0 -> D_s^- π^+ decays is found to be (1.80 +/- 0.19 (stat) +/- 0.18 (syst))%, which is an improvement of about 50% compared to a similar algorithm previously used in the LHCb experiment.
ATLAS Collaboration (Abat, E. et al.), Bernabeu Verdu, J., Castillo Gimenez, V., Costa, M. J., Escobar, C., Ferrer, A., et al. (2011). A layer correlation technique for pion energy calibration at the 2004 ATLAS Combined Beam Test. J. Instrum., 6, P06001 (35 pp.).
Abstract: A new method for calibrating the hadron response of a segmented calorimeter is developed and successfully applied to beam test data. It is based on a principal component analysis of energy deposits in the calorimeter layers, exploiting longitudinal shower development information to improve the measured energy resolution. Corrections for invisible hadronic energy and energy lost in dead material in front of and between the calorimeters of the ATLAS experiment were calculated from Geant4 Monte Carlo simulations and used to reconstruct the energy of pions impinging on the calorimeters during the 2004 Barrel Combined Beam Test at the CERN H8 area. For pion beams with energies between 20 GeV and 180 GeV, the particle energy is reconstructed within 3% and the energy resolution is improved by between 11% and 25% compared to the resolution at the electromagnetic scale.
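The core ingredient, a principal component analysis of per-layer energy deposits, can be sketched generically in a few lines of numpy (this is the textbook PCA step, not the ATLAS implementation; the toy data below are invented):

```python
import numpy as np

def layer_principal_components(layer_energies):
    """PCA of per-layer energy deposits: diagonalise the covariance of
    the layer-energy vectors over many events. The leading components
    encode longitudinal shower-development fluctuations, against which
    calibration corrections can then be parametrised."""
    X = np.asarray(layer_energies, dtype=float)
    Xc = X - X.mean(axis=0)                  # centre each layer
    cov = Xc.T @ Xc / (len(X) - 1)
    eigval, eigvec = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    return eigval[order], eigvec[:, order]   # components as columns

# Toy events in 4 "layers": energy sharing fluctuates front vs back,
# so one principal component should dominate.
rng = np.random.default_rng(0)
shared = rng.uniform(0.0, 1.0, size=200)
X = np.column_stack([shared, shared, 1 - shared, 1 - shared])
X += rng.normal(scale=0.01, size=X.shape)
vals, vecs = layer_principal_components(X)
```

In the paper the hadron-response corrections are then derived as functions of the leading component amplitudes, rather than of the raw layer energies.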
ANTARES Collaboration (Albert, A. et al.), Barrios-Marti, J., Hernandez-Rey, J. J., Illuminati, G., Lotze, M., Tönnis, C., et al. (2020). Model-independent search for neutrino sources with the ANTARES neutrino telescope. Astropart. Phys., 114, 35–47.
Abstract: A novel method to analyse the spatial distribution of neutrino candidates recorded with the ANTARES neutrino telescope is introduced, searching for an excess of neutrinos in a region of arbitrary size and shape from any direction in the sky. Techniques originating from the domains of machine learning, pattern recognition and image processing are used to purify the sample of neutrino candidates and to analyse the resulting skymap. In contrast to a dedicated search for a specific neutrino emission model, this approach is sensitive to a wide range of possible morphologies of potential sources of high-energy neutrino emission. The application of these methods to ANTARES data yields a large-scale excess with a post-trial significance of 2.5 sigma. Applied to public data from IceCube in its IC40 configuration, an excess consistent with the results from ANTARES is observed with a post-trial significance of 2.1 sigma.
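The simplest version of such a skymap excess search can be sketched as a sliding-window scan over a binned map, quoting the largest deviation of observed counts from the expected background. This is only a cartoon of the idea (the ANTARES analysis uses far more elaborate image-processing and pattern-recognition techniques, and a proper post-trial significance from pseudo-experiments); all numbers below are invented:

```python
import numpy as np

def max_window_excess(counts, background, window=3):
    """Slide a square window over a binned skymap and return the
    largest Gaussian-equivalent excess (obs - exp) / sqrt(exp)."""
    k = window // 2
    n = counts.shape[0]
    best = -np.inf
    for i in range(k, n - k):
        for j in range(k, n - k):
            obs = counts[i-k:i+k+1, j-k:j+k+1].sum()
            exp = background[i-k:i+k+1, j-k:j+k+1].sum()
            best = max(best, (obs - exp) / np.sqrt(exp))
    return best

rng = np.random.default_rng(1)
bkg = np.full((20, 20), 2.0)            # flat expected background per bin
sky = rng.poisson(bkg).astype(float)
sky[10, 10] += 25                       # injected point-source excess
z = max_window_excess(sky, bkg)
```

Fixing a window shape in advance would reintroduce a model assumption, which is exactly what the paper's morphology-agnostic approach is designed to avoid.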
Bouhova-Thacker, E., Kostyukhin, V., Koffas, T., Liebig, W., Limper, M., Piacquadio, G. N., et al. (2010). Expected Performance of Vertex Reconstruction in the ATLAS Experiment at the LHC. IEEE Trans. Nucl. Sci., 57(2), 760–767.
Abstract: In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10^34 cm^-2 s^-1), efficient reconstruction of vertices is crucial for many physics analyses. This paper describes the expected performance of vertex reconstruction in the ATLAS experiment. The algorithms for the reconstruction of primary and secondary vertices, as well as for finding photon conversions and for vertex reconstruction in jets, are described. The implementation of the vertex algorithms, which follows a modular design based on object-oriented C++, is presented. A user-friendly interface allows event reconstruction and physics analyses to compare different vertex reconstruction strategies and optimize their choice. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples, and results are presented.
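At its core, vertex fitting finds the point best compatible with a set of reconstructed tracks. A textbook least-squares sketch, treating tracks as straight lines (far simpler than the adaptive and Kalman-filter fitters used in ATLAS; the example tracks are invented):

```python
import numpy as np

def fit_vertex(points, directions):
    """Least-squares vertex: the 3D point minimising the summed squared
    distance to straight tracks given as (point, direction) pairs.
    Solves sum_i (I - d_i d_i^T) v = sum_i (I - d_i d_i^T) p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two tracks crossing at (1, 2, 3)
v = fit_vertex(
    points=[np.array([1.0, 2.0, 0.0]), np.array([0.0, 2.0, 3.0])],
    directions=[np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])],
)
```

Real fitters additionally weight each track by its covariance, iterate to handle curvature, and down-weight outlier tracks, which is where the modular design described in the paper comes in.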