|
ATLAS Collaboration (Aad, G. et al.), Alvarez Piqueras, D., Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fernandez Martinez, P., et al. (2015). Modelling Z → ττ processes in ATLAS with τ-embedded Z → μμ data. J. Instrum., 10, P09018–41pp.
Abstract: This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z → ττ decays. In Z → μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z → ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons as well as the detector response to the τ decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z → ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples. In this paper, the relevant concepts are discussed based on the implementation used in the ATLAS Standard Model H → ττ analysis of the full dataset recorded during 2011 and 2012.
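The kinematic substitution at the core of the method can be illustrated with a short sketch (not the ATLAS implementation, which operates on reconstructed tracks and calorimeter cells): each selected muon is replaced by a τ lepton carrying the same momentum vector, with the energy recomputed for the τ mass before a simulated τ decay is overlaid on the rest of the data event.

```python
import math

M_TAU = 1.77686  # GeV, tau lepton mass (PDG)

def embed_tau(muon_p3):
    """Toy version of the kinematic replacement in tau-embedding:
    the tau inherits the muon's momentum vector, and only the energy
    is recomputed for the tau mass. A simulated tau decay would then
    be generated from this four-vector and overlaid on the data event."""
    px, py, pz = muon_p3
    p2 = px * px + py * py + pz * pz
    e_tau = math.sqrt(p2 + M_TAU ** 2)
    return (e_tau, px, py, pz)

# Example: one muon from a Z -> mumu candidate (momentum components in GeV)
print(embed_tau((20.0, 15.0, 37.5)))
```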
|
|
|
ATLAS Collaboration (Aad, G. et al.), Amos, K. R., Aparisi Pozo, J. A., Bailey, A. J., Bouchhar, N., Cabrera Urban, S., et al. (2023). Tools for estimating fake/non-prompt lepton backgrounds with the ATLAS detector at the LHC. J. Instrum., 18(11), T11004–61pp.
Abstract: Measurements and searches performed with the ATLAS detector at the CERN LHC often involve signatures with one or more prompt leptons. Such analyses are subject to 'fake/non-prompt' lepton backgrounds, where a hadron, a lepton from a hadron decay, or an electron from a photon conversion satisfies the prompt-lepton selection criteria. These backgrounds often arise within a hadronic jet because of particle decays in the showering process, particle misidentification or particle interactions with the detector material. As it is challenging to model these processes with high accuracy in simulation, their estimation typically relies on data-driven methods. Three methods for carrying out this estimation are described, along with their implementation in ATLAS and their performance.
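The abstract does not name the three methods; one widely used data-driven technique for this kind of estimate is the matrix method, sketched below for a single-lepton selection with assumed real- and fake-lepton efficiencies r and f (in practice these efficiencies would be measured in dedicated control regions).

```python
def fake_yield_matrix_method(n_loose, n_tight, r, f):
    """Single-lepton matrix method (illustrative sketch).

    n_loose : events passing the loose lepton selection
    n_tight : subset also passing the tight (analysis) selection
    r       : probability for a real prompt lepton to pass tight, given loose
    f       : probability for a fake/non-prompt lepton to pass tight, given loose

    Solves  n_tight = r * N_real + f * N_fake
            n_loose =     N_real +     N_fake
    and returns the fake/non-prompt contribution to the tight selection.
    """
    n_fake_loose = (r * n_loose - n_tight) / (r - f)
    return f * n_fake_loose

# Toy numbers: 10000 loose events, 7600 tight, r = 0.90, f = 0.20  ->  400 fakes
print(fake_yield_matrix_method(10_000, 7_600, r=0.90, f=0.20))
```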
|
|
|
ATLAS Collaboration (Aad, G. et al.), Cabrera Urban, S., Castillo Gimenez, V., Costa, M. J., Fassi, F., Ferrer, A., et al. (2013). Characterisation and mitigation of beam-induced backgrounds observed in the ATLAS detector during the 2011 proton-proton run. J. Instrum., 8, P07004–72pp.
Abstract: This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events in data. Trigger-rate-based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches are used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which are then exploited to develop dedicated background-tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally, an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event-cleaning techniques for some new-physics searches.
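As a toy illustration of the kind of event cleaning such taggers enable (the variables and thresholds below are hypothetical placeholders, not the ATLAS tool's interface): beam-induced-background jets tend to be out of time with the collision and to have little associated tracking activity, so a simple veto can be built from a jet's calorimeter time and its charged-particle fraction.

```python
def is_bib_like(jet_time_ns, charged_fraction,
                max_abs_time_ns=5.0, min_charged_fraction=0.05):
    """Toy non-collision-background veto for a leading jet.

    jet_time_ns      : energy-weighted calorimeter time of the jet
    charged_fraction : fraction of jet pT carried by matched tracks
    Thresholds are placeholders, not ATLAS working points.
    """
    out_of_time = abs(jet_time_ns) > max_abs_time_ns
    trackless = charged_fraction < min_charged_fraction
    return out_of_time or trackless

events = [(-11.2, 0.01), (0.4, 0.62), (6.8, 0.03)]
cleaned = [e for e in events if not is_bib_like(*e)]
print(cleaned)  # only the in-time jet with normal track activity survives
```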
|
|
|
LHCb Collaboration (Aaij, R. et al.), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Curvature-bias corrections using a pseudomass method. J. Instrum., 19(3), P03010–22pp.
Abstract: Momentum measurements for very high momentum charged particles, such as muons from electroweak vector boson decays, are particularly susceptible to charge-dependent curvature biases that arise from misalignments of tracking detectors. Low-momentum charged particles used in alignment procedures have limited sensitivity to coherent displacements of such detectors, and therefore are unable to fully constrain these misalignments to the precision necessary for studies of electroweak physics. Additional approaches are therefore required to understand and correct for these effects. In this paper the curvature biases present at the LHCb detector are studied using the pseudomass method in proton-proton collision data recorded at centre-of-mass energy √s = 13 TeV during 2016, 2017 and 2018. The biases are determined using Z → μ⁺μ⁻ decays in intervals defined by the data-taking period, magnet polarity and muon direction. Correcting for these biases, which are typically at the 10⁻⁴ GeV⁻¹ level, improves the Z → μ⁺μ⁻ mass resolution by roughly 18% and eliminates several pathological trends in the kinematic dependence of the mean dimuon invariant mass.
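The pseudomass itself is not defined in the abstract; a commonly used definition replaces the momentum magnitude of one muon by an estimate built from its direction and the other muon's transverse momentum (assuming negligible dimuon pT), so that each pseudomass depends on the measured curvature of only one muon:

    M^{\pm} \;=\; \sqrt{\, 2\, p^{\pm}\, p_{\mathrm{T}}^{\pm}\, \frac{p^{\mp}}{p_{\mathrm{T}}^{\mp}}\, \left(1 - \cos\theta\right) }

where θ is the opening angle between the two muons. A charge-dependent curvature bias shifts M⁺ and M⁻ in opposite directions around the Z resonance, which is what a correction procedure of this kind can exploit.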
|
|
|
LHCb Collaboration (Aaij, R. et al.), Jaimes Elles, S. J., Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Rebollo De Miguel, M., et al. (2024). Momentum scale calibration of the LHCb spectrometer. J. Instrum., 19(2), P02008–21pp.
Abstract: Accurate knowledge of the momentum scale of the detector is crucial for the accurate determination of particle masses. The procedure used to calibrate the momentum scale of the LHCb spectrometer is described and illustrated using the performance obtained with an integrated luminosity of 1.6 fb⁻¹ collected during 2016 in pp running. The procedure uses large samples of J/ψ → μ⁺μ⁻ and B⁺ → J/ψ K⁺ decays and leads to a relative accuracy of 3 × 10⁻⁴ on the momentum scale.
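A minimal sketch of the underlying idea (not the LHCb procedure, which fits the full mass shapes and accounts for final-state radiation and detector effects): a global scale factor α applied to all momenta is iterated until the fitted J/ψ → μ⁺μ⁻ mass peak matches its reference value.

```python
# Minimal sketch of a momentum-scale calibration loop.
# fit_jpsi_mass is a stand-in for a fit of the dimuon mass peak after all
# momenta are scaled by `alpha`; here it is modelled as a toy detector with
# a hypothetical 5 x 10^-4 initial scale bias.
M_JPSI_REF = 3096.90  # MeV, reference J/psi mass

def fit_jpsi_mass(alpha, true_bias=5e-4):
    return M_JPSI_REF * (1.0 + true_bias) * alpha

alpha = 1.0
for iteration in range(5):
    m_fit = fit_jpsi_mass(alpha)
    alpha *= M_JPSI_REF / m_fit  # pull the fitted peak onto the reference value
    print(iteration, alpha, fit_jpsi_mass(alpha))
```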
|
|
|
LHCb Collaboration (Aaij, R. et al.), Jashal, B. K., Martinez-Vidal, F., Oyanguren, A., Remon Alepuz, C., & Ruiz Vidal, J. (2022). Identification of charm jets at LHCb. J. Instrum., 17(2), P02028–23pp.
Abstract: The identification of charm jets is achieved at LHCb for data collected in 2015-2018 using a method based on the properties of displaced vertices reconstructed and matched with jets. The performance of this method is determined using a dijet calibration dataset recorded by the LHCb detector and selected such that the jets are unbiased in quantities used in the tagging algorithm. The charm-tagging efficiency is reported as a function of the transverse momentum of the jet. The measured efficiencies are compared to those obtained from simulation and found to be in good agreement.
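A schematic of the vertex-to-jet association step (the cone size and inputs are illustrative, not the LHCb working point): a reconstructed displaced vertex is attached to a jet if its flight direction lies within a ΔR cone around the jet axis, and the jet is then passed to the charm-tagging classifier.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two directions in (eta, phi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def match_sv_to_jet(sv_dir, jet_dirs, max_dr=0.5):
    """Return the index of the closest jet within max_dr, or None.
    sv_dir and jet_dirs are (eta, phi) pairs; 0.5 is a placeholder cone size."""
    best = min(enumerate(jet_dirs),
               key=lambda item: delta_r(*sv_dir, *item[1]),
               default=(None, None))
    idx, jet = best
    if jet is not None and delta_r(*sv_dir, *jet) < max_dr:
        return idx
    return None

print(match_sv_to_jet((2.8, 1.1), [(2.9, 1.0), (3.5, -2.0)]))  # -> 0
```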
|
|
|
LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2015). B flavour tagging using charm decays at the LHCb experiment. J. Instrum., 10, P10005–16pp.
Abstract: An algorithm is described for tagging the flavour content at production of neutral B mesons in the LHCb experiment. The algorithm exploits the correlation of the flavour of a B meson with the charge of a reconstructed secondary charm hadron from the decay of the other b hadron produced in the proton-proton collision. Charm hadron candidates are identified in a number of fully or partially reconstructed Cabibbo-favoured decay modes. The algorithm is calibrated on the self-tagged decay modes B⁺ → J/ψ K⁺ and B⁰ → J/ψ K*⁰ using 3.0 fb⁻¹ of data collected by the LHCb experiment at pp centre-of-mass energies of 7 TeV and 8 TeV. Its tagging power on these samples of B → J/ψ X decays is (0.30 ± 0.01 ± 0.01)%.
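The figure quoted at the end is the tagging power; for a tagging efficiency ε_tag and mistag probability ω it is conventionally defined as

    P_{\mathrm{tag}} \;=\; \varepsilon_{\mathrm{tag}} \, (1 - 2\omega)^{2},

i.e. the effective fraction of events that contribute statistically as if they were perfectly tagged.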
|
|
|
LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2015). Identification of beauty and charm quark jets at LHCb. J. Instrum., 10, P06013–29pp.
Abstract: Identification of jets originating from beauty and charm quarks is important for measuring Standard Model processes and for searching for new physics. The performance of algorithms developed to select b- and c-quark jets is measured using data recorded by LHCb from proton-proton collisions at √s = 7 TeV in 2011 and at √s = 8 TeV in 2012. The efficiency for identifying a b (c) jet is about 65% (25%) with a probability for misidentifying a light-parton jet of 0.3%, for jets with transverse momentum pT > 20 GeV and pseudorapidity 2.2 < η < 4.2. The dependence of the performance on the pT and η of the jet is also measured.
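A minimal sketch of how a tagging efficiency versus jet pT can be reported (binning, counts and the simple binomial uncertainty are illustrative; the LHCb measurement uses data-driven calibration samples):

```python
import math

def binned_efficiency(pt_bins, n_tagged, n_total):
    """Per-bin tagging efficiency with a simple binomial uncertainty.
    pt_bins  : list of (low, high) jet-pT bin edges in GeV
    n_tagged : tagged jets per bin
    n_total  : all selected jets per bin
    """
    rows = []
    for (lo, hi), k, n in zip(pt_bins, n_tagged, n_total):
        eff = k / n
        err = math.sqrt(eff * (1.0 - eff) / n)
        rows.append(((lo, hi), eff, err))
    return rows

bins = [(20, 30), (30, 50), (50, 100)]
for (lo, hi), eff, err in binned_efficiency(bins, [610, 720, 330], [1000, 1100, 480]):
    print(f"{lo:>3}-{hi:<3} GeV:  eff = {eff:.3f} +/- {err:.3f}")
```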
|
|
|
LHCb Collaboration (Aaij, R. et al.), Martinez-Vidal, F., Oyanguren, A., Ruiz Valls, P., & Sanchez Mayordomo, C. (2016). A new algorithm for identifying the flavour of Bₛ⁰ mesons at LHCb. J. Instrum., 11, P05010–23pp.
Abstract: A new algorithm for the determination of the initial flavour of Bₛ⁰ mesons is presented. The algorithm is based on two neural networks and exploits the b hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the Bₛ⁰ meson. The second network combines the kaon charges to assign the Bₛ⁰ flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb⁻¹ collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the Bₛ⁰–B̄ₛ⁰ flavour oscillations in Bₛ⁰ → Dₛ⁻π⁺ decays, and by analysing flavour-specific Bₛ₂*(5840)⁰ → B⁺K⁻ decays. The tagging power measured in Bₛ⁰ → Dₛ⁻π⁺ decays is found to be (1.80 ± 0.19 (stat) ± 0.18 (syst))%, which is an improvement of about 50% compared to a similar algorithm previously used in the LHCb experiment.
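A sketch of the last two steps the abstract describes, namely combining selected kaon charges into a tag decision and forming a per-event tagging power; the simple weighted charge sum and the mistag probabilities η below are illustrative stand-ins, not the neural-network outputs themselves.

```python
def combine_kaon_tags(kaons):
    """Combine same-side kaon candidates into a Bs0 flavour tag.
    kaons : list of (charge, weight) pairs, the weight being e.g. a per-track
            selection score in [0, 1] (a stand-in for the first NN output).
    Returns (tag, score): tag = +1/-1/0, score = |weighted charge sum| in [0, 1]."""
    if not kaons:
        return 0, 0.0
    num = sum(q * w for q, w in kaons)
    den = sum(w for _, w in kaons)
    score = abs(num) / den if den > 0 else 0.0
    tag = (num > 0) - (num < 0)
    return tag, score

def tagging_power(etas):
    """Per-event tagging power: sum of (1 - 2*eta)^2 over tagged events,
    divided by all candidates; eta is the calibrated mistag probability
    and None marks an untagged candidate."""
    n_total = len(etas)
    return sum((1.0 - 2.0 * eta) ** 2 for eta in etas if eta is not None) / n_total

print(combine_kaon_tags([(+1, 0.8), (-1, 0.3), (+1, 0.5)]))
print(tagging_power([0.35, 0.42, None, 0.48, None]))
```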
|
|
|
NEXT Collaboration (Renner, J. et al.), Benlloch-Rodriguez, J., Botas, A., Ferrario, P., Gomez-Cadenas, J. J., Alvarez, V., et al. (2017). Background rejection in NEXT using deep neural networks. J. Instrum., 12, T01004–21pp.
Abstract: We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high-pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study outperformed previous methods based on the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
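A minimal sketch of the kind of classifier involved (the architecture, input size and framework are assumptions made for illustration; the paper's networks are trained on voxelised track images from the TPC): a small convolutional network producing a signal-vs-background score for a 2D projection of an event.

```python
import torch
import torch.nn as nn

class TrackClassifier(nn.Module):
    """Toy CNN for signal/background classification of a single-channel
    2D event image (e.g. a projection of the reconstructed track)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one logit: signal vs background
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TrackClassifier()
images = torch.randn(8, 1, 64, 64)            # batch of 64x64 toy event images
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = signal-like, 0 = background-like
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward()
print(float(loss))
```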
|
|