Escudero, M., Hooper, D., & Witte, S. J. (2017). Updated collider and direct detection constraints on Dark Matter models for the Galactic Center gamma-ray excess. J. Cosmol. Astropart. Phys., 02(2), 038–21pp.
Abstract: Utilizing an exhaustive set of simplified models, we revisit dark matter scenarios potentially capable of generating the observed Galactic Center gamma-ray excess, updating constraints from the LUX and PandaX-II experiments, as well as from the LHC and other colliders. We identify a variety of pseudoscalar mediated models that remain consistent with all constraints. In contrast, dark matter candidates which annihilate through a spin-1 mediator are ruled out by direct detection constraints unless the mass of the mediator is near an annihilation resonance, or the mediator has a purely vector coupling to the dark matter and a purely axial coupling to Standard Model fermions. All scenarios in which the dark matter annihilates through t-channel processes are now ruled out by a combination of the constraints from LUX/PandaX-II and the LHC.
|
Esteve, R., Toledo, J., Monrabal, F., Lorca, D., Serra, L., Mari, A., et al. (2012). The trigger system in the NEXT-DEMO detector. J. Instrum., 7, C12001–9pp.
Abstract: NEXT-DEMO is a prototype of NEXT (Neutrino Experiment with Xenon TPC), an experiment to search for neutrinoless double beta decay using a 100 kg radiopure high-pressure gaseous xenon TPC, enriched to 90% in the 136Xe isotope, with electroluminescence readout. The detector is based on a PMT plane for energy measurements and a SiPM tracking plane for topological event filtering. The experiment will be located in the Canfranc Underground Laboratory in Spain. The front-end electronics, trigger and data-acquisition (DAQ) systems have been built. The DAQ is an FPGA-based implementation of the Scalable Readout System (RD51 collaboration). Our approach to triggering is a distributed, reconfigurable system within the DAQ itself. Moreover, the trigger allows on-line triggering based on the detection of primary or secondary scintillation light, or a combination of both, arriving at the PMT plane.
|
Etxebeste, A., Barrio, J., Bernabeu, J., Lacasta, C., Llosa, G., Muñoz, E., et al. (2019). Study of sensitivity and resolution for full ring PET prototypes based on continuous crystals and analytical modeling of the light distribution. Phys. Med. Biol., 64(3), 035015–17pp.
Abstract: Sensitivity and spatial resolution are the main parameters to maximize in the performance of a PET scanner. For this purpose, detectors consisting of continuous crystals optically coupled to segmented photodetectors have been employed. With the use of continuous crystals, the sensitivity is increased with respect to pixelated crystals. In addition, spatial resolution is no longer limited by the crystal size. The main drawback is the difficulty in determining the interaction position. In this work, we present the characterization of the performance of a full ring based on cuboid continuous crystals coupled to SiPMs. To this end, we have employed the simulations developed in a previous work for our experimental detector head. Sensitivity can be further enhanced by using tapered crystals, which increase the solid angle coverage by reducing the wedge-shaped gaps between contiguous detectors. The performance of the scanners based on both crystal geometries was characterized following the NEMA NU 4-2008 standardized protocol in order to compare them. An average sensitivity gain of 13.63% over the entire axial field of view has been obtained with the tapered geometry, while similar spatial resolution has been demonstrated with both scanners. The activity at which the NECR and true-count peaks occur is smaller, and the peak value greater, for tapered crystals than for cuboid crystals. Moreover, a higher degree of homogeneity was obtained in the sensitivity map due to the tighter packing of the crystals, which reduces the gaps and results in a better recovery of homogeneous regions than for the cuboid configuration. Some of the results obtained, such as spatial resolution, depend on the interaction position estimation and may vary if another method is employed.
|
Etxebeste, A., Barrio, J., Muñoz, E., Oliver, J. F., Solaz, C., & Llosa, G. (2016). 3D position determination in monolithic crystals coupled to SiPMs for PET. Phys. Med. Biol., 61(10), 3914–3934.
Abstract: The interest in using continuous monolithic crystals in positron emission tomography (PET) has grown in recent years. Coupled to silicon photomultipliers (SiPMs), the detector can combine high sensitivity and high resolution, the two main factors to be maximized in a positron emission tomograph. In this work, the position determination capability of a detector consisting of a 12 x 12 x 10 mm^3 LYSO crystal coupled to an 8 x 8-pixel array of SiPMs is evaluated. The 3D interaction position of gamma-rays is estimated using an analytical model of the light distribution including reflections on the facets of the crystal. Monte Carlo simulations have been performed to evaluate different crystal reflectors and geometries. The method has been characterized and applied to different cases. The intrinsic resolution obtained with the position estimation method used in this work, applied to experimental data, achieves sub-millimetre values. The average resolution over the detector surface is approximately 0.9 mm FWHM for the 5 mm thick crystal and approximately 1.2 mm FWHM for the 10 mm thick crystal. Depth-of-interaction resolution is close to 2 mm FWHM in both cases, while the FWTM is approximately 5.3 mm for the 5 mm thick crystal and approximately 9.6 mm for the 10 mm thick crystal.
|
Cabrera, M. E., Casas, J. A., Mitsou, V. A., Ruiz de Austri, R., & Terron, J. (2012). Histogram comparison tools for the search of new physics at LHC. Application to the CMSSM. J. High Energy Phys., 04(4), 133–27pp.
Abstract: We propose a rigorous and effective way to compare experimental and theoretical histograms, incorporating the different sources of statistical and systematic uncertainty. This is a useful tool to extract as much information as possible from the comparison of experimental data with theoretical simulations, optimizing the chances of identifying New Physics at the LHC. We illustrate this by showing how a search in the CMSSM parameter space, using Bayesian techniques, can effectively find the correct values of the CMSSM parameters by comparing histograms of multijet + missing transverse momentum events displayed in the effective-mass variable. The procedure is in fact very efficient at identifying the true supersymmetric model, in case supersymmetry is really there and accessible to the LHC.
|
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043–48pp.
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M_Z is considered. This allows us to scan the whole parameter space, permitting arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. We then incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect is reinforced if the Higgs mass is not close to its present experimental limit and persists when dark matter constraints are included. Only when the g-2 constraint (based on e+e- data) is considered is the preferred region (for μ > 0) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
|
Falkowski, A., Gonzalez-Alonso, M., Palavric, A., & Rodriguez-Sanchez, A. (2024). Constraints on subleading interactions in beta decay Lagrangian. J. High Energy Phys., 02(2), 091–54pp.
Abstract: We discuss the effective field theory (EFT) for nuclear beta decay. The general quark-level EFT describing charged-current interactions between quarks and leptons is matched to the nucleon-level non-relativistic EFT at the O(MeV) momentum scale characteristic of beta transitions. The matching takes into account, for the first time, the effect of all possible beyond-the-Standard-Model interactions at the subleading order in the recoil momentum. We calculate the impact of all the Wilson coefficients of the leading and subleading EFT Lagrangian on the differential decay width in allowed beta transitions. As an example application, we show how the existing experimental data constrain the subleading Wilson coefficients corresponding to pseudoscalar, weak magnetism, and induced tensor interactions. The data display 3.5σ evidence for nucleon weak magnetism, in agreement with the theory prediction based on isospin symmetry.
|
Falkowski, A., Gonzalez-Alonso, M., & Tabrizi, Z. (2020). Consistent QFT description of non-standard neutrino interactions. J. High Energy Phys., 11(11), 048–23pp.
Abstract: Neutrino oscillations are precision probes of new physics. Apart from neutrino masses and mixings, they are also sensitive to possible deviations of low-energy interactions between quarks and leptons from the Standard Model predictions. In this paper we develop a systematic description of such non-standard interactions (NSI) in oscillation experiments within the quantum field theory framework. We calculate the event rate and oscillation probability in the presence of general NSI, starting from the effective field theory (EFT) in which new physics modifies the flavor or Lorentz structure of charged-current interactions between leptons and quarks. We also provide the matching between the EFT Wilson coefficients and the widely used simplified quantum-mechanical approach, where new physics is encoded in a set of production and detection NSI parameters. Finally, we discuss the consistency conditions for the standard NSI approach to correctly reproduce the quantum field theory result.
|
Falkowski, A., Gonzalez-Alonso, M., & Naviliat-Cuncic, O. (2021). Comprehensive analysis of beta decays within and beyond the Standard Model. J. High Energy Phys., 04(4), 126–36pp.
Abstract: Precision measurements in allowed nuclear beta decays and neutron decay are reviewed and analyzed both within the Standard Model and looking for new physics. The analysis incorporates the most recent experimental and theoretical developments. The results are interpreted in terms of Wilson coefficients describing the effective interactions between leptons and nucleons (or quarks) that are responsible for beta decay. New global fits are performed incorporating a comprehensive list of precision measurements in neutron decay, superallowed 0+ -> 0+ transitions, and other nuclear decays that include, for the first time, data from mirror beta transitions. The results confirm the V-A character of the interaction and translate into updated values for V_ud and g_A at the 10^-4 level. We also place new stringent limits on exotic couplings involving left-handed and right-handed neutrinos, which benefit significantly from the inclusion of mirror decays in the analysis.
|
Falkowski, A., Gonzalez-Alonso, M., Kopp, J., Soreq, Y., & Tabrizi, Z. (2021). EFT at FASER nu. J. High Energy Phys., 10(10), 086–46pp.
Abstract: We investigate the sensitivity of the FASER nu detector to new physics in the form of non-standard neutrino interactions. FASER nu, which will be installed 480 m downstream of the ATLAS interaction point, will for the first time study interactions of multi-TeV neutrinos from a controlled source. Our formalism, which is applicable to any current and future neutrino experiment, is based on the Standard Model Effective Field Theory (SMEFT) and its counterpart, the Weak Effective Field Theory (WEFT), below the electroweak scale. Starting from the WEFT Lagrangian, we compute the coefficients that modify neutrino production in meson decays and detection via deep-inelastic scattering, and we express the new physics effects in terms of modified flavor transition probabilities. For some coupling structures, we find that FASER nu will be able to constrain interactions that are two to three orders of magnitude weaker than Standard Model weak interactions, implying that the experiment will be indirectly probing new physics at the multi-TeV scale. In some cases, FASER nu constraints will become comparable to existing limits, some of them derived for the first time in this paper, already with 150 fb^-1 of data.
|