Racker, J., Pena, M., & Rius, N. (2012). Leptogenesis with small violation of B – L. J. Cosmol. Astropart. Phys., 07(7), 030–18pp.
Abstract: We analyze leptogenesis in the context of seesaw models with almost conserved lepton number, focusing on the L-conserving contribution to the flavoured CP asymmetries. We find that, contrary to previous claims, successful leptogenesis is feasible for masses of the lightest heavy neutrino as low as M_1 ~ 10^6 GeV, without resorting to the resonant enhancement of the CP asymmetry for strongly degenerate heavy neutrinos. This lower limit renders thermal leptogenesis compatible with the gravitino bound in supersymmetric scenarios.
Perez, A., & Romanelli, A. (2013). Spatially Dependent Decoherence and Anomalous Diffusion of Quantum Walks. J. Comput. Theor. Nanosci., 10(7), 1591–1595.
Abstract: We analyze the long-time behavior of a discrete-time quantum walk subject to decoherence with a strong spatial dependence, acting on one half of the lattice. We show that, except for limiting cases of the decoherence parameter, the quantum walk at late times behaves sub-ballistically, meaning that the characteristic features of the quantum walk are not completely spoiled. Contrary to expectations, the asymptotic behavior is non-Markovian and depends on the amount of decoherence. This feature can be clearly seen in the long-time value of the Generalized Chiral Distribution (GCD).
Hinarejos, M., Bañuls, M. C., & Perez, A. (2013). A Study of Wigner Functions for Discrete-Time Quantum Walks. J. Comput. Theor. Nanosci., 10(7), 1626–1633.
Abstract: We perform a systematic study of the discrete-time quantum walk in one dimension using Wigner functions, which are generalized to include the chirality (or coin) degree of freedom. In particular, we analyze the evolution of the negative volume in phase space, as a function of time, for different initial states. This negativity can be used to quantify the degree of departure of the system from a classical state. We also relate this quantity to the entanglement between the coin and walker subspaces.
Capozziello, S., Harko, T., Koivisto, T. S., Lobo, F. S. N., & Olmo, G. J. (2013). The virial theorem and the dark matter problem in hybrid metric-Palatini gravity. J. Cosmol. Astropart. Phys., 07(7), 024–19pp.
Abstract: Hybrid metric-Palatini gravity is a recently proposed theory, consisting of the superposition of the metric Einstein-Hilbert Lagrangian with an f(R) term constructed à la Palatini. The theory predicts the existence of a long-range scalar field, which passes the Solar System observational constraints, even if the scalar field is very light, and modifies the cosmological and galactic dynamics. Thus, the theory opens new possibilities to approach, in the same theoretical framework, the problems of both dark energy and dark matter. In this work, we consider the generalized virial theorem in the scalar-tensor representation of the hybrid metric-Palatini gravity. More specifically, taking into account the relativistic collisionless Boltzmann equation, we show that the supplementary geometric terms in the gravitational field equations provide an effective contribution to the gravitational potential energy. We show that the total virial mass is proportional to the effective mass associated with the new terms generated by the effective scalar field, and the baryonic mass. In addition, we consider astrophysical applications of the model and show that the model predicts that the mass associated with the scalar field and its effects extend beyond the virial radius of the clusters of galaxies. In the context of the galaxy cluster velocity dispersion profiles predicted by the hybrid metric-Palatini model, the generalized virial theorem can be an efficient tool in observationally testing the viability of this class of generalized gravity models.
Lobo, F. S. N., Olmo, G. J., & Rubiera-Garcia, D. (2013). Semiclassical geons as solitonic black hole remnants. J. Cosmol. Astropart. Phys., 07(7), 011–10pp.
Abstract: We find that the end state of black hole evaporation could be represented by stable, non-singular, horizonless solitonic remnants with masses of the order of the Planck scale and up to ~16 units of charge. Though these objects are locally indistinguishable from spherically symmetric, massive electric (or magnetic) charges, they turn out to be sourceless geons containing a wormhole generated by the electromagnetic field. Our results are obtained by interpreting semiclassical corrections to Einstein's theory in the first-order (Palatini) formalism, which yields second-order equations and avoids the instabilities of the usual (metric) formulation of quadratic gravity. We also discuss the potential relevance of these solutions for primordial black holes and the dark matter problem.
Agarwalla, S. K., Prakash, S., & Sankar, S. U. (2013). Resolving the octant of theta_23 with T2K and NOvA. J. High Energy Phys., 07(7), 131–24pp.
Abstract: Preliminary results of the MINOS experiment indicate that theta_23 is not maximal. Global fits to world neutrino data suggest two nearly degenerate solutions for theta_23: one in the lower octant (LO: theta_23 < 45 degrees) and the other in the higher octant (HO: theta_23 > 45 degrees). nu_mu -> nu_e oscillations in superbeam experiments are sensitive to the octant and are capable of resolving this degeneracy. We study the prospects of this resolution by the current T2K and upcoming NOvA experiments. Because of the hierarchy-delta_CP degeneracy and the octant-delta_CP degeneracy, the impact of the hierarchy on octant resolution has to be taken into account. As in the case of hierarchy determination, there exist favorable (unfavorable) values of delta_CP for which octant resolution is easy (challenging). However, for octant resolution the unfavorable delta_CP values of the neutrino data are favorable for the anti-neutrino data and vice versa. This is in contrast to the case of hierarchy determination. In this paper, we compute the combined sensitivity of T2K and NOvA to resolve the octant ambiguity. If sin^2(theta_23) = 0.41, then NOvA can rule out all the values of theta_23 in the HO at 2 sigma C.L., irrespective of the hierarchy and delta_CP. Addition of T2K data improves the octant sensitivity. If T2K were to have equal neutrino and anti-neutrino runs of 2.5 years each, a 2 sigma resolution of the octant becomes possible provided sin^2(theta_23) <= 0.43 or >= 0.58 for any value of delta_CP.
Blennow, M., Coloma, P., Donini, A., & Fernandez-Martinez, E. (2013). Gain fractions of future neutrino oscillation facilities over T2K and NOvA. J. High Energy Phys., 07(7), 159–23pp.
Abstract: We evaluate the probability that future neutrino oscillation facilities discover leptonic CP violation and/or measure the neutrino mass hierarchy. We study how this probability is affected by positive or negative hints for these observables found at T2K and NOvA. We consider the following facilities: LBNE, T2HK, and the 10 GeV Neutrino Factory (NF10), and show how their discovery probabilities change with the running time of T2K and NOvA, conditioned on their results. We find that, if after 15 years T2K and NOvA have not observed a 90% CL hint of CP violation, then LBNE and T2HK have less than a 10% chance of achieving a 5 sigma discovery, whereas NF10 still has a ~40% chance to do so. Conversely, if T2K and NOvA have an early 90% CL hint in 5 years from now, T2HK has a rather large chance to achieve a 5 sigma CP violation discovery (75% or 55%, depending on whether the mass hierarchy is known or not). This is to be compared with the 90% (30%) probability that NF10 (LBNE) would have to observe the same signal at 5 sigma. A hierarchy measurement at 5 sigma is achievable at both LBNE and NF10 with more than 90% probability, irrespective of the outcome of T2K and NOvA. We also find that if LBNE or a similar very long baseline super-beam is the only next-generation facility to be built, then it is very useful to continue running T2K and NOvA (or at least T2K) beyond their original schedule in order to increase the chances of a CP violation discovery, given their complementarity.
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2013). The health of SUSY after the Higgs discovery and the XENON100 data. J. High Energy Phys., 07(7), 182–47pp.
Abstract: We analyze the implications of the Higgs discovery and the latest XENON data for the status and prospects of supersymmetry. We focus mainly, but not only, on the CMSSM and NUHM models. Using a Bayesian approach, we determine the probability distribution in the parameter space of these scenarios. This shows that, most probably, they are now beyond the LHC reach. These negative odds increase further (to more than 95% c.l.) if one includes dark matter constraints in the analysis, in particular the latest XENON100 data. However, the models would be probed completely by XENON1T. The mass of the LSP neutralino gets essentially fixed around 1 TeV. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such a penalization arises automatically from the careful Bayesian analysis itself, which allows the whole parameter space to be scanned. In this way, we can explain and resolve the apparent discrepancies between previous results in the literature. Although SUSY has become hard to detect at the LHC, this does not necessarily mean that it is very fine-tuned. We use Bayesian techniques to show that the experimental Higgs mass is ~2 sigma off the CMSSM or NUHM expectation. This is substantial but not dramatic. Although the CMSSM and the NUHM are unlikely to show up at the LHC, they are still interesting and plausible models after the Higgs observation; and, if they are true, the chances of discovering them in future dark matter experiments are quite high.
Baker, M. J., Bordes, J., Dominguez, C. A., Peñarrocha, J., & Schilcher, K. (2014). B meson decay constants f(Bc), f(Bs) and f(B) from QCD sum rules. J. High Energy Phys., 07(7), 032–16pp.
Abstract: Finite energy QCD sum rules with Legendre polynomial integration kernels are used to determine the heavy meson decay constant f(Bc), and revisit f(B) and f(Bs). Results exhibit excellent stability in a wide range of values of the integration radius in the complex squared energy plane, and of the order of the Legendre polynomial. Results are f(Bc) = 528 +/- 19 MeV, f(B) = 186 +/- 14 MeV, and f(Bs) = 222 +/- 12 MeV.
Aristizabal Sierra, D., Tortola, M., Valle, J. W. F., & Vicente, A. (2014). Leptogenesis with a dynamical seesaw scale. J. Cosmol. Astropart. Phys., 07(7), 052–20pp.
Abstract: In the simplest type-I seesaw leptogenesis scenario right-handed neutrino annihilation processes are absent. However, in the presence of new interactions these processes are possible and can affect the resulting B – L asymmetry in an important way. A prominent example is provided by models with spontaneous lepton number violation, where the existence of new dynamical degrees of freedom can play a crucial role. In this context, we provide a model-independent discussion of the effects of right-handed neutrino annihilations. We show that in the weak washout regime, as long as the scattering processes remain slow compared with the Hubble expansion rate throughout the relevant temperature range, the efficiency can be largely enhanced, reaching in some cases maximal values. Moreover, the B – L asymmetry yield turns out to be independent of initial conditions, in contrast to the "standard" case. On the other hand, when the annihilation processes are fast, the right-handed neutrino distribution tends to a thermal one down to low temperatures, implying a drastic suppression of the efficiency which in some cases can render the B – L generation mechanism inoperative.