Witte, S. J., Rosauro-Alcaraz, S., McDermott, S. D., & Poulin, V. (2020). Dark photon dark matter in the presence of inhomogeneous structure. J. High Energy Phys., 06(6), 35pp.
Abstract: Dark photon dark matter will resonantly convert into visible photons when the dark photon mass equals the plasma frequency of the ambient medium. In cosmological contexts, this transition leads to an extremely efficient, albeit short-lived, heating of the surrounding gas. Existing work in this field has predominantly focused on understanding the implications of these resonant transitions in the limit that the plasma frequency of the Universe can be treated as perfectly homogeneous, i.e. neglecting inhomogeneities in the electron number density. In this work we focus on the implications of heating from dark photon dark matter in the presence of inhomogeneous structure (which is particularly relevant for dark photons with masses in the range 10^-15 eV ≲ m_A' ≲ 10^-12 eV), emphasizing both the importance of inhomogeneous energy injection and the sensitivity of cosmological observations to the inhomogeneities themselves. More specifically, we derive modified constraints on dark photon dark matter from the Ly-α forest, and show that the presence of inhomogeneities allows one to extend constraints to masses outside the range obtainable in the homogeneous limit, while only slightly relaxing their strength. We then project the sensitivity of near-future cosmological surveys aiming to measure the 21 cm transition in neutral hydrogen prior to reionization, and demonstrate that these experiments will be extremely useful in improving sensitivity to masses near ~10^-14 eV, potentially by several orders of magnitude. Finally, we discuss implications for reionization, early star formation, and late-time y-type spectral distortions, and show that probes which are inherently sensitive to the inhomogeneous state of the Universe could resolve signatures unique to the light dark photon dark matter scenario, and thus offer a fantastic potential for a positive detection.
|
Di Valentino, E., Melchiorri, A., & Mena, O. (2013). Dark radiation sterile neutrino candidates after Planck data. J. Cosmol. Astropart. Phys., 11(11), 018–13pp.
Abstract: Recent Cosmic Microwave Background (CMB) results from the Planck satellite, combined with previous CMB data and Hubble constant measurements from the Hubble Space Telescope, provide a constraint on the effective number of relativistic degrees of freedom of N_eff = 3.62 (+0.50/-0.48) at 95% CL. The new Planck data provide a unique opportunity to place limits on models containing relativistic species at the decoupling epoch. We present here the bounds on sterile neutrino models combining Planck data with galaxy clustering information. Assuming N_eff active plus sterile massive neutrino species, in the case of a Planck+WP+HighL+HST analysis we find m^eff_(ν,sterile) < 0.36 eV and 3.14 < N_eff < 4.15 at 95% CL, while using Planck+WP+HighL data in combination with the full shape of the galaxy power spectrum from the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 9 measurements, we find 3.30 < N_eff < 4.43 and m^eff_(ν,sterile) < 0.33 eV, both at 95% CL, with the three active neutrinos having the minimum mass allowed in the normal hierarchy scheme, i.e. Σm_ν ≈ 0.06 eV. These values compromise the viability of the (3+2) massive sterile neutrino models for the parameter region indicated by global fits of neutrino oscillation data. Within the (3+1) massive sterile neutrino scenario, we find m^eff_(ν,sterile) < 0.34 eV at 95% CL. While the existence of one extra sterile massive neutrino state is compatible with current oscillation data, the values of the sterile neutrino mass preferred by oscillation analyses are significantly higher than the current cosmological bound. We also review the bounds on extended dark sectors with additional light species based on the latest Planck CMB observations.
|
Bernardoni, F., Blossier, B., Bulava, J., Della Morte, M., Fritzsch, P., Garron, N., et al. (2014). Decay constants of B-mesons from non-perturbative HQET with two light dynamical quarks. Phys. Lett. B, 735, 349–356.
Abstract: We present a computation of B-meson decay constants from lattice QCD simulations within the framework of Heavy Quark Effective Theory (HQET) for the b-quark. The next-to-leading order corrections in the HQET expansion are included non-perturbatively. Based on N_f = 2 gauge field ensembles, covering three lattice spacings a ≈ 0.05–0.08 fm and pion masses down to 190 MeV, a variational method for extracting hadronic matrix elements is used to keep systematic errors under control. In addition, we perform a careful autocorrelation analysis in the extrapolation to the continuum and to the physical pion mass limits. Our final results read f_B = 186(13) MeV, f_Bs = 224(14) MeV and f_Bs/f_B = 1.203(65). A comparison with other results in the literature does not reveal a dependence on the number of dynamical quarks, and effects from truncating HQET appear to be negligible.
Keywords: Lattice QCD; Heavy Quark Effective Theory; Bottom quarks; Meson decay
|
Amerio, A., Calore, F., Serpico, P. D., & Zaldivar, B. (2024). Deepening gamma-ray point-source catalogues with sub-threshold information. J. Cosmol. Astropart. Phys., 03(3), 055–18pp.
Abstract: We propose a novel statistical method to extend Fermi-LAT catalogues of high-latitude γ-ray sources below their nominal threshold. To do so, we rely on the determination of the differential source-count distribution of sub-threshold sources, which only provides the statistical flux distribution of faint sources. By simulating ensembles of synthetic skies, we assess quantitatively the likelihood for pixels in the sky with relatively low test statistics to be due to sources, therefore complementing the source-count distribution with spatial information. Besides being useful to orient efforts towards multi-messenger and multi-wavelength identification of new γ-ray sources, we expect the results to be especially advantageous for statistical applications such as cross-correlation analyses.
Keywords: gamma ray theory; Frequentist statistics
|
Bordes, J., Chan, H. M., & Tsou, S. T. (2021). δ_CP for leptons and a new take on CP physics with the FSM. Int. J. Mod. Phys. A, 36, 2150236–22pp.
Abstract: A bonus of the framed Standard Model (FSM), constructed initially to explain the mass and mixing patterns of quarks and leptons, is a solution (without axions) of the strong CP problem: the theta-angle term θ_I Tr(H_μν H*^μν) in colour is cancelled by a chiral transformation on a quark zero mode inherent in the FSM, thereby producing a CP-violating phase in the CKM matrix similar in size to what is observed. Extending here to flavour, one finds that there are two terms proportional to Tr(G_μν G*^μν): (a) a term in the action from flavour instantons with unknown coefficient, say θ'_I; (b) a term induced by the above FSM solution to the strong CP problem, with therefore known coefficient θ'_C. Both terms can be cancelled in the FSM by a chiral transformation on the lepton zero mode to give a Jarlskog invariant J' in the PMNS matrix for leptons of order 10^-2, as hinted at by experiment. But if, as suggested in Ref. 2, the term θ'_I is to be cancelled by a chiral transformation in the predicted hidden sector to solve the strong CP problem therein, leaving only the term θ'_C to be cancelled by the chiral transformation on leptons, then the following prediction results: J' ~ -0.012 (δ'_CP ~ 1.11π), which is (i) of the right order, (ii) of the right sign and (iii) in the range favoured by present experiment. Together with the earlier result for quarks, this offers an attractive unified treatment of all known CP physics.
|
Albiol, F., Corbi, A., & Albiol, A. (2019). Densitometric Radiographic Imaging With Contour Sensors. IEEE Access, 7, 18902–18914.
Abstract: We present the technical/physical foundations of a new imaging technique that combines ordinary radiographic information (generated by conventional X-ray settings) with the patient's volume to derive densitometric images. Traditionally, these images provide quantitative information about tissue densities. In our approach, they graphically enhance either soft or bony regions. After measuring the patient's volume with contour recognition devices, the physical traversed lengths within it (as the Roentgen beam intersects the patient) are calculated and pixel-wise associated with the original radiograph (X). In order to derive this map of lengths (L), the camera equations of the X-ray system and the contour sensor are determined. The patient's surface is also translated to the point of view of the X-ray beam, and all its entrance/exit points are sought with the help of ray-casting methods. The derived L is applied to X as a physical operation (subtraction), obtaining soft-tissue-enhanced (D_S) or bone-enhanced (D'_B) figures. In the D_S type, the contained graphical information can be linearly mapped to the average electronic density traversed by the X-ray beam. This feature represents an interesting proof of concept of associating density data to radiographs but, most importantly, the intensity histogram is objectively compressed, i.e. the dynamic range is narrower than that of the corresponding X. This leads to other advantages: improved visibility of border/edge areas (high gradient), extended manual window level/width manipulation during screening, and immediate correction of underexposed X instances. In the D'_B type, high-density elements are highlighted and easier to discern. All these results can be achieved with low-energy beam exposures, saving costs and dose. Future work will deepen this clinical side of our research. In contrast with other image-based modifiers, the proposed method is grounded on the measurement of a physical entity: the span of the X-ray beam within a body during a radiographic examination.
|
Valdes-Cortez, C., Ballester, F., Vijande, J., Gimenez, V., Gimenez-Alventosa, V., Perez-Calatayud, J., et al. (2020). Depth-dose measurement corrections for the surface electronic brachytherapy beams of an Esteya® unit: a Monte Carlo study. Phys. Med. Biol., 65(24), 245026–12pp.
Abstract: Three different correction factors for measurements with the parallel-plate ionization chamber PTW T34013 on the Esteya electronic brachytherapy unit have been investigated. This chamber type is recommended by AAPM TG-253 for depth-dose measurements in the 69.5 kV x-ray beam generated by the Esteya unit. Monte Carlo simulations using the PENELOPE-2018 system were performed to determine the absorbed dose deposited in water and in the chamber sensitive volume at different depths with a Type A uncertainty smaller than 0.1%. Chamber-to-chamber differences have been explored by performing measurements with three different chambers. The range of conical applicators available, from 10 to 30 mm in diameter, has been explored. Using a depth-independent global chamber perturbation correction factor without a shift of the effective point of measurement yielded differences between the absorbed dose to water and the corrected absorbed dose in the sensitive volume of the chamber of up to 1% and 0.6% for the 10 mm and 30 mm applicators, respectively. Calculations using a depth-dependent perturbation factor, including or excluding a shift of the effective point of measurement, resulted in depth-dose differences of about ±0.5% or less for both applicators. The smallest depth-dose differences were obtained when a shift of the effective point of measurement was implemented, displacing it 0.4 mm towards the center of the sensitive volume of the chamber. The correction factors were obtained with combined uncertainties of 0.4% (k = 2). Uncertainties due to chamber-to-chamber differences are found to be lower than 2%. The results emphasize the relevance of carrying out detailed Monte Carlo studies for each electronic brachytherapy device and ionization chamber used for its dosimetry.
Keywords: electronic brachytherapy; eBT; dosimetry; ionization chamber; Monte Carlo
|
Candela-Juan, C., Niatsetski, Y., van der Laarse, R., Granero, D., Ballester, F., Perez-Calatayud, J., et al. (2016). Design and characterization of a new high-dose-rate brachytherapy Valencia applicator for larger skin lesions. Med. Phys., 43(4), 1639–1648.
Abstract: Purpose: The aims of this study were (i) to design a new high-dose-rate (HDR) brachytherapy applicator for treating surface lesions with planning target volumes larger than 3 cm in diameter and up to 5 cm in size, using the microSelectron-HDR or Flexitron afterloader (Elekta Brachytherapy) with an Ir-192 source; (ii) to calculate by means of the Monte Carlo (MC) method the dose distribution for the new applicator when it is placed against a water phantom; and (iii) to validate experimentally the dose distributions in water. Methods: The PENELOPE2008 MC code was used to optimize dwell positions and dwell times. Next, the dose distribution in a water phantom and the leakage dose distribution around the applicator were calculated. Finally, MC data were validated experimentally for an Ir-192 mHDR-v2 source by measuring (i) dose distributions with radiochromic EBT3 films (ISP); (ii) the percentage depth-dose (PDD) curve with the parallel-plate ionization chamber Advanced Markus (PTW); and (iii) the absolute dose rate with EBT3 films and the PinPoint T31016 (PTW) ionization chamber. Results: The new applicator is made of tungsten alloy (Densimet) and consists of a set of interchangeable collimators. Three catheters are used to allocate the source at prefixed dwell positions with preset weights to produce a homogeneous dose distribution at the typical prescription depth of 3 mm in water. The same plan is used for all available collimators. PDD, absolute dose rate per unit of air kerma strength, and off-axis profiles in a cylindrical water phantom are reported. These data can be used for treatment planning. Leakage around the applicator was also scored. The calculated dose distributions, PDD, and absolute dose rate agree within experimental uncertainties with the measured doses: differences of MC data with chamber measurements are up to 0.8% and with radiochromic films up to 3.5%.
Conclusions: The new applicator and the dosimetric data provided here will be a valuable tool in clinical practice, making treatment of large skin lesions simpler, faster, and safer. The dose to surrounding healthy tissues is also minimal.
Keywords: skin applicator; Valencia applicator; HDR brachytherapy; dosimetry; Monte Carlo
|
Carrio, F., Kim, H. Y., Moreno, P., Reed, R., Sandrock, C., Schettino, V., et al. (2014). Design of an FPGA-based embedded system for the ATLAS Tile Calorimeter front-end electronics test-bench. J. Instrum., 9, C03023–12pp.
Abstract: The portable test-bench for the certification of the ATLAS Tile hadronic calorimeter front-end electronics has been redesigned for the present Long Shutdown (LS1) of the LHC, improving its portability and expanding its functionality. This paper presents a new test-bench based on a Xilinx Virtex-5 FPGA that implements an embedded system using a PowerPC 440 microprocessor hard core and custom IP cores. A light Linux version runs on the PowerPC microprocessor and handles the IP cores, which implement the different functionalities needed to perform the desired tests, such as TTCvi emulation, G-Link decoding, ADC control and data reception.
|
CALICE Collaboration (White, A., et al.), & Irles, A. (2023). Design, construction and commissioning of a technological prototype of a highly granular SiPM-on-tile scintillator-steel hadronic calorimeter. J. Instrum., 18(11), P11018–39pp.
Abstract: The CALICE collaboration is developing highly granular electromagnetic and hadronic calorimeters for detectors at future energy frontier electron-positron colliders. After successful tests of a physics prototype, a technological prototype of the Analog Hadron Calorimeter has been built, based on a design and construction techniques scalable to a collider detector. The prototype consists of a steel absorber structure and active layers of small scintillator tiles that are individually read out by directly coupled SiPMs. Each layer has an active area of 72 x 72 cm^2 and a tile size of 3 x 3 cm^2. With 38 active layers, the prototype has nearly 22,000 readout channels, and its total thickness amounts to 4.4 nuclear interaction lengths. The dedicated readout electronics provide time stamping of each hit with an expected resolution of about 1 ns. The prototype was constructed in 2017 and commissioned in beam tests at DESY. It recorded muons, hadron showers and electron showers at different energies in test beams at CERN in 2018. In this paper, the design of the prototype, its construction and commissioning are described. The methods used to calibrate the detector are detailed, and the performance achieved in terms of uniformity and stability is presented.
|