Tetrault, M. A., Oliver, J. F., Bergeron, M., Lecomte, R., & Fontaine, R. (2010). Real Time Coincidence Detection Engine for High Count Rate Timestamp Based PET. IEEE Trans. Nucl. Sci., 57(1), 117–124.
Abstract: Coincidence engines follow two main implementation approaches: timestamp based systems and AND-gate based systems. The latter have been more widespread in recent years because of their lower cost and high efficiency. However, they are highly dependent on the selected electronic components, have limited flexibility once assembled, and are customized to fit a specific scanner's geometry. Timestamp based systems have been gathering more attention lately, especially with high channel count, fully digital systems. These new systems must however cope with high singles count rates. One option is to record every detected event and postpone coincidence detection to offline processing. For daily use systems, a real time engine is preferable because it dramatically reduces data volume and hence image preprocessing time and raw data management. This paper presents the timestamp based coincidence engine for the LabPET(TM), a small animal PET scanner with up to 4608 individual avalanche photodiode readout channels. The engine can handle up to 100 million single events per second and has extensive flexibility because it resides in programmable logic devices. It can be adapted to any detector geometry or channel count, can be ported to newer, faster programmable devices, and can have extra modules added to take advantage of scanner-specific features. Finally, the user can select between a full processing mode for imaging protocols and a minimum processing mode to study different approaches for coincidence detection with offline software.
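The core idea of timestamp based coincidence detection can be sketched as a sliding window over time-sorted singles. The sketch below is an illustrative reconstruction only, assuming time-ordered input; the function name, event format, and window value are invented and are not the LabPET firmware interface.

```python
# Hypothetical sketch of timestamp-based coincidence detection: pair up
# singles whose timestamps fall within a coincidence window. Not the
# LabPET engine; names and values are illustrative.

def find_coincidences(singles, window):
    """singles: list of (timestamp, channel) tuples sorted by timestamp.
    Returns pairs ((t1, ch1), (t2, ch2)) with t2 - t1 <= window."""
    pairs = []
    n = len(singles)
    for i in range(n):
        t_i = singles[i][0]
        j = i + 1
        # scan forward only while later events stay inside the window
        while j < n and singles[j][0] - t_i <= window:
            pairs.append((singles[i], singles[j]))
            j += 1
    return pairs

events = [(0, 3), (4, 17), (100, 5), (103, 9), (250, 3)]
print(find_coincidences(events, 10))
# -> [((0, 3), (4, 17)), ((100, 5), (103, 9))]
```

A hardware engine would pipeline this scan rather than loop over a list, but the window comparison is the same; a real-time implementation is what keeps the output data volume small compared with recording every single event.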
|
Cirigliano, V., Jenkins, J. P., & Gonzalez-Alonso, M. (2010). Semileptonic decays of light quarks beyond the Standard Model. Nucl. Phys. B, 830(1-2), 95–115.
Abstract: We describe non-standard contributions to semileptonic processes in a model-independent way in terms of an SU(2)_L x U(1)_Y invariant effective Lagrangian at the weak scale, from which we derive the low-energy effective Lagrangian governing muon and beta decays. We find that the deviation from Cabibbo universality, Delta_CKM = |V_ud|^2 + |V_us|^2 + |V_ub|^2 - 1, receives contributions from four effective operators. The phenomenological bound Delta_CKM = (-1 +/- 6) x 10^-4 provides strong constraints on all four operators, corresponding to an effective scale Lambda > 11 TeV (90% CL). Depending on the operator, this constraint is at the same level as or better than the Z-pole observables. Conversely, precision electroweak constraints alone would allow universality violations as large as Delta_CKM = -0.01 (90% CL). An observed Delta_CKM different from 0 at this level could be explained in terms of a single four-fermion operator which is relatively poorly constrained by electroweak precision measurements.
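The universality test quoted above is a simple arithmetic check on the first-row CKM elements. The numbers below are illustrative, roughly 2010-era magnitudes, not the fit inputs used in the paper:

```python
# Numerical illustration of the Cabibbo-universality test
# Delta_CKM = |V_ud|^2 + |V_us|^2 + |V_ub|^2 - 1.
# Element values are illustrative only, not the paper's inputs.
V_ud, V_us, V_ub = 0.97425, 0.2252, 0.00389
delta_ckm = V_ud**2 + V_us**2 + V_ub**2 - 1.0
print(f"Delta_CKM = {delta_ckm:+.1e}")  # of order 1e-4, consistent with unitarity
```

Because |V_ud|^2 dominates the sum, a deviation at the 10^-4 level requires knowing V_ud itself to roughly the 5 x 10^-5 level, which is why beta-decay inputs drive the bound.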
|
Pierre Auger Collaboration (Abraham, J., et al.), & Pastor, S. (2010). Measurement of the energy spectrum of cosmic rays above 10^18 eV using the Pierre Auger Observatory. Phys. Lett. B, 685(4-5), 239–246.
Abstract: We report a measurement of the flux of cosmic rays with unprecedented precision and statistics using the Pierre Auger Observatory. Based on fluorescence observations in coincidence with at least one surface detector, we derive a spectrum for energies above 10^18 eV. We also update the previously published energy spectrum obtained with the surface detector array. The two spectra are combined addressing the systematic uncertainties and, in particular, the influence of the energy resolution on the spectral shape. The spectrum can be described by a broken power law E^-gamma with index gamma = 3.3 below the ankle, which is measured at log10(E_ankle/eV) = 18.6. Above the ankle the spectrum is described by a power law with index 2.6, followed by a flux suppression, above about log10(E/eV) = 19.5, detected with high statistical significance.
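The broken power-law shape quoted in the abstract can be written down directly. This sketch reproduces only the spectral shape with the quoted indices and ankle position; the normalization is arbitrary and the flux suppression above log10(E/eV) = 19.5 is not modeled:

```python
# Sketch of the quoted spectral shape: E^-gamma with gamma = 3.3 below the
# ankle at log10(E/eV) = 18.6 and gamma = 2.6 above it. Normalization is
# arbitrary (unity at the ankle); the suppression region is not modeled.

def flux_shape(log10_E, log10_ankle=18.6, g_below=3.3, g_above=2.6):
    E = 10.0 ** log10_E
    E_ankle = 10.0 ** log10_ankle
    gamma = g_below if log10_E < log10_ankle else g_above
    return (E / E_ankle) ** (-gamma)

# One decade below the ankle the flux rises by 10^3.3, versus 10^2.6
# per decade above it: the spectrum flattens at the ankle.
print(flux_shape(17.6), flux_shape(19.6))
```

The change of slope at the ankle is what the combined hybrid and surface-detector spectra pin down, with the energy resolution mainly affecting how sharp the break appears.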
|
Labiche, M., et al., Caballero, L., & Rubio, B. (2010). TIARA: A large solid angle silicon array for direct reaction studies with radioactive beams. Nucl. Instrum. Methods Phys. Res. A, 614(3), 439–448.
Abstract: A compact, quasi-4 pi position-sensitive silicon array, TIARA, designed to study direct reactions induced by radioactive beams in inverse kinematics is described here. The Transfer and Inelastic All-angle Reaction Array (TIARA) consists of 8 resistive charge division detectors forming an octagonal barrel around the target and a set of double-sided silicon-strip annular detectors positioned at each end of the barrel. The detector was coupled to the gamma-ray array EXOGAM and the spectrometer VAMOS at the GANIL Laboratory to demonstrate the potential of such an apparatus with radioactive beams. The N-14(d,p)N-15 reaction, well known in direct kinematics, has been carried out in inverse kinematics for that purpose. The observation of the N-15 ground state and excited states at 7.16 and 7.86 MeV is presented here, as well as the comparison of the measured proton angular distributions with DWBA calculations. Transferred l-values are in very good agreement with both theoretical calculations and previous experimental results obtained in direct kinematics.
|
Bouhova-Thacker, E., Kostyukhin, V., Koffas, T., Liebig, W., Limper, M., Piacquadio, G. N., et al. (2010). Expected Performance of Vertex Reconstruction in the ATLAS Experiment at the LHC. IEEE Trans. Nucl. Sci., 57(2), 760–767.
Abstract: In the harsh environment of the Large Hadron Collider at CERN (design luminosity of 10^34 cm^-2 s^-1), efficient reconstruction of vertices is crucial for many physics analyses. Described in this paper is the expected performance of the vertex reconstruction used in the ATLAS experiment. The algorithms for the reconstruction of primary and secondary vertices, as well as for finding photon conversions and reconstructing vertices in jets, are described. The implementation of the vertex algorithms, which follows a very modular design based on object-oriented C++, is presented. A user-friendly concept allows event reconstruction and physics analyses to compare and optimize their choice among different vertex reconstruction strategies. The performance of the implemented algorithms has been studied on a variety of Monte Carlo samples and results are presented.
|
Cabrera, M. E., Casas, J. A., & Ruiz de Austri, R. (2010). MSSM forecast for the LHC. J. High Energy Phys., 05(5), 043 (48 pp).
Abstract: We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of M_Z is considered. This allows us to scan the whole parameter space, permitting arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. We then incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect gets reinforced if the Higgs mass is not close to its present experimental limit, and persists when dark matter constraints are included. Only when the g-2 constraint (based on e+e- data) is considered is the preferred region (for μ > 0) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-μ possibilities.
|
Jimenez, R., Kitching, T., Pena-Garay, C., & Verde, L. (2010). Can we measure the neutrino mass hierarchy in the sky? J. Cosmol. Astropart. Phys., 05(5), 035 (14 pp).
Abstract: Cosmological probes are steadily reducing the total neutrino mass window, resulting in constraints on the neutrino-mass degeneracy as the most significant outcome. In this work we explore the discovery potential of cosmological probes to constrain the neutrino hierarchy, and point out some subtleties that could yield spurious claims of detection. This has an important implication for the next generation of double beta decay experiments, which will be able to achieve a positive signal in the case of a degenerate or inverted hierarchy of Majorana neutrinos. We find that cosmological experiments that nearly cover the whole sky could in principle distinguish the neutrino hierarchy by yielding 'substantial' evidence for one scenario over the other, via precise measurements of the shape of the matter power spectrum from large scale structure and weak gravitational lensing.
|
Yamagata-Sekihara, J., & Oset, E. (2010). V P gamma radiative decay of resonances dynamically generated from the vector meson-vector meson interaction. Phys. Lett. B, 690(4), 376–381.
Abstract: We evaluate the radiative decay into a vector, a pseudoscalar and a photon of several resonances dynamically generated from the vector-vector interaction. The process proceeds via the decay of one of the vector components into a pseudoscalar and a photon, which have an invariant mass distribution very different from phase space as a consequence of the two-vector structure of the resonances. Experimental work along these lines should provide useful information on the nature of these resonances.
|
Pierre Auger Collaboration (Abraham, J., et al.), & Pastor, S. (2010). The fluorescence detector of the Pierre Auger Observatory. Nucl. Instrum. Methods Phys. Res. A, 620(2-3), 227–251.
Abstract: The Pierre Auger Observatory is a hybrid detector for ultra-high energy cosmic rays. It combines a surface array to measure secondary particles at ground level together with a fluorescence detector to measure the development of air showers in the atmosphere above the array. The fluorescence detector comprises 24 large telescopes specialized for measuring the nitrogen fluorescence caused by charged particles of cosmic ray air showers. In this paper we describe the components of the fluorescence detector including its optical system, the design of the camera, the electronics, and the systems for relative and absolute calibration. We also discuss the operation and the monitoring of the detector. Finally, we evaluate the detector performance and precision of shower reconstructions.
|
Rodriguez-Alvarez, M. J., Sanchez, F., Soriano, A., & Iborra, A. (2010). Sparse Givens resolution of large system of linear equations: Applications to image reconstruction. Math. Comput. Model., 52(7-8), 1258–1264.
Abstract: In medicine, computed tomographic images are reconstructed from a large number of measurements of X-ray transmission through the patient (projection data). The mathematical model used to describe a computed tomography device is a large system of linear equations of the form AX = B. In this paper we propose the QR decomposition as a direct method to solve the linear system. Computing the QR decomposition can be expensive. However, once it has been calculated for a specific system, the matrices Q and R are stored and reused for any projection acquired on that system. An implementation of the QR decomposition that takes greater advantage of the sparsity of the system matrix is discussed.
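The reuse idea in the abstract, factor once, then solve cheaply for every new projection, can be sketched with a small dense system. This is an illustrative toy only: the paper's system matrix is large and sparse, and a production implementation would use sparse Givens rotations rather than a dense factorization.

```python
import numpy as np

# Sketch of "factor once, solve many": compute A = QR a single time, store
# Q and R, then solve A x = b for each new projection vector b via
# R x = Q^T b. Dense toy matrix only; the paper's matrix is large and sparse.

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # overdetermined toy "system matrix"
Q, R = np.linalg.qr(A)            # computed once, then stored

def solve_with_stored_qr(b):
    # least-squares solution of A x = b using the stored factors
    return np.linalg.solve(R, Q.T @ b)

b = rng.standard_normal(6)        # one acquired "projection"
x = solve_with_stored_qr(b)
# Should match the least-squares solution computed from scratch:
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```

The per-projection cost is just a matrix-vector product and a triangular solve, which is what makes the one-time factorization worthwhile when many projections share the same system matrix.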
|