Assam, I., Vijande, J., Ballester, F., Perez-Calatayud, J., Poppe, B., & Siebert, F. A. (2022). Evaluation of dosimetric effects of metallic artifact reduction and tissue assignment on Monte Carlo dose calculations for I-125 prostate implants. Med. Phys., 49, 6195–6208.
Abstract: Purpose: Monte Carlo (MC) simulation studies aimed at evaluating the magnitude of tissue heterogeneity in I-125 prostate permanent seed implant brachytherapy (BT) customarily use clinical post-implant CT images to generate a virtual representation of a realistic patient model (virtual patient model). Metallic artifact reduction (MAR) techniques and tissue assignment schemes (TAS) are applied to the post-implant CT images to mitigate metallic artifacts caused by BT seeds and to assign tissue types to the voxels corresponding to the bright seed spots and streaking artifacts, respectively. The objective of this study is to assess the combined influence of MAR and TAS on MC absorbed dose calculations in post-implant CT-based phantoms. The virtual patient models used for I-125 prostate implant MC absorbed dose calculations in this study are derived from the CT images of an external radiotherapy prostate patient without BT seeds or prostatic calcifications, thus averting the need to implement MAR and TAS. Methods: The geometry of the IsoSeed I25.S17plus source is validated by comparing the MC-calculated TG-43 parameters for the line-source approximation with the TG-43U1S2 consensus data. Four MC absorbed dose calculations are performed in two virtual patient models using the egs_brachy MC code: (1) the TG-43-based D_{w,w}^{TG43}; (2) D_{w,w}^{MBDC}, which accounts for interseed scattering and attenuation (ISA); (3) D_{m,m}, which examines ISA and tissue heterogeneity by scoring absorbed dose in tissue; and (4) D_{w,m}, which, unlike D_{m,m}, scores absorbed dose in water. MC absorbed doses (1) and (2) are simulated in a TG-43 patient phantom derived by setting the density of every voxel to 1.00 g cm⁻³ (water), whereas MC absorbed doses (3) and (4) are scored in the TG-186 patient phantom generated by mapping the mass density of each voxel to tissue according to a CT calibration curve.
The MC absorbed doses calculated in this study are compared with VariSeed v8.0 calculated absorbed doses. To evaluate the dosimetric effect of MAR and TAS, the MC absorbed doses of this work (independent of MAR and TAS) are compared with the MC absorbed doses of different I-125 source models from previous studies, calculated with different MC codes using post-implant CT-based phantoms generated by implementing MAR and TAS on post-implant CT images. Results: The very good agreement, within 3%, between the TG-43 parameters of this study and the published consensus data validates the geometry of the IsoSeed I25.S17plus source. For the clinical studies, the TG-43-based calculations show a D90 overestimation of more than 4% compared with the more realistic MC methods, due to ISA and tissue composition. The results of this work generally show few discrepancies with the post-implant CT-based dosimetry studies with respect to the D90 absorbed dose metric. These discrepancies are mainly Type B uncertainties due to the different I-125 source models and MC codes. Conclusions: The implementation of MAR and TAS on post-implant CT images has no dosimetric effect on I-125 prostate MC absorbed dose calculations in post-implant CT-based phantoms.
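As background to the TG-43 formalism this entry relies on: the dose rate is factored into an air-kerma strength S_K, a dose-rate constant Λ, a geometry function G_L, a radial dose function g(r), and an anisotropy function F(r,θ). A minimal sketch of the line-source approximation, with illustrative parameter values rather than TG-43U1S2 consensus data, might look like:

```python
import math

def geometry_factor_line(r, theta, L):
    """TG-43 line-source geometry function G_L(r, theta) in cm^-2.

    beta is the angle subtended at the point (r, theta) by the source of length L.
    """
    if math.isclose(theta, 0.0):
        return 1.0 / (r * r - L * L / 4.0)  # limiting case on the source axis
    # polar angles of the point as seen from the two source tips
    theta1 = math.atan2(r * math.sin(theta), r * math.cos(theta) + L / 2.0)
    theta2 = math.atan2(r * math.sin(theta), r * math.cos(theta) - L / 2.0)
    beta = theta2 - theta1
    return beta / (L * r * math.sin(theta))

def dose_rate_tg43(S_K, Lambda, r, theta, L, g_r=1.0, F=1.0):
    """Dose rate = S_K * Lambda * [G_L(r,theta)/G_L(r0,theta0)] * g(r) * F(r,theta).

    Reference point: r0 = 1 cm, theta0 = 90 degrees. g_r and F default to 1.0 here;
    real calculations interpolate tabulated consensus data for the source model.
    """
    G0 = geometry_factor_line(1.0, math.pi / 2.0, L)
    return S_K * Lambda * (geometry_factor_line(r, theta, L) / G0) * g_r * F
```

The defaults of 1.0 for g(r) and F(r,θ) are placeholders; a real implementation interpolates the tabulated consensus values for the specific seed, which is exactly what the TG-43U1S2 comparison in the abstract validates.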
|
Ahlburg, P., et al., & Marinas, C. (2020). EUDAQ – a data acquisition software framework for common beam telescopes. J. Instrum., 15(1), P01038–30pp.
Abstract: EUDAQ is a generic data acquisition software developed for use in conjunction with common beam telescopes at charged particle beam lines. Providing high-precision reference tracks for performance studies of new sensors, beam telescopes are essential for the research and development towards future detectors for high-energy physics. As beam time is a highly limited resource, EUDAQ has been designed with reliability and ease-of-use in mind. It enables flexible integration of different independent devices under test via their specific data acquisition systems into a top-level framework. EUDAQ controls all components globally, handles the data flow centrally and synchronises and records the data streams. Over the past decade, EUDAQ has been deployed as part of a wide range of successful test beam campaigns and detector development applications.
|
Garcia-Barcelo, J. M., Diaz-Morcillo, A., & Gimeno, B. (2023). Enhancing resonant circular-section haloscopes for dark matter axion detection: approaches and limitations in volume expansion. J. High Energy Phys., 11(11), 159–30pp.
Abstract: Haloscopes, microwave resonant cavities utilized in detecting dark matter axions within powerful static magnetic fields, are pivotal in modern astrophysical research. This paper delves into the realm of cylindrical geometries, investigating techniques to augment volume and enhance compatibility with dipole or solenoid magnets. The study explores volume constraints in two categories of haloscope designs: those reliant on single cavities and those employing multicavities. In both categories, strategies to increase the expanse of elongated structures are elucidated. For multicavities, the optimization of space within magnets is explored through 1D configurations. Three subcavity stacking approaches are investigated, while the foray into 2D and 3D geometries lays the groundwork for future topological developments. The results underscore the efficacy of these methods, revealing substantial room for progress in cylindrical haloscope design. Notably, an elongated single cavity design attains a three-order-of-magnitude increase in volume compared to a WC-109 standard waveguide-based single cavity. Diverse prototypes featuring single cavities and 1D, 2D, and 3D multicavities highlight the feasibility of leveraging these geometries to magnify the volume of tangible haloscope implementations.
Keywords: Axions and ALPs; Particle Nature of Dark Matter
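A note on why elongation pays off: for the TM010 mode of an ideal cylindrical cavity, the resonant frequency is fixed by the radius alone, so stretching the cavity grows the detection volume at constant frequency. A rough sketch with illustrative numbers (the 8.4 GHz target and the lengths are assumptions, not values from the paper):

```python
import math

C = 299_792_458.0           # speed of light, m/s
CHI_01 = 2.404825557695773  # first zero of the Bessel function J0

def tm010_frequency(radius_m):
    """Resonant frequency (Hz) of the TM010 mode of an ideal cylindrical cavity.

    Depends only on the radius, not the length, so elongating the cavity
    increases the volume without detuning the mode.
    """
    return C * CHI_01 / (2.0 * math.pi * radius_m)

def cavity_volume(radius_m, length_m):
    return math.pi * radius_m ** 2 * length_m

# Radius needed for a cavity tuned near an assumed 8.4 GHz
r = C * CHI_01 / (2.0 * math.pi * 8.4e9)
# A 10x longer cavity at the same radius holds 10x the volume at the same frequency
ratio = cavity_volume(r, 0.5) / cavity_volume(r, 0.05)
```

In practice mode crowding, form-factor degradation, and magnet bore geometry limit how far this scaling can be pushed, which is the trade-off the paper's elongated and multicavity designs address.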
|
Oliveira, C. A. B., Sorel, M., Martin-Albo, J., Gomez-Cadenas, J. J., Ferreira, A. L., & Veloso, J. F. C. A. (2011). Energy resolution studies for NEXT. J. Instrum., 6, P05007–13pp.
Abstract: This work presents the current state of simulations of electroluminescence (EL) produced in gas-based detectors, with special interest for NEXT – the Neutrino Experiment with a Xenon TPC. NEXT is a neutrinoless double beta decay experiment and thus needs outstanding energy resolution, which can be achieved by using electroluminescence. The process of light production is reviewed, and properties such as the EL yield and its associated fluctuations, the excitation and electroluminescence efficiencies, and the energy resolution are calculated. An EL production region with a 5 mm wide gap between two infinite parallel planes is considered, where a uniform electric field is produced. The pressure and temperature considered are 10 bar and 293 K, respectively. The results show that, even for low values of the VUV photon detection efficiency, good energy resolution can be achieved: below 0.4% (FWHM) at Qββ = 2.458 MeV.
Keywords: Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators); Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission etc); Large detector systems for particle and astroparticle physics; Time projection chambers
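For context on the quoted sub-0.4% FWHM figure, the irreducible Fano-limited floor of the resolution can be estimated from commonly quoted xenon parameters (W ≈ 21.9 eV per ionisation electron, Fano factor F ≈ 0.15; both are assumed illustrative values here, not numbers from the paper):

```python
import math

def intrinsic_fwhm(E_eV, W_eV=21.9, fano=0.15):
    """Fano-limited fractional energy resolution (FWHM) in gaseous xenon.

    n = E/W primary electrons; relative variance F/n; FWHM = 2.355 * sigma/mean.
    W and F are commonly quoted xenon figures, used here as assumptions.
    """
    n = E_eV / W_eV
    return 2.355 * math.sqrt(fano / n)

Q_bb = 2.458e6  # double-beta Q value of Xe-136, in eV
floor = intrinsic_fwhm(Q_bb)
```

This gives a floor of roughly 0.27% FWHM at Qββ; the sub-0.4% result in the abstract is consistent with this once EL yield fluctuations and photon-detection statistics are added on top.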
|
Carrasco, J., & Zurita, J. (2024). Emerging jet probes of strongly interacting dark sectors. J. High Energy Phys., 01(1), 034–23pp.
Abstract: A strongly interacting dark sector can give rise to a class of signatures dubbed dark showers, where, in analogy to the strong sector in the Standard Model, the dark sector undergoes its own showering and hadronization before decaying into Standard Model final states. When the typical decay lengths of the dark sector mesons are larger than a few centimeters (and no larger than a few meters), they give rise to the striking signature of emerging jets, characterized by a large multiplicity of displaced vertices. In this article we consider the general reinterpretation of the CMS search for emerging jets plus prompt jets in terms of arbitrary new physics scenarios giving rise to emerging jets. More concretely, we consider the cases where the SM Higgs mediates between the dark sector and the SM, for several benchmark decay scenarios. Our procedure is validated employing the same model as the CMS emerging jet search. We find that emerging jets can be the leading probe in regions of parameter space, in particular when considering the so-called gluon portal and dark photon portal decay benchmarks. With the current 16.1 fb⁻¹ of luminosity this search can exclude down to an O(20)% exotic branching ratio of the SM Higgs, but a naive extrapolation to the 139 fb⁻¹ luminosity employed in the current model-independent, indirect bound of 16% would probe exotic branching ratios into dark quarks down to below 10%. Further extrapolating these results to the HL-LHC, we find that one can pin down exotic branching ratio values of 1%, which is below the HL-LHC expectation of 2.5–4%. We make our recasting code publicly available as part of the LLP Recasting Repository.
|
Mendez, V., Amoros, G., Garcia, F., & Salt, J. (2010). Emergent algorithms for replica location and selection in data grid. Futur. Gener. Comp. Syst., 26(7), 934–946.
Abstract: Grid infrastructures for e-Science projects are growing in scale. Improvements in data Grid replication algorithms may be critical in many of these infrastructures. This paper presents a decentralized replica optimization service, providing a general Emergent Artificial Intelligence (EAI) algorithm for the problem definition. Our aim is to set up a theoretical framework for emergent heuristics in Grid environments. Further, we describe two EAI approaches, the Particle Swarm Optimization (PSO) Grid Multiswarm Federation and the Ant Colony Optimization (ACO) Grid Asynchronous Colonies Optimization replica optimization algorithms, with some examples. We also present extended results showing the best performance and scalability features for the PSO-Grid Multiswarm Federation.
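The paper's PSO-Grid Multiswarm Federation is not spelled out in the abstract; as a generic illustration of the particle swarm idea it builds on, here is a plain PSO minimising a hypothetical replica access-cost function (the cost model, weights, and site data are invented for the example, not taken from the paper):

```python
import random

random.seed(0)  # reproducible run for this sketch

def pso_minimise(cost, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Generic particle swarm optimisation (not the paper's PSO-Grid algorithm).

    Each particle remembers its personal best position; the swarm shares a
    global best that attracts all particles.
    """
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical replica-selection cost for 3 candidate storage sites:
# weighted latency + load, plus a penalty forcing the weights to sum to 1.
latency, load = [0.8, 0.3, 0.5], [0.2, 0.9, 0.1]

def cost(x):
    access = sum(xi * (0.6 * l + 0.4 * q) for xi, l, q in zip(x, latency, load))
    return access + 10.0 * (sum(x) - 1.0) ** 2

best, best_cost = pso_minimise(cost, dim=3)
```

The decentralized, multiswarm variant described in the paper partitions particles across Grid sites and federates their best-known solutions, rather than running a single global swarm as above.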
|
ATLAS Collaboration (Aad, G., et al.), Alvarez Piqueras, D., Aparisi Pozo, J. A., Bailey, A. J., Cabrera Urban, S., Castillo, F. L., et al. (2019). Electron and photon performance measurements with the ATLAS detector using the 2015–2017 LHC proton-proton collision data. J. Instrum., 14, P12006–69pp.
Abstract: This paper describes the reconstruction of electrons and photons with the ATLAS detector, employed for measurements and searches exploiting the complete LHC Run 2 dataset. An improved energy clustering algorithm is introduced, and its implications for the measurement and identification of prompt electrons and photons are discussed in detail. Corrections and calibrations that affect performance, including energy calibration, identification and isolation efficiencies, and the measurement of the charge of reconstructed electron candidates, are determined using up to 81 fb⁻¹ of proton-proton collision data collected at √s = 13 TeV between 2015 and 2017.
|
NEXT Collaboration (Henriques, C. A. O., et al.), Alvarez, V., Benlloch-Rodriguez, J. M., Botas, A., Carcel, S., Carrion, J. V., et al. (2019). Electroluminescence TPCs at the thermal diffusion limit. J. High Energy Phys., 01(1), 027–23pp.
Abstract: The NEXT experiment aims at searching for the hypothetical neutrinoless double-beta decay of the Xe-136 isotope using a high-purity xenon TPC. Efficient discrimination of the events through pattern recognition of the topology of primary ionisation tracks is a major requirement for the experiment; however, it is limited by the diffusion of electrons. It is known that the addition of a small fraction of a molecular gas to xenon reduces electron diffusion. On the other hand, the electroluminescence (EL) yield drops, and the achievable energy resolution may be compromised. We have studied the effect of adding several molecular gases to xenon (CO2, CH4 and CF4) on the EL yield and energy resolution obtained in a small prototype of a driftless gas proportional scintillation counter. We have compared our results on the scintillation characteristics (EL yield and energy resolution) with a microscopic simulation, obtaining the diffusion coefficients in those conditions as well. Accordingly, electron diffusion may be reduced from about 10 for pure xenon down to 2.5 using additive concentrations of about 0.05%, 0.2% and 0.02% for CO2, CH4 and CF4, respectively. Our results show that CF4 admixtures present the highest EL yield in those conditions, but very poor energy resolution as a result of the huge fluctuations observed in the EL formation. CH4 presents the best energy resolution despite having the lowest EL yield. The results obtained with xenon admixtures are extrapolated to the operational conditions of the NEXT-100 TPC. CO2 and CH4 show potential as molecular additives in a large xenon TPC. While CO2 has some operational constraints that make it difficult to use in a large TPC, CH4 shows the best performance and stability as a molecular additive for the NEXT-100 TPC, with an extrapolated energy resolution of 0.4% at 2.45 MeV for concentrations below 0.4%, only slightly worse than that obtained for pure xenon. We demonstrate the possibility of operating an electroluminescence TPC very close to the thermal diffusion limit without jeopardizing the TPC performance, if CO2 or CH4 is chosen as the additive.
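On the "thermal diffusion limit" in the title: via the Einstein relation D/μ = k_B·T/e, the transverse spread per unit square root of drift length at the thermal limit depends only on temperature and drift field. A sketch with an assumed drift field (diffusion in this context is commonly quoted in mm/√m; the abstract itself does not state the units or field):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def thermal_diffusion_limit(E_field_V_per_m, T_K=293.0):
    """Thermal-limit transverse diffusion per sqrt(metre of drift), in mm/sqrt(m).

    From the Einstein relation D/mu = k_B*T/e, the spread after drifting a
    length L under field E is sigma = sqrt(2*k_B*T*L/(e*E)), so
    sigma/sqrt(L) = sqrt(2*k_B*T/(e*E)). Illustrative, not from the paper.
    """
    return 1e3 * math.sqrt(2.0 * K_B * T_K / (Q_E * E_field_V_per_m))

# At an assumed drift field of 100 V/cm (= 1e4 V/m) and 293 K
sigma = thermal_diffusion_limit(1e4)
```

At this assumed field the limit comes out near 2.2 mm/√m, which gives a sense of the scale the title refers to.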
|
Schreeck, H., Paschen, B., Wieduwilt, P., Ahlburg, P., Andricek, L., Dingfelder, J., et al. (2020). Effects of gamma irradiation on DEPFET pixel sensors for the Belle II experiment. Nucl. Instrum. Methods Phys. Res. A, 959, 163522–9pp.
Abstract: For the Belle II experiment at KEK (Tsukuba, Japan) the KEKB accelerator was upgraded to deliver a 40 times larger instantaneous luminosity than before, which requires an increased radiation hardness of the detector components. As the innermost part of the Belle II detector, the pixel detector (PXD), based on DEPFET (DEpleted P-channel Field Effect Transistor) technology, is most exposed to radiation from the accelerator. An irradiation campaign was performed to verify that the PXD can cope with the expected amount of radiation. We present the results of this measurement campaign in which an X-ray machine was used to irradiate a single PXD half-ladder to a total dose of 266 kGy. The half-ladder is from the same batch as the half-ladders used for Belle II. According to simulations, the total accumulated dose corresponds to 7–10 years of Belle II operation. While individual components have been irradiated before, this campaign is the first full system irradiation. We discuss the effects on the DEPFET sensors, as well as the performance of the front-end electronics. In addition, we present efficiency studies of the half-ladder from beam tests performed before and after the irradiation.
Keywords: DEPFET; Radiation damage; Particle tracking detectors; Belle II
|
Calefice, L., Hennequin, A., Henry, L., Jashal, B. K., Mendoza, D., Oyanguren, A., et al. (2022). Effect of the high-level trigger for detecting long-lived particles at LHCb. Front. Big Data, 5, 1008737–13pp.
Abstract: Long-lived particles (LLPs) show up in many extensions of the Standard Model, but they are challenging to search for with current detectors due to their very displaced vertices. This study evaluated the ability of the trigger algorithms used in the Large Hadron Collider beauty (LHCb) experiment to detect long-lived particles and attempted to adapt them to enhance the sensitivity of this experiment to undiscovered long-lived particles. A model with a Higgs portal to a dark sector is tested, and the sensitivity reach is discussed. In the LHCb tracking system, the farthest tracking station from the collision point is the scintillating fiber tracker, the SciFi detector. One of the challenges in track reconstruction is to deal with the large number of hits in the LHCb detector and the resulting combinatorics. A dedicated algorithm has been developed to cope with the large data output. When fully implemented, this algorithm would greatly increase the available statistics for any long-lived particle search in the forward region and would additionally improve the sensitivity of analyses dealing with Standard Model particles of large lifetime, such as K0S or Λ0 hadrons.
Keywords: LHCb; trigger; real time analysis; long-lived particles; GPU; SciFi; beyond standard physics
|