Rivard, M. J., Granero, D., Perez-Calatayud, J., & Ballester, F. (2010). Influence of photon energy spectra from brachytherapy sources on Monte Carlo simulations of kerma and dose rates in water and air. Med. Phys., 37(2), 869–876.
Abstract: Methods: For Ir-192, I-125, and Pd-103, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of Ir-192, I-125, and Pd-103 spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for Ir-192, I-125, and Pd-103, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
Keywords: biomedical materials; brachytherapy; dosimetry; iodine; iridium; Monte Carlo methods; palladium; radioisotopes
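The compensation described in the conclusions can be shown with a toy calculation (hypothetical numbers, not the paper's data): a spectrum change that rescales the photon yield per decay scales the water dose rate and the air-kerma strength by nearly the same factor, so their ratio, the dose-rate constant, is left essentially unchanged.

```python
# Toy illustration with hypothetical numbers: a ~2% change in photon
# yield per Bq rescales both dose rate and air-kerma strength, so the
# dose-rate constant (their ratio) is essentially invariant.
dose_rate_a, sk_a = 1.110, 1.000          # spectrum A (arbitrary units)
scale = 1.02                              # ~2% yield difference
dose_rate_b, sk_b = dose_rate_a * scale, sk_a * scale

lam_a = dose_rate_a / sk_a                # dose-rate constant, spectrum A
lam_b = dose_rate_b / sk_b                # dose-rate constant, spectrum B
print(abs(lam_a - lam_b) < 1e-9)          # the common factor cancels
```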
|
Ballester, F., Tedgren, A. C., Granero, D., Haworth, A., Mourtada, F., Fonseca, G. P., et al. (2015). A generic high-dose rate Ir-192 brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism. Med. Phys., 42(6), 3048–3062.
Abstract: Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) Ir-192 source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR Ir-192 source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic Ir-192 source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra (R) Brachy with advanced collapsed-cone engine (ACE) and BrachyVision AcuRos (TM)]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR Ir-192 source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators.
MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 +/- 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR Ir-192 source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.
|
Arnalte-Mur, P., Labatie, A., Clerc, N., Martinez, V. J., Starck, J. L., Lachieze-Rey, M., et al. (2012). Wavelet analysis of baryon acoustic structures in the galaxy distribution. Astron. Astrophys., 542, A34–11pp.
Abstract: Context. Baryon acoustic oscillations (BAO) are imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe. Aims. The BAO have been previously detected using correlation functions and power spectra of the galaxy distribution. We present a new method to detect the real-space structures associated with BAO. These baryon acoustic structures are spherical shells of relatively small density contrast, surrounding high density central regions. Methods. We design a specific wavelet adapted to search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the Lambda – cold dark matter (Lambda CDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological N-body simulations from the MareNostrum Institut de Ciencies de l'Espai (MICE). Results. We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the “main” catalogue to trace the shells, and the luminous red galaxies (LRG) as tracers of the high density central regions. Using this new method, we detect, with a high significance, that the LRG in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAO. We show that, by stacking selected shells, we can find their characteristic density profile. Conclusions. We delineate a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells.
|
ANTARES Collaboration (Adrian-Martinez, S., et al.), Barrios-Marti, J., Bigongiari, C., Emanuele, U., Gomez-Gonzalez, J. P., Hernandez-Rey, J. J., et al. (2013). Search for muon neutrinos from gamma-ray bursts with the ANTARES neutrino telescope using 2008 to 2011 data. Astron. Astrophys., 559, A9–11pp.
Abstract: Aims. We search for muon neutrinos in coincidence with GRBs with the ANTARES neutrino detector using data from the end of 2007 to 2011. Methods. Expected neutrino fluxes were calculated for each burst individually. The most recent numerical calculations of the spectra using the NeuCosmA code were employed, which include Monte Carlo simulations of the full underlying photohadronic interaction processes. The discovery probability for a selection of 296 GRBs in the given period was optimised using an extended maximum-likelihood strategy. Results. No significant excess over background is found in the data, and 90% confidence level upper limits are placed on the total expected flux according to the model.
Keywords: neutrinos; gamma-ray burst: general; methods: numerical
|
Panes, B., Eckner, C., Hendriks, L., Caron, S., Dijkstra, K., Johannesson, G., et al. (2021). Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge. Astron. Astrophys., 656, A62–18pp.
Abstract: Context. At GeV energies, the sky is dominated by the interstellar emission from the Galaxy. With limited statistics and spatial resolution, accurately separating point sources is therefore challenging. Aims. Here we present the first application of deep learning based algorithms to automatically detect and classify point sources from gamma-ray data. For concreteness we refer to this approach as AutoSourceID. Methods. To detect point sources, we utilized U-shaped convolutional networks for image segmentation and k-means for source clustering and localization. We also explored the Centroid-Net algorithm, which is designed to find and count objects. Using two algorithms allows for a cross check of the results, while a combination of their results can be used to improve performance. The training data are based on 9.5 years of exposure from the Fermi Large Area Telescope (Fermi-LAT), and we used source properties of active galactic nuclei (AGNs) and pulsars (PSRs) from the fourth Fermi-LAT source catalog in addition to several models of background interstellar emission. The results of the localization algorithm are fed into a classification neural network that is trained to separate the three general source classes (AGNs, PSRs, and FAKE sources). Results. We compared our localization algorithms qualitatively with traditional methods and found them to have similar detection thresholds. We also demonstrate the robustness of our source localization algorithms to modifications in the interstellar emission models, which presents a clear advantage over traditional methods. The classification network is able to discriminate between the three classes with a typical accuracy of ~70%, as long as balanced data sets are used in classification training. We published our training data sets and analysis scripts online and invite the community to join the data challenge aimed at improving the localization and classification of gamma-ray point sources.
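The clustering-and-localization step described in the Methods can be sketched roughly as follows (the U-Net itself is omitted; the synthetic map, thresholds, and function names are illustrative assumptions, not the paper's code): pixels flagged by the segmentation network are grouped with k-means, and the cluster centroids are taken as source positions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_sources(seg_map, n_sources, threshold=0.5, seed=0):
    """Group above-threshold pixels of a segmentation map with k-means;
    the cluster centroids serve as estimated source positions."""
    ys, xs = np.nonzero(seg_map > threshold)
    pts = np.column_stack([ys, xs]).astype(float)
    centroids, _ = kmeans2(pts, n_sources, minit='++', seed=seed)
    return centroids

# Synthetic segmentation map with two well-separated "sources"
seg = np.zeros((40, 40))
seg[8:12, 8:12] = 1.0      # blob centred near (9.5, 9.5)
seg[28:32, 28:32] = 1.0    # blob centred near (29.5, 29.5)
print(sorted(cluster_sources(seg, 2).tolist()))
```

In practice the number of clusters is not known in advance; the paper pairs this with a second algorithm (Centroid-Net) precisely to cross-check counts and positions.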
|
Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109–8pp.
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients when they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques that can naturally deal with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm based on U-shaped networks and enhanced with a Laplacian of Gaussian filter provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter that then estimates the exact location. Results. Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
Keywords: astronomical databases: miscellaneous; methods: data analysis; stars: imaging; techniques: image processing
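A minimal sketch of the Laplacian-of-Gaussian stage (assuming a probability map already produced by the U-Net; the sigma, window size, and threshold below are illustrative choices, not the paper's settings): the LoG response is negated so blob-like sources become maxima, and local maxima above a threshold are reported as positions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_peaks(prob_map, sigma=2.0, threshold=0.05):
    """Localize blob-like sources in a probability map: negate the
    Laplacian-of-Gaussian response so blobs become maxima, then keep
    local maxima above a threshold."""
    response = -gaussian_laplace(prob_map, sigma=sigma)
    is_peak = (response == maximum_filter(response, size=5))
    ys, xs = np.nonzero(is_peak & (response > threshold))
    return list(zip(ys.tolist(), xs.tolist()))

# Synthetic 32x32 cutout with one Gaussian source injected at (16, 16)
yy, xx = np.mgrid[0:32, 0:32]
cutout = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / (2 * 2.0 ** 2))
print(log_peaks(cutout))   # a single peak at the injected position
```

The division of labour mirrors the abstract: the network suppresses artifacts and the filter supplies the precise location estimate.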
|
HAWC Collaboration (Alfaro, R., et al.), & Salesa Greus, F. (2022). Validation of standardized data formats and tools for ground-level particle-based gamma-ray observatories. Astron. Astrophys., 667, A36–12pp.
Abstract: Context. Ground-based gamma-ray astronomy is still a rather young field of research, with strong historical connections to particle physics. This is why most observations are conducted by experiments with proprietary data and analysis software, as is usual in the particle physics field. However, in recent years, this paradigm has been slowly shifting toward the development and use of open-source data formats and tools, driven by upcoming observatories such as the Cherenkov Telescope Array (CTA). In this context, a community-driven, shared data format (the gamma-astro-data-format, or GADF) and analysis tools such as Gammapy and ctools have been developed. So far, these efforts have been led by the Imaging Atmospheric Cherenkov Telescope community, leaving out other types of ground-based gamma-ray instruments. Aims. We aim to show that the data from ground particle arrays, such as the High-Altitude Water Cherenkov (HAWC) observatory, are also compatible with the GADF and can thus be fully analyzed using the related tools, in this case, Gammapy. Methods. We reproduced several published HAWC results using Gammapy and data products compliant with the GADF standard. We also illustrate the capabilities of the shared format and tools by producing a joint fit of the Crab spectrum including data from six different gamma-ray experiments. Results. We find excellent agreement with the reference results, a powerful confirmation of both the published results and the tools involved. Conclusions. The data from particle detector arrays such as the HAWC observatory can be adapted to the GADF and thus analyzed with Gammapy. A common data format and shared analysis tools allow multi-instrument joint analysis and effective data sharing. To emphasize this, a sample of Crab nebula event lists is made public with this paper.
Because of the complementary nature of pointing and wide-field instruments, this synergy will be distinctly beneficial for the joint scientific exploitation of future observatories such as the Southern Wide-field Gamma-ray Observatory and CTA.
Keywords: methods: data analysis; gamma rays: general
|
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108–14pp.
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximated locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features with respect to similar codes such as SourceExtractor and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
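The two-step mean variance estimation idea can be sketched in miniature (a linear model on synthetic 1D data stands in for the network; all names and numbers here are illustrative): first the mean predictor is fit on its own, then, with the mean frozen, the variance is fit by minimising the Gaussian negative log-likelihood. For a constant variance that minimiser is simply the mean squared residual.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(500, 1))
y = 3.0 * x[:, 0] + rng.normal(0.0, 0.5, size=500)   # noise std 0.5

# Step 1: fit the mean predictor alone (least squares stands in
# for the regression network).
w = np.linalg.lstsq(x, y, rcond=None)[0]
mean = x @ w

# Step 2: freeze the mean and fit the variance by minimising the
# Gaussian negative log-likelihood; for a constant variance the
# minimiser is the mean squared residual.
var = float(np.mean((y - mean) ** 2))

def gaussian_nll(y, mean, var):
    """Per-sample heteroscedastic Gaussian negative log-likelihood
    (constant terms dropped)."""
    return 0.5 * (np.log(var) + (y - mean) ** 2 / var)

print(var)   # close to the true noise variance 0.25
```

In the full TS-MVE setting the variance is itself input-dependent and predicted by a second network head, but the split training schedule is the same: mean first, uncertainty second.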
|
Stoppa, F., Bhattacharyya, S., Ruiz de Austri, R., Vreeswijk, P., Caron, S., Zaharijas, G., et al. (2023). AutoSourceID-Classifier: Star-galaxy classification using a convolutional neural network with spatial information. Astron. Astrophys., 680, A109–16pp.
Abstract: Aims. Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification's reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. Methods. The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNN) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated, delivering precise and reliable results. Results. We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier and outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C's direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding in transient host identification and advancing deep-sky astronomy.
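As a reference point for the calibration step, here is textbook Platt scaling on synthetic data (the paper's "modified" variant is not specified in the abstract, so this is the standard form with illustrative numbers): a two-parameter logistic map p = sigmoid(a·s + b) is fit to held-out scores by gradient descent on the binary log-loss.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=5000):
    """Fit p = sigmoid(a*s + b) to held-out scores and 0/1 labels
    by gradient descent on the binary log-loss."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        resid = p - labels          # gradient of log-loss w.r.t. logits
        a -= lr * np.mean(resid * scores)
        b -= lr * np.mean(resid)
    return a, b

# Synthetic overconfident classifier: true P(y=1) = sigmoid(s), but the
# reported score is 3*s, so raw probabilities are too extreme.
rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, size=4000)
y = (rng.uniform(size=4000) < 1.0 / (1.0 + np.exp(-s))).astype(float)
a, b = platt_scale(3.0 * s, y)
print(a, b)   # a recovers roughly 1/3, undoing the factor-3 overconfidence
```

Calibration like this changes the probabilities, not the ranking, which is why it can be applied after the CNN is trained without retraining it.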
|
Aguilar, A. C., De Soto, F., Ferreira, M. N., Papavassiliou, J., Pinto-Gomez, F., Roberts, C. D., et al. (2023). Schwinger mechanism for gluons from lattice QCD. Phys. Lett. B, 841, 137906–8pp.
Abstract: Continuum and lattice analyses have revealed the existence of a mass-scale in the gluon two-point Schwinger function. It has long been conjectured that this expresses the action of a Schwinger mechanism for gauge boson mass generation in quantum chromodynamics (QCD). For such to be true, it is necessary and sufficient that a dynamically-generated, massless, colour-carrying, scalar gluon+gluon correlation emerges as a feature of the dressed three-gluon vertex. Working with results on elementary Schwinger functions obtained via the numerical simulation of lattice-regularised QCD, we establish with an extremely high level of confidence that just such a feature appears; hence, confirm the conjectured origin of the gluon mass scale.
|