Stoppa, F., Vreeswijk, P., Bloemen, S., Bhattacharyya, S., Caron, S., Johannesson, G., et al. (2022). AutoSourceID-Light: Fast optical source localization via U-Net and Laplacian of Gaussian. Astron. Astrophys., 662, A109 (8 pp).
Abstract: Aims. With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), a framework that uses computer vision techniques to deal naturally with large amounts of data and rapidly localize sources in optical images. Methods. We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding source-localization performance. A U-Net network separates the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter, which then estimates the exact locations. Results. Applying ASID-L to optical images from the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used code. ASID-L rapidly detects more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
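The Laplacian-of-Gaussian step lends itself to a compact illustration. The sketch below is not the ASID-L pipeline (which first runs a U-Net to separate sources from artifacts before the LoG filter); it only shows the localization idea on a toy image with two synthetic Gaussian sources, with the image size, sigma, and threshold chosen purely for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

# Toy "image": two Gaussian point sources on an empty background.
yy, xx = np.mgrid[0:64, 0:64]

def source(y0, x0, sigma=1.5):
    return np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2))

image = source(20, 15) + 0.8 * source(45, 50)

# LoG response: bright blobs become strong minima of the LoG, so negate it.
response = -gaussian_laplace(image, sigma=1.5)

# Source positions = local maxima of the response above a threshold.
peaks = (response == maximum_filter(response, size=5)) & (response > 0.1)
positions = np.argwhere(peaks)
print(positions)  # -> [[20 15], [45 50]]
```

Sub-pixel refinement (e.g. fitting the response around each peak) would follow in a real pipeline; here the sources sit on integer pixel centres by construction.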
Stoppa, F., Ruiz de Austri, R., Vreeswijk, P., Bhattacharyya, S., Caron, S., Bloemen, S., et al. (2023). AutoSourceID-FeatureExtractor: Optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation. Astron. Astrophys., 680, A108 (14 pp).
Abstract: Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximate locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties, without the need for additional information such as the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, predicts features more accurately than similar codes such as SourceExtractor, and that the two-step method estimates well-calibrated uncertainties that are better behaved than those of similar methods using deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
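The two-step idea can be shown in miniature. In this sketch both "networks" are simple linear models on 1-D data rather than the CNNs on 32x32 cutouts the paper uses: step 1 fits only the mean, step 2 freezes the mean and fits a heteroscedastic sigma(x) from the residuals via the Gaussian negative log-likelihood. All data, the sigma parametrisation, and the grid search (standing in for gradient descent) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 2000)
true_sigma = 0.2 + 0.1 * x                    # heteroscedastic noise level
y = 3.0 * x + 1.0 + rng.normal(0, true_sigma)

# Step 1: fit the mean alone with plain least squares (no variance yet).
A = np.stack([x, np.ones_like(x)], axis=1)
w_mean, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ w_mean

# Step 2: with the mean frozen, fit sigma(x) = a*x + b by minimising the
# Gaussian NLL: 0.5 * mean(log sigma^2 + r^2 / sigma^2).
def nll(params):
    a, b = params
    sig2 = np.maximum(a * x + b, 1e-6) ** 2
    return 0.5 * np.mean(np.log(sig2) + residuals ** 2 / sig2)

grid = [(a, b) for a in np.linspace(0.0, 0.3, 31)
               for b in np.linspace(0.0, 0.5, 51)]
a_hat, b_hat = min(grid, key=nll)
print(w_mean, a_hat, b_hat)  # slope ~3, intercept ~1, sigma ~0.1*x + 0.2
```

Decoupling the two fits is the point: a joint mean-variance fit can let a large predicted variance "excuse" a poor mean, whereas here the mean is settled before any variance is estimated.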
Stoppa, F., Bhattacharyya, S., Ruiz de Austri, R., Vreeswijk, P., Caron, S., Zaharijas, G., et al. (2023). AutoSourceID-Classifier: Star-galaxy classification using a convolutional neural network with spatial information. Astron. Astrophys., 680, A109 (16 pp).
Abstract: Aims. Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification's reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux alone drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. Methods. The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNN) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated, delivering precise and reliable results. Results. We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier that outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C's direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding in transient host identification and advancing deep-sky astronomy.
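Standard Platt scaling, of which the paper uses a modified variant, is simple enough to sketch: fit p = sigmoid(A*s + B) on held-out labels to map a classifier's raw scores s to calibrated probabilities. Below, deliberately overconfident synthetic scores (logits inflated 3x) are recalibrated with a small gradient descent on the cross-entropy; the data and learning rate are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(0, 2, 5000)                              # true log-odds
labels = (rng.random(5000) < 1 / (1 + np.exp(-z))).astype(float)
scores = 3.0 * z                                        # overconfident raw scores

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

# Fit A, B by gradient descent on the log loss (cross-entropy).
A, B = 1.0, 0.0
for _ in range(3000):
    p = sigmoid(A * scores + B)
    A -= 0.05 * np.mean((p - labels) * scores)
    B -= 0.05 * np.mean(p - labels)
print(A, B)  # A -> ~1/3, B -> ~0: the calibrated logit recovers z
```

Since the scores were the true log-odds scaled by 3, the fitted slope undoes exactly that inflation; on real CNN outputs the correction is learned the same way from a validation set.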
Lerendegui-Marco, J., Babiano-Suarez, V., Balibrea-Correa, J., Caballero, L., Calvo, D., Ladarescu, I., et al. (2024). Simultaneous Gamma-Neutron Vision device: a portable and versatile tool for nuclear inspections. EPJ Tech. Instrum., 11(1), 2 (17 pp).
Abstract: This work presents GN-Vision, a novel dual gamma-ray and neutron imaging system, which aims at simultaneously obtaining information about the spatial origin of gamma-ray and neutron sources. The proposed device is based on two position-sensitive detection planes and exploits the Compton imaging technique for the imaging of gamma-rays. In addition, spatial distributions of slow- and thermal-neutron sources (<100 eV) are reconstructed by using a passive neutron pin-hole collimator attached to the first detection plane. The proposed gamma-neutron imaging device could be of prime interest for nuclear safety and security applications. The two main advantages of this imaging system are its high efficiency and portability, making it well suited for nuclear applications where compactness and real-time imaging are important. This work presents the working principle and conceptual design of the GN-Vision system and explores, on the basis of Monte Carlo simulations, its simultaneous gamma-ray and neutron detection and imaging capabilities for a realistic scenario where a Cf-252 source is hidden in a neutron-moderating container.
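The pin-hole branch reduces to textbook projection geometry, which a few lines can illustrate. This is an idealised sketch, not the GN-Vision reconstruction: it assumes a point-like aperture at the origin with no collimator penetration, and the source positions and detector distance are invented for the example. A neutron emitted at (x, y, z) in front of the aperture strikes the detection plane, a distance d behind it, at (-x*d/z, -y*d/z), producing an inverted, scaled image.

```python
import numpy as np

d = 50.0  # pin-hole to detection-plane distance (mm), illustrative value

# Source positions (x, y, z) in front of the aperture, in mm.
sources = np.array([[ 30.0,  0.0, 500.0],
                    [-20.0, 10.0, 250.0]])

# Ideal pin-hole projection onto the detection plane behind the aperture:
# each source maps to (-x, -y) scaled by the magnification d/z.
hits = -sources[:, :2] * (d / sources[:, 2:3])
print(hits)  # first source maps to (-3, 0), second to (4, -2)
```

Inverting this mapping is what turns the measured hit pattern back into a spatial distribution of neutron emission; the magnification d/z is why the same device images nearer sources at a larger scale.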
Albiol, F., Corbi, A., & Albiol, A. (2019). Densitometric Radiographic Imaging With Contour Sensors. IEEE Access, 7, 18902–18914.
Abstract: We present the technical/physical foundations of a new imaging technique that combines ordinary radiographic information (generated by conventional X-ray settings) with the patient's volume to derive densitometric images. Traditionally, these images provide quantitative information about tissue densities. In our approach, they graphically enhance either soft or bony regions. After measuring the patient's volume with contour-recognition devices, the physical traversed lengths within it (as the Roentgen beam intersects the patient) are calculated and pixel-wise associated with the original radiograph (X). In order to derive this map of lengths (L), the camera equations of the X-ray system and the contour sensor are determined. The patient's surface is also translated to the point of view of the X-ray beam, and all its entrance/exit points are found with the help of ray-casting methods. The derived L is applied to X as a physical operation (subtraction), obtaining soft-tissue-enhanced (D_S) or bone-enhanced (D'_B) images. In the D_S type, the contained graphical information can be linearly mapped to the average electronic density traversed by the X-ray beam. This feature represents an interesting proof of concept of associating density data with radiographs; most importantly, the intensity histogram is objectively compressed, i.e., the dynamic range is narrower than that of the corresponding X. This leads to other advantages: improved visibility of border/edge (high-gradient) areas, extended manual window level/width manipulation during screening, and immediate correction of underexposed X instances. In the D'_B type, high-density elements are highlighted and easier to discern. All these results can be achieved with low-energy beam exposures, saving cost and dose. Future work will deepen the clinical side of our research. In contrast with other image-based modifiers, the proposed method is grounded on the measurement of a physical entity: the span of the X-ray beam within a body during a radiographic examination.
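The subtraction at the heart of the method can be sketched with a toy phantom. The example below replaces the paper's contour sensors and ray casting with an analytic parallel-beam chord length through a sphere, and all attenuation coefficients and radii are invented: in the log-attenuation domain X = mu_s*L_s + mu_b*L_b, so subtracting the geometric path-length map L scaled by the soft-tissue coefficient leaves only the excess attenuation of a dense, bone-like core.

```python
import numpy as np

mu_s, mu_b = 0.02, 0.06   # 1/mm, made-up attenuation coefficients
R, r = 80.0, 20.0         # soft-tissue sphere and bone-like core radii (mm)

yy, xx = np.mgrid[-100:100, -100:100].astype(float)
rho2 = xx ** 2 + yy ** 2  # squared lateral distance of each parallel ray

def chord(radius):
    # Chord length of a parallel ray through a centred sphere.
    return 2 * np.sqrt(np.maximum(radius ** 2 - rho2, 0.0))

L = chord(R)                                 # total traversed length map
L_bone = chord(r)                            # length inside the dense core
X = mu_s * (L - L_bone) + mu_b * L_bone      # log-domain radiograph

D = X - mu_s * L   # densitometric subtraction using the geometric L map
# D vanishes outside the core and equals (mu_b - mu_s) * L_bone inside it:
print(D[100, 100], D[0, 0])  # centre pixel ~1.6, corner exactly 0
```

In the paper, L comes from the measured patient contour rather than an analytic shape, but the pixel-wise subtraction against the radiograph is the same physical operation.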