Cavity Detection Tool (CADET)

Python package

The CADET pipeline has been released as a standalone Python3 package pycadet, which can be installed using pip:

$ pip3 install pycadet

or from source:

$ pip3 install git+https://github.com/tomasplsek/CADET.git

The pycadet package requires the following libraries (which should be installed automatically with the package):

numpy
scipy
astropy
matplotlib
pyds9
scikit-learn>=1.1
tensorflow>=2.8

For Conda environments, it is recommended to install the dependencies beforehand, as some of the packages (especially tensorflow) can be tricky to install into an existing environment and on some machines (especially new Macs). For machines with a dedicated NVIDIA GPU, tensorflow-gpu can be installed to allow the CADET model to leverage the GPU for faster inference.
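
A minimal sketch of such a setup (the environment name, Python version, and the conda-forge channel are illustrative choices, not requirements of the package) might look as follows:

$ conda create -n cadet python=3.10
$ conda activate cadet
$ conda install -c conda-forge numpy scipy astropy matplotlib scikit-learn
$ pip3 install tensorflow pycadet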

An example notebook showing how to use the pycadet package can be found here:

Open In Colab


DS9 Plugin

The CADET pipeline can also be used as a SAOImageDS9 plugin, which is installed together with the pycadet Python package. The CADET plugin requires that SAOImageDS9 is already installed on the system. To avoid conflicts (e.g. with the CIAO installation of DS9), it is recommended to install pycadet using a system installation of Python3 rather than a Conda environment.

After the installation, the CADET plugin should be available in the Analysis menu of DS9. Clicking the CADET option opens a new window where the user can set several options: whether the prediction should be averaged over multiple copies of the input image shifted by +/- 1 pixel (Shift), and whether the prediction should be decomposed into individual cavities (Decompose). When decomposing into individual cavities, the user can also set a pair of discrimination thresholds: the first (Threshold1) is used for volume error calibration and the second (Threshold2) for false positive rate calibration (for more info see Plšek et al. 2023).
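
As an illustration of what the Shift option does, the prediction is averaged over copies of the input image shifted by +/- 1 pixel. The following is only a schematic sketch of that idea, not the plugin's actual implementation; the function name is made up, and np.roll wraps pixels around the image edges:

import numpy as np

def shift_averaged_prediction(model, image):
    """Average predictions over copies of a 128x128 image shifted by +/- 1 pixel."""
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    preds = []
    for dy, dx in shifts:
        # shift the input, predict, and shift the prediction back before averaging
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        pred = model.predict(shifted.reshape(1, 128, 128, 1), verbose=0).squeeze()
        preds.append(np.roll(pred, shift=(-dy, -dx), axis=(0, 1)))
    return np.mean(preds, axis=0)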

If the CADET plugin does not appear in the Analysis menu, it can be added manually by opening Edit > Preferences > Analysis and adding the path to the DS9CADET.ds9.ans file (after the installation it should be located in ~/.ds9/). The plugin is inspired by the pyds9plugin library.

Online CADET interface

A simplified version of the CADET pipeline is available via a web interface hosted on HuggingFace Spaces. The input image should be centred on the galaxy centre and cropped to a square shape. It is also recommended to remove point sources from the image and fill them with the surrounding background level using Poisson statistics (dmfilth within CIAO). Furthermore, compared to the pycadet package, the web interface performs only a single thresholding of the raw pixel-wise prediction, which is easily adjustable using a slider.
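
For the point-source removal step, a possible CIAO invocation (the file and region names are placeholders, and the exact parameters should be checked against the dmfilth documentation) might look like:

$ dmfilth infile=img.fits outfile=img_filled.fits method=POISSON \
    srclist=sources.reg bkglist=background.reg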

HuggingFace web interface

Convolutional part

The convolutional part of the pipeline can be used separately to produce raw pixel-wise predictions. Since the convolutional network was implemented using the functional Keras API, the architecture is stored together with the trained weights in the HDF5 format (CADET.hdf5). The trained model can then simply be loaded using the load_model TensorFlow function:

from tensorflow.keras.models import load_model

model = load_model("CADET.hdf5")

y_pred = model.predict(X)

The raw CADET model only accepts 128x128 pixel images as input. Furthermore, to maintain compatibility with Keras, the input needs to be reshaped as X.reshape(1, 128, 128, 1) for a single image or as X.reshape(-1, 128, 128, 1) for multiple images.
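
For example, a single 128x128 image stored in a FITS file (the file name below is a placeholder) could be prepared and passed to the model as follows; the final squeeze only drops the batch and channel dimensions of the returned prediction:

import numpy as np
from astropy.io import fits
from tensorflow.keras.models import load_model

model = load_model("CADET.hdf5")

# load a single 128x128 image and add the batch and channel dimensions
image = fits.getdata("galaxy_cutout.fits").astype(np.float32)   # shape (128, 128)
X = image.reshape(1, 128, 128, 1)

# raw pixel-wise prediction; squeeze drops the batch and channel dimensions
y_pred = model.predict(X).squeeze()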

Alternatively, the CADET model can be imported from HuggingFace's model hub:

from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("Plsek/CADET-v1")

y_pred = model.predict(X)
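
The raw pixel-wise prediction can then be turned into cavity regions by simple thresholding. Below is a minimal sketch using scipy; the 0.5 threshold is only an illustrative value, not one of the calibrated thresholds discussed in Plšek et al. 2023, and y_pred is assumed to be a 2D 128x128 prediction as in the examples above:

from scipy import ndimage

# threshold the raw pixel-wise prediction to obtain a binary cavity mask
mask = y_pred > 0.5

# decompose the mask into individual connected components (cavity candidates)
labels, n_cavities = ndimage.label(mask)
print(f"Found {n_cavities} cavity candidates")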