Tutorial: How to fine-tune a Cellpose model with the Active Learning plugin for napari

Tutorial for fine-tuning the “nuclei” model from cellpose on new data.
Cellpose
Transfer learning
Author

Fernando Cervantes

1. Install napari and the Active Learning plugin


1.1. Install napari

Follow the instructions to install napari from its official website.

1.2. Install the napari-activelearning plugin using pip

Important

If you installed napari using conda, activate that same environment before installing the Active Learning plugin.

Install the plugin, adding [cellpose] to the command so that all the dependencies required for this tutorial are available.

python -m pip install "napari-activelearning[cellpose]"

1.3. Launch napari

From a console, open napari with the following command:

napari
Caution

The first time napari is launched it can take more than \(30\) seconds to open, so please be patient!

2. Image groups management

2.1. Load a sample image

You can use the cells 3D image sample from napari’s built-in samples.

File > Open Sample > napari builtins > Cells (3D+2Ch)

2.2. Add the Image Groups Manager widget to napari’s window

The Image groups manager can be found under the Active Learning plugin in napari’s plugins menu.

Plugins > Active Learning > Image groups manager

The Image groups manager widget is used to create groups of layers and define their specific use within the Active Learning plugin. It lets you specify properties and metadata of those layers (e.g. input paths, output directory, axes order, etc.) that are used by the Acquisition function configuration widget.

This widget relates different layers through a single hierarchical structure (an image group). That hierarchy is implemented as follows:

  • Image groups. Used to contain different modes of data from a single object, such as image pixels, labels, masks, etc.

    • Layers groups. These group multiple layers of the same mode together, e.g. different channel layers of the same image.

      • Layer channels. These are directly related to elements in napari’s layer list. Moreover, layer channels can store additional information or metadata useful for the inference and fine-tuning processes.
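The hierarchy above can be sketched as nested plain-Python data. This is a conceptual illustration only; the names and dict layout here are illustrative, not the plugin’s actual internal classes:

```python
# Conceptual sketch of the image group hierarchy (illustrative only; the
# plugin's real implementation uses its own classes, not plain dicts).
image_group = {
    "name": "images",                # image group: one object, several data modes
    "layers_groups": {
        "images": {                  # layers group: layers of the same mode
            "use": "inputs",
            "layer_channels": [      # layer channels: napari layers + metadata
                {"layer": "nuclei", "channel": 0, "source_axes": "ZYX"},
                {"layer": "membrane", "channel": 1, "source_axes": "ZYX"},
            ],
        },
    },
}

# Each layer channel maps back to an entry in napari's layer list.
channel_names = [
    ch["layer"]
    for ch in image_group["layers_groups"]["images"]["layer_channels"]
]
print(channel_names)  # ['nuclei', 'membrane']
```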

2.3. Create an Image Group containing nuclei and membrane layers

  1. Select the nuclei and membrane layers.

  2. Click the New Image Group button on the Image Groups Manager widget.

2.4. Edit the image group properties

  1. Select the “images” layers group that has been created under the new image group.

  2. Click the Edit group properties checkbox.

  3. Make sure that Axes order is “CZYX”; otherwise, edit it and press Enter to update the axes names.
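If the two channel layers were stacked into a single array, “CZYX” would describe its axes. The sketch below uses NumPy stand-ins for the sample’s layers; the shapes (60 Z-slices of \(256\times256\) pixels per channel) are assumed from the Cells (3D+2Ch) sample:

```python
import numpy as np

# Hypothetical stand-ins for the "nuclei" and "membrane" layers of the
# Cells (3D+2Ch) sample, each with "ZYX" axes: 60 slices of 256x256 pixels.
nuclei = np.zeros((60, 256, 256))
membrane = np.zeros((60, 256, 256))

# Stacking the two channel layers along a new leading axis yields the
# "CZYX" order declared in the image group properties.
stacked = np.stack([nuclei, membrane], axis=0)
print(stacked.shape)  # (2, 60, 256, 256) -> C=2, Z=60, Y=256, X=256
```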

3. Segmentation on image groups

3.1. Add the Acquisition function configuration widget to napari’s window

The Acquisition function configuration is under the Active Learning plugin in napari’s plugins menu.

Plugins > Active Learning > Acquisition function configuration
Tip

All Active Learning widgets can be un-docked from their current place and re-docked into another, more convenient location within napari’s window, or even arranged as tabs, as illustrated in this tutorial.

The Acquisition function configuration widget drives the inference and fine-tuning processes of deep learning models.

This tool lets you define the sampling configuration used to retrieve patches from the input image at random. That configuration involves the size of the patches in each dimension, the maximum number of patches that can be sampled from each image, and the order of the axes that are passed to the model’s inference function. Parameters of the model itself can also be configured within this widget.

The fine-tuning process can also be configured and launched from within this widget. Finally, this widget executes the inference process on the same sampled patches to compare the base and fine-tuned model performance visually.

3.2. Define sampling configuration

  1. Make sure “Input axes” is set to “ZYX”.

This enables editing the patch size for the selected spatial axes.

For example, the “Cells (3D+2Ch)” sample image is three-dimensional, with “ZYX” spatial axes, and has two color channels. For that image we will extract two-dimensional patches of size \(256\times256\) pixels, in the “X” and “Y” spatial axes.

Therefore, “Input axes”=“ZYX” allows setting the patch size of spatial axes “Y” and “X” to \(256\) pixels, and of “Z” to \(1\), in order to extract one-slice-deep patches.

  2. Change the “Model axes” to “CYX”.

This, on the other hand, specifies the order our model expects to receive its inputs.

In this case, the “nuclei” model from cellpose will be applied to two-dimensional images. Because the “Z” axis is not used, we drop it from the “Model axes” string. This is only possible because the “Z” axis is already set to \(1\) slice deep and can be squeezed out of the patch.

Additionally, we append the “C” (color channel) axis to the beginning of the string. This means the patch axes will be permuted so that the color channel becomes the leading axis.
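The squeeze-and-permute step can be sketched with NumPy. The “ZYXC” storage order of the sampled patch is an assumption made for illustration; only the resulting “CYX” order matches the widget setting:

```python
import numpy as np

# Hypothetical sampled patch with "ZYXC" axes: 1 slice, 256x256 pixels,
# 2 channels (the storage order here is assumed for illustration).
patch = np.zeros((1, 256, 256, 2))

# Permute to "CYX": move the channel axis to the front and squeeze the
# single-slice "Z" axis, which is only possible because its size is 1.
model_input = np.moveaxis(patch, -1, 0).squeeze(axis=1)
print(model_input.shape)  # (2, 256, 256) -> C=2, Y=256, X=256
```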

3.3. Set the size of the sampling patch

  1. Click the “Edit patch size” checkbox
  2. Change the patch size of “X” and “Y” to 256, and the “Z” axis to 1.
Note

This directs the Active Learning plugin to sample, at random, patches of size \(256\times256\) pixels and \(1\) slice deep.

3.4. Define the maximum number of samples to extract

  • Set the “Maximum samples” to \(4\) and press Enter
Note

This tells the Active Learning plugin to process at most four samples at random from the whole image.

Because each slice of the image is \(256\times256\) pixels, the same size as the patch, four whole slices will be sampled at random for segmentation.
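The sampling behavior described above can be sketched as drawing slice indices without replacement. The slice count of \(60\) is assumed from the Cells (3D+2Ch) sample, and the code is a sketch of the idea, not the plugin’s actual sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# With a patch of 256x256 pixels and 1 slice deep, each sample covers one
# whole slice, so "Maximum samples" = 4 amounts to picking four Z-slices
# at random (sketch only; not the plugin's actual sampling code).
n_slices, max_samples = 60, 4
sampled_z = rng.choice(n_slices, size=max_samples, replace=False)
print(sorted(sampled_z))
```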

3.5. Configure the segmentation method

  1. Use the “Model” dropdown list to select the cellpose method

  2. Click the “Advanced segmentation parameters” checkbox

  3. Change the “Channel axis” to \(0\)

Note

This makes cellpose use the first axis as the “Color” channel, matching “Model axes”=“CYX” set earlier.

  4. Change the second channel to \(1\) (the right spin box in the “channels” row)
Note

This tells cellpose to segment the first channel (\(0\)) and use the second channel (\(1\)) as an auxiliary channel.

  5. Choose the “nuclei” model from the dropdown list
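The channel convention above can be sketched with NumPy: the channel axis index tells the model where to find the channels, and the channel pair selects which one to segment and which one assists. This is a sketch of the convention only, not cellpose’s actual API:

```python
import numpy as np

# "CYX" model input with two channels (channel axis = 0).
model_input = np.zeros((2, 256, 256))

# With channels (0, 1), channel 0 is segmented and channel 1 serves as an
# auxiliary channel (sketch of the convention only).
channel_axis, seg_channel, aux_channel = 0, 0, 1
to_segment = np.take(model_input, seg_channel, axis=channel_axis)
auxiliary = np.take(model_input, aux_channel, axis=channel_axis)
print(to_segment.shape, auxiliary.shape)  # (256, 256) (256, 256)
```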

3.6. Execute the segmentation method on all image groups

  • Click the “Run on all image groups” button
Note

To execute the segmentation only on specific image groups, select the desired image groups in the Image groups manager widget and use the “Run on selected image groups” button instead.

3.7. Inspect the segmentation layer

  • We will only see four segmented slices from the whole image. This is because we set the “Maximum samples” parameter to \(4\) in Section 3.4.
Note

Because the input image is 3D, you may have to slide the “Z” index at the bottom of napari’s window to look at the samples that were segmented.

Three new layers were added to napari’s layers list after executing the segmentation pipeline.

The “images acquisition function” layer presents a map of the confidence of the predictions made by the chosen model for each sampled patch. Lighter intensities correspond to low-confidence predictions, i.e. the regions where the annotator should focus when correcting the output labels. Darker intensities, on the other hand, correspond to high-confidence predictions; correcting those labels would not improve the fine-tuned model’s performance as much as correcting the low-confidence (lighter) regions.

The “images sampled positions” layer shows a low-resolution map of the regions that were sampled during the segmentation process.

The “images segmentation” layer contains the output labels predicted by the model.

Note that the names of these three generated layers contain the name of the image group from which the patches were sampled, in this case the “images” image group.

4. Segment masked regions only

4.1. Create a mask to restrict the sampling space

  1. Switch to the Image groups manager tab

  2. Click the “Edit mask properties” checkbox

  3. Set the mask scale to \(256\) for the “X” and “Y” axes, and a scale of \(1\) for the “Z” axis

  4. Click the “Create mask” button

Note

This creates a low-resolution mask where each of its pixels corresponds to a \(256\times256\)-pixel region in the input image. Because the mask is low-resolution, it uses less space in memory and on disk.
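The memory saving is easy to see with a NumPy sketch. The image shape of \(60\) slices of \(256\times256\) pixels is assumed from the sample, and the marked slices match the ones drawn later in this section:

```python
import numpy as np

# Low-resolution mask for a (60, 256, 256) "ZYX" image: with a scale of
# 256 in "Y" and "X" and 1 in "Z", one mask pixel covers a whole
# 256x256-pixel slice, so the mask needs only one entry per Z-slice.
mask = np.zeros((60, 1, 1), dtype=bool)
mask[27:31] = True  # mark slices 27-30 as samplable

print(mask.size)  # 60 entries instead of 60 * 256 * 256
```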

4.2. Specify the samplable regions

  1. Make sure the newly created layer “images mask” is selected.

  2. Activate the fill bucket tool (or press the 4 or F key on the keyboard as a shortcut).

  3. Click the image to draw the mask on the current slice.

  4. Move the slider at the bottom of napari’s window to navigate between slices in the “Z” axis. Select slice \(28\) and draw on it as in step \(3\). Repeat with slices \(29\) and \(30\).

4.3. Execute the segmentation process on the masked regions

  • Return to the Acquisition function configuration tab and run the segmentation process again (clicking “Run on all image groups” button).

4.4. Inspect the results from the segmentation applied only to masked regions

  • All slices between \(27\) and \(30\) were picked and segmented thanks to the sampling mask!
Note

At this point, multiple layers have been added to napari’s layers list and it can start to look cluttered.

The following layers will be used in the remainder of this tutorial: “membrane”, “nuclei”, “images mask”, “images sampled positions [1]”, and “images segmentation [1]”.

The rest of the layers can be safely removed from the list (i.e. “images acquisition function [1]”, “images acquisition function”, “images sampled positions”, and “images segmentation”).

5. Fine tune the segmentation model

5.1. Add the Label groups manager widget to napari’s window

You can find the Label groups manager under the Active Learning plugin in napari’s plugins menu.

Plugins > Active Learning > Label groups manager

The Label groups manager widget contains the coordinates of the sampled patches acquired by the Acquisition function configuration widget.

This widget allows navigating between patches without searching for them in the complete extent of their corresponding image. It additionally permits editing the pixel data of the selected patch’s labels in order to correct the inference output in a human-in-the-loop approach.

5.3. Use napari’s layer controls to make changes on the objects of the current patch

  1. Select the label associated with the patch extracted at slice \(27\). That label can be found by looking for the coordinate \((0, 0, 27, 0, 0)\) in the “Sampling top-left” column of the label groups manager table.
Note

This creates a new editable “Labels edit” layer with a copy of the selected patch.

  2. Make sure the “Labels edit” layer is selected.

  3. Edit the labels on the current patch with napari’s annotation tools:

  1. Select the pick mode tool (click the dropper icon or press key 5 or L on the keyboard) and click on an object in the current patch to get its label index.

  2. Use the paint brush (click the brush icon or press key 2 or P on the keyboard) and add pixels missed by the model.

  3. Remove extra pixels with the label eraser tool (click the eraser icon or press key 1 or E on the keyboard).

  4. If an object was not segmented at all, press the M key on the keyboard to get the next unused label index, and use the paint brush as in step \(2\) to cover the pixels of that object.

For a more complete guide on annotation with napari follow this tutorial.

  4. Click the “Commit changes” button when finished editing.

5.4. Edit labels on other slices

  • Continue editing the labels of the other patches until they are close to what the model is expected to predict.
Important

The cellpose model used in this tutorial does not work with sparse labeling, so it’s important that all objects of interest are labeled.

Additionally, the performance of the fine-tuned model will depend highly on how precisely the objects were labeled.

5.5. Select the layer group that will be used as labels for fine-tuning

  1. Go to the image groups manager widget and select the “segmentation (1)” layers group.
Tip

The “Group name” column in the groups table can be resized to show the complete names.

  2. Open the Edit group properties view (by clicking its checkbox) and tick the “Use as labels” checkbox.
Note

Any layers group can be used as labels for the fine-tuning process; just make sure the layer type is appropriate for the training workflow of the model being fine-tuned.

In this tutorial, the labels must be of napari.layers.Labels type.

5.6. Setup fine tuning configuration

  1. Go to the Acquisition function configuration widget and click the “Advanced fine tuning parameters” checkbox

  2. Change the “save path” to a location where you want to store the fine tuned model


  3. Change the “model name” to “nuclei_ft”
Tip

Scroll the Advanced fine tuning parameters widget down to show more parameters.

  4. Set the “batch size” to \(3\)

  5. Change the “learning rate” to \(0.0001\)

Tip

You can modify other parameters for the training process here, such as the number of training epochs.
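The settings entered in this section can be summarized as a small configuration sketch. The dictionary keys are illustrative, not the widget’s exact field names; only the values come from the steps above:

```python
# Fine-tuning settings used in this tutorial (values from the steps above;
# the key names are illustrative, not the widget's exact field names).
fine_tuning_config = {
    "model_name": "nuclei_ft",
    "batch_size": 3,
    "learning_rate": 1e-4,
}
print(fine_tuning_config["model_name"])  # nuclei_ft
```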

5.7. Execute the fine tuning process

Because spin boxes in this widget can be modified by hovering and scrolling with the mouse, their values can change inadvertently while scrolling down the parameters window.

Make sure that the parameters are the same as those shown in the following screenshot, particularly the size of the crops used for data augmentation (bsize) and the number of images per epoch (nimg per epoch).

  • Click the “Fine tune model” button to run the training process.
Caution

Depending on your computer’s resources (RAM, CPU), this process might take some minutes to complete. With a dedicated GPU device, it can take just a couple of seconds instead.

5.8. Review the fine tuned segmentation

Tip

Use the opacity slider to compare how the fine-tuned model segments the same objects that were labeled for training.

6. Use the fine tuned model for inference

6.1. Create a mask to apply the fine tuned model for inference

  1. Follow the steps from Section 4 to create a mask, now covering slices \(31\) to \(34\).

  2. Scroll down the Image groups manager table to show the newest mask layers group (“mask (1)”).

  3. Select the “mask (1)” layer group

  4. Open the Edit group properties (by clicking its checkbox) and make sure that “Use as sampling mask” checkbox is checked.

  5. Go to the Acquisition function configuration widget again and click “Run on all image groups” button.

6.2. Inspect the output of the fine tuned model

  • Now you have a fine tuned model for nuclei segmentation that has been adapted to this image!
Important

The fine-tuned model can be used in Cellpose’s GUI, its Python API, or any other package that supports pretrained cellpose models.

6.3. Segment the newly masked region with the base model for comparison

  1. Choose the “nuclei” model again from the dropdown list in the Advanced segmentation parameters section

  2. Click the “Run on all image groups” button again

Note

This will execute the segmentation process with the non-fine-tuned “nuclei” model on the same sampling positions from the last run.

6.4. Compare the results of both models

  • Remove all layers except the “nuclei” and “membrane” layers and the “images segmentation” and “images segmentation [2]” layers, which are the segmentation outputs from the fine-tuned model and base model, respectively.
Tip

Click the eye icon in the “layer list” to hide/show layers instead of removing them.