Peter Sobolewski (he/him)
Systems Analyst, Imaging Applications
Research IT
QuPath is an open-source image analysis program
Key feature:
QuPath has a graphical user interface (GUI) designed for performant work with very large 2D images, like those produced by slide scanners
If you use it, cite it!
Was designed for very large, multiscale 2D files
Includes extensive annotation, overlay, and visualization options
It includes robust algorithms for common analysis tasks
It includes interactive machine learning for pixel and object classification
Recordable workflows allow for easy batch processing
Integrated ImageJ
Robust scripting support (Groovy)
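As a small illustration of the Groovy scripting support (a sketch, not from the slides — run it from Automate ‣ Script editor inside QuPath):

```groovy
// Minimal QuPath Groovy sketch: summarize the objects in the current image.
def annotations = getAnnotationObjects()   // all annotation objects
def detections  = getDetectionObjects()    // all detections (e.g. cells)
println "Annotations: ${annotations.size()}, Detections: ${detections.size()}"
// Print each annotation's class and ROI area (in pixels)
annotations.each { ann ->
    println "${ann.getPathClass()} : ${ann.getROI().getArea()} px^2"
}
```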
Limited 3D & time series viewing capabilities
Some complex functions require scripting
Extension ecosystem is relatively limited
No support for zarr/NGFF
Projects
- Projects are the best way to organize your analysis work
- Projects are frequently required by some scripts/extensions
- Projects are folders
Create project button
When making a project, remember to create a folder for it!
Image files will not be copied into your project folder!
- Only the image location (URI) will be used!
- Images in a project are actually QuPath objects (ImageServer) holding metadata, annotations, etc.
- Data for images within a project only relates to QuPath-specific data, not the original files or pixels
- When images are removed from a project, the data files are not deleted!
- Create project
- Choose files, or drag-and-drop
- Import
- This affects the behavior of tools & features! (but you can change it)
The image open in the viewer on the right will be bolded in the list
These changes only affect the QuPath objects in this project, not your data files!
- Line annotation
- Dot notifies of un-read errors
- Dot notifies of potential problems and will offer solutions
- S: switch to selection mode, where tools select objects instead
- C: show/hide the pixel classification overlay
- If you can’t see things you expect, check that they aren’t hidden!
For menu items, press Command/Control-L and start typing what you remember to get the menu item/command!
- Tools menu, which shows the shortcuts
- Space bar
- Alt/Option and start from the outside to reduce it
- Shift with the Brush or Wand to add to an Annotation (discontinuous)
Classes
- Populate commands refer to the class list
- The majority of QuPath functions rely on using Classes
- Set selected assigns all selected Annotations to a selected class
- Auto set assigns all new Annotations (regardless of type) to the selected class
- Classes with a * are ignored in measurements, percentages, etc.
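Classes can also be assigned by script; a minimal sketch, assuming a class named `Tissue` exists in your class list:

```groovy
// Assign a class to all currently selected objects by script.
// 'Tissue' is an example class name -- adjust to match your class list.
def tissueClass = getPathClass('Tissue')
getSelectedObjects().each { it.setPathClass(tissueClass) }
fireHierarchyUpdate()   // refresh the hierarchy and display
```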
- Tissue class
- Slice 1 through slice 4 (left to right)
- First delete all annotations (Objects ‣ Delete…) or duplicate the image (uncheck Also duplicate data)
- Show measurements table toolbar button
- For Annotations, the entire area of a child must be inside the parent
- Border class
- Draw Line Annotations per tissue slice to measure the thickness of the dark purple edge region, assigning them the Border class
- Resolve hierarchy to assign the lines to slices
- Measurements table: right-click to show just the Border class
- First, Lock all of your Tissue Annotations to prevent accidental editing
For fluorescence images, single channels are typically collected, so this isn’t needed.
- Based on the image type set on import, a default stain separation is performed
- Duplicate the image: check Also duplicate data files, and name it vectors (or similar)
- Click Yes to set the new value
- Auto vector estimation: Auto to get an initial estimate
- Use Analyze ‣ Estimate stain vectors and the Visual stain editor to adjust the stain deconvolution
- Use Brightness & Contrast to check the H&E color channels in different areas
Pixel Classification
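Setting the image type and stain vectors can also be done by script; a sketch with illustrative stain values (the commonly published H&E vectors of Ruifrok & Johnston — replace them with the values from Estimate stain vectors for your image):

```groovy
// Set the image type and custom stain vectors by script.
// The vector values below are illustrative only.
setImageType('BRIGHTFIELD_H_E')
setColorDeconvolutionStains('{"Name" : "H&E estimated", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.650 0.704 0.286", ' +
    '"Stain 2" : "Eosin", "Values 2" : "0.072 0.990 0.105", ' +
    '"Background" : "255 255 255"}')
```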
Average channels can be a good choice
Note
Double-click on Above or Below to swap classes!
You need to save your thresholder before you can use it!
We are segmenting 4 large tissue slices, so we want to make:
We saved our classifier (thresholder), so we can re-use it:


- Classify ‣ Pixel classification ‣ Create thresholder
- Set the class to Tissue and name the thresholder Tissue (or similar)
- Create objects to make 4 annotations, one for each slice
Important
Ensure they can access the actual image files!
The project folder only contains QuPath objects and data!
The image files will not be inside the project (unless you had placed them there)!
You will need to re-link the project to the location of the actual image files
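The thresholder-to-annotations step above can also be scripted. A sketch, assuming the thresholder was saved as `Tissue`; the size values are examples in calibrated units (µm²) — tune them for your image:

```groovy
// Apply a saved pixel thresholder and create annotations from it.
createAnnotationsFromPixelClassifier('Tissue',
        1000000.0,   // minimum object size (example value)
        1000.0,      // minimum hole size (example value)
        'SPLIT')     // split disconnected regions into separate annotations
```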
- Search… to find the new enclosing folder
- Ensure Also duplicate data is checked!
Tissue Annotations
- White or lumens, etc.
- Classify ‣ Pixel classification ‣ Create thresholder
- Stroma
- Select the Tissue Annotations, click Measure
- Classify ‣ Pixel classification ‣ Load pixel classifier and load the previous classifier
- Create objects and select All Annotations for Choose parent objects:
- Annotation as the New object type
- Minimum object size: the area of the things we want to detect (the lumens/holes), maybe 200 µm²?
- Minimum hole size: this would be holes within the objects we are interested in!
- Ensure Split objects is not checked!
- features (computed values per pixel)
- Ensure Also duplicate data is checked!
paint a few squiggles in:
- border
- holes: set class lumens
- Stroma
- Classify ‣ Pixel classification ‣ Train pixel classifier
- Random trees is less of a black box and can provide the importance of features, vs. ANN_MLP
- Typically, fewer but well-chosen features provide more robust results.
| Feature (from QuPath docs) | Useful for |
|---|---|
| Gaussian filter | General-purpose |
| Laplacian of Gaussian | Blob-like things, some edges |
| Weighted deviation | Textured vs. smooth areas |
| Gradient magnitude | Edges |
| Structure tensor eigenvalues | Long, stringy things |
| Structure tensor coherence | ‘Oriented’ regions |
| Hessian determinant | Blob-like things |
| Hessian eigenvalues | Long, stringy things |
Use the C key and opacity slider!
Typically, using Polyline Annotation squiggles gives best results
This does not let you ignore class balance entirely!
Using Random trees classifier, test out impact of classifier parameters, especially:
- Resolution
- Features: use the `Log` to check importance
You can add more Annotations!
When reasonable, save the classifier and your image with training Annotations
Load and run your classifier (Create objects) on a fresh copy of the 4 Annotation image
- Load training to add them to the training data
- Import from file in the Load pixel classifier dialog
- Make a note of your features & parameters, in case you want to train again!
You want to have few Annotations and use them as parent objects
Detections can be classified!
Superpixels don’t have any default measurements
Analyze ‣ Calculate features
- Smoothed features: use a weighted average of the corresponding measurements of neighboring objects
- Intensity features: mean, SD, etc. of channel intensities
- Shape features: area, perimeter, caliper, etc.
By default:
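Feature calculation can be scripted too; a sketch for smoothed features — the parameter JSON here is illustrative, and the exact string for your settings is best captured from the Workflow tab after running the dialog once:

```groovy
// Add smoothed features to all detections by script
// (equivalent to Analyze ‣ Calculate features ‣ Add smoothed features).
// Parameter values are examples only.
selectDetections()
runPlugin('qupath.lib.plugins.objects.SmoothFeaturesPlugin',
        '{"fwhmMicrons": 25.0, "smoothWithinClasses": false}')
```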
ML approaches including Cellpose and Stardist are available as extensions
Ensure Also duplicate data is checked!
Analyze ‣ Cell detection ‣ Cell detection
Note
Positive cell detection allows setting additional intensity thresholds for scoring (e.g. DAB)
Defaults are frequently pretty good!
Hematoxylin OD, but OD Sum can be useful for H-DAB. Use the B&C dialog to take a look!
- 0 for native (full) resolution; trade-off of speed vs. detail, 0.5 or 1 frequently good
- 0 to disable
- 0 to disable
Tip
Press D to toggle showing Detections or F to toggle their Fill. Depending on the parameters and the power of your computer, this can take a few minutes!
Run again if you have the Cell detection dialog open. Cell detection parameters are not saved: if you closed the dialog, your settings are lost, but they can be recovered in the Analysis pane’s Workflow tab.
Double click the Workflow step you want to re-run!
Use the Create workflow button to remove any spurious steps prior to Create script
- Copy command to paste into a script
- Show detection measurements
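A script assembled this way from Workflow steps looks like the sketch below: select the parent annotations, then run cell detection. The JSON parameters are the kind copied from the Workflow tab; the values shown are illustrative, not a recommendation:

```groovy
// Example workflow-derived script: cell detection on all annotations.
// Parameter values are examples -- copy your own from the Workflow tab.
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
        '{"detectionImage": "Hematoxylin OD", ' +
        '"requestedPixelSizeMicrons": 0.5, ' +
        '"backgroundRadiusMicrons": 8.0, ' +
        '"sigmaMicrons": 1.5, ' +
        '"minAreaMicrons": 10.0, ' +
        '"maxAreaMicrons": 400.0, ' +
        '"threshold": 0.1, ' +
        '"cellExpansionMicrons": 5.0, ' +
        '"includeNuclei": true, ' +
        '"smoothBoundaries": true, ' +
        '"makeMeasurements": true}')
```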
- Measure ‣ Export measurements
- Measure ‣ Show measurement maps
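Measurement export can also be scripted per image; a sketch — the `measurements` subfolder name is an example, and the file lands inside your project folder:

```groovy
// Export detection measurements for the current image to a TSV file.
// The output location is an example -- adjust to taste.
def dir = buildFilePath(PROJECT_BASE_DIR, 'measurements')
mkdirs(dir)
def name = getProjectEntry().getImageName() + '.tsv'
saveDetectionMeasurements(buildFilePath(dir, name))
```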
Object classification
Smoothed measurements take into account the local area of each cell and can be even more useful
Important
Classify ‣ Object classification ‣ Train object classifier
- Selected features: click Select to reduce the feature space
- Selected classes: click Select to train on selected Classes only
- Live update applies the classifier
Classify ‣ Object classification ‣ Reset detection classifications
This only makes sense if you ran Cell detection on a tissue slice
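Both steps have scripting equivalents; a sketch, where `MyCellClassifier` is an example name for a classifier you saved from the Train object classifier dialog:

```groovy
// Clear previous detection classifications, then apply a saved
// object classifier. 'MyCellClassifier' is an example name.
resetDetectionClassifications()
runObjectClassifier('MyCellClassifier')
```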
- border: the dark purple/brown area at the edge
- Tumor: the islands of dense hematoxylin staining
- Stroma: other areas of the slice
- Press D to toggle showing Detections or F to toggle their Fill
Classify ‣ Object classification ‣ Set cell intensity classifications
Analyze ‣ Density maps ‣ Create density map
