 

Widefield and laser-scanning microscopes acquire images in different ways.
 

 
Detectors collect photons and convert them to a voltage
 
The A/D converter determines the dynamic range of the data
 
 
 
Unless you have good reason not to, always collect data at the highest possible bit depth
 
	32-bit is a special data type called floating point.
TL;DR: pixels can have non-integer values, which can be useful in applications like ratiometric imaging.
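To see why floating point matters for ratiometric imaging, here is a minimal NumPy sketch (Python rather than the macro language; the pixel values are made up):

```python
import numpy as np

# Two hypothetical 8-bit channels for a ratiometric measurement
numerator = np.array([100, 150, 200], dtype=np.uint8)
denominator = np.array([80, 100, 160], dtype=np.uint8)

# Integer division truncates the ratios to whole numbers
int_ratio = numerator // denominator  # [1 1 1]

# Converting to 32-bit float preserves the fractional values
float_ratio = numerator.astype(np.float32) / denominator  # [1.25 1.5 1.25]
```

The integer result throws away exactly the information a ratiometric experiment is after.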
 
ImageJ is a java program for image processing and analysis.
Fiji extends this via plugins.
 
Learn more about Bio-Formats here
 
 
 
The [Window > Tile] command is very useful when opening multiple images
	 
		[Help > Update...]
Exercises will be provided in-line, as links to PDFs; right-click and open in a new tab
Commands on the Fiji menu will look like: [File > Save]
Sample/exercise data will look like this: 01-Photo.tif
	
Right-click to copy the URL and use [File > Import > URL...] to open
 
	 
	(Paste the URL into the resulting box)
Open Task1.pdf and follow the instructions there.
You will need: 01-Photo.tif and 02-Biological_Image.tif
[Image > Properties] also allows you to view and set calibration (the unit can be um or micron)
	Images are an array of intensity values. The intensity histogram shows the number (on the y-axis) of each intensity value (on the x-axis) and thus the distribution of intensities
 
Photos typically have a broad range of intensity values and so the distribution of intensities varies greatly
 
Fluorescent micrographs will typically have a much more predictable distribution:
 
The Black and White points of the histogram dictate the bounds of the display (changing these values alters the brightness and contrast of the image)
	
They are often called "the contrast limits"
 
 
The histogram is now stretched and the intensity value of every pixel is effectively doubled which increases the contrast in the image
 
If we repeat the same manipulation, the maximum intensity value in the image is now outside the bounds of the display scale!
 
Values falling beyond the new White point are dumped into the top bin of the histogram (i.e. 255 in an 8-bit image) and information from the image is lost
	
This is often called "clipping"
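The stretch-then-clip behaviour can be sketched in a few lines of NumPy (hypothetical pixel values; Python rather than the macro language):

```python
import numpy as np

# Hypothetical 8-bit image values
pixels = np.array([10, 60, 120, 200], dtype=np.uint8)

# "Apply"-style contrast stretch: double the intensities, then
# clip anything beyond the 8-bit maximum into the top bin (255)
stretched = np.clip(pixels.astype(np.int32) * 2, 0, 255).astype(np.uint8)
# -> [ 20 120 240 255]   (200 * 2 = 400 was clipped to 255)
```

The original value (200) can no longer be recovered from 255: that information is gone.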
 
 
Be careful when applying changes to contrast limits (the Apply button), as this will change the pixel values!
Be warned: removing information from an image is deemed an unacceptable manipulation and can constitute academic fraud!
For an excellent (if slightly dated) review of permissible image manipulation see: Rossner & Yamada (2004): "What's in a picture? The temptation of image manipulation"
The best advice is to get it right during acquisition and make sure you compare apples to apples
Open Task2.pdf and follow the instructions there.
You will need: 02-Biological_Image.tif
[Analyze > Measure] (or the `m` keyboard shortcut) is used to make measurements
The measurements provided are set via [Analyze > Set Measurements...], except for selection-specific measurements (length, angle, coordinates)
Use [Analyze > Tools > ROI Manager...] to open the ROI Manager
Add selections with [Edit > Selection > Add to Manager]
Use [More > Multi Measure] to measure all ROIs at once
Use [More > Save] for better data provenance!
	 
	Open Task3.pdf and follow the instructions there.
You will need: 02-Biological_Image.tif
Set the calibration with [Image > Properties...] or [Analyze > Set Scale...]
	Some file formats (e.g. TIF) can store multiple images in one file; these are called stacks
 
 
When more than one dimension (time, z, channel) is included, the images are still stored in a linear stack, so it's critical to know the dimension order (e.g. XYCZT, XYZTC, etc.) so you can navigate the stack correctly.
You will very rarely have to deal with Interleaved stacks because of Hyperstacks which give you independent control of dimensions with additional control bars.
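The mapping from dimension order to linear position can be sketched as follows (a toy 0-based function; ImageJ's own `Stack.getIndex` uses 1-based positions):

```python
def linear_index(c, z, t, n_channels, n_slices):
    """0-based linear slice index for an XYCZT-ordered stack
    (channel varies fastest, then z, then time)."""
    return c + z * n_channels + t * n_channels * n_slices

# A hypothetical 2-channel, 5-slice, 10-frame stack
first = linear_index(0, 0, 0, 2, 5)   # 0  (first slice in the file)
later = linear_index(1, 2, 3, 2, 5)   # 1 + 2*2 + 3*2*5 = 35
```

Swap which dimension varies fastest (e.g. XYZTC) and the same (c, z, t) lands on a completely different slice, which is why getting the order wrong scrambles your navigation.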
 
 
Convert between stack types with the [Image > Hyperstack] menu
Interacting with channels is so common that there is a dedicated Channels Tool for additional controls:
[Image > Color > Channels Tool]
Other useful menu options:
[Image > Type > RGB Color]: convert from a 3-channel stack to RGB
[Image > Type > RGB Stack]: split an RGB image into a 3-channel stack
Open Task4.pdf and follow the instructions there.
You will need: 06-MultiChannel.tif
Color in your images is (almost always) dictated by arbitrary lookup tables
 
 
Lookup tables (LUTs), also called "colormaps", translate an intensity value (0–255 for 8-bit) to an RGB display value
You can use whatever colours you want (they are arbitrary after all), but the most reliable contrast is greyscale
 
More info on color and sensitivity of the human eye here
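A minimal sketch of how a LUT works, using a hypothetical 8-bit "green" LUT in NumPy (Python rather than the macro language):

```python
import numpy as np

# An 8-bit "green" lookup table: each intensity (0-255) maps to an
# (R, G, B) display value; here R and B stay 0 and G tracks intensity
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 1] = np.arange(256)

pixels = np.array([0, 128, 255], dtype=np.uint8)
rgb = lut[pixels]  # apply the LUT: intensity -> RGB
# rows: [0 0 0], [0 128 0], [0 255 0]
```

Because the table is arbitrary, swapping it for a magenta or greyscale LUT changes only the display, never the underlying intensity values.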
Additive and Subtractive Colours can be mixed in defined ways
 
Non 'pure' colours cannot be combined in reliable ways (as they contain a mix of other channels)
BUT! Interpretation is highly context dependent!


 
 
~10% of the population have trouble discerning Red and Green. Consider using Green and Magenta instead which still combine to white.
 
Open Task5.pdf and follow the instructions there.
You will need: 06-MultiChannel.tif
A couple of useful LUTs:
 
Applications
Segmentation is the separation of an image into regions of interest
Semantic segmentation assigns each pixel to a class, e.g. foreground vs. background
 
 
The end point for most segmentation is a binary mask (false/true, 0/255)
Fiji has an odd way of dealing with masks
 
	 
	Run [Process > Binary > Options] and check Black Background. Hit OK.
For most applications, intensity-based thresholding works well. This relies on the signal being higher intensity than the background.
 
We use a Threshold to pick a cutoff.
A background/foreground binary mask (false/true, 0/255) may be sufficient for some analysis.
 
	Or we may want to identify individual objects (instance segmentation)
 
	One approach to identifying individual objects when given a binary mask is Connected Component Analysis/Labeling (CCA)
This approach assigns pixels that are touching (connected) to individual objects:
4-way connected (edge neighbours only):
- + -
+ + +
- + -
8-way connected (edge and diagonal neighbours):
+ + +
+ + +
+ + +
In Fiji this can be accomplished using [Analyze > Analyze Particles...]
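Outside Fiji, the same idea can be sketched with SciPy's `ndimage.label` (a stand-in to show the concept, not what Analyze Particles runs internally):

```python
import numpy as np
from scipy import ndimage

# A tiny binary mask: two blobs that touch only at a diagonal
mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])

# 4-way connectivity (the default): diagonal touches do NOT join objects
labels4, n4 = ndimage.label(mask)                              # 2 objects

# 8-way connectivity: diagonals count, so the blobs merge into one
labels8, n8 = ndimage.label(mask, structure=np.ones((3, 3)))   # 1 object
```

The choice of connectivity alone changes the object count here from 2 to 1, which is why it matters for touching structures like nuclei.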
	
If CCA results in merged objects due to touching, then a "watershed" approach is frequently used to define boundaries.
Open Task6.pdf and follow the instructions there.
You will need these images: 07-nuclei.tif and 08-nucleiMask.tif
Analyze Particles comes with the option to Display Results
 
	 
	 
	Measurements are set via [Analyze > Set Measurements], with one row per object
To apply measurements to the original image: check Add to Manager in Analyze Particles, open the original image, then run [More > Multi Measure]
 
 
 
Don't forget [Analyze > Set Measurements] to pick parameters
You may want to create an output or display image showing the results of CCA or segmentation. Analyze particles has several useful outputs:
 
	 
	
	Count masks are very useful in combination with [Image > Lookup Tables > Glasbey]-style (i.e. random) LUTs
 
Adapted from a slide by Fabrice Cordelieres
For more rigor, see: Aaron et al., J Cell Sci (2018) 131(3): jcs211847, Figure 4
Colocalisation is highly dependent upon resolution! Example:
 
Same idea goes for cells. Keep in mind your imaging resolution!
We will walk through using JaCoP (Just Another CoLocalisation Plugin) to look at Pearson's and Manders' analysis
It's been revamped by the BIOP folks at EPFL: JaCoP-BIOP
The companion paper: https://doi.org/10.1111/j.1365-2818.2006.01706.x
 
 
Figure from https://doi.org/10.1111/j.1365-2818.2006.01706.x
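As a rough sketch of what such plugins compute (a simplified Python version on toy data, with no thresholding or background correction):

```python
import numpy as np

def pearson(ch1, ch2):
    """Pearson's correlation coefficient between two channels."""
    a = ch1.ravel().astype(float)
    b = ch2.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def manders_m1(ch1, ch2, thresh2=0):
    """M1: fraction of ch1 intensity in pixels where ch2 is above threshold."""
    return ch1[ch2 > thresh2].sum() / ch1.sum()

# Toy channels: ch2 is a scaled copy of ch1, so correlation is perfect
ch1 = np.array([[10, 20], [30, 40]])
ch2 = ch1 * 2
r = pearson(ch1, ch2)        # 1.0
m1 = manders_m1(ch1, ch2)    # 1.0 (every ch1 pixel overlaps ch2 signal)
```

Note how Pearson's measures correlation of intensities while Manders' measures co-occurrence of signal, which is why the two can disagree on real data.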
 
Download the JaCoP-BIOP plugin
Run [Plugins > Install...], point to the downloaded jar file, then press "Save" to confirm
	Open 11-coloc-multichannel.tif
	Use the [Image > Color > Channels Tool] command to examine the channels
Run [Plugins > BIOP > Image Analysis > BIOP JaCoP]
 
 
Open 11-coloc-multichannel.tif
Run [Plugins > BIOP > Image Analysis > BIOP JaCoP], check both `Get Pearsons Correlation` and `Get Manders coefficients`
Add noise with [Process > Noise > Add Noise] or blur your images with [Process > Filters > Gaussian Blur] and see how that affects the coefficients
Life exists in the fourth dimension. Tracking allows you to correlate spatial and temporal properties.
 
    
Most particles look the same! Without any way to identify them, tracking is probabilistic.
Tracking has two parts: Feature Identification and Feature Linking

 
For every frame, features are detected, typically using a Gaussian-based method (e.g. Laplacian of Gaussian, LoG)
Spots can be localised to sub-pixel resolution!
 
Without sub-pixel localisation, the precision of detection is limited to whole pixel values.
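A common sub-pixel trick is the intensity-weighted centroid, sketched below in NumPy on a synthetic spot (real detectors typically refine the peak with a fit, but the idea is the same):

```python
import numpy as np

# A synthetic 5x5 spot whose true centre lies between pixels, at (2.3, 1.7)
y, x = np.mgrid[0:5, 0:5]
spot = np.exp(-((x - 2.3) ** 2 + (y - 1.7) ** 2) / 2.0)

# Whole-pixel detection can only report the brightest pixel: (row, col) = (2, 2)
peak = np.unravel_index(spot.argmax(), spot.shape)

# Intensity-weighted centroid: a simple sub-pixel estimate
cx = (spot * x).sum() / spot.sum()  # close to 2.3
cy = (spot * y).sum() / spot.sum()  # close to 1.7
```

The centroid recovers the true position to a fraction of a pixel, while the raw peak is off by up to half a pixel in each axis.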
Feature linkage
 
For each feature, all possible links in the next frame are calculated. This includes the spot disappearing completely.
 
A 'cost matrix' is formed to compare the 'cost' of each linkage. This is globally optimised to calculate the lowest cost for all linkages.
In the simplest form, a cost matrix will usually consider distance. Many other parameters can be used, which can allow for a more accurate linkage, especially in crowded or low S/N environments.
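The linking step can be sketched with a distance-based cost matrix solved by the Hungarian algorithm in SciPy (a simplification: real LAP trackers like TrackMate's also add costs for spots appearing and disappearing):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical spot (x, y) positions in two consecutive frames
frame1 = np.array([[0.0, 0.0], [5.0, 5.0]])
frame2 = np.array([[5.2, 4.9], [0.3, 0.1]])

# Cost matrix: squared distance between every frame1/frame2 pair
cost = ((frame1[:, None, :] - frame2[None, :, :]) ** 2).sum(axis=-1)

# Globally optimal assignment: the lowest TOTAL cost over all linkages,
# not a greedy per-spot choice
rows, cols = linear_sum_assignment(cost)
# frame1 spot 0 links to frame2 spot 1, and frame1 spot 1 to frame2 spot 0
```

Even though the detection order swapped between frames, global optimisation recovers the correct links.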
Open 10-tracks.tif
Hit the arrow to play the movie. Right-click on the arrow to set the playback speed
 
If you're interested in how the dataset was made see this snippet
Run [Plugins > Tracking > TrackMate]
If it's missing, download the TrackMate plugin and install it using [Plugins > Install...]
 
 
You should aim to set the minimum threshold that removes all noise.
	
Slide the navigation bar, then hit Preview to check out a few other timepoints. When satisfied press Next and advance the slides!
 
TrackMate will process the stacks.
	
Once it's done, hit next, accepting defaults until you reach 'Select a tracker'
Ensure `Simple LAP tracker` is selected and hit Next, then advance the slides!
 
In the 'Settings for Simple LAP tracker', set:
Press Next and after processing, you should have tracks!
 
Linking Max Distance Sets a 'search radius' for linkage
 
Gap-closing Max Frame Gap Allows linkages to be found in non-adjacent frames
 
Gap-closing Max Distance Limits search radius in non-adjacent frames
Press Next to get to outputs from TrackMate: (1) Tracking data
 
 
Press Next to get to outputs from TrackMate: (2) Movies!
 
 
You may want to adjust the Display Options to get the tracks drawing the way you want (e.g. try "Local, Backwards")
While simple to run, tracking is not to be taken lightly!
Increasing resolution (via higher-NA lenses) almost always leads to a reduced field of view
 
Often you will want both!
We can achieve this with tile scanning (i.e. imaging multiple adjacent fields)
 
Stitching is the method used to put them back together again.
	
We'll use the Grid/Collection Stitching plugin
Download Stitching_noOverlap.zip (make a note of the location)
Run [Plugins > Stitching > Grid/Collection Stitching]
 
 
Why do the images not line up?
Download Stitching_Overlap.zip and unzip it to the desktop.
Run [Plugins > Stitching > Grid/Collection Stitching] again
Two things to remember when using Grid/Collection Stitching:
The most important point is to know your data!
Manual analysis (while sometimes necessary) can be laborious, error-prone, and may not provide the provenance required. Batch processing allows the same processing to be run on multiple images.
The built-in [Process > Batch] menu has lots of useful functions:
 
	We'll use a subset of dataset BBBC008 from the Broad Bioimage Benchmark Collection
Download BBBC008_partial.zip
	Ensure BBBC008_partial.zip is extracted to a known location and that you have a second folder called "Output" next to it.
	Open Task7.pdf and follow the instructions there.
[Plugins > Macros > Record...] can enable you to generate simple macros without having to write them yourself.
		[Plugins > Macros > Run...] 
		Ensure BBBC008_partial.zip is extracted to a known location and that you have a second folder called "Output" next to it.
		Open Task8.pdf and follow the instructions there.
Scripting is useful for running the same process multiple times or for keeping a record of how images were processed to get a particular output
Fiji supports many scripting languages including Java, Python, Scala, Ruby, Clojure and Groovy through the script editor which also recognises the macro language from the previous example (which we'll be using)
As an example, we're going to (manually) create a montage from a three channel image, then see what the script looks like
 
	Open 06-MultiChannel.tif
Start the recorder with [Plugins > Macros > Record]
(If needed, convert to a hyperstack with [Image > Hyperstacks > Stack to Hyperstack])
Open [Image > Color > Channels Tool] and set the mode to grayscale
Create an RGB copy with [Image > Type > RGB Color], then rename it "channels" with [Image > Rename]
Back in the original image, set the mode to composite, create an RGB copy with [Image > Type > RGB Color], then rename it "merge" with [Image > Rename]
Run [Image > Stacks > Tools > Concatenate] and select "channels" and "merge" in the two boxes (see right)
Run [Image > Stacks > Make Montage], change the border width to 3, then hit OK
	Got it? Have a look at the Macro Recorder and see if you can see the commands you ran
Open the script editor with [File > New > Script] and copy in the following code:
	//-- Record the filename
	imageName=getTitle();
	print("Processing: "+imageName);
	//-- Display the stack in greyscale, create an RGB version, rename
	Property.set("CompositeProjection", "null");
	Stack.setDisplayMode("grayscale");
	run("RGB Color");
	rename("channels");
	//-- Select the original image
	selectWindow(imageName);
	//-- Display the stack in composite, create an RGB version, rename
	Property.set("CompositeProjection", "Sum");
	Stack.setDisplayMode("composite");
	run("RGB Color");
	rename("merge");
	//-- Close the original
	close(imageName);
	//-- Put the two RGB images together
	run("Concatenate...", "  title=newStack open image1=channels image2=merge");
	//-- Create a montage
	run("Make Montage...", "columns=4 rows=1 scale=0.50 border=3");
	//-- Close the stack (from concatenation)
	close("newStack");
	Open 06-MultiChannel.tif again and hit Run
Note the concepts this script introduces: comments, variables, print, and the active window
This script operates on an open image but it's easily converted to a batch processing script using the built in templates:
 
	The full script is here. I added these lines at the top and bottom:
open(input + File.separator + file);
saveAs("png", output + File.separator + replace(file, suffix, ".png"));
	close("*");
	We'll go into more detail on scripts in the future
In the meantime:
Thank you for your attention!
We will send you a survey for feedback; please take 2 minutes to answer, it helps us a lot!
