Image Manipulation and Analysis Techniques

Image manipulation and analysis techniques can be classified as follows:

1. Image enhancement and filtering
2. Merging images
3. Frequency domain filtering
4. Image edge enhancement
5. Noise reduction
6. Image measurements and feature extraction
7. Image synthesis

Image enhancement and filtering

Filtering techniques are divided into two categories: convolution (linear) filters and non-convolution (nonlinear) filters. Both accomplish their results by examining and processing an image in small regions, called pixel "neighborhoods". A neighborhood is a square region of image pixels, typically 3x3, 5x5 or 7x7 in size.

Convolution filters

Example: Hi-Pass. The Hi-Pass filter accentuates intensity changes in an image by modifying a pixel's value to exaggerate its intensity difference from its neighbors. It produces an image with harsh intensity transitions, generally leaving only edges of high contrast visible; fine detail with low contrast is usually lost to the background. This filter can be used when you need to pull out just the elements with high contrast against the image background.

[Figure: original image; the same image with Hi-Pass applied]
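The effect can be sketched in a few lines of NumPy. Image-Pro's actual Hi-Pass kernel is not documented here, so the kernel below is an illustrative high-pass example: the centre weight balances the eight neighbour weights, so flat regions map to zero and only intensity changes survive.

```python
import numpy as np

# An illustrative 3x3 high-pass kernel (assumed, not Image-Pro's exact
# one): centre weight 8 balances the eight -1 neighbour weights, so the
# weights sum to zero and uniform regions produce zero output.
HI_PASS = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]], dtype=float)

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to each interior pixel's neighbourhood,
    clipping the result back into the 0-255 intensity range."""
    out = np.zeros_like(image, dtype=float)
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = np.sum(image[y-1:y+2, x-1:x+2] * kernel)
    return np.clip(out, 0, 255)

# A flat grey image with one bright pixel: after filtering, the flat
# background maps to 0 and only the sharp intensity change remains.
img = np.full((7, 7), 100.0)
img[3, 3] = 200.0
result = convolve3x3(img, HI_PASS)
```

Real packages convolve with optimised routines rather than Python loops, but the neighbourhood-by-neighbourhood logic is the same.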

Non-convolution filters

Example: Erode and Dilate. The Erosion filter is a morphological filter that changes the shape of objects in an image by eroding (reducing) the boundaries of bright objects, and enlarging the boundaries of dark ones. It is often used to reduce, or eliminate, small bright objects. The Dilation filter is a morphological filter that changes the shape of objects in an image by dilating (enlarging) the boundaries of bright objects, and reducing the boundaries of dark ones. The dilation filter can be used to increase the size of small bright objects.

[Figure: original image; the same image with Erosion applied]
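Erosion and dilation reduce to a minimum or maximum taken over each pixel neighbourhood. A minimal sketch, using a 3x3 neighbourhood and plain NumPy (a simplification of what Image-Pro or NIH-Image actually do at image borders):

```python
import numpy as np

def erode(image):
    """3x3 erosion: each interior pixel becomes the minimum of its
    neighbourhood, shrinking bright objects and growing dark ones."""
    out = image.copy()
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = image[y-1:y+2, x-1:x+2].min()
    return out

def dilate(image):
    """3x3 dilation: each interior pixel becomes the maximum of its
    neighbourhood, growing bright objects and shrinking dark ones."""
    out = image.copy()
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = image[y-1:y+2, x-1:x+2].max()
    return out

# A single bright pixel on a dark background is eliminated by erosion
# but enlarged into a 3x3 bright square by dilation.
img = np.zeros((7, 7))
img[3, 3] = 255.0
```

This shows why erosion is used to remove small bright objects and dilation to enlarge them, as described above.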

These techniques can be performed in Image-Pro or in NIH-Image. In Image-Pro, spatial filtering functions are found under the Tools>Filters command. In NIH-Image, load the "filters" macro from the NIH-Image\Macro subdirectory by selecting Special>Load Macros.

Merging images

Image merging can be done in several ways: for example, one colour channel (e.g. red) can be extracted and then combined with another image. A common problem is that the background of the image being merged into another totally obscures the features of the recipient image. If that background is flat (i.e. has constant grey values), the quad (quadtree) function solves this problem. The quadtree function splits the image into regions and examines whether each region is uniform. If it is, it is not extracted; if it is not uniform, the non-uniform part is extracted. This extraction procedure omits the background, so the background of the extracted image no longer obscures what is in the recipient image.

[Figure: image 1; image 2; the two images combined using Quadtree]
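Image-Pro's quadtree implementation is not documented here, but the idea can be sketched: recursively split the source into quadrants, skip any quadrant that is uniformly equal to an assumed flat background value, and copy the rest onto the recipient image.

```python
import numpy as np

def quadtree_merge(src, dest, background=0):
    """Merge `src` onto `dest`, skipping quadtree regions that are
    uniformly equal to `background` (assumed here to be a flat grey
    value of 0). Non-uniform regions are split further; uniform
    non-background regions are copied whole."""
    def merge(y0, y1, x0, x1):
        region = src[y0:y1, x0:x1]
        if region.size == 0:
            return
        if np.all(region == background):
            return                       # uniform background: not extracted
        if np.all(region == region.flat[0]):
            dest[y0:y1, x0:x1] = region  # uniform feature: copy whole block
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        merge(y0, ym, x0, xm); merge(y0, ym, xm, x1)
        merge(ym, y1, x0, xm); merge(ym, y1, xm, x1)
    merge(0, src.shape[0], 0, src.shape[1])
    return dest

# A source with one feature block on a flat zero background: the
# recipient keeps its own content wherever the source is background.
src = np.zeros((4, 4))
src[0:2, 0:2] = 50.0
dest = np.full((4, 4), 100.0)
merged = quadtree_merge(src, dest)
```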

Frequency domain filtering involves frequency domain transforms. These transforms change an image from its spatial-domain form of brightnesses to a frequency-domain form of fundamental frequency components. One of the most commonly used is the Fast Fourier Transform (FFT). When an image is transformed with the FFT and the Fourier frequencies are displayed, the display appears symmetrical about the centre. The centre is the zero-frequency point, and two axes run through it: the horizontal axis defines the horizontal (x) frequency, the vertical axis the vertical (y) frequency. The frequency magnitude is given by the brightness of the pixel at a particular point. Fast Fourier Transforms are very good for filtering periodic noise in an image: the bright spots in the frequency display that represent the noise can be eliminated, so that the image is cleaned up.

[Figure: original image; display of Fast Fourier Transform frequencies]
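The whole cycle, transform, locate the symmetric bright spots, zero them, invert, can be sketched with NumPy's FFT routines (the 8-cycle sinusoidal noise here is a made-up example):

```python
import numpy as np

# A flat grey image contaminated with periodic (sinusoidal) noise
# running horizontally: 8 cycles across a 64-pixel-wide image.
n = 64
x = np.arange(n)
img = np.full((n, n), 100.0) + 20 * np.cos(2 * np.pi * 8 * x / n)

# Transform to the frequency domain; fftshift moves the zero-frequency
# point to the centre, giving the symmetric display described above.
spectrum = np.fft.fftshift(np.fft.fft2(img))
mag = np.abs(spectrum)
c = n // 2   # the centre (zero-frequency) point

# The periodic noise appears as two bright spots on the horizontal
# (x-frequency) axis, symmetric about the centre at +/-8 cycles.
# Zeroing those spots and inverting the transform removes the noise.
spectrum[c, c - 8] = 0
spectrum[c, c + 8] = 0
cleaned = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
```

After the inverse transform, `cleaned` is the flat grey image with the periodic pattern gone, exactly the "eliminate the bright spots" workflow described above.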

Image edge enhancement reduces an image to show only its edge details. It is similar to Hi-Pass, although it focusses more on the edge itself than on the contrast between an object and its surroundings. The most common method is Laplacian edge enhancement, which highlights the edges in an image irrespective of their orientation.

[Figure: original image; the same image with Laplacian edge enhancement applied]
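A minimal sketch of Laplacian edge enhancement with the standard 4-neighbour discrete Laplacian kernel (one common choice; variants exist):

```python
import numpy as np

# The discrete Laplacian responds to intensity changes in every
# direction equally, which is why the result is orientation-independent.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def laplacian_edges(image):
    """Apply the Laplacian to every interior pixel; the response is
    non-zero only where the intensity changes, i.e. at edges."""
    out = np.zeros_like(image, dtype=float)
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = np.sum(image[y-1:y+2, x-1:x+2] * LAPLACIAN)
    return out

# A vertical step edge: the response is zero in the flat regions and
# non-zero only on the two columns either side of the step.
img = np.zeros((6, 6))
img[:, 3:] = 100.0
edges = laplacian_edges(img)
```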

Noise reduction

Opening and Closing: The opening filter is a morphological filter that performs an erosion, then a dilation (see above). In images containing bright objects on a dark background, the opening filter smoothes object contours, breaks (opens) narrow connections, eliminates minor protrusions and removes small bright spots. In images with dark objects on a bright background, the opening filter fills narrow gaps between objects. The closing filter is a morphological filter that performs a dilation followed by an erosion. In images containing dark objects on a bright background, the closing filter smoothes object contours, breaks narrow connections, eliminates minor protrusions and removes small dark spots. In images with bright objects on a dark background, the closing filter fills narrow gaps between objects.

[Figure: image with background impurities (small black dots); after applying the "Closing" filter the impurities have been removed and some connections (e.g. in images with holes) are about to be broken]
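Since opening and closing are just compositions of the erosion and dilation filters described earlier, they can be sketched directly (again with simple 3x3 min/max neighbourhood filters that leave borders untouched):

```python
import numpy as np

def erode(im):
    """3x3 erosion: each interior pixel becomes its neighbourhood minimum."""
    out = im.copy()
    for y in range(1, im.shape[0] - 1):
        for x in range(1, im.shape[1] - 1):
            out[y, x] = im[y-1:y+2, x-1:x+2].min()
    return out

def dilate(im):
    """3x3 dilation: each interior pixel becomes its neighbourhood maximum."""
    out = im.copy()
    for y in range(1, im.shape[0] - 1):
        for x in range(1, im.shape[1] - 1):
            out[y, x] = im[y-1:y+2, x-1:x+2].max()
    return out

def opening(im):   # erosion, then dilation
    return dilate(erode(im))

def closing(im):   # dilation, then erosion
    return erode(dilate(im))

# A bright background with one small dark impurity, as in the figure:
# closing fills the dark spot without otherwise changing the image.
img = np.full((7, 7), 200.0)
img[3, 3] = 0.0
cleaned = closing(img)
```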

Image Measurements and Feature Extraction

The most common measurements are those that count objects and/or describe the shape of objects in an image. In Image-Pro these can be found under Measure>Count/Size (except Length, which is found under Measure>Measurements). Image analysis programs offer many different measurements. The most common are:

Length: the length of a line drawn on the image.

Area: the pixel area of the interior of the object.

Perimeter: the pixel distance around the circumference of the object.

Area to perimeter ratio: a measure of the object's roundness, or compactness, giving a value between 0 and 1.

Major axis: the x, y endpoints of the longest line that can be drawn through the object.

Minor axis: the x, y endpoints of the longest line that can be drawn through the object while maintaining perpendicularity with the major axis.

Number of holes: a count of how many holes exist within the interior of an object.

It has to be noted that these measurements would normally be expressed in pixels. However, they can be converted to another unit, such as microns, millimetres, or miles. (In Image-Pro, go to Image>Calibration>Spatial and set the number of pixels per unit, e.g. per micron.) In NIH-Image, go to Analyse>Set Scale to set the number of pixels per unit.
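Two of these measurements, plus the pixel-to-unit conversion, can be sketched for a binary object mask. Note that perimeter definitions vary between packages; counting object pixels that touch the background is one simple convention, and the 0.5 microns-per-pixel calibration is a made-up example value.

```python
import numpy as np

def measure(mask, microns_per_pixel=1.0):
    """Measure a binary object: area as its pixel count, perimeter as
    the count of object pixels with at least one background 4-neighbour
    (one simple convention among several), plus both values converted
    from pixels to microns via a calibration factor, as set through
    Image>Calibration>Spatial or Analyse>Set Scale."""
    area_px = int(mask.sum())
    padded = np.pad(mask, 1)          # zero border so edges count as background
    perim_px = 0
    for y, x in zip(*np.nonzero(mask)):
        py, px = y + 1, x + 1         # coordinates in the padded mask
        if (padded[py-1, px] == 0 or padded[py+1, px] == 0 or
                padded[py, px-1] == 0 or padded[py, px+1] == 0):
            perim_px += 1
    return {
        "area_px": area_px,
        "perimeter_px": perim_px,
        "area_um2": area_px * microns_per_pixel ** 2,
        "perimeter_um": perim_px * microns_per_pixel,
    }

# A 4x4 square object measured at an assumed 0.5 microns per pixel.
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1
m = measure(mask, microns_per_pixel=0.5)
```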

Together, many of these measurements form the basis of so-called feature segmentation techniques, enabling discrimination between objects of interest. For example:

Shape measure (in pixels)    Object 1      Object 2
Perimeter                    871           430
Area                         30,760        6,455
Major axis angle             13 degrees    0 degrees
Major axis width             360           136
Minor axis width             152           74

Image-Pro has a function called auto-classification that uses classifiers such as area, perimeter, and the major and minor axes to discriminate between objects. It uses a maximum of three classifiers from a range of possible ones, and can be found under Measure>Count/Size>Measure>Auto-Classification.

One very useful way of classifying an object is to express the object's boundary as a chain code and then to analyse the numbers produced by that chain code. A chain code represents the pixels that form the boundary of the object. These pixels can be isolated and their direction changes mapped by giving a numerical value to every possible direction.

These numbers can then be analysed in several ways: by producing a diversity index for each contour and comparing these indices, or by extracting statistical functions, called Moment Invariant Functions, from the x, y coordinates that make up the contour. These include functions such as the mean, variance, skewness and kurtosis. A third method is to describe the contour by Elliptical Fourier Transforms, where a series of shifting ellipses is fitted to the contour, and a complex shape approximating the contour is generated by combining all the ellipses used. The shape of these ellipses can then be expressed by a set of numbers, which in effect describe the contour of the object itself.
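The "numerical value for every possible direction" step can be sketched with the Freeman 8-direction convention (0 = east, counting anticlockwise), which is one common coding; packages may differ. Boundary tracing itself is omitted here: the sketch assumes the boundary pixels have already been isolated into an ordered list.

```python
# Freeman 8-direction codes (one common convention): 0 = east,
# counting anticlockwise, so 2 = north, 4 = west, 6 = south.
# Keys are (row change, column change) for one step along the boundary.
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Convert an ordered list of (row, col) boundary pixels into a
    chain code by recording the direction of each successive step."""
    return [DIRECTIONS[(y1 - y0, x1 - x0)]
            for (y0, x0), (y1, x1) in zip(boundary, boundary[1:])]

# The boundary of a 2x2 pixel square, traced clockwise from the
# top-left (rows increase downwards, as in image coordinates).
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
code = chain_code(square)
```

The resulting list of direction codes is what the diversity-index and moment-based analyses described above would then operate on.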

Image Synthesis

Instead of extracting data from an image, digital images can be used to display complex data in a visual form that makes them easier to understand. Advanced Visual Systems (AVS), a high-end image analysis and visualisation program, is able to read in data sets and then convert these to a 3D image. To do this, single images must first be combined into a multiple-image data stack, which is then read in by the appropriate AVS module. Once the 3D image has been imported into AVS, it is possible to extract information from it, such as isosurfaces (surfaces in 3D space of equal pixel intensity). It is also possible to export the image as a VRML file that can be displayed on the web. For more information, read the AVS booklet "Visualising Your Data with AVS Express".

For more information, contact:

Email [email protected]

Tel: 6488 8649        Fax: 6488 1051