Image Processing functions

Functions to perform image processing have been implemented as computed fields in CMGUI. This document aims to identify and list the functions present, and to suggest improvements and enhancements which would be useful, particularly for imagesets arising from the large-volume imaging rig in Physiology.

The image_processing functionality is now being replaced with the wrapping of "ITK":http://www.itk.org/ in computed fields.

A single image processing example ("image_processing_2d":http://cmiss.bioeng.auckland.ac.nz/development/examples/a/testing/image_processing_2D/) tests each filter incorporated so far.

The functionality has been determined by viewing the CMGUI image processing "examples":http://cmiss.bioeng.auckland.ac.nz/development/examples/a/image_processing/index.html and by looking in the CMGUI source code. Additional references which were helpful include:

  • The Image Processing Handbook by John C Russ, available at Bioengineering and from the Philson Library
  • LabView IMAQ Vision documentation and modules
  • Additional LabView code written by Greg Sands

Assumptions

No distinction is made between 2D and 3D, and therefore the words image and pixel are used for both unless otherwise specified.

General comments

Developments

These possible developments are all detailed below; this is simply an index to allow us to think about the relative priorities. They fall into a number of categories.

Bugs

  • Border operations. We need to be specific and careful about border operations: do these propagate through a pipeline, or are they specified per operation with a default?
  • connectivity: edges or edges+corners

Restrictions on size and speed

  • Image cache reduction
  • Alternative storage types (currently always float)
  • Alternative computation types (currently always float)

Low level functionality

  • Automatic image wrapping with computed field
  • Automatic field wrapping with texture and material (plus some suppression mechanism so we don't get too many?)
  • Consolidation of constants and input computed fields. This distinction is unnecessary.
  • Masks and domains (originally the values could be made to vary over a domain other than 0-1 in texture coordinate space).
  • Kernel consolidation and shapes. Can we make a more logical framework for most of the kernel operations?

High level functionality

  • User interface, workflow editor
  • Image montage
  • Image cross-correlation
  • Point spread deconvolution

Computed fields

Computed fields are an excellent methodology for implementing image processing functions. In particular, the ability to assign components to hold arbitrary pixel information allows possibilities not traditionally available, such as holding values which are not RGB colour planes but other information. However, the inability to typecast a field (i.e. specify its per-pixel memory storage) is a significant limitation, since it prevents restricting the amount of memory allocated. It would be very useful to be able to create unsigned 8-bit fields, which would suit the large majority of images and reduce the memory load by a factor of four. Being able to use integer arithmetic would also improve speed - most image processing suites optimise many of these operations this way. The development of computed variables (which are intended to extend and replace computed fields) allows data of all different types to be created. In the meantime some support for types could be fitted into computed fields.

Computed fields currently cache their copy of the image pipeline at every point. This is very expensive in memory and unnecessary. Some mechanism for releasing these caches could be provided. A simplistic mechanism, where each operation in the chain releases the cache of the previous step once it has created its own cache, will be sufficient unless there are multiple operations derived from a common source. Currently a field does not "know" how many other fields depend on it, so some mechanism (essentially a new access count) for registering this interest is probably required.

CMGUI could additionally provide a better "user-interface" to enable images to be easily handled from the command-line. Several possibilities are listed:

  • An image is currently read into a texture, from which a field must be generated before any image processing can occur. It would be better to have a single object (which may be mapped internally) which reinforces to the user that an image is simply a discrete field. Textures could remain graphical objects which are created when needed, but most often do not need to be manipulated by users.

  • It would be easy to automatically create a containing Node/element for an image as it is read in, perhaps in a separate image region. The dimensions of this element could either match the dimensions of the image, or better still could be scaled according to specified pixel resolutions, e.g.:

    gfx read image file.png resolution 0.05 0.05 0.1

would indicate that X and Y resolutions were 0.05 mm, and the Z resolution was 0.1 mm, and the nodes could be created accordingly.

  • At present in order to view an image, it is necessary to either create a material, or to use the texture as data, in which case it is sampled at the resolution of the graphical element. There is some sense in making the image available on the element without explicitly creating any other objects.

Kernels

One of the more general-purpose image processing operations is to convolve an image with a kernel. A kernel is essentially a template describing a local neighbourhood, where the elements of the kernel specify the local weightings given to each neighbouring pixel, and each pixel of the input image is replaced by the convolution of its neighbourhood with the kernel. A general purpose convolution allows for the application of Gaussian smoothing, local averaging, edge detection, Laplacian functions, and many other so-called kernel operations.

The current implementation does not contain the ability to perform a generic convolution. Several routines which perform equivalent functions are independently defined. Furthermore, the only specification of a kernel is a single value for the radius, allowing only a cuboid kernel with preset values.

Additionally, kernel convolutions should form the basis for many other image operations, including the ability to perform ranking operators, which lead to erosion and dilation functions, and rolling-ball or top-hat filters.

There is the potential for CMGUI to contain powerful convolution operators. One possible way to include this functionality would be to allow a kernel field to be defined, with its values assigned either from a predefined list or by listing each value on the command line, and then to allow this field to be used as a kernel both in a general-purpose convolution function and in other functions where appropriate.
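
The sketch below (Python with numpy/scipy, purely illustrative and not CMGUI code) shows the idea of a single general-purpose convolution where the kernel is supplied as data, so that smoothing, edge detection and other kernel operations differ only in the kernel values; the image and kernels here are placeholders.

  import numpy as np
  from scipy import ndimage

  image = np.random.rand(64, 64)          # placeholder 2D image

  box_3x3 = np.full((3, 3), 1.0 / 9.0)    # local-average kernel
  laplacian = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)

  smoothed = ndimage.convolve(image, box_3x3, mode='nearest')
  edges = ndimage.convolve(image, laplacian, mode='nearest')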

Masks

A very common requirement is for the ability to define a mask, which constrains the image processing to a pre-defined region. This would easily be incorporated by allowing fields to be used as masks in other operations.

Borders

It is important to consider what happens to pixels close to an image border when performing operations that involve local neighbourhoods of images. There are typically three options:

1 Any pixels outside the image domain are set to zero. This is commonly used for correlation functions.

2 Border pixels are copied outwards as far as needed, typically used for morphological and segmentation functions.

3 Pixels are mirrored away from the border, used for edge detection, low pass and Nth order functions.

The external values are typically generated on-the-fly when needed, but at least the method to be used should be available.
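
As a purely illustrative sketch (Python/numpy, not CMGUI code), the three border strategies correspond to the 'constant', 'edge' and 'reflect' padding modes applied before a neighbourhood operation; the image here is a placeholder.

  import numpy as np

  image = np.arange(16, dtype=float).reshape(4, 4)   # placeholder image
  radius = 2

  zero_padded = np.pad(image, radius, mode='constant', constant_values=0)  # option 1
  copied = np.pad(image, radius, mode='edge')                              # option 2
  mirrored = np.pad(image, radius, mode='reflect')                         # option 3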

At present, it seems as though borders are not handled well in images. An image is treated as a single 1D data array which wraps circularly. This leads to the following effect: for a 5-pixel radius erode operation, a region initially present only on the right of the image becomes evident on the left as well, which is certainly not the desired result. Examples are given for the functions described below.

The IMAQ documentation indicates that image borders are allocated with the image memory, and values copied in to this as required by each function. Additionally, the first pixel of (each line of) the image is 8-byte-aligned in memory, increasing processing speed by up to 30%.

Connectivity

It is common to be able to specify the connectivity between pixels, 4- or 8-way in 2D and 6- or 26-way in 3D. This is most commonly applied when segmenting images and identifying and measuring structures or objects. This is currently not available.
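
A minimal sketch of what this would provide (Python/scipy, illustrative only, not CMGUI code): labelling a binary image with edge-only versus edge+corner connectivity, which changes whether diagonally touching objects are merged.

  import numpy as np
  from scipy import ndimage

  binary = np.random.rand(32, 32) > 0.7   # placeholder binary image

  s4 = ndimage.generate_binary_structure(2, 1)   # 4-way (edges only) in 2D
  s8 = ndimage.generate_binary_structure(2, 2)   # 8-way (edges + corners) in 2D

  labels4, n4 = ndimage.label(binary, structure=s4)
  labels8, n8 = ndimage.label(binary, structure=s8)
  # n8 <= n4: 8-way connectivity merges diagonally touching objects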

Colour spaces

It is common when processing colour images to be able to convert between various colour spaces. Having computed fields enables these various representations of the information to be carried around easily, although there are currently no methods for converting between them. In particular, useful colour spaces include:

  • 'RGB' (red, green, blue).
  • 'HSL' (hue, saturation, luminance (or intensity)), particularly useful for segmentation of colour images.
  • 'XYZ' - a CIE colour space close to the human vision system.
  • 'L*a*b*' - another CIE space which linearises differences, useful because colours appear as different (to a human) as their distance apart in this space. Again, this can be very useful for segmentation.
  • 'CMY' (cyan, magenta, yellow) - probably not needed.
  • 'YIQ' - for TV and broadcasting, again probably not needed.
  • 'Grayscale' - several possible conversions should be available, including (R+G+B)/3 and 0.299R+0.587G+0.114B.

The components of computed fields should allow for individual (or grouped) colour planes to be extracted as necessary, according to those most useful for a particular segmentation.
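
For example, the two grayscale conversions listed above are simple per-pixel weightings (Python/numpy sketch, illustrative only; the image is a placeholder):

  import numpy as np

  rgb = np.random.rand(64, 64, 3)                 # placeholder RGB image, values 0..1
  r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

  gray_mean = (r + g + b) / 3.0                   # simple average
  gray_luma = 0.299 * r + 0.587 * g + 0.114 * b   # weighted (luma) conversion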

Image arithmetic

There is a good range of general arithmetic operators available (as general field operators), including acos, add, asin, atan, atan2, cos, divide_components, dot_product, magnitude, multiply_components, normalise, power, scale, sin, and tan. Missing are logical operators (such as and, or, xor, max, min, mask, etc.), which are especially useful when working with binary segmented images.
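
The kind of operations meant here, sketched in Python/numpy (illustrative only, with placeholder images):

  import numpy as np

  a = np.random.rand(32, 32) > 0.5    # binary segmented images
  b = np.random.rand(32, 32) > 0.5
  img1 = np.random.rand(32, 32)       # greyscale images
  img2 = np.random.rand(32, 32)

  union = np.logical_or(a, b)
  intersection = np.logical_and(a, b)
  exclusive = np.logical_xor(a, b)
  pixel_max = np.maximum(img1, img2)   # per-pixel maximum of two images
  masked = np.where(a, img1, 0.0)      # use a binary field as a mask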

Resampling

When working with images, the computed fields propagate information about the data being transferred. These parameters are:

  • dimension The dimension of the texture
  • element_dimension The dimension of the elements looked up for the texture coordinates
  • maximums The maximum values of the texture coordinates in the element dimensions
  • minimums The minimum values of the texture coordinates in the element dimensions
  • sizes The number of pixels in each dimension, spread out from minimum to maximum
  • texture_coordinate_field The field used to map the regular grid of values (between the minimums and maximums) onto the elements

To change the sampling or extent of an image, use the 'resample' computed field to change these parameters. Additionally, since any field (not just an image) can be used with these spatial operators, a 'resample' computed field should be put into the chain beforehand so that these parameters are defined.

Constants and inputs

The computed field commands make a distinction between inputs which are variable and specified as computed fields, and inputs which are constant and specified by value. This distinction should be rationalised.

Source code implementation

Some rationalisation of the source code could be performed. Maybe there is a better way to define just what is special about each computed field operation and reduce the amount of parser code.

Many of the image processing operations result from applying a kernel to the data over all the pixels. This could be done with a single field with a kernel as one of its inputs. All the existing commands could be supported, by simply supplying the correct kernel either through another computed field or as a matrix of values.

New functions

A method for constructing kernel matrices has been developed. A field "kernel_matrix" is defined for the 2D/3D case. The dimension, the radius and the filter name (from the predefined list) serve as the inputs for defining kernel fields.

To implement the convolution operation, a field called "convolution_filter" has also been developed. The input image, texture coordinate field and kernel matrix field serve as its inputs.

Examples

Examples are present for most of the functions. Some are well documented, and some are not. One thing that should change: there are significant problems using JPG images for image processing - the 8x8 pixel blocks that subdivide a JPG image can show severe artifacts, especially for edge detection and other noise-amplifying algorithms. The preferred image format is PNG, for which both 8-bit and 24-bit lossless (but compressed) images can be created.

Image processing functions

This section aims to cover each of the image processing functions, detail what it does and indicate what else may be useful. Where the above general comments are applicable, they are indicated without further detail. The qualifiers described in the Default Values section above are removed from the commands unless important.

The following image is used as a test image for examining the functions, with resulting images shown.

<img src="test.png" />

Adaptive Adjust Contrast

Adjusts the intensity of each pixel according to its local mean and standard deviation in relation to the global standard deviation.

CMGUI command:

gfx define field adaptive_adjust_contrast
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>

Example:

image_contrast_adjustment

Qualifiers:

Kernel, Border

Function:

Computes the global image mean (Gmean) and standard deviation (Gsd). For each pixel, it then computes the local mean (Lmean) and standard deviation (Lsd) over a local kernel. If Lsd is zero, the result is zero; otherwise the pixel value is modified using the ratio of Gsd to Lsd as:

y[i] = 1 / (1 + exp(Gsd/Lsd * (Lmean - x[i])))

The result is to accentuate regions of sharp intensity change or of noise. When applied to the test image, the resulting image is shown.

<img src="adaptive_adjust_contrast.png" />

  • In addition to the kernel not being flexible, the radius is preset to 2.
  • This seems like a harsh filter effect which would possibly work better if weighted and combined with the original image.
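
A minimal sketch of the formula above (Python/scipy, illustrative only, not the CMGUI implementation), using a square neighbourhood of the preset radius 2 and a placeholder image:

  import numpy as np
  from scipy import ndimage

  x = np.random.rand(64, 64)          # placeholder intensity image, 0..1
  size = 2 * 2 + 1                    # radius 2 -> 5x5 neighbourhood

  g_sd = x.std()                                           # Gsd
  l_mean = ndimage.uniform_filter(x, size=size)            # Lmean
  l_sqmean = ndimage.uniform_filter(x * x, size=size)
  l_sd = np.sqrt(np.maximum(l_sqmean - l_mean ** 2, 0.0))  # Lsd

  safe_sd = np.where(l_sd > 0.0, l_sd, 1.0)                # avoid division by zero
  y = np.where(l_sd > 0.0,
               1.0 / (1.0 + np.exp((g_sd / safe_sd) * (l_mean - x))),
               0.0)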

Adjust Contrast

Adjusts the intensity of each pixel according to a user-defined weighting, after first scaling over the intensity range of the image.

CMGUI command:

gfx define field adjust_contrast
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <gain #[5]{integer}>
  <cutoff #[0.5]>

Example:

image_contrast_adjustment

Qualifiers:

Border

Function:

Computes global image maximum (Max) and minimum (Min). For each pixel, the value is modified as:

y[i] = 1 / (1 + exp(gain * (cutoff - (x[i] - Min) / (Max - Min))))

The result is to alter image contrast. When applied to the test image, with a gain of 20 and a cutoff of 0.4, the resulting image is shown.

<img src="adjust_contrast.png" />

  • cutoff must lie between 0.0 and 1.0, and indicates the threshold for application of the contrast gain.
  • gain does not need to be an integer, and specifies how strongly the adjustment is applied. Presumably values less than 1.0 would reduce, rather than enhance, contrast.
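
A minimal sketch of this contrast adjustment (Python/numpy, illustrative only, not the CMGUI implementation), using the gain of 20 and cutoff of 0.4 from the example above and a placeholder image:

  import numpy as np

  def adjust_contrast(x, gain=20.0, cutoff=0.4):
      lo, hi = x.min(), x.max()
      scaled = (x - lo) / (hi - lo)       # scale over the intensity range first
      return 1.0 / (1.0 + np.exp(gain * (cutoff - scaled)))

  y = adjust_contrast(np.random.rand(64, 64))   # placeholder image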

Adjusted Gaussian

This filter is identical to the standard Gaussian Filter, except that it computes the global image mean, then subtracts it from each pixel before applying the Gaussian, and then adds it back on afterwards.

CMGUI command:

gfx define field adjusted_gaussian
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <sigma #[1]>

Example:

none

Qualifiers:

Kernel, Border

Function:

See Gaussian Filter. The resulting image with sigma = 2.2 is shown.

<img src="adjusted_gaussian.png" />

  • Rather than creating a new filter with an ambiguous name, it would be better to have a qualifier which could be added to the standard command.
  • For the example shown, the output is identical to the Gaussian Filter.

Binomial Filter

A low-pass smoothing filter for which the filter weights are set proportional to the binomial coefficients. For large filter sizes, the binomial filter tends towards the Gaussian, or normal, weights.

CMGUI command:

gfx define field binomial_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <centre_weight #[1]{integer}>
  <standard_deviation #[2]{integer}>

Example:

image_smooth

Qualifiers:

Border

Function:

This function seems to be coded differently from the other smoothing functions: it appears to apply the same filter multiple times, generating its effect recursively. The resulting image with standard_deviation = 2.5 and centre_weight = 1 is shown.

<img src="binomial_filter.png" />

  • Possibly better implemented as a kernel available to a general linear kernel filter, along with Gaussian filters and a Moving Average filter.
  • For the example I tried, varying the centre_weight had no discernible effect.

BVC Decomp

Adapted from code documentation: Perform an image decomposition operation. The method decomposes the input image into the sum (f) of a bounded variation component (u) and an oscillating component (v). The oscillating component contains the texture and the noise. The method is based on minimizing a functional of two variables. When one variable is fixed, an orthogonal projection method is used to minimize the functional:

F(u,v) = J(u) + (1/(2*lambda)) ||f - u - v||^2,
where    v in {G_{mu}}^d = {div(g) | g = (g1,g2), g1, g2 in L^2(R) and ||g||_{infinity} <= lambda},
         u in BV, and J(u) is the total variation of u.

REFERENCE: J.F. Aujol, et al., "Image decomposition into a bounded variation component and an oscillating component," J. of Math. Imaging and Vision, Vol. 22: 71-88, 2005

CMGUI command:

gfx define field bvc_decomp
  <number_of_iterations #[0]{>0,integer}>
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <result bounded_variation|oscillating|reconstruction|difference>
  <tou #[0]>
  <lambda #[1]>
  <mu #[0]>

Example:

bvc_decomp

Qualifiers:

Border

Function:

This function outputs either the bounded variation, the oscillating component of that variation, the reconstruction 'f = u + v', or the difference between the reconstruction and the original image. With parameters 'tou = 0.25', 'lambda = 0.2', 'mu = 1' and 'number_of_iterations = 10' the resulting four images are shown.

<img src="bvc_variation.png" /> <img src="bvc_oscillating.png" /> <img src="bvc_reconstruction.png" /> <img src="bvc_difference.png" />

  • The parameters tou (tau?), mu and lambda do not give meaningful indications of their effect. The tou parameter has the greatest influence, and the effect of the others is comparatively small.
  • It may be useful to be able to return several of the various results into different components of the resulting field.
  • There is no indication of what this function might be useful for.

Canny Filter

A separable edge detection filter, which should therefore be faster than other edge detection techniques.

CMGUI command:

gfx define field canny_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <sigma #[1]>

Example:

image_edge_extraction

Qualifiers:

Border

Function:

Computes Gaussian blur of image, then uses a Sobel gradient/orientation technique to compute the resulting image value. The image generated with 'sigma = 0.1' is shown.

<img src="canny_filter.png" />

  • The sigma value defines the standard deviation of the Gaussian blurring function applied before edge detection. This code (along with others) would benefit from a standard routine to compute (and perhaps apply) a Gaussian filter.
  • This filter seems to return '0' to indicate the presence of an edge, which is the inverse of the usual definition.

Color Based Segment

Segments an image based on variation from a background mean.

CMGUI command:

gfx define field color_based_segment
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>

Example:

image_segmentation

Qualifiers:

none

Function:

Computes the mean value over a "background" region, which appears to be a square in the centre of the image. At each pixel, the "distance" between that pixel's value and the mean is computed. If that distance is less than 0.8 of the maximum distance, then the result is 1, otherwise 0. The image generated is shown.

<img src="color_based_segment.png" />

  • The "background" region seems somewhat arbitrary, although this code is difficult to follow.
  • The threshold of 0.8 is hard-coded and should be a parameter.

Cube Plugin All

Dilate Filter

Replaces each pixel with the maximum in the neighbouring region.

CMGUI command:

gfx define field dilate_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <radius #[0]{>0,integer}>

Example:

image_smooth

Qualifiers:

Kernel, Border

Function:

At each pixel, the new value is the maximum value from the neighbouring pixels according to the size of the kernel. When applied to the test image with a radius of 4, the resulting image is shown.

<img src="dilate_filter.png" />

  • This filter may be better implemented as a morphological filter, which could include erode, dilate, open, close and other morphological operations.
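
A sketch of this family of morphological operations as max/min (rank) filters over a square neighbourhood, as suggested above (Python/scipy, illustrative only, radius 4 and a placeholder image):

  import numpy as np
  from scipy import ndimage

  image = np.random.rand(64, 64)      # placeholder image
  size = 2 * 4 + 1                    # radius 4 -> 9x9 neighbourhood

  dilated = ndimage.maximum_filter(image, size=size, mode='nearest')
  eroded = ndimage.minimum_filter(image, size=size, mode='nearest')
  opened = ndimage.maximum_filter(eroded, size=size, mode='nearest')   # open = erode then dilate
  closed = ndimage.minimum_filter(dilated, size=size, mode='nearest')  # close = dilate then erode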

Edge Detection

Detects edges in the image using a variety of techniques.

CMGUI command:

gfx define field edge_detection
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <operator roberts|sobel|prewitt|isotropic|laplacian>
  <global_threshold #[0]>

Example:

image_edge_extraction

Qualifiers:

Kernel, Border

Function:

Returns the edge value as a normalised combination of the appropriate kernels at each pixel. The images created for the isotropic, Laplacian, Prewitt, Roberts and Sobel filters respectively, with 'global_threshold = 0', are shown.

<img src="edge_detection_isotropic.png" /> <img src="edge_detection_laplacian.png" /> <img src="edge_detection_prewitt.png" /> <img src="edge_detection_roberts.png" /> <img src="edge_detection_sobel.png" />

Further results for the Sobel edge detection filter with thresholds of 0.3, 0.5 and 0.7 are shown.

<img src="edge_detection_sobel_03.png" /> <img src="edge_detection_sobel_05.png" /> <img src="edge_detection_sobel_07.png" />

  • This filter seems to be implemented incorrectly for the Prewitt, Roberts and Sobel filters, for which the result should be the maximum of the kernel convolutions in each direction, whereas this code computes the sum of the two kernels.
  • Hardcoded kernels do not give any option for 3D edge detection, even though the dimension of the filter can be specified. There may not be equivalent 3D filters though.
  • Certainly the Laplacian filter may be applied with different-sized kernels than the provided 3x3, and is commonly applied with 5x5 or 7x7.
  • The global_threshold parameter gives a full scale image if it is zero, otherwise it creates a binary image with the given threshold.

Erode Filter

Replaces each pixel with the minimum in the neighbouring region.

CMGUI command:

gfx define field erode_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <radius #[0]{>0,integer}>

Example:

image_smooth

Qualifiers:

Kernel, Border

Function:

At each pixel, the new value is the minimum value from the neighbouring pixels according to the size of the kernel. When applied to the test image with a radius of 4, the resulting image is shown.

<img src="erode_filter.png" />

  • This filter may be better implemented as a morphological filter, which could include erode, dilate, open, close and other morphological operations.

First Order Statistics

This filter computes the local mean and standard deviation about each pixel of the image.

CMGUI command:

gfx define field first_order_statistics
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <radius #[1]{>0,integer}>

Example:

mean_std

Qualifiers:

Kernel, Border

Function:

Computes a square "kernel" of the given radius and uses it to compute a local mean and standard deviation at each pixel.

Fuzzy Clustering

This filter constructs a segmentation into a number of classes using a fuzzy C-means clustering technique.

CMGUI command:

gfx define field fuzzy_clustering
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <number_of_classes #[10]{>0,integer}>

Example:

fuzzy_clustering

Qualifiers:

Kernel, Border

Function:

A fuzzy C-means technique is used, whereby an iterative process classifies each pixel into a cluster by examining its local mean relative to the mean of the cluster. The resulting image segmented into 5 classes (4 + black) is shown.

<img src="fuzzy_clustering.png" />

  • The kernel is hard-coded to have a radius of one.

Gabor Filter

This filter performs a Gabor transform on the image.

CMGUI command:

gfx define field gabor_filter
  <direction VALUES>
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <frequencies VALUES>
  <sigma #[1]>

Example:

gabor_filter

Qualifiers:

Kernel, Border, Dimension

Function:

The Gabor filter is essentially a Gaussian modulated by a sinusoidal function. The resulting image with 'sigma = 3.0', 'frequencies = 1 0' and 'direction = 1 2' is shown.

<img src="gabor_filter.png">

  • This function crashes if defined on a field that already exists, or crashes if another field is defined subsequently.
  • The code variable 'cord' is hard-coded to a size of '2' which will fail if a 'dimension > 2' is used.

Gaussian Filter

This filter applies a Gaussian kernel at each pixel.

CMGUI command:

gfx define field gaussian_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <sigma #[1]>

Example:

image_smooth

Qualifiers:

Kernel, Border

Function:

A Gaussian kernel of an appropriate size (for the sigma value specified) is constructed according to the following formula:

g[i] = (2.7 ^ (-0.5 * i * i / (sigma * sigma))) / (sigma * 2.5)

where 'i' varies about the centre of the kernel. This kernel is then applied to each pixel, one dimension at a time. The resulting image with a sigma of 2.2 is shown.

<img src="gaussian_filter.png" />

  • There is no provision for specifying a different Gaussian weighting in different dimensions.
  • This is very poorly coded in that "2.7" should actually be "e", and "2.5" should actually be "sqrt(2*pi)".
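
For comparison, a sketch of a properly normalised separable Gaussian (Python/scipy, illustrative only, not the CMGUI implementation), using e and sqrt(2*pi) as suggested above and renormalising the truncated kernel; the truncation radius of 3*sigma is an assumption:

  import numpy as np
  from scipy import ndimage

  def gaussian_kernel_1d(sigma, radius=None):
      if radius is None:
          radius = int(np.ceil(3.0 * sigma))   # assumed truncation radius
      i = np.arange(-radius, radius + 1, dtype=float)
      g = np.exp(-0.5 * i * i / (sigma * sigma)) / (sigma * np.sqrt(2.0 * np.pi))
      return g / g.sum()                       # renormalise the truncated kernel

  image = np.random.rand(64, 64)               # placeholder image
  g = gaussian_kernel_1d(2.2)
  smoothed = ndimage.convolve1d(image, g, axis=0, mode='nearest')
  smoothed = ndimage.convolve1d(smoothed, g, axis=1, mode='nearest')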

Haar Wavelet Decomp

Decomposes an image using a Haar wavelet transform.

CMGUI command:

gfx define field haar_wavelet_decomp
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <number_of_levels #[1]{>0,integer}>

Example:

image_wavelet_transform

Qualifiers:

none

Function:

The image generated with 'number_of_levels = 2' is shown.

<img src="haar_wavelet_decomp.png" />

  • This would be better implemented by having the ability to pass a wavelet transform type to a generic wavelet decomposition routine. See the wavelet_decomp function for more details.

Haar Wavelet Reconstruct

Heat Equation

Histogram-Based Threshold

Segments an image based on a histogram of the input image.

CMGUI command:

gfx define field histogram_based_threshold
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>

Example:

image_segmentation

Qualifiers:

none

Function:

Computes an "intensity" image which is the normalised average of all image planes. Computes a 256-bin histogram from the intensity image, and a global mean. It appears that the threshold is computed using a clustering technique which balances the means of the histogram above and below the threshold. The image generated is shown.

<img src="histogram_based_threshold.png" />

  • There is no reason to compute an "intensity" image.
  • It would be good to have some details specified somewhere of the particular algorithm used - there are a number of histogram-based thresholding techniques available.

Histogram Equalize

Histogram Normalize

Histogram Stretch

Image Approximation

Image Contour

Image Correlation

Image Enhancement

Enhances contrast within an image.

CMGUI command:

gfx define field image_enhancement
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <enhance_rate #[1]>

Example:

image_contrast_adjustment

Qualifiers:

Kernel, Border

Function:

At each pixel, a local mean (Lmean) is computed over the kernel. The new pixel value is computed as:

y[i] = Lmean + enhance_rate * (x[i] - Lmean)

The resulting image is then zero-one normalised. When applied to the test image with an enhance_rate of 2.5, the resulting image is shown.

<img src="image_enhancement.png" />

  • In addition to the kernel not being flexible, the radius is preset to 2.
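
A minimal sketch of this enhancement (Python/scipy, illustrative only, not the CMGUI implementation), with the preset radius of 2 and a placeholder image:

  import numpy as np
  from scipy import ndimage

  def enhance(x, enhance_rate=2.5, radius=2):
      l_mean = ndimage.uniform_filter(x, size=2 * radius + 1)   # local mean over the kernel
      y = l_mean + enhance_rate * (x - l_mean)
      return (y - y.min()) / (y.max() - y.min())                # zero-one normalise

  result = enhance(np.random.rand(64, 64))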

Image Gamma

Image Mask

Image Mean Value

Image Threshold

Segments an image based on a threshold field.

CMGUI command:

gfx define field image_threshold
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <object_label #[0]{integer}>
  <threshold_field FIELD_NAME[.COMPONENT_NAME]|none[none]>

Example:

image_segmentation

Qualifiers:

none

Function:

Computes an "intensity" image which is the average of all image planes. At each pixel, the result is 'object_label' if the pixel intensity is less than the threshold, otherwise '1 - object_label'. The image generated for a threshold value of 0.67 is shown.

<img src="image_threshold.png" />

  • For some reason, the threshold value must be specified in a field, but in the code there can only be a single constant threshold value.
  • There is no reason to compute an "intensity" image.
  • It is restrictive to only be able to compare against a single threshold value. For many applications, it is important to be able to specify one or more ranges (with minima and maxima) for each image plane.
  • The object_label should be identified as being only 0 or 1, not other integers.
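
A sketch of the thresholding behaviour described above (Python/numpy, illustrative only, not the CMGUI implementation), with a placeholder RGB image:

  import numpy as np

  def image_threshold(rgb, threshold=0.67, object_label=0):
      intensity = rgb.mean(axis=-1)            # average of all image planes
      below = intensity < threshold
      return np.where(below, object_label, 1 - object_label)

  binary = image_threshold(np.random.rand(64, 64, 3))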

Image TV Restoration

Iteration Threshold

Segments an image based on a histogram of the input image.

CMGUI command:

gfx define field iteration_threshold
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>

Example:

image_segmentation

Qualifiers:

none

Function:

Computes an "intensity" image which is the normalised average of all image planes. The threshold is computed using a clustering technique which balances the means of the histogram above and below the threshold by adjusting the threshold until the following formula is satisfied:

(mu1 + mu2) / 2 = T

where mu1 and mu2 are the means of the intensities below and above the threshold T.

The image generated is shown.

<img src="iteration_threshold.png" />

  • There is no reason to compute an "intensity" image.
  • This appears to result in the same threshold as the histogram-based thresholding technique.
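
A sketch of this iterative threshold (Python/numpy, illustrative only, not the CMGUI implementation; assumes both classes stay non-empty during iteration):

  import numpy as np

  def iteration_threshold(intensity, tol=1e-4):
      t = intensity.mean()                       # initial guess
      while True:
          mu1 = intensity[intensity < t].mean()  # mean below the threshold
          mu2 = intensity[intensity >= t].mean() # mean above the threshold
          t_new = 0.5 * (mu1 + mu2)
          if abs(t_new - t) < tol:
              return t_new
          t = t_new

  intensity = np.random.rand(64, 64)             # placeholder intensity image
  t = iteration_threshold(intensity)
  segmented = intensity >= t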

K Nearest Mean

Local Frequency

Local Histogram Features

Local Mean Smooth

Local StD

Local Thresholding

Median Filter

Computes the median value of neighbouring pixels.

CMGUI command:

gfx define field median_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <radius #[0]{>0,integer}>

Example:

image_smooth, local_frequency, stereology

Qualifiers:

Kernel, Border

Function:

At each pixel, the new value is the median value from a sorted list of neighbouring pixels according to the size of the kernel. When applied to the test image with a radius of 4, the resulting image is shown.

<img src="median_filter.png" />

  • This filter could be better implemented as a generic rank filter, where values in the sorted list other than the median may be chosen. Many other morphological filters can be implemented as rank filters, including erode and dilate.
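
A sketch of the median as one case of a generic rank filter, per the suggestion above (Python/scipy, illustrative only, radius 4 and a placeholder image); choosing the lowest or highest rank gives erode and dilate:

  import numpy as np
  from scipy import ndimage

  image = np.random.rand(64, 64)      # placeholder image
  size = 2 * 4 + 1                    # radius 4 -> 9x9 neighbourhood
  n = size * size

  median = ndimage.rank_filter(image, rank=n // 2, size=size, mode='nearest')
  eroded = ndimage.rank_filter(image, rank=0, size=size, mode='nearest')       # minimum
  dilated = ndimage.rank_filter(image, rank=n - 1, size=size, mode='nearest')  # maximum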

Morphology Thinning

Power Spectrum

Region Label

Region Maximum

Second Order Hermite

Shock Filter

Sobel Filter

Detects edges in the image using a Sobel filter.

CMGUI command:

gfx define field sobel_filter
  <field FIELD_NAME[.COMPONENT_NAME]|none[none]>
  <radius #[1]{>0,integer}>

Example:

image_edge_extraction

Qualifiers:

Border

Function:

Returns a binary edge map with a somewhat arbitrary threshold. Results for the Sobel filter with radii of 1, 2, 3 and 5 are shown.

<img src="sobel_filter_1.png" /> <img src="sobel_filter_2.png" /> <img src="sobel_filter_3.png" /> <img src="sobel_filter_5.png" />

  • This filter seems to be implemented incorrectly - the result should be the maximum of the kernel convolutions in each direction, whereas this code computes the sum of the two kernels.
  • Although the implementation is completely different, the essential features are duplicated in the edge detection filter with the Sobel option. The differences are that the radius gives a different aspect to the filter size, and a threshold is computed to binarize the image.
  • The threshold is restricted to one of the values {0.2, 0.3, .., 0.9}, and is chosen such that the cumulative total greater than that threshold is greater than 0.999 of the largest proportion between thresholds. This seems to create some unusual results.
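
For comparison, a sketch of a conventional Sobel edge map (Python/scipy, illustrative only, not the CMGUI implementation): convolve with the x and y kernels and combine the responses per pixel, either as a gradient magnitude or as the maximum of the two directional responses, with an explicitly arbitrary threshold for the binary map:

  import numpy as np
  from scipy import ndimage

  image = np.random.rand(64, 64)      # placeholder image

  gx = ndimage.sobel(image, axis=1, mode='nearest')
  gy = ndimage.sobel(image, axis=0, mode='nearest')

  magnitude = np.hypot(gx, gy)
  maximum = np.maximum(np.abs(gx), np.abs(gy))
  edges = magnitude > 0.5 * magnitude.max()    # arbitrary threshold for a binary map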

Spatial Skeleton

Steerable Filter

Stereology Measures

Throw Away Weakest

Volterra Filter

Wavelet Decomp

Wavelet Reconstruct

Wiener Filter

Missing Functions

Crop

As far as I can tell, the only place to crop an image is when reading from a file.

Resample

Because an image is a discrete field, it is important to be able to resample the image at either a higher or lower resolution.