Classification Workflow


This topic describes the Classification Workflow in ENVI. This workflow uses unsupervised or supervised methods to categorize pixels in an image into different classes. You can perform an unsupervised classification without providing training data, or you can perform a supervised classification where you provide training data and specify a classification method of maximum likelihood, minimum distance, Mahalanobis distance, or Spectral Angle Mapper (SAM).

The sections that follow describe each step of the workflow.

You can also write a script that performs classification using ENVITasks.

Select Input Files for Classification

  1. Click Browse and select a panchromatic or multispectral image, using the Data Selection dialog. The Classification workflow accepts any image format listed in Supported Data Types.
  2. To apply a mask, select the Input Mask tab in the File Selection panel. Masked pixels constitute a separate class in classification output.
  3. Click Next. The Classification Type panel appears and the file opens in a new workflow view. If the selected file is displayed in an active view before you start the workflow, the display bands and image location are retained, as well as any brightness, contrast, stretch, and sharpen settings. The image location is not retained for pixel-based images or those with pseudo or arbitrary projections.

Select a Classification Method

In the Classification Type panel, select the type of workflow you want to follow, then click Next.

Unsupervised Classification Settings

Unsupervised classification clusters pixels in a dataset based on statistics only, without requiring you to define training classes.

ISODATA unsupervised classification starts by calculating class means evenly distributed in the data space, then iteratively clusters the remaining pixels using minimum distance techniques. Each iteration recalculates means and reclassifies pixels with respect to the new means. This process continues until the percentage of pixels that change classes during an iteration is less than the change threshold or the maximum number of iterations is reached.

Preview is not available for unsupervised classification, as ENVI would need to process the entire image in order to provide a preview image.

In the Unsupervised Classification panel, set the values to use for classification.

  1. Enter the Requested Number of Classes to define. The default is 5.
  2. Click the Advanced tab for additional options.
    • Enter the Maximum Iterations to perform. If the Change Threshold % is not met before the maximum number of iterations is reached, the classification process ends. The default is 10.
    • Enter the Change Threshold % to specify when to end the classification process. When the percentage of pixels that change classes during an iteration is less than the threshold value, the classification process ends. The default is 2.
  3. Click Next. The classification process begins, and the status displays on the Unsupervised Classification panel. When the classification process is complete, the Cleanup panel appears.
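As a rough illustration of the iteration described above, the following Python/NumPy sketch clusters pixel spectra with evenly spaced initial means, minimum-distance reassignment, and the change-threshold stopping rule. It is a simplified stand-in for ENVI's ISODATA (it omits class splitting and merging), and every function and parameter name here is illustrative, not part of the ENVI API.

```python
import numpy as np

def isodata_lite(pixels, n_classes=5, max_iters=10, change_threshold_pct=2.0):
    """Minimal ISODATA-style clustering sketch: class means start evenly
    distributed through the data range, then pixels are iteratively
    reassigned to the nearest mean until few enough pixels change class."""
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    means = np.linspace(lo, hi, n_classes)          # evenly spaced initial means
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iters):                      # Maximum Iterations (default 10)
        # Minimum-distance reassignment of every pixel.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        changed_pct = 100.0 * np.mean(new_labels != labels)
        labels = new_labels
        # Recompute each class mean from its current members.
        for k in range(n_classes):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if changed_pct < change_threshold_pct:      # Change Threshold % (default 2)
            break
    return labels, means
```

With the defaults above, iteration stops as soon as fewer than 2% of pixels switch classes, mirroring the Change Threshold % behavior described in this step.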

Supervised Classification Settings

Supervised classification clusters pixels in a dataset into classes based on user-defined training data. The training data can come from an imported ROI file, or from regions you create on the image. The training data must be defined before you can continue in the supervised classification workflow (see Work with Training Data). Once defined, select the classes that you want mapped in the output.

Supervised classification methods include Maximum Likelihood, Minimum Distance, Mahalanobis Distance, and Spectral Angle Mapper (SAM). If you used single-band input data, only Maximum Likelihood and Minimum Distance are available.

In the Supervised Classification panel, select the supervised classification method to use, and define training data.

  1. Under the Algorithm tab, select a classification method from the drop-down list. To adjust optional parameter settings for the algorithms, see Set Advanced Options:
    • Maximum Likelihood: Assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. Each pixel is assigned to the class that has the highest probability (that is, the maximum likelihood). This is the default.
    • Minimum Distance: Uses the mean ROIs for each class and calculates the Euclidean distance from each unknown pixel to the mean ROI for each class. The pixels are classified to the nearest class.
    • Mahalanobis Distance: A direction-sensitive distance classifier that uses statistics for each class. It is similar to maximum likelihood classification, but it assumes all class covariances are equal, and therefore is a faster method. All pixels are classified to the closest training data.
    • Spectral Angle Mapper (SAM): A physically based spectral classification that uses an n-D angle to match pixels to training data. This method determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. When used on calibrated reflectance data, this technique is relatively insensitive to illumination and albedo effects. SAM compares the angle between the mean training spectrum and each pixel spectrum in n-D space. Smaller angles represent closer matches to the reference spectrum, and each pixel is classified to the class with the smallest angle.
  2. Define the training data to use for classification. You must define a minimum of two classes, with at least one training sample per class.

    You can write a script to calculate training data statistics using the ROIStatistics task or TrainingClassificationStatistics task.

  3. Enable the Preview check box to see a preview of the settings before you click OK to process the data. The preview is calculated only on the area in the view and uses the resolution level at which you are viewing the image. See Preview for details on the results. To preview a different area in your image, pan and zoom to the area of interest and re-enable the Preview option.
  4. Click Next. The classification process begins, and the status displays on the Supervised Classification panel. When the classification process is complete, the Cleanup panel appears.
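The Minimum Distance rule described above can be sketched in Python/NumPy: compute a mean spectrum per class from the training samples, then assign each pixel to the nearest class mean. The function names are illustrative and not part of the ENVI API.

```python
import numpy as np

def train_class_means(samples, labels, n_classes):
    """Mean spectrum per class, computed from the training-sample pixels."""
    return np.stack([samples[labels == k].mean(axis=0) for k in range(n_classes)])

def minimum_distance_classify(pixels, class_means):
    """Minimum Distance rule: assign each pixel to the class whose mean
    spectrum is nearest in Euclidean distance."""
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

This is why the workflow requires at least two classes with at least one training sample each: without a sample there is no mean spectrum to measure distance against.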

Set Advanced Options

In the Algorithm tab, you can apply no thresholding, one threshold value for all classes, or a different threshold value for each class. Adjusting the threshold for a class includes more or fewer pixels in that class. Enabling the Preview check box lets you preview the adjusted values. To specify multiple values, select a class in the Training Data tree, enter the value, and press the Enter key to accept it.

Maximum Likelihood

Set Probability Threshold

Minimum Distance

Set thresholding options for Set Standard Deviations from Mean and/or Set Maximum Distance Error.

Set Standard Deviations from Mean

Set Maximum Distance Error

Mahalanobis Distance

Set Maximum Distance Error: Select one of the following options:

Spectral Angle Mapper

Set Maximum Spectral Angle: Select one of the following options:

Compute Rule Images

You can export rule images to a file at the end of the workflow and use them to perform additional analysis outside of the Classification workflow, such as apply different stretches or thresholding, or in the Rule Classifier to create a new classification image without having to recalculate the entire classification. To compute rule images for the selected classification algorithm, enable the Compute Rule Images check box. The output is a single file containing one rule image per class, with measurements for each pixel related to each class. The measures for the rule images differ based on the classification algorithm you choose. In contrast, the final classification image is a single-band image that contains the final class assignments; pixels are either classified or unclassified.
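Conceptually, a rule stack and its collapse to the final single-band classification can be sketched as follows, shown here for a distance-based rule where smaller values are better matches. The function names and the threshold parameter are illustrative, not ENVI code.

```python
import numpy as np

def rule_images_min_distance(pixels, class_means):
    """One rule band per class: the Euclidean distance of every pixel to
    that class mean (for distance-based rules, smaller = better match)."""
    return np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)

def classify_from_rules(rules, max_distance=None):
    """Collapse the rule stack into a single-band class image; pixels whose
    best rule value exceeds the threshold stay unclassified (-1)."""
    labels = rules.argmin(axis=1)
    if max_distance is not None:
        labels = np.where(rules.min(axis=1) <= max_distance, labels, -1)
    return labels
```

Keeping the rule stack around is what lets you re-threshold and reclassify later without recomputing the per-class measurements.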

The pixel values in the rule images are calculated as follows:

Maximum Likelihood classification calculates the following discriminant function for each pixel in the image:

gi(x) = ln p(ωi) − (1/2) ln|Σi| − (1/2)(x − mi)ᵀ Σi⁻¹ (x − mi)

where:

i = the ith class

x = n-dimensional data (where n is the number of bands)

p(ωi) = probability that class ωi occurs in the image; assumed the same for all classes

|Σi| = determinant of the covariance matrix of the data in class ωi

Σi⁻¹ = the inverse of the covariance matrix of a class

mi = mean ROI of a class
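The discriminant function above translates directly to NumPy. This is an illustrative sketch (a small, invertible covariance matrix is assumed), not ENVI code:

```python
import numpy as np

def ml_discriminant(x, mean, cov, prior):
    """gi(x) = ln p(wi) - 0.5*ln|Si| - 0.5*(x - mi)^T Si^-1 (x - mi)."""
    d = x - mean
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * d @ np.linalg.inv(cov) @ d)
```

A pixel is assigned to whichever class yields the largest gi(x); the quadratic term penalizes distance from the class mean, weighted by the class covariance.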

 

Minimum Distance classification calculates the Euclidean distance from each pixel in the image to each class mean:

D = √((x − mi)ᵀ (x − mi))

where:

D = Euclidean distance

i = the ith class

x = n-dimensional data (where n is the number of bands)

mi = mean ROI of a class

 

Mahalanobis Distance classification calculates the Mahalanobis distance from each pixel in the image to each class mean:

D = √((x − mi)ᵀ Σi⁻¹ (x − mi))

where:

D = Mahalanobis distance

i = the ith class

x = n-dimensional data (where n is the number of bands)

Σi⁻¹ = the inverse of the covariance matrix of a class

mi = mean ROI of a class
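The Mahalanobis distance above is a covariance-weighted Euclidean distance: bands with large variance count for less. A minimal NumPy sketch (illustrative, not ENVI code):

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """D = sqrt((x - mi)^T Si^-1 (x - mi)) from pixel x to one class mean."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With an identity covariance matrix this reduces to ordinary Euclidean distance, which is why Mahalanobis sits between Minimum Distance and Maximum Likelihood in complexity.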

 

Spectral Angle Mapper classification calculates the spectral angle in radians between each pixel in the image and the mean spectral value of each class:

α = cos⁻¹((x · mi) / (‖x‖ ‖mi‖))

where:

α = spectral angle between x and mi

x = n-dimensional data (where n is the number of bands)

mi = mean ROI of a class
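The spectral angle formula above can be sketched in a few lines of NumPy. Because the angle ignores vector length, scaling a spectrum (for example, by a change in illumination) leaves the angle unchanged, which is the basis of SAM's insensitivity to illumination and albedo. The function name is illustrative, not ENVI code:

```python
import numpy as np

def spectral_angle(x, m):
    """Spectral angle (radians) between pixel spectrum x and class mean m."""
    cos_a = np.dot(x, m) / (np.linalg.norm(x) * np.linalg.norm(m))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```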

Define Training Data

You can load previously-created ROIs from a file, or you can create ROIs interactively on the input image. To provide adequate training data, create a minimum of two classes, with at least one region per class. If you applied a mask to the input data, create training samples within the masked area only.

Load Previously Defined Training Data

Click the Load Training Data Set button and select a file that contains training data. Examples include ROIs (.roi or .xml) and shapefiles. When you load training data that uses a different projection from the input image, ENVI reprojects it. If the training data has different spatial extents, only the overlapping area is used for training. When you load a training data set from a file, it replaces any ROIs that you previously drew on the screen.

Tip: If you click the Delete Class or Delete All Classes button to remove ROIs, they will no longer be available to re-open through the Data Manager or Layer Manager. If you change your mind and want to re-open one or more ROI classes, click the Reopen ROIs button and select the ROIs that you need.

Interactively Define Training Data

You can add ROIs to an existing ROI layer that you imported, or create new ROI layers.

Edit Class Properties

You can change the following properties in the Properties tab of the Supervised Classification panel:

Clean Up Classification Results

The optional Cleanup step refines the classification result. You can preview the refinement before you apply the settings.

Tip: Cleanup is recommended if you plan to save the classification vectors to a file in the final step of the workflow. Performing cleanup significantly reduces the time needed to export classification vectors. To write a script that performs cleanup, use the ClassificationAggregation and ClassificationSmoothing tasks.

  1. Enable the check boxes for the cleanup methods you want to use. The following are available:
    • Enable Smoothing: removes speckling noise during cleanup.
    • Enable Aggregation: removes small regions.
  2. Enter values for the cleanup methods you enabled:
    • Specify the Smooth Kernel Size using an odd number (e.g., 3 = 3x3 pixels). The square kernel's center pixel will be replaced with the majority class value of the kernel. The default is 3.
    • Specify the Aggregate Minimum Size in pixels. Regions with a size of this value or smaller are aggregated to an adjacent, larger region. The default is 9.
  3. Enable the Preview option to see the cleanup results in a Preview Window before processing the entire image. You can change the cleanup settings and preview the results again, as needed.
  4. Click Next. The Export panel appears.
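The smoothing step above can be illustrated with a simple majority filter; aggregation would similarly use connected-component labeling to merge small regions into larger neighbors. This brute-force Python/NumPy sketch is for clarity, not performance, and is not ENVI code:

```python
import numpy as np

def majority_smooth(class_img, kernel_size=3):
    """Smoothing-step sketch: replace each interior pixel with the majority
    class inside a kernel_size x kernel_size window (edges are left as-is)."""
    pad = kernel_size // 2
    out = class_img.copy()
    rows, cols = class_img.shape
    for i in range(pad, rows - pad):
        for j in range(pad, cols - pad):
            window = class_img[i - pad:i + pad + 1, j - pad:j + pad + 1]
            values, counts = np.unique(window, return_counts=True)
            out[i, j] = values[counts.argmax()]   # majority class wins
    return out
```

A lone misclassified pixel surrounded by another class (speckle) is absorbed by its neighborhood, which is exactly the noise the Smooth Kernel Size setting targets.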

Export Classification Results

You can write a script to export classification results to a shapefile using the ClassificationToShapefile task, or to ROIs using the ClassificationToPixelROI and ClassificationToPolygonROI tasks.

  1. In the Export Files tab in the Export panel, enable the output options you want.
  2. In the Additional Export tab, enable any other output options you want.
  3. Click Finish to create the output, add the new layers to the Layer Manager, and save the files to the directories you specified. When the export is complete, the workflow view closes. The original data and the export data display in the Image window view. The classes display in the Layer Manager as children of the raster.
  4. You can convert the exported vectors to ROIs, which is described in Region of Interest (ROI) Tool.

See Also

ENVI Classification Files

Class Layers

Classification Tools

Classification Tutorial

Thematic Change Detection