DeepLearningOptimizedObjectClassification Task

This task uses a trained Deep Learning ONNX model to perform inference on a raster in regions that contain features of interest identified by a grid model (patent pending). The output is an object shapefile of bounding boxes for each class and a grid shapefile of the areas where objects were detected.

Example

; Start the application
e = ENVI()

; Open a raster for classification
; Update the following line with a valid raster path
RasterURI = 'RasterToClassify.dat'
Raster = e.OpenRaster(RasterURI)

; Select a trained object detection model
; Update the following line with a valid object detection model
ObjectModelURI = 'objectModel.envi.onnx'
ObjectModel = ENVIDeepLearningOnnxModel(ObjectModelURI)

; Select a trained grid model
; Update the following line with a valid grid model
GridModelURI = 'gridModel.envi.onnx'
GridModel = ENVIDeepLearningOnnxModel(GridModelURI)

; Get the task from the catalog of ENVITasks
Task = ENVITask('DeepLearningOptimizedObjectClassification')

; Select task inputs
Task.INPUT_RASTER = Raster
Task.INPUT_OBJECT_MODEL = ObjectModel
Task.INPUT_GRID_MODEL = GridModel

; Update based on model accuracy
Task.OBJECT_CONFIDENCE_THRESHOLD = 0.8
Task.GRID_CONFIDENCE_THRESHOLD = 0.8
Task.IOU_THRESHOLD = 0.5

; Set task outputs
Task.OUTPUT_OBJECT_VECTOR_URI = e.GetTemporaryFilename('.shp', /CLEANUP_ON_EXIT)
Task.OUTPUT_GRID_VECTOR_URI = e.GetTemporaryFilename('.shp', /CLEANUP_ON_EXIT)

; Run the task
Task.Execute

; Add the output to the Data Manager
e.Data.Add, Task.OUTPUT_OBJECT_VECTOR
e.Data.Add, Task.OUTPUT_GRID_VECTOR

; Display the result
View = e.GetView()
Layer1 = View.CreateLayer(Raster)
Layer2 = View.CreateLayer(Task.OUTPUT_GRID_VECTOR)
Layer3 = View.CreateLayer(Task.OUTPUT_OBJECT_VECTOR)

Syntax

Result = ENVITask('DeepLearningOptimizedObjectClassification')

Input parameters (Set, Get): CLASS_FILTER, CUDA_DEVICE_ID, ENHANCE_DISPLAY, GRID_CONFIDENCE_THRESHOLD, INPUT_GRID_MODEL, INPUT_METADATA, INPUT_OBJECT_MODEL, INPUT_RASTER, IOU_THRESHOLD, OBJECT_CONFIDENCE_THRESHOLD, OUTPUT_GRID_VECTOR_URI, OUTPUT_OBJECT_VECTOR_URI, RUNTIME, VISUAL_RGB

Output parameters (Get only): OUTPUT_GRID_VECTOR, OUTPUT_OBJECT_VECTOR

Properties marked as "Set" are those that you can set to specific values. You can also retrieve their current values any time. Properties marked as "Get" are those whose values you can retrieve but not set.

Input Parameters

CLASS_FILTER (optional)

Specify the class labels to exclude from the output classification results. This will filter out the specified labels to provide a more targeted and customized output.
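For example, assuming CLASS_FILTER accepts an array of class-label strings and that the model was trained with labels such as 'Vehicle' and 'Building' (hypothetical names), the filter could be set like this:

; Exclude hypothetical class labels from the output
; (substitute labels defined in your trained object model)
Task.CLASS_FILTER = ['Vehicle', 'Building']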

CUDA_DEVICE_ID (optional)

If the RUNTIME parameter is set to CUDA, specify the target GPU device ID. If a valid ID is provided, the classification task will execute on the specified CUDA-enabled GPU. If the ID is omitted or invalid, the system defaults to GPU device 0. Use this parameter to explicitly control GPU selection in multi-GPU environments.
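For example, to target the second GPU in a multi-GPU system (device IDs are zero-based; the string value 'CUDA' for RUNTIME is assumed here):

; Run inference on a CUDA-enabled GPU other than the default device 0
Task.RUNTIME = 'CUDA'
Task.CUDA_DEVICE_ID = 1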

ENHANCE_DISPLAY (optional)

Specify whether to apply an additional small stretch to the processed data to suppress noise and enhance feature visibility. The optional stretch is effective for improving visual clarity in imagery acquired from aerial platforms or sensors with higher noise profiles.
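Assuming this parameter is a boolean flag, it could be enabled for noisier aerial imagery as follows:

; Apply the optional display stretch (assumed boolean flag)
Task.ENHANCE_DISPLAY = 1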

GRID_CONFIDENCE_THRESHOLD (optional)

Specify a floating-point threshold value between 0 and 1.0. Bounding boxes with a confidence score less than this value will be discarded before applying the IOU_THRESHOLD. The default value is 0.2. Decreasing this value generally results in more classification bounding boxes throughout the scene. Increasing it results in fewer classification bounding boxes.

INPUT_GRID_MODEL (required)

Specify the trained ONNX model (.envi.onnx) that was designed for grid-based analysis to classify the INPUT_RASTER.

INPUT_METADATA (optional)

Specify an optional hash containing metadata that will be passed to, and accessible by, the ONNX preprocessor and postprocessor functions.
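For example, a minimal sketch of passing a hash with hypothetical keys that custom preprocessor and postprocessor functions could read (the keys are placeholders, not values the task itself interprets):

; Pass arbitrary key/value pairs through to the ONNX pre/postprocessors
; (the keys below are hypothetical examples)
Task.INPUT_METADATA = Hash('sensor', 'aerial', 'gsd_meters', 0.3)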

INPUT_OBJECT_MODEL (required)

Specify the trained ONNX model (.envi.onnx) to use for object detection classification in the grid-detected cells.

INPUT_RASTER (required)

Specify the raster to classify.

IOU_THRESHOLD (optional)

Specify the Intersection over Union (IoU) threshold used during non-maximum suppression to measure the overlap of a predicted versus actual bounding box for an object.
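For reference, IoU for two axis-aligned boxes is the ratio of their intersection area to their union area. The sketch below illustrates the standard computation for two hypothetical boxes given as [x1, y1, x2, y2]; it is not the task's internal code:

; Minimal IoU sketch for two hypothetical boxes [x1, y1, x2, y2]
boxA = [10.0, 10.0, 50.0, 50.0]
boxB = [30.0, 30.0, 70.0, 70.0]

; Overlap extents, clamped to zero when the boxes do not intersect
ix = ((boxA[2] < boxB[2]) - (boxA[0] > boxB[0])) > 0
iy = ((boxA[3] < boxB[3]) - (boxA[1] > boxB[1])) > 0
intersection = ix * iy

; Union = areaA + areaB - intersection
union = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1]) + $
        (boxB[2] - boxB[0]) * (boxB[3] - boxB[1]) - intersection

Print, 'IoU = ', intersection / union   ; roughly 0.14 for these boxes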

OBJECT_CONFIDENCE_THRESHOLD (optional)

Specify a floating-point threshold value between 0 and 1.0. Bounding boxes with a confidence score less than this value will be discarded before applying the IOU_THRESHOLD. The default value is 0.2. Decreasing this value generally results in more classification bounding boxes throughout the scene. Increasing it results in fewer classification bounding boxes.
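As in the example above, both confidence thresholds are set on the task before calling Execute; lower values favor finding more candidates, while higher values favor precision:

; Permissive screening: keep more candidate boxes (more false positives)
Task.OBJECT_CONFIDENCE_THRESHOLD = 0.3
; Strict screening: keep only high-confidence boxes (fewer detections)
;Task.OBJECT_CONFIDENCE_THRESHOLD = 0.9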

OUTPUT_GRID_VECTOR_URI (optional)

Specify a string with the fully qualified path and filename for OUTPUT_GRID_VECTOR. This is a shapefile containing rectangles in areas where one or more features are present.

OUTPUT_OBJECT_VECTOR_URI (optional)

Specify a string with the fully qualified path and filename for OUTPUT_OBJECT_VECTOR. This is a shapefile containing rectangles around detected objects.
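The example at the top of this topic writes both outputs to temporary files; to keep the results, fully qualified paths (hypothetical here) can be supplied instead:

; Write the output shapefiles to permanent, hypothetical locations
Task.OUTPUT_OBJECT_VECTOR_URI = 'C:\classification\detected_objects.shp'
Task.OUTPUT_GRID_VECTOR_URI = 'C:\classification\detected_grid.shp'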

RUNTIME (optional)

Specify the execution environment for the classification task. When this parameter is set to CUDA, inference runs on a CUDA-enabled GPU (see CUDA_DEVICE_ID).

VISUAL_RGB (optional)

Specify whether to encode the output raster as a three-band RGB composite (red, green, blue) for color image processing. This ensures consistent band selection from ENVI display types (such as RGB, CIR, and pan) and supports integration of diverse data sources (such as MSI, panchromatic, and VNIR) without band mismatch.
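Assuming this parameter is a boolean flag, it could be enabled as follows when combining multispectral and panchromatic sources:

; Request the three-band RGB composite encoding (assumed boolean flag)
Task.VISUAL_RGB = 1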

Output Parameters

OUTPUT_GRID_VECTOR (required)

This is a reference to the output grid vector: the output from the grid model that provides the cells classified by the object detection model.

OUTPUT_OBJECT_VECTOR (required)

This is a reference to the output object vector: the output from the object model containing the features that were detected.
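As shown in the example above, both output vectors can be added to the Data Manager and displayed. The saved shapefile can also be reopened in a later session from the path given in OUTPUT_OBJECT_VECTOR_URI, for example (hypothetical path):

; Reopen the object shapefile later from its saved, hypothetical path
ObjectVector = e.OpenVector('C:\classification\detected_objects.shp')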

Methods

Execute

Parameter

ParameterNames

See ENVI Help for details on these ENVITask methods.

Properties

DESCRIPTION

DISPLAY_NAME

NAME

REVISION

TAGS

See the ENVITask topic in ENVI Help for details.

Version History

Deep Learning 3.0

Introduced

Deep Learning 4.0

Renamed from TensorFlowOptimizedObjectClassification task.

Added parameters: CLASS_FILTER, CUDA_DEVICE_ID, ENHANCE_DISPLAY, INPUT_METADATA, RUNTIME, and VISUAL_RGB.

See Also

DeepLearningObjectClassification Task, TrainDeepLearningGridModel Task, TrainDeepLearningObjectModel Task