Visual ChangeNet Segmentation Model Card (Commercial)

Model Overview

The Visual ChangeNet-Segmentation Model detects changes in land cover using remote sensing imagery (RSI).
This model is ready for commercial use.

References:

Using TAO Pre-trained Models

Model Architecture:

Architecture Type: Transformer-Based

Network Architecture: Siamese Network

Visual ChangeNet is a state-of-the-art, transformer-based change detection model. It is built on a Siamese network, a class of neural network architectures containing two or more identical subnetworks; the training algorithm updates the parameters across all the subnetworks in tandem. In TAO, Visual ChangeNet takes two images as input, and the end goal is to either classify or segment the change between the "golden" (reference) image and the "test" image. More specifically, this model was trained with the NVDINOv2 backbone, which was trained in a self-supervised manner on NVIDIA proprietary data and achieved state-of-the-art accuracy on zero-shot ImageNet classification. To integrate the ViT backbone into Visual ChangeNet, the ViT-Adapter is used as the neck architecture; it improves accuracy on dense prediction tasks such as object detection and segmentation.
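To make the weight-sharing idea concrete, here is a minimal PyTorch sketch of a Siamese change-segmentation forward pass. It is an illustrative stand-in, not the TAO implementation: the tiny convolutional backbone, fusion by channel concatenation, and the 1x1 prediction head are placeholder assumptions, whereas the actual model uses the NVDINOv2 ViT backbone with a ViT-Adapter neck.

import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Minimal Siamese change-segmentation sketch; not the TAO implementation."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Placeholder backbone (assumption): TAO actually uses an NVDINOv2 ViT
        # with a ViT-Adapter neck rather than this small conv stack.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 head mapping the fused features to per-pixel change logits.
        self.head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, golden: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
        # The same backbone (shared weights) processes both inputs, so its
        # parameters are updated in tandem for the two branches.
        feat_ref = self.backbone(golden)
        feat_test = self.backbone(sample)
        fused = torch.cat([feat_ref, feat_test], dim=1)
        return self.head(fused)  # shape: N x num_classes x H x W

# Two 416 x 416 RGB inputs in NCHW layout, as described under "Input" below.
logits = SiameseChangeNet()(torch.randn(1, 3, 416, 416), torch.randn(1, 3, 416, 416))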
In TAO, two different types of Change Detection networks are supported:

  • Visual ChangeNet-Segmentation - for segmentation of change between two input images.
  • Visual ChangeNet-Classification - for classification of change between two input images.

Visual ChangeNet-Segmentation is specifically intended for change segmentation. In this model card, the Visual ChangeNet-Segmentation model is used to demonstrate land cover semantic change detection on the LandSat-SCD dataset. The model uses an NVDINOv2 backbone pretrained on an NVIDIA proprietary dataset and then fine-tuned on the LandSat-SCD dataset.

Input:

Input Type(s): Images

Input Format(s): Red, Green, Blue (RGB)

Input Parameters: Three-Dimensional (3D)

Other Properties Related to Input:

Two input images:

  • Golden: RGB image of dimensions 416 x 416 x 3 (H x W x C)
  • Sample: RGB image of dimensions 416 x 416 x 3 (H x W x C)

Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (416), W = Width of the images (416)
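As an illustration of the expected layout, the following sketch converts a golden/sample image pair into NCHW float tensors. The file names and the simple [0, 1] scaling are assumptions for the example; the actual TAO pipeline's normalization may differ.

import numpy as np
from PIL import Image

def to_nchw(path: str, size: int = 416) -> np.ndarray:
    """Load an RGB image and return a 1 x 3 x size x size float32 array."""
    img = Image.open(path).convert("RGB").resize((size, size))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # H x W x C, scaled to [0, 1]
    return arr.transpose(2, 0, 1)[None, ...]         # -> N x C x H x W

# Hypothetical file names for the reference and test images.
golden = to_nchw("golden.png")
sample = to_nchw("sample.png")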

Below is a sample pair of pre-change and post-change images, shown side by side with the ground-truth segmentation change map.

Output:

Output Type(s): Segmentation Change Map

Output Format: 3D Vector

Other Properties Related to Output:

Segmentation change map with the same spatial resolution as the input images: 416 x 416 x 10 (H x W x C), where C = number of output change classes.
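As an example of consuming this output, a per-pixel class map can be obtained by taking the argmax over the class channel. The random scores below are a stand-in for a real model output, and treating one index as "no change" is a common convention rather than something this card specifies.

import numpy as np

# Stand-in for the model output: 416 x 416 x 10 per-pixel class scores.
scores = np.random.rand(416, 416, 10).astype(np.float32)

# Argmax over the class channel yields a 416 x 416 map of class indices.
change_map = scores.argmax(axis=-1)  # shape: 416 x 416, values in 0..9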

Software Integration:

Runtime Engine(s):

  • TAO - 5.2

Supported Hardware Architecture(s):

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing
  • Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

  • trainable_v1.0 - NVDINOv2 Visual ChangeNet-Segmentation model LandSat-SCD is trainable.
  • deployable_v1.0 - NVDINOv2 Visual ChangeNet-Segmentation model LandSat-SCD is deployable to DeepStream.
  • deployable_v1.1 - NVDINOv2 Visual ChangeNet-Segmentation model LandSat-SCD is deployable to DeepStream.
  • deployable_v1.2 - NVDINOv2 Visual ChangeNet-Segmentation model LandSat-SCD is deployable to DeepStream.

Training & Evaluation:

Training Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:

Trained on LandSat-SCD, an open-source remote sensing semantic land change detection dataset consisting of 8468 remote sensing (RS) image pairs of resolution 416 x 416. The pairs are randomly split into train, val, and test sets of 6053, 1729, and 686 samples, respectively (a split sketch follows the table below).

| Dataset     | No. of images |
|-------------|---------------|
| LandSat-SCD | 8468          |
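As an illustration of such a random split (using the 6053/1729/686 counts above; the seed is arbitrary and for the example only):

import numpy as np

rng = np.random.default_rng(seed=0)  # arbitrary seed, for illustration only
indices = rng.permutation(8468)      # one index per image pair
train, val, test = np.split(indices, [6053, 6053 + 1729])
assert (len(train), len(val), len(test)) == (6053, 1729, 686)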

Evaluation Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:

Evaluated on the 686-image test split of the open-source LandSat-SCD semantic land change detection dataset.

Methodology and KPI

The performance of the Visual ChangeNet-Segmentation model for multi-class semantic change detection is measured using overall accuracy and the average precision, recall, IoU, and F1 score across all classes; a minimal sketch of how these KPIs can be computed follows the table.

| Model                          | Model Architecture | Testing Images | Precision | Recall | IoU  | F1    | Overall Accuracy |
|--------------------------------|--------------------|----------------|-----------|--------|------|-------|------------------|
| Visual ChangeNet-Segmentation  | Siamese Network    | 686            | 93        | 92.2   | 86.5 | 92.58 | 97.85            |
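For reference, here is a minimal sketch of deriving these KPIs from a C x C confusion matrix. Macro-averaging over classes is an assumption about the exact averaging used.

import numpy as np

def kpis(confusion: np.ndarray):
    """KPIs from a C x C confusion matrix (rows = ground truth, cols = prediction)."""
    tp = np.diag(confusion).astype(np.float64)
    fp = confusion.sum(axis=0) - tp  # predicted as class c but wrong
    fn = confusion.sum(axis=1) - tp  # class c missed by the prediction
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    iou = tp / (tp + fp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    overall_accuracy = tp.sum() / confusion.sum()
    # Macro-average over classes (an assumption about the averaging used).
    return precision.mean(), recall.mean(), iou.mean(), f1.mean(), overall_accuracy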

Inference:

Engine: TensorRT

Test Hardware:

  • Jetson AGX Xavier
  • Xavier NX
  • Orin
  • Orin NX
  • NVIDIA T4
  • Ampere GPU
  • A2
  • A30
  • L4
  • DGX H100
  • DGX A100
  • L40
  • JAO 64GB
  • Orin NX 16GB
  • Orin Nano 8GB

Inference is run on the provided unpruned model at FP16 precision. Inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, NVIDIA T4, and Ampere GPUs. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The numbers shown here reflect inference-only performance; end-to-end performance with streaming video data may vary depending on other bottlenecks in the hardware and software.

NVDINOv2 + ViT-Adapter + Visual ChangeNet

| Platform      | BS | FPS   |
|---------------|----|-------|
| Orin NX 16GB  | 16 | 1.5   |
| AGX Orin 64GB | 16 | 9.41  |
| A2            | 8  | 5.9   |
| T4            | 16 | 2.29  |
| L4            | 16 | 4.68  |
| A30           | 16 | 35.8  |
| L40           | 16 | 11.3  |
| A100          | 32 | 10.8  |
| H100          | 32 | 23.5  |

Using this Model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case for these models is for Visual ChangeNet-Segmentation using RGB images. The model is a Siamese Network that outputs semantic change maps denoting pixel-level change between the two images.

These models are intended for training and fine-tuning with the TAO Toolkit on your own image-comparison datasets. High-fidelity models can be trained on new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.

The models are also intended for edge deployment using TensorRT.

Using the Model with TAO

To use these models as pretrained weights for transfer learning, use the following as a template for the model and train component of the experiment spec file to train a Siamese Network model. For more information on the experiment spec file, see the TAO Toolkit User Guide - Visual ChangeNet-Segmentation.

model:
  backbone:
    type: "vit_large_nvdinov2"      # NVDINOv2 ViT-Large backbone
    pretrained_backbone_path: null  # path to pretrained backbone weights, if any
    freeze_backbone: False          # set to True to keep backbone weights fixed during training

Ethical Considerations:

NVIDIA Visual ChangeNet-Segmentation model detects changes between pair-wise images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed.

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.