nvidia / retail-object-detection

Retail Object Detection

Description:

RetailObjectDetection detects retail items within an image and classifies each detected object as retail or not.

This model is ready for commercial use.

References:

Citations

  • Tobin, Josh, et al. "Domain randomization for transferring deep neural networks from simulation to the real world." 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2017.
  • Morrical, Nathan, et al. "NViSII: A scriptable tool for photorealistic image generation." arXiv preprint arXiv:2105.13962 (2021).

Using TAO Pre-trained Models

Model Architecture:

Architecture Type: Convolutional Neural Network (CNN)

Network Architecture: EfficientDet, DINO-FAN_base

Input:

Input Type(s): Image

Input Format(s): Red, Green, Blue (RGB)

Input Parameters: 2D

Other Properties Related to Input: RGB Fixed Resolution: 416x416 and 960x544 (W x H); No minimum bit depth, alpha, or gamma.

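As an illustration of the fixed-resolution input, here is a minimal preprocessing sketch for the 416x416 variant. It assumes raw 0-255 RGB pixel values in NCHW layout (consistent with the net-scale-factor: 1 and offsets: 0;0;0 settings in the DeepStream config later in this card); note that the DeepStream config also sets maintain-aspect-ratio: 1, so verify the exact preprocessing against your exported model.

# Minimal, illustrative preprocessing sketch for the 416x416 input.
# Assumptions: raw 0-255 RGB values, NCHW layout, simple resize (no letterboxing).
import numpy as np
from PIL import Image

def preprocess(image_path, width=416, height=416):
    img = Image.open(image_path).convert("RGB")        # force 3-channel RGB
    img = img.resize((width, height), Image.BILINEAR)  # fixed network resolution
    arr = np.asarray(img, dtype=np.float32)            # H x W x 3, values 0-255
    arr = np.transpose(arr, (2, 0, 1))                 # -> 3 x H x W (CHW)
    return arr[np.newaxis, ...]                        # 1 x 3 x H x W batch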

Output:

Output Type(s): Label(s)

Output Format: Label: Text String

Other Properties Related to Output: Category Label(s): returns a single category.


Software Integration:

Runtime Engine(s):

  • TAO - 5.2
  • DeepStream 6.1 or later

Supported Hardware Architecture(s):

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing
  • Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

  • trainable_binary_v1.0
  • deployable_binary_v1.0

Training & Evaluation:

Training Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:

320,000 proprietary synthetic images of objects found in retail settings, generated by randomizing several simulation domains, including:

  • light types, light intensities
  • object sizes, orientations, and locations
  • camera locations
  • background textures
  • flying distractors

The background textures are real images sampled from:

  • Proprietary real images
  • Images taken from a retail checkout counter
  • HDRI texture maps created by NVIDIA Omniverse

Each synthetic image contains one target retail item. The dataset is designed to simulate the diverse environments found in the real world so that the detector learns to extract retail items from noisy backgrounds. The logos on retail items were smudged.
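As a purely illustrative sketch of the per-image randomization described above, the snippet below samples values for the listed simulation domains. All parameter names, ranges, and distributions are hypothetical; the actual data were generated with NViSII and NVIDIA Omniverse.

# Hypothetical per-image domain randomization; names, ranges, and distributions
# are illustrative only, not the actual generation pipeline.
import random

def sample_scene_parameters(background_images, distractor_assets):
    return {
        "light_type": random.choice(["point", "area", "dome"]),             # light types
        "light_intensity": random.uniform(100.0, 2000.0),                   # light intensities
        "object_scale": random.uniform(0.5, 2.0),                           # object sizes
        "object_rotation_deg": [random.uniform(0, 360) for _ in range(3)],  # orientations
        "object_position": [random.uniform(-1, 1) for _ in range(3)],       # locations
        "camera_position": [random.uniform(-2, 2) for _ in range(3)],       # camera locations
        "background_texture": random.choice(background_images),             # real background image
        "flying_distractors": random.sample(
            distractor_assets, k=random.randint(0, min(5, len(distractor_assets)))),
    }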

| dataset | total # images | train # images | val # images |
|---|---|---|---|
| Synthetic data | 1,500,000 | 1,425,000 | 75,000 |
| Real data - checkout counter 45 degree overhead | 107 | 85 | 23 |
| Real data - shelf | 107 | 85 | 22 |
| Real data - conveyor belt | 106 | 84 | 22 |
| Real data - basket | 106 | 84 | 22 |
| Real data - checkout counter barcode scanner view | 125 | 100 | 25 |
| Real data - checkout counter overhead | 98 | 80 | 18 |

Fine-tuning Data

This model is fine-tuned on about 600 real proprietary images from 6 different real environments. In each environment, only 1 image per item is collected.

The fine-tuning data were captured at varying camera heights and fields of view. All fine-tuning data were collected indoors, with retail items placed on checkout counters, shelves, baskets, and conveyor belts. The camera is typically mounted at a height of approximately 10 feet, at a 45-degree angle off the vertical axis, with a close field of view. This content was chosen to reduce the simulation-to-reality gap of the model trained on synthetic data and to improve the model's accuracy and robustness. The logos on retail items were smudged.

Fine-tuning Data Ground-truth Labeling Guidelines

The fine-tuning data were created by human labelers annotating ground-truth bounding boxes and category labels. The following guidelines were used while labeling the training data for NVIDIA Retail Detection models. If you are looking to transfer-learn or fine-tune the models to adapt to your target environment and classes, please follow the guidelines below for better model accuracy; a small sanity-check sketch follows the list.

  1. All objects that fall under the definition of retail items and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px) are labeled with the appropriate class label.
  2. Occlusion: Partially occluded objects that are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and are marked as partially occluded. Objects under 60% visibility are not annotated.
  3. Truncation: An object at the edge of the frame that is 60% or more visible is marked with the truncation flag.
  4. Each frame is not required to have an object.
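If you are preparing your own fine-tuning data, the sketch below applies the size and visibility rules above as a simple sanity check. The annotation layout (bbox as [x, y, width, height] in pixels, visibility as a 0-1 fraction) is hypothetical; adapt the field names to your labeling format.

# Sanity check for the labeling rules above. The annotation layout is hypothetical:
# bbox = [x, y, width, height] in pixels, visibility = visible fraction (0-1).
MIN_SIDE_PX = 10        # rule 1: height >= 10px OR width >= 10px
MIN_VISIBILITY = 0.60   # rules 2-3: objects under 60% visibility are not annotated

def keep_annotation(ann):
    x, y, w, h = ann["bbox"]
    large_enough = (h >= MIN_SIDE_PX) or (w >= MIN_SIDE_PX)
    visible_enough = ann.get("visibility", 1.0) >= MIN_VISIBILITY
    return large_enough and visible_enough

annotations = [
    {"bbox": [12, 30, 8, 9], "visibility": 1.0},    # too small -> dropped
    {"bbox": [50, 60, 40, 80], "visibility": 0.4},  # under 60% visible -> dropped
    {"bbox": [50, 60, 40, 80], "visibility": 0.7},  # kept (marked partially occluded)
]
kept = [a for a in annotations if keep_annotation(a)]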

Evaluation Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:

15,000 proprietary real-world images of objects found in retail settings.

Methodology and KPI

AP50 is calculated using an intersection-over-union (IoU) threshold of 0.5. The KPIs for the evaluation data are reported in the table below. The model is evaluated on AP50 and AR@[0.5:0.95]. Both AP and AR numbers are based on a maximum of 100 detections per image. Please note that the "unseen items" measurements are not applicable to the 100-class detection model.
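For reference, a minimal sketch of the IoU computation underlying the AP50 criterion (boxes in [x1, y1, x2, y2] format) is shown below.

# Minimal IoU sketch underlying the AP50 criterion; boxes are [x1, y1, x2, y2].
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)            # intersection rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts toward AP50 when its IoU with a ground-truth box is at least 0.5.
assert iou([0, 0, 10, 10], [0, 0, 10, 10]) == 1.0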

Binary-class Retail Item Detection Model

| scene | seen items AP50 | seen items AR (MaxDets=100) | unseen items AP50 | unseen items AR (MaxDets=100) |
|---|---|---|---|---|
| checkout counter 45 degree overhead | 0.960 | 0.791 | 0.959 | 0.753 |
| shelf | 0.983 | 0.888 | 0.978 | 0.841 |
| conveyor belt | 1.000 | 0.921 | 0.995 | 0.887 |
| basket | 0.956 | 0.851 | 0.959 | 0.861 |
| checkout counter barcode scanner view | 0.858 | 0.789 | 0.744 | 0.655 |
| checkout counter overhead | 0.990 | 0.915 | 0.993 | 0.910 |
| overall (mean of all scenes) | 0.959 | 0.859 | 0.938 | 0.818 |

Inference:

Engine: TensorRT

Test Hardware:

  • Jetson AGX Xavier
  • Xavier NX
  • Orin
  • Orin NX
  • NVIDIA T4
  • Ampere GPU
  • A2
  • A30
  • L4
  • DGX H100
  • DGX A100
  • L40
  • JAO 64GB
  • Orin NX16GB
  • Orin Nano 8GB

Inference is run on the provided unpruned model at FP16 precision. The model input resolution is 416x416. Inference performance is measured with trtexec on Jetson AGX Orin 64GB and A10. The numbers shown are inference-only performance; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.

| model | device | batch size | latency (ms) | images per second |
|---|---|---|---|---|
| Retail Item Detection (binary) | Jetson AGX Orin 64GB | 1 | 10.43 | 96 |
| Retail Item Detection (binary) | Jetson AGX Orin 64GB | 16 | 131.79 | 121 |
| Retail Item Detection (binary) | Jetson AGX Orin 64GB | 32 | 258.44 | 124 |
| Retail Item Detection (binary) | Tesla A10 | 1 | 4.27 | 234 |
| Retail Item Detection (binary) | Tesla A10 | 16 | 44.94 | 356 |
| Retail Item Detection (binary) | Tesla A10 | 64 | 174.46 | 367 |
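As a quick sanity check on how to read the table, throughput follows directly from batch size and latency; the sketch below uses the Jetson AGX Orin batch-16 row.

# Images per second = batch_size / (latency in seconds); Orin batch-16 row as example.
batch_size = 16
latency_ms = 131.79
images_per_second = batch_size / (latency_ms / 1000.0)
print(round(images_per_second))  # ~121, matching the table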

How to use this model

Instructions to use unpruned model with TAO

In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model component of the experiment spec file when training an EfficientDet-TF2 model. For more information on the experiment spec file, please refer to the RetailDetector notebook and the EfficientDet-TF2 TAO documentation.

# spec file
model:
    name: 'efficientdet-d5'
data:
    loader:
      prefetch_size: 4
      shuffle_file: True
    num_classes: 101 # switch to 2 for RetailDetector_binary model
    image_size: '416x416'
    max_instances_per_image: 10
    train_tfrecords:
       - [train tfrecords]
    val_tfrecords:
       - [validation tfrecords]
    val_json_file: [validation annotation json file path]
train:
    num_examples_per_epoch: 10000 # change to train set size
    ...

evaluate:
    num_samples: 500 # change to test set size
    label_map: # label map file here
    ...

Instructions to deploy these models with DeepStream

Here is an example of using the Retail Item Embedder together with the Retail Item Detector [TODO: add Retail Item Embedder url here] for an end-to-end video analytics application. To do so, deploy these models with the DeepStream SDK, a streaming analytics toolkit for building AI-based video analytics applications. DeepStream supports direct integration of these models into the deepstream sample app.

To deploy these models with DeepStream 6.2, please follow the instructions below:

Download and install the DeepStream SDK. Installation instructions for DeepStream are provided in the DeepStream development guide. The config files for the purpose-built models are located under the DeepStream installation directory; /opt/nvidia/deepstream is the default installation directory, and this path will differ if DeepStream is installed elsewhere.

The sample config files are provided in NVIDIA-AI-IOT (TODO: Update the URL when deepstream_tao_apps are merged with ???). Assuming the repository is cloned under $DS_TAO_APPS_HOME, the following config files are located in $DS_TAO_APPS_HOME/configs/retailDetector_tao:

# Binary-class detector (the primary GIE) inference setting
pgie_retailDetector_binary_config.yml
pgie_retailDetector_binary_config.txt

Key Parameters in pgie_retailDetector_100_tao_config.yml

property:
  gpu-id: 0
  net-scale-factor: 1
  offsets: 0;0;0
  model-color-format: 0
  tlt-model-key: nvidia_tlt
  tlt-encoded-model: ../../models/retailDetector/retailDetector_100.etlt
  model-engine-file: ../../models/retailDetector/retailDetector_100.etlt_b1_gpu0_fp16.engine
  labelfile-path:    ../../models/retailDetector/retailDetector_100_labels.txt
  network-input-order: 1
  infer-dims: 3;416;416
  maintain-aspect-ratio: 1
  batch-size: 1
  ## 0=FP32, 1=INT8, 2=FP16 mode
  network-mode: 2
  num-detected-classes: 100
  interval: 0
  cluster-mode: 3
  output-blob-names: num_detections;detection_boxes;detection_scores;detection_classes
  parse-bbox-func-name: NvDsInferParseCustomEfficientDetTAO
  custom-lib-path: ../../post_processor/libnvds_infercustomparser_tao.so
#Use the config params below for NMS clustering mode
class-attrs-all:
  pre-cluster-threshold: 0.5
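
For reference, the output-blob-names above correspond to EfficientDet's NMS-style outputs. The sketch below is an illustrative host-side decoding of those tensors, assuming they have already been copied from device memory as NumPy arrays with shapes num_detections (1,), detection_boxes (1, max_det, 4), detection_scores (1, max_det), and detection_classes (1, max_det); verify shapes and box ordering against your exported model. In DeepStream itself this step is performed by the custom parser referenced below.

# Illustrative host-side decoding of the EfficientDet NMS-style output tensors.
# Assumed shapes: num_detections (1,), detection_boxes (1, max_det, 4),
# detection_scores (1, max_det), detection_classes (1, max_det).
def decode(num_detections, detection_boxes, detection_scores, detection_classes,
           labels, score_threshold=0.5):
    results = []
    n = int(num_detections[0])                  # number of valid detections
    for i in range(n):
        score = float(detection_scores[0][i])
        if score < score_threshold:             # mirrors pre-cluster-threshold: 0.5
            continue
        cls = int(detection_classes[0][i])
        box = [float(v) for v in detection_boxes[0][i]]  # coords in network input space
        results.append({"label": labels[cls], "score": score, "box": box})
    return results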

In order to decode the bounding-box information from the EfficientDet output tensor, the custom parser function and library must be specified. To run inference with the model, run:

cd $DS_TAO_APPS_HOME/configs/retailDetector_tao
$DS_TAO_APPS_HOME/apps/tao_detection/ds-tao-detection -c retailDetector_100_config.txt -i file://$DS_TAO_APPS_HOME/samples/streams/retailDetector_h264.mp4

The "Deploying to DeepStream" chapter of TAO User Guide provides more details.

Technical blogs

Suggested reading

Ethical Considerations:

The NVIDIA Retail Object Detection model detects retail items; however, no additional information, such as people or other distractors in the background, is inferred. The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.