Configure Accuracy Settings

You can adjust parameters and specify additional ones to achieve more precise calibration and accuracy results. Several parameters are required for classification and object-detection models. All parameters are propagated to the Accuracy Checker tool.

To configure accuracy settings, click the gear icon next to the model name in the Projects table, or open the settings from the INT8 calibration tab before starting the optimization process. Once you have specified your parameters, you are directed back to your previous window, either the Projects table or the INT8 tab.

Accuracy settings depend on the model usage. The default usage of a model is Generic. Specify Classification, Object Detection, Instance Segmentation, Semantic Segmentation, or Generic usage in the drop-down list in the Accuracy Settings. If you choose Object Detection, specify whether the model is of the SSD or YOLO type. For YOLO, additionally specify the version: V2 or Tiny V2.

Refer to the table below to see available parameters for each usage type:

| Usage | Configuration Parameters |
|-------|--------------------------|
| Classification | Preprocessing configuration: Separate background class, Normalization<br>Metric configuration: Metric, Top K |
| Object Detection SSD | Preprocessing configuration: Resize type, Color space, Separate background class, Normalization<br>Post-processing configuration: Prediction boxes<br>Metric configuration: Metric (Auto), Overlap threshold, Integral |
| Object Detection YOLO V2 and YOLO Tiny V2 | Preprocessing configuration: Resize type, Color space, Separate background class, Normalization<br>Post-processing configuration: Prediction boxes, NMS overlap<br>Metric configuration: Metric (Auto), Overlap threshold, Integral |
| Semantic Segmentation | Preprocessing configuration: Resize type, Color space<br>Post-processing configuration: Segmentation mask encoding, Segmentation mask resizing<br>Metric configuration: Metric, Argmax |
| Instance Segmentation | Adapter configuration: Input info layer, Output layer masks, Output layer boxes, Output layer classes, Output layer scores<br>Preprocessing configuration: Resize type, Color space<br>Metric configuration: Metric, Threshold start, Threshold step, Threshold end<br>Annotation conversion configuration: Separate background class |
| Generic | N/A. Accuracy measurement for generic models is not available. |

Do not change optional settings unless you are well aware of their impact.
For more details on parameter settings for calibration and accuracy checking, refer to the command-line documentation.
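
All of the settings above are propagated to the Accuracy Checker tool, which consumes a YAML configuration file. For illustration only, below is a hand-written sketch of what such a configuration might look like for an Object Detection SSD model. The model and dataset names are placeholders, and the exact file that DL Workbench generates may differ:

```yaml
models:
  - name: my-ssd-model           # placeholder model name
    launchers:
      - framework: dlsdk         # OpenVINO launcher
        adapter: ssd             # corresponds to the Object Detection SSD usage
    datasets:
      - name: my_voc_dataset     # placeholder dataset name
        preprocessing:
          - type: auto_resize    # Resize type: Auto
        postprocessing:
          - type: resize_prediction_boxes   # Prediction boxes: ResizeBoxes
        metrics:
          - type: map                       # Metric: mAP
            overlap_threshold: 0.5          # Overlap threshold
            integral: 11point               # Integral: 11 Point
```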

Adapter Configuration

Adapter parameters define the conversion of inference results into a metrics-friendly format. These parameters are required for instance segmentation models. DL Workbench supports only TensorFlow* and ONNX* instance segmentation models. ONNX* instance segmentation models have separate output layers for masks, boxes, predictions, and confidence scores, while TensorFlow* models have one layer for masks and another layer for boxes, predictions, and confidence scores.

Example of an ONNX* instance segmentation model: instance-segmentation-security-0010
Example of a TensorFlow* instance segmentation model: Mask R-CNN

| Parameter | Values | Explanation |
|-----------|--------|-------------|
| Input info layer | Layer name | Name of the layer with image metadata such as height, width, and depth |
| Output layer masks | Layer name | Coordinates of detected object masks |
| Output mask detections | Layer name | TensorFlow*-specific parameter. Box coordinates, predictions, and confidence scores for detected objects |
| Output layer boxes | Layer name | ONNX*-specific parameter. Box coordinates for detected objects |
| Output layer classes | Layer name | ONNX*-specific parameter. Predictions for detected objects |
| Output layer scores | Layer name | ONNX*-specific parameter. Confidence scores for detected objects |
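
In Accuracy Checker terms, these settings roughly correspond to the mask_rcnn adapter and, for the image metadata input, to the launcher inputs section. A sketch of the ONNX* variant might look as follows; the layer names are placeholders, and the exact layout is an assumption rather than the file DL Workbench generates:

```yaml
launchers:
  - framework: dlsdk
    inputs:
      - name: im_info        # Input info layer (placeholder layer name)
        type: IMAGE_INFO     # marks the input carrying image height, width, depth
    adapter:
      type: mask_rcnn
      raw_masks_out: masks   # Output layer masks (placeholder)
      boxes_out: boxes       # Output layer boxes, ONNX-specific (placeholder)
      classes_out: classes   # Output layer classes, ONNX-specific (placeholder)
      scores_out: scores     # Output layer scores, ONNX-specific (placeholder)
```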

Preprocessing Configuration

Preprocessing configuration parameters define how to process images prior to inference with a model.

| Parameter | Values | Explanation |
|-----------|--------|-------------|
| Resize type | Auto | Resize images to the model input dimensions |
| Color space | RGB<br>BGR | Transform the image color space from RGB to BGR or back |
| Separate background class | Yes<br>No | Use label index 0 to denote the background in an image |
| Normalization: mean | [0; 255] | The values to be subtracted from the corresponding image channels |
| Normalization: standard deviation | [0; 255] | The values to divide the image channels by |
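
In an Accuracy Checker configuration, these options appear as preprocessing steps. A minimal sketch, assuming BGR input images and example per-channel normalization values:

```yaml
preprocessing:
  - type: auto_resize      # Resize type: Auto — fit images to the model input dimensions
  - type: bgr_to_rgb       # Color space: transform BGR input to RGB
  - type: normalization
    mean: [123.68, 116.78, 103.94]  # example values subtracted from each channel
    std: [58.4, 57.1, 57.4]         # example values each channel is divided by
```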

Post-Processing Configuration

Post-processing parameters define how to process the results of inference with a model. Post-processing is applied to prediction values and/or annotation data after inference and before metric calculation.

| Parameter | Values | Explanation |
|-----------|--------|-------------|
| Prediction boxes | None<br>ResizeBoxes<br>ResizeBoxes NMS | Resize prediction boxes and/or apply Non-Maximum Suppression (NMS) to make sure that each detected object is identified only once |
| NMS overlap | [0; 1] | Non-maximum suppression overlap threshold used to merge detections |
| Segmentation mask encoding | Annotation | Transfer mask colors to class labels using the color mapping from the metadata in the annotation of a dataset |
| Segmentation mask resizing | Prediction | Resize the output mask of the model to the original image size |
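
These options correspond to Accuracy Checker post-processing steps. The sketch below combines the object-detection options (first two steps) with the semantic-segmentation options (last two) for illustration; a real configuration would contain only the steps relevant to one usage:

```yaml
postprocessing:
  # Object detection: Prediction boxes = ResizeBoxes NMS
  - type: resize_prediction_boxes    # scale predicted boxes back to the input image size
  - type: nms
    overlap: 0.45                    # NMS overlap threshold
  # Semantic segmentation:
  - type: encode_segmentation_mask   # Segmentation mask encoding
    apply_to: annotation             # map annotation mask colors to class labels
  - type: resize_segmentation_mask   # Segmentation mask resizing
    apply_to: prediction             # resize the predicted mask to the original size
```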

Metric Configuration

Metric parameters specify metrics for post-inference measurements.

| Parameter | Values | Explanation |
|-----------|--------|-------------|
| Metric | mAP<br>COCO Precision<br>Mean IoU | The metric used to evaluate the accuracy of a model |
| Overlap threshold | [0; 1] | For ImageNet and Pascal VOC datasets only. Minimum intersection-over-union (IoU) value required to qualify a predicted bounding box as a true positive |
| Integral | Max<br>11 Point | For ImageNet and Pascal VOC datasets only. Integral type used to calculate average precision |
| Max detections | Positive integers | For COCO datasets only. Maximum number of predictions per image. If a model returns more predictions, those with the lowest confidence are ignored. |
| Threshold start | [0; 1] | For instance segmentation models only. Lower bound of the intersection-over-union (IoU) threshold range |
| Threshold step | [0; 1] | For instance segmentation models only. Increment of the intersection-over-union (IoU) threshold |
| Threshold end | [0; 1] | For instance segmentation models only. Upper bound of the intersection-over-union (IoU) threshold range |
| Argmax | On | For semantic segmentation models only. Apply argmax to the model output; required for accuracy measurements with models that do not apply argmax internally |
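
Each row above maps to a metric entry in the Accuracy Checker configuration. A sketch with one example metric per usage type; the values shown are examples, not recommendations:

```yaml
metrics:
  # Classification
  - type: accuracy
    top_k: 5                 # Top K
  # Object detection on ImageNet / Pascal VOC
  - type: map                # Metric: mAP
    overlap_threshold: 0.5   # Overlap threshold
    integral: max            # Integral: Max
  # Object detection on COCO
  - type: coco_precision     # Metric: COCO Precision
    max_detections: 100      # Max detections
  # Semantic segmentation
  - type: mean_iou           # Metric: Mean IoU
    use_argmax: True         # Argmax: On
```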

Annotation Conversion Configuration

Annotation conversion parameters define how the annotations of a dataset are converted.

| Parameter | Values | Explanation |
|-----------|--------|-------------|
| Separate background class | Yes<br>No | For instance segmentation models only. Specifies whether the selected model was trained on a dataset with an additional background class |
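
This setting corresponds to the has_background flag in the annotation conversion section of an Accuracy Checker configuration. A minimal sketch, assuming a COCO-style instance segmentation dataset; both the dataset name and the converter choice are assumptions for this example:

```yaml
datasets:
  - name: my_coco_dataset            # placeholder dataset name
    annotation_conversion:
      converter: mscoco_mask_rcnn    # converter depends on the actual dataset
      has_background: True           # Separate background class: Yes
```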

See Also