Deep Learning accuracy validation framework

Usage

You may test your installation and get familiar with the accuracy checker by running the sample.

Once you have installed the accuracy checker, you can evaluate your configurations with:

python3 accuracy_check.py -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation

All relative paths in config files will be prefixed with the values specified on the command line: model and weights paths with -m, source data paths with -s, and annotation paths with -a.
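For example, given the alexnet launcher from the example configuration below, passing -m /path/to/models would resolve the relative model paths roughly as indicated in the comments; this is an illustrative sketch of the prefixing behaviour rather than a full description of the resolution rules:

launchers:
  - framework: caffe
    # resolved as /path/to/models/public/alexnet/caffe/bvlc_alexnet.prototxt
    model: public/alexnet/caffe/bvlc_alexnet.prototxt
    # resolved as /path/to/models/public/alexnet/caffe/bvlc_alexnet.caffemodel
    weights: public/alexnet/caffe/bvlc_alexnet.caffemodel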

You may refer to -h, --help for the full list of command line options, including optional arguments.

Configuration

The config file declares the validation process. Every validated model has to have its own entry in the models list, with a distinct name and the other properties described below.

There is also a definitions file, which declares global options shared across all models. The config file has priority over the definitions file.

example:

models:
  - name: model_name
    launchers:
      - framework: caffe
        model: public/alexnet/caffe/bvlc_alexnet.prototxt
        weights: public/alexnet/caffe/bvlc_alexnet.caffemodel
        adapter: classification
        batch: 128
    datasets:
      - name: dataset_name

Launchers

A launcher is a description of how your model should be executed. Each launcher configuration starts with setting the framework name. Currently caffe and dlsdk are supported. The remaining launcher options depend on the chosen framework.

Please refer to the framework-specific launcher documentation for the full list of options.
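As a rough illustration, a dlsdk launcher entry might look like the sketch below; the device option and the .xml/.bin model layout are assumptions based on typical Inference Engine usage rather than a verified configuration, so check the launcher documentation for the authoritative option names:

launchers:
  - framework: dlsdk
    device: CPU                               # assumed option: target device for inference
    model: public/alexnet/ir/alexnet.xml      # assumed IR topology file path
    weights: public/alexnet/ir/alexnet.bin    # assumed IR weights file path
    adapter: classification
    batch: 128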

Datasets

A dataset entry describes the data on which the model should be evaluated, all required preprocessing and postprocessing/filtering steps, and the metrics that will be used for evaluation.

If your dataset is a well-known competition problem (COCO, Pascal VOC, ...) and/or can potentially be reused for other models, it is reasonable to declare it in the global configuration file (definitions file). This way your local configuration file only needs to provide the dataset name, and all required steps will be picked up from the global one. To pass the path to this global configuration, use the --definition argument of the CLI.
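A minimal sketch of this reuse pattern is shown below, assuming the shared dataset is declared under a top-level datasets list in the definitions file; the dataset name and its fields are illustrative only:

# definitions file (passed via --definition): full dataset description
datasets:
  - name: imagenet_1000_classes        # illustrative dataset name
    annotation: annotation.pickle
    data_source: images_folder
    metrics:
      - type: accuracy

# local config file: referencing the dataset by name is enough
datasets:
  - name: imagenet_1000_classes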

Each dataset must have a unique name, a data_source pointing to the directory where the input data is stored, and a list of metrics that should be computed. Optionally, it may also declare preprocessing and postprocessing/filtering steps.

It must also contain annotation-related data. You can either convert the annotation in place by describing the conversion directly in the dataset entry, or use an existing annotation file and dataset meta.
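As an illustration of the in-place conversion option, the dataset entry could carry the conversion parameters itself; the annotation_conversion key, the converter name, and its parameters below are assumptions, so consult the annotation converter documentation for the exact names:

- name: dataset_name
  annotation_conversion:            # assumed key for in-place annotation conversion
    converter: imagenet             # assumed converter name
    annotation_file: val.txt        # assumed converter-specific parameter
  data_source: images_folder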

example of dataset definition:

- name: dataset_name
  annotation: annotation.pickle
  data_source: images_folder
  preprocessing:
    - type: resize
      dst_width: 256
      dst_height: 256
    - type: normalization
      mean: imagenet
    - type: crop
      dst_width: 227
      dst_height: 227
  metrics:
    - type: accuracy

Preprocessing, Metrics, Postprocessing

Each entry of preprocessing, metrics, and postprocessing must have a type field; the other options are specific to the type. If you do not provide any other options, they will be picked up from the definitions file.

Refer to the preprocessing, metrics, and postprocessing documentation for the supported types and their options.

Some metrics support vector results (e.g. mAP is able to return the average precision for each detection class). You can change the view mode for metric results using a presenter (e.g. print_vector, print_scalar).

example:

metrics:
  - type: accuracy
    top_k: 5
    reference: 86.43
    threshold: 0.005
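As a sketch of the presenter option described above, a vector-capable metric could be configured as follows; the map metric type and the presenter key are assumptions based on the description, so verify them against the metrics documentation:

metrics:
  - type: map                    # assumed name of the mean average precision metric
    presenter: print_vector      # assumed key; prints per-class average precision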

Testing new models

Typical workflow for testing a new model includes:

  1. Convert the annotation of your dataset. Use one of the converters from annotation-converters, or write your own if there is no converter for your dataset. You can find detailed instructions on how to use converters in the annotation converters documentation.
python3 convert_annotation.py converter --converter_specific_parameter --output_dir data/annotations
  2. Choose one of the adapters or write your own. An adapter converts the raw output produced by the framework to a high-level, problem-specific representation (e.g. ClassificationPrediction, DetectionPrediction, etc.).
  3. Create an entry in the config file and execute.
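Putting these steps together, a minimal new-model entry might look like the sketch below; the model name, file paths, and batch size are placeholders rather than a tested configuration, and the resulting file would be evaluated with the accuracy_check.py command shown in the Usage section:

models:
  - name: my_new_model                                # placeholder model name
    launchers:
      - framework: caffe
        model: my_new_model/my_new_model.prototxt     # placeholder relative path, prefixed with -m
        weights: my_new_model/my_new_model.caffemodel # placeholder relative path, prefixed with -m
        adapter: classification
        batch: 32
    datasets:
      - name: dataset_name
        annotation: annotation.pickle                 # annotation produced by convert_annotation.py
        data_source: images_folder
        metrics:
          - type: accuracy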