You may test your installation and get familiar with the accuracy checker by running the sample.
Once you have installed the accuracy checker, you can evaluate your configurations from the command line:
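A minimal sketch of an invocation, assuming the `accuracy_check` entry point and placeholder paths:

```bash
accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation
```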
All relative paths in config files will be prefixed with the values specified on the command line:
- `-c, --config` path to the configuration file.
- `-m, --models` specifies the directory in which models and weights declared in the config file will be searched.
- `-s, --source` specifies the directory in which input images will be searched.
- `-a, --annotations` specifies the directory in which annotation and meta files will be searched.

You may refer to `-h, --help` for the full list of command line options. Some optional arguments are:
- `-e, --extensions` directory with InferenceEngine extensions.
- `-b, --bitstreams` directory with bitstreams (for Inference Engine with the FPGA plugin).
- `-C, --converted_models` directory to store Model Optimizer converted models (used for the DLSDK launcher only).
- `-tf, --target_framework` framework for inference.
- `-td, --target_devices` devices for inference. You can specify several devices using space as a delimiter.

There is a config file which declares the validation process. Every validated model has to have its entry in the `models` list with a distinct `name` and other properties described below.
There is also a definitions file, which declares global options shared across all models. The config file has priority over the definitions file.
example:
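A minimal sketch of the overall structure (names are placeholders; launcher and dataset options are described below):

```yaml
models:
  - name: model_name
    launchers:
      - framework: caffe
        # launcher-specific options go here
    datasets:
      - name: dataset_name
        # dataset-specific options go here
```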
A launcher is a description of how your model should be executed. Each launcher configuration starts with setting the `framework` name. Currently *caffe* and *dlsdk* are supported. Launcher descriptions differ between frameworks.
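For instance, a dlsdk launcher entry might look like the following sketch (the paths, device, and `classification` adapter are placeholder values for your own model):

```yaml
launchers:
  - framework: dlsdk
    device: CPU
    model: path/to/model.xml
    weights: path/to/model.bin
    adapter: classification
```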
Please view the launcher documentation for details on how to configure the caffe and dlsdk launchers.
A dataset entry describes the data on which the model should be evaluated, all required preprocessing and postprocessing/filtering steps, and the metrics that will be used for evaluation.
If your dataset is a well-known competition problem (COCO, Pascal VOC, ...) and/or can potentially be reused for other models, it is reasonable to declare it in a global configuration file (the definitions file). This way, in your local configuration file you can provide only the `name`, and all required steps will be picked up from the global one. To pass the path to this global configuration, use the `--definition` argument of the CLI.
Each dataset must have:

- `name`: unique identifier of your dataset.
- `data_source`: path to the directory where input data is stored.
- `metrics`: list of metrics that should be computed.

And optionally:

- `postprocessing`: list of postprocessing steps.
- `reader`: approach for data reading. You can specify `opencv_imread` or `pillow_imread` for reading images and `opencv_capture` for reading frames from video. The default reader is `opencv_imread`.
The dataset entry must also contain data related to annotation. You can convert the annotation in place using:

- `annotation_conversion`: parameters for annotation conversion

or use an existing annotation file and dataset meta:

- `annotation`: path to the annotation file. You must convert the annotation to the representation of the dataset problem first; you may choose one of the converters from the annotation converters if a converter for your dataset already exists, or write your own.
- `dataset_meta`: path to the metadata file (generated by the converter).

More detailed information about annotation conversion can be found in the annotation converters documentation.
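As a sketch, in-place conversion of ImageNet-style annotation might look like this (the `imagenet` converter name and `annotation_file` parameter are assumed examples; check the converter documentation for your dataset):

```yaml
annotation_conversion:
  converter: imagenet
  annotation_file: path/to/val.txt
```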
example of dataset definition:
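A minimal sketch (file names, paths, and the metric settings are placeholders):

```yaml
datasets:
  - name: dataset_name
    data_source: path/to/input/data
    annotation: annotation.pickle
    dataset_meta: annotation_meta.pickle
    metrics:
      - type: accuracy
        top_k: 1
```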
Each entry of `preprocessing`, `metrics`, and `postprocessing` must have a `type` field; other options are specific to the type. If you do not provide any other option, it will be picked up from the definitions file.
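For example, a preprocessing pipeline is declared as a list of typed steps; the specific steps and parameter values below are illustrative:

```yaml
preprocessing:
  - type: resize
    size: 256
  - type: crop
    size: 224
```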
You may find the following instructions useful:
Some metrics support providing vector results (e.g. mAP is able to return average precision for each detection class). You can change the view mode for metric results using the `presenter` option (e.g. `print_vector`, `print_scalar`).
example:
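A sketch of a metric entry with a vector presenter (the `map` metric type is illustrative):

```yaml
metrics:
  - type: map
    presenter: print_vector
```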
A typical workflow for testing a new model includes: