Hello Classification C++ Sample¶
This sample demonstrates how to do inference of image classification models using the Synchronous Inference Request API.
Only models with a single input and a single output are supported.
The following C++ API is used in the application:
Feature | API | Description
---|---|---
OpenVINO Runtime Version | | Get OpenVINO API version
Basic Infer Flow | | Common API to do inference: read and compile a model, create an infer request, configure input and output tensors
Synchronous Infer | | Do synchronous inference
Model Operations | | Get inputs and outputs of a model
Tensor Operations | | Get a tensor shape
Preprocessing | | Set an image of the original size as input for a model with a different input size. Resize and layout conversions are performed automatically by the corresponding plugin just before inference.
Options | Values
---|---
Validated Models |
Model Format | OpenVINO Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
Supported devices |
Other language realization |
How It Works¶
At startup, the sample application reads command-line parameters, prepares input data, loads the specified model and image to the OpenVINO Runtime plugin, and performs synchronous inference. It then processes the output data and writes it to a standard output stream.
For more information, refer to the detailed description of each integration step in Integrate OpenVINO Runtime with Your Application.
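The flow described above can be sketched with the OpenVINO 2.x C++ API. This is a minimal illustration, not the sample's exact source: image loading and result printing are omitted, and error handling is skipped for brevity.

```cpp
#include <openvino/openvino.hpp>

int main(int argc, char* argv[]) {
    // argv: <path_to_model> <path_to_image> <device_name>
    ov::Core core;

    // Read the model (OpenVINO IR or ONNX).
    std::shared_ptr<ov::Model> model = core.read_model(argv[1]);

    // Compile the model for the target device, e.g. "CPU" or "GPU".
    ov::CompiledModel compiled = core.compile_model(model, argv[3]);

    // Create a synchronous inference request.
    ov::InferRequest request = compiled.create_infer_request();

    // Get the input tensor; filling it with image data is
    // application-specific and omitted here.
    ov::Tensor input = request.get_input_tensor();

    // Run inference synchronously; infer() blocks until results are ready.
    request.infer();

    // Read back the output tensor, e.g. the class probabilities.
    ov::Tensor output = request.get_output_tensor();
    return 0;
}
```

Building this sketch requires the OpenVINO development package and linking against the OpenVINO Runtime library.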
Building¶
To build the sample, use the instructions available at Build the Sample Applications section in OpenVINO™ Toolkit Samples.
Running¶
Before running the sample, specify a model and an image:
- You may use public or Intel's pre-trained models from the Open Model Zoo. The models can be downloaded using the Model Downloader.
- You may use images from the media files collection, available online in the test data storage.
To run the sample, use the following command:
hello_classification <path_to_model> <path_to_image> <device_name>
NOTES:
By default, samples and demos in OpenVINO Toolkit expect input with BGR order of channels. If you trained your model to work with RGB order, you need to manually rearrange the default order of channels in the sample or demo application, or reconvert your model using Model Optimizer with the --reverse_input_channels argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Embedding Preprocessing Computation.
Before running the sample with a trained model, make sure that the model is converted to the OpenVINO Intermediate Representation (OpenVINO IR) format (*.xml + *.bin) using Model Optimizer.
The sample accepts models in the ONNX format (*.onnx) that do not require preprocessing.
Example¶
Install the openvino-dev Python package to use Open Model Zoo Tools:
python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
Download a pre-trained model, using:
omz_downloader --name googlenet-v1
If a model is not in the OpenVINO IR or ONNX format, it must be converted. You can do this using the model converter:
omz_converter --name googlenet-v1
Perform inference of the car.bmp image, using the googlenet-v1 model on a GPU, for example:
hello_classification googlenet-v1.xml car.bmp GPU
Sample Output¶
The application outputs top-10 inference results.
[ INFO ] OpenVINO Runtime version ......... <version>
[ INFO ] Build ........... <build>
[ INFO ]
[ INFO ] Loading model files: /models/googlenet-v1.xml
[ INFO ] model name: GoogleNet
[ INFO ] inputs
[ INFO ] input name: data
[ INFO ] input type: f32
[ INFO ] input shape: {1, 3, 224, 224}
[ INFO ] outputs
[ INFO ] output name: prob
[ INFO ] output type: f32
[ INFO ] output shape: {1, 1000}
Top 10 results:
Image /images/car.bmp
classid probability
------- -----------
656 0.8139648
654 0.0550537
468 0.0178375
436 0.0165405
705 0.0111694
817 0.0105820
581 0.0086823
575 0.0077515
734 0.0064468
785 0.0043983