Model Creation Python* Sample

This sample demonstrates how to run inference using a model built on the fly from source code, with weights taken from the LeNet classification model, which is known to work well on digit classification tasks. No XML file is needed; the model is created directly in the application.

The following OpenVINO Python API is used in the application:

| Feature | API | Description |
|---|---|---|
| Model Operations | openvino.runtime.Model, openvino.runtime.set_batch, openvino.runtime.Model.input | Managing a model |
| Opset operations | openvino.runtime.op.Parameter, openvino.runtime.op.Constant, openvino.runtime.opset8.convolution, openvino.runtime.opset8.add, openvino.runtime.opset1.max_pool, openvino.runtime.opset8.reshape, openvino.runtime.opset8.matmul, openvino.runtime.opset8.relu, openvino.runtime.opset8.softmax | Description of a model topology using the OpenVINO Python API |
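As an illustration of these opset operations, here is a condensed sketch of describing a LeNet-like topology in source code. It is not the sample's exact source: the real sample builds the full LeNet (two convolution and two fully connected layers) and reads each layer's weights from lenet.bin in order, so the layer shapes below are illustrative.

```python
import numpy as np
from openvino.runtime import Model, opset1
from openvino.runtime import opset8 as ops


def create_model(weights_path: str) -> Model:
    """Describe a LeNet-like topology, consuming FP32 weights from a flat .bin buffer."""
    weights = np.fromfile(weights_path, dtype=np.float32)
    offset = 0

    def next_tensor(shape):
        # Slice the next prod(shape) values off the flat weights buffer.
        nonlocal offset
        length = int(np.prod(shape))
        tensor = weights[offset:offset + length].reshape(shape)
        offset += length
        return tensor

    param = ops.parameter([64, 1, 28, 28], np.float32, 'Parameter')

    # convolution + bias + pooling (illustrative shapes)
    conv1 = ops.convolution(param, ops.constant(next_tensor([20, 1, 5, 5])),
                            [1, 1], [0, 0], [0, 0], [1, 1])
    add1 = ops.add(conv1, ops.constant(next_tensor([1, 20, 1, 1])))
    pool1 = opset1.max_pool(add1, [2, 2], [0, 0], [0, 0], [2, 2], 'ceil')

    # flatten, fully connected layer, and class probabilities
    flat = ops.reshape(pool1, ops.constant([64, -1], np.int64), special_zero=False)
    fc = ops.matmul(flat, ops.constant(next_tensor([2880, 10])),
                    transpose_a=False, transpose_b=False)
    probs = ops.softmax(ops.relu(fc), 1)

    return Model(probs, [param], 'lenet-like')
```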

Basic OpenVINO™ Runtime API is covered by Hello Classification Python* Sample.

| Options | Values |
|---|---|
| Validated Models | LeNet |
| Model Format | Model weights file (*.bin) |
| Supported devices | All |
| Other language realization | C++ |

How It Works

At startup, the sample application does the following:

  • Reads command line parameters

  • Builds a model from the source code and the passed weights file

  • Loads the model and input data to the OpenVINO™ Runtime plugin

  • Performs synchronous inference and processes output data, logging each step in a standard output stream

You can find an explicit description of each sample step in the Integration Steps section of the “Integrate OpenVINO™ Runtime with Your Application” guide.
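The steps above map onto a short, condensed sketch. This is assumed wiring rather than the sample's verbatim source; create_model stands for a graph-building helper like the one sketched earlier, and the input data here is a placeholder for the digit images the sample actually infers.

```python
import logging as log
import sys

import numpy as np
from openvino.runtime import Core, Layout, set_batch

log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
weights_path, device_name = sys.argv[1], sys.argv[2]

log.info('Creating OpenVINO Runtime Core')
core = Core()

log.info(f'Loading the model using ngraph function with weights from {weights_path}')
model = create_model(weights_path)  # hypothetical builder, see the earlier sketch

# Re-batch the model to the number of images to infer (10 digits here).
model.get_parameters()[0].set_layout(Layout('NCHW'))
set_batch(model, 10)

log.info('Loading the model to the plugin')
compiled_model = core.compile_model(model, device_name)

log.info('Starting inference in synchronous mode')
input_data = np.zeros(list(model.input().shape), dtype=np.float32)  # placeholder input
results = compiled_model.infer_new_request({0: input_data})
predictions = next(iter(results.values()))  # per-image class probabilities
```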

Running

To run the sample, you need to specify the model weights file and a device:

python model_creation_sample.py <path_to_model> <device_name>

Note

  • This sample supports models with FP32 weights only.

  • The lenet.bin weights file was generated by the Model Optimizer tool from the public LeNet model with the --input_shape [64,1,28,28] parameter specified.

  • The original model is available in the Caffe* repository on GitHub*.

For example:

python model_creation_sample.py lenet.bin GPU

Sample Output

The sample application logs each step in a standard output stream and outputs 10 inference results.
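A sketch of how such a per-image report can be produced from the class probabilities follows. This is assumed code approximating the formatting of the log below; in this sample, the label column simply mirrors the class id.

```python
import logging as log


def report_top1(predictions) -> None:
    # predictions: array of shape (num_images, num_classes) with class probabilities
    log.info('Top 1 results:')
    for i, probs in enumerate(predictions):
        class_id = int(probs.argmax())
        log.info(f'Image {i}')
        log.info('')
        log.info('classid probability label')
        log.info('-------------------------')
        log.info(f'{class_id}       {probs[class_id]:.7f}   {class_id}')
        log.info('')
```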

[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Loading the model using ngraph function with weights from lenet.bin
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Top 1 results:
[ INFO ] Image 0
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 0       1.0000000   0
[ INFO ]
[ INFO ] Image 1
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 1       1.0000000   1
[ INFO ]
[ INFO ] Image 2
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 2       1.0000000   2
[ INFO ]
[ INFO ] Image 3
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 3       1.0000000   3
[ INFO ]
[ INFO ] Image 4
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 4       1.0000000   4
[ INFO ]
[ INFO ] Image 5
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 5       1.0000000   5
[ INFO ]
[ INFO ] Image 6
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 6       1.0000000   6
[ INFO ]
[ INFO ] Image 7
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 7       1.0000000   7
[ INFO ]
[ INFO ] Image 8
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 8       1.0000000   8
[ INFO ]
[ INFO ] Image 9
[ INFO ]
[ INFO ] classid probability label
[ INFO ] -------------------------
[ INFO ] 9       1.0000000   9
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool