To learn more about an error, download the server logs: click the box with three dots next to the Version field and then press Download Log:
A .txt file with the server logs is downloaded. Use the logs to investigate the problem, or enter the Docker* container and run the tools manually to debug it. For more information, refer to Enter Docker Container with DL Workbench.
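Entering the container to run the tools by hand might look like the sketch below; the container name `workbench` is an assumption and may differ on your machine.

```shell
# List running containers to find the DL Workbench one,
# then open an interactive shell inside it.
# "workbench" is an example container name; check `docker ps` for yours.
docker ps
docker exec -it workbench /bin/bash
```

From the shell inside the container you can inspect logs and rerun the failing tool directly.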
This error appears when the type of the model does not match the type of the dataset.
This error appears when you try to infer a model with FP32 precision on a VPU device, or a model with FP16 precision on a CPU device.
Choose another target device for inference. For more information, see supported devices and precisions.
When configuring the numbers of streams and batches to run a range of inferences, make sure that the minimum values do not exceed the maximum ones, and that the number of steps is not greater than the difference between the maximum and minimum values.
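The constraints above can be sketched as a small validation routine; the function and error messages are illustrative, not the actual DL Workbench code.

```python
# Hypothetical sketch of the range constraints described above;
# names and messages are illustrative only.
def validate_range(min_value: int, max_value: int, steps: int) -> None:
    """Raise ValueError if the stream/batch range settings are inconsistent."""
    if min_value > max_value:
        raise ValueError("minimum value must not exceed the maximum value")
    if steps > max_value - min_value:
        raise ValueError("number of steps must not be greater than the "
                         "difference between maximum and minimum values")

# Valid configuration: batches from 1 to 8 in 4 steps
validate_range(1, 8, 4)
```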
If the layers of a model have different precisions, such as FP16, FP32, or INT8, the model is recognized as a mixed-precision one, and the selected device might not support it. Inspect your model to check whether the plugin supports its precisions.
The error in the picture below can appear when you load a wrong archive:
Check the archive with your model or dataset. A model archive must contain two files: .xml and .bin.
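Before importing, you can verify that an archive actually contains both IR files. The sketch below uses Python's standard `tarfile` module; the file names are examples.

```python
# A sketch that checks a model archive for the two expected IR files.
import io
import os
import tarfile
import tempfile

def has_ir_files(archive_path: str) -> bool:
    """Check that the tar archive contains both an .xml and a .bin file."""
    with tarfile.open(archive_path) as tar:
        names = tar.getnames()
    return (any(n.endswith(".xml") for n in names)
            and any(n.endswith(".bin") for n in names))

# Demo: build a small archive with the two expected (empty) files.
with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "model.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        for name in ("model.xml", "model.bin"):
            tar.addfile(tarfile.TarInfo(name), io.BytesIO(b""))
    print(has_ir_files(archive))  # True
```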
If you cannot import models from the Open Model Zoo, you might not have specified your proxy settings. Make sure to specify them when running the Docker container. For details, refer to Install DL Workbench.
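Passing the proxy settings at container start might look like the fragment below; the image name, port, and proxy URLs are placeholders, so substitute the values from your own setup and from Install DL Workbench.

```shell
# Start DL Workbench with proxy settings passed as environment variables.
# proxy.example.com:8080 is a placeholder; use your corporate proxy URL.
docker run -p 127.0.0.1:5665:5665 \
    -e http_proxy=http://proxy.example.com:8080 \
    -e https_proxy=http://proxy.example.com:8080 \
    --name workbench -it openvino/workbench:latest
```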