DL Workbench enables you to visually estimate how well a model recognizes images by testing the model on particular sample images. This functionality considerably enhances the analysis of inference results, giving you an opportunity not only to estimate performance but also to see whether the model works correctly and whether its accuracy is acceptable for client applications.
NOTE: The feature is available for models trained for the following tasks:
- Classification
- Object Detection
- Instance Segmentation
- Semantic Segmentation
- Super-Resolution
- Style Transfer
- Image Inpainting
To get a visual representation of the output of your model, go to the Perform tab on the Projects page and open the Visualize Output tab.
Select an image on your system or drag and drop an image directly. Click Test, and the model predictions appear on the right.
For classification models, predictions are listed with their confidence scores, sorted from the highest confidence to the lowest.
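The sorting shown in the tab can be sketched in plain Python. This is an illustrative example only; the labels and scores below are invented and do not come from a real model:

```python
# Hypothetical raw classification scores (label -> confidence).
predictions = {"bus": 0.07, "cat": 0.81, "dog": 0.12}

# Rank predictions from the highest confidence to the lowest,
# the same order in which DL Workbench displays them.
ranked = sorted(predictions.items(), key=lambda item: item[1], reverse=True)

for label, score in ranked:
    print(f"{label}: {score:.2f}")
```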
With object-detection models, you can visualize bounding boxes by hovering your mouse over a class prediction on the right.
Use the Threshold drop-down list to filter classes based on the confidence score.
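The Threshold drop-down applies the usual confidence-threshold filter to detections. A minimal sketch of that idea, with made-up boxes in `(label, confidence, (xmin, ymin, xmax, ymax))` form:

```python
# Illustrative detections; the values are invented for the example.
detections = [
    ("car", 0.92, (10, 20, 110, 80)),
    ("person", 0.48, (60, 30, 90, 120)),
    ("car", 0.15, (200, 40, 260, 90)),
]

threshold = 0.5  # drop predictions below this confidence
kept = [det for det in detections if det[1] >= threshold]
```

Raising the threshold hides uncertain boxes; lowering it reveals more candidate detections.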
With instance-segmentation models, you can visualize masks by hovering your mouse over a class prediction on the right.
Use the Threshold drop-down list to filter classes based on the confidence score.
For semantic-segmentation models, DL Workbench highlights the areas of each categorized object, which enables you to see whether your model recognized all object types, like the buses in this image:
Or the road in the same image:
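Under the hood, a semantic-segmentation model outputs a per-pixel map of class indices, and the highlighted area is simply the set of pixels assigned to one class. A minimal sketch with an invented class index and a toy 3×3 mask:

```python
# Hypothetical class index for "bus"; real label maps vary by model.
BUS_CLASS = 6

# Toy per-pixel class-index mask (3x3 image), values invented.
mask = [
    [0, 6, 6],
    [0, 6, 0],
    [0, 0, 0],
]

# Count the pixels assigned to the class and their share of the image.
bus_pixels = sum(row.count(BUS_CLASS) for row in mask)
coverage = bus_pixels / (len(mask) * len(mask[0]))
```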
Assess the performance of your super-resolution model by looking at a higher-resolution image on the right:
For style-transfer models, see the style for which your model was trained applied to a sample image:
With image-inpainting models, select areas that you want to inpaint on your test image by drawing rectangles.
In this example, the goal is to conceal license plates. Click Test and see the result on the right.
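The rectangles you draw are, in effect, a binary mask marking the pixels to reconstruct. A hedged sketch of building such a mask from rectangle coordinates; the image size and `(xmin, ymin, xmax, ymax)` values are illustrative:

```python
# Illustrative image dimensions and one user-drawn rectangle,
# e.g. the area covering a license plate.
width, height = 8, 4
rectangles = [(1, 1, 4, 3)]  # (xmin, ymin, xmax, ymax), exclusive max

# Start with an all-zero mask; 1 marks pixels to be inpainted.
mask = [[0] * width for _ in range(height)]
for xmin, ymin, xmax, ymax in rectangles:
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            mask[y][x] = 1
```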
All images were taken from ImageNet, Pascal Visual Object Classes, and Common Objects in Context datasets for demonstration purposes only.