Converting a TensorFlow Attention OCR Model
The code described here has been deprecated! To avoid relying on a legacy solution, do not use it in contemporary applications. It will be kept for some time only to ensure backward compatibility.
This guide describes a deprecated conversion method. A guide on the new, recommended method can be found in the Python tutorials.
This tutorial explains how to convert the Attention OCR (AOCR) model from the TensorFlow Attention OCR repository to the Intermediate Representation (IR).
Extracting a Model from the aocr Library
To get an AOCR model, download the aocr Python library:
pip install git+https://github.com/emedvedev/attention-ocr.git@master#egg=aocr
This library contains a pretrained model and allows training and running AOCR from the command line. After installing aocr, extract the model:
aocr export --format=frozengraph model/path/
Once extracted, the model can be found in model/path/frozen_graph.pb.
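Before conversion, it can be useful to confirm the graph's node names (for example, when working with a retrained model whose names may differ). With TensorFlow installed, parsing the file into a tf.compat.v1.GraphDef is the direct route; the sketch below is a stdlib-only alternative that walks the protobuf wire format directly, assuming the standard GraphDef layout (repeated NodeDef messages in field 1, with NodeDef.name in field 1).

```python
def read_varint(buf, pos):
    """Decode a protobuf varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not b & 0x80:
            return result, pos
        shift += 7

def graphdef_node_names(data):
    """Extract NodeDef.name values from serialized GraphDef bytes.

    Assumes the standard layout: GraphDef keeps nodes in repeated
    field 1, and NodeDef.name is field 1 of each node message.
    """
    names = []
    pos = 0
    while pos < len(data):
        tag, pos = read_varint(data, pos)
        field, wire = tag >> 3, tag & 7
        if wire == 0:                      # varint value: skip
            _, pos = read_varint(data, pos)
        elif wire == 2:                    # length-delimited value
            length, pos = read_varint(data, pos)
            payload = data[pos:pos + length]
            pos += length
            if field == 1:                 # a NodeDef message
                ntag, npos = read_varint(payload, 0)
                if ntag >> 3 == 1 and ntag & 7 == 2:   # name field
                    nlen, npos = read_varint(payload, npos)
                    names.append(payload[npos:npos + nlen].decode())
        else:
            break                          # unsupported wire type: stop
    return names
```

Calling `graphdef_node_names(open("model/path/frozen_graph.pb", "rb").read())` should list names such as `transpose_1` and `transpose_2` used in the conversion step below.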
Converting the TensorFlow AOCR Model to IR
The original AOCR model includes a preprocessing part, which consists of:
Decoding the input data (an image represented as a string) to binary format.
Resizing the binary image to the working resolution.
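Because this preprocessing is cut out of the converted model, the application must perform equivalent steps itself before inference. The snippet below is a rough, stdlib-only sketch of the resize step only; the 32x86 working resolution is implied by the input shape used at conversion time, while the nearest-neighbor interpolation mode is an assumption of this sketch, not something the guide specifies.

```python
def resize_nearest(image, out_h=32, out_w=86):
    """Nearest-neighbor resize of a grayscale image (a list of pixel rows).

    Stands in for the resizing step removed from the model during
    conversion; out_h/out_w default to the 32x86 working resolution
    implied by the converted model's input shape.
    """
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]
```

In practice, an image library (for example, OpenCV or Pillow) would handle both the decoding and the resizing; this sketch only illustrates what the cut-off part of the graph used to do.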
The resized image is sent to the convolutional neural network (CNN). Because model conversion API does not support image decoding, the preprocessing part of the model should be cut off, using the --input command-line parameter:
mo \
--input_model=model/path/frozen_graph.pb \
--input="map/TensorArrayStack/TensorArrayGatherV3:0[1 32 86 1]" \
--output "transpose_1,transpose_2"
where:
map/TensorArrayStack/TensorArrayGatherV3:0[1 32 86 1] - name of the node producing the tensor after preprocessing.
transpose_1 - name of the node producing the tensor with predicted characters.
transpose_2 - name of the node producing the tensor with probabilities of the predicted characters.
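After inference, the two output tensors still need to be turned into text. The sketch below shows one way to do that, under loudly stated assumptions: the charset and its ordering are hypothetical (the real AOCR charset depends on the training configuration), and index 0 is assumed to be the padding/end-of-sequence symbol.

```python
# Hypothetical charset for illustration only; the real mapping comes from
# the aocr training configuration. Index 0 is assumed to mark EOS/padding.
CHARSET = [""] + list("abcdefghijklmnopqrstuvwxyz0123456789")

def decode_prediction(char_ids, char_probs):
    """Combine the two model outputs into (text, confidence).

    char_ids   -- per-step character indices (from transpose_1)
    char_probs -- per-step probabilities     (from transpose_2)
    """
    text, confidence = [], 1.0
    for idx, p in zip(char_ids, char_probs):
        if idx == 0:              # stop at the assumed padding/EOS symbol
            break
        text.append(CHARSET[idx])
        confidence *= p           # sequence confidence as a product of steps
    return "".join(text), confidence
```

This is a sketch of the general decoding pattern, not the library's own post-processing; consult the aocr sources for the exact charset and stopping rule.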