Interactive question answering with OpenVINO™

This Jupyter notebook can be launched online, opening an interactive environment in a browser window. You can also run it locally. Choose one of the following options:

Binder | Google Colab | GitHub

This demo shows interactive question answering with OpenVINO, using a small BERT-large-like model that was distilled from a larger BERT-large model and quantized to INT8 on the SQuAD v1.1 training set. The model comes from Open Model Zoo. The final part of this notebook provides live inference results from your inputs.


%pip install -q "openvino>=2023.1.0"
Note: you may need to restart the kernel to use updated packages.

Imports

# Fetch `notebook_utils` module
import urllib.request
urllib.request.urlretrieve(
    url='https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/utils/notebook_utils.py',
    filename='notebook_utils.py'
)

from notebook_utils import download_file
import operator
import time
from urllib import parse
from pathlib import Path

import numpy as np
import openvino as ov


download_file(
    url='https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/213-question-answering/html_reader.py',
    filename='html_reader.py'
)
import html_reader as reader

download_file(
    url='https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/213-question-answering/tokens_bert.py',
    filename='tokens_bert.py'
)
import tokens_bert as tokens
html_reader.py:   0%|          | 0.00/635 [00:00<?, ?B/s]
tokens_bert.py:   0%|          | 0.00/929 [00:00<?, ?B/s]

The model

Download the model

Download pretrained models from https://storage.openvinotoolkit.org/repositories/open_model_zoo. If the model is already downloaded, this step is skipped.

You can download and use any of the following models; just change the model name in the code below. All of these models are already converted to OpenVINO Intermediate Representation (OpenVINO IR):

  • bert-large-uncased-whole-word-masking-squad-0001

  • bert-large-uncased-whole-word-masking-squad-int8-0001

  • bert-small-uncased-whole-word-masking-squad-0001

  • bert-small-uncased-whole-word-masking-squad-0002

  • bert-small-uncased-whole-word-masking-squad-int8-0002
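
For example, to switch models you can keep the model name in a single variable and build the download URLs from it. This is a minimal sketch, not part of the original notebook; the base URL and the FP16-INT8 precision folder match the URLs used in the next cell, and other models may live in a different precision folder.

# A sketch: derive both download URLs from one model name, so switching models
# only requires changing `model_name`. The precision folder (FP16-INT8 here)
# depends on the chosen model.
model_name = "bert-small-uncased-whole-word-masking-squad-int8-0002"
base_url = "https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1"
model_xml_url = f"{base_url}/{model_name}/FP16-INT8/{model_name}.xml"
model_bin_url = f"{base_url}/{model_name}/FP16-INT8/{model_name}.bin"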

MODEL_DIR = Path("model")
MODEL_DIR.mkdir(exist_ok=True)

model_xml_url = "https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/bert-small-uncased-whole-word-masking-squad-int8-0002/FP16-INT8/bert-small-uncased-whole-word-masking-squad-int8-0002.xml"
model_bin_url = "https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/bert-small-uncased-whole-word-masking-squad-int8-0002/FP16-INT8/bert-small-uncased-whole-word-masking-squad-int8-0002.bin"

download_file(model_xml_url, model_xml_url.split("/")[-1], MODEL_DIR)
download_file(model_bin_url, model_bin_url.split("/")[-1], MODEL_DIR)

model_path = MODEL_DIR / model_xml_url.split("/")[-1]
model/bert-small-uncased-whole-word-masking-squad-int8-0002.xml:   0%|          | 0.00/1.11M [00:00<?, ?B/s]
model/bert-small-uncased-whole-word-masking-squad-int8-0002.bin:   0%|          | 0.00/39.3M [00:00<?, ?B/s]
model_path
PosixPath('model/bert-small-uncased-whole-word-masking-squad-int8-0002.xml')

Load the model

Downloaded models are located in a fixed structure, which indicates a vendor, a model name and a precision. Only a few lines of code are required to run the model. First, create an OpenVINO Runtime object. Then, read the network architecture and model weights from the .xml and .bin files. Finally, compile the network for the desired device. You can choose CPU or GPU for this model.

# Initialize OpenVINO Runtime.
core = ov.Core()
# Read the network and corresponding weights from a file.
model = core.read_model(model_path)

Select inference device

Select a device from the dropdown list to run inference with OpenVINO.

import ipywidgets as widgets

core = ov.Core()

device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value='AUTO',
    description='Device:',
    disabled=False,
)

device
Dropdown(description='Device:', index=1, options=('CPU', 'AUTO'), value='AUTO')
compiled_model = core.compile_model(model=model, device_name=device.value)

# Get input and output names of nodes.
input_keys = list(compiled_model.inputs)
output_keys = list(compiled_model.outputs)

# Get the network input size.
input_size = compiled_model.input(0).shape[1]

Input keys are the names of the network's input nodes, and output keys are the names of its output nodes. This BERT-large-like model has 4 inputs and 2 outputs.

[i.any_name for i in input_keys], [o.any_name for o in output_keys]
(['input_ids', 'attention_mask', 'token_type_ids', 'position_ids'],
 ['output_s', 'output_e'])

Processing

NLP models usually take a list of tokens as standard input. A token is a single word (or word piece) converted to an integer. To provide the proper input, you need the vocabulary for this mapping. You also need to define some special tokens, such as separators and padding, and a function to load the content from the provided URLs.

# Download the vocabulary from the openvino_notebooks storage
vocab_file_path = download_file(
    "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/text/bert-uncased/vocab.txt",
    directory="data"
)

# Create a dictionary with words and their indices.
vocab = tokens.load_vocab_file(str(vocab_file_path))

# Define special tokens.
cls_token = vocab["[CLS]"]
pad_token = vocab["[PAD]"]
sep_token = vocab["[SEP]"]


# A function to load text from given urls.
def load_context(sources):
    input_urls = []
    paragraphs = []
    for source in sources:
        result = parse.urlparse(source)
        if all([result.scheme, result.netloc]):
            input_urls.append(source)
        else:
            paragraphs.append(source)

    paragraphs.extend(reader.get_paragraphs(input_urls))
    # Produce one big context string.
    return "\n".join(paragraphs)
data/vocab.txt:   0%|          | 0.00/226k [00:00<?, ?B/s]
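
To see what the vocabulary mapping produces, you can tokenize a short string. This is an optional check; it uses the tokens.text_to_tokens helper the same way the later cells do, where it returns the token ids and the (start, end) character positions of each token in the original text.

# Optional check: convert a short lowercase string to token ids and character spans.
sample_tokens, sample_start_end = tokens.text_to_tokens(text="openvino runs bert models".lower(), vocab=vocab)
print(sample_tokens)
print(sample_start_end)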

Preprocessing

The input size in this case is 384 tokens. The main input (input_ids) to the BERT model consists of two parts: question tokens and context tokens, separated by special tokens.

If the question plus the context is shorter than 384 tokens, padding tokens are added. If it is longer, the context must be split into parts, and the question must be fed to the network together with each part.

Use overlapping, so neighboring parts of the context overlap by half of the part size (if a context part is 300 tokens long, neighboring parts share 150 tokens). You also need to provide the following sequences of integer values:

  • attention_mask - a sequence of integer values representing the mask of valid values in the input.

  • token_type_ids - a sequence of integer values representing the segmentation of input_ids into question and context.

  • position_ids - a sequence of integer values from 0 to 383 representing the position index for each input token.

For more information, refer to the Input section of the BERT model documentation.

# A generator of a sequence of inputs.
def prepare_input(question_tokens, context_tokens):
    # A length of question in tokens.
    question_len = len(question_tokens)
    # The context part size.
    context_len = input_size - question_len - 3

    if context_len < 16:
        raise RuntimeError("Question is too long in comparison to input size. No space for context")

    # Take parts of the context with overlapping by 0.5.
    for start in range(0, max(1, len(context_tokens) - context_len), context_len // 2):
        # A part of the context.
        part_context_tokens = context_tokens[start:start + context_len]
        # The input: a question and the context separated by special tokens.
        input_ids = [cls_token] + question_tokens + [sep_token] + part_context_tokens + [sep_token]
        # 1 for any index if there is no padding token, 0 otherwise.
        attention_mask = [1] * len(input_ids)
        # 0 for question tokens, 1 for context part.
        token_type_ids = [0] * (question_len + 2) + [1] * (len(part_context_tokens) + 1)

        # Add padding at the end.
        (input_ids, attention_mask, token_type_ids), pad_number = pad(input_ids=input_ids,
                                                                      attention_mask=attention_mask,
                                                                      token_type_ids=token_type_ids)

        # Create an input to feed the model.
        input_dict = {
            "input_ids": np.array([input_ids], dtype=np.int32),
            "attention_mask": np.array([attention_mask], dtype=np.int32),
            "token_type_ids": np.array([token_type_ids], dtype=np.int32),
        }

        # Some models require additional position_ids.
        if "position_ids" in [i_key.any_name for i_key in input_keys]:
            position_ids = np.arange(len(input_ids))
            input_dict["position_ids"] = np.array([position_ids], dtype=np.int32)

        yield input_dict, pad_number, start


# A function to add padding.
def pad(input_ids, attention_mask, token_type_ids):
    # How many padding tokens.
    diff_input_size = input_size - len(input_ids)

    if diff_input_size > 0:
        # Add padding to all the inputs.
        input_ids = input_ids + [pad_token] * diff_input_size
        attention_mask = attention_mask + [0] * diff_input_size
        token_type_ids = token_type_ids + [0] * diff_input_size

    return (input_ids, attention_mask, token_type_ids), diff_input_size
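
As a sanity check, you can run the generator on placeholder token ids and inspect the tensors it will feed to the network. This is a small sketch using the helpers defined above; the ids are made up and only the shapes matter here.

# Sanity check with made-up token ids: each yielded dictionary holds arrays of
# shape [1, input_size] (384 for this model), plus the padding length and the
# start offset of the context window.
dummy_question = [1] * 10    # 10 placeholder token ids
dummy_context = [2] * 800    # long enough to force overlapping windows
for network_input, pad_number, start in prepare_input(dummy_question, dummy_context):
    shapes = {name: array.shape for name, array in network_input.items()}
    print(f"window start={start}, padding={pad_number}, shapes={shapes}")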

Postprocessing

The results from the network are raw logits. Use the softmax function to convert them into a probability distribution. Then, find the best answer in the current part of the context (the one with the highest score) and return the score and the context range of the answer.

# Based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L163
def postprocess(output_start, output_end, question_tokens, context_tokens_start_end, padding, start_idx):

    def get_score(logits):
        out = np.exp(logits)
        return out / out.sum(axis=-1)

    # Get start-end scores for the context.
    score_start = get_score(output_start)
    score_end = get_score(output_end)

    # An index of the first context token in a tensor.
    context_start_idx = len(question_tokens) + 2
    # An index of the last+1 context token in a tensor.
    context_end_idx = input_size - padding - 1

    # Find product of all start-end combinations to find the best one.
    max_score, max_start, max_end = find_best_answer_window(start_score=score_start,
                                                            end_score=score_end,
                                                            context_start_idx=context_start_idx,
                                                            context_end_idx=context_end_idx)

    # Convert to context text start-end index.
    max_start = context_tokens_start_end[max_start + start_idx][0]
    max_end = context_tokens_start_end[max_end + start_idx][1]

    return max_score, max_start, max_end


# Based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L188
def find_best_answer_window(start_score, end_score, context_start_idx, context_end_idx):
    context_len = context_end_idx - context_start_idx
    score_mat = np.matmul(
        start_score[context_start_idx:context_end_idx].reshape((context_len, 1)),
        end_score[context_start_idx:context_end_idx].reshape((1, context_len)),
    )
    # Reset candidates with end before start.
    score_mat = np.triu(score_mat)
    # Reset long candidates (>16 words).
    score_mat = np.tril(score_mat, 16)
    # Find the best start-end pair.
    max_s, max_e = divmod(score_mat.flatten().argmax(), score_mat.shape[1])
    max_score = score_mat[max_s, max_e]

    return max_score, max_s, max_e
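
A tiny worked example makes the scoring easier to follow. The scores below are made up and do not come from the model; the context occupies tensor indices 2 to 6 (context_end_idx is exclusive), and the returned indices are relative to the context window.

# Illustration only: hypothetical start/end scores for an 8-token tensor.
toy_start = np.array([0.0, 0.0, 0.1, 0.7, 0.1, 0.05, 0.05, 0.0])
toy_end = np.array([0.0, 0.0, 0.05, 0.1, 0.6, 0.2, 0.05, 0.0])
score, best_start, best_end = find_best_answer_window(start_score=toy_start,
                                                      end_score=toy_end,
                                                      context_start_idx=2,
                                                      context_end_idx=7)
# The best pair is start=1, end=2 (relative to the context), score 0.7 * 0.6 = 0.42.
print(score, best_start, best_end)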

First, create a list of tokens from the context and the question. Then, find the best answer by trying different parts of the context. The best answer is the one with the highest score.

def get_best_answer(question, context):
    # Convert the context string to tokens.
    context_tokens, context_tokens_start_end = tokens.text_to_tokens(text=context.lower(),
                                                                     vocab=vocab)
    # Convert the question string to tokens.
    question_tokens, _ = tokens.text_to_tokens(text=question.lower(), vocab=vocab)

    results = []
    # Iterate through different parts of the context.
    for network_input, padding, start_idx in prepare_input(question_tokens=question_tokens,
                                                           context_tokens=context_tokens):
        # Get output layers.
        output_start_key = compiled_model.output("output_s")
        output_end_key = compiled_model.output("output_e")

        # OpenVINO inference.
        result = compiled_model(network_input)
        # Postprocess the result, getting the score and context range for the answer.
        score_start_end = postprocess(output_start=result[output_start_key][0],
                                      output_end=result[output_end_key][0],
                                      question_tokens=question_tokens,
                                      context_tokens_start_end=context_tokens_start_end,
                                      padding=padding,
                                      start_idx=start_idx)
        results.append(score_start_end)

    # Find the highest score.
    answer = max(results, key=operator.itemgetter(0))
    # Return the part of the context that contains the answer, together with its score.
    return context[answer[1]:answer[2]], answer[0]

Main Processing Function

Run question answering on a specific knowledge base (websites) and iterate through the questions.

def run_question_answering(sources, example_question=None):
    print(f"Context: {sources}", flush=True)
    context = load_context(sources)

    if len(context) == 0:
        print("Error: Empty context or outside paragraphs")
        return

    if example_question is not None:
        start_time = time.perf_counter()
        answer, score = get_best_answer(question=example_question, context=context)
        end_time = time.perf_counter()

        print(f"Question: {example_question}")
        print(f"Answer: {answer}")
        print(f"Score: {score:.2f}")
        print(f"Time: {end_time - start_time:.2f}s")
    else:
        while True:
            question = input()
            # if no question - break
            if question == "":
                break

            # measure processing time
            start_time = time.perf_counter()
            answer, score = get_best_answer(question=question, context=context)
            end_time = time.perf_counter()

            print(f"Question: {question}")
            print(f"Answer: {answer}")
            print(f"Score: {score:.2f}")
            print(f"Time: {end_time - start_time:.2f}s")

Run

Run on local paragraphs

Change sources to your own to answer your questions. You can use as many sources as you want. Usually, you need to wait a few seconds for the answer, but the longer the context, the longer the waiting time. The model is very limited and sensitive to the input. The answer can depend on whether there is a question mark at the end. The model will try to answer any of your questions even if there is no good answer in the context. Therefore, in such cases, you may see random results.

Sample source: a paragraph from Computational complexity theory

Sample questions:

  • What is the term for a task that generally lends itself to being solved by a computer?

  • By what main attribute are computational problems classified utilizing computational complexity theory?

  • What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?

If you want to stop the processing, just enter an empty string.

First, run the code below. If you want to run it in interactive mode, set ``example_question`` to ``None``, run the code, and then type your questions in the box.

sources = ["Computational complexity theory is a branch of the theory of computation in theoretical computer "
           "science that focuses on classifying computational problems according to their inherent difficulty, "
           "and relating those classes to each other. A computational problem is understood to be a task that "
           "is in principle amenable to being solved by a computer, which is equivalent to stating that the "
           "problem may be solved by mechanical application of mathematical steps, such as an algorithm."]

question = "What is the term for a task that generally lends itself to being solved by a computer?"

run_question_answering(sources, example_question=question)
Context: ['Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.']
Question: What is the term for a task that generally lends itself to being solved by a computer?
Answer: A computational problem
Score: 0.53
Time: 0.03s

Run on websites

You can also provide URLs. Note that the context (a knowledge base) is built from paragraphs on websites. If some information is outside the paragraphs, the algorithm will not be able to find it.

Sample source: OpenVINO wiki

Sample questions:

  • What does OpenVINO mean?

  • What is the license for OpenVINO?

  • Where can you deploy OpenVINO code?

If you want to stop the processing, just enter an empty string.

First, run the code below. If you want to run it in interactive mode, set ``example_question`` to ``None``, run the code, and then type your questions in the box.

sources = ["https://en.wikipedia.org/wiki/OpenVINO"]

question = "What does OpenVINO mean?"

run_question_answering(sources, example_question=question)
Context: ['https://en.wikipedia.org/wiki/OpenVINO']
Question: What does OpenVINO mean?
Answer: open-source software toolkit for optimizing and deploying deep learning models
Score: 0.06
Time: 0.06s