AI-based auto-correction products are becoming increasingly popular due
to their ease of use, editing speed, and affordability. These products
improve the quality of written text in emails, blogs, and chats.
Grammatical Error Correction (GEC) is the task of correcting different
types of errors in text, such as spelling, punctuation, grammar, and
word-choice errors. GEC is typically formulated as a sentence correction
task. A GEC system takes a potentially erroneous sentence as input and
is expected to transform it into a more correct version. See the example
given below:
Input (Erroneous): I like to rides my bicycle.
Output (Corrected): I like to ride my bicycle.
As shown in the image below, different types of errors in written
language can be corrected.
This tutorial shows how to perform grammatical error correction using
OpenVINO. We will use pre-trained models from the Hugging Face
Transformers
library. To simplify the user experience, the Hugging Face
Optimum library is used to
convert the models to OpenVINO™ IR format.
A Grammatical Error Correction task can be thought of as a
sequence-to-sequence task where a model is trained to take a
grammatically incorrect sentence as input and return a grammatically
correct sentence as output. We will use the
FLAN-T5
model finetuned on an expanded version of the
JFLEG dataset.
The version of FLAN-T5 released with the Scaling Instruction-Finetuned
Language Models paper is an
enhanced version of T5 that has
been finetuned on a combination of tasks. The paper explores instruction
finetuning with a particular focus on scaling the number of tasks,
scaling the model size, and finetuning on chain-of-thought data. The
paper finds that, overall, instruction finetuning is a general method
for improving the performance and usability of pre-trained language
models.
For more details about the model, please check out the
paper, the original
repository, and the Hugging
Face model card.
Additionally, to reduce the number of sentences that need to be
processed, you can first check grammatical correctness. This task can be
treated as simple binary text classification: the model receives input
text and predicts label 1 if the text contains grammatical errors and 0
if it does not. You will use the
roberta-base-CoLA
model, the RoBERTa Base model finetuned on the CoLA dataset. The RoBERTa
model was proposed in the paper RoBERTa: A Robustly Optimized BERT
Pretraining Approach. It builds on BERT
and modifies key hyperparameters, removing the next-sentence
pre-training objective and training with much larger mini-batches and
learning rates. Additional details about the model can be found in a
blog
post
by Meta AI and in the Hugging Face
documentation.
Now that we know more about FLAN-T5 and RoBERTa, let us get started. 🚀
First, we need to install the Hugging Face
Optimum library
accelerated by OpenVINO integration. The Hugging Face Optimum API is a
high-level API that enables us to convert and quantize models from the
Hugging Face Transformers library to the OpenVINO™ IR format. For more
details, refer to the Hugging Face Optimum
documentation.
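A typical installation cell might look like the one below. The exact package set and version pins used by the notebook may differ, so treat this as an assumption rather than the notebook's exact requirements.

%pip install -q "optimum[openvino,nncf]" gradio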
Optimum Intel can be used to load optimized models from the Hugging
Face Hub and
create pipelines to run inference with OpenVINO Runtime using Hugging
Face APIs. The Optimum Inference models are API compatible with Hugging
Face Transformers models. This means we just need to replace the
AutoModelForXxx class with the corresponding OVModelForXxx
class.
Below is an example of the RoBERTa text classification model.
Model class initialization starts with calling the from_pretrained
method. When downloading and converting a Transformers model, the
parameter from_transformers=True should be added. We can save the
converted model for later use with the save_pretrained method.
The tokenizer classes and pipelines API are compatible with Optimum models.
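A minimal loading sketch is shown below. The checkpoint name (textattack/roberta-base-CoLA) and the variable names are assumptions for illustration and may differ from the notebook's code.

from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer

# Assumption: the CoLA-finetuned RoBERTa checkpoint referenced above
grammar_checker_model_id = "textattack/roberta-base-CoLA"

grammar_checker_tokenizer = AutoTokenizer.from_pretrained(grammar_checker_model_id)
# from_transformers=True downloads the PyTorch weights and converts them to OpenVINO IR
grammar_checker_model = OVModelForSequenceClassification.from_pretrained(grammar_checker_model_id, from_transformers=True)
# Optionally save the converted model for later reuse:
# grammar_checker_model.save_pretrained("roberta-base-cola-ov")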
2023-09-27 14:53:36.462575: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-09-27 14:53:36.496914: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-27 14:53:37.063292: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Let us check how the model works, using the inference pipeline for the
text-classification task. You can find more information about using
Hugging Face inference pipelines in this
tutorial.
input_text="They are moved by salar energy"grammar_checker_pipe=pipeline("text-classification",model=grammar_checker_model,tokenizer=grammar_checker_tokenizer)result=grammar_checker_pipe(input_text)[0]print(f"input text: {input_text}")print(f'predicted label: {"contains_errors"ifresult["label"]=="LABEL_1"else"no errors"}')print(f'predicted score: {result["score"]:.2}')
The steps for loading the Grammar Corrector model are very similar,
except for the model class that is used. Because FLAN-T5 is a
sequence-to-sequence text generation model, we should use the
OVModelForSeq2SeqLM class and the text2text-generation pipeline
to run it.
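A hedged sketch of loading the corrector is shown below; the checkpoint name (pszemraj/flan-t5-large-grammar-synthesis) is an assumption based on the JFLEG-finetuned FLAN-T5 described above and may differ from the notebook's code.

from optimum.intel.openvino import OVModelForSeq2SeqLM
from transformers import AutoTokenizer, pipeline

# Assumption: a FLAN-T5 checkpoint finetuned for grammar correction on JFLEG
grammar_corrector_model_id = "pszemraj/flan-t5-large-grammar-synthesis"

grammar_corrector_tokenizer = AutoTokenizer.from_pretrained(grammar_corrector_model_id)
grammar_corrector_model = OVModelForSeq2SeqLM.from_pretrained(grammar_corrector_model_id, from_transformers=True)

grammar_corrector_pipe = pipeline(
    "text2text-generation",
    model=grammar_corrector_model,
    tokenizer=grammar_corrector_tokenizer,
)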
Framework not specified. Using pt to export to ONNX.
Using framework PyTorch: 1.13.1+cpu
Overriding 1 configuration item(s)
    - use_cache -> False
Using framework PyTorch: 1.13.1+cpu
Overriding 1 configuration item(s)
    - use_cache -> True
/home/nsavel/venvs/ov_notebooks_tmp/lib/python3.8/site-packages/transformers/modeling_utils.py:875: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
Using framework PyTorch: 1.13.1+cpu
Overriding 1 configuration item(s)
    - use_cache -> True
/home/nsavel/venvs/ov_notebooks_tmp/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py:509: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
Compiling the encoder to AUTO ...
Compiling the decoder to AUTO ...
Compiling the decoder to AUTO ...
/home/nsavel/venvs/ov_notebooks_tmp/lib/python3.8/site-packages/optimum/intel/openvino/modeling_seq2seq.py:339: FutureWarning: shared_memory is deprecated and will be removed in 2024.0. Value of shared_memory is going to override share_inputs value. Please use only share_inputs explicitly.
last_hidden_state = torch.from_numpy(self.request(inputs, shared_memory=True)["last_hidden_state"]).to(
/home/nsavel/venvs/ov_notebooks_tmp/lib/python3.8/site-packages/optimum/intel/openvino/modeling_seq2seq.py:416: FutureWarning: shared_memory is deprecated and will be removed in 2024.0. Value of shared_memory is going to override share_inputs value. Please use only share_inputs explicitly.
self.request.start_async(inputs, shared_memory=True)
Now let us put everything together and create the pipeline for grammar
correction. The pipeline accepts input text, verifies its correctness,
and generates the correct version if required. It will consist of
several steps:
Split the text into sentences.
Check the grammatical correctness of each sentence using the Grammar
Checker.
Generate an improved version of the sentence if required.
import re
import transformers
from tqdm.notebook import tqdm


def split_text(text: str) -> list:
    """
    Split a string of text into a list of sentence batches.

    Parameters:
    text (str): The text to be split into sentence batches.

    Returns:
    list: A list of sentence batches. Each sentence batch is a list of sentences.
    """
    # Split the text into sentences using regex
    sentences = re.split(r"(?<=[^A-Z].[.?]) +(?=[A-Z])", text)

    # Initialize a list to store the sentence batches
    sentence_batches = []

    # Initialize a temporary list to store the current batch of sentences
    temp_batch = []

    # Iterate through the sentences
    for sentence in sentences:
        # Add the sentence to the temporary batch
        temp_batch.append(sentence)

        # If the length of the temporary batch is between 2 and 3 sentences, or if it is the last batch, add it to the list of sentence batches
        if len(temp_batch) >= 2 and len(temp_batch) <= 3 or sentence == sentences[-1]:
            sentence_batches.append(temp_batch)
            temp_batch = []

    return sentence_batches


def correct_text(text: str, checker: transformers.pipelines.Pipeline, corrector: transformers.pipelines.Pipeline, separator: str = " ") -> str:
    """
    Correct the grammar in a string of text using a text-classification and text-generation pipeline.

    Parameters:
    text (str): The input text to be corrected.
    checker (transformers.pipelines.Pipeline): The text-classification pipeline to use for checking the grammar quality of the text.
    corrector (transformers.pipelines.Pipeline): The text-generation pipeline to use for correcting the text.
    separator (str, optional): The separator to use when joining the corrected text into a single string. Default is a space character.

    Returns:
    str: The corrected text.
    """
    # Split the text into sentence batches
    sentence_batches = split_text(text)

    # Initialize a list to store the corrected text
    corrected_text = []

    # Iterate through the sentence batches
    for batch in tqdm(sentence_batches, total=len(sentence_batches), desc="correcting text.."):
        # Join the sentences in the batch into a single string
        raw_text = " ".join(batch)

        # Check the grammar quality of the text using the text-classification pipeline
        results = checker(raw_text)

        # Only correct the text if the results of the text-classification are not LABEL_1 or are LABEL_1 with a score below 0.9
        if results[0]["label"] != "LABEL_1" or (results[0]["label"] == "LABEL_1" and results[0]["score"] < 0.9):
            # Correct the text using the text-generation pipeline
            corrected_batch = corrector(raw_text)
            corrected_text.append(corrected_batch[0]["generated_text"])
        else:
            corrected_text.append(raw_text)

    # Join the corrected text into a single string
    corrected_text = separator.join(corrected_text)

    return corrected_text
Let us see it in action.
default_text = (
    "Most of the course is about semantic or content of language but there are also interesting"
    " topics to be learned from the servicefeatures except statistics in characters in documents.At"
    " this point, He introduces herself as his native English speaker and goes on to say that if"
    " you contine to work on social scnce"
)

corrected_text = correct_text(default_text, grammar_checker_pipe, grammar_corrector_pipe)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
NNCF enables
post-training quantization by adding quantization layers into the model
graph and then using a subset of the training dataset to initialize the
parameters of these additional quantization layers. Quantized operations
are executed in INT8 instead of FP32/FP16, making model
inference faster.
The grammar checker model accounts for only a tiny portion of the whole
text correction pipeline, so we optimize only the grammar corrector
model. The grammar corrector itself consists of three models: the
encoder, the first-call decoder, and the decoder with past. The last
model’s share of inference time dominates the other two, so we quantize
only that one.
The optimization process contains the following steps:
Create a calibration dataset for quantization.
Run nncf.quantize() to obtain quantized models.
Serialize the INT8 model using the openvino.save_model()
function.
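The sketch below illustrates these steps with the NNCF API. It is only illustrative: the decoder-with-past IR path and the calibration-data collection are assumptions, and the notebook's actual implementation lives in utils.py.

import nncf
import openvino as ov

core = ov.Core()

# Assumption: IR of the decoder-with-past model exported by Optimum
decoder_model = core.read_model("grammar-corrector-ov/openvino_decoder_with_past_model.xml")

# Assumption: decoder input dictionaries collected while running the pipeline on sample texts
collected_decoder_inputs = []  # filled by a separate data-collection pass, omitted here
calibration_dataset = nncf.Dataset(collected_decoder_inputs)

quantized_decoder = nncf.quantize(
    decoder_model,
    calibration_dataset,
    model_type=nncf.ModelType.TRANSFORMER,  # keep accuracy-sensitive operations in higher precision
)

# Serialize the INT8 model
ov.save_model(quantized_decoder, "quantized_decoder_with_past.xml")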
Please select below whether you would like to run quantization to
improve model inference speed.
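The selection widget itself is not shown here. One way to provide the to_quantize flag used in the cells below is a simple ipywidgets checkbox; this is an assumption about how the notebook implements the switch.

import ipywidgets as widgets

# Assumption: a checkbox standing in for the notebook's quantization switch;
# its .value is read by the cells below.
to_quantize = widgets.Checkbox(value=True, description="Quantization")
to_quantize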
Below we retrieve the quantized model. Please see utils.py for the
source code. Quantization is relatively time-consuming and will take
some time to complete.
/home/nsavel/workspace/openvino_notebooks/notebooks/214-grammar-correction/utils.py:39: FutureWarning: shared_memory is deprecated and will be removed in 2024.0. Value of shared_memory is going to override share_inputs value. Please use only share_inputs explicitly.
return original_fn(*args, **kwargs)
Let’s see the correction results. The generated texts from the quantized
INT8 model and the original FP32 model should be almost the same.
if to_quantize.value:
    corrected_text_int8 = correct_text(default_text, grammar_checker_pipe, grammar_corrector_pipe_int8)
    print(f"Input text: {default_text}\n")
    print(f'Generated text by INT8 model: {corrected_text_int8}')
Next, we compare the two grammar correction pipelines from the
performance and accuracy standpoints.
The test split of the JFLEG dataset is used for testing. One dataset
sample consists of a text with errors as input and several corrected
versions as labels. When measuring accuracy, we use the mean (1 - WER)
against the corrected text versions, where WER is the Word Error Rate
metric.
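As a concrete illustration of the metric (not the notebook's implementation, which lives in utils.py), the score for a single sample could be computed with the jiwer library, assuming it is installed:

import jiwer

# Hypothetical sample: one corrected hypothesis with two reference corrections
references = ["I like to ride my bicycle.", "I like riding my bicycle."]
hypothesis = "I like to ride my bicycle."

# Mean (1 - WER) over the reference corrections, expressed as a percentage
accuracy = sum(1 - jiwer.wer(ref, hypothesis) for ref in references) / len(references) * 100
print(f"Mean (1 - WER): {accuracy:.2f}%")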
from utils import calculate_inference_time_and_accuracy

TEST_SUBSET_SIZE = 50

if to_quantize.value:
    inference_time_fp32, accuracy_fp32 = calculate_inference_time_and_accuracy(grammar_corrector_pipe_fp32, TEST_SUBSET_SIZE)
    print(f"Evaluation results of FP32 grammar correction pipeline. Accuracy: {accuracy_fp32:.2f}%. Time: {inference_time_fp32:.2f} sec.")
    inference_time_int8, accuracy_int8 = calculate_inference_time_and_accuracy(grammar_corrector_pipe_int8, TEST_SUBSET_SIZE)
    print(f"Evaluation results of INT8 grammar correction pipeline. Accuracy: {accuracy_int8:.2f}%. Time: {inference_time_int8:.2f} sec.")
    print(f"Performance speedup: {inference_time_fp32 / inference_time_int8:.3f}")
    print(f"Accuracy drop: {accuracy_fp32 - accuracy_int8:.2f}%.")
    print(f"Model footprint reduction: {model_size_fp32 / model_size_int8:.3f}")
import gradio as gr
import time


def correct(text, quantized, progress=gr.Progress(track_tqdm=True)):
    grammar_corrector = grammar_corrector_pipe_int8 if quantized else grammar_corrector_pipe

    start_time = time.perf_counter()
    corrected_text = correct_text(text, grammar_checker_pipe, grammar_corrector)
    end_time = time.perf_counter()

    return corrected_text, f"{end_time - start_time:.2f}"


def create_demo_block(quantized: bool, show_model_type: bool):
    model_type = (" optimized" if quantized else " original") if show_model_type else ""
    with gr.Row():
        gr.Markdown(f"## Run{model_type} grammar correction pipeline")
    with gr.Row():
        with gr.Column():
            input_text = gr.Textbox(label="Text")
        with gr.Column():
            output_text = gr.Textbox(label="Correction")
            correction_time = gr.Textbox(label="Time (seconds)")
    with gr.Row():
        gr.Examples(examples=[default_text], inputs=[input_text])
    with gr.Row():
        button = gr.Button(f"Run{model_type}")
        button.click(correct, inputs=[input_text, gr.Number(quantized, visible=False)], outputs=[output_text, correction_time])


with gr.Blocks() as demo:
    gr.Markdown("# Interactive demo")
    quantization_is_present = grammar_corrector_pipe_int8 is not None
    create_demo_block(quantized=False, show_model_type=quantization_is_present)
    if quantization_is_present:
        create_demo_block(quantized=True, show_model_type=True)

# if you are launching remotely, specify server_name and server_port
# demo.launch(server_name='your server name', server_port='server port in int')
# Read more in the docs: https://gradio.app/docs/
try:
    demo.queue().launch(debug=False)
except Exception:
    demo.queue().launch(share=True, debug=False)
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().