OpenVINO Release Notes#
2025.3 - 3 September 2025#
System Requirements | Release policy | Installation Guides
What’s new#
More Gen AI coverage and frameworks integrations to minimize code changes
New models supported: Phi-4-mini-reasoning, AFM-4.5B, Gemma-3-1B-it, Gemma-3-4B-it, and Gemma-3-12B.
NPU support added for: Qwen3-1.7B, Qwen3-4B, and Qwen3-8B.
LLMs optimized for NPU now available on OpenVINO Hugging Face collection.
Preview: AI PCs with Intel® Core™ Ultra Processors running Windows* can now leverage the OpenVINO™ Execution Provider for Windows* ML for a high-performance, off-the-shelf starting experience on Windows*.
Broader LLM model support and more model compression optimization techniques
The NPU plug-in adds support for longer contexts of up to 8K tokens, dynamic prompts, and dynamic LoRA for improved LLM performance.
The NPU plug-in now supports dynamic batch sizes by reshaping the model to a batch size of 1 and concurrently managing multiple inference requests, enhancing performance and optimizing memory utilization.
Accuracy improvements for GenAI models on both built-in and discrete graphics achieved through the implementation of the key cache compression per channel technique, in addition to the existing KV cache per-token compression method.
OpenVINO™ GenAI introduces TextRerankPipeline for improved retrieval relevance and RAG pipeline accuracy, plus Structured Output for enhanced response reliability and function calling while ensuring adherence to predefined formats.
More portability and performance to run AI at the edge, in the cloud or locally
Announcing support for Intel® Arc™ Pro B-Series (B50 and B60).
Preview: Hugging Face models that are GGUF-enabled for OpenVINO GenAI are now supported by the OpenVINO™ Model Server for popular LLM model architectures such as DeepSeek Distill, Qwen2, Qwen2.5, and Llama 3. This functionality reduces memory footprint and simplifies integration for GenAI workloads.
With improved reliability and tool call accuracy, the OpenVINO™ Model Server boosts support for agentic AI use cases on AI PCs, while enhancing performance on Intel CPUs, built-in GPUs, and NPUs.
int4 data-aware weights compression, now supported in the Neural Network Compression Framework (NNCF) for ONNX models, reduces memory footprint while maintaining accuracy and enables efficient deployment in resource-constrained environments.
OpenVINO™ Runtime#
Common#
Public API has been added to set and reset the log message handling callback. It allows injecting an external log handler to read OpenVINO messages in the user’s infrastructure, rather than from log files left by OpenVINO.
Build-time optimizations have been introduced to improve developer experience in project compilation.
Ability to import a precompiled model from an ov::Tensor has been added. Using ov::Tensor, which also supports memory-mapped files, to store precompiled models benefits both the OpenVINO caching mechanism and applications using core.import_model() (see the sketch after this list).
Several fixes for conversion between different precisions, such as u2 and f32->f4e2m1, have been implemented to improve compatibility with quantized models.
Support for negative indices in GatherElements and GatherND operators has been added to ensure compliance with ONNX standards.
vLLM-OpenVINO integration now supports vLLM API v1.
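A minimal Python sketch of the Tensor-based import flow mentioned above, assuming the ov.Tensor overload of core.import_model() is exposed in the Python bindings as it is in C++; file names are illustrative:

    import numpy as np
    import openvino as ov

    core = ov.Core()

    # Export a compiled model once, as in the regular caching flow.
    blob = core.compile_model("model.xml", "NPU").export_model()
    with open("model.blob", "wb") as f:
        f.write(blob)

    # Import the precompiled blob through an ov.Tensor. Backing the Tensor
    # with a memory-mapped file (which ov.Tensor supports) avoids copying
    # the blob, compared to passing a byte stream.
    blob_tensor = ov.Tensor(np.fromfile("model.blob", dtype=np.uint8))
    compiled = core.import_model(blob_tensor, "NPU")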
CPU Device Plugin#
Sage Attention is now supported. This feature is turned on with the ENABLE_SAGE_ATTN property, providing a performance boost for 1st-token generation in LLMs with long prompts, while maintaining accuracy (a hedged sketch follows below).
FP16 model performance on 6th generation Intel® Xeon® processors has been enhanced by improving utilization of the underlying AMX FP16 capabilities and graph-level optimizations.
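A minimal sketch of enabling the feature, assuming the property key named above is accepted as a plain string in the compile-time config:

    import openvino as ov

    core = ov.Core()
    # Property key taken from the note above; treat the exact spelling and
    # accepted values as assumptions of this sketch.
    compiled = core.compile_model("llm.xml", "CPU", {"ENABLE_SAGE_ATTN": True})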
GPU Device Plugin#
LLM accuracy has been improved with by-channel key cache compression. Default KV-cache compression has also been switched from by-token to by-channel compression.
Gemma3-4b and Qwen-VL VLM performance has been improved on XMX-supporting platforms.
Basic functionalities for dynamic shape custom operations in GPU extension have been enabled.
LoRA performance has been improved for systolic platforms.
NPU Device Plugin#
Models compressed as NF4-FP16 are now enabled on NPU. This is the recommended precision for the following models: deepseek-r1-distill-qwen-7b, deepseek-r1-distill-qwen-14b, and qwen2-7b-instruct, providing a reasonable balance between accuracy and performance. This quantization is not supported on Intel® Core™ Ultra Series 1, where only symmetrically quantized channel-wise or group-wise INT4-FP16 models are supported.
Peak memory consumption of LLMs on NPU has been significantly reduced when using ahead-of-time compilation.
Optimizations for LLM vocabularies (LM Heads) compressed in INT8 asymmetric have been introduced, available with NPU driver 32.0.100.4181 or later.
Accuracy of LLMs with RoPE on longer contexts has been improved.
The NPU plug-in now supports dynamic batch sizes by reshaping the model to a batch size of 1 and concurrently managing multiple inference requests, enhancing performance and optimizing memory utilization. This requires driver 32.0.202.298 or later.
The remote tensor interface has been extended to support tensor creation from files; recent NPU drivers now support memory-mapped inputs/outputs.
OpenVINO Python API#
TensorVector binding has been enabled to avoid extra copies and speed up PostponedConstant usage.
Support for building experimental free-threaded 3.13t Python API has been added; prebuilt wheels are not distributed yet.
Free-threaded Python performance has been improved.
set_rt_info() method has been added to Node, Output, and Input to align with Model.set_rt_info().
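A short sketch of the aligned set_rt_info() calls, assuming the same (value, path) signature as Model.set_rt_info(); paths and values are illustrative:

    import openvino as ov

    model = ov.Core().read_model("model.xml")

    # Model-level runtime info (existing behavior).
    model.set_rt_info("1.0", ["custom", "version"])

    # The same (value, path) call now works on Node, Output, and Input.
    node = model.get_ordered_ops()[0]
    node.set_rt_info("fused", ["custom", "status"])
    node.output(0).set_rt_info("keep_fp32", ["custom", "precision_hint"])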
OpenVINO Node.js API#
AsyncInferQueue class has been added to support easier implementation of asynchronous inference. The change comes with a benchmark tool to evaluate performance.
Model.reshape method has been exposed, including type conversion ability and type validation helpers, useful for reshaping LLMs.
Support for ov-node types in TypeScript part of bindings has been extended, enabling direct integration with the JavaScript API.
Wrapping of the compileModel() method has been fixed to allow checking the type of returned objects.
The version of LLMPipeline.generate() that returns strings is now deprecated. Starting with 2026.0.0, LLMPipeline.generate() will return DecodedResults by default. To use the new behavior with the current release, set { "return_decoded_results": true } in GenerationConfig.
PyTorch Framework Support#
Tensor concatenation inside loops is now supported, enabling the Paraformer model family.
OpenVINO™ Model Server#
Major new features:
Tool-guided generation has been implemented with the enable_tool_guided_generation parameter and --tool_parser option, enabling model-specific XGrammar configuration so responses follow the expected syntax. It uses dynamic rules based on the generated sequence, increasing model accuracy and minimizing invalid response formats for tools.
Tool parser has been added for Mistral-7B-Instruct-v0.3, extending the list of supported models with tool handling.
Stream response has been implemented for Qwen3, Hermes3 and Llama3 models, enabling more interactive use with tools.
BREAKING CHANGE: Separation of tool parser and reasoning parser has been implemented. Instead of the response_parser parameter, use separate parameters: tool_parser and reasoning_parser, allowing more flexible implementation and configuration on the server. Parsers can now be shared independently between models. Currently, Qwen3 is the only reasoning parser implemented.
Reading of the chat template has been changed from template.jinja to chat_template.jinja if the chat template is not included in tokenizer_config.json.
Structured output is now supported with the addition of JSON schema-guided generation using the OpenAI response_format field. This parameter enables generation of JSON responses for automation purposes and improvements in response accuracy (see the request sketch after this list). See the Structured response in LLM models article for more details. A script testing the accuracy gain is also included.
Enforcement of tool call generation has been implemented using the tool_call=required field in chat/completions requests. This feature forces the model to generate at least one tool response, increasing response reliability while not guaranteeing response validity.
MCP server demo has been updated to include available features.
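A hedged sketch of the response_format usage described above, sent to an OpenAI-compatible OVMS endpoint; the port and model name depend on your deployment and are illustrative:

    import json
    import requests

    payload = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # illustrative model name
        "messages": [{"role": "user", "content": "Return a user record."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "user_record",
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "age": {"type": "integer"},
                    },
                    "required": ["name", "age"],
                },
            },
        },
    }
    r = requests.post("http://localhost:8000/v3/chat/completions", json=payload)
    # The constrained response is a JSON string in the message content.
    print(json.loads(r.json()["choices"][0]["message"]["content"]))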
New models and use cases supported:
Qwen3-embedding and cross-encoder embedding models,
Qwen3-reranker,
Gemma3 VLM models.
Deployment improvements:
Progress bar display has been implemented for model downloads from Hugging Face. For models from the OpenVINO organization, download status is now shown in the logs.
Documentation on how to build a docker image with optimum-cli is now available, enabling the image to pull any model from Hugging Face and convert it to IR online in one step.
Models endpoint for OpenAI has been implemented, returning a list of available models in the expected OpenAI JSON schema for easier integration with existing applications.
The package size has been reduced by removing the git and git-lfs dependencies, shrinking the image by ~15 MB. Model files are now pulled from Hugging Face using the libgit2 and curl libraries.
UTF-8 chat template is now supported out of the box, no additional installation steps on Windows required.
Preview functionality for GGUF models has been added for LLM architectures including Qwen2, Qwen2.5, and Llama3. Models can now be deployed directly from HuggingFace Hub by passing model_id and file name. Note that accuracy and performance may be lower than with IR format models.
Bug fixes:
Truncation of prompts exceeding model length in embeddings has been implemented.
Neural Network Compression Framework#
4-bit data-aware Scale Estimation and AWQ compression methods have been introduced for ONNX models, providing more accurate compression results.
NF4 data type is now supported as an FP8 look-up table for faster inference.
A new parameter has been added to support a fallback group size in 4-bit weight compression methods. This helps when the specified group size cannot be applied, for example, in models with an unusual number of channels in matrix multiplications (matmuls). When enabled with nncf.AdvancedCompressionParameters(group_size_fallback_mode=ADJUST), NNCF automatically adjusts the group size (see the sketch after this list). By default, nncf.AdvancedCompressionParameters(group_size_fallback_mode=IGNORE) is used, meaning that NNCF will not compress nodes when the specified group size cannot be applied.
Initialization for 4-bit QAT with absorbable LoRA has been enhanced using advanced compression methods (AWQ + Scale Estimation). This replaces the previous basic data-free compression approach, enabling QAT to start with a more accurate model baseline and achieve better final accuracy.
External quantizers in the quantize_pt2e API have been enabled, including XNNPACKQuantizer and CoreMLQuantizer.
PyTorch 2.8 is now supported.
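A hedged sketch of the group-size fallback described above; the parameter names follow the note, while the exact location of the fallback-mode enum is an assumption:

    import nncf
    import openvino as ov

    model = ov.Core().read_model("model.xml")

    # ADJUST lets NNCF pick a smaller group size where 128 does not divide
    # the channel count; the default IGNORE leaves such matmuls uncompressed.
    compressed = nncf.compress_weights(
        model,
        mode=nncf.CompressWeightsMode.INT4_SYM,
        group_size=128,
        advanced_parameters=nncf.AdvancedCompressionParameters(
            group_size_fallback_mode=nncf.GroupSizeFallbackMode.ADJUST,
        ),
    )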
OpenVINO Tokenizers#
OpenVINO GenAI integration:
Padding side can now be set dynamically during runtime.
Tokenizer loading now supports a second input for relevant GenAI pipelines, for example TextRerankPipeline.
Two inputs are now supported to accommodate a wider range of tokenizer types.
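To illustrate the two-input tokenizer path, a minimal reranking sketch with OpenVINO GenAI's TextRerankPipeline; the model directory is illustrative and the (index, score) result shape follows the GenAI samples, so treat it as an assumption:

    import openvino_genai as ov_genai

    # Assumes a rerank model already exported to OpenVINO IR in reranker_dir.
    pipe = ov_genai.TextRerankPipeline("reranker_dir", "CPU")
    results = pipe.rerank(
        "What is OpenVINO?",
        [
            "OpenVINO is a toolkit for optimizing and deploying AI inference.",
            "Bananas are rich in potassium.",
        ],
    )
    for idx, score in results:
        print(idx, score)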
OpenVINO GenAI#
New OpenVINO GenAI docs homepage: https://openvinotoolkit.github.io/openvino.genai/
Transitioned from Jinja2Cpp to Minja, improving chat template coverage.
Cache eviction algorithms added:
KVCrush algorithm
Sparse attention prefill
Support for Structured Output for flexible and efficient structured generation with XGrammar:
C++ and Python samples
Constraint sampling with Regex, JSONSchema, EBNF Grammar and Structural tags
Compound grammar to combine multiple grammar types (Regex, JSONSchema, EBNF) using Union (|) and Concat (+) operations for more flexible and complex output constraints.
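A minimal Python sketch of JSON schema-constrained generation, following the GenAI structured output samples; field names such as structured_output_config are taken from those samples and the model directory is illustrative:

    import json
    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline("model_dir", "CPU")

    schema = json.dumps({
        "type": "object",
        "properties": {"city": {"type": "string"}, "temp_c": {"type": "number"}},
        "required": ["city", "temp_c"],
    })

    config = ov_genai.GenerationConfig()
    config.max_new_tokens = 100
    # Constrains decoding so the output always matches the schema.
    config.structured_output_config = ov_genai.StructuredOutputConfig(json_schema=schema)

    print(pipe.generate("Report the weather in Paris as JSON.", config))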
GGUF
Qwen3 architecture is now supported
enable_save_ov_model property to serialize the generated ov::Model as IR for faster LLMPipeline construction next time (see the sketch below)
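A sketch of the property in use; the GGUF file name is illustrative, and passing the property as a keyword argument follows the GenAI Python convention:

    import openvino_genai as ov_genai

    # enable_save_ov_model serializes the converted ov::Model as IR so that
    # later constructions skip the GGUF-to-IR conversion step.
    pipe = ov_genai.LLMPipeline("qwen3-8b-q4_k_m.gguf", "CPU", enable_save_ov_model=True)
    print(pipe.generate("Hello!", max_new_tokens=16))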
LoRA
Dynamic LoRA for NPU has been enabled.
Model weights can now be overridden from .safetensors
Tokenizer
padding_side property has been added to specify padding direction (left or right)
add_second_input property to transform Tokenizer from one input to two inputs, used for TextRerankPipeline
JavaScript bindings:
New pipeline: TextEmbeddingPipeline
PerfMetrics for the LLMPipeline
getTokenizer method implemented for the LLMPipeline
Other changes:
C API for WhisperPipeline has been added
gemma3-4b-it model is now supported in VLM Pipeline
Performance metrics for speculative decoding have been extended
Qwen2-VL and Qwen2.5-VL have been optimized for GPU
Exporting stateful Whisper models is now supported on NPU out of the box; using --disable-stateful is no longer required.
Dynamic prompts are now enabled by default on NPU:
Longer contexts are available as a preview feature on 32GB Intel® Core™ Ultra Series 2 systems (with prompt sizes up to 8–12K tokens).
The default chunk size is 1024 and can be controlled via the NPUW_LLM_PREFILL_CHUNK_SIZE property. For example, set it to 256 to see the effect on shorter prompts.
PREFILL_HINT can be set to STATIC to bring back the old behavior.
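A hedged sketch of tuning these NPU properties through OpenVINO GenAI, assuming properties can be passed as keyword arguments at pipeline construction; the values shown are illustrative:

    import openvino_genai as ov_genai

    # Property names as given in the notes above.
    pipe = ov_genai.LLMPipeline(
        "model_dir",
        "NPU",
        NPUW_LLM_PREFILL_CHUNK_SIZE=256,  # default is 1024
        # PREFILL_HINT="STATIC",          # restores the previous static behavior
    )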
Other Changes and Known Issues#
Jupyter Notebooks#
Known Issues#
2025.2 - 18 June 2025
OpenVINO™ Runtime
Common
Better developer experience with shorter build times, due to optimizations and source code refactoring. Code readability has been improved, helping developers understand which components are included by different C++ files.
Memory consumption has been optimized by expanding the usage of mmap for the GenAI component and introducing the delayed constant weights mechanism.
Support for the ISTFT operator for GPU has been expanded, improving support for text-to-speech, speech-to-text, and speech-to-speech models, like AudioShake and Kokoro.
Models like Behavior Sequence Transformer are now supported, thanks to SparseFillEmptyRows and SegmentMax operators.
google/fnet-base, tf/InstaNet, and more models are now enabled, thanks to DFT operators (discrete Fourier transform) supporting dynamism.
“COMPILED_BLOB” hint property is now available to speed up model compilation. The “COMPILED_BLOB” can be a regular or weightless model. For weightless models, the “WEIGHT_PATH” hint provides the location of the model weights (see the sketch after this list).
Reading tensor data from file as copy or using mmap feature is now available.
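A sketch of the “COMPILED_BLOB” flow described above; treat the exact key strings and accepted value types as assumptions of this sketch, and the file names as illustrative:

    import numpy as np
    import openvino as ov

    core = ov.Core()

    # A blob previously produced by compiled_model.export_model().
    blob = ov.Tensor(np.fromfile("model.blob", dtype=np.uint8))

    # For a weightless blob, WEIGHT_PATH tells the runtime where the
    # weights live; for a regular blob it can be omitted.
    compiled = core.compile_model(
        "model.xml", "GPU",
        {"COMPILED_BLOB": blob, "WEIGHT_PATH": "model.bin"},
    )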
AUTO Inference Mode
Memory footprint in model caching has been reduced by loading the model only for the selected plugin, avoiding duplicate model objects.
CPU Device Plugin
Per-channel INT8 KV cache compression is now enabled by default, helping LLMs maintain accuracy while reducing memory consumption.
Per-channel INT4 KV cache compression is supported and can be enabled using the KEY_CACHE_PRECISION and KEY_CACHE_QUANT_MODE properties (see the sketch after this list). Some models may be sensitive to INT4 KV cache compression.
Performance of encoder-based LLMs has been improved through additional graph-level optimizations, including QKV (Query, Key, and Value) projection and Multi-Head Attention (MHA).
SnapKV support has been implemented in the CPU plugin to reduce KV cache size while maintaining comparable performance. It calculates attention scores in PagedAttention for both prefill and decode stages. This feature is enabled by default in OpenVINO GenAI when KV cache eviction is used.
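A hedged sketch of enabling per-channel INT4 key-cache compression via the properties named above; the value spellings are assumptions of this sketch:

    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model(
        "llm.xml", "CPU",
        {"KEY_CACHE_PRECISION": "u4", "KEY_CACHE_QUANT_MODE": "BY_CHANNEL"},
    )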
GPU Device Plugin
Performance of generative models (e.g. large language models, visual language models, image generation models) has been improved on XMX-based platforms (Intel® Core™ Ultra Processor Series 2 built-in GPUs and Intel® Arc™ B Series Graphics) with dynamic quantization and optimization in GEMM and Convolution.
2nd token latency of INT4 generative models has been improved on Intel® Core™ Processors, Series 1.
LoRA support has been optimized for Intel® Core™ Processor GPUs, and its memory footprint has been reduced by optimizing the OPS node dependencies.
SnapKV cache rotation now supports accurate token eviction through re-rotation of cache segments that change position after token eviction.
KV cache compression is now available for systolic platforms with an update to micro kernel implementation.
Improvements to Paged Attention performance and functionality have been made, with support of different head sizes for Key and Value in KV-Cache inputs.
NPU Device Plugin
The NPU Plugin can now retrieve options from the compiler and mark only the corresponding OpenVINO properties as supported.
The model import path now supports passing precompiled models directly to the plugin using the ov::compiled_blob property (Tensor), removing the need for stream access.
The ov::intel_npu::turbo property is now forwarded both to the compiler and the driver when supported. Using NPU_TURBO may result in longer compile time, increased memory footprint, changes in workload latency, and compatibility issues with older NPU drivers.
The same Level Zero context is now used across OpenVINO Cores, enabling remote tensors created through one Core object to be used with inference requests created with another Core object.
BlobContainer has been replaced with regular OpenVINO tensors, simplifying the underlying container for a compiled blob.
Weightless caching and compilation for LLMs are now available when used with OpenVINO GenAI.
LLM accuracy issues with BF16 models have been resolved.
The NPU driver is now included in OpenVINO Docker images for Ubuntu, enabling out-of-the-box NPU support without manual driver installation. For instructions, refer to the OpenVINO Docker documentation.
NPU support for FP16-NF4 precision on Intel® Core™ 200V Series processors for models with up to 8B parameters is enabled through symmetrical and channel-wise quantization, improving accuracy while maintaining performance efficiency. FP16-NF4 is not supported on CPUs and GPUs.
OpenVINO Python API
Wheel packages and source code now include type-hinting support (.pyi files) to help Python developers work in IDEs. By default, .pyi files are generated automatically, but they can also be generated manually by developers.
The compiled_blob property has been added to improve work with compiled blobs for NPU.
OpenVINO C API
A new API function is now available to read IR models directly from memory.
OpenVINO Node.js API
OpenVINO GenAI has been expanded for JS package API compliance, to address future LangChain.js user requirements (defined by the LangChain adapter definition).
A new sample has been added, demonstrating OpenVINO GenAI in JS.
PyTorch Framework Support
Complex numbers in the RoPE pattern, used in Wan2.1 model, are now supported.
OpenVINO Model Server
Major new features:
Image generation endpoint - this preview feature enables image generation based on text prompts. The endpoint is compatible with OpenAI API making it easy to integrate with the existing ecosystem.
Agentic AI enablement via support for tools in LLM models. This preview feature allows easy integration of OpenVINO serving with AI Agents.
Model management via OVMS CLI now includes automatic download of OpenVINO models from Hugging Face Hub. This makes it possible to deploy generative pipelines with just a single command and manage the models without extra scripts or manual steps.
Other improvements
VLM models with chat/completion endpoint can now support passing the images as URL or as path to a local file system.
Option to use C++ only server version with support for LLM models. This smaller deployment package can be used both for completion and chat/completions.
The following issues have been fixed:
Correct error status now reported in streaming mode.
Known limitations
VLM models QwenVL2, QwenVL2.5, and Phi3_VL have low accuracy when deployed in a text generation pipeline with continuous batching. It is recommended to deploy these models in a stateful pipeline, which processes requests serially.
Neural Network Compression Framework
Data-free AWQ (Activation-aware Weight Quantization) method for 4-bit weight compression, nncf.compress_weights(), is now available for OpenVINO models. Now it is possible to compress weights to 4-bit with AWQ even without a dataset.
8-bit and 4-bit data-free weight compression, nncf.compress_weights(), is now available for models in ONNX format. See example.
4-bit data-aware AWQ (Activation-aware Weight Quantization) and Scale Estimation methods are now available for models in the TorchFX format.
TorchFunctionMode-based model tracing is now enabled by default for PyTorch models in nncf.quantize() and nncf.compress_weights().
Neural Low-Rank Adapter Search (NLS) Quantization-Aware Training (QAT) for more accurate 4-bit compression of LLMs on downstream tasks is now available. See example.
Weight compression time for NF4 data type has been reduced.
OpenVINO Tokenizers
Regex-based normalization and split operations have been optimized, resulting in significant speed improvements, especially for long input strings.
Two-string inputs are now supported, enabling various tasks, including RAG reranking.
Sentencepiece char-level tokenizers are now supported to enhance the SpeechT5 TTS model.
The tokenization node factory has been exposed to enable OpenVINO GenAI GGUF support.
OpenVINO.GenAI
New preview pipelines with C++ and Python samples have been added:
Text2SpeechPipeline,
TextEmbeddingPipeline covering RAG scenario.
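A minimal RAG-style sketch of the new TextEmbeddingPipeline, assuming an embedding model already exported to OpenVINO IR; directory names are illustrative and the method names follow the GenAI samples:

    import openvino_genai as ov_genai

    pipe = ov_genai.TextEmbeddingPipeline("embedder_dir", "CPU")

    # Embed a document corpus once, then embed queries against it.
    doc_vecs = pipe.embed_documents(["OpenVINO deploys models on CPU, GPU and NPU."])
    query_vec = pipe.embed_query("Which devices does OpenVINO target?")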
Visual language modeling (VLMPipeline):
VLM prompt can now refer to specific images. For example, the prompt <ov_genai_image_0>What’s in the image? will prepend the corresponding image to the prompt while ignoring other images (a hedged sketch follows this list). See VLMPipeline’s docstrings for more details.
VLM uses continuous batching by default, improving performance.
VLMPipeline can now be constructed from in-memory ov::Model.
Qwen2.5-VL support has been added.
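A hedged sketch of the image-reference syntax above; the model directory is illustrative and the dummy tensor stands in for a real preprocessed image:

    import numpy as np
    import openvino as ov
    import openvino_genai as ov_genai

    pipe = ov_genai.VLMPipeline("vlm_model_dir", "CPU")

    # A real application would load and preprocess an image here.
    image = ov.Tensor(np.zeros((1, 448, 448, 3), dtype=np.uint8))

    # <ov_genai_image_0> pins the prompt to images[0]; other images are ignored.
    print(pipe.generate("<ov_genai_image_0>What’s in the image?", images=[image]))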
JavaScript:
JavaScript samples have been added: beam_search_causal_lm and multinomial_causal_lm.
An interruption option for LLMPipeline streaming has been introduced.
The following has been added:
cache encryption samples demonstrating how to encode OpenVINO’s cached compiled model,
LLM ReAct Agent sample capable of calling external functions during text generation,
SD3 LoRA Adapter support for Text2ImagePipeline,
ov::genai::Tokenizer::get_vocab() method for C++ and Python,
ov::Property as arguments to the ov_genai_llm_pipeline_create function for the C API,
support for the SnapKV method for more accurate KV cache eviction, enabled by default when KV cache eviction is used,
preview support for GGUF models (GGML Unified Format). See the OpenVINO blog for details.
Other Changes and Known Issues
Jupyter Notebooks
Known Issues
Component: GPU. ID: 168284. Description: Using the phi-3 or phi-3.5 model for speculative decoding with large input sequences on GPU may cause an OpenCL out-of-resources error.
Component: GPU. ID: 168637. Description: Quantizing the Qwen3-8b model to int4 using the AWQ method results in accuracy issues on GPU.
Component: GPU. ID: 168889. Description: Running multiple benchmark_app processes simultaneously on Intel® Flex 170 or Intel® Arc™ A770 may lead to a system crash. This is due to a device driver issue but appears when using benchmark_app.
Component: OpenVINO GenAI. IDs: 167065, 168564, 168360, 168339, 168361. Description: Models such as Qwen-7B-Chat, Phi4-Reasoning, Llama-3.2-1B-Instruct, Qwen3-8B, and DeepSeek-R1-Distill-* show reduced accuracy in chat scenarios compared to regular generation requests. Currently no workaround is available; a fix is planned for future releases.
Component: OpenVINO GenAI. ID: 168957. Description: The stable-diffusion-v1-5 model in FP16 precision shows up to a 10% degradation in 2nd-token latency on Intel® Xeon® Platinum 8580. Currently no workaround is available; a fix is planned for future releases.
Component: ARM. ID: 166178. Description: Performance regression of models on ARM due to an upgrade to the latest ACL. A corresponding issue has been created in the ACL and oneDNN repositories.
Deprecation And Support#
Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. For more details, refer to: OpenVINO Legacy Features and Components.
Discontinued in 2025#
Runtime components:
The OpenVINO property of Affinity API is no longer available. It has been replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).
The openvino-nightly PyPI module has been discontinued. End-users should proceed with the Simple PyPI nightly repo instead. More information in Release Policy.
Binary operations Node API has been removed from Python API after previous deprecation.
Tools:
The OpenVINO™ Development Tools package (pip install openvino-dev) is no longer available for OpenVINO releases in 2025.
Model Optimizer is no longer available. Consider using the new conversion methods instead. For more details, see the model conversion transition guide.
Intel® Streaming SIMD Extensions (Intel® SSE) are currently not enabled in the binary package by default. They are still supported in the source code form.
Legacy prefixes l_, w_, and m_ have been removed from OpenVINO archive names.
OpenVINO GenAI:
StreamerBase::put(int64_t token) has been discontinued.
The Bool value for Callback streamer is no longer accepted. It must now return one of the three values of the StreamingStatus enum.
ChunkStreamerBase is deprecated. Use StreamerBase instead.
NNCF:
The create_compressed_model() method is now deprecated. The nncf.quantize() method is recommended for Quantization-Aware Training of PyTorch and TensorFlow models.
The OpenVINO Model Server (OVMS) benchmark client in C++ using the TensorFlow Serving API has been deprecated.
Deprecated and to be removed in the future#
Python 3.9 is now deprecated and will be unavailable after OpenVINO version 2025.4.
openvino.Type.undefined is now deprecated and will be removed with version 2026.0. openvino.Type.dynamic should be used instead.
APT & YUM Repositories Restructure: Starting with release 2025.1, users can switch to the new repository structure for APT and YUM, which no longer uses year-based subdirectories (like “2025”). The old (legacy) structure will still be available until 2026, when the change will be finalized. Detailed instructions are available on the relevant documentation pages.
OpenCV binaries will be removed from Docker images in 2026.
The openvino namespace of the OpenVINO Python API has been redesigned, removing the nested openvino.runtime module. The old namespace is now considered deprecated and will be discontinued in 2026.0. A new namespace structure is available for immediate migration. Details will be provided through warnings and documentation.
Starting with the next release, manylinux2014 will be upgraded to manylinux_2_28. This aligns with modern toolchain requirements but also means that CentOS 7 will no longer be supported due to glibc incompatibility.
With the release of Node.js v22, updated Node.js bindings are now available and compatible with the latest LTS version. These bindings do not support CentOS 7, as they rely on newer system libraries unavailable on legacy systems.
Legal Information#
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at www.intel.com or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Atom, Core, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.
Copyright © 2025, Intel Corporation. All rights reserved.
For more complete information about compiler optimizations, see our Optimization Notice.
Performance varies by use, configuration and other factors.