OpenVINO™ Integrations
OpenVINO has been adopted by multiple AI projects in various areas. For an extensive list of community-based projects involving OpenVINO, see the Awesome OpenVINO repository.
Hugging Face Optimum-Intel
Load and run models with OpenVINO acceleration directly through the Hugging Face API.
The Hugging Face Hub hosts pre-optimized OpenVINO IR models, so you can use
them in your projects without any adjustments.
Benefits:
- Minimize complex coding for Generative AI.
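A minimal sketch of loading a model through Optimum-Intel, assuming the package is installed with OpenVINO extras; the model id ("gpt2") is just a small example choice:

```python
# A minimal sketch, assuming pip install "optimum[openvino]".
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# export=True converts the PyTorch weights to OpenVINO IR on the fly;
# pre-optimized IR models hosted on the Hub load the same way without it.
model = OVModelForCausalLM.from_pretrained("gpt2", export=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("OpenVINO is", max_new_tokens=20)[0]["generated_text"])
```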
OpenVINO Execution Provider for ONNX Runtime
Utilize OpenVINO as a backend with your existing ONNX Runtime code.
Benefits:
- Enhanced inference performance on Intel hardware with minimal code modifications.
A notebook example: YOLOv8 object detection
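Switching an existing ONNX Runtime session to the OpenVINO Execution Provider is typically a one-argument change, sketched below; the model path, input shape, and device type are assumptions for illustration:

```python
# A minimal sketch, assuming the onnxruntime-openvino package is
# installed and model.onnx exists locally (both are assumptions).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],    # the only change vs. the default CPU EP
    provider_options=[{"device_type": "CPU"}],  # e.g. "GPU" or "NPU" on matching hardware
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape is an assumption
outputs = session.run(None, {input_name: dummy})
```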
Torch.compile with OpenVINO
Use OpenVINO for Python-native applications by JIT-compiling code into optimized kernels.
Benefits:
- Enhanced inference performance on Intel hardware with minimal code modifications.
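In practice this is a one-argument change to `torch.compile`. A minimal sketch, using a toy module since any `torch.nn.Module` works:

```python
# A minimal sketch; importing openvino.torch registers the "openvino"
# torch.compile backend (requires the openvino package).
import torch
import openvino.torch  # noqa: F401

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU())
compiled = torch.compile(model, backend="openvino")

# The first call triggers JIT compilation into OpenVINO-optimized kernels.
print(compiled(torch.randn(1, 16)))
```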
OpenVINO LLMs with LlamaIndex
Build context-augmented GenAI applications with the LlamaIndex framework and enhance
runtime performance with OpenVINO.
Benefits:
- Minimize complex coding for Generative AI.
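A hedged sketch of wiring OpenVINO into LlamaIndex through the llama-index-llms-openvino package; the model id and parameter names follow that package's documented interface and should be treated as assumptions:

```python
# A minimal sketch, assuming llama-index-llms-openvino is installed.
# Model id and generation settings are example choices.
from llama_index.llms.openvino import OpenVINOLLM

llm = OpenVINOLLM(
    model_id_or_path="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,
    max_new_tokens=256,
    device_map="cpu",  # or "gpu" on supported hardware
)
print(llm.complete("What does OpenVINO do?"))
```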
OpenVINO Backend for ExecuTorch
Export and run AI models using OpenVINO with ExecuTorch to optimize performance on
Intel hardware.
Benefits:
- Accelerate inference, reduce latency, and simplify deployment for efficient AI applications.
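A hedged sketch of the export-and-lower flow; the `OpenvinoPartitioner` import path is an assumption based on the `executorch.backends` layout and may differ between ExecuTorch releases:

```python
# A hedged sketch; the partitioner import path is an assumption.
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.openvino.partitioner import OpenvinoPartitioner  # assumed path

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)

# Export with torch.export, then delegate supported subgraphs to OpenVINO.
exported = torch.export.export(model, example_inputs)
program = to_edge_transform_and_lower(
    exported,
    partitioner=[OpenvinoPartitioner()],
).to_executorch()

with open("model.pte", "wb") as f:  # .pte is ExecuTorch's serialized format
    f.write(program.buffer)
```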
OpenVINO Integration for LangChain
Integrate OpenVINO with the LangChain framework to enhance runtime performance for GenAI applications.
Benefits:
- Streamline the integration and chaining of language models for efficient AI workflows.
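A minimal sketch using LangChain's Hugging Face pipeline wrapper, which accepts an OpenVINO backend; the model id, device, and generation settings are example choices:

```python
# A minimal sketch, assuming langchain-huggingface and
# optimum[openvino] are installed.
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",                      # run the pipeline on OpenVINO
    model_kwargs={"device": "CPU"},
    pipeline_kwargs={"max_new_tokens": 64},
)
print(llm.invoke("OpenVINO speeds up"))
```

The resulting `llm` plugs into LangChain chains like any other LLM object.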
Intel® Geti™
Build computer vision models faster with less data using Intel® Geti™. It streamlines labeling, training, and deployment, exporting models optimized for OpenVINO.
Benefits:
- Train with less data and deploy models faster.
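Models exported from Intel® Geti™ are standard OpenVINO IR files, so serving one is plain OpenVINO runtime code. A minimal sketch; the file path and input shape are assumptions:

```python
# A minimal sketch of running a Geti-exported OpenVINO IR model;
# the path and input shape are assumptions for illustration.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("deployment/model.xml")  # path is an assumption
compiled = core.compile_model(model, "CPU")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
results = compiled(image)  # CompiledModel is callable for one-shot inference
```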
AI Playground™
Use Intel® OpenVINO™ in AI Playground to optimize and run AI models efficiently on Intel
CPUs and Arc GPUs, enabling local image generation, editing, and video processing. It
supports OpenVINO-optimized models such as TinyLlama, Mistral 7B, and Phi-3 mini with no
conversion needed.
Benefits:
- Easily set up pre-optimized models.
- Run faster, hardware-accelerated inference with OpenVINO.
Intel® AI Assistant Builder
Run local AI assistants with Intel® AI Assistant Builder using OpenVINO-optimized models
like Phi-3 and Qwen2.5. Build secure, efficient assistants customized for your data
and workflows.
Benefits:
- Build custom assistants with agentic workflows and knowledge bases.
- Keep data secure by running fully local.