OpenVINO™ Integrations
Hugging Face Optimum-Intel
Load and run models through the Hugging Face API while leveraging OpenVINO for inference.
The Hugging Face Hub hosts pre-optimized OpenVINO IR models, so you can use
them in your projects without any additional conversion steps.
Benefits:
- Minimize complex coding for Generative AI.
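A minimal sketch of the workflow, using the `OVModelForCausalLM` class from Optimum-Intel; the `gpt2` checkpoint is just an illustrative choice, and any compatible Hub model should work:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# pre-converted IR models from the Hub load without it.
model = OVModelForCausalLM.from_pretrained("gpt2", export=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("OpenVINO is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the `OVModel*` classes mirror the familiar `transformers` API, switching an existing pipeline to OpenVINO is usually a one-line change to the model class.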
OpenVINO Execution Provider for ONNX Runtime
Utilize OpenVINO as a backend with your existing ONNX Runtime code.
Benefits:
- Enhanced inference performance on Intel hardware with minimal code modifications.
A notebook example: YOLOv8 object detection
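A minimal sketch of enabling the provider, assuming an illustrative `model.onnx` file and input shape; provider option names such as `device_type` can vary between OpenVINO Execution Provider releases:

```python
import numpy as np
import onnxruntime as ort

# Request the OpenVINO Execution Provider, falling back to the default
# CPU provider if it is unavailable.
session = ort.InferenceSession(
    "model.onnx",  # illustrative path
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"device_type": "CPU"}, {}],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # illustrative shape
outputs = session.run(None, {input_name: dummy})
```

The rest of the ONNX Runtime code stays untouched; only the provider list changes.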
Torch.compile with OpenVINO
Use OpenVINO in Python-native PyTorch applications by JIT-compiling model code into optimized kernels.
Benefits:
- Enhanced inference performance on Intel hardware with minimal code modifications.
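A minimal sketch, assuming the `openvino` package is installed (which registers the `"openvino"` backend for `torch.compile`); the toy model is illustrative:

```python
import torch

# A small illustrative model; any torch.nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
).eval()

# JIT-compile the model into OpenVINO-optimized kernels.
compiled_model = torch.compile(model, backend="openvino")

with torch.no_grad():
    out = compiled_model(torch.randn(1, 128))  # first call triggers compilation
```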
OpenVINO LLMs with LlamaIndex
Build context-augmented GenAI applications with the LlamaIndex framework and enhance
runtime performance with OpenVINO.
Benefits:
- Minimize complex coding for Generative AI.
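A minimal sketch using the `OpenVINOLLM` class from the `llama-index-llms-openvino` package; the model ID is illustrative, and constructor arguments may differ between LlamaIndex releases:

```python
from llama_index.llms.openvino import OpenVINOLLM

# Load a Hugging Face model and run it through OpenVINO on the CPU.
llm = OpenVINOLLM(
    model_id_or_path="HuggingFaceH4/zephyr-7b-beta",  # illustrative model
    device_map="cpu",
)

response = llm.complete("What does OpenVINO accelerate?")
print(response)
```

The resulting `llm` object plugs into LlamaIndex query engines and chat pipelines like any other LLM integration.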