openvino_genai.VLMPipeline#
- class openvino_genai.VLMPipeline#
Bases: pybind11_object
This class is used for generation with VLMs
- __init__(self: openvino_genai.py_openvino_genai.VLMPipeline, models_path: os.PathLike, device: str, **kwargs) None #
VLMPipeline class constructor.
models_path (str): Path to the folder with exported model files.
device (str): Device on which inference will be done (e.g., CPU, GPU). Default is 'CPU'.
kwargs: Device properties.
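For illustration, a minimal construction sketch; the model folder path is a placeholder, and the device property shown for GPU is an assumption that depends on the target plugin:

import openvino_genai as ov_genai

# Construct a VLM pipeline from an exported model folder (placeholder path)
# and run inference on CPU.
pipe = ov_genai.VLMPipeline("./exported_vlm_model", "CPU")

# Additional keyword arguments are forwarded as device properties, for example
# an OpenVINO cache directory (shown as an assumption; support depends on the device).
pipe_gpu = ov_genai.VLMPipeline("./exported_vlm_model", "GPU", CACHE_DIR="./ov_cache")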
Methods
__delattr__(name, /): Implement delattr(self, name).
__dir__(): Default dir() implementation.
__eq__(value, /): Return self==value.
__format__(format_spec, /): Default object formatter.
__ge__(value, /): Return self>=value.
__getattribute__(name, /): Return getattr(self, name).
__gt__(value, /): Return self>value.
__hash__(): Return hash(self).
__init__(self, models_path, device, **kwargs): VLMPipeline class constructor.
__init_subclass__(): This method is called when a class is subclassed.
__le__(value, /): Return self<=value.
__lt__(value, /): Return self<value.
__ne__(value, /): Return self!=value.
__new__(**kwargs)
__reduce__(): Helper for pickle.
__reduce_ex__(protocol, /): Helper for pickle.
__repr__(): Return repr(self).
__setattr__(name, value, /): Implement setattr(self, name, value).
__sizeof__(): Size of object in memory, in bytes.
__str__(): Return str(self).
__subclasshook__(): Abstract classes can override this to customize issubclass().
finish_chat(self)
generate(*args, **kwargs): Overloaded function.
get_generation_config(self)
get_tokenizer(self)
set_chat_template(self, arg0)
set_generation_config(self, arg0)
start_chat(self[, system_message])
- __class__#
alias of pybind11_type
- __delattr__(name, /)#
Implement delattr(self, name).
- __dir__()#
Default dir() implementation.
- __eq__(value, /)#
Return self==value.
- __format__(format_spec, /)#
Default object formatter.
- __ge__(value, /)#
Return self>=value.
- __getattribute__(name, /)#
Return getattr(self, name).
- __gt__(value, /)#
Return self>value.
- __hash__()#
Return hash(self).
- __init__(self: openvino_genai.py_openvino_genai.VLMPipeline, models_path: os.PathLike, device: str, **kwargs) None #
VLMPipeline class constructor.
models_path (str): Path to the folder with exported model files.
device (str): Device on which inference will be done (e.g., CPU, GPU). Default is 'CPU'.
kwargs: Device properties.
- __init_subclass__()#
This method is called when a class is subclassed.
The default implementation does nothing. It may be overridden to extend subclasses.
- __le__(value, /)#
Return self<=value.
- __lt__(value, /)#
Return self<value.
- __ne__(value, /)#
Return self!=value.
- __new__(**kwargs)#
- __reduce__()#
Helper for pickle.
- __reduce_ex__(protocol, /)#
Helper for pickle.
- __repr__()#
Return repr(self).
- __setattr__(name, value, /)#
Implement setattr(self, name, value).
- __sizeof__()#
Size of object in memory, in bytes.
- __str__()#
Return str(self).
- __subclasshook__()#
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).
- finish_chat(self: openvino_genai.py_openvino_genai.VLMPipeline) None #
- generate(*args, **kwargs)#
Overloaded function.
generate(self: openvino_genai.py_openvino_genai.VLMPipeline, prompt: str, images: list[openvino._pyopenvino.Tensor], generation_config: openvino_genai.py_openvino_genai.GenerationConfig = None, streamer: Union[Callable[[str], bool], openvino_genai.py_openvino_genai.StreamerBase, None] = None, **kwargs) -> object
Generates sequences for VLMs.
- param prompt:
input prompt
- type prompt:
str
- param images:
list of images
- type images:
List[ov.Tensor]
- param generation_config:
generation_config
- type generation_config:
GenerationConfig or a Dict
- param streamer:
streamer, either a callable that accepts a decoded string chunk and returns a boolean flag indicating whether generation should be stopped, or a StreamerBase instance
- type streamer:
Callable[[str], bool], ov.genai.StreamerBase
- param kwargs:
arbitrary keyword arguments with keys corresponding to GenerationConfig fields.
- type kwargs:
Dict
- return:
results in decoded form
- rtype:
DecodedResults
generate(self: openvino_genai.py_openvino_genai.VLMPipeline, prompt: str, **kwargs) -> object
Generates sequences for VLMs.
- param prompt:
input prompt
- type prompt:
str
- param kwargs:
arbitrary keyword arguments with keys corresponding to the generate parameters.
Expected parameters list:
image: ov.Tensor - input image
images: List[ov.Tensor] - input images
generation_config: GenerationConfig
streamer: Callable[[str], bool], ov.genai.StreamerBase - streamer, either a callable returning a boolean flag indicating whether generation should be stopped, or a StreamerBase instance
- return:
results in decoded form
- rtype:
DecodedResults
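A hedged usage sketch covering both overloads; the model path is a placeholder, and the random uint8 tensor is a stand-in for real image data (the [1, height, width, channels] layout is an assumption and depends on the concrete model):

import numpy as np
import openvino as ov
import openvino_genai as ov_genai

pipe = ov_genai.VLMPipeline("./exported_vlm_model", "CPU")  # placeholder path

# Random pixels standing in for a real image; the uint8 NHWC layout is an
# assumption here, check the requirements of the exported model.
image = ov.Tensor(np.random.randint(0, 255, (1, 448, 448, 3), dtype=np.uint8))

# First overload: prompt plus a list of image tensors; GenerationConfig fields
# (e.g. max_new_tokens) can be passed directly as keyword arguments.
result = pipe.generate("Describe the image.", [image], max_new_tokens=100)
print(result)

# Second overload: everything through **kwargs, including a streamer callback
# that prints chunks as they arrive and returns False to keep generating.
pipe.generate(
    "Describe the image.",
    images=[image],
    max_new_tokens=100,
    streamer=lambda chunk: print(chunk, end="", flush=True) or False,
)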
- get_generation_config(self: openvino_genai.py_openvino_genai.VLMPipeline) openvino_genai.py_openvino_genai.GenerationConfig #
- get_tokenizer(self: openvino_genai.py_openvino_genai.VLMPipeline) openvino_genai.py_openvino_genai.Tokenizer #
- set_chat_template(self: openvino_genai.py_openvino_genai.VLMPipeline, arg0: str) None #
- set_generation_config(self: openvino_genai.py_openvino_genai.VLMPipeline, arg0: openvino_genai.py_openvino_genai.GenerationConfig) None #
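For illustration, a sketch of reading and updating the pipeline-level generation configuration; the field names used (max_new_tokens, do_sample, temperature) are standard GenerationConfig fields, and the model path is a placeholder:

import openvino_genai as ov_genai

pipe = ov_genai.VLMPipeline("./exported_vlm_model", "CPU")  # placeholder path

# Read the current configuration, tweak decoding parameters, and write it back
# so that subsequent generate() calls pick up the new defaults.
config = pipe.get_generation_config()
config.max_new_tokens = 256
config.do_sample = True
config.temperature = 0.7
pipe.set_generation_config(config)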
- start_chat(self: openvino_genai.py_openvino_genai.VLMPipeline, system_message: str = '') None #
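A multi-turn chat sketch using start_chat() and finish_chat(); the system message, model path, and random image tensor are placeholders:

import numpy as np
import openvino as ov
import openvino_genai as ov_genai

pipe = ov_genai.VLMPipeline("./exported_vlm_model", "CPU")  # placeholder path
image = ov.Tensor(np.random.randint(0, 255, (1, 448, 448, 3), dtype=np.uint8))

# start_chat() opens a chat session so that conversation history is preserved
# across generate() calls; finish_chat() closes the session and clears it.
pipe.start_chat("You are a helpful assistant.")
print(pipe.generate("What is shown in this image?", images=[image], max_new_tokens=100))
print(pipe.generate("Summarize your previous answer in one sentence.", max_new_tokens=50))
pipe.finish_chat()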