Deprecated List
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::Blob::element_size () const =0
Cast to MemoryBlob and use its API instead. The Blob class can represent a compound blob, which does not refer to a single solid memory block.
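A minimal migration sketch for the note above, assuming an existing `blob` of type `Blob::Ptr` holding `f32` data:

```cpp
#include <ie_blob.h>

// Sketch: replace deprecated direct Blob accessors with the MemoryBlob API.
void read_blob(const InferenceEngine::Blob::Ptr& blob) {
    // as<> returns nullptr for compound blobs that have no single solid memory
    auto mblob = InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);
    if (!mblob)
        return;  // compound blob: process its underlying blobs instead

    // element size is available from the tensor descriptor's precision
    size_t elem_size = mblob->getTensorDesc().getPrecision().size();

    // rmap() locks the memory for reading; the lock is released when the
    // LockedMemory holder goes out of scope
    auto holder = mblob->rmap();
    const float* data = holder.as<const float*>();
    (void)data;
    (void)elem_size;
}
```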
Global InferenceEngine::Blob::product (const SizeVector &dims) noexcept
Cast to MemoryBlob and use its API instead.
Global InferenceEngine::Blob::properProduct (const SizeVector &dims) noexcept
Cast to MemoryBlob and use its API instead.
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::CNNNetwork::CNNNetwork (std::shared_ptr< ICNNNetwork > network)
Don’t use this constructor. It will be removed soon
Global InferenceEngine::CNNNetwork::operator const ICNNNetwork & () const
InferenceEngine::ICNNNetwork interface is deprecated
Global InferenceEngine::CNNNetwork::operator ICNNNetwork & ()
InferenceEngine::ICNNNetwork interface is deprecated
Global InferenceEngine::CNNNetwork::operator ICNNNetwork::Ptr ()
InferenceEngine::ICNNNetwork interface is deprecated
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::Core::ImportNetwork (std::istream &networkModel)
Use Core::ImportNetwork with explicit device name
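A migration sketch for the overload above; the blob path and the `"CPU"` device name are placeholder values:

```cpp
#include <fstream>
#include <ie_core.hpp>

// Sketch: pass the target device name explicitly instead of relying on
// the deprecated single-argument ImportNetwork overload.
InferenceEngine::Core core;
std::ifstream blob_file("model.blob", std::ios::binary);
InferenceEngine::ExecutableNetwork exec_net =
    core.ImportNetwork(blob_file, "CPU");
```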
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::Data::reshape (const std::initializer_list< size_t > &dims, Layout layout)
Use InferenceEngine::Data::reshape(const SizeVector&, Layout)
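A sketch of the suggested overload, assuming an existing `data` of type `Data::Ptr`; the shape is a placeholder:

```cpp
// Sketch: pass a SizeVector instead of an initializer_list.
InferenceEngine::SizeVector dims{1, 3, 224, 224};
data->reshape(dims, InferenceEngine::Layout::NCHW);
```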
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::ExecutableNetwork::CreateInferRequestPtr ()
Global InferenceEngine::ExecutableNetwork::operator std::shared_ptr< IExecutableNetwork > ()
Will be removed. Use operator bool
Global InferenceEngine::ExecutableNetwork::reset (std::shared_ptr< IExecutableNetwork > newActual)
The method will be removed
Class InferenceEngine::ExperimentalDetectronPriorGridGeneratorLayer
Migrate to IR v10 and work with ngraph::Function directly. The class will be removed in 2021.1
Class InferenceEngine::ExperimentalSparseWeightedReduceLayer
Migrate to IR v10 and work with ngraph::Function directly. The class will be removed in 2021.1
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Class InferenceEngine::GRUCell
Migrate to IR v10 and work with ngraph::Function directly. The class will be removed in 2021.1
Global InferenceEngine::ICNNNetwork::addOutput (const std::string &layerName, size_t outputIndex=0, ResponseDesc *resp=nullptr) noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getBatchSize () const =0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getFunction () const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getFunction () noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getInput (const std::string &inputName) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getInputsInfo (InputsDataMap &inputs) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getName () const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getOutputsInfo (OutputsDataMap &out) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::getOVNameForTensor (std::string &ov_name, const std::string &orig_name, ResponseDesc *resp) const noexcept
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::layerCount () const =0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::reshape (const std::map< std::string, ngraph::PartialShape > &partialShapes, ResponseDesc *resp) noexcept
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::reshape (const InputShapes &inputShapes, ResponseDesc *resp) noexcept
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::serialize (std::ostream &xmlStream, Blob::Ptr &binData, ResponseDesc *resp) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::serialize (std::ostream &xmlStream, std::ostream &binStream, ResponseDesc *resp) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::serialize (const std::string &xmlPath, const std::string &binPath, ResponseDesc *resp) const noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
Global InferenceEngine::ICNNNetwork::setBatchSize (size_t size, ResponseDesc *responseDesc) noexcept=0
Use InferenceEngine::CNNNetwork wrapper instead
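The CNNNetwork wrapper covers the deprecated ICNNNetwork calls listed above. A minimal sketch; `"model.xml"` and the output paths are placeholders:

```cpp
#include <ie_core.hpp>

// Sketch: use the CNNNetwork wrapper instead of the raw ICNNNetwork interface.
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

auto inputs  = network.getInputsInfo();   // replaces ICNNNetwork::getInputsInfo
auto outputs = network.getOutputsInfo();  // replaces ICNNNetwork::getOutputsInfo
std::string name = network.getName();     // replaces ICNNNetwork::getName
network.setBatchSize(1);                  // replaces ICNNNetwork::setBatchSize
network.serialize("out.xml", "out.bin");  // replaces ICNNNetwork::serialize
```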
Global InferenceEngine::IExecutableNetwork::GetExecGraphInfo (ICNNNetwork::Ptr &graphPtr, ResponseDesc *resp) noexcept=0
Use InferenceEngine::ExecutableNetwork::GetExecGraphInfo instead
Global InferenceEngine::IExecutableNetworkInternal::Export (const std::string &modelFileName)
Use IExecutableNetworkInternal::Export(std::ostream& networkModel)
Global InferenceEngine::IInferencePlugin::ImportNetwork (const std::string &modelFileName, const std::map< std::string, std::string > &config)
Use ImportNetwork(std::istream& networkModel, const std::map<std::string, std::string>& config)
Use InferenceEngine::InferRequest C++ wrapper
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::LowLatency (InferenceEngine::CNNNetwork &network)
Use InferenceEngine::lowLatency2 instead. This transformation will be removed in 2023.1.
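A sketch of the suggested replacement, assuming an existing `network` of type `CNNNetwork` and that `lowLatency2` is declared in `ie_transformations.hpp`:

```cpp
#include <ie_transformations.hpp>

// Sketch: apply the LowLatency2 transformation in place of the
// deprecated LowLatency call.
InferenceEngine::lowLatency2(network);
```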
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global InferenceEngine::PluginConfigParams::KEY_DUMP_EXEC_GRAPH_AS_DOT
Use InferenceEngine::ExecutableNetwork::GetExecGraphInfo::serialize method
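A sketch of the suggested replacement, assuming an existing `exec_net` of type `ExecutableNetwork`; the output paths are placeholders:

```cpp
// Sketch: instead of the KEY_DUMP_EXEC_GRAPH_AS_DOT config key,
// serialize the runtime graph explicitly.
InferenceEngine::CNNNetwork runtime_graph = exec_net.GetExecGraphInfo();
runtime_graph.serialize("exec_graph.xml", "exec_graph.bin");
```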
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Class InferenceEngine::RNNCell
Migrate to IR v10 and work with ngraph::Function directly. The class will be removed in 2021.1
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Use IE_VERSION_[MAJOR|MINOR|PATCH] definitions, buildNumber property
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
Global ngraph::CoordinateTransformBasic::index (const Coordinate &c) const
Global ngraph::maximum_value (const Output< Node > &value)
Use evaluate_upper_bound instead
Global ov::Core::add_extension (const std::shared_ptr< InferenceEngine::IExtension > &extension)
This method is deprecated. Please use other Core::add_extension methods.
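One of the non-deprecated overloads loads extensions from a shared library. A minimal sketch; the library path is a placeholder:

```cpp
#include <openvino/runtime/core.hpp>

// Sketch: load extensions by library path instead of passing a
// deprecated InferenceEngine::IExtension object.
ov::Core core;
core.add_extension("libcustom_extensions.so");
```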
Global ov::Model::evaluate (const ov::HostTensorVector &output_tensors, const ov::HostTensorVector &input_tensors) const
Use evaluate with ov::Tensor instead
Global ov::Model::evaluate (const ov::HostTensorVector &output_tensors, const ov::HostTensorVector &input_tensors, ov::EvaluationContext &evaluation_context) const
Use evaluate with ov::Tensor instead
Global ov::Node::evaluate (const ov::HostTensorVector &output_values, const ov::HostTensorVector &input_values, const EvaluationContext &evaluationContext) const
Use evaluate with ov::Tensor instead
Global ov::Node::evaluate (const ov::HostTensorVector &output_values, const ov::HostTensorVector &input_values) const
Use evaluate with ov::Tensor instead
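A sketch of the Tensor-based overload, assuming an existing `model` of type `std::shared_ptr<ov::Model>` with a single f32 input and output; the shape is a placeholder:

```cpp
#include <openvino/openvino.hpp>

// Sketch: call evaluate() with ov::TensorVector instead of HostTensorVector.
ov::Tensor input(ov::element::f32, ov::Shape{1, 4});
ov::TensorVector inputs{input};
ov::TensorVector outputs(1);  // evaluate() fills the output tensors
bool ok = model->evaluate(outputs, inputs);
```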