This structure provides functions that fill inference options for the ONNX OpenVINO Execution Provider. See https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options for a summary of the available options.
#include <opencv2/gapi/infer/onnx.hpp>
Public Member Functions

GAPI_WRAP OpenVINO (const std::string &dev_type)
 Class constructor.

GAPI_WRAP OpenVINO & cfgCacheDir (const std::string &dir)
 Specifies OpenVINO Execution Provider cache dir.

GAPI_WRAP OpenVINO & cfgEnableDynamicShapes ()
 Enables OpenVINO Execution Provider dynamic shapes.

GAPI_WRAP OpenVINO & cfgEnableOpenCLThrottling ()
 Enables OpenVINO Execution Provider OpenCL throttling.

GAPI_WRAP OpenVINO & cfgNumThreads (size_t nthreads)
 Specifies OpenVINO Execution Provider number of threads.
Public Attributes

std::string cache_dir
std::string device_type
bool enable_dynamic_shapes = false
bool enable_opencl_throttling = false
size_t num_of_threads = 0
Detailed Description
This structure provides functions that fill inference options for the ONNX OpenVINO Execution Provider. See https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options for a summary of the available options.
Constructor & Destructor Documentation
◆ OpenVINO()
cv::gapi::onnx::ep::OpenVINO::OpenVINO (const std::string &dev_type)
inline explicit
Class constructor.
Constructs OpenVINO parameters based on device type information.
- Parameters
    dev_type	Target device type to use ("CPU_FP32", "GPU_FP16", etc.).
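The following is a minimal usage sketch. It assumes cfgAddExecutionProvider() is available on cv::gapi::onnx::Params (as in recent OpenCV G-API releases); the network tag, G_API_NET signature, and model path are placeholders chosen for illustration.

#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/onnx.hpp>

// Hypothetical single-input/single-output network interface.
G_API_NET(SampleNet, <cv::GMat(cv::GMat)>, "sample.net");

int main() {
    // Build execution-provider options for an FP32 model on the CPU device.
    cv::gapi::onnx::ep::OpenVINO ov_ep("CPU_FP32");

    // Attach the provider to the ONNX network parameters ("model.onnx" is a placeholder path).
    auto net = cv::gapi::onnx::Params<SampleNet>{"model.onnx"}
                   .cfgAddExecutionProvider(std::move(ov_ep));

    // net is then passed to a G-API graph via cv::gapi::networks(net) as usual.
    return 0;
}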
Member Function Documentation
◆ cfgCacheDir()
Specifies OpenVINO Execution Provider cache dir.
This function is used to explicitly specify the path where compiled blobs are saved and loaded, enabling the model caching feature.
- Parameters
    dir	Path to the directory that will be used as the cache.
- Returns
- reference to this parameter structure.
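A short sketch follows; the device string and cache directory are arbitrary values. With a cache directory set, compiled blobs are written on the first run and reused on later runs.

#include <opencv2/gapi/infer/onnx.hpp>

int main() {
    // Compiled blobs are saved to and loaded from this directory.
    auto ov_ep = cv::gapi::onnx::ep::OpenVINO("GPU_FP16")
                     .cfgCacheDir("/tmp/ov_cache");
    return 0;
}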
◆ cfgEnableDynamicShapes()
Enables OpenVINO Execution Provider dynamic shapes.
This function is used to enable working with dynamically shaped models, whose shapes are set at run time based on the shape of the inference input image/data (on CPU).
- Returns
- reference to this parameter structure.
◆ cfgEnableOpenCLThrottling()
Enables OpenVINO Execution Provider OpenCL throttling.
This function is used to enable OpenCL queue throttling for GPU devices (reduces CPU utilization when using GPU).
- Returns
- reference to this parameter structure.
◆ cfgNumThreads()
Specifies OpenVINO Execution Provider number of threads.
This function is used to override the accelerator's default number of threads with this value at runtime.
- Parameters
    nthreads	Number of threads.
- Returns
- reference to this parameter structure.
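Each setter returns a reference to the same structure, so several options can be chained. The sketch below uses illustrative values only (the thread count and device strings are arbitrary).

#include <opencv2/gapi/infer/onnx.hpp>

int main() {
    // GPU device: cap the thread count and throttle the OpenCL queue.
    auto gpu_ep = cv::gapi::onnx::ep::OpenVINO("GPU_FP16")
                      .cfgNumThreads(4)              // override the provider's default thread count
                      .cfgEnableOpenCLThrottling();  // reduce CPU utilization while the GPU works

    // CPU device: allow models whose input shapes are only known at run time.
    auto cpu_ep = cv::gapi::onnx::ep::OpenVINO("CPU_FP32")
                      .cfgEnableDynamicShapes();
    return 0;
}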
Member Data Documentation
◆ cache_dir
std::string cv::gapi::onnx::ep::OpenVINO::cache_dir
◆ device_type
std::string cv::gapi::onnx::ep::OpenVINO::device_type
◆ enable_dynamic_shapes
bool cv::gapi::onnx::ep::OpenVINO::enable_dynamic_shapes = false
◆ enable_opencl_throttling
bool cv::gapi::onnx::ep::OpenVINO::enable_opencl_throttling = false
◆ num_of_threads
size_t cv::gapi::onnx::ep::OpenVINO::num_of_threads = 0
The documentation for this struct was generated from the following file:
- opencv2/gapi/infer/onnx.hpp