G-API Video processing functionality

Functions

GMat cv::gapi::BackgroundSubtractor (const GMat &src, const cv::gapi::video::BackgroundSubtractorParams &bsParams)
 Gaussian Mixture-based or K-nearest neighbours-based Background/Foreground Segmentation Algorithm. The operation generates a foreground mask.
 
std::tuple< GArray< GMat >, GScalar > cv::gapi::buildOpticalFlowPyramid (const GMat &img, const Size &winSize, const GScalar &maxLevel, bool withDerivatives=true, int pyrBorder=BORDER_REFLECT_101, int derivBorder=BORDER_CONSTANT, bool tryReuseInputImage=true)
 Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.
 
std::tuple< GArray< Point2f >, GArray< uchar >, GArray< float > > cv::gapi::calcOpticalFlowPyrLK (const GArray< GMat > &prevPyr, const GArray< GMat > &nextPyr, const GArray< Point2f > &prevPts, const GArray< Point2f > &predPts, const Size &winSize=Size(21, 21), const GScalar &maxLevel=3, const TermCriteria &criteria=TermCriteria(TermCriteria::COUNT|TermCriteria::EPS, 30, 0.01), int flags=0, double minEigThresh=1e-4)
 
std::tuple< GArray< Point2f >, GArray< uchar >, GArray< float > > cv::gapi::calcOpticalFlowPyrLK (const GMat &prevImg, const GMat &nextImg, const GArray< Point2f > &prevPts, const GArray< Point2f > &predPts, const Size &winSize=Size(21, 21), const GScalar &maxLevel=3, const TermCriteria &criteria=TermCriteria(TermCriteria::COUNT|TermCriteria::EPS, 30, 0.01), int flags=0, double minEigThresh=1e-4)
 Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
 
GMat cv::gapi::KalmanFilter (const GMat &measurement, const GOpaque< bool > &haveMeasurement, const cv::gapi::KalmanParams &kfParams)
 
GMat cv::gapi::KalmanFilter (const GMat &measurement, const GOpaque< bool > &haveMeasurement, const GMat &control, const cv::gapi::KalmanParams &kfParams)
 Standard Kalman filter algorithm http://en.wikipedia.org/wiki/Kalman_filter.
 

Detailed Description

Function Documentation

◆ BackgroundSubtractor()

GMat cv::gapi::BackgroundSubtractor ( const GMat &  src,
const cv::gapi::video::BackgroundSubtractorParams &  bsParams 
)

#include <opencv2/gapi/video.hpp>

Gaussian Mixture-based or K-nearest neighbours-based Background/Foreground Segmentation Algorithm. The operation generates a foreground mask.

Returns
The output image is a foreground mask, i.e. an 8-bit unsigned 1-channel (binary) matrix (CV_8UC1).
Note
Function textual ID is "org.opencv.video.BackgroundSubtractor"
Parameters
src - input image: Floating point frame is used without scaling and should be in range [0,255].
bsParams - Set of initialization parameters for the Background Subtractor kernel.
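
A minimal graph-construction sketch (an addition, not from the reference): it shows how BackgroundSubtractor might be wired into a cv::GComputation. Default-constructed BackgroundSubtractorParams are assumed; the kernel package needed at compile time (e.g. a CPU video kernel package) is an assumption and may differ between builds.

    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    cv::GComputation makeBackgroundSubtractorGraph()
    {
        cv::GMat in;                                           // input video frame
        cv::gapi::video::BackgroundSubtractorParams bsParams;  // defaults; tune history/threshold/etc. as needed
        cv::GMat fgMask = cv::gapi::BackgroundSubtractor(in, bsParams); // CV_8UC1 foreground mask
        return cv::GComputation(cv::GIn(in), cv::GOut(fgMask));
    }

The graph is then run per frame with GComputation::apply, passing each frame via cv::gin and collecting the mask via cv::gout; a kernel package that implements the video operations has to be supplied through cv::compile_args.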

◆ buildOpticalFlowPyramid()

std::tuple< GArray< GMat >, GScalar > cv::gapi::buildOpticalFlowPyramid ( const GMat &  img,
const Size &  winSize,
const GScalar &  maxLevel,
bool  withDerivatives = true,
int  pyrBorder = BORDER_REFLECT_101,
int  derivBorder = BORDER_CONSTANT,
bool  tryReuseInputImage = true 
)

#include <opencv2/gapi/video.hpp>

Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

Note
Function textual ID is "org.opencv.video.buildOpticalFlowPyramid"
Parameters
img - 8-bit input image.
winSize - window size of the optical flow algorithm. Must be no less than the winSize argument of calcOpticalFlowPyrLK. It is needed to calculate the required padding for the pyramid levels.
maxLevel - 0-based maximal pyramid level number.
withDerivatives - set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.
pyrBorder - the border mode for pyramid layers.
derivBorder - the border mode for gradients.
tryReuseInputImage - put ROI of the input image into the pyramid if possible. You can pass false to force data copying.
Returns
  • output pyramid.
  • number of levels in constructed pyramid. Can be less than maxLevel.
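
A short sketch (an addition, not from the reference) of a graph that only builds the pyramid; the winSize and maxLevel values here are arbitrary examples.

    #include <tuple>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    cv::GComputation makePyramidGraph()
    {
        cv::GMat img;                  // 8-bit input image
        cv::GArray<cv::GMat> pyr;      // output pyramid levels
        cv::GScalar nLevels;           // actual number of levels built (may be < maxLevel)
        std::tie(pyr, nLevels) =
            cv::gapi::buildOpticalFlowPyramid(img, cv::Size(21, 21), 3);
        return cv::GComputation(cv::GIn(img), cv::GOut(pyr, nLevels));
    }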

◆ calcOpticalFlowPyrLK() [1/2]

std::tuple< GArray< Point2f >, GArray< uchar >, GArray< float > > cv::gapi::calcOpticalFlowPyrLK ( const GArray< GMat > &  prevPyr,
const GArray< GMat > &  nextPyr,
const GArray< Point2f > &  prevPts,
const GArray< Point2f > &  predPts,
const Size &  winSize = Size(21, 21),
const GScalar &  maxLevel = 3,
const TermCriteria &  criteria = TermCriteria(TermCriteria::COUNT|TermCriteria::EPS, 30, 0.01),
int  flags = 0,
double  minEigThresh = 1e-4 
)

#include <opencv2/gapi/video.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Note
Function textual ID is "org.opencv.video.calcOpticalFlowPyrLKForPyr"
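
A sketch (an addition, not from the reference) of how this pyramid overload might be chained after buildOpticalFlowPyramid inside one graph; the window size and level count are arbitrary example values.

    #include <tuple>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    cv::GComputation makePyramidLKGraph()
    {
        cv::GMat prevImg, nextImg;
        cv::GArray<cv::Point2f> prevPts, predPts;

        // Build a pyramid for each frame, then track on the pyramids.
        const cv::Size win(21, 21);
        cv::GArray<cv::GMat> prevPyr, nextPyr;
        cv::GScalar lvlPrev, lvlNext;
        std::tie(prevPyr, lvlPrev) = cv::gapi::buildOpticalFlowPyramid(prevImg, win, 3);
        std::tie(nextPyr, lvlNext) = cv::gapi::buildOpticalFlowPyramid(nextImg, win, 3);

        cv::GArray<cv::Point2f> nextPts;
        cv::GArray<uchar>       status;
        cv::GArray<float>       err;
        std::tie(nextPts, status, err) =
            cv::gapi::calcOpticalFlowPyrLK(prevPyr, nextPyr, prevPts, predPts, win, lvlPrev);

        return cv::GComputation(cv::GIn(prevImg, nextImg, prevPts, predPts),
                                cv::GOut(nextPts, status, err));
    }

Passing the GScalar returned by buildOpticalFlowPyramid as maxLevel (possible because maxLevel is a GScalar in this overload) keeps the two operations consistent about the number of levels actually built.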

◆ calcOpticalFlowPyrLK() [2/2]

std::tuple< GArray< Point2f >, GArray< uchar >, GArray< float > > cv::gapi::calcOpticalFlowPyrLK ( const GMat &  prevImg,
const GMat &  nextImg,
const GArray< Point2f > &  prevPts,
const GArray< Point2f > &  predPts,
const Size &  winSize = Size(21, 21),
const GScalar &  maxLevel = 3,
const TermCriteria &  criteria = TermCriteria(TermCriteria::COUNT|TermCriteria::EPS, 30, 0.01),
int  flags = 0,
double  minEigThresh = 1e-4 
)

#include <opencv2/gapi/video.hpp>

Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

See [Bouguet00].

Note
Function textual ID is "org.opencv.video.calcOpticalFlowPyrLK"
Parameters
prevImg - first 8-bit input image (GMat) or pyramid (GArray<GMat>) constructed by buildOpticalFlowPyramid.
nextImg - second input image (GMat) or pyramid (GArray<GMat>) of the same size and the same type as prevImg.
prevPts - GArray of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.
predPts - GArray of initial 2D points for the flow search; makes sense only when the OPTFLOW_USE_INITIAL_FLOW flag is passed; in that case the vector must have the same size as the input.
winSize - size of the search window at each pyramid level.
maxLevel - 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm uses as many levels as the pyramids have, but no more than maxLevel.
criteria - parameter specifying the termination criteria of the iterative search algorithm (the search stops after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).
flags - operation flags:
  • OPTFLOW_USE_INITIAL_FLOW uses initial estimations stored in predPts; if the flag is not set, then prevPts is copied to the output and is considered the initial estimate.
  • OPTFLOW_LK_GET_MIN_EIGENVALS uses minimum eigenvalues as an error measure (see the minEigThresh description); if the flag is not set, then the L1 distance between patches around the original and a moved point, divided by the number of pixels in the window, is used as an error measure.
minEigThresh - the algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in the window; if this value is less than minEigThresh, the corresponding feature is filtered out and its flow is not processed, which allows removing bad points and getting a performance boost.
Returns
  • GArray of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of the input features in the second image.
  • status GArray (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding feature has been found; otherwise, it is set to 0.
  • GArray of errors (floats); each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).
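
A usage sketch (an addition, not from the reference) showing the GMat overload together with GComputation::apply. Depending on the build, a kernel package implementing the video operations may need to be supplied via cv::compile_args; that part is omitted here.

    #include <tuple>
    #include <vector>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    void trackPoints(const cv::Mat &prevFrame, const cv::Mat &nextFrame,
                     const std::vector<cv::Point2f> &pts)
    {
        cv::GMat prevImg, nextImg;
        cv::GArray<cv::Point2f> prevPts, predPts;

        cv::GArray<cv::Point2f> nextPts;
        cv::GArray<uchar>       status;
        cv::GArray<float>       err;
        std::tie(nextPts, status, err) =
            cv::gapi::calcOpticalFlowPyrLK(prevImg, nextImg, prevPts, predPts,
                                           cv::Size(21, 21), 3);

        cv::GComputation lk(cv::GIn(prevImg, nextImg, prevPts, predPts),
                            cv::GOut(nextPts, status, err));

        std::vector<cv::Point2f> outPts;
        std::vector<uchar>       outStatus;
        std::vector<float>       outErr;
        // predPts is unused here (OPTFLOW_USE_INITIAL_FLOW is not set), so an
        // empty vector is passed for that graph input.
        lk.apply(cv::gin(prevFrame, nextFrame, pts, std::vector<cv::Point2f>{}),
                 cv::gout(outPts, outStatus, outErr));
        // outPts / outStatus / outErr now hold the new positions, tracking status and errors.
    }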

◆ KalmanFilter() [1/2]

GMat cv::gapi::KalmanFilter ( const GMat &  measurement,
const GOpaque< bool > &  haveMeasurement,
const cv::gapi::KalmanParams &  kfParams 
)

#include <opencv2/gapi/video.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It covers the case of the standard Kalman filter algorithm when there is no control in the dynamic system: the controlMatrix is empty and the control vector is absent.

Note
Function textual ID is "org.opencv.video.KalmanFilterNoControl"
Parameters
measurement - input matrix: 32-bit or 64-bit float 1-channel matrix containing measurements.
haveMeasurement - dynamic input flag that indicates whether we get measurements at a particular iteration.
kfParams - Set of initialization parameters for the Kalman filter kernel.
Returns
The output matrix is the predicted or corrected state. It can be a 32-bit or 64-bit float 1-channel matrix (CV_32FC1 or CV_64FC1).
See also
cv::KalmanFilter
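
A sketch (an addition, not from the reference) of the no-control variant with a hypothetical 1D constant-velocity model (2D state: position and velocity, 1D measurement). The KalmanParams field names used below (state, errorCov, transitionMatrix, measurementMatrix, processNoiseCov, measurementNoiseCov) are assumed to mirror cv::KalmanFilter; check opencv2/gapi/video.hpp for the exact layout.

    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    cv::GComputation makeKalmanGraph()
    {
        cv::gapi::KalmanParams kp;                                        // all matrices CV_32FC1
        kp.state               = cv::Mat::zeros(2, 1, CV_32FC1);          // initial [position; velocity]
        kp.errorCov            = cv::Mat::eye(2, 2, CV_32FC1);            // P
        kp.transitionMatrix    = (cv::Mat_<float>(2, 2) << 1, 1, 0, 1);   // A
        kp.measurementMatrix   = (cv::Mat_<float>(1, 2) << 1, 0);         // H (position is measured)
        kp.processNoiseCov     = cv::Mat::eye(2, 2, CV_32FC1) * 1e-5f;    // Q
        kp.measurementNoiseCov = cv::Mat::eye(1, 1, CV_32FC1) * 1e-1f;    // R

        cv::GMat          measurement;     // 1x1 CV_32FC1 per iteration
        cv::GOpaque<bool> haveMeasurement; // false -> predict only
        cv::GMat state = cv::gapi::KalmanFilter(measurement, haveMeasurement, kp);
        return cv::GComputation(cv::GIn(measurement, haveMeasurement), cv::GOut(state));
    }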

◆ KalmanFilter() [2/2]

GMat cv::gapi::KalmanFilter ( const GMat &  measurement,
const GOpaque< bool > &  haveMeasurement,
const GMat &  control,
const cv::gapi::KalmanParams &  kfParams 
)

#include <opencv2/gapi/video.hpp>

Standard Kalman filter algorithm http://en.wikipedia.org/wiki/Kalman_filter.

Note
Function textual ID is "org.opencv.video.KalmanFilter"
Parameters
measurement - input matrix: 32-bit or 64-bit float 1-channel matrix containing measurements.
haveMeasurement - dynamic input flag that indicates whether we get measurements at a particular iteration.
control - input matrix: 32-bit or 64-bit float 1-channel matrix containing control data for the dynamic system.
kfParams - Set of initialization parameters for the Kalman filter kernel.
Returns
The output matrix is the predicted or corrected state. It can be a 32-bit or 64-bit float 1-channel matrix (CV_32FC1 or CV_64FC1).

If the measurement matrix is given (haveMeasurement == true), the corrected state is returned, which corresponds to the pipeline cv::KalmanFilter::predict(control) -> cv::KalmanFilter::correct(measurement). Otherwise, the predicted state is returned, which corresponds to the call cv::KalmanFilter::predict(control).

See also
cv::KalmanFilter
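
A sketch (an addition, not from the reference) of the controlled variant, reusing the hypothetical constant-velocity model from the previous sketch; the controlMatrix field name is assumed to mirror cv::KalmanFilter.

    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/video.hpp>

    cv::GComputation makeControlledKalmanGraph(cv::gapi::KalmanParams kp)
    {
        // kp is assumed to be filled as in the previous sketch; add the control matrix (B).
        kp.controlMatrix = (cv::Mat_<float>(2, 1) << 0.5f, 1.f);  // e.g. a single acceleration input

        cv::GMat          measurement;      // z(k), CV_32FC1
        cv::GMat          control;          // u(k), CV_32FC1
        cv::GOpaque<bool> haveMeasurement;  // true -> predict + correct, false -> predict only
        cv::GMat state = cv::gapi::KalmanFilter(measurement, haveMeasurement, control, kp);
        return cv::GComputation(cv::GIn(measurement, haveMeasurement, control), cv::GOut(state));
    }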