Using DepthAI Hardware / OAK depth sensors

OAK-D and OAK-D-Lite cameras
Depth sensors compatible with the Luxonis DepthAI library are supported through the OpenCV Graph API (G-API) module. RGB images and other output formats can be retrieved through the familiar G-API interface.
In order to use a DepthAI sensor with OpenCV, do the following preliminary steps:
- Install the Luxonis DepthAI library depthai-core.
- Configure OpenCV with DepthAI library support by setting the WITH_OAK flag in CMake. If the DepthAI library is found in the install folders, OpenCV will be built with depthai-core (check the WITH_OAK status in the CMake log).
- Build OpenCV.
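As a sketch, the configure-and-build steps above might look like the following on Linux. The depthai-core install prefix is an assumption here; adjust it to wherever you installed the library:

```shell
# Hint CMake at the depthai-core installation (path is illustrative)
# so the DepthAI library is found and WITH_OAK takes effect.
cmake -S opencv -B build \
      -DWITH_OAK=ON \
      -DCMAKE_PREFIX_PATH=/opt/depthai-core
cmake --build build -j
```

After the configure step, look for the WITH_OAK status line in the CMake output to confirm that depthai-core was actually found.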
Source code
You can find the source code showing how to process heterogeneous graphs in modules/gapi/samples/oak_basic_infer.cpp
of the OpenCV source code library.
C++
#include <algorithm>
#include <iostream>
#include <sstream>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/parsers.hpp>
#include <opencv2/gapi/render.hpp>
#include <opencv2/gapi/cpu/gcpukernel.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/gapi/oak/oak.hpp>
#include <opencv2/gapi/oak/infer.hpp>
const std::string keys =
"{ h help | | Print this help message }"
"{ detector | | Path to compiled .blob face detector model }"
"{ duration | 100 | Number of frames to pull from camera and run inference on }";
namespace custom {
G_API_NET(FaceDetector, <cv::GMat(cv::GMat)>, "sample.custom.face-detector");
using GDetections = cv::GArray<cv::Rect>;
using GSize = cv::GOpaque<cv::Size>;
using GPrims = cv::GArray<cv::gapi::wip::draw::Prim>;
G_API_OP(BBoxes, <GPrims(GDetections)>, "sample.custom.b-boxes") {
static cv::GArrayDesc outMeta(const cv::GArrayDesc &) {
return cv::empty_array_desc();
}
};
GAPI_OCV_KERNEL(OCVBBoxes, BBoxes) {
// This kernel converts the rectangles into G-API's
// rendering primitives
static void run(const std::vector<cv::Rect> &in_face_rcs,
std::vector<cv::gapi::wip::draw::Prim> &out_prims) {
out_prims.clear();
const auto cvt = [](const cv::Rect &rc, const cv::Scalar &clr) {
return cv::gapi::wip::draw::Rect(rc, clr, 2);
};
for (auto &&rc : in_face_rcs) {
out_prims.emplace_back(cvt(rc, CV_RGB(0,255,0))); // green
}
}
};
} // namespace custom
int main(int argc, char *argv[]) {
cv::CommandLineParser cmd(argc, argv, keys);
if (cmd.has("help")) {
cmd.printMessage();
return 0;
}
const auto det_name = cmd.get<std::string>("detector");
const auto duration = cmd.get<int>("duration");
if (det_name.empty()) {
std::cerr << "FATAL: path to detection model is not provided for the sample."
<< " Please specify it with the --detector option."
<< std::endl;
return 1;
}
// Prepare G-API kernels and networks packages:
auto detector = cv::gapi::oak::Params<custom::FaceDetector>(det_name);
auto networks = cv::gapi::networks(detector);
auto kernels = cv::gapi::combine(cv::gapi::kernels<custom::OCVBBoxes>(),
cv::gapi::oak::kernels());
auto args = cv::compile_args(kernels, networks);
// Initialize graph structure
cv::GFrame in;
cv::GFrame copy = cv::gapi::oak::copy(in); // NV12 transferred to host + copy
cv::GOpaque<cv::Size> sz = cv::gapi::streaming::size(copy);
// infer is not affected by the actual copy here
cv::GMat blob = cv::gapi::infer<custom::FaceDetector>(copy);
// FIXME: OAK infer detects faces slightly out of frame bounds
cv::GArray<cv::Rect> rcs = cv::gapi::parseSSD(blob, sz, 0.5f, true, false);
auto rendered = cv::gapi::wip::draw::renderFrame(copy, custom::BBoxes::on(rcs));
// on-the-fly conversion NV12->BGR
cv::GMat out = cv::gapi::streaming::BGR(rendered);
auto pipeline = cv::GComputation(cv::GIn(in), cv::GOut(out, rcs))
.compileStreaming(std::move(args));
// Graph execution
pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::oak::ColorCamera>());
pipeline.start();
cv::Mat out_mat;
std::vector<cv::Rect> out_dets;
int frames = 0;
while (pipeline.pull(cv::gout(out_mat, out_dets))) {
std::string name = "oak_infer_frame_" + std::to_string(frames) + ".png";
cv::imwrite(name, out_mat);
if (!out_dets.empty()) {
std::cout << "Got " << out_dets.size() << " detections on frame #" << frames << std::endl;
}
++frames;
if (frames == duration) {
pipeline.stop();
break;
}
}
std::cout << "Pipeline finished. Processed " << frames << " frames" << std::endl;
return 0;
}
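Once built, the sample can be run against a compiled face-detection blob with an OAK device attached. The binary and model names below are illustrative, not taken from the tutorial: OpenCV sample binaries typically follow the example_<module>_<name> pattern, and the .blob file must be compiled for the device's VPU with the DepthAI/OpenVINO tooling:

```shell
# Hypothetical invocation; requires an OAK camera connected over USB.
# Writes one oak_infer_frame_<N>.png per pulled frame.
./example_gapi_oak_basic_infer \
    --detector=face-detection-retail-0004.blob \
    --duration=50
```

The --duration option bounds how many frames are pulled before the pipeline is stopped; per the parser keys above, it defaults to 100.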