Goal
- Learn to capture video from a camera and display it.
Capture video from camera
Often, we need to capture a live stream from a camera. In OpenCV.js, we use WebRTC and the HTML canvas element to implement this. Let's capture video from the camera (built-in or USB), convert it into a grayscale video and display it.
To capture a video, you need to add some HTML elements to the web page:
- a <video> to display the video from the camera directly
- a <canvas> to transfer the video to ImageData frame by frame
- another <canvas> to display the video that OpenCV.js outputs
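A minimal markup sketch might look like this (the element IDs videoInput, canvasFrame and canvasOutput are our own choices, reused in the snippets below):

```html
<!-- displays the raw camera stream -->
<video id="videoInput" width="320" height="240"></video>
<!-- hidden canvas used to copy video frames into ImageData -->
<canvas id="canvasFrame" width="320" height="240" style="display: none;"></canvas>
<!-- displays the video that OpenCV.js outputs -->
<canvas id="canvasOutput" width="320" height="240"></canvas>
```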
First, we use the WebRTC API navigator.mediaDevices.getUserMedia() to get the media stream.
- Note
- This function is unnecessary when you capture video from a video file. But note that the HTML <video> element only supports video formats of Ogg (Theora), WebM (VP8/VP9) or MP4 (H.264).
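A minimal sketch of requesting the camera and attaching the stream to the <video> element (assuming the videoInput id from the markup above):

```javascript
let video = document.getElementById("videoInput"); // our <video> element
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then(function(stream) {
        // route the camera stream into the <video> element and start playback
        video.srcObject = stream;
        video.play();
    })
    .catch(function(err) {
        console.log("An error occurred! " + err);
    });
```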
Playing video
Now the browser has the camera stream. Next, we use the CanvasRenderingContext2D.drawImage() method of the Canvas 2D API to draw the video onto a canvas. Finally, we can use the method described in Getting Started with Images to read and display the image in the canvas. To play the video, cv.imshow() should be executed every delay milliseconds; we recommend the setTimeout() method. If the video is 30fps, delay should be (1000/30 - processing_time) milliseconds.
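Putting this together, the loop might look like the sketch below. It assumes the element IDs from the markup above; each RGBA frame is copied into a preallocated cv.Mat, converted to grayscale, and shown on the output canvas:

```javascript
let video = document.getElementById("videoInput");
let canvasFrame = document.getElementById("canvasFrame"); // the hidden canvas
let context = canvasFrame.getContext("2d");
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);

const FPS = 30;
function processVideo() {
    let begin = Date.now();
    // draw the current video frame onto the hidden canvas ...
    context.drawImage(video, 0, 0, video.width, video.height);
    // ... and copy its pixels into the preallocated RGBA Mat
    src.data.set(context.getImageData(0, 0, video.width, video.height).data);
    cv.cvtColor(src, dst, cv.COLOR_RGBA2GRAY);
    cv.imshow("canvasOutput", dst);
    // schedule the next frame, compensating for processing time
    let delay = 1000 / FPS - (Date.now() - begin);
    setTimeout(processVideo, delay);
}
setTimeout(processVideo, 0); // schedule the first frame
```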
OpenCV.js implements cv.VideoCapture(videoSource) using the method above, so you do not need to add the hidden canvas element manually.
- Parameters
  - videoSource: the video id or element.
- Returns
  - a cv.VideoCapture instance.
We use read(image) to get one frame of the video, as the simplified loop below shows. For performance reasons, the image should be constructed once with the cv.CV_8UC4 type and the same size as the video.
- Parameters
  - image: an image with the cv.CV_8UC4 type and the same size as the video.
The video-playing code above can be simplified as below.
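A sketch of the simplified loop, with the hidden canvas handled by cv.VideoCapture internally (same element IDs as above):

```javascript
let video = document.getElementById("videoInput");
let cap = new cv.VideoCapture(video);
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);

const FPS = 30;
function processVideo() {
    let begin = Date.now();
    cap.read(src); // no manual canvas copy needed
    cv.cvtColor(src, dst, cv.COLOR_RGBA2GRAY);
    cv.imshow("canvasOutput", dst);
    let delay = 1000 / FPS - (Date.now() - begin);
    setTimeout(processVideo, delay);
}
setTimeout(processVideo, 0);
```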
- Note
- Remember to delete src and dst when you stop.