5. Camera display

Code path: /home/pi/Yahboom_Project/1.OpenCV Course/04 Advanced Tutorial/Camera.ipynb

Commonly used OpenCV API functions:

1. cv2.VideoCapture() function:

cap = cv2.VideoCapture(0)

The parameter 0 passed to VideoCapture() refers to the Raspberry Pi's /dev/video0 device.

(Note: you can check the available video devices with the command ls /dev/)

cap = cv2.VideoCapture("…/1.avi")

VideoCapture(".../1.avi") means that if the parameter is the video file path, the video will be opened.

2. cap.set() function

Sets camera parameters. Do not modify them arbitrarily. Common configuration calls:

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # Width

cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)  # Height

cap.set(cv2.CAP_PROP_FPS, 30)             # Frame rate

cap.set(cv2.CAP_PROP_BRIGHTNESS, 1)       # Brightness

cap.set(cv2.CAP_PROP_CONTRAST, 40)        # Contrast

cap.set(cv2.CAP_PROP_SATURATION, 50)      # Saturation

cap.set(cv2.CAP_PROP_HUE, 50)             # Hue

cap.set(cv2.CAP_PROP_EXPOSURE, 50)        # Exposure

 

CV_CAP_PROP_POS_MSEC - current position of the video file in milliseconds, or the video capture timestamp

CV_CAP_PROP_POS_FRAMES - Frame index that will be decompressed/acquired next, starting from 0

CV_CAP_PROP_POS_AVI_RATIO - relative position of the video file (0 - start of video, 1 - end of video)

CV_CAP_PROP_FRAME_WIDTH - Frame width in the video stream

CV_CAP_PROP_FRAME_HEIGHT - Frame height in the video stream

CV_CAP_PROP_FPS - frame rate

CV_CAP_PROP_FOURCC - Four characters representing the codec

CV_CAP_PROP_FRAME_COUNT - Total number of frames in the video file

In Python, cap.get() (cvGetCaptureProperty in the old C API) obtains the specified property of the camera or video file.

The detailed parameters are as follows:

#define CV_CAP_PROP_POS_MSEC 0 //Current position in milliseconds

#define CV_CAP_PROP_POS_FRAMES 1 //Current position in frames

#define CV_CAP_PROP_POS_AVI_RATIO 2 //Relative position of the video, from 0 to 1 (these first three properties relate to playback position and are mainly meaningful for video files)

#define CV_CAP_PROP_FRAME_WIDTH 3 //Frame width

#define CV_CAP_PROP_FRAME_HEIGHT 4 //Frame height

#define CV_CAP_PROP_FPS 5 //Frame rate

#define CV_CAP_PROP_FOURCC 6 //4-character code of the codec

#define CV_CAP_PROP_FRAME_COUNT 7 //Total number of frames in the video

#define CV_CAP_PROP_FORMAT 8 //Video format

#define CV_CAP_PROP_MODE 9 //Backend specific value indicating the current capture mode.

#define CV_CAP_PROP_BRIGHTNESS 10 //Brightness

#define CV_CAP_PROP_CONTRAST 11 //Contrast

#define CV_CAP_PROP_SATURATION 12 //Saturation

#define CV_CAP_PROP_HUE 13 //Hue

#define CV_CAP_PROP_GAIN 14 //Gain

#define CV_CAP_PROP_EXPOSURE 15 //Exposure

#define CV_CAP_PROP_CONVERT_RGB 16 //Boolean flag whether the image should be converted to RGB.

#define CV_CAP_PROP_WHITE_BALANCE 17 //White balance

#define CV_CAP_PROP_RECTIFICATION 18 //Stereo camera rectification flag (note: only supported by the DC1394 v2.x backend currently)
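In Python these properties are accessed through the cv2.CAP_PROP_* constants with cap.get() and cap.set(); whether a set() call actually takes effect depends on the camera driver. A small sketch:

import cv2

cap = cv2.VideoCapture(0)

# Request a resolution; unsupported values may be silently ignored by the driver
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

# Read back the values actually in use
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT),
      cap.get(cv2.CAP_PROP_FPS))

cap.release()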

3. cap.isOpened() function:

Returns True if the camera or video file was opened successfully, and False otherwise.
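A minimal guard before reading frames, for example:

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("Cannot open the camera")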

4. ret, frame = cap.read() function:

cap.read() reads the video frame by frame. ret and frame are the two return values of the cap.read() method. ret is a Boolean value: it is True if the frame was read correctly, and False when the read fails or the end of the video file is reached.

frame is the image of that frame, a three-dimensional matrix (height × width × color channels).
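A single read can be sketched as follows, assuming cap has already been opened:

ret, frame = cap.read()
if ret:
    print(frame.shape)   # e.g. (480, 640, 3): height, width, BGR channels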

5. cv2.waitKey() function:

If the parameter is 1, the function waits 1 ms and then switches to the next frame. If the parameter is too large, for example cv2.waitKey(1000), the long delay will make playback stutter.

If the parameter is 0, for example cv2.waitKey(0), the function waits indefinitely for a key press, so only the current frame is displayed, which is equivalent to pausing the video.

6. cap.release() and destroyAllWindows() functions:

cap.release() releases the camera (or video file), and cv2.destroyAllWindows() closes all image windows.
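Putting points 3 to 6 together, a typical display loop looks roughly like this (cv2.imshow needs a desktop window; inside JupyterLab the widget approach below is used instead):

import cv2

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:                              # stop when the read fails or the video ends
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # wait 1 ms per frame; press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()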

Code implementation process

Since the whole tutorial runs in JupyterLab, we need to understand the widgets it provides. Here we use the image display component.

1. Import the library:

import cv2

import ipywidgets.widgets as widgets

from IPython.display import display

2. Set up the Image component:

image_widget = widgets.Image(format='jpeg', width=600, height=500)

- format: display format.

- width: display width.

- height: display height.

3. Display the Image component:

display(image_widget)

4. Open the camera and read the image:

image = cv2.VideoCapture(0) #Open the camera

ret, frame = image.read() #Read camera data

5. Assign the value to the component:

#Convert the image to jpeg and assign it to the video display component

image_widget.value = bgr8_to_jpeg(frame)
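The steps above can be combined into one continuously updating display. bgr8_to_jpeg is a helper provided by the course code; if it is not available, an equivalent can be sketched with cv2.imencode, as assumed below (stop the loop by interrupting the kernel):

import cv2
import ipywidgets.widgets as widgets
from IPython.display import display

def bgr8_to_jpeg(frame):
    # Assumed equivalent of the course helper: encode a BGR frame as JPEG bytes
    return bytes(cv2.imencode('.jpg', frame)[1])

image_widget = widgets.Image(format='jpeg', width=600, height=500)
display(image_widget)

camera = cv2.VideoCapture(0)
try:
    while True:
        ret, frame = camera.read()
        if not ret:
            break
        image_widget.value = bgr8_to_jpeg(frame)  # push the new frame to the widget
finally:
    camera.release()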

Code content:

For a CSI camera, you need to create a Python file.

Code content: