Convert GstVideoFrame to Opencv Mat and vice versa - opencv

I'm extracting a frame from a GstVideoFrame into an OpenCV Mat and then processing the frame in the Mat.
static GstFlowReturn
custom_transform_frame_ip (GstVideoFilter *filter, GstVideoFrame *frame)
{
  /*** convert GstVideoFrame to OpenCV format ***/
  GstMapInfo map_info;
  Mat cvframe;
  if (!gst_buffer_map (frame->buffer, &map_info, GST_MAP_READ)) {
    cout << "gst buffer map failed" << endl;
    return GST_FLOW_ERROR;
  }
  cout << "frame width: " << frame->info.width << " height: " << frame->info.height << endl;
  //cvframe = Mat::zeros(/*1080*/frame->info.width, /*1920*/frame->info.height, CV_8UC3);
  cvframe = cv::Mat(/*1080*/frame->info.width, /*1920*/frame->info.height, CV_8UC3, (char *) map_info.data, cv::Mat::AUTO_STEP);

  /*** drawing custom polygon ROI on the Mat frame ***/
  /***************************************************/

  /****** convert Mat frame back to GstVideoFrame ******/
  /*memcpy (map_info.data, cvframe.data, map_info.size);
  gst_buffer_map (frame->buffer, &map_info, GST_MAP_WRITE);*/
  /**************************************************/

  gst_buffer_unmap (frame->buffer, &map_info);
  return GST_FLOW_OK;
}
Now I want to put this Mat back into the GstVideoFrame so that the rest of the pipeline is not affected.
I've tried to memcpy the Mat's frame.data into the GstVideoFrame's GstVideoInfo data field, but it isn't working out, and I'm sure this is the wrong approach.
I want to know two things:
1. Whether the first part is correct, i.e. extracting the frame from the GstVideoFrame into an OpenCV Mat.
2. How to put this Mat back into the same GstVideoFrame it was extracted from, or whether I should create another GstVideoFrame, copy the processed Mat data into it, and pass that downstream.
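For reference, a minimal sketch of one way to do the in-place variant. It assumes a packed 3-byte format such as BGR/RGB and relies on the GstVideoFilter base class having already mapped the GstVideoFrame for read/write before calling transform_frame_ip; the ROI polygon is just a made-up example.

static GstFlowReturn
custom_transform_frame_ip (GstVideoFilter *filter, GstVideoFrame *frame)
{
  /* The GstVideoFilter base class has already mapped the frame for
   * read/write, so the pixel data can be wrapped without gst_buffer_map(). */
  gint width  = GST_VIDEO_FRAME_WIDTH (frame);
  gint height = GST_VIDEO_FRAME_HEIGHT (frame);
  gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);
  guint8 *pixels = (guint8 *) GST_VIDEO_FRAME_PLANE_DATA (frame, 0);

  /* rows = height, cols = width; the Mat borrows the buffer memory,
   * so no copy is made. Assumes a packed 3-byte format such as BGR. */
  cv::Mat cvframe (height, width, CV_8UC3, pixels, stride);

  /* Hypothetical ROI polygon, drawn directly into the GStreamer buffer;
   * because cvframe wraps the frame data, nothing has to be copied back. */
  std::vector<std::vector<cv::Point>> roi = {
    { {100, 100}, {400, 100}, {400, 300}, {100, 300} }
  };
  cv::polylines (cvframe, roi, true, cv::Scalar (0, 255, 0), 2);

  return GST_FLOW_OK;
}

If the element negotiates a planar format such as NV12 or I420 instead of BGR, the Mat type and the plane handling would have to change accordingly.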

Related

Image perspective correction and measurement with a homography

I want to calibrate a camera in order to make real world measurements of distance between features in an image. I'm using the OpenCV demo images for this example. I'd like to know if what I'm doing is valid and/or if there is another/better way to go about it. I could use a checkerboard of known pitch to calibrate my camera. But in this example the pose of the camera is as shown in the image below. So I can't just calculate px/mm to make my real world distance measurements, I need to correct for the pose first.
I find the chessboard corners and calibrate the camera with a call to OpenCV's calibrateCamera which gives the intrinsics, extrinsics and distortion parameters.
Now I want to be able to make measurements with the camera in this same pose. As an example I'll try to measure the distance between two corners in the image above. I want to do a perspective correction on the image so I can effectively get a bird's eye view. I do this using the method described here. The idea is that the camera pose that gives a bird's eye view of this image is reached by a rotation about the z-axis. Below is the rotation matrix.
Then, following this OpenCV homography example, I calculate the homography between the original view and my desired bird's eye view. If I do this on the image above I get an image that looks good, see below. Now that I have my perspective-rectified image I find the corners again with findChessboardCorners and calculate an average pixel distance between corners of ~36 pixels. If the distance between corners in real-world units is 25 mm, my px/mm scaling is 36/25 = 1.44 px/mm. For any image taken with the camera in this pose I can then rectify it and use this pixel scaling to measure the distance between objects in the image.
Is there a better way to allow for real-world distance measurements here? Is it possible to do this perspective correction for pixels only? For example, if I find the pixel locations of two corners in the original image, can I apply the rectification to the pixel coordinates only, rather than to the entire image, which can be computationally expensive? Just trying to deepen my understanding. Thanks.
Some of my code
void MyCalibrateCamera(float squareSize, const std::string& imgPath, const Size& patternSize)
{
Mat img = imread(samples::findFile(imgPath));
Mat img_corners = img.clone(), img_pose = img.clone();
//! [find-chessboard-corners]
vector<Point2f> corners;
bool found = findChessboardCorners(img, patternSize, corners);
//! [find-chessboard-corners]
if (!found)
{
cout << "Cannot find chessboard corners." << endl;
return;
}
drawChessboardCorners(img_corners, patternSize, corners, found);
imshow("Chessboard corners detection", img_corners);
waitKey();
//! [compute-object-points]
vector<Point3f> objectPoints;
calcChessboardCorners(patternSize, squareSize, objectPoints);
vector<Point2f> objectPointsPlanar;
vector <vector <Point3f>> objectPointsArray;
vector <vector <Point2f>> imagePointsArray;
imagePointsArray.push_back(corners);
objectPointsArray.push_back(objectPoints);
Mat intrinsics;
Mat distortion;
vector <Mat> rotation;
vector <Mat> translation;
double RMSError = calibrateCamera(
objectPointsArray,
imagePointsArray,
img.size(),
intrinsics,
distortion,
rotation,
translation,
CALIB_ZERO_TANGENT_DIST |
CALIB_FIX_K3 | CALIB_FIX_K4 | CALIB_FIX_K5 |
CALIB_FIX_ASPECT_RATIO);
cout << "intrinsics: " << intrinsics << endl;
cout << "rotation: " << rotation.at(0) << endl;
cout << "translation: " << translation.at(0) << endl;
cout << "distortion: " << distortion << endl;
drawFrameAxes(img_pose, intrinsics, distortion, rotation.at(0), translation.at(0), 2 * squareSize);
imshow("FrameAxes", img_pose);
waitKey();
//todo: is this a valid px/mm measure?
float px_to_mm = intrinsics.at<double>(0, 0) / (translation.at(0).at<double>(2,0) * 1000);
//undistort the image
Mat imgUndistorted, map1, map2;
initUndistortRectifyMap(
intrinsics,
distortion,
Mat(),
getOptimalNewCameraMatrix(
intrinsics,
distortion,
img.size(),
1,
img.size(),
0),
img.size(),
CV_16SC2,
map1,
map2);
remap(
img,
imgUndistorted,
map1,
map2,
INTER_LINEAR);
imshow("OrgImg", img);
waitKey();
imshow("UndistortedImg", imgUndistorted);
waitKey();
Mat img_bird_eye_view = img.clone();
//rectify
// https://docs.opencv.org/3.4.0/d9/dab/tutorial_homography.html#tutorial_homography_Demo3
// https://stackoverflow.com/questions/48576087/birds-eye-view-perspective-transformation-from-camera-calibration-opencv-python
//Get to a birds eye view, or -90 degrees z rotation
Mat rvec = rotation.at(0);
Mat tvec = translation.at(0);
//-90 degrees z. Required depends on the frame axes.
Mat R_desired = (Mat_<double>(3, 3) <<
0, 1, 0,
-1, 0, 0,
0, 0, 1);
//Get 3x3 rotation matrix from rotation vector
Mat R;
Rodrigues(rvec, R);
//compute the normal to the camera frame
Mat normal = (Mat_<double>(3, 1) << 0, 0, 1);
Mat normal1 = R * normal;
//compute d, distance . dot product between the normal and a point on the plane.
Mat origin(3, 1, CV_64F, Scalar(0));
Mat origin1 = R * origin + tvec;
double d_inv1 = 1.0 / normal1.dot(origin1);
//compute the homography to go from the camera view to the desired view
Mat R_1to2, tvec_1to2;
Mat tvec_desired = tvec.clone();
//get the displacement between our views
computeC2MC1(R, tvec, R_desired, tvec_desired, R_1to2, tvec_1to2);
//now calculate the euclidean homography
Mat H = R_1to2 + d_inv1 * tvec_1to2 * normal1.t();
//now the projective homography
H = intrinsics * H * intrinsics.inv();
H = H / H.at<double>(2, 2);
std::cout << "H:\n" << H << std::endl;
Mat imgToWarp = imgUndistorted.clone();
warpPerspective(imgToWarp, img_bird_eye_view, H, img.size());
Mat compare;
hconcat(imgToWarp, img_bird_eye_view, compare);
imshow("Bird eye view", compare);
waitKey();
...
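On the "pixels only" part of the question: instead of warping the whole image, individual pixel coordinates can be pushed through the same homography with cv::perspectiveTransform. A rough sketch, assuming H and corners from the code above and the ~1.44 px/mm scale mentioned earlier:

// Map two detected corner locations through the homography H rather than
// rectifying the entire image; 1.44 px/mm is the scale quoted in the text.
std::vector<cv::Point2f> srcPts = { corners[0], corners[1] };
std::vector<cv::Point2f> dstPts;
cv::perspectiveTransform(srcPts, dstPts, H);

double distPx = cv::norm(dstPts[0] - dstPts[1]);   // distance in rectified pixels
double distMm = distPx / 1.44;                     // convert with the px/mm scale
std::cout << "distance: " << distMm << " mm" << std::endl;

Strictly speaking, since H was computed for the undistorted view, the corner coordinates should be undistorted first (e.g. with cv::undistortPoints using the same intrinsics) before applying the transform.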

(opencv) imread with CV_LOAD_IMAGE_GRAYSCALE yields a 4-channel Mat

The following code reads an image from a file into a cv::Mat object.
#include <string>
#include <opencv2/opencv.hpp>
cv::Mat load_image(std::string img_path)
{
cv::Mat img = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
cv::Scalar intensity = img.at<uchar>(0, 0);
std::cout << intensity << std::endl;
return img;
}
I would expect the cv::Mat to have only one channel (namely, the intensity of the image) but it has 4.
$ ./test_load_image
[164, 0, 0, 0]
I also tried converting the image with
cv::Mat gray(img.size(), CV_8UC1);
img.convertTo(gray, CV_8UC1);
but the gray matrix also ends up with 4 channels.
I'd like to know if it's possible to have a single channel cv::Mat. Intuitively, that's what I would expect to have when dealing with a grayscale (thus, single channel) image.
The matrix is single channel. You're just reading the values in the wrong way.
Scalar is a struct with 4 values. Constructing a Scalar with a single value will result in a Scalar with the first value set, and the remaining at zero.
In your case, only the first value is meaningful; the zeros are just the Scalar defaults.
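For illustration, constructing a Scalar from a single value reproduces exactly the output shown above:

cv::Scalar intensity(164);            // remaining three entries default to 0
std::cout << intensity << std::endl;  // prints [164, 0, 0, 0]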
However, you don't need to use a Scalar:
uchar intensity = img.at<uchar>(0, 0);
std::cout << int(intensity) << std::endl; // Print the value, not the ASCII character

Opencv imshow Y from YUV

I'm trying to extract and display the Y channel from a YUV-converted image. My code is as follows:
Mat src, src_resized, src_gray;
src = imread("11.jpg", 1);
resize(src, src_resized, cvSize(400, 320));
cvtColor(src_resized, src_resized, cv::COLOR_BGR2RGB);
/*
I've tried both with and without the upper conversion
(mentioned here as bug
http://stackoverflow.com/questions/7954416/converting-yuv-into-bgr-or-rgb-in-opencv
in an opencv 2.4.* version - mine is 2.4.10 )
*/
cvtColor(src_resized, src_gray, CV_RGB2YUV); //YCrCb
vector<Mat> yuv_planes(3);
split(src_gray,yuv_planes);
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
// same result with g = Mat::zeros(Size(src_gray.cols, src_gray.rows), CV_8UC1);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
merge(channels, fin_img);
imshow("Y ", fin_img);
waitKey(0);
return 0;
As a result I was expecting a gray image showing the luminance. Instead I get a blue, green or red channel image depending on the position (first/second/third) of
channels.push_back(yuv_planes[0]);
as shown here:
What am I missing? (I plan to use the luminance to sum rows/columns and later extract the license plate from the data obtained.)
The problem was that the luminance was written to only one channel instead of filling all channels with it.
If anyone else hits the same problem just change
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
to the following, filling all channels with the desired plane:
Mat fin_img;
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
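Put together, the corrected display path would look roughly like this (merging the Y plane into every channel, as above):

// Fill all three channels with the Y plane so imshow renders it as grey.
Mat fin_img;
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
merge(channels, fin_img);
imshow("Y", fin_img);
waitKey(0);

Note that a single-channel 8-bit Mat can also be passed directly to imshow (e.g. imshow("Y", yuv_planes[0])), which already displays as grayscale.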

Accuracy tuning for Haar-Cascade Classifier

I'm using Haar-Cascade Classifier in order to detect faces.
I'm currently facing some problems with the following function:
void ImageManager::detectAndDisplay(Mat frame, CascadeClassifier face_cascade){
string window_name = "Capture - Face detection";
string filename;
std::vector<Rect> faces;
std::vector<Rect> eyes;
Mat frame_gray;
Mat crop;
Mat res;
Mat gray;
string text;
stringstream sstm;
cvtColor(frame, frame_gray, COLOR_BGR2GRAY);
equalizeHist(frame_gray, frame_gray);
// Detect faces
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
// Set Region of Interest
cv::Rect roi_b;
cv::Rect roi_c;
size_t ic = 0; // ic is index of current element
for (ic = 0; ic < faces.size(); ic++) // Iterate through all current elements (detected faces)
{
roi_c.x = faces[ic].x;
roi_c.y = faces[ic].y;
roi_c.width = (faces[ic].width);
roi_c.height = (faces[ic].height);
crop = frame_gray(roi_c);
faces_img.push_back(crop);
rectangle(frame, Point(roi_c.x, roi_c.y), Point(roi_c.x + roi_c.width, roi_c.y + roi_c.height), Scalar(0,0,255), 2);
}
imshow("test", frame);
waitKey(0);
cout << faces_img.size();
}
The frame is the photo I'm trying to scan.
The face_cascade is the classifier.
Internally, the CascadeClassifier makes several detections and groups them.
minNeighbors (in the detectMultiScale call) is the number of detections in roughly the same place necessary to count as a valid detection, so increase it from your current 2 to maybe 5 or so, until you start to miss real positives.
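Applied to the call in the question, that would be, for example:

// minNeighbors raised from 2 to 5; the other arguments are unchanged.
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 5, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));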
To add to berak's answer: if you're not doing this on a single image only, it's not just about tuning the detectMultiScale parameters; you'll also face performance problems that make the application unusable.
Performance issues come down to wasted computation, and the only way to find out what a given calculation costs is to test it.
Unless you are trying to get the best results under different lighting conditions (since this is visually dependent information), you should downscale the input image before passing it to detectMultiScale. Once detection is completed, rescale back to the previous size (this can be done by scaling the rectangles returned by detectMultiScale).
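A minimal sketch of that idea; the 0.5 scale factor is just an example value, not something prescribed by the answer:

// Detect on a downscaled copy, then map the rectangles back to the
// original resolution.
const double scale = 0.5;                      // example factor
cv::Mat small_gray;
cv::resize(frame_gray, small_gray, cv::Size(), scale, scale, cv::INTER_LINEAR);

std::vector<cv::Rect> faces;
face_cascade.detectMultiScale(small_gray, faces, 1.1, 5,
                              0 | cv::CASCADE_SCALE_IMAGE, cv::Size(30, 30));

for (cv::Rect &r : faces) {
    r.x      = cvRound(r.x / scale);
    r.y      = cvRound(r.y / scale);
    r.width  = cvRound(r.width / scale);
    r.height = cvRound(r.height / scale);
}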

3D matrix multiplication in opencv for RGB color mixing

I'm trying to perform an RGB color mixing operation in OpenCV. I have the image contained in an MxNx3 Mat and I would like to multiply it by a 3x3 matrix. In MATLAB I do the following:
* flatten the image from MxNx3 to MNx3
* multiply the MNx3 matrix by the 3x3 color mixing matrix
* reshape back to MxNx3
In OpenCV I would like to do the following:
void RGBMixing::mixColors(Mat &imData, Mat &rgbMixData)
{
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
// Scale the coefficients
multiply(rgbMixMat, 1, rgbMixMat, 256);
Mat temp = imData.reshape(0, 1);
temp = temp.t();
multiply(temp, rgbMixMat, rgbMixData);
}
This compiles but generates an exception:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file C:/slave/WinInstallerMegaPack/src/opencv/modules/core/src/arithm.cpp, line 1253
terminate called after throwing an instance of 'cv::Exception'
what(): C:/slave/WinInstallerMegaPack/src/opencv/modules/core/src/arithm.cpp:1253: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op
This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.
Update 1:
This is code that appears to work:
void RGBMixing::mixColors(Mat &imData, Mat&rgbMixData)
{
Size tempSize;
uint32_t channels;
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
Mat flatImage = imData.reshape(1, 3);
tempSize = flatImage.size();
channels = flatImage.channels();
cout << "temp channels: " << channels << " Size: " << tempSize.width << " x " << tempSize.height << endl;
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage.t() * rgbMixMat;
mixedImage = mixedImage.t();
rgbMixData = mixedImage.reshape(3, 1944);
channels = rgbMixData.channels();
tempSize = rgbMixData.size();
cout << "temp channels: " << channels << " Size: " << tempSize.width << " x " << tempSize.height << endl;
}
But the resulting image is distorted. If I skip the multiplication of the two matrices and just assign
mixedImage = flatFloatImage;
the resulting image looks fine (just not color mixed). So I must be doing something wrong, but I'm getting close.
I see a couple of things here:
For scaling the coefficients, OpenCV supports multiplication by scalar so instead of multiply(rgbMixMat, 1, rgbMixMat, 256); you should do directly rgbMixMat = 256 * rgbMixMat;.
If that is all your code, you don't properly initialize or assign values to imData, so the line Mat temp = imData.reshape(0, 1); is probably going to crash.
Assuming that imData is MxNx3 (a 3-channel Mat), you want to reshape it into MNx3 (1-channel). According to the documentation, when you write Mat temp = imData.reshape(0, 1); you are saying that you want the number of channels to remain the same and the number of rows to be 1. Instead, it should be:
Mat myData = Mat::ones(100, 100, CV_32FC3); // 100x100x3 matrix
Mat myDataReshaped = myData.reshape(1, myData.rows*myData.cols); // 10000x3 matrix
Why do you take the transpose temp = temp.t(); ?
When you write multiply(temp, rgbMixMat, mixData);, this is the per-element product. You want the matrix product, so you just have to do mixData = myDataReshaped * rgbMixMat; (and then reshape that).
Edit: It crashes if you don't use the transpose, because you do imData.reshape(1, 3); instead of imData.reshape(1, imData.rows*imData.cols);
Try
void RGBMixing::mixColors(Mat &imData, Mat&rgbMixData)
{
Size tempSize;
uint32_t channels;
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
Mat flatImage = imData.reshape(1, imData.rows*imData.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * rgbMixMat;
rgbMixData = mixedImage.reshape(3, imData.rows);
}
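One caveat when using the result (my assumption, not part of the original answer): rgbMixData comes back as CV_32F, so it needs converting before display. A hypothetical usage:

// Mix an 8-bit BGR image and convert the float result back for imshow.
Mat mixed, mixed8u;
RGBMixing mixer;                   // hypothetical instance
mixer.mixColors(img, mixed);       // img: CV_8UC3 input
mixed.convertTo(mixed8u, CV_8U);   // saturate/convert to 8-bit
imshow("mixed", mixed8u);
waitKey();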
