OpenCV Vec3b vs uchar different output

I am trying to get a pixel value before doing some calculations. When I run the code below at location (10,10), I get the answers below. I am not sure why the color value is different from the others. When I do the same for location (0,0), I get the same answer from all of them. The image is black and white but has 3 channels. Or is it that for 3 channels I should always use Vec3b?
char imageName[] = "0001.png";
cv::Mat img = cv::imread(imageName);
cout << "Image rows is: " << img.rows << " cols is : " << img.cols << endl;
cout << "Image channels " << img.channels() << endl;
cout << "Image depth " << img.depth() << endl; //CV_8U == 0
cout << (int)img.at<uchar>(10, 10) << endl;
cout << (int)img.ptr(10)[10] << endl;
Scalar intensity = img.at<uchar>(10, 10);
cout << intensity << endl;
Vec3b color = img.at<Vec3b>(10, 10);
cout << color << endl;
output:
Image rows is: 96 cols is : 72
Image channels 3
Image depth 0
103
103
[103, 0, 0, 0]
[164, 164, 164]
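For a 3-channel (CV_8UC3) image the two accessors index the same row of bytes differently, which would explain the mismatch at (10,10) and the agreement at (0,0). Here is a minimal sketch of that, assuming the usual interleaved BGR byte layout (this is an added illustration, not part of the original post):
// at<uchar>(r, c) reads raw byte c of row r, i.e. one channel of pixel c/3,
// while at<Vec3b>(r, c) reads the three bytes starting at byte 3*c.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("0001.png");      // 3 channels, 8 bits per channel
    const uchar* row = img.ptr<uchar>(10);

    uchar rawByte = row[10];                                         // same as img.at<uchar>(10, 10)
    cv::Vec3b pixel(row[10 * 3], row[10 * 3 + 1], row[10 * 3 + 2]);  // same as img.at<Vec3b>(10, 10)

    std::cout << (int)rawByte << std::endl;    // 103 in the output above
    std::cout << pixel << std::endl;           // [164, 164, 164] in the output above
}
At (0,0) both accessors start at byte 0 of row 0, so they agree; for a multi-channel image, at<Vec3b> (or ptr<Vec3b>) is the accessor that returns a whole pixel.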

Related

How to compute the result after two perspective transformations?

I am doing an image stitching project using OpenCV. I have the homography H1 between img1 and img2, and the homography H2 between img2 and img3. Now I need to compute the homography between img1 and img3, but simply multiplying H1*H2 is not working.
Are there any ideas on how to calculate the new homography between img1 and img3?
For me, computing H1 * H2 works well and gives the right results.
Here, H1 = H2_1, since it warps from image 2 to image 1.
H2 = H3_2, since it warps from image 3 to image 2.
H1 * H2 = H3_1, since it warps from image 3 to image 1.
int main()
{
    try {
        cv::Mat H2_1 = (cv::Mat_<double>(3, 3) << 1.0e+00, 0.0e+00, 1.8e+03, 0.0e+00, 1.0e+00, 5.0e+01, 0.0e+00, 0.0e+00, 1.0e+00);
        cv::Mat H3_2 = (cv::Mat_<double>(3, 3) << 0.949534471, -0.00581765975, 66.7917766, -0.0113810490, 0.981450515, -0.672362563, -0.000147974585, 0.00000992770511, 1.0);
        cv::Mat H3_1 = H2_1 * H3_2;

        cv::Point2f p3(500, 500);
        std::vector<cv::Point2f> src3;
        src3.push_back(p3);

        // image3 -> image2
        std::vector<cv::Point2f> dst2;
        cv::perspectiveTransform(src3, dst2, H3_2);
        std::cout << "p3: " << p3 << std::endl;
        std::cout << "p2: " << dst2[0] << std::endl;

        // image2 -> image1
        std::vector<cv::Point2f> dst1;
        cv::perspectiveTransform(dst2, dst1, H2_1);
        std::cout << "p1 from p2: " << dst1[0] << std::endl;

        // image3 -> image1 with the composed homography
        dst1.clear();
        cv::perspectiveTransform(src3, dst1, H3_1);
        std::cout << "p1 from p3: " << dst1[0] << std::endl;
    }
    catch (std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    std::cin.get();
}
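One point worth keeping in mind (an added note, not from the original answer): with column-vector points, composition reads right to left, p1 ~ H2_1 * (H3_2 * p3) = (H2_1 * H3_2) * p3, so reversing the product generally gives a different mapping. A small sketch with made-up matrices illustrating why the order of "H1 * H2" matters:
// Hedged illustration: the composed homography must be multiplied as
// H2_1 * H3_2; the reversed product maps points differently.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat H2_1 = (cv::Mat_<double>(3, 3) << 1, 0, 1800, 0, 1, 50, 0, 0, 1);
    cv::Mat H3_2 = (cv::Mat_<double>(3, 3) << 0.95, -0.006, 66.8, -0.011, 0.98, -0.67, -0.00015, 0.00001, 1);

    std::vector<cv::Point2f> p3{ cv::Point2f(500, 500) }, correctOrder, reversedOrder;
    cv::perspectiveTransform(p3, correctOrder, H2_1 * H3_2);   // image3 -> image1
    cv::perspectiveTransform(p3, reversedOrder, H3_2 * H2_1);  // generally a different map

    std::cout << correctOrder[0] << " vs " << reversedOrder[0] << std::endl;
}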

How to track the 2d points over the images for 3d-reconstruction as specified in opencv sfm pipeline?

I am trying to do 3D reconstruction using this code from OpenCV. As far as I understand, I need a text file with the 2D points in the given format. I am wondering if anyone could help me obtain the 2D points this program mentions. I have a set of images with the calibration parameters, but I have not been able to understand how I could track and save these 2D points, e.g. so that the first point in frame 1 is the same point in frame 2, in the format they specified.
#include <opencv2/core.hpp>
#include <opencv2/sfm.hpp>
#include <opencv2/viz.hpp>
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
using namespace cv;
using namespace cv::sfm;
static void help() {
cout
<< "\n------------------------------------------------------------\n"
<< " This program shows the camera trajectory reconstruction capabilities\n"
<< " in the OpenCV Structure From Motion (SFM) module.\n"
<< " \n"
<< " Usage:\n"
<< " example_sfm_trajectory_reconstruction <path_to_tracks_file> <f> <cx> <cy>\n"
<< " where: is the tracks file absolute path into your system. \n"
<< " \n"
<< " The file must have the following format: \n"
<< " row1 : x1 y1 x2 y2 ... x36 y36 for track 1\n"
<< " row2 : x1 y1 x2 y2 ... x36 y36 for track 2\n"
<< " etc\n"
<< " \n"
<< " i.e. a row gives the 2D measured position of a point as it is tracked\n"
<< " through frames 1 to 36. If there is no match found in a view then x\n"
<< " and y are -1.\n"
<< " \n"
<< " Each row corresponds to a different point.\n"
<< " \n"
<< " f is the focal lenght in pixels. \n"
<< " cx is the image principal point x coordinates in pixels. \n"
<< " cy is the image principal point y coordinates in pixels. \n"
<< "------------------------------------------------------------------\n\n"
<< endl;
}
/* Build the following structure data
*
* frame1 frame2 frameN
* track1 | (x11,y11) | -> | (x12,y12) | -> | (x1N,y1N) |
* track2 | (x21,y21) | -> | (x22,y22) | -> | (x2N,y2N) |
* trackN | (xN1,yN1) | -> | (xN2,yN2) | -> | (xNN,yNN) |
*
*
* In case a marker (x,y) does not appear in a frame its
* values will be (-1,-1).
*/
void
parser_2D_tracks(const string &_filename, std::vector<Mat> &points2d )
{
ifstream myfile(_filename.c_str());
if (!myfile.is_open())
{
cout << "Unable to read file: " << _filename << endl;
exit(0);
} else {
double x, y;
string line_str;
int n_frames = 0, n_tracks = 0;
// extract data from text file
vector<vector<Vec2d> > tracks;
for ( ; getline(myfile,line_str); ++n_tracks)
{
istringstream line(line_str);
vector<Vec2d> track;
for ( n_frames = 0; line >> x >> y; ++n_frames)
{
if ( x > 0 && y > 0)
track.push_back(Vec2d(x,y));
else
track.push_back(Vec2d(-1));
}
tracks.push_back(track);
}
// embed data in reconstruction api format
for (int i = 0; i < n_frames; ++i)
{
Mat_<double> frame(2, n_tracks);
for (int j = 0; j < n_tracks; ++j)
{
frame(0,j) = tracks[j][i][0];
frame(1,j) = tracks[j][i][1];
}
points2d.push_back(Mat(frame));
}
myfile.close();
}
}
/* Keyboard callback to control 3D visualization
*/
bool camera_pov = false;
void keyboard_callback(const viz::KeyboardEvent &event, void* cookie)
{
if ( event.action == 0 &&!event.symbol.compare("s") )
camera_pov = !camera_pov;
}
/* Sample main code
*/
int main(int argc, char** argv)
{
// Read input parameters
if ( argc != 5 )
{
help();
exit(0);
}
// Read 2D points from text file
std::vector<Mat> points2d;
parser_2D_tracks( argv[1], points2d );
// Set the camera calibration matrix
const double f = atof(argv[2]),
cx = atof(argv[3]), cy = atof(argv[4]);
Matx33d K = Matx33d( f, 0, cx,
0, f, cy,
0, 0, 1);
bool is_projective = true;
vector<Mat> Rs_est, ts_est, points3d_estimated;
reconstruct(points2d, Rs_est, ts_est, K, points3d_estimated, is_projective);
// Print output
cout << "\n----------------------------\n" << endl;
cout << "Reconstruction: " << endl;
cout << "============================" << endl;
cout << "Estimated 3D points: " << points3d_estimated.size() << endl;
cout << "Estimated cameras: " << Rs_est.size() << endl;
cout << "Refined intrinsics: " << endl << K << endl << endl;
cout << "3D Visualization: " << endl;
cout << "============================" << endl;
viz::Viz3d window_est("Estimation Coordinate Frame");
window_est.setBackgroundColor(); // black by default
window_est.registerKeyboardCallback(&keyboard_callback);
// Create the pointcloud
cout << "Recovering points ... ";
// recover estimated points3d
vector<Vec3f> point_cloud_est;
for (int i = 0; i < points3d_estimated.size(); ++i)
point_cloud_est.push_back(Vec3f(points3d_estimated[i]));
cout << "[DONE]" << endl;
cout << "Recovering cameras ... ";
vector<Affine3d> path_est;
for (size_t i = 0; i < Rs_est.size(); ++i)
path_est.push_back(Affine3d(Rs_est[i],ts_est[i]));
cout << "[DONE]" << endl;
cout << "Rendering Trajectory ... ";
cout << endl << "Press: " << endl;
cout << " 's' to switch the camera pov" << endl;
cout << " 'q' to close the windows " << endl;
if ( path_est.size() > 0 )
{
// animated trajectory
int idx = 0, forw = -1, n = static_cast<int>(path_est.size());
while(!window_est.wasStopped())
{
for (size_t i = 0; i < point_cloud_est.size(); ++i)
{
Vec3d point = point_cloud_est[i];
Affine3d point_pose(Mat::eye(3,3,CV_64F), point);
char buffer[50];
sprintf (buffer, "%d", static_cast<int>(i));
viz::WCube cube_widget(Point3f(0.1,0.1,0.0), Point3f(0.0,0.0,-0.1), true, viz::Color::blue());
cube_widget.setRenderingProperty(viz::LINE_WIDTH, 2.0);
window_est.showWidget("Cube"+string(buffer), cube_widget, point_pose);
}
Affine3d cam_pose = path_est[idx];
viz::WCameraPosition cpw(0.25); // Coordinate axes
viz::WCameraPosition cpw_frustum(K, 0.3, viz::Color::yellow()); // Camera frustum
if ( camera_pov )
window_est.setViewerPose(cam_pose);
else
{
// render complete trajectory
window_est.showWidget("cameras_frames_and_lines_est", viz::WTrajectory(path_est, viz::WTrajectory::PATH, 1.0, viz::Color::green()));
window_est.showWidget("CPW", cpw, cam_pose);
window_est.showWidget("CPW_FRUSTUM", cpw_frustum, cam_pose);
}
// update trajectory index (spring effect)
forw *= (idx==n || idx==0) ? -1: 1; idx += forw;
// frame rate 1s
window_est.spinOnce(1, true);
window_est.removeAllWidgets();
}
}
return 0;
}
I would be really grateful if someone can help me through this. Thank you.
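The sample itself only parses an existing tracks file. As one possible way to produce such a file (an added sketch under assumptions, not part of the original question or of the OpenCV sample), points could be detected in the first frame and followed with sparse optical flow, writing "x y" per frame and "-1 -1" where a point is lost:
// Hedged sketch: build 2D tracks with goodFeaturesToTrack + calcOpticalFlowPyrLK
// and write them in the row-per-track format expected by the SFM sample above.
#include <opencv2/opencv.hpp>
#include <fstream>
#include <string>
#include <vector>

void write_tracks(const std::vector<cv::Mat>& frames, const std::string& outPath)
{
    cv::Mat prevGray;
    cv::cvtColor(frames[0], prevGray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Point2f> prevPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 10);

    // tracks[i] holds the position of point i in every frame, (-1,-1) when lost
    std::vector<std::vector<cv::Point2f> > tracks(prevPts.size());
    for (size_t i = 0; i < prevPts.size(); ++i)
        tracks[i].push_back(prevPts[i]);

    for (size_t f = 1; f < frames.size(); ++f)
    {
        cv::Mat gray;
        cv::cvtColor(frames[f], gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Point2f> nextPts;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        for (size_t i = 0; i < tracks.size(); ++i)
            tracks[i].push_back(status[i] ? nextPts[i] : cv::Point2f(-1, -1));

        prevGray = gray;
        prevPts = nextPts; // a real pipeline would drop or re-detect lost points
    }

    std::ofstream out(outPath.c_str());
    for (size_t i = 0; i < tracks.size(); ++i)
    {
        for (size_t j = 0; j < tracks[i].size(); ++j)
            out << tracks[i][j].x << " " << tracks[i][j].y << " ";
        out << "\n";
    }
}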

Putting multiple video feed next to each other with OpenCV

I have real-time video from 3 different cameras. I would like to resize the frames, put the videos next to each other, and fit them all on one screen, basically:
-------------------------------------
| CAM LEFT | CAM MIDDLE | CAM RIGHT |
-------------------------------------
I have written the following code, but for whatever reason I only get an image on the LEFT side. The middle and right sides are replaced by a gray background (I have no idea where that comes from). I have printed out all the coordinates and they look fine to me, with a total screen size of 1440 x 540 (I want the picture in the middle to be bigger).
- LEFT: 0,0,380,540
- MIDDLE: 380,0,680,540
- RIGHT: 1060,0,380,540
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching.hpp"
#include <opencv2/opencv.hpp>
#include <opencv2/imgcodecs.hpp>
#include<iostream>
#include <fstream>
#include<conio.h> // may have to modify this line if not using Windows
int main(int argc, char **argv) {
int combinedScreenWidth = 1440;
int combinedScreenHeight = 540;
int rearCameraBiggerByThis = 100;
int combinedScreenWidthHalv = combinedScreenWidth / 3;
//initialize and allocate memory to load the video stream from camera
cv::VideoCapture cameraRight(0); // RIGHT CAM
cv::VideoCapture cameraMiddle(3); // REAR CAM
cv::VideoCapture cameraLeft(2); // LEFT CAM
if (!cameraRight.isOpened()) return 1;
if (!cameraMiddle.isOpened()) return 1;
if (!cameraLeft.isOpened()) return 1;
cv::Mat3b cameraRightFrame;
cv::Mat3b cameraMiddleFrame;
cv::Mat3b cameraLeftFrame;
cv::Mat3b cameraRightFrameMirrored;
cv::Mat3b cameraMiddleFrameMirrored;
cv::Mat3b cameraLeftFrameMirrored;
cv::Size camRightSize;
cv::Size camMiddleSize;
cv::Size camLeftSize;
cv::Mat3b cameraRightFrameMirroredResize;
cv::Mat3b cameraMiddleFrameMirroredResize;
cv::Mat3b cameraLeftFrameMirroredResize;
while (true) {
// Grab
cameraRight >> cameraRightFrame;
cameraMiddle >> cameraMiddleFrame;
cameraLeft >> cameraLeftFrame;
// Mirror
cv::flip(cameraRightFrame, cameraRightFrameMirrored, 1);
cv::flip(cameraMiddleFrame, cameraMiddleFrameMirrored, 1);
cv::flip(cameraLeftFrame, cameraLeftFrameMirrored, 1);
// Resize
camRightSize = cameraRightFrame.size();
camMiddleSize = cameraMiddleFrame.size();
camLeftSize = cameraLeftFrame.size();
std::cout << "----------------------> original camRightSize " << camRightSize.width << "x" << camRightSize.height << std::endl;
std::cout << "----------------------> original camMiddletSize " << camMiddleSize.width << "x" << camMiddleSize.height << std::endl;
std::cout << "----------------------> original camLeftSize " << camLeftSize.width << "x" << camLeftSize.height << std::endl;
resize(cameraRightFrameMirrored, cameraRightFrameMirroredResize, cv::Size(combinedScreenWidthHalv - rearCameraBiggerByThis, combinedScreenHeight));
resize(cameraMiddleFrameMirrored, cameraMiddleFrameMirroredResize, cv::Size(combinedScreenWidthHalv + 2 * rearCameraBiggerByThis, combinedScreenHeight));
resize(cameraLeftFrameMirrored, cameraLeftFrameMirroredResize, cv::Size(combinedScreenWidthHalv - rearCameraBiggerByThis, combinedScreenHeight));
camRightSize = cameraRightFrameMirroredResize.size();
camMiddleSize = cameraMiddleFrameMirroredResize.size();
camLeftSize = cameraLeftFrameMirroredResize.size();
std::cout << "----------------------> resized camRightSize " << camRightSize.width << "x" << camRightSize.height << std::endl;
std::cout << "----------------------> resized camMiddletSize " << camMiddleSize.width << "x" << camMiddleSize.height << std::endl;
std::cout << "----------------------> resized camLeftSize " << camLeftSize.width << "x" << camLeftSize.height << std::endl;
// Compilation
cv::Mat3b combinedFrame(camRightSize.height, camLeftSize.width + camMiddleSize.width + camRightSize.width);
// LEFT
std::cout << "LEFT: 0,0," << camLeftSize.width << "," << camLeftSize.height << std::endl;
cv::Mat3b leftSideOfScreen(combinedFrame, cv::Rect(0, 0, camLeftSize.width, camLeftSize.height));
cameraLeftFrameMirroredResize.copyTo(leftSideOfScreen);
// MIDDLE
std::cout << "MIDDLE: " << camLeftSize.width << ",0," << camMiddleSize.width << "," << camMiddleSize.height << std::endl;
cv::Mat3b middleSideOfScreen(combinedFrame, cv::Rect(camLeftSize.width, 0, camMiddleSize.width, camMiddleSize.height));
cameraRightFrameMirroredResize.copyTo(middleSideOfScreen);
// RIGHT
std::cout << "RIGHT: " << camLeftSize.width + camMiddleSize.width << ",0," << camRightSize.width << "," << camRightSize.height << std::endl;
cv::Mat3b rightSideOfScreen(combinedFrame, cv::Rect(camLeftSize.width + camMiddleSize.width, 0, camRightSize.width, camRightSize.height));
cameraMiddleFrameMirroredResize.copyTo(rightSideOfScreen);
// declare windows
cv::namedWindow("Combined", CV_WINDOW_NORMAL);
cv::setWindowProperty("Combined", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::putText(combinedFrame, "LEFT", cv::Point(250, 50), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 255), 2);
cv::putText(combinedFrame, "REAR", cv::Point(650, 50), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 255), 2);
cv::putText(combinedFrame, "RIGHT", cv::Point(1050, 50), cv::FONT_HERSHEY_PLAIN, 2.0, cv::Scalar(255, 255, 255), 2);
cv::imshow("Combined", combinedFrame); // 1440 x 540 Screen size
//cv::imshow("Right Cam", cameraRightFrame);
//cv::imshow("Middle Cam", cameraMiddleFrame);
//cv::imshow("Left Cam", cameraLeftFrame);
//wait for 40 milliseconds
int c = cvWaitKey(1);
//exit the loop if user press "Esc" key (ASCII value of "Esc" is 27)
if (27 == char(c)) {
break;
}
}
return 0;
}
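As a side note (an added sketch, not from the original question): once the three frames are resized to the same height and type, cv::hconcat can build the side-by-side frame and avoids the manual ROI bookkeeping:
// Hedged alternative: compose the three resized frames with cv::hconcat.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat composeRow(const cv::Mat& left, const cv::Mat& middle, const cv::Mat& right,
                   int screenWidth = 1440, int screenHeight = 540, int middleExtra = 100)
{
    int third = screenWidth / 3;
    cv::Mat l, m, r, combined;
    cv::resize(left,   l, cv::Size(third - middleExtra,     screenHeight));
    cv::resize(middle, m, cv::Size(third + 2 * middleExtra, screenHeight));
    cv::resize(right,  r, cv::Size(third - middleExtra,     screenHeight));
    cv::hconcat(std::vector<cv::Mat>{ l, m, r }, combined); // all inputs must share height and type
    return combined;
}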

Bug in cv::warpAffine?

I think the following example shows a bug in warpAffine (OpenCV 3.1 with precompiled Win64 DLLs):
Mat x(1, 20, CV_32FC1);
for (int iCol(0); iCol < x.cols; iCol++) { x.col(iCol).setTo(iCol); }
Mat crop;
Point2d c(10., 0.);
double scale(1.3);
int cropSz(11);
double vals[6] = { scale, 0.0, c.x - (cropSz/2)*scale, 0.0, scale, c.y };
Mat map(2, 3, CV_64FC1, vals);
warpAffine(x, crop, map, Size(cropSz, 1), WARP_INVERSE_MAP | INTER_LINEAR);
float dx = (crop.at<float>(0, crop.cols-1) - crop.at<float>(0, 0)) / (crop.cols-1);
Mat constGrad = crop.clone().setTo(0);
for (int iCol(0); iCol < constGrad.cols; iCol++) {
    constGrad.col(iCol) = c.x + (iCol - cropSz/2)*scale;
}
Mat diff = crop - constGrad;
double err = norm(diff, NORM_INF);
if (err > 1e-4) {
    cout << "Problem:" << endl;
    cout << "computed output: " << crop << endl;
    cout << "expected output: " << constGrad << endl;
    cout << "difference: " << diff << endl;
    Mat dxImg;
    Mat dxFilt(1, 2, CV_32FC1);
    dxFilt.at<float>(0) = -1.0f;
    dxFilt.at<float>(1) = 1.0f;
    filter2D(crop, dxImg, crop.depth(), dxFilt);
    cout << "x-derivative in computed output: " << dxImg(Rect(1, 0, 10, 1)) << endl;
    cout << "Note: We expect a constant difference of 1.3" << endl;
}
Here is the program output:
Problem:
computed output: [3.5, 4.8125, 6.09375, 7.40625, 8.6875, 10, 11.3125, 12.59375, 13.90625, 15.1875, 16.5]
expected output: [3.5, 4.8000002, 6.0999999, 7.4000001, 8.6999998, 10, 11.3, 12.6, 13.9, 15.2, 16.5]
difference: [0, 0.012499809, -0.0062499046, 0.0062499046, -0.012499809, 0, 0.012499809, -0.0062503815, 0.0062503815, -0.012499809, 0]
x-derivative in computed output: [1.3125, 1.28125, 1.3125, 1.28125, 1.3125, 1.3125, 1.28125, 1.3125, 1.28125, 1.3125]
Note: We expect a constant difference of 1.3
I create an image with entries 0, 1, 2, ..., n-1 and cut out a region around (10,0) with scale 1.3. I also create an expected image constGrad. However, the two are not the same. What is more, since the input image has a constant derivative in the x-direction and the mapping is affine, I would also expect a constant gradient in the resulting image.
The problem is not a boundary issue; the same thing happens in the interior of an image. It is also not related to WARP_INVERSE_MAP.
Is this a known issue? Any comments on this?
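For what it is worth (an added observation, not from the original question): every value in the computed output above is an exact multiple of 1/32, which is consistent with warpAffine's fixed-point bilinear interpolation, where coefficients are quantized to roughly 2^-INTER_BITS = 1/32 of a pixel. A small check of that, using the printed output:
// Added check: the values printed by the snippet above all sit on a 1/32 grid,
// i.e. the spacing of warpAffine's fixed-point interpolation tables.
#include <cmath>
#include <iostream>

int main()
{
    const double computed[] = { 3.5, 4.8125, 6.09375, 7.40625, 8.6875, 10,
                                11.3125, 12.59375, 13.90625, 15.1875, 16.5 };
    for (double v : computed)
        std::cout << v << " * 32 = " << v * 32
                  << (std::floor(v * 32) == v * 32 ? "  (on the 1/32 grid)" : "") << std::endl;
    return 0;
}
So the behaviour looks like an interpolation-precision effect rather than an error in the affine mapping itself; whether that counts as a bug is debatable.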

How to access each pixel value of RGB image?

I am trying to read an RGB image. However, I can only access it with the Vec3b type, not each channel.
I am not sure what the problem is. Could someone help me out of my misery?
cv::Mat imgMod = imread("rgb.png");
for (int iter_x = 0; iter_x < imgMod.cols; ++iter_x)
{
    for (int iter_y = 0; iter_y < imgMod.rows; ++iter_y)
    {
        cout << imgMod.at<cv::Vec3b>(iter_y, iter_x) << "\t";
        cout << imgMod.at<cv::Vec3b>(iter_y, iter_x)[0] << "\t";
        cout << imgMod.at<cv::Vec3b>(iter_y, iter_x)[1] << "\t";
        cout << imgMod.at<cv::Vec3b>(iter_y, iter_x)[2] << endl;
    }
}
Here is part of the resulting output of the RGB image's pixel values:
[153, 88, 81] X Q
[161, 94, 85] 。 ^ T
...
Your access is fine.
The type returned by the [] operator is uchar, so the value gets printed as a text character rather than a number. Just cast it to int to see the channel value as an integer:
cout << int(imgMod.at<cv::Vec3b>(iter_y, iter_x)[0]) << "\t";
A (more readable and explicit) C++ way to do it would be this:
cout << static_cast<int>(imgMod.at<cv::Vec3b>(iter_y, iter_x)[0]) << "\t";
Even more cool is this (obscure?) little trick - note the +:
cout << +imgMod.at<cv::Vec3b>(iter_y, iter_x)[0] << "\t";
// ^
