Bug in cv::warpAffine?

I think the following example shows a bug in warpAffine (OpenCV 3.1 with the precompiled Win64 DLLs):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    // Input: a 1x20 ramp image whose pixel value equals its column index
    Mat x(1, 20, CV_32FC1);
    for (int iCol = 0; iCol < x.cols; iCol++) { x.col(iCol).setTo(iCol); }

    // Cut an 11-pixel-wide window around c with scale 1.3
    Mat crop;
    Point2d c(10., 0.);
    double scale(1.3);
    int cropSz(11);
    double vals[6] = { scale, 0.0, c.x - (cropSz / 2) * scale, 0.0, scale, c.y };
    Mat map(2, 3, CV_64FC1, vals);
    warpAffine(x, crop, map, Size(cropSz, 1), WARP_INVERSE_MAP | INTER_LINEAR);

    // Average step between first and last sample (unused below, kept for reference)
    float dx = (crop.at<float>(0, crop.cols - 1) - crop.at<float>(0, 0)) / (crop.cols - 1);

    // Expected result: a ramp with constant step 'scale'
    Mat constGrad = crop.clone().setTo(0);
    for (int iCol = 0; iCol < constGrad.cols; iCol++) {
        constGrad.col(iCol) = c.x + (iCol - cropSz / 2) * scale;
    }
    Mat diff = crop - constGrad;
    double err = norm(diff, NORM_INF);
    if (err > 1e-4) {
        cout << "Problem:" << endl;
        cout << "computed output: " << crop << endl;
        cout << "expected output: " << constGrad << endl;
        cout << "difference: " << diff << endl;

        // Forward difference along x: should be constant for an affine warp of a ramp
        Mat dxImg;
        Mat dxFilt(1, 2, CV_32FC1);
        dxFilt.at<float>(0) = -1.0f;
        dxFilt.at<float>(1) = 1.0f;
        filter2D(crop, dxImg, crop.depth(), dxFilt);
        cout << "x-derivative in computed output: " << dxImg(Rect(1, 0, 10, 1)) << endl;
        cout << "Note: We expect a constant difference of 1.3" << endl;
    }
    return 0;
}
Here is the program output:
Problem:
computed output: [3.5, 4.8125, 6.09375, 7.40625, 8.6875, 10, 11.3125, 12.59375, 13.90625, 15.1875, 16.5]
expected output: [3.5, 4.8000002, 6.0999999, 7.4000001, 8.6999998, 10, 11.3, 12.6, 13.9, 15.2, 16.5]
difference: [0, 0.012499809, -0.0062499046, 0.0062499046, -0.012499809, 0, 0.012499809, -0.0062503815, 0.0062503815, -0.012499809, 0]
x-derivative in computed output: [1.3125, 1.28125, 1.3125, 1.28125, 1.3125, 1.3125, 1.28125, 1.3125, 1.28125, 1.3125]
Note: We expect a constant difference of 1.3
I create an image with entries 0, 1, 2, ..., n-1, and cut out a region around (10, 0) with scale 1.3. I also build the expected image constGrad. However, the two are not the same. Worse, since the input image has a constant derivative in the x-direction and the mapping is affine, I would expect a constant gradient in the resulting image as well.
This is not a boundary-handling problem; the same thing happens in the interior of an image. It is also not related to WARP_INVERSE_MAP.
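One arithmetic observation (my own, so take it as a guess at the cause): the two derivative values that appear are exact multiples of 1/32 — 1.3125 = 42/32 and 1.28125 = 41/32, i.e. the nearest 1/32 steps on either side of 1.3. That looks as if the interpolation coefficients were being quantized to 5 fractional bits somewhere inside warpAffine.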
Is this a known issue? Any comments on this?

Related

How to compute the result after two perspective transformations?

I am doing an image stitching project using OpenCV. I have the homography H1 between img1 and img2, and the homography H2 between img2 and img3. Now I need the homography between img1 and img3, and simply multiplying H1*H2 is not working for me.
Any ideas on how to calculate the new homography between img1 and img3?
For me, computing H1 * H2 works well and gives the right results.
Here, H1 = H2_1, since it warps from image 2 to image 1, and H2 = H3_2, since it warps from image 3 to image 2. Therefore H1 * H2 = H3_1, since it warps from image 3 to image 1: with the convention x1 = H2_1 * x2, compositions apply right to left, so the image3-to-image1 map is H2_1 * H3_2, not H3_2 * H2_1.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    try {
        // H2_1 warps image2 -> image1, H3_2 warps image3 -> image2
        cv::Mat H2_1 = (cv::Mat_<double>(3, 3) << 1.0e+00, 0.0e+00, 1.8e+03, 0.0e+00, 1.0e+00, 5.0e+01, 0.0e+00, 0.0e+00, 1.0e+00);
        cv::Mat H3_2 = (cv::Mat_<double>(3, 3) << 0.949534471, -0.00581765975, 66.7917766, -0.0113810490, 0.981450515, -0.672362563, -0.000147974585, 0.00000992770511, 1.0);
        // Composed homography: warps image3 -> image1 directly
        cv::Mat H3_1 = H2_1 * H3_2;

        cv::Point2f p3(500, 500);
        std::vector<cv::Point2f> src3;
        src3.push_back(p3);

        // Path 1: image3 -> image2 -> image1 in two steps
        std::vector<cv::Point2f> dst2;
        cv::perspectiveTransform(src3, dst2, H3_2);
        std::cout << "p3: " << p3 << std::endl;
        std::cout << "p2: " << dst2[0] << std::endl;
        std::vector<cv::Point2f> dst1;
        cv::perspectiveTransform(dst2, dst1, H2_1);
        std::cout << "p1 from p2: " << dst1[0] << std::endl;

        // Path 2: image3 -> image1 in one step with the composed homography
        dst1.clear();
        cv::perspectiveTransform(src3, dst1, H3_1);
        std::cout << "p1 from p3: " << dst1[0] << std::endl;
    }
    catch (std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    std::cin.get();
}
results: both paths print the same p1 (up to floating-point rounding), which is exactly the point — the two-step warp and the composed H3_1 agree.

How to use clEnqueueWriteBufferRect in OpenCL

I want to use clEnqueueReadBufferRect in OpenCL. To do that, I need to define the region that is passed as one of its arguments. But there is an inconsistency between the OpenCL references.
The online reference says:
The (width, height, depth) in bytes of the 2D or 3D rectangle being read or written. For a 2D rectangle copy, the depth value given by region[2] should be 1.
but the reference book, page 77, says:
region defines the (width in bytes, height in rows, depth in slices) of the 2D or 3D rectangle being read or written. For a 2D rectangle copy, the depth value given by region[2] should be 1. The values in region cannot be 0.
Unfortunately, neither description worked for me: I have to provide region as (width in columns, height in rows, depth in slices). When I define the width in bytes rather than columns, I get the error CL_INVALID_VALUE. Which one is correct?
#define WGX 16
#define WGY 16

// Assumption: the OpenCL C++ bindings are used with exceptions enabled
// (cl::Error); if misc.hpp already includes <CL/cl.hpp>, the define must
// come before it. misc.hpp provides the helpers set_domain_length,
// init_unknown, set_bndryCond, cl_display_info, getErrorString and roundUp.
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
#include <cassert>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include "misc.hpp"

int main(int argc, char** argv)
{
    int i;
    int n = 1000;
    int filterWidth = 3;
    int filterRadius = (int) filterWidth / 2;
    int padding = filterRadius * 2;
    double h = 1.0 / n;
    int width_x[2];
    int height_x[2];
    int deviceWidth[2];
    int deviceHeight[2];
    int deviceDataSize[2];
    for (i = 0; i < 2; ++i)
    {
        set_domain_length(n, n, height_x[i], width_x[i], i);
    }
    float* x = new float [height_x[0] * width_x[0]];
    init_unknown(x, height_x[0], width_x[0], 0);
    set_bndryCond(x, width_x[0], h);

    // Pick the first GPU device of the first platform
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    assert(platforms.size() > 0);
    cl::Platform myPlatform = platforms[0];
    std::vector<cl::Device> devices;
    myPlatform.getDevices(CL_DEVICE_TYPE_GPU, &devices);
    assert(devices.size() > 0);
    cl::Device myDevice = devices[0];
    cl_display_info(myPlatform, myDevice);

    cl::Context context(myDevice);
    std::ifstream kernelFile("iterative_scheme.cl");
    std::string src(std::istreambuf_iterator<char>(kernelFile), (std::istreambuf_iterator<char>()));
    cl::Program::Sources sources(1, std::make_pair(src.c_str(), src.length() + 1));
    cl::Program program(context, sources);
    cl::CommandQueue queue(context, myDevice);

    // Device rows are padded up to a multiple of the work-group width
    deviceWidth[0] = roundUp(width_x[0], WGX);
    deviceHeight[0] = height_x[0];
    deviceDataSize[0] = deviceWidth[0] * deviceHeight[0] * sizeof(float);

    cl::Buffer buffer_x;
    try
    {
        buffer_x = cl::Buffer(context, CL_MEM_READ_WRITE, deviceDataSize[0]);
    } catch (cl::Error& error)
    {
        std::cout << " ---> Problem in creating buffer(s) " << std::endl;
        std::cout << " ---> " << getErrorString(error) << std::endl;
        exit(0);
    }

    cl::size_t<3> buffer_origin;
    buffer_origin[0] = 0;
    buffer_origin[1] = 0;
    buffer_origin[2] = 0;
    cl::size_t<3> host_origin;
    host_origin[0] = 0;
    host_origin[1] = 0;
    host_origin[2] = 0;
    cl::size_t<3> region;
    region[0] = (size_t)(deviceWidth[0] * sizeof(float));
    region[1] = (size_t)(height_x[0]);
    region[2] = 1;

    std::cout << "===> Start writing data to device" << std::endl;
    try
    {
        queue.enqueueWriteBufferRect(buffer_x, CL_TRUE, buffer_origin, host_origin, region,
                                     deviceWidth[0] * sizeof(float), 0, width_x[0] * sizeof(float), 0, x);
    } catch (cl::Error& error)
    {
        std::cout << " ---> Problem in writing data from Host to Device: " << std::endl;
        std::cout << " ---> " << getErrorString(error) << std::endl;
        exit(0);
    }

    // Build the program
    std::cout << "===> Start building program" << std::endl;
    try
    {
        program.build("-cl-std=CL2.0");
        std::cout << " ---> Build Successfully " << std::endl;
    } catch (cl::Error& error)
    {
        std::cout << " ---> Problem in building program " << std::endl;
        std::cout << " ---> " << getErrorString(error) << std::endl;
        std::cout << " ---> " << program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(myDevice) << std::endl;
        exit(0);
    }

    std::cout << "===> Start reading data from device" << std::endl;
    // read result y and residual from the device, skipping the halo of
    // filterRadius cells on each side
    buffer_origin[0] = (size_t)(filterRadius * sizeof(float));
    buffer_origin[1] = (size_t)filterRadius;
    buffer_origin[2] = 0;
    host_origin[0] = (size_t)(filterRadius * sizeof(float));
    host_origin[1] = (size_t)filterRadius;
    host_origin[2] = 0;
    // region of x
    region[0] = (size_t)((width_x[0] - padding) * sizeof(float));
    region[1] = (size_t)(height_x[0] - padding);
    region[2] = 1;
    try
    {
        queue.enqueueReadBufferRect(buffer_x, CL_TRUE, buffer_origin, host_origin,
                                    region, deviceWidth[0] * sizeof(float), 0, deviceWidth[0] * sizeof(float), 0, x);
    } catch (cl::Error& error)
    {
        std::cout << " ---> Problem reading buffer in device: " << std::endl;
        std::cout << " ---> " << getErrorString(error) << std::endl;
        exit(0);
    }

    delete[] x;
    return 0;
}
The online reference link you provided says:
region
The (width in bytes, height in rows, depth in slices) of the 2D or 3D rectangle being read or written. For a 2D rectangle copy, the depth value given by region[2] should be 1. The values in region cannot be 0.
This is consistent with what you quoted later from the "reference book". That's because your first link points to OpenCL 2.0 while the second points to 1.2.
The inconsistency you mention exists between the online manual of 1.2 and the PDF of 1.2; the online manual of 2.0 is consistent with the PDF. So I assume it was a bug in the 1.2 online manual which was fixed in 2.0.
otherwise, when I defined them as byte not rows/columns
What's a "column", and how is it different from bytes?
The "elements" of a buffer-rect copy are always bytes. If you're reading/writing a 1D rect from a buffer, it simply transfers region[0] bytes. The reason the API has "rows" and "slices" is that with 2D/3D regions there can be padding between rows of data, but there can't be padding between elements within a single row.
I found out the reason for the problem. According to the online reference:
CL_INVALID_VALUE if host_row_pitch is not 0 and is less than region[0].
So the enqueueWriteBufferRect call should be changed as follows:
queue.enqueueWriteBufferRect(buffer_x, CL_TRUE, buffer_origin, host_origin, region,
                             deviceWidth[0] * sizeof(float), 0, deviceWidth[0] * sizeof(float), 0, x);
which means host_row_pitch = deviceWidth[0] * sizeof(float) instead of host_row_pitch = width_x[0] * sizeof(float). Since deviceWidth[0] = roundUp(width_x[0], WGX) can be larger than width_x[0], the old host_row_pitch could be smaller than region[0] = deviceWidth[0] * sizeof(float), which is exactly the CL_INVALID_VALUE case quoted above.

Application error (0xc000007b) when running OpenCV example in Visual Studio 2015

I am running the Canny edge example in Visual Studio 2015 and I get this error:
The application was unable to start correctly (0xc000007b).
Then Visual Studio shows this error:
Unhandled exception at 0x77A2D5B2 (ntdll.dll) in Canny Edge.exe: 0xC000007B: %hs is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vendor for support. Error status 0x.
I am quite sure the code itself works, as I ran it before in Visual Studio 2013. Here is my code.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <algorithm>
using namespace cv;
using namespace std;
void help()
{
cout << "\nThis program demonstrates line finding with the Hough transform.\n"
"Usage:\n"
"./houghlines <image_name>, Default is pic1.jpg\n" << endl;
}
bool less_by_y(const cv::Point& lhs, const cv::Point& rhs)
{
return lhs.y < rhs.y;
}
int main(int argc, char** argv)
{
const char* filename = argc >= 2 ? argv[1] : "pic1.jpg";
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Rect roi;
Mat src = imread("test_4_1.png");
if (src.empty())
{
help();
cout << "can not open " << filename << endl;
return -1;
}
Mat dst, cdst;
Canny(src, dst, 50, 200, 3);
cvtColor(dst, cdst, CV_GRAY2BGR);
findContours(dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
//vector<Vec2f> lines;
//HoughLines(dst, lines, 1, CV_PI / 180, 50, 0, 0);
//for (size_t i = 0; i < lines.size(); i++)
//{
// float rho = lines[i][0], theta = lines[i][1];
// Point pt1, pt2;
// double a = cos(theta), b = sin(theta);
// double x0 = a*rho, y0 = b*rho;
// pt1.x = cvRound(x0 + 1000 * (-b));
// pt1.y = cvRound(y0 + 1000 * (a));
// pt2.x = cvRound(x0 - 1000 * (-b));
// pt2.y = cvRound(y0 - 1000 * (a));
// line(cdst, pt1, pt2, Scalar(0, 0, 255), 1, CV_AA);
// cout << pt1 << " " << pt2 << endl;
//}
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI / 180, 30, 50, 10);
for (size_t i = 0; i < lines.size(); i++)
{
Vec4i l = lines[i];
line(cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0, 0, 255), 1, CV_AA);
cout << l << endl;
}
cout << endl << lines.size() << endl;
cout << arcLength(contours[0], true) << endl;
cout << dst.size() << endl << endl;
for (int a = 0; a < contours[0].size(); a++){
cout << contours[0][a] << " ";
}
vector<Point> test = contours[0];
auto mmx = std::minmax_element(test.begin(), test.end(), less_by_y);
cout << endl << *mmx.first._Ptr << endl << *mmx.second._Ptr;
vector<Point> test2 = contours[1];
auto mmx_1 = std::minmax_element(test2.begin(), test2.end(), less_by_y);
cout << endl << *mmx_1.first._Ptr << endl << *mmx_1.second._Ptr;
imshow("source", src);
imshow("detected lines", cdst);
/* ROI by creating mask for the parallelogram */
Mat mask = cvCreateMat(dst.size().height, dst.size().width, CV_8UC1);
// Create black image with the same size as the original
for (int i = 0; i < mask.cols; i++)
for (int j = 0; j < mask.rows; j++)
mask.at<uchar>(Point(i, j)) = 0;
cout <<endl<<endl<< *mmx.first._Ptr << *mmx.second._Ptr << *mmx_1.first._Ptr << *mmx_1.second._Ptr << endl;
// Create Polygon from vertices
vector<Point> ROI_Vertices = { *mmx.first._Ptr, *mmx.second._Ptr, *mmx_1.first._Ptr, *mmx_1.second._Ptr};
vector<Point> ROI_Poly;
approxPolyDP(ROI_Vertices, ROI_Poly, 1.0, false);
// Fill polygon white
fillConvexPoly(mask, &ROI_Poly[0], ROI_Poly.size(), 255, 8, 0);
cout << ROI_Poly.size() << endl;
// Create new image for result storage
Mat imageDest = cvCreateMat(dst.size().height, dst.size().width, CV_8UC3);
// Cut out ROI and store it in imageDest
src.copyTo(imageDest, mask);
imshow("mask", mask);
imshow("image", imageDest);
waitKey();
return 0;
}
Actually my comment is the answer, with some additions.
Which OpenCV libs are you linking to? Are you linking against the vc12 builds? Those target Visual Studio 2013; for MSVS 2015 you need vc14 binaries.
OpenCV doesn't come with Visual Studio 2015 pre-builds, so you need to build OpenCV yourself for VS2015.
This person seems to have had a similar problem and talks you through how to compile for VS2015.

3D rotation matrix between two 3D points

I have two known 3D points OC1 and OC2, which are the origins of two coordinate frames in space, and I need to compute the 3D rotation matrix between them.
I know that using R1 & T1 I can get to OC1, and using R2 & T2 I can get to OC2, but I need the 3D rotation matrix between OC1 and OC2. I thought of this rule:
oMc1 = (R1 | T1) and oMc2 = (R2 | T2), and what I want is:
c1Mc2 = (oMc1)^-1 * oMc2
So I tried to implement it; here is my code:
#define _USE_MATH_DEFINES  // for M_PI on MSVC
#include <cmath>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

Mat rotation3D(double alpha, double beta, double gamma);
Mat buildTransformationMatrix(Mat rotation, Mat translation);

int main()
{
    vector<Point3f> listOfPointsOnTable;
    cout << "******** DATA *******" << endl;
    listOfPointsOnTable.push_back(Point3f(0, 0, 0));
    listOfPointsOnTable.push_back(Point3f(100, 0, 0));
    listOfPointsOnTable.push_back(Point3f(100, 100, 0));
    listOfPointsOnTable.push_back(Point3f(0, 100, 0));
    cout << endl << "Scene points :" << endl;
    for (size_t i = 0; i < listOfPointsOnTable.size(); i++)
    {
        cout << listOfPointsOnTable[i] << endl;
    }

    // Define the optical center of each camera
    Point3f centreOfC1 = Point3f(23, 0, 50);
    Point3f centreOfC2 = Point3f(0, 42, 20);
    cout << endl << "Center Of C1: " << centreOfC1 << " , Center of C2 : " << centreOfC2 << endl;

    // Define the translation and rotation between main axis and the camera 1 axis
    Mat translationOfC1 = (Mat_<double>(3, 1) << (0 - centreOfC1.x), (0 - centreOfC1.y), (0 - centreOfC1.z));
    float rotxC1 = 0, rotyC1 = 0, rotzC1 = -45;
    Mat rotationOfC1 = rotation3D(rotxC1, rotyC1, rotzC1);
    cout << endl << "Translation from default axis to C1: " << translationOfC1 << endl;
    cout << "Rotation from default axis to C1: " << rotationOfC1 << endl;
    Mat transformationToC1 = buildTransformationMatrix(rotationOfC1, translationOfC1);
    cout << "Transformation from default axis to C1: " << transformationToC1 << endl << endl;

    // Define the translation and rotation between main axis and the camera 2 axis
    Mat translationOfC2 = (Mat_<double>(3, 1) << (0 - centreOfC2.x), (0 - centreOfC2.y), (0 - centreOfC2.z));
    float rotxC2 = 0, rotyC2 = 0, rotzC2 = -90;
    Mat rotationOfC2 = rotation3D(rotxC2, rotyC2, rotzC2);
    cout << endl << "Translation from default axis to C2: " << translationOfC2 << endl;
    cout << "Rotation from default axis to C2: " << rotationOfC2 << endl;
    Mat transformationToC2 = buildTransformationMatrix(rotationOfC2, translationOfC2);
    cout << "Transformation from default axis to C2: " << transformationToC2 << endl << endl;

    Mat centreOfC2InMat = (Mat_<double>(3, 1) << centreOfC2.x, centreOfC2.y, centreOfC2.z);
    Mat centreOfC2InCamera1 = rotationOfC1 * centreOfC2InMat + translationOfC1;
    Mat translationBetweenC1AndC2 = -centreOfC2InCamera1;
    cout << endl << "****Translation from C2 to C1" << endl;
    cout << translationBetweenC1AndC2 << endl;
    Mat centreOfC1InMat = (Mat_<double>(3, 1) << centreOfC1.x, centreOfC1.y, centreOfC1.z);
    Mat centreOfC1InCamera2 = rotationOfC2 * centreOfC1InMat + translationOfC2;
    Mat translationBetweenC2AndC1 = -centreOfC1InCamera2;
    cout << "****Translation from C1 to C2" << endl;
    cout << translationBetweenC2AndC1 << endl;
    cout << "Tran1-1 * Trans2 = " << transformationToC1.inv() * transformationToC2 << endl;
    cout << "Tran2-1 * Trans1 = " << transformationToC2.inv() * transformationToC1 << endl;
    return 0;
}

// Rotation matrices around the X, Y, and Z axes; angles in degrees.
// (The parameters were 'int' originally, which silently truncated non-integer angles.)
Mat rotation3D(double alpha, double beta, double gamma)
{
    double alphaInRadian = alpha * M_PI / 180.0;
    double betaInRadian = beta * M_PI / 180.0;
    double gammaInRadian = gamma * M_PI / 180.0;
    Mat RX = (Mat_<double>(3, 3) <<
              1, 0, 0,
              0, cos(alphaInRadian), sin(alphaInRadian),
              0, -sin(alphaInRadian), cos(alphaInRadian));
    Mat RY = (Mat_<double>(3, 3) <<
              cos(betaInRadian), 0, sin(betaInRadian),
              0, 1, 0,
              -sin(betaInRadian), 0, cos(betaInRadian));
    Mat RZ = (Mat_<double>(3, 3) <<
              cos(gammaInRadian), sin(gammaInRadian), 0,
              -sin(gammaInRadian), cos(gammaInRadian), 0,
              0, 0, 1);
    // Composed rotation matrix from (RX, RY, RZ)
    Mat R = RX * RY * RZ;
    return R;
}

Mat buildTransformationMatrix(Mat rotation, Mat translation)
{
    Mat transformation = (Mat_<double>(4, 4) <<
        rotation.at<double>(0,0), rotation.at<double>(0,1), rotation.at<double>(0,2), translation.at<double>(0,0),
        rotation.at<double>(1,0), rotation.at<double>(1,1), rotation.at<double>(1,2), translation.at<double>(1,0),
        rotation.at<double>(2,0), rotation.at<double>(2,1), rotation.at<double>(2,2), translation.at<double>(2,0),
        0, 0, 0, 1);
    return transformation;
}
Here is the output:
//Origin of 3 axis
O(0,0,0), OC1 (23, 0, 50), OC2 (0, 42, 20)
Translation from default axis to OC1: [-23;
0;
-50]
Rotation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0;
0.7071067690849304, 0.7071067690849304, 0;
0, 0, 1]
Trans1 = Transformation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0, -23;
0.7071067690849304, 0.7071067690849304, 0, 0;
0, 0, 1, -50;
0, 0, 0, 1]
Translation from default axis to OC2: [0;
-42;
-20]
Rotation from default axis to OC2: [-4.371138828673793e-08, -1, 0;
1, -4.371138828673793e-08, 0;
0, 0, 1]
Trans2 = Transformation from default axis to OC2: [-4.371138828673793e-08, -1, 0, 0;
1, -4.371138828673793e-08, 0, -42;
0, 0, 1, -20;
0, 0, 0, 1]
(Trans1)-1 * (Trans2) = [0.7071067623795453, -0.7071068241967844, 0, -13.43502907247513;
0.7071068241967844, 0.7071067623795453, 0, -45.96194156373071;
0, 0, 1, 30;
0, 0, 0, 1]
(Trans2)-1 * (Trans1) = [0.7071067381763105, 0.7071067999935476, 0, 42.00000100536185;
-0.7071067999935475, 0.7071067381763104, -0, 22.99999816412165;
0, 0, 1, -30;
0, 0, 0, 1]
//Calculation of translation between OC1 and OC2:
****Translation from C2 to C1
[52.69848430156708;
-29.69848430156708;
30]
****Translation from C1 to C2
[1.005361930594972e-06;
19;
-30]
As you can see above, the 4th column of (Trans1)-1 * (Trans2) equals neither the translation from C2 to C1 nor the translation from C1 to C2.
This makes me think that c1Mc2 = (oMc1)^-1 * oMc2 does not give what I want, but I don't really understand why. Is there any other solution to get what I want?
The rotation matrix of Oc1 is, by definition, built from the components of its local X, Y and Z axes:
      | Xc1[0] Yc1[0] Zc1[0] |
R_1 = | Xc1[1] Yc1[1] Zc1[1] |
      | Xc1[2] Yc1[2] Zc1[2] |
and similarly R_2 for Oc2.
If the relative rotation between them is R, then you can write:
R_2 = R_1 * R
and thus:
R = transpose(R_1) * R_2
That is all you need.
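In OpenCV terms that is a one-liner. A minimal sketch, assuming R_1 and R_2 are 3x3 CV_64F rotation matrices such as the rotationOfC1/rotationOfC2 built above:
cv::Mat relativeRotation(const cv::Mat& R_1, const cv::Mat& R_2)
{
    // For a rotation matrix the inverse equals the transpose,
    // so R = R_1^-1 * R_2 = R_1^T * R_2.
    return R_1.t() * R_2;
}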

OpenCV Vec3b vs uchar different output

I am trying to get a pixel value before doing some calculations. When I run the code below at location (10, 10), I get the answers shown. I am not sure why the color value is different from the others. When I do the same at location (0, 0), I get the same answer from all of them. The image is black and white, with 3 channels. Or is it that for 3 channels I should always use Vec3b?
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    char imageName[] = "0001.png";
    cv::Mat img = cv::imread(imageName);
    cout << "Image rows is: " << img.rows << " cols is : " << img.cols << endl;
    cout << "Image channels " << img.channels() << endl;
    cout << "Image depth " << img.depth() << endl; // CV_8U == 0
    cout << (int)img.at<uchar>(10, 10) << endl;    // single byte at (row 10, byte 10)
    cout << (int)img.ptr(10)[10] << endl;          // same byte via the row pointer
    Scalar intensity = img.at<uchar>(10, 10);
    cout << intensity << endl;
    Vec3b color = img.at<Vec3b>(10, 10);           // whole 3-channel pixel
    cout << color << endl;
    return 0;
}
output:
Image rows is: 96 cols is : 72
Image channels 3
Image depth 0
103
103
[103, 0, 0, 0]
[164, 164, 164]
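For comparison, a sketch of how I understand the two accessors on a 3-channel image (at<uchar> indexes raw bytes within the row, so it does not land on the pixel you might expect):
// On a CV_8UC3 image, at<uchar>(r, c) returns byte c of row r.
// Byte 10 of row 10 is channel 10 % 3 == 1 of pixel (10, 10 / 3) == (10, 3),
// hence the 103 above, while at<Vec3b>(10, 10) reads the full pixel at
// column 10, hence [164, 164, 164].
uchar rawByte = img.at<uchar>(10, 10);        // byte-indexed: pixel (10, 3), channel 1
cv::Vec3b pixel = img.at<cv::Vec3b>(10, 10);  // pixel-indexed: the BGR triple at (10, 10)
uchar green = pixel[1];                       // one channel of that pixel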
