It seems that given a multi-channel image img I cannot do this:
img *= cv::Scalar(1.5,0.5,2.1);
I'd like to scale each channel by a different float factor.
Is there a simple way to do this?
I could use cv::transform(), but that seems like overkill (I also obviously don't want to iterate over all the pixels manually and explicitly).
Any suggestions?
You can use multiply:
cv::Mat3b m = ... ;
cv::multiply(m, cv::Scalar(2, 3, 4), m);
or, as suggested by @AdiShavit:
cv::Mat3b m = ... ;
m = m.mul(cv::Scalar(2, 3, 4));
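A quick self-contained check (my own sketch, using a tiny dummy Mat3b instead of a real image) shows that both calls scale each channel independently, with saturation to the 8-bit range:
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat3b img(2, 2, cv::Vec3b(10, 20, 30));          // tiny 3-channel test image
    cv::multiply(img, cv::Scalar(1.5, 0.5, 2.1), img);   // per-channel scaling in place
    std::cout << img << std::endl;                       // every pixel becomes (15, 10, 63)
    return 0;
}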
I am trying to use cvCalibrateCamera2, but I get an error saying that the rotation matrix is not properly defined:
...calibration.cpp:1495: error: (-5) the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views
I have already tried all of the layouts mentioned in that message, but I still get the error.
My code:
CvMat *object_points = cvCreateMat((int)pp.object_points.size(), 1, CV_32FC3);
CvMat *image_points = cvCreateMat((int)pp.image_points.size(), 1, CV_32FC2);
const CvMat point_counts = cvMat((int)pp.point_counts.size(), 1, CV_32SC1, &pp.point_counts[0]);
for (size_t i = 0; i < pp.object_points.size(); i++)
{
    object_points->data.fl[i*3+0] = (float)pp.object_points[i].x;
    object_points->data.fl[i*3+1] = (float)pp.object_points[i].y;
    object_points->data.fl[i*3+2] = (float)pp.object_points[i].z;
    image_points->data.fl[i*2+0] = (float)pp.image_points[i].x;
    image_points->data.fl[i*2+1] = (float)pp.image_points[i].y;
}
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
cvCalibrateCamera2(object_points, image_points, &point_counts,
                   cvSize(pp.width, pp.height), camera->m_calib_K,
                   camera->m_calib_D, tempR, &tempData->m_calib_T,
                   CV_CALIB_USE_INTRINSIC_GUESS);
// camera->calib_T is defined as:
// double m_calib_T_data[3];
// cvMat(3, 1, CV_64F, camera->m_calib_T_data);
I thought that the rotation output used by cvCalibrateCamera2 should be 1x3 (I then want to use the Rodrigues function to get a 3x3 matrix), but that doesn't work, and neither does any other combination mentioned in the error.
Any ideas?
I am using OpenCV 2.4.0 (maybe there is a bug in that method, but for various reasons I can't use a later version of OpenCV).
I think the error message is actually clear. I am not very familiar with this C interface, but I know it requires the output arrays to be allocated with the right shape.
The problem in line
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
is that tempR should have one 1x3 row for each of the N views you use. With that in mind, the message becomes clear:
...calibration.cpp:1495: error: (-5) the output array of rotation
vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9
array, where n is the number of views
You must create tempR like this (more or less; I do not know how you compute N in your setup):
CvMat* tempR = cvCreateMat(N, 3, CV_32F);
Try to derive N from the dimensions of object_points; if that does not work, try image_points.
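A rough sketch of what that could look like with the variable names from the question (assuming here that N, the number of views, equals the number of entries in pp.point_counts, one count per view; the translation output needs the same Nx3 layout):
int N = (int)pp.point_counts.size();        // assumed: one point count per view
CvMat* tempR = cvCreateMat(N, 3, CV_32F);   // one 1x3 rotation vector per view
CvMat* tempT = cvCreateMat(N, 3, CV_32F);   // translation vectors, same layout
cvCalibrateCamera2(object_points, image_points, &point_counts,
                   cvSize(pp.width, pp.height), camera->m_calib_K,
                   camera->m_calib_D, tempR, tempT,
                   CV_CALIB_USE_INTRINSIC_GUESS);
After the call, cvRodrigues2 can turn each 1x3 rotation vector into the corresponding 3x3 rotation matrix.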
I'm trying to implement the Sauvola & Pietikäinen method to binarize an image via local thresholding.
The method defines the threshold of each pixel (x,y) as T(x,y) = mean(x,y) * [1 + k*(std(x,y)/R - 1)], as in the article "Adaptive Document Image Binarization". The mean and the standard deviation are computed in a neighbourhood of (x,y). k and R are suggested to be 0.5 and 128, respectively.
This is what my code looks like:
filtered = colfilt(image, [n n], "sliding", @(x) (mean(x).*(1+0.5*(std(x)/128 - 1))));
image(image < filtered) = 0;
image(image >= filtered) = 255;
However, for all images I tested, the result is an entirely blank image, which is obviously incorrect. I think I must be misusing some element of the colfilt function, but I'm still too new to Octave to spot it.
Could someone please give me a hand?
Thanks in advance.
I can't see a problem. You really should include your full source and perhaps also your input image and the value of n. By the way, you shouldn't shadow built-in function names (like image in your case).
Input image:
pkg load image
img = imread ("lenna256.jpg");
k = 0.5;
R = 128;
n = 5;
filtered = colfilt(img, [n n], "sliding", @(x) (mean(x).*(1 + k*(std(x)/R - 1))));
img(img < filtered) = 0;
img(img >= filtered) = 255;
image (img)
imwrite (img, "lenna_out.png")
which produces a correctly binarized output image.
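For completeness, here is a rough OpenCV/C++ sketch of the same Sauvola rule (my own illustration, not a drop-in replacement for the Octave code): local mean and standard deviation come from box filters, with k and R as above.
#include <opencv2/opencv.hpp>

// gray: single-channel 8-bit input; n: window size; returns a 0/255 mask.
cv::Mat sauvola(const cv::Mat& gray, int n = 5, double k = 0.5, double R = 128.0)
{
    cv::Mat f, f2, mean, meanSq;
    gray.convertTo(f, CV_64F);
    f2 = f.mul(f);
    cv::boxFilter(f,  mean,   CV_64F, cv::Size(n, n));   // local mean
    cv::boxFilter(f2, meanSq, CV_64F, cv::Size(n, n));   // local mean of squares
    cv::Mat var = meanSq - mean.mul(mean);               // variance = E[x^2] - E[x]^2
    var = cv::max(var, 0.0);                             // guard against round-off negatives
    cv::Mat stddev;
    cv::sqrt(var, stddev);
    cv::Mat factor = 1.0 + k * (stddev / R - 1.0);
    cv::Mat T = mean.mul(factor);                        // per-pixel Sauvola threshold
    return f >= T;                                       // 255 where pixel >= T, else 0
}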
Is there any easy way to multiply a Mat and a Vec_ (provided that they have compatible sizes)? E.g.:
Mat_<double> M = Mat(3,3,CV_32F);
Vec3f V(1, 2, 3);
result = M*V //?
Maybe there is some easy way of creating a row (or column) Mat from a Vec3?
You can't directly multiply a Mat by a Vec (or, more generally, a Matx). Cast the Vec object to Mat first:
Mat_<float> M = Mat::eye(3,3,CV_32F);
Vec3f V(1, 2, 3);
Mat result = M*Mat(V);
Also, I noticed an error in your code: when constructing M, the type CV_32F corresponds to float elements, not double. This is also corrected in my code example.
Hope that it helps.
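If you prefer to stay with the fixed-size types, a small additional sketch (my addition): Matx and Vec multiply directly, so no cast to Mat is needed.
cv::Matx33f A = cv::Matx33f::eye();
cv::Vec3f   v(1, 2, 3);
cv::Vec3f   r = A * v;   // Matx33f * Vec3f yields a Vec3f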
I have a Point3f and I want to normalize it, i.e. divide it by the last (z) element.
Point3f C_tmp;
I can print it out like this,
cout << "C_tmp= " << C_tmp << endl;
But, I cannot just do
C_tmp=C_tmp/C_tmp[3];
I use C++ interface.
I couldn't find anything helpful in the documentation.
Any idea?
EDIT: Vector case:
int i;
for (i=begin; i<end; i++) {
threeD=...[i]..;
threeDVector.push_back(threeD);
twoD.x= threeD.x / threeD.z;
twoD.y= threeD.y / threeD.z;
twoDVector.push_back(twoD);
}
Point3f has the fields x, y, and z:
Point3f threeD(2, 3, 4);
Point2f twoD(threeD.x / threeD.z, threeD.y / threeD.z);
You can also (implicitly) convert a Point3f to a Vec3f and do the trick your way (be aware, C++ uses 0-based indexing):
...
Vec3f threeDVector = threeD;
threeDVector /= threeDVector[2];
Lastly, I think the best way to explore the functionality of such structures is simply to read the OpenCV header files (in this case opencv2/core/types.hpp).
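Applied to the loop from the edit, a short sketch (assuming the points live in a std::vector<cv::Point3f> named threeDVector, as in the snippet):
std::vector<cv::Point2f> twoDVector;
twoDVector.reserve(threeDVector.size());
for (const cv::Point3f& p : threeDVector)
    twoDVector.push_back(cv::Point2f(p.x / p.z, p.y / p.z));   // divide by z to normalize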
I have been trying to achieve something which should be pretty trivial and is trivial in Matlab.
Using methods of OpenCV, I want to simply achieve something such as:
cv::Mat sample = [4 5 6; 4 2 5; 1 4 2];
sample = 5*sample;
After which sample should just be:
[20 25 30; 20 10 25; 5 20 10]
I have tried scaleAdd, mul, and multiply, but none of them allows a scalar multiplier; they all require a matrix of the same "size and type". In this scenario I could create a matrix of ones and then use the scale parameter, but that seems very extraneous.
Any direct simple method would be great!
OpenCV does in fact support multiplication by a scalar value with overloaded operator*. You might need to initialize the matrix correctly, though.
float data[] = {1, 2, 3,
4, 5, 6,
7, 8, 9};
cv::Mat m(3, 3, CV_32FC1, data);
m = 3*m; // This works just fine
If you are mainly interested in mathematical operations, cv::Matx is a little easier to work with:
cv::Matx33f mx(1,2,3,
4,5,6,
7,8,9);
mx *= 4; // This works too
For Java there is no operator overloading, but the Mat class provides the same functionality through its convertTo method.
Mat dst = new Mat(src.rows(), src.cols(), src.type());
src.convertTo(dst, -1, scale, offset);
See the documentation of Mat.convertTo for details on the scale and offset parameters.
For big Mats you should use forEach, which applies the operation to every element in parallel.
If C++11 is available:
// Note: the template argument must match the Mat's element type (double <-> CV_64FC1 here).
m.forEach<double>([&](double& element, const int position[]) -> void
{
    element *= 5;
});
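A self-contained variant of that (my own example, assuming a CV_64FC1 matrix so the template type matches):
#include <opencv2/core.hpp>

int main()
{
    cv::Mat m = cv::Mat::ones(1000, 1000, CV_64FC1);
    m.forEach<double>([](double& element, const int* /* position */)
    {
        element *= 5;   // applied to every element, in parallel
    });
    return 0;
}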
Something like this:
Mat m = (Mat_<float>(3, 3) <<
         1, 2, 3,
         4, 5, 6,
         7, 8, 9) * 5;
Mat A = ...; // source matrix (your data)
Mat B = new Mat(); // destination matrix
Scalar alpha = new Scalar(5); // scale factor
Core.multiply(A, alpha, B);