I want to create a 1-D array of exactly 100 values, where each index stores an index into another array. If I use std::vector<int16_t> someVector, how do I ensure that someVector has exactly 100 values, and how can I then add, say, a value at location 48, like someVector[48] = 29322, and so on?
As an alternative I tried creating a 1-D Mat with Mat someArray(1,100,CV_16UC1,Scalar(9999)). Now when I try to retrieve the value at index 48 using int retrievedValue = someArray.row(0).col(48), it says it cannot convert from Mat to int.
I realize I'm doing something crazy for something very simple, but please help.
When you initialize a vector you can set its size:
std::vector<int16_t> someVector(100);
This way it will be created as an array of 100 elements. But don't forget that the size of a vector can still change later.
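For example (a minimal sketch, reusing the names and values from the question):

#include <cstdint>
#include <vector>

std::vector<int16_t> someVector(100); // exactly 100 elements, value-initialized to 0
someVector[48] = 29322;               // assign directly by index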
As for Mat, operations like row() or col() give you a sub-matrix of the initial matrix. So the code you wrote returns a 1x1 Mat, not a short. If you want to access a single element of the matrix, it should be:
int retrievedValue = someArray.at<ushort>(0,48);
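And a quick round trip with the same someArray, just to show the element comes back as a value (CV_16UC1 stores ushort):

cv::Mat someArray(1, 100, CV_16UC1, cv::Scalar(9999));
someArray.at<ushort>(0, 48) = 29322;              // write an element
int retrievedValue = someArray.at<ushort>(0, 48); // read it back as an int, not a 1x1 Mat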
I want to multiply each element of a Vec3 by 2 in OpenCV, as we do in Matlab simply with ".*". I searched a lot but didn't find any command for this. Is there any command for this in OpenCV or not? Thanks in advance for any help.
This answer suggests you can just use the * operator in C++.
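For example, a minimal sketch (m here is just an illustrative 1x3 float Mat):

cv::Mat m = (cv::Mat_<float>(1, 3) << 1, 2, 3);
cv::Mat doubled = m * 2.0; // element-wise multiplication by a scalar
m *= 2.0;                  // or multiply in place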
If you are using Java, I don't think this is possible: you can only multiply a Mat by another Mat.
So you would need to create a new Mat instance of the same size and type, initialised with the scalar value you want to multiply by.
You can easily create a function to do this:
public Mat multiplyScalar(Mat m, double i)
{
    return m.mul(new Mat((int) m.size().height, (int) m.size().width, m.type(), new Scalar(i)));
}
Then x = multiplyScalar(x, 5); will multiply each element by 5.
I am new to OpenCV, so please help me in solving this basic query. I am trying to find the maximum value of a Mat variable. I tried to use max_element and minMaxLoc, but end up facing errors, as the function keeps saying the data type doesn't match. I checked it over and over again, but was not successful. Here is my code.
abs_dst is the Mat variable:
double *estimate,*min;
CvPoint *minLoc,*maxLoc;
Size s = abs_dst.size();
int rows = s.height;
int cols = s.width;
double imagearray[rows][cols] = abs_dst.data();
minMaxLoc(imagearray,min,estimate,minLoc,maxLoc);
I even tried giving the Mat variable abs_dst directly, but have not succeeded. There's an optional input mask array, which I have ignored as I do not require it.
Try this:
Point[] Mat_To_Point = Your_Mat_Variable.toArray();
Then you can sort your array.
I think I got the answer. Thanks for your efforts. The problem is that minMaxLoc doesn't take an RGB image, as it is 3-channel; I had to convert abs_dst to grayscale.
Secondly,
it is not
CvPoint *minLoc,*maxLoc;
it is
Point *minLoc,*maxLoc;
I did not need to convert it to an array, as converting to grayscale directly gives me a single-channel image, which is enough for minMaxLoc. My apologies for my own mistakes, and thanks once again for your efforts.
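In other words, the working version looks roughly like this (assuming abs_dst is a 3-channel 8-bit BGR image; OpenCV 2.x constants):

cv::Mat gray;
cv::cvtColor(abs_dst, gray, CV_BGR2GRAY); // minMaxLoc needs a single-channel input

double minVal, maxVal;    // plain doubles, not uninitialized pointers
cv::Point minLoc, maxLoc; // Point, not CvPoint
cv::minMaxLoc(gray, &minVal, &maxVal, &minLoc, &maxLoc);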
I am new to Emgu CV. I need a matrix array to store the pixel values of gray images. Is it possible to declare a matrix array?
I coded the matrix array like this, but it gives an error:
public Matrix<Double>[] Myimgmatrix = new Matrix<Double>[5](100,80);
Error:"Method name expected"
Can anyone please help?
Do it like this:
private Matrix<Double>[] Myimgmatrix = new Matrix<Double>[5];
And then, in your class constructor, initialize every matrix in the array individually:
for(int i = 0; i < Myimgmatrix.Length; i++)
    Myimgmatrix[i] = new Matrix<Double>(100,80);
As far as I know, you can't instantiate the array and its elements at the same time.
You can also create a matrix list if you don't want to be stuck with a fixed-size array:
private List<Matrix<Double>> matrixList = new List<Matrix<Double>>();
and then, when you need a new matrix, just add it to your list in code:
matrixList.Add(new Matrix<Double>(100,80));
Actually, you can directly access gray pixel values from the image data in Emgu CV. You can check the implementation in Emgu CV via this link: work with matrix
Can anyone help me with how to get the absolute value of a complex matrix? The matrix contains the real values in one channel and the imaginary values in the other channel. Please help me.
If possible, give me some example.
Thanks in advance
Arangarajan
Let's assume you have two components, X and Y: two matrices of the same size and type. In your case these hold the real/imaginary values.
// n rows, m cols, type float; we assume the following matrices are filled
cv::Mat X(n,m,CV_32F);
cv::Mat Y(n,m,CV_32F);
You can compute the absolute value of each complex number like this:
// create a new matrix for storage
cv::Mat A(n,m,CV_32F,cv::Scalar(0.0));
for(int i=0;i<n;i++){
    // pointers to row(i) values
    const float* rowi_x = X.ptr<float>(i);
    const float* rowi_y = Y.ptr<float>(i);
    float* rowi_a = A.ptr<float>(i);
    for(int j=0;j<m;j++){ // note: j<m, not j<=m, to stay inside the row
        rowi_a[j] = sqrt(rowi_x[j]*rowi_x[j]+rowi_y[j]*rowi_y[j]);
    }
}
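As an aside, OpenCV's core module already provides cv::magnitude, which does the same thing in one call for floating-point matrices:

cv::Mat A;
cv::magnitude(X, Y, A); // A(i,j) = sqrt(X(i,j)^2 + Y(i,j)^2)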
If you look in the OpenCV phasecorr.cpp module, there's a function called magSpectrums that does this already and will handle conjugate symmetry-packed DFT results too. I don't think it's exposed by the header file, but it's easy enough to copy it. If you care about speed, make sure you compile with any available SIMD options turned on too because they can make a big difference with this calculation.
Currently, I'm working on a project in medical engineering. I have a big image with several sub-images of cells, so my first task is to divide the image.
I thought of the following:
Convert the image into binary.
Do a projection of the bright pixels onto the x-axis, so I can see where there are gaps between brightness values, and then divide the image there.
The problem comes when I try the second part. My idea is to use a vector as the projection and sum all the brightness values along each column, so position 0 of the vector holds the sum of all the brightness values in the first column of the image, and so on until the last column, so that at the end I have the projection.
This is how I have tried:
void calculo(cv::Mat &result, cv::Mat &binary){ // result = the sum, binary = the image
    int i,j;
    for (i=0;i<=binary.rows;i++){
        for(j=0;j<=binary.cols;j++){
            cv::Scalar intensity = binary.at<uchar>(j,i);
            result.at<uchar>(i,i) = result.at<uchar>(i,i) + intensity.val[0];
        }
        cv::Scalar intensity2 = result.at<uchar>(i,i);
        cout << "content" << "\n" << intensity2.val[0] << endl;
    }
}
When executing this code, I get an access violation error. Another problem is that I cannot create a matrix with a single row, so... I don't know what I could do.
Any ideas?! Thanks!
In the end, it did not work; I need to sum all the pixels in one COLUMN. I did:
cv::Mat suma(cv::Mat& matrix){
    int i;
    cv::Mat output(1,matrix.cols,CV_64F);
    for (i=0;i<=matrix.cols;i++){
        output.at<double>(0,i) = norm(matrix.col(i),1);
    }
    return output;
}
but it gave me an error:
Assertion failed (0 <= colRange.start && colRange.start <= colRange.end && colRange.end <= m.cols) in Mat, file /home/usuario/OpenCV-2.2.0/modules/core/src/matrix.cpp, line 276
I don't know; any idea would be helpful. Anyway, many thanks mevatron, you really set me on the right track.
If you just want the sum of the binary image, you could simply take the L1-norm. Like so:
Mat binaryVectorSum(const Mat& binary)
{
    Mat output(1, binary.rows, CV_64F);
    for(int i = 0; i < binary.rows; i++)
    {
        output.at<double>(0, i) = norm(binary.row(i), NORM_L1);
    }
    return output;
}
I'm at work, so I can't test it out, but that should get you close. (Incidentally, the assertion you hit comes from the loop condition i<=matrix.cols in your version: when i equals matrix.cols, matrix.col(i) requests a column one past the end. It should be i<matrix.cols.)
EDIT : Got home. Tested it. It works. :) One caveat...this function works if your binary matrix is truly binary (i.e., 0's and 1's). You may need to scale the norm output with the maximum value if the binary matrix is say 0's and 255's.
EDIT : If you don't have using namespace cv; in your .cpp file, then you'll need to declare the namespace to use NORM_L1 like this cv::NORM_L1.
Have you considered transposing the matrix before you call the function? Like this:
sumCols = binaryVectorSum(binary.t());
vs.
sumRows = binaryVectorSum(binary);
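For what it's worth, cv::reduce can also compute the column projection in a single call (CV_REDUCE_SUM is the OpenCV 2.x constant; CV_64F keeps the sums from overflowing):

cv::Mat colSums;
cv::reduce(binary, colSums, 0, CV_REDUCE_SUM, CV_64F); // dim=0 collapses each column into a single row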
EDIT : A bug with my code :)
I changed:
Mat output(1, binary.cols, CV_64F);
to
Mat output(1, binary.rows, CV_64F);
My test case was a square matrix, so that bug didn't get found...
Hope that is helpful!