How to manipulate an image like a Matrix using OpenCV? - image-processing

I need to develop a method to find the distance between a red line and the bottom of the image.
I have already isolated the red line in HSV using some examples...
I know how to do this in MATLAB, but now I have to use OpenCV.
Can someone tell me how to do this?

You can access each pixel value of the image in the following way:
IplImage* img = cvLoadImage("your_image.jpg", CV_LOAD_IMAGE_GRAYSCALE); // single channel, so one byte == one pixel
int pixelVal;
for (int x = 0; x < img->height; x++) {
    for (int y = 0; y < img->width; y++) {
        pixelVal = ((uchar*)(img->imageData + img->widthStep * x))[y];
    }
}
Here,
img->imageData returns a pointer to the starting memory location of the image, and
img->widthStep is the number of bytes in an image row.
We access each pixel value by casting through an unsigned char pointer to get an int value. Note that this byte-per-pixel indexing is only valid for a single-channel image; in a 3-channel image each pixel occupies 3 bytes.
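For the original question, here is a minimal C++ sketch, assuming the red line has already been isolated into a binary mask (the filename red_line_mask.png is a placeholder): scan the rows from the bottom up and report the first row that contains a mask pixel.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Hypothetical mask where the isolated red line is non-zero
    // (e.g. produced by cv::inRange on an HSV image).
    cv::Mat mask = cv::imread("red_line_mask.png", cv::IMREAD_GRAYSCALE);

    // Scan from the bottom row upwards; the first row with a non-zero pixel
    // gives the distance from the line to the bottom of the image.
    for (int row = mask.rows - 1; row >= 0; row--) {
        if (cv::countNonZero(mask.row(row)) > 0) {
            std::cout << "distance to bottom: " << (mask.rows - 1 - row) << std::endl;
            break;
        }
    }
    return 0;
}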

Related

What format should I use for an OpenCV image if I need to access the underlying data?

I've made a program that creates images using OpenCL, and in the OpenCL code I have to access the underlying data of the OpenCV image and modify it directly, but I don't know how the data is arranged internally.
I'm currently using CV_8U because the representation is really simple (0 is black, 255 is white, everything in between is grey), but I want to add color and I don't know what format to use.
This is how I currently modify the image: A[y*width + x] = 255;
Since your A[y*width + x] = 255; works fine, the underlying image data A must be a 1D pixel array of size width * height, where each element is a CV_8U (8-bit unsigned int).
The color values of a pixel, in the case of OpenCV, are arranged B G R in memory. RGB order would be more common, but OpenCV likes them BGR.
Your data ought to be CV_8UC3, which is the case if you use imread or VideoCapture. If it isn't, the following information needs to be interpreted accordingly.
Your array index math needs to expand to account for the data's layout:
[(y*width + x)*3 + channel]
The factor of 3 is there because there are 3 channels; channel runs 0..2, and x and y are as you expect.
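As a quick illustrative sketch (the image size and coordinates here are made up), setting pixel (x, y) to pure red in a CV_8UC3 buffer looks like this:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC3); // rows, cols
    uchar* A = img.data;   // raw interleaved BGR buffer
    int width = img.cols;
    int x = 100, y = 50;
    A[(y * width + x) * 3 + 0] = 0;   // Blue
    A[(y * width + x) * 3 + 1] = 0;   // Green
    A[(y * width + x) * 3 + 2] = 255; // Red -> the pixel becomes pure red
    return 0;
}

This assumes the buffer is continuous, which holds for a freshly allocated Mat.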
As mentioned in other answers, you'd need to convert this single-channel image to a 3-channel image to have color. The 3 channels are Blue, Green, Red (BGR).
OpenCV has a method that does just this, cv2.cvtColor(). It takes an input image (in this case the single-channel image that you have) and a conversion code (see here for more).
So the code would be like the following:
color_image = cv2.cvtColor(source_image, cv2.COLOR_GRAY2BGR)
Then you can modify the color by accessing each of the color channels, e.g.
color_image[y, x, 0] = 255 # this changes the first channel (Blue)

Changing widthStep of an image in OpenCV

I am working on a project related to face recognition. For my program to work, each image should satisfy the condition img->widthStep = 3 * img->width.
I am trying my code on a database in which each image is of size 250x250, but the widthStep for these images is 752, so the above condition is not satisfied. widthStep is used when accessing pixels (http://opencv-users.1802565.n2.nabble.com/What-is-widthstep-td2679559.html).
Can I change the widthStep parameter to 750 without affecting other parameters of the image?
Or is there another way to achieve the condition img->widthStep = 3 * img->width?
I tried copying the 250x250 image into a 260x260 image as follows:
Mat img1, img2 = Mat::zeros(Size(260, 260), CV_8UC3);
img1 = imread(ch);
img1.copyTo(img2.colRange(1, 250).rowRange(1, 250));
But it shows this error:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == Size(cols, rows)) in unknown function, file D:\opencv2.4.5\opencv\modules\core\src\matrix.cpp, line 1372
Can anyone help me out?
Thank you!
Since you are using the term widthStep, I guess you are using IplImage. IplImage was taken from the Intel Performance Primitives (IPP) library. For good performance, IPP requires the widthStep of each row to be a multiple of 4, so rows are padded with additional bytes to enforce this. As long as you are using IplImage, you won't be able to have a widthStep of 750, which is not a multiple of 4.
OpenCV 1 was based on IplImage, but OpenCV 2 is based on Mat. It has been years since IplImage was deprecated.
Mat has no such limitation. By default its step will be 750.
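Here is a quick check of the alignment behaviour described above (a sketch, assuming OpenCV's default allocators):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // IplImage pads each row to a multiple of 4 bytes: 250 * 3 = 750 -> 752.
    IplImage* ipl = cvCreateImage(cvSize(250, 250), IPL_DEPTH_8U, 3);
    std::printf("IplImage widthStep = %d\n", ipl->widthStep); // prints 752
    cvReleaseImage(&ipl);

    // Mat has no such padding by default: step is exactly cols * channels.
    cv::Mat mat(250, 250, CV_8UC3);
    std::printf("Mat step = %d\n", (int)mat.step); // prints 750
    return 0;
}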
After edit of the question:
colRange(1,250) means 249 columns, not 250, and the same goes for rowRange(1,250). When the size of the image being copied differs from the size of the target image, the target image is reallocated. But since colRange and rowRange return a constant temporary header that cannot be reallocated, the program crashes.
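A sketch of the corrected copy, using a Rect ROI whose size matches the source exactly (the filename and the 5-pixel offset are placeholders):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat img1 = imread("face.jpg"); // hypothetical 250x250 image
    Mat img2 = Mat::zeros(Size(260, 260), CV_8UC3);
    // Rect(x, y, width, height): the ROI is exactly img1's size, so copyTo
    // does not try to reallocate the temporary header and no assertion fires.
    img1.copyTo(img2(Rect(5, 5, img1.cols, img1.rows)));
    return 0;
}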

Locating a change between two images

I have two images that are similar, but one has an additional change on it. What I need to be able to do is locate the change between the two images. Both images have white backgrounds, and the change is a line being drawn. I don't need anything as complex as OpenCV; I'm looking for a "simple" solution in C or C++.
If you just want to show the differences, you can use the code below.
FastBitmap original = new FastBitmap(bitmap);
FastBitmap overlay = new FastBitmap(processedBitmap);

// Subtract the overlay from the original to see just the differences.
Subtract sub = new Subtract(overlay);
sub.applyInPlace(original);

// Show the results
JOptionPane.showMessageDialog(null, original.toIcon());
To compare two images, you can use the ObjectiveFidelity class in the Catalano Framework.
The Catalano Framework is in Java, so you can port this class to another LGPL project.
https://code.google.com/p/catalano-framework/
FastBitmap original = new FastBitmap(bitmap);
FastBitmap reconstructed = new FastBitmap(processedBitmap);
ObjectiveFidelity of = new ObjectiveFidelity(original, reconstructed);
int error = of.getTotalError();
double errorRMS = of.getErrorRMS();
double snr = of.getSignalToNoiseRatioRMS();
//Show the results
Disclaimer: I am the author of this framework, but I thought this would help.
Your description leaves me with a few unanswered questions; it would be good to see some example before/after images.
However, on the face of it, assuming you just want to find the parameters of the added line, it may be enough to convert the frames to grey-scale, subtract them from one another, segment the result to black & white, and then perform line segment detection.
If the resulting image only contains one straight line segment, then it might be enough to find the bounding box around the remaining pixels, with a simple check to determine which of the two possible line segments you have.
However it would probably be simpler to use one of the Hough Transform methods provided by OpenCV.
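A sketch of that difference-then-Hough pipeline in OpenCV (the filenames and thresholds are placeholders, not values from the question):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat before = cv::imread("before.png", cv::IMREAD_GRAYSCALE);
    cv::Mat after = cv::imread("after.png", cv::IMREAD_GRAYSCALE);

    cv::Mat diff, mask;
    cv::absdiff(before, after, diff);                      // pixels that changed
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY); // segment to B&W

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(mask, lines, 1, CV_PI / 180, 50, 30, 10);
    for (size_t i = 0; i < lines.size(); i++)
        std::cout << "line: (" << lines[i][0] << "," << lines[i][1] << ") -> ("
                  << lines[i][2] << "," << lines[i][3] << ")" << std::endl;
    return 0;
}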
You can use memcmp() (the ANSI C function to compare 2 memory blocks, much like strcmp()). Just call it on the arrays of pixels and it returns whether they are identical or not.
With a little tweak you can also get, as a result, a pointer to the memory block where the first change occurred. This gives you a pointer to the first differing pixel; you can then just walk along its neighbors to find all the non-white pixels (representing your line).
#include <string.h>  /* memcmp */
#include <stdbool.h>
#include <stddef.h>  /* NULL */

/* Returns true if the two pixel buffers differ anywhere. */
bool AreImagesDifferent(const char* Im1, const char* Im2, const int size){
    return memcmp(Im1, Im2, size) != 0;
}

/* Returns a pointer to the first differing pixel, or NULL if the buffers match. */
const char* getFirstDifferentPixel(const char* Im1, const char* Im2, const int size){
    const char* Im1end = Im1 + size;
    for (; Im1 < Im1end; Im1++, Im2++){
        if ((*Im1) != (*Im2))
            return Im1;
    }
    return NULL; /* no difference found */
}

How to convert a cv::Mat image to an std::vector if the image is not continuous?

I tried looking for solutions and came across this link:
Converting a row of cv::Mat to std::vector. However, the answer there will only work if the image is continuous, i.e. the length of a row equals the step size. Am I right, or am I missing something? If I'm right, what is the most efficient way of copying a non-continuous image to a std::vector?
I am looking for something more efficient than:
for (int ii = 0; ii < no. of image rows; ii++)
{
    ptr = pointer to the ii'th row;
    copy from ptr to (ptr + row length) into the std::vector;
}
Thanks,
Vishy
For a single-channel 8-bit image:
std::vector<uchar> dstVec;
dstVec.assign(srcMat.begin<uchar>(), srcMat.end<uchar>());
Mat's iterators account for the row padding, so this works even when srcMat is not continuous.
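Here is a self-contained sketch demonstrating this on a deliberately non-continuous image (an ROI view into a larger Mat):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat big(100, 100, CV_8UC1, cv::Scalar(7));
    cv::Mat roi = big(cv::Rect(10, 10, 50, 50)); // a view: rows have gaps
    CV_Assert(!roi.isContinuous());

    std::vector<uchar> dstVec;
    dstVec.assign(roi.begin<uchar>(), roi.end<uchar>());
    CV_Assert(dstVec.size() == 50 * 50); // the padding bytes were skipped
    return 0;
}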

Convert image color space and output separate channels in OpenCV

I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this:
cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData);
cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3);
cv::cvtColor(input, output, CV_BGR2YCrCb);
cv::Mat outputArr[3];
outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData);
outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData);
outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData);
split(output,outputArr);
But this code is slow because of the redundant split operation, which copies the interleaved image into the separate channel images. Is there a way to make cvtColor produce an output that is already split into channel images? I tried to use the constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.
Are you sure that copying the image data is the limiting step?
How are you producing the Y / Cr / Cb cv::Mats?
Can you just rewrite this function to write the results into the three separate images?
There is no calling option for cv::cvtColor that gives its result as three separate cv::Mats (one per channel). From the documentation:
dst – output image of the same size and depth as src.
source: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
You have to copy the pixels from the result (as you are already doing) or write such a conversion function yourself.
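If you do write it yourself, a fused per-pixel conversion can fill the three planes directly and skip the interleaved intermediate. This is only a sketch: the integer coefficients merely approximate OpenCV's BGR -> YCrCb transform and are my assumption, not the library's exact implementation.

#include <opencv2/opencv.hpp>

void bgr2ycrcbPlanar(const cv::Mat& bgr, cv::Mat& Y, cv::Mat& Cr, cv::Mat& Cb) {
    CV_Assert(bgr.type() == CV_8UC3);
    Y.create(bgr.size(), CV_8UC1);
    Cr.create(bgr.size(), CV_8UC1);
    Cb.create(bgr.size(), CV_8UC1);
    for (int r = 0; r < bgr.rows; r++) {
        const cv::Vec3b* src = bgr.ptr<cv::Vec3b>(r);
        uchar* y = Y.ptr<uchar>(r);
        uchar* cr = Cr.ptr<uchar>(r);
        uchar* cb = Cb.ptr<uchar>(r);
        for (int c = 0; c < bgr.cols; c++) {
            int B = src[c][0], G = src[c][1], R = src[c][2];
            int yv = (77 * R + 150 * G + 29 * B) / 256;                   // ~0.299R + 0.587G + 0.114B
            y[c] = cv::saturate_cast<uchar>(yv);
            cr[c] = cv::saturate_cast<uchar>((R - yv) * 183 / 256 + 128); // ~0.713 (R - Y)
            cb[c] = cv::saturate_cast<uchar>((B - yv) * 144 / 256 + 128); // ~0.564 (B - Y)
        }
    }
}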
Use split. This splits the image into 3 different channels or arrays.
Now converting them back to UIImage is where I am having trouble. I get three grayscale images, one in each array, and I am convinced they are the proper channels in cv::Mat format; but when I convert them to UIImage they are grayscale with different grayscale values in each image.
If you can use imread and imshow, then it should display the images for you after the split. My problem is trying to use the ios.h methods; I believe it reassembles the arrays instead of transferring the single array.
Here is my code, using a segmented control to choose which layer (array) you want to display. Like I said, I get 3 grayscale images but with completely different values. I need to keep the one layer and abandon the rest; still working on that part of it.
UIImageToMat(_img, cvImage);
cv::cvtColor(cvImage, RYB, CV_RGB2BGRA);
split(RYB, layers);

if (_segmentedRGBControl.selectedSegmentIndex == 0) {
    // cv::cvtColor(layers[0], RYB, CV_8UC1);
    RYB = layers[0];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 1) {
    RYB = layers[1];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 2) {
    RYB = layers[2];
    _imageProcessView.image = MatToUIImage(RYB);
}
