Comparing each pixel from a given image to another pixel from another image with color - opencv

Can I compare a pixel at a given coordinate in the original image with the pixel at the same coordinate in a colored image, to check how the color at that position differs between the two images? Is there a way to do that?

You can take the absolute difference between the two images. The result will be the per-pixel difference.
cv::Mat first, second, result;
// first  = some image
// second = the other image
cv::absdiff(first, second, result);
EDIT:
The previous step gives you the difference map. You can now simply do this:
auto some_pixel_difference = result.at<cv::Vec3b>(cv::Point(x, y));
some_pixel_difference will contain the per-channel differences between the pixels at (x, y) in both images.
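For completeness, a minimal end-to-end sketch, assuming two same-sized 8-bit BGR images on disk (the file names and coordinates are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Load both images (file names are placeholders)
    cv::Mat first = cv::imread("original.png");
    cv::Mat second = cv::imread("colored.png");
    if (first.empty() || second.empty() || first.size() != second.size())
        return -1;

    // Per-pixel absolute difference
    cv::Mat result;
    cv::absdiff(first, second, result);

    // B, G and R differences at a given coordinate
    int x = 10, y = 20;
    cv::Vec3b diff = result.at<cv::Vec3b>(cv::Point(x, y));
    std::cout << (int)diff[0] << " " << (int)diff[1] << " " << (int)diff[2] << std::endl;
    return 0;
}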

Related

Normalize an image using the mean pixel value in a ROI

I want to normalize several images in ImageJ using the mean pixel value in a ROI, so that after normalization the mean in this ROI has the same value in all the images. How can I do it? Thanks
It is hard to say without a particular example, but a priori I would select the ROI and press Ctrl+M to measure the region. If it is a greyscale image you should obtain the mean of the grey pixels. You can then use this value to divide all the pixels in your image with the Divide function under the Process > Math menu. If you calculate the mean for each image and use that value to divide the corresponding image, the ROI should end up with the same mean value in all your pictures.
I hope it helps!
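The answer above works through ImageJ's menus; purely as an illustration of the same arithmetic, here is a minimal OpenCV/C++ sketch (the ROI coordinates and file name are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    // Load a greyscale image (file name is a placeholder)
    cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
    if (img.empty())
        return -1;

    // Mean pixel value inside the ROI (coordinates are placeholders)
    cv::Rect roi(50, 50, 100, 100);
    double roiMean = cv::mean(img(roi))[0];

    // Divide the whole image by the ROI mean, working in float,
    // so the ROI mean becomes 1.0 in every normalized image
    cv::Mat normalized;
    img.convertTo(normalized, CV_32F, 1.0 / roiMean);
    return 0;
}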

opencv - crop according to coloured points in an image

Given an image with, say, just two coloured points in it: is it possible to crop the image from the coordinates of the first coloured point to the coordinates of the second coloured point?
A sample image where I have to crop between two green points
This is possible, if the colored points have a distinct range of color when compared with the rest of the image.
Algorithm:
1. Convert the image to HSV color space
2. Scan the image, looking for pixels within the hue and saturation range of the colour(s) of the points.
3. Record minimum and maximum X,Y coordinates of the points that match.
4. Calculate the bounding box of the region using the coordinates.
5. Crop the image using the bounding box.
You can try to follow these steps and edit the question with code if/when you come up with errors. Uploading a sample image somewhere and linking to it will help us provide better answers.
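For reference, here is a rough sketch of those steps, assuming green markers; the HSV thresholds below are guesses that you would need to tune for your own image, and the file names are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.png");  // placeholder file name
    if (img.empty())
        return -1;

    // 1. Convert to HSV
    cv::Mat hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);

    // 2. Threshold on the hue/saturation range of the marker colour
    //    (rough guesses for green; tune them for your image)
    cv::Mat mask;
    cv::inRange(hsv, cv::Scalar(40, 100, 100), cv::Scalar(80, 255, 255), mask);

    // 3 + 4. Bounding box of all matching pixels
    std::vector<cv::Point> points;
    cv::findNonZero(mask, points);
    if (points.empty())
        return -1;
    cv::Rect box = cv::boundingRect(points);

    // 5. Crop
    cv::Mat cropped = img(box).clone();
    cv::imwrite("cropped.png", cropped);
    return 0;
}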

Disparity Map vs Left Image(reference image)

I'm really having a hard time matching every pixel of the image with its corresponding disparity value, as the disparity map is shifted much further to the right than the original image, and it looks more stretched than the original.
As you can see, the object in the original image is much thinner than in the disparity map. How can I fix this?
I also cannot figure out how to retrieve the original (x, y) coordinates of points in the original image when using reprojectImageTo3D.

How to overlay a picture with a given mask

I want to overlay an image on a given image. I have created a mask with an area where I can put this picture:
http://img560.imageshack.us/img560/1381/roih.jpg
The problem is that the white area contains a black area where I can't put objects.
How can I efficiently calculate where the subimage must be put? I know about functions like PointPolygonTest, but it takes very long.
EDIT:
The overlay image must be put somewhere in the white area, for example at the location of the blue rectangle.
http://img513.imageshack.us/img513/5756/roi2d.jpg
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to get valid regions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After erosion, all valid regions will be white.
The image you show, however, has no 200*200 region that is entirely white, so I may have misunderstood...
But if you want to find the region with the least black in the mask, you could apply a blur instead of an erosion and look for the maximum-intensity pixel in the blurred mask.
In both cases you want to insert the sub-image so that its centre is at the maximum-intensity pixel of the eroded/blurred mask.
Edit:
If you are interested in finding the region most distant from any black pixel in which to put the sub-image, you can define its centre as the location of the maximum of the distance transform of the mask.
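A minimal sketch of the erosion and distance-transform variants, assuming an 8-bit single-channel mask (white = allowed region) and, say, a 200*200 sub-image; the kernel size and file name are placeholders:

#include <opencv2/opencv.hpp>

int main()
{
    // Mask: 8-bit, single channel, white where the sub-image may go
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);  // placeholder file name
    if (mask.empty())
        return -1;

    // Variant 1: erode with a kernel the size of the sub-image;
    // any remaining white pixel is a valid centre position
    cv::Mat eroded;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(200, 200));
    cv::erode(mask, eroded, kernel);

    // Variant 2: distance transform; its maximum is the point
    // farthest from any black pixel
    cv::Mat dist;
    cv::distanceTransform(mask, dist, cv::DIST_L2, 3);

    // Centre of the sub-image = location of the maximum
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(dist, &minVal, &maxVal, &minLoc, &maxLoc);
    // maxLoc is the candidate centre position
    return 0;
}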
Good luck,

Creating contour and then perform pixel analysis (OpenCV)

I have an RGB image and a binary mask (1 channel), and I want to create contours for the RGB image based on the connected pixels of the binary mask. After that I want to compare the pixel values (e.g. check whether each pixel inside the contours has a blue value > 150). How can I implement this using OpenCV?
Thanks a lot!
Assuming the images are the same size and shape, simply scan over the pixels in the binary image looking for the contours, and check the pixel values at the same row/column in the colour image.
See Fastest way to extract individual pixel data? for details
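A minimal sketch of that idea, assuming a BGR image and an 8-bit binary mask of the same size; the file names are placeholders and the "blue > 150" test comes from the question:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("image.png");                        // placeholder file names
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
    if (img.empty() || mask.empty() || img.size() != mask.size())
        return -1;

    // Contours of the connected regions in the binary mask
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // For each contour, examine the colour pixels inside it
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Fill this contour into its own mask so only interior pixels are tested
        cv::Mat region = cv::Mat::zeros(mask.size(), CV_8UC1);
        cv::drawContours(region, contours, (int)i, cv::Scalar(255), cv::FILLED);

        int blueCount = 0;
        for (int y = 0; y < img.rows; ++y)
            for (int x = 0; x < img.cols; ++x)
                if (region.at<uchar>(y, x) && img.at<cv::Vec3b>(y, x)[0] > 150)  // channel 0 = blue in BGR
                    ++blueCount;

        std::cout << "contour " << i << ": " << blueCount << " pixels with blue > 150" << std::endl;
    }
    return 0;
}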
