I have a CvVideoCamera and I'm trying to detect the blue color in each frame, and the output frames should contain only the blue objects, like here. I'm doing this in the delegate method:
- (void)processImage:(cv::Mat&)image
{
cv::Mat bgrMat;
cvtColor(image, bgrMat, CV_BGRA2BGR);
// Convert color space to HSV
cv::Mat hsvMat;
cvtColor(bgrMat, hsvMat, CV_BGR2HSV);
// Threshold the HSV image
cv::Mat blueMask;
cv::Scalar lower_blue(110, 50, 50);
cv::Scalar upper_blue(130, 255, 255);
cv::inRange(hsvMat, lower_blue, upper_blue, blueMask);
bitwise_and(bgrMat, bgrMat, image, blueMask);
}
Original image:
Result:
The blue color detection seems to be working fine, but the final result is red instead of blue. Any ideas why? Am I using the bitwise_and correctly?
[Edit]
These lines do the trick:
cv::Mat output;
image.copyTo(output, blueMask);
output.copyTo(image);
instead of:
bitwise_and(bgrMat, bgrMat, image, blueMask);
Thanks to karlphillip for the suggestion. For some reason the bgrMat gets 'altered' along the way, so I'm using the original image instead.
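For completeness, here is the full body of the processImage: delegate with that fix in place (a minimal sketch, untested, using the same blue range as above):
// Inside - (void)processImage:(cv::Mat&)image
// Drop the alpha channel, then convert to HSV
cv::Mat bgrMat, hsvMat;
cv::cvtColor(image, bgrMat, CV_BGRA2BGR);
cv::cvtColor(bgrMat, hsvMat, CV_BGR2HSV);
// Threshold the HSV image to get a mask of the blue pixels
cv::Mat blueMask;
cv::inRange(hsvMat, cv::Scalar(110, 50, 50), cv::Scalar(130, 255, 255), blueMask);
// Copy only the masked pixels of the original frame, then write them back
cv::Mat output;
image.copyTo(output, blueMask);
output.copyTo(image);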
I think what you are trying to accomplish is to copy the pixels from the input image using a blue mask, right? Adjust your code at the end to:
cv::inRange(hsvMat, lower_blue, upper_blue, blueMask);
cv::Mat output;
bgrMat.copyTo(output, blueMask);
I have an input image that looks like this:
Notice that there are 6 boxes with black borders. I need to detect the location (upper-left hand corner) of each box. Normally I would use something like template matching, but the contents (the colored area inside the black border) of each box are distinct.
Is there a version of template matching that can be configured to ignore the inner area of each box? Is there an algorithm better suited to this situation?
Also note that I have to deal with several different resolutions... thus the actual size of the boxes will be different from image to image. That said, the ratio (length to width) will always be the same.
Real-world example/input image per request:
You can do this by finding the bounding box of each connected component.
To find the connected components you can convert the image to grayscale and keep only the pixels with value 0, i.e. the black borders of the rectangles.
Then you can find the contours of each connected component and compute its bounding box. Here are the red bounding boxes that were found:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
// Load the image, as BGR
Mat3b img = imread("path_to_image");
// Convert to gray scale
Mat1b gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
// Get binary mask
Mat1b binary = (gray == 0);
// Find contours of connected components
vector<vector<Point>> contours;
findContours(binary.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
// For each contour
for (int i = 0; i < contours.size(); ++i)
{
// Get the bounding box
Rect box = boundingRect(contours[i]);
// Draw the box on the original image in red
rectangle(img, box, Scalar(0, 0, 255), 5);
}
// Show result
imshow("Result", img);
waitKey();
return 0;
}
From the image posted in chat, this code produces:
In general, this code will correctly detect the cards, but it will also pick up noise. You just need to remove the noise according to some criteria, among others: the size or aspect ratio of the boxes, the colors inside the boxes, or some texture information.
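For example, a simple way to discard noise is to keep only the boxes whose area and aspect ratio are plausible for a card. A minimal sketch of such a filter, reusing the includes and namespaces of the code above (minArea, minRatio and maxRatio are hypothetical values you would tune for your images):
// Keep only boxes whose area and aspect ratio look like a card
vector<Rect> filterBoxes(const vector<Rect>& boxes,
                         double minArea = 1000.0,   // hypothetical, tune for your resolution
                         double minRatio = 1.2,     // hypothetical length-to-width bounds
                         double maxRatio = 1.8)
{
    vector<Rect> kept;
    for (size_t i = 0; i < boxes.size(); ++i)
    {
        const Rect& box = boxes[i];
        double shortSide = min(box.width, box.height);
        double longSide = max(box.width, box.height);
        if (shortSide <= 0) continue;                // skip degenerate boxes
        double ratio = longSide / shortSide;
        if (box.area() >= minArea && ratio >= minRatio && ratio <= maxRatio)
            kept.push_back(box);
    }
    return kept;
}
You would collect the boundingRect results into a vector first, then draw only the boxes this function returns.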
I want to compare two images and find the parts that are the same and the parts that are different. I tried the cv::compare and cv::absdiff methods, but I'm confused about which one is good for my case; both show me different results. So how can I achieve my desired task?
Here's an example of how you can use cv::absdiff to find image similarities:
int main()
{
cv::Mat input1 = cv::imread("../inputData/Similar1.png");
cv::Mat input2 = cv::imread("../inputData/Similar2.png");
cv::Mat diff;
cv::absdiff(input1, input2, diff);
cv::Mat diff1Channel;
// WARNING: this will weight channels differently! - instead you might want some different metric here. e.g. (R+B+G)/3 or MAX(R,G,B)
cv::cvtColor(diff, diff1Channel, CV_BGR2GRAY);
float threshold = 30; // pixel may differ only up to "threshold" to count as being "similar"
cv::Mat mask = diff1Channel < threshold;
cv::imshow("similar in both images" , mask);
// use similar regions in new image: Use black as background
cv::Mat similarRegions(input1.size(), input1.type(), cv::Scalar::all(0));
// copy masked area
input1.copyTo(similarRegions, mask);
cv::imshow("input1", input1);
cv::imshow("input2", input2);
cv::imshow("similar regions", similarRegions);
cv::imwrite("../outputData/Similar_result.png", similarRegions);
cv::waitKey(0);
return 0;
}
Using those 2 inputs:
You'll observe this output (black background):
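If you also want the regions that differ, you can reuse the same mask inverted (a small addition to the example above):
// Pixels whose difference exceeds the threshold are the "different" regions
cv::Mat differentRegions(input1.size(), input1.type(), cv::Scalar::all(0));
input1.copyTo(differentRegions, ~mask);   // ~mask selects pixels where the images differ
cv::imshow("different regions", differentRegions);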
Good day
I am trying to filter a video by subtracting some colors in a specified range,
but while the recorded scene is still and unchanged, the HSV-filtered image looks shaky and unstable.
This shaking or instability causes lots of problems in my processing.
Is there any way I can filter the image in a stable way?
This is a sample of my filter code (part of the code):
while (1)
{
//first frame read
cap.read(origonal1);
morphOps(origonal1);
cvtColor(origonal1, HSV1, COLOR_BGR2HSV);
inRange(HSV1, Scalar(0, 129,173), Scalar(26,212, 255), thresholdImage1);
waitKey(36);
//second image read and convert it to HSV
cap.read(origonal2);
morphOps(origonal2);
cvtColor(origonal2, HSV2, COLOR_BGR2HSV);
inRange(HSV2, Scalar(28, 89, 87), Scalar(93, 255, 255),thresholdImage2);
morphOps(thresholdImage1);
morphOps(thresholdImage2);
//create a mask so that i only detect motion of certain color range and don't
//care about other colors motion detection
maskImage = thresholdImage1 | thresholdImage2;
//make the difference between images
absdiff(thresholdImage1,thresholdImage2,imageDifference);
imageDifference = imageDifference&maskImage;
morphOps(imageDifference);
imshow("threshold Image", imageDifference);
//search for movement now update the origonal image
searchForMovement(thresholdImage1, origonal1);
imshow("origonal", origonal1);
imshow("HSV", HSV1);
imshow("threshold1", thresholdImage1);
imshow("threshold2", thresholdImage2);
//wait for a while give a break to the processor
//waitKey(1000);
}
Thanks in advance.
Try this function:
fastNlMeansDenoisingColored( frame, frame_result, 3, 3, 7, 21 );
It's too slow, but good for trying.
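A minimal sketch of how it could be plugged into your loop, using your existing variable names (it smooths each captured frame before the rest of the pipeline; the parameters are just the values from above and may need tuning, and it requires <opencv2/photo.hpp>):
// Denoise the captured frame before converting to HSV and thresholding
Mat denoised1;
cap.read(origonal1);
fastNlMeansDenoisingColored(origonal1, denoised1, 3, 3, 7, 21);   // smooth per-frame sensor noise
morphOps(denoised1);                                              // then the rest of the pipeline as before
cvtColor(denoised1, HSV1, COLOR_BGR2HSV);
inRange(HSV1, Scalar(0, 129, 173), Scalar(26, 212, 255), thresholdImage1);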
I would like to know what the problem is in the code below, since only part of the grayscale image appears in the binary output!
cv::Mat gry = cv::imread("image_gray.jpg");
cv::Mat bin(gry.size(), gry.type());
for (int i=0; i<gry.rows ;i++)
{
for (int j=0; j<gry.cols ;j++)
{
if (gry.at<uchar>(i,j)>=100)
bin.at<uchar>(i,j)=255;
else
bin.at<uchar>(i,j)=0;
}
}
cv::namedWindow("After", cv::WINDOW_AUTOSIZE);
cv::imshow("After",bin);
waitKey(0);
cvDestroyWindow( "After" );
imwrite("binary_image.bmp", bin);
Your problem is in cv::imread.
The function assumes it should load the image as a color image; if you want to load it as a grayscale image, you should call the function as follows:
cv::imread(fileName, CV_LOAD_IMAGE_GRAYSCALE)
By the way, the reason you only see part of the image is that the loaded image is simply bigger than one uchar per pixel (it has three channels), so you end up iterating over only part of it.
It would be easier if you used the OpenCV function:
cv::threshold(image_src, image_dst, 200, 255, cv::THRESH_BINARY);
This piece of code sets to white (255) all those pixels whose original value is greater than 200, and to black (0) all the rest.
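Putting both suggestions together, a minimal sketch of the corrected program could look like this (a threshold of 99 reproduces the original loop's >= 100 test, since THRESH_BINARY keeps only pixels strictly greater than the threshold):
cv::Mat gry = cv::imread("image_gray.jpg", CV_LOAD_IMAGE_GRAYSCALE); // load as a single-channel image
cv::Mat bin;
cv::threshold(gry, bin, 99, 255, cv::THRESH_BINARY); // pixels > 99 become 255, the rest 0
cv::namedWindow("After", cv::WINDOW_AUTOSIZE);
cv::imshow("After", bin);
cv::waitKey(0);
cv::imwrite("binary_image.bmp", bin);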
Let me start by saying that I'm still a beginner using OpenCV. Some things might seem obvious to others, and hopefully once I learn them they will become obvious to me too.
My goal is to use the floodFill feature to generate a separate image containing only the filled area. I have looked into this post but I'm a bit lost on how to convert the filled mask into an actual BGRA image with the filled color. Besides that I also need to crop the newly filled image to contain only the filled area. I'm guessing OpenCV has some magical function that could do the trick.
Here is what I'm trying to achieve:
Original image:
Filled image:
Filled area only:
UPDATE 07/07/13
Was able to do a fill on a separate image using the following code. However, I still need to figure out the best approach to get only the filled area. Also, my floodfill solution has an issue with filling an image that contains alpha values...
static int floodFillImage (cv::Mat &image, int premultiplied, int x, int y, int color)
{
cv::Mat out;
// un-multiply color
unmultiplyRGBA2BGRA(image);
// convert to no alpha
cv::cvtColor(image, out, CV_BGRA2BGR);
// create our mask
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
// floodfill the mask
cv::floodFill(
out,
mask,
cv::Point(x,y),
255,
0,
cv::Scalar(),
cv::Scalar(),
+ (255 << 8) + cv::FLOODFILL_MASK_ONLY);
// set new image color
cv::Mat newImage(image.size(), image.type());
cv::Mat maskedImage(image.size(), image.type());
// set the solid color we will mask out of
newImage = cv::Scalar(ARGB_BLUE(color), ARGB_GREEN(color), ARGB_RED(color), ARGB_ALPHA(color));
// crop the 2 extra pixels w and h that were given before
cv::Mat maskROI = mask(cv::Rect(1,1,image.cols,image.rows));
// mask the solid color we want into new image
newImage.copyTo(maskedImage, maskROI);
// pre multiply the colors
premultiplyBGRA2RGBA(maskedImage, image);
return 0;
}
You can take the difference of those two images to get the pixels that differ.
Pixels with no difference will be zero, and the others will have positive values.
cv::Mat A, B, C;
A = getImageA();
B = getImageB();
C = A - B;
Handle negative values if they can occur (I presume they can't in your case; for 8-bit images the subtraction saturates, so negative results are clamped to 0).
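To then crop just the filled area (the last step of the question above), you could threshold that difference and take the bounding rectangle of its non-zero pixels. A rough sketch, assuming two 8-bit BGR images (convert from BGRA first if needed) and the same hypothetical getImageA/getImageB helpers, using cv::absdiff so the operand order does not matter:
cv::Mat original = getImageA();   // image before the flood fill
cv::Mat filled   = getImageB();   // image after the flood fill
// Absolute difference: non-zero wherever the fill changed a pixel
cv::Mat diff, diffGray;
cv::absdiff(original, filled, diff);
cv::cvtColor(diff, diffGray, CV_BGR2GRAY);
cv::Mat changed = diffGray > 0;
// Bounding rectangle of all changed pixels, then crop the filled image to it
std::vector<cv::Point> changedPoints;
cv::findNonZero(changed, changedPoints);
cv::Rect roi = cv::boundingRect(changedPoints);
cv::Mat filledAreaOnly = filled(roi).clone();
If the two images can be identical, check that changedPoints is not empty before calling boundingRect.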