How to obtain the floodfilled area? - opencv

Let me start by saying that I'm still a beginner with OpenCV. Some things might seem obvious to others; hopefully, once I learn them, they will become obvious to me too.
My goal is to use the floodFill feature to generate a separate image containing only the filled area. I have looked into this post, but I'm a bit lost on how to convert the filled mask into an actual BGRA image with the fill color. Besides that, I also need to crop the newly filled image so that it contains only the filled area. I'm guessing OpenCV has some magical function that can do the trick.
Here is what I'm trying to achieve:
Original image:
Filled image:
Filled area only:
UPDATE 07/07/13
I was able to do a fill on a separate image using the following code. However, I still need to figure out the best approach to get only the filled area. Also, my floodFill solution has an issue filling an image that contains alpha values...
static int floodFillImage(cv::Mat &image, int premultiplied, int x, int y, int color)
{
    cv::Mat out;
    // un-multiply color
    unmultiplyRGBA2BGRA(image);
    // convert to a 3-channel image without alpha
    cv::cvtColor(image, out, CV_BGRA2BGR);
    // create our mask (2 pixels larger than the image, as floodFill requires)
    cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
    // floodfill the mask only: 4-connectivity, write 255 into filled mask pixels
    cv::floodFill(
        out,
        mask,
        cv::Point(x, y),
        255,
        0,
        cv::Scalar(),
        cv::Scalar(),
        4 | (255 << 8) | cv::FLOODFILL_MASK_ONLY);
    // set new image color
    cv::Mat newImage(image.size(), image.type());
    cv::Mat maskedImage(image.size(), image.type());
    // set the solid color we will mask out of
    newImage = cv::Scalar(ARGB_BLUE(color), ARGB_GREEN(color), ARGB_RED(color), ARGB_ALPHA(color));
    // crop away the extra 1-pixel border that was added to the mask above
    cv::Mat maskROI = mask(cv::Rect(1, 1, image.cols, image.rows));
    // copy the solid color into the new image wherever the mask is set
    newImage.copyTo(maskedImage, maskROI);
    // pre-multiply the colors
    premultiplyBGRA2RGBA(maskedImage, image);
    return 0;
}
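For the remaining crop step, one idea (an untested sketch, reusing maskROI and maskedImage from the function above) is to take the bounding box of the filled mask pixels:
// collect the coordinates of all mask pixels that were filled
std::vector<cv::Point> filledPoints;
cv::findNonZero(maskROI, filledPoints);
// bounding box of the filled area
cv::Rect box = cv::boundingRect(filledPoints);
// crop the colorized image to just the filled area
cv::Mat filledOnly = maskedImage(box).clone();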

You can take the difference of the two images to get the differing pixels: pixels with no difference will be zero, and the others will have a positive value.
cv::Mat A, B, C;
A = getImageA();
B = getImageB();
C = A - B;
Handle negative values if they can occur (I presume they can't in your case).
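Note that for unsigned Mat types the subtraction saturates at zero rather than going negative; a minimal sketch with cv::absdiff avoids that concern entirely:
cv::Mat A = getImageA(); // placeholders from the answer above
cv::Mat B = getImageB();
cv::Mat C;
cv::absdiff(A, B, C); // |A - B|, no negative values to worry about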

Related

Detecting small circles on a game minimap

I have been stuck on this problem for about 20 hours.
The quality is not very good because, on a 1080p video, the minimap is less than 300×300 px.
I want to detect the 10 hero circles in this image:
Like this:
For background removal, I can use this:
The hero portrait circle radii are between 8 and 12 px, because a hero portrait is about 21×21 px.
With this code:
Mat minimapMat = Imgcodecs.imread("minimap.png");
Mat minimapCleanMat = Imgcodecs.imread("minimapClean.png");
Mat minimapDiffMat = new Mat();
Core.subtract(minimapMat, minimapCleanMat, minimapDiffMat);
I obtain this:
Now I apply circle detection to it:
findCircles(minimapDiffMat);
public static void findCircles(Mat imgSrc) {
    Mat img = imgSrc.clone();
    Mat gray = new Mat();
    Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
    Imgproc.blur(gray, gray, new Size(3, 3));
    Mat edges = new Mat();
    int lowThreshold = 40;
    int ratio = 3;
    Imgproc.Canny(gray, edges, lowThreshold, lowThreshold * ratio);
    Mat circles = new Mat();
    // dp=1, minDist=10, param1=5, param2=20, minRadius=7, maxRadius=15
    Imgproc.HoughCircles(edges, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 10, 5, 20, 7, 15);
    for (int i = 0; i < circles.rows(); i++) {
        for (int k = 0; k < circles.cols(); k++) {
            double[] data = circles.get(i, k);
            double x = data[0];
            double y = data[1];
            int r = (int) data[2];
            Point center = new Point(x, y);
            // circle center
            Imgproc.circle(img, center, 3, new Scalar(0, 255, 0), -1);
            // circle outline
            Imgproc.circle(img, center, r, new Scalar(0, 255, 0), 1);
        }
    }
    HighGui.imshow("cirleIn", img);
}
The results are not OK, detecting only 2 out of 10:
I have tried with a KNN background subtractor too:
With less success.
Any tips? Thanks a lot in advance.
The problem is that your minimap contains highlighted parts (possibly around active players), rendering your background removal inoperable. Why not threshold the highlighted colors out of the image? From what I see there are just a few of them. I do not use OpenCV, so I gave it a shot in C++; here is the result:
int x,y;
color c0,c1,c;
picture pic0,pic1,pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs,pic0.ys);
pic2.resize(pic0.xs,pic0.ys);
// process all pixels
for (y = 0; y < pic2.ys; y++)
 for (x = 0; x < pic2.xs; x++)
    {
    // get both colors without alpha
    c0.dd = pic0.p[y][x].dd & 0x00FFFFFF;
    c1.dd = pic1.p[y][x].dd & 0x00FFFFFF; c = c1;
    // threshold 0xAARRGGBB distance^2
    if (distance2(c1, color(0x00EEEEEE)) < 2000) c.dd = 0; // white-ish rectangle
    if (distance2(c1, color(0x00889971)) < 2000) c.dd = 0; // gray-ish path
    if (distance2(c1, color(0x005A6443)) < 2000) c.dd = 0; // gray-ish path
    if (distance2(c1, color(0x0021A2C2)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, color(0x002A6D70)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, color(0x00439D96)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, c0) < 2500) c.dd = 0;                // close to background
    pic2.p[y][x] = c;
    }
pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth(); // blur a little
pic2.save("out1.png");
pic2.threshold(0,80,765,0x00000000); // set dark pixels (<80) to black (0) and rest to white (3*255)
pic2.pixel_format(_pf_rgba);// convert back to RGB
pic2.save("out2.png");
So you need to find the OpenCV counterparts to this. The thresholds are squared color distances (so I do not need sqrt), and it looks like 50^2 is ideal for per-channel RGB values in <0,255>.
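For reference, a rough OpenCV counterpart to that per-pixel squared-distance thresholding could look like the sketch below (my own translation, not the answerer's code; removeBackground, map, and background are assumed names, with both inputs CV_8UC3 and the same size):
#include <opencv2/opencv.hpp>

// squared Euclidean distance between two BGR pixels
static int dist2(const cv::Vec3b &a, const cv::Vec3b &b)
{
    int d0 = a[0] - b[0], d1 = a[1] - b[1], d2 = a[2] - b[2];
    return d0 * d0 + d1 * d1 + d2 * d2;
}

cv::Mat removeBackground(const cv::Mat &map, const cv::Mat &background)
{
    // key colors to suppress, converted from the 0x00RRGGBB constants above to BGR
    const cv::Vec3b keyColors[6] = {
        cv::Vec3b(0xEE, 0xEE, 0xEE), // white-ish rectangle
        cv::Vec3b(0x71, 0x99, 0x88), // gray-ish path
        cv::Vec3b(0x43, 0x64, 0x5A), // gray-ish path
        cv::Vec3b(0xC2, 0xA2, 0x21), // aqua water
        cv::Vec3b(0x70, 0x6D, 0x2A), // aqua water
        cv::Vec3b(0x96, 0x9D, 0x43)  // aqua water
    };
    cv::Mat out = map.clone();
    for (int y = 0; y < out.rows; y++)
        for (int x = 0; x < out.cols; x++)
        {
            const cv::Vec3b &c = map.at<cv::Vec3b>(y, x);
            // close to the clean background?
            bool drop = dist2(c, background.at<cv::Vec3b>(y, x)) < 2500;
            // close to one of the key colors?
            for (int k = 0; k < 6; k++)
                if (dist2(c, keyColors[k]) < 2000) drop = true;
            if (drop) out.at<cv::Vec3b>(y, x) = cv::Vec3b(0, 0, 0);
        }
    return out;
}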
I use my own picture class for images so some members are:
xs,ys is size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) clears entire image with color
resize(xs,ys) resizes image to new resolution
bmp is VCL encapsulated GDI Bitmap with Canvas access
pf holds actual pixel format of the image:
enum _pixel_format_enum
    {
    _pf_none = 0, // undefined
    _pf_rgba,     // 32 bit RGBA
    _pf_s,        // 32 bit signed int
    _pf_u,        // 32 bit unsigned int
    _pf_ss,       // 2x16 bit signed int
    _pf_uu,       // 2x16 bit unsigned int
    _pixel_format_enum_end
    };
color and pixels are encoded like this:
union color
    {
    DWORD dd; WORD dw[2]; byte db[4];
    int i; short int ii[2];
    color() {}
    color(color &a) { *this = a; }
    ~color() {}
    color* operator = (const color *a) { dd = a->dd; return this; }
    /* color* operator = (const color &a) { ...copy... return this; } */
    };
The bands are:
enum
    {
    _x = 0, // dw
    _y = 1,
    _b = 0, // db
    _g = 1,
    _r = 2,
    _a = 3,
    _v = 0, // db
    _s = 1,
    _h = 2,
    };
Here is also the squared distance between colors that I used for thresholding:
DWORD distance2(color &a, color &b)
    {
    DWORD d, dd;
    d = DWORD(a.db[0]) - DWORD(b.db[0]); dd  = d * d;
    d = DWORD(a.db[1]) - DWORD(b.db[1]); dd += d * d;
    d = DWORD(a.db[2]) - DWORD(b.db[2]); dd += d * d;
    d = DWORD(a.db[3]) - DWORD(b.db[3]); dd += d * d;
    return dd;
    }
As input I used your images:
pic0:
pic1:
And here the (sub) results:
out0.png:
out1.png:
out2.png:
Now just remove noise (by blurring or by erosion) a bit and apply your circle fitting or hough transform...
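In OpenCV terms, that cleanup might be a small morphological open plus a blur (a sketch under the assumption that the thresholded result is a CV_8UC1 Mat called mask):
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel); // erosion then dilation removes small speckles
cv::GaussianBlur(mask, mask, cv::Size(3, 3), 0);      // slight blur before circle fitting / Hough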
[Edit1] circle detector
I gave it a bit of thought and implemented a simple detector. I just check the circumference points around every pixel position at a constant radius (the player circle), and if the number of set points is above a threshold, I have found a potential circle. That works better than using the whole disc area, as some of the players contain holes and there would also be more pixels to test... Then I average circles that are close together and render the output... Here is the updated code:
int i, j, x, y, xx, yy, x0, y0, r = 10, d;
List<int> cxy; // circle circumference points
List<int> plr; // player { x,y } list
color c0, c1, c;
picture pic0, pic1, pic2;
// pic0 - source background
// pic1 - source map
// pic2 - output
// ensure all images are the same size
pic1.resize(pic0.xs, pic0.ys);
pic2.resize(pic0.xs, pic0.ys);
// process all pixels
for (y = 0; y < pic2.ys; y++)
 for (x = 0; x < pic2.xs; x++)
    {
    // get both colors without alpha
    c0.dd = pic0.p[y][x].dd & 0x00FFFFFF;
    c1.dd = pic1.p[y][x].dd & 0x00FFFFFF; c = c1;
    // threshold 0xAARRGGBB distance^2
    if (distance2(c1, color(0x00EEEEEE)) < 2000) c.dd = 0; // white-ish rectangle
    if (distance2(c1, color(0x00889971)) < 2000) c.dd = 0; // gray-ish path
    if (distance2(c1, color(0x005A6443)) < 2000) c.dd = 0; // gray-ish path
    if (distance2(c1, color(0x0021A2C2)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, color(0x002A6D70)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, color(0x00439D96)) < 2000) c.dd = 0; // aqua water
    if (distance2(c1, c0) < 2500) c.dd = 0;                // close to background
    pic2.p[y][x] = c;
    }
// pic2.save("out0.png");
pic2.pixel_format(_pf_u); // convert to gray scale
pic2.smooth();            // blur a little
// pic2.save("out1.png");
pic2.threshold(0, 80, 765, 0x00000000); // set dark pixels (<80) to black (0) and the rest to white (3*255)
// compute the player circle circumference points mask
x0 = r - 1; y0 = r; x0 *= x0; y0 *= y0;
for (x = -r, xx = x * x; x <= r; x++, xx = x * x)
 for (y = -r, yy = y * y; y <= r; y++, yy = y * y)
    {
    d = xx + yy;
    if ((d >= x0) && (d <= y0))
        {
        cxy.add(x);
        cxy.add(y);
        }
    }
// get all potential player circles
x0 = (5 * cxy.num) / 20;
for (y = r; y < pic2.ys - r; y += 2) // no need to step by a single pixel ...
 for (x = r; x < pic2.xs - r; x += 2)
    {
    for (d = 0, i = 0; i < cxy.num;)
        {
        xx = x + cxy.dat[i]; i++;
        yy = y + cxy.dat[i]; i++;
        if (pic2.p[yy][xx].dd > 100) d++;
        }
    if (d >= x0) { plr.add(x); plr.add(y); }
    }
// pic2.pixel_format(_pf_rgba); // convert back to RGB
// pic2.save("out2.png");
// average all circles that are too close together
pic2 = pic1; // use the original image again
pic2.bmp->Canvas->Pen->Color = TColor(0x0000FF00);
pic2.bmp->Canvas->Pen->Width = 3;
pic2.bmp->Canvas->Brush->Style = bsClear;
for (i = 0; i < plr.num; i += 2) if (plr.dat[i] >= 0)
    {
    x0 = plr.dat[i + 0]; x = x0;
    y0 = plr.dat[i + 1]; y = y0; d = 1;
    for (j = i + 2; j < plr.num; j += 2) if (plr.dat[j] >= 0)
        {
        xx = plr.dat[j + 0];
        yy = plr.dat[j + 1];
        if (((x0 - xx) * (x0 - xx)) + ((y0 - yy) * (y0 - yy)) * 10 <= 20 * r * r) // if close
            {
            x += xx; y += yy; d++; // add to the average
            plr.dat[j + 0] = -1;   // mark as deleted
            plr.dat[j + 1] = -1;
            }
        }
    x /= d; y /= d;
    plr.dat[i + 0] = x;
    plr.dat[i + 1] = y;
    pic2.bmp->Canvas->Ellipse(x - r, y - r, x + r, y + r);
    }
pic2.bmp->Canvas->Pen->Width = 1;
pic2.bmp->Canvas->Brush->Style = bsSolid;
// pic2.save("out3.png");
As you can see, the core of the code is the same; I just added the detector at the end.
I also use my own dynamic list template, so:
List<double> xxx; is the same as double xxx[];
xxx.add(5); adds 5 to end of the list
xxx[7] access array element (safe)
xxx.dat[7] access array element (unsafe but fast direct access)
xxx.num is the actual used size of the array
xxx.reset() clears the array and set xxx.num=0
xxx.allocate(100) preallocate space for 100 items
And here the final result out3.png:
As you can see, it is a bit messed up when the players are very near each other (due to the circle averaging); with some tweaking you might get better results. But on second thought, it might be due to that small red circle nearby...
I used VCL/GDI to render the circles, so just ignore/port the pic2.bmp->Canvas-> stuff to whatever you use.
As the populated image is lighter in the blue areas around the heroes, your background subtraction is of virtually no use.
I tried to improve by applying a gain of 3 to the clean image before subtraction and here is the result.
The background has disappeared, but the outlines of the heroes are severely damaged.
I looked at your case with other approaches, and I consider it a very difficult one.
What I do when I want to do image processing is first open the image in a paint editor (I use Gimp). Then I manipulate the image until I end up with something that defines the parts I want to detect.
Generally, RGB is bad for a lot of computer vision tasks, and making it gray scale solves only a part of the problem.
A good start is trying to decompose the image to HSL instead.
Doing so on the first image, and only looking at the Hue channel gives me this:
Several of the blobs are quite well defined.
Playing a bit with the contrast and brightness of the Hue and Luminance layers and multiplying them gives me this:
It enhances the ring around the markers, which might be useful.
These methods all have corresponding functionality in OpenCV.
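As a rough sketch of how that exploration might translate to OpenCV (my assumption, not tested against these exact images; OpenCV's HLS conversion is the closest match to HSL):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("minimap.png"); // filename assumed
    cv::Mat hls;
    cv::cvtColor(img, hls, cv::COLOR_BGR2HLS);
    std::vector<cv::Mat> ch;
    cv::split(hls, ch); // ch[0] = hue, ch[1] = luminance, ch[2] = saturation
    // stretch contrast, then multiply the hue and luminance layers
    cv::Mat hue, lum, product;
    cv::normalize(ch[0], hue, 0, 255, cv::NORM_MINMAX);
    cv::normalize(ch[1], lum, 0, 255, cv::NORM_MINMAX);
    cv::multiply(hue, lum, product, 1.0 / 255.0); // scale keeps the result in 8-bit range
    cv::imshow("hue", hue);
    cv::imshow("hue * luminance", product);
    cv::waitKey(0);
    return 0;
}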
It's a tricky task and you will most likely require several different filters and techniques to succeed. Hope this helps a bit. Good luck.

Efficiently tell if one image is entirely comprised of the pixel values of another in OpenCV

I am trying to find an efficient way to see if one image is a subset of another (meaning that each unique pixel value in one image is also found in the other). Neither the repetition nor the ordering of the pixels matters.
I am working in Java, so I would like all of my operations to be completed in OpenCV for efficiency's sake.
My first idea was to export a list of unique pixel values, and compare it to the list from the second image.
As there is no built-in function to extract unique pixels, I abandoned this approach.
I also understand that I can find the locations of a particular color with the inclusive inRange, and findNonZero operations.
Core.inRange(image, color, color, tempMat); // inclusive
Core.findNonZero(tempMat, colorLocations);
Unfortunately, this does not provide an adequate answer, as it would need to be executed once per color and would still require extracting unique pixels.
Essentially, I'm asking if there is a clever way to use the built in OpenCV functions to see if an image is comprised of the pixels found in another image.
I understand that this will not work for slight color differences. I am working on a limited dataset, and care about the exact pixel values.
To put the question more mathematically:
Because the only thing you are interested in is the pixel values, I would suggest the following:
Compute the histogram of image 1 using hist1 = calcHist()
Compute the histogram of image 2 using hist2 = calcHist()
Calculate the difference vector diff = hist1 - hist2
Check whether each bin of the histogram of the sub-image is less than or equal to the corresponding bin in the histogram of the bigger image
Thanks to Miki for the fix.
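A minimal C++ sketch of this idea, reduced to the presence check that the accepted solution below ends up using (single-channel 8-bit images assumed; the 3-channel case needs the packed-value trick from the Java code):
#include <opencv2/opencv.hpp>

// true if every pixel value occurring in `sub` also occurs in `super`
bool isValueSubset(const cv::Mat &sub, const cv::Mat &super)
{
    int histSize = 256;
    float range[] = {0.f, 256.f};
    const float *ranges[] = {range};
    int channels[] = {0};
    cv::Mat histSub, histSuper;
    cv::calcHist(&sub, 1, channels, cv::Mat(), histSub, 1, &histSize, ranges);
    cv::calcHist(&super, 1, channels, cv::Mat(), histSuper, 1, &histSize, ranges);
    for (int i = 0; i < histSize; i++)
        if (histSub.at<float>(i) > 0 && histSuper.at<float>(i) == 0)
            return false; // value present in `sub` but missing from `super`
    return true;
}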
I will keep Amitay's as the accepted answer, as he absolutely led me down the correct path. I wanted to also share my exact answer for anyone who finds this in the future.
As I stated in my question, I was looking for an efficient way to see if the RGB values of one image were a subset of the RGB values of another image.
I made a function to the following specification:
The Java code is as follows:
private boolean isSubset(Mat subset, Mat subMask, Mat superset) {
    // Get the unique set of pixels from both images
    subset = getUniquePixels(subset, subMask);
    superset = getUniquePixels(superset, null);
    // See if the superset pixels encapsulate the subset pixels:
    // OR the unique pixels together
    Mat subOrSuper = new Mat();
    Core.bitwise_or(subset, superset, subOrSuper);
    // See if the ORed matrix is equal to the superset
    Mat notEqualMat = new Mat();
    Core.compare(superset, subOrSuper, notEqualMat, Core.CMP_NE);
    return Core.countNonZero(notEqualMat) == 0;
}
subset and superset are assumed to be CV_8UC3 matrices, while subMask is assumed to be CV_8UC1.
private Mat getUniquePixels(Mat img, Mat mask) {
    if (mask == null) {
        mask = new Mat();
    }
    // int bgrValue = (b << 16) + (g << 8) + r;
    img.convertTo(img, CvType.CV_32FC3);
    Vector<Mat> splitImg = new Vector<>();
    Core.split(img, splitImg);
    Mat flatImg = Mat.zeros(img.rows(), img.cols(), CvType.CV_32FC1);
    Mat multiplier;
    for (int i = 0; i < splitImg.size(); i++) {
        multiplier = Mat.ones(img.rows(), img.cols(), CvType.CV_32FC1);
        // set powTwo to 2^(8*i), i.e. 1, 256, 65536
        int powTwo = (1 << (8 * i));
        // Set the multiplier matrix equal to powTwo
        Core.multiply(multiplier, new Scalar(powTwo), multiplier);
        // n << (8*i) == n * 2^(8*i);
        // I'm shifting the RGB values into separate parts of the same
        // 32-bit integer.
        Core.multiply(multiplier, splitImg.get(i), splitImg.get(i));
        // Add the shifted RGB components together.
        Core.add(flatImg, splitImg.get(i), flatImg);
    }
    // Create a histogram of the pixel values.
    List<Mat> images = new ArrayList<>();
    images.add(flatImg);
    MatOfInt channels = new MatOfInt(0);
    Mat hist = new Mat();
    // 16777216 == 256*256*256
    MatOfInt histSize = new MatOfInt(16777216);
    MatOfFloat ranges = new MatOfFloat(0f, 16777216f);
    Imgproc.calcHist(images, channels, mask, hist, histSize, ranges);
    Mat uniquePixels = new Mat();
    Core.inRange(hist, new Scalar(1), new Scalar(Float.MAX_VALUE), uniquePixels);
    return uniquePixels;
}
Please feel free to ask questions, or point out problems!

How to get similarties and differences between two images using Opencv

I want to compare two images and find the same and different parts of the images. I tried the cv::compare and cv::absdiff methods but was confused about which one is good for my case. Both show me different results. So how can I achieve my desired task?
Here's an example of how you can use cv::absdiff to find image similarities:
int main()
{
    cv::Mat input1 = cv::imread("../inputData/Similar1.png");
    cv::Mat input2 = cv::imread("../inputData/Similar2.png");
    cv::Mat diff;
    cv::absdiff(input1, input2, diff);
    cv::Mat diff1Channel;
    // WARNING: this will weight the channels differently! Instead you might want
    // some different metric here, e.g. (R+G+B)/3 or MAX(R,G,B).
    cv::cvtColor(diff, diff1Channel, CV_BGR2GRAY);
    float threshold = 30; // a pixel may differ only up to "threshold" to count as "similar"
    cv::Mat mask = diff1Channel < threshold;
    cv::imshow("similar in both images", mask);
    // use the similar regions in a new image: black as background
    cv::Mat similarRegions(input1.size(), input1.type(), cv::Scalar::all(0));
    // copy the masked area
    input1.copyTo(similarRegions, mask);
    cv::imshow("input1", input1);
    cv::imshow("input2", input2);
    cv::imshow("similar regions", similarRegions);
    cv::imwrite("../outputData/Similar_result.png", similarRegions);
    cv::waitKey(0);
    return 0;
}
Using those 2 inputs:
You'll observe this output (black background):
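For the differing parts the question also asks about, the complementary mask from the same diff image works (a short sketch continuing the code above):
// pixels that differ by at least "threshold" in the single-channel diff
cv::Mat diffMask = diff1Channel >= threshold;
cv::Mat differentRegions(input1.size(), input1.type(), cv::Scalar::all(0));
input1.copyTo(differentRegions, diffMask);
cv::imshow("different regions", differentRegions);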

Colorizing image ignores alpha channel — why and how to fix?

Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (e.g., alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that you can take that will work is to construct a mask from the original image and then invoke the CGContextClipToMask() method before rendering your image with the multiply blend mode set. Here is the CoreGraphics code that would set the mask before drawing the image to color.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake( 0.0f, 0.0f, width, height );
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
The slightly more tricky part will be to take the original image and generate a maskImage. What you can do for that is write a loop that will examine each pixel and write either a black or white pixel as the mask value. If the original pixel in the image to color is completely transparent, then write a black pixel, otherwise write a white pixel. Note that the mask value will be a 24BPP image. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t*) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;
for (int rowi = 0; rowi < height; rowi++) {
for (int coli = 0; coli < width; coli++) {
uint32_t inPixel = *inPixels++;
uint32_t inAlpha = (inPixel >> 24) & 0xFF;
uint32_t cval = 0;
if (inAlpha != 0) {
cval = 0xFF;
}
uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
*maskPixelsPtr++ = outPixel;
}
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is to simply create your own mask to filter out drawing of the red parts around the outside of the circle.

OpenCV C++/Obj-C: goodFeaturesToTrack inside specific blob

Is there a quick solution to specify the ROI only within the contours of the blob I'm interested in?
My ideas so far:
Using the boundingRect, but it contains too much stuff I don't want to analyse.
Applying goodFeaturesToTrack to the whole image and then looping through the output coordinates to eliminate the ones outside my blob's contour
Thanks in advance!
EDIT
I found what I need: cv::pointPolygonTest() seems to be the right thing, but I'm not sure how to implement it …
Here's some code:
// ...
IplImage forground_ipl = result;
IplImage *labelImg = cvCreateImage(forground.size(), IPL_DEPTH_LABEL, 1);
CvBlobs blobs;
bool found = cvb::cvLabel(&forground_ipl, labelImg, blobs);
IplImage *imgOut = cvCreateImage(cvGetSize(&forground_ipl), IPL_DEPTH_8U, 3);
if (found) {
    cvb::CvBlob *greaterBlob = blobs[cvb::cvGreaterBlob(blobs)];
    cvb::cvRenderBlob(labelImg, greaterBlob, &forground_ipl, imgOut);
    CvContourPolygon *polygon = cvConvertChainCodesToPolygon(&greaterBlob->contour);
}
"polygon" contains the contour I need.
goodFeaturesToTrack is implemented this way:
- (std::vector<cv::Point2f>)pointsFromGoodFeaturesToTrack:(cv::Mat &)_image
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(_image, corners, 100, 0.01, 10);
    return corners;
}
So next I need to loop through the corners and check each point with cv::pointPolygonTest(), right?
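Something like this, I assume (a sketch; contour stands for the polygon above converted to a std::vector<cv::Point>):
// keep only the corners that fall inside the blob contour
std::vector<cv::Point2f> insideCorners;
for (size_t i = 0; i < corners.size(); i++)
{
    // measureDist=false: returns +1 inside, 0 on the edge, -1 outside
    if (cv::pointPolygonTest(contour, corners[i], false) >= 0)
        insideCorners.push_back(corners[i]);
}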
You can create a mask over your interest region:
EDIT
How to make a mask:
Make a mask:
Mat mask(origImg.size(), CV_8UC1);
mask.setTo(Scalar::all(0));
// here I assume your contour is extracted with findContours,
// and is stored in a vector<vector<Point>>,
// and that you know which contour is the blob;
// if that's not the case, use fillPoly instead of drawContours()
Scalar color(255, 255, 255); // white. actually, it's single-channel.
drawContours(mask, contours, contourIdx, color);
// fillPoly(Mat& img, const Point** pts, const int* npts,
//          int ncontours, const Scalar& color)
And now you're ready to use it. BUT, look carefully at the result: I have heard about some bugs in OpenCV regarding the mask parameter for feature extractors, and I am not sure whether this one is affected.
// note the mask parameter:
void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners,
                         double qualityLevel, double minDistance,
                         InputArray mask=noArray(), int blockSize=3,
                         bool useHarrisDetector=false, double k=0.04 )
This will also improve the speed of your application: goodFeaturesToTrack eats a huge amount of time, and if you apply it only to a smaller area, the overall gain is significant.
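A quick usage sketch (assuming gray is your grayscale input image and mask is the one built above):
std::vector<cv::Point2f> corners;
// detection is now restricted to pixels where mask != 0
cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10, mask);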
