I want to use OpenCV methods to segment images. I have come across the GrabCut algorithm, but it still requires human interaction, like drawing a box around the object.
So my question is: how do I use OpenCV to do segmentation automatically? Suggestions and code snippets in either C++ or Java are much appreciated.
UPDATE: I'm trying to segment food items from the plate and the table.
Yes, grabCut requires human interaction, but we can minimize it. I have personally used the grabCut algorithm for segmenting faces from a given image, which basically involves:
Detecting the face in the given image using a Haar cascade
Generating a probability mask, which helps you generate the markers required for the segmentation.
The first part requires you to either use a pre-trained Haar cascade or train your own by providing sufficient training examples.
Once you have a working Haar cascade, you can use it to get an ROI for each input image. You may extend the ROI dimensions to include more space around the object.
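For reference, here is a minimal C++ sketch of this step; the cascade file and image paths are just placeholder examples:

    #include <opencv2/opencv.hpp>

    int main() {
        // Placeholder paths: substitute your own cascade file and input image.
        cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
        cv::Mat img = cv::imread("input.jpg");

        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 3);

        for (cv::Rect roi : faces) {
            // Enlarge the ROI by ~30% on each side, then clamp it to the image borders.
            int dx = roi.width * 3 / 10, dy = roi.height * 3 / 10;
            roi = cv::Rect(roi.x - dx, roi.y - dy, roi.width + 2 * dx, roi.height + 2 * dy)
                  & cv::Rect(0, 0, img.cols, img.rows);
            cv::Mat cropped = img(roi).clone();  // the normalized input for the next steps
        }
        return 0;
    }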
At this step you should be able to crop the required object from the given input image, which reduces the search domain. Now you can create a probability mask, which indicates the probable location of the object within a given ROI. The previous steps were necessary to normalize the input image: since the input is now always normalized, the object location is fairly consistent w.r.t. the ROI. Here is a sample probability mask for male human hair:
Now you choose 4 thresholds to create the mask for grabCut as:
    if      (pix > 220) mask = cv::GC_FGD;     // definite foreground
    else if (pix > 170) mask = cv::GC_PR_FGD;  // probable foreground
    else if (pix > 50)  mask = cv::GC_PR_BGD;  // probable background
    else                mask = cv::GC_BGD;     // definite background
Then you can pass this as the mask to perform the grabCut segmentation.
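A minimal sketch of that call, assuming img is the cropped ROI (CV_8UC3) and mask is the CV_8UC1 matrix built from the thresholds above:

    cv::Mat bgdModel, fgdModel;  // grabCut fills these internally
    cv::grabCut(img, mask, cv::Rect(), bgdModel, fgdModel, 5, cv::GC_INIT_WITH_MASK);

    // Keep the definite + probable foreground as the final segmentation.
    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::Mat segmented;
    img.copyTo(segmented, fg);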
However, there have been some recent advancements in semantic segmentation, such as the CRF-as-RNN technique, which segments objects from a given image and needs no such normalization step. But due to its dependency on a GPU for efficient execution, it is not suitable for mobile or low-end computer applications.
Given an image of many items, with all of their bounding boxes known in pixel coordinates.
I am trying to extract a surrounding region around each of the items and compute its keypoints and descriptors using AKAZE, in order to compare them with one another.
However I realised that this might be too slow, since it involves:
1) cropping each and every single item to generate many images then,
2) detecting and computing on each image to generate the keypoints and descriptors.
Alternatively, to speed things up, I was thinking of:
1) Resizing the entire image, then performing the detection and computation of keypoints once.
2) Then, to obtain the keypoints of a particular object, we simply retrieve the set of precalculated keypoints corresponding to the object's location, as sketched below.
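For concreteness, here is a rough sketch of what I mean in step 2 (the function and variable names are just placeholders, assuming the usual OpenCV includes):

    // Keep only the precomputed keypoints (and their descriptor rows) that fall
    // inside one object's bounding box.
    void keypointsForObject(const std::vector<cv::KeyPoint>& allKps,
                            const cv::Mat& allDescs,   // one row per keypoint
                            const cv::Rect& objectBox,
                            std::vector<cv::KeyPoint>& objKps,
                            cv::Mat& objDescs) {
        for (size_t i = 0; i < allKps.size(); ++i) {
            if (objectBox.contains(allKps[i].pt)) {
                objKps.push_back(allKps[i]);
                objDescs.push_back(allDescs.row(static_cast<int>(i)));
            }
        }
    }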
My question is: is this method functionally sound, and are there any consequences to it?
Yes this second strategy is a fine way to go. To do this efficiently you should supply a mask argument in the call to OpenCV's detectAndCompute (or detect if you're using that). Your mask should be the same size as your image. In each pixel of the mask you would have zero for that pixel if it does not lie within at least one detection region, otherwise its value is positive (255 for a uchar mask).
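A minimal sketch of building such a mask and passing it to AKAZE, assuming the detections are available as cv::Rects (the names here are placeholders):

    #include <opencv2/opencv.hpp>

    // image: the full input image; boxes: the known bounding boxes (possibly enlarged)
    std::vector<cv::KeyPoint> detectInRegions(const cv::Mat& image,
                                              const std::vector<cv::Rect>& boxes,
                                              cv::Mat& descriptors) {
        // 255 inside every detection region, 0 elsewhere.
        cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
        for (const cv::Rect& box : boxes)
            mask(box & cv::Rect(0, 0, image.cols, image.rows)).setTo(cv::Scalar(255));

        cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
        std::vector<cv::KeyPoint> keypoints;
        akaze->detectAndCompute(image, mask, keypoints, descriptors);
        return keypoints;
    }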
In fact with the first strategy you can have a problem at the borders of your detection regions, where feature points can be missed. This is because feature detection and descriptor computation require processing a small window of pixels around each pixel (which are not available at the borders). To correctly handle this you would need to enlarge the detection regions before cropping.
Concerning efficiency, you should be aware that there is an overhead with the second approach, which is that the full image will undergo some image pre-processing before feature detection. For AKAZE this is nonlinear diffusion, and for others such as SIFT and SURF this is image convolution. These are needed to build so-called image pyramids. In situations where you only have a few detections, the first strategy can be more efficient (the overhead of image cropping is tiny relative to the image pre-processing).
I'm trying to use a pretrained VGG16 as an object localizer in TensorFlow on ImageNet data. In their paper, the group mentions that they basically just strip off the softmax layer and toss on either a 4-D or a 4000-D fc layer for bounding box regression. I'm not trying to do anything fancy here (sliding windows, RCNN), just get some mediocre results.
I'm sort of new to this and I'm just confused about the preprocessing done here for localization. In the paper, they say that they scale the image to 256 as its shortest side, then take the central 224x224 crop and train on this. I've looked all over and can't find a simple explanation on how to handle localization data.
Questions: How do people usually handle the bounding boxes here?...
Do you use something like the tf.sample_distorted_bounding_box command, and then rescale the image based on that?
Do you just rescale/crop the image itself, and then interpolate the bounding box with the transformed scales? Wouldn't this result in negative box coordinates in some cases?
How are multiple objects per image handled?
Do you just choose a single bounding box from the beginning, crop to that, and then train on this crop?
Or, do you feed it the whole (centrally cropped) image, and then try to predict 1 or more boxes somehow?
Does any of this generalize to the detection or segmentation challenges (like MS-COCO), or is it completely different?
Anything helps...
Thanks
Localization is usually performed by sliding windows over the image and intersecting the windows in which the network identifies the presence of the object you want.
Generalizing that to multiple objects works the same.
Segmentation is more complex. You can train your model on a pixel mask with your object filled in, and it tries to output a pixel mask of the same size.
I was reading about image segmentation, and I understood that it is the first step in image analysis. But I also read that if I am using SURF or SIFT to detect and extract features there is no need for segmentation. Is that true? Is there a need for segmentation if I am using SURF?
The dependency between segmentation and recognition is a bit more complex. Clearly, knowing which pixels of the image belong to your object makes recognition easier. However, this relationship also works in the other direction: knowing what is in the image makes it easier to do segmentation. That said, for simplicity, I will only speak about a simple pipeline where segmentation is performed first (for instance based on some simple color model) and each of the segments is then processed.
Your question specifically asks about the SURF features. However, in this context, what is important is that SURF is a local descriptor, i.e. it describes small image patches around detected keypoints. Keypoints should be points in the image where information relevant to your recognition problem can be found (interesting parts of the image), but also points that can reliably be detected in a repeatable fashion on all images of objects belonging to the class of interest. As a result, a local descriptor only cares about the pixels around points selected by the keypoint detector and for each such keypoint extracts a small feature vector. On the other hand a global descriptor will consider all pixels within some area, typically a segment, or the whole image.
Therefore, to perform recognition in an image using a global descriptor, you need to first select the area (segment) from which you want your features to be extracted. These features would then be used to recognize the content of the segment. The situation is a bit different with a local descriptor, since it describes local patches that the keypoint detector determines as relevant. As a result, you get multiple feature vectors for multiple points in the image, even if you do not perform segmentation. Each of these feature vectors tells you something about the content of the image, and you can try to assign each such local feature vector to a "class" and gather their statistics to understand the content of the image. Such a simple model is called the Bag-of-Words model.
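As a rough illustration of that last idea, here is a hedged sketch that builds a small visual vocabulary with OpenCV; SIFT is used here because BOWKMeansTrainer expects float descriptors, and the file names are placeholders:

    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    int main() {
        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();  // available in OpenCV >= 4.4
        cv::BOWKMeansTrainer bowTrainer(100);         // 100 visual words

        for (const std::string& path : {std::string("img1.jpg"), std::string("img2.jpg")}) {
            cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);
            std::vector<cv::KeyPoint> keypoints;
            cv::Mat descriptors;
            sift->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
            if (!descriptors.empty())
                bowTrainer.add(descriptors);
        }

        cv::Mat vocabulary = bowTrainer.cluster();  // k-means over all local descriptors
        // A cv::BOWImgDescriptorExtractor can then turn each image into a histogram
        // of visual words, i.e. the Bag-of-Words representation mentioned above.
        return 0;
    }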
I have some conceptual issues in understanding the SURF and SIFT algorithms (All about SURF). As far as my understanding goes, SURF finds the Laplacian of Gaussians and SIFT operates on the Difference of Gaussians. It then constructs a 64-variable vector around the keypoint to extract the features. I have applied this CODE.
(Q1) So, what forms the features?
(Q2) We initialize the algorithm using SurfFeatureDetector detector(500). So, does this mean that the size of the feature space is 500?
(Q3) The output of SURF Good_Matches gives matches between Keypoint1 and Keypoint2, and by tuning the number of matches we can conclude whether the object has been found/detected or not. What is meant by KeyPoints? Do these store the features?
(Q4) I need to do an object recognition application. In the code, it appears that the algorithm can recognize the book, so it can be applied for object recognition. I was under the impression that SURF can be used to differentiate objects based on color and shape. But SURF and SIFT are based on corner/edge detection, so there is no point in using color images as training samples, since they will be converted to grayscale. There is no option of using colors or HSV in these algorithms, unless I compute the keypoints for each channel separately, which is a different area of research (Evaluating Color Descriptors for Object and Scene Recognition).
So, how can I detect and recognize objects based on their color and shape? I think I can use SURF for differentiating objects based on their shape. Say, for instance, I have 2 books and a bottle, and I need to recognize only a single book out of all the objects. But as soon as there are other similarly shaped objects in the scene, SURF gives lots of false positives. I would appreciate suggestions on what methods to apply for my application.
1. The local maxima (a response of the DoG which is greater (or smaller) than the responses of the neighbouring pixels around the point, in the current, upper and lower images of the pyramid -- a 3x3x3 neighbourhood) form the coordinates of the feature (circle) centre. The radius of the circle is the level of the pyramid.
2. It is the Hessian threshold. It means that you only take maxima (see 1) with values bigger than the threshold. A bigger threshold leads to fewer features, but those features are more stable, and vice versa.
3. Keypoint == feature. In OpenCV, KeyPoint is the structure used to store features.
4. No, SURF is good for comparing textured objects, but not for shape and color. For shape I recommend using MSER (but not the OpenCV one) or the Canny edge detector rather than local features. This presentation might be useful.
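For reference, a minimal sketch of the OpenCV calls mentioned in 4; note the answer actually prefers a non-OpenCV MSER, so treat this only as an API illustration with placeholder file names:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("object.jpg", cv::IMREAD_GRAYSCALE);

        // Canny edges as a shape cue
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);

        // MSER regions and their bounding boxes
        cv::Ptr<cv::MSER> mser = cv::MSER::create();
        std::vector<std::vector<cv::Point>> regions;
        std::vector<cv::Rect> boxes;
        mser->detectRegions(gray, regions, boxes);
        return 0;
    }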
In my project I want to crop the ROI of an image. For this I create a map of the regions of interest. Now I want to crop the area which has the most important pixels (black is not important, white is important).
Does anyone have an idea how to realize this? I think this is a maximization problem.
The red border in the image below is an example of how I want to crop this image.
If I understood your question correctly, you have computed a value at every point in the image. These values suggest the "importance"/"interestingness"/"saliency" of each point. The matrix/image containing these values is the "map" you are referring to. Your goal is to get the bounding box for regions of interest (ROI) with a high "importance" score.
The way I think you can go about segmenting the ROIs is to apply a Graph Cut based segmentation, computing a "score" at each pixel using your importance map. The result of the segmentation is a binary mask that masks the "important" pixels. Next, run OpenCV's findContours function on this binary mask to get the individual connected components. Then use OpenCV's boundingRect function on the contours returned by findContours(...) to get the bounding boxes.
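A short sketch of the findContours/boundingRect part, assuming binaryMask is the CV_8UC1 mask produced by the segmentation:

    cv::Mat maskCopy = binaryMask.clone();  // older OpenCV versions modify findContours' input
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(maskCopy, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> rois;
    for (const std::vector<cv::Point>& contour : contours) {
        cv::Rect box = cv::boundingRect(contour);
        if (box.area() > 100)  // optionally drop tiny fragments; the threshold is arbitrary
            rois.push_back(box);
    }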
The good thing about using a Graph Cut based segmentation algorithm in this way is that it will join up fragmented components i.e. the resulting binary mask will tend not to have pockets of small holes even if your "importance" map is noisy.
One Graph Cut based segmentation algorithm already implemented in OpenCV is the GrabCut algorithm. A quick hack would be to apply it on your "importance" map to get the binary mask I mentioned above. A more sophisticated approach would be to build the foreground and background (color perhaps?) model using your "importance" map and passing it as input to the function. More details on GrabCut in OpenCV can be found here: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=grabcut#void grabCut(InputArray img, InputOutputArray mask, Rect rect, InputOutputArray bgdModel, InputOutputArray fgdModel, int iterCount, int mode)
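One way the quick hack could look, assuming importance is a CV_8UC1 map aligned with the original colour image img (the thresholds are arbitrary examples):

    // Turn the importance map into a grabCut initialization mask.
    cv::Mat gcMask(importance.size(), CV_8UC1, cv::Scalar(cv::GC_BGD));
    gcMask.setTo(cv::Scalar(cv::GC_PR_BGD), importance > 64);
    gcMask.setTo(cv::Scalar(cv::GC_PR_FGD), importance > 128);
    gcMask.setTo(cv::Scalar(cv::GC_FGD),    importance > 200);

    cv::Mat bgdModel, fgdModel;
    cv::grabCut(img, gcMask, cv::Rect(), bgdModel, fgdModel, 5, cv::GC_INIT_WITH_MASK);

    // Binary mask of "important" pixels, ready for findContours(...) as described above.
    cv::Mat binaryMask = (gcMask == cv::GC_FGD) | (gcMask == cv::GC_PR_FGD);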
If you would like greater flexibility, you can hack your own graphcut based segmentation algorithm using the following MRF library. This library allows you to specify your custom objective function in computing the graph cut: http://vision.middlebury.edu/MRF/code/
To use the MRF library, you will need to specify the "cost" at each point in your image indicating whether that point is "foreground" or "background". You can also think of this dichotomy as "important" or "not important" instead of "foreground" vs "background".
The MRF library's goal is to return you a label at each point such that the total cost of assigning those labels is as small as possible. Hence, the game is to come up with a function that computes a small cost for points you consider important and a large cost otherwise.
Specifically, the cost at each point is composed of 2 parts: 1) The data term/function and 2) The smoothness term/function. As mentioned earlier, the smaller the data term at each point, the more likely that point will be selected. If your "importance" score s_ij is in the range [0, 1], then a common way to compute your data term would be -log(s_ij).
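As a library-agnostic illustration (the actual API of the MRF library differs), the data costs could be filled in roughly like this, with a small epsilon added to avoid log(0):

    // importance: CV_32FC1 map with scores s_ij in [0, 1]
    // dataCostFg / dataCostBg: per-pixel costs for labelling a pixel "important" / "not important"
    const float eps = 1e-6f;
    cv::Mat dataCostFg(importance.size(), CV_32FC1);
    cv::Mat dataCostBg(importance.size(), CV_32FC1);
    for (int y = 0; y < importance.rows; ++y) {
        for (int x = 0; x < importance.cols; ++x) {
            float s = importance.at<float>(y, x);
            dataCostFg.at<float>(y, x) = -std::log(s + eps);         // cheap where the score is high
            dataCostBg.at<float>(y, x) = -std::log(1.0f - s + eps);  // cheap where the score is low
        }
    }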
The smoothness term is a way to suggest whether 2 neighboring pixels p, q should have the same label, i.e. both "foreground", both "background", or one "foreground" and the other "background". Similar to the data cost, you have to construct it such that the cost is small for neighboring pixels having similar "importance" scores, so that they will be assigned the same label. This term is responsible for "smoothing" the resulting mask so that you will not have pixels of low "importance" sprinkled within regions of high "importance" and vice versa. If there are such regions, OpenCV's findContours(...) function mentioned above will return contours for these regions, which can be filtered out, perhaps by checking their size.
Details on functions to compute the cost can be found in the GrabCut paper: GrabCut
This blog post provides a bit more detail (and code) on creating your own graphcut segmentation algorithm in OpenCV: http://www.morethantechnical.com/2010/05/05/bust-out-your-own-graphcut-based-image-segmentation-with-opencv-w-code/
Another paper showing how to perform graph cut segmentation on grayscale images (your case), with better notations, and without the complicated image matting part (not implemented in OpenCV's version) in the GrabCut paper is this: Graph Cuts and Efficient N-D Image Segmentation
Hope this helps.