How do I perform data augmentation in object localization - image-processing

Performing data augmentation for a classification task is easy, as most transforms do not change the ground-truth label of the image.
However, in the case of object localization:
The position of the bounding box is relative to the crop that has been taken.
There can be cases where the bounding box is only partially inside the crop window; do we perform some sort of clipping in this case?
There will also be cases where the object bounding boxes are not included in the crop at all; do we discard these examples during training?
I am unable to understand how such cases are handled in object localization. Most papers suggest the use of multi-scale training but don't address these issues.

The augmentation methods may have to alter the bounding box annotation as well as the pixels. In the case of color augmentations, the pixel distribution changes but the coordinates of the bounding box do not. In the case of geometric augmentations such as cropping or scaling, however, not only is the pixel distribution affected but also the coordinates of the bounding box. Those changes should be written back to the annotation files so the algorithm can read them.
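For the geometric cases the bookkeeping is usually just an intersection of each box with the crop window, followed by clipping or discarding. A minimal sketch of that logic (my own illustration, not from any particular library; the (xmin, ymin, xmax, ymax) box format and the min_visible_fraction threshold are assumptions):

```python
import random

def random_crop_with_boxes(img, boxes, crop_w, crop_h, min_visible_fraction=0.3):
    """Randomly crop `img` (H x W x C array) and adjust the boxes.

    `boxes` is a list of (xmin, ymin, xmax, ymax) in pixel coordinates.
    Boxes are clipped to the crop window; boxes whose visible area falls
    below `min_visible_fraction` of their original area are discarded.
    Assumes crop_w <= W and crop_h <= H.
    """
    h, w = img.shape[:2]
    x0 = random.randint(0, w - crop_w)
    y0 = random.randint(0, h - crop_h)
    crop = img[y0:y0 + crop_h, x0:x0 + crop_w]

    new_boxes = []
    for xmin, ymin, xmax, ymax in boxes:
        # Intersect the box with the crop window, then shift into crop coordinates.
        cx_min = max(xmin, x0) - x0
        cy_min = max(ymin, y0) - y0
        cx_max = min(xmax, x0 + crop_w) - x0
        cy_max = min(ymax, y0 + crop_h) - y0
        if cx_max <= cx_min or cy_max <= cy_min:
            continue  # box lies completely outside the crop: drop it
        visible = (cx_max - cx_min) * (cy_max - cy_min)
        original = (xmax - xmin) * (ymax - ymin)
        if visible / float(original) < min_visible_fraction:
            continue  # mostly cropped away: the "discard" case from the question
        new_boxes.append((cx_min, cy_min, cx_max, cy_max))
    return crop, new_boxes
```

If `new_boxes` comes back empty, you can either skip the sample or keep it as a pure background example; both conventions show up in practice.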
Custom scripts are common for solving this problem. However, in my repository I have a library that may help you: https://github.com/lozuwa/impy . With this library you can perform the operations I described previously.

Related

How to segment objects in an image without human interaction using OpenCV

I want to use OpenCV methods to segment images. I have come across the GrabCut algorithm, but this still requires human interaction, like drawing a box around an object.
So my question is: how do I use OpenCV to do segmentation automatically? Suggestions and code snippets in either C++ or Java are much appreciated.
UPDATE: I'm trying to segment food items from the plate and table.
Yes, grabCut requires human interaction, but we can minimize it. I have personally used the grabCut algorithm for segmenting faces from a given image, and it basically involves:
Detecting the face in the given image using a Haar cascade
Generating a probability mask, which helps you generate the markers required for the segmentation
The first part requires you to either use a pre-trained Haar cascade or create your own by providing sufficient training examples.
Once you have a working Haar cascade, you can use it to get an ROI for each input image. You may extend the ROI dimensions to include more space around the object.
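A minimal sketch of that detection step in Python, assuming the haarcascade_frontalface_default.xml file that ships with OpenCV and a padding factor I chose arbitrarily:

```python
import cv2

# Load a pretrained frontal-face cascade (bundled with the opencv-python package).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # Pad the ROI a little so the whole head (hair included) fits inside.
    pad = int(0.3 * w)
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, img.shape[1]), min(y + h + pad, img.shape[0])
    roi = img[y0:y1, x0:x1]
```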
At this step you should be able to crop the required object from the given input image, which reduces the search domain. Now you can create a probability mask, which indicates the probable location of the object within a given ROI. The previous steps were needed to normalize the input image, so we can assume the object location is fairly consistent w.r.t. the ROI. Here is a sample probability mask for male human hair:
Now you choose thresholds on the probability mask to split it into the four GrabCut classes:
if (pix > 220): mask = cv::GC_FGD
else if (pix > 170): mask = cv::GC_PR_FGD
else if (pix > 50): mask = cv::GC_PR_BGD
else: mask = cv::GC_BGD
Then you can pass this as the mask to perform the GrabCut segmentation.
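In Python that mask-seeded GrabCut call could look roughly like this (a sketch under the assumption that `prob` is a single-channel 0-255 probability mask aligned with the ROI; the thresholds simply mirror the ones above):

```python
import cv2
import numpy as np

def grabcut_from_probability(roi, prob, iterations=5):
    """Run GrabCut seeded from a probability mask instead of a user rectangle."""
    mask = np.full(prob.shape, cv2.GC_BGD, dtype=np.uint8)
    mask[prob > 50] = cv2.GC_PR_BGD    # probable background
    mask[prob > 170] = cv2.GC_PR_FGD   # probable foreground
    mask[prob > 220] = cv2.GC_FGD      # definite foreground

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(roi, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)

    # Keep definite + probable foreground as the final binary segmentation.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```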
However, there have been some recent advancements in semantic segmentation, such as the CRF-as-RNN technique, which segments objects from a given image and needs no such normalization step. But due to its dependency on a GPU for efficient execution, it is not suitable for mobile or low-end computer applications.

CNN Object Localization Preprocessing?

I'm trying to use a pretrained VGG16 as an object localizer in TensorFlow on ImageNet data. In their paper, the group mentions that they basically just strip off the softmax layer and toss on either a 4-D or a 4000-D fc layer for bounding box regression. I'm not trying to do anything fancy here (sliding windows, R-CNN), just get some mediocre results.
I'm sort of new to this, and I'm just confused about the preprocessing done here for localization. In the paper, they say that they scale the image so that its shortest side is 256, then take the central 224x224 crop and train on that. I've looked all over and can't find a simple explanation of how to handle the localization data.
Questions: How do people usually handle the bounding boxes here?...
Do you use something like the tf.sample_distorted_bounding_box command, and then rescale the image based on that?
Do you just rescale/crop the image itself, and then interpolate the bounding box with the transformed scales? Wouldn't this result in negative box coordinates in some cases?
How are multiple objects per image handled?
Do you just choose a single bounding box from the beginning, crop to that, and then train on this crop?
Or, do you feed it the whole (centrally cropped) image, and then try to predict 1 or more boxes somehow?
Does any of this generalize to the detection or segmentation challenges (like MS-COCO), or is it completely different?
Anything helps...
Thanks
Localization is usually performed as an intersection of sliding windows, where the network identifies the presence of the object you want in each window.
Generalizing that to multiple objects works the same way.
Segmentation is more complex. You can train your model on a pixel mask with your object filled in, and have it output a pixel mask of the same size.
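A very bare-bones version of that sliding-window idea (my own illustration; `classifier` stands for whatever network scores a patch, and the window size, stride, and threshold are arbitrary placeholders):

```python
def sliding_window_localize(image, classifier, win=224, stride=32, threshold=0.5):
    """Score every window with the classifier and keep the ones above threshold.

    `classifier(patch)` is assumed to return the probability that the object
    is present in the given win x win patch. Returns (x0, y0, x1, y1, score)
    boxes; in practice you would follow this with non-maximum suppression or
    take the intersection/union of the overlapping hits.
    """
    h, w = image.shape[:2]
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            score = classifier(patch)
            if score >= threshold:
                hits.append((x, y, x + win, y + win, float(score)))
    return hits
```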

Which algorithm to choose for object detection?

I am interested in detecting a single object, more precisely a fire extinguisher, which has no inter-class variability (all fire extinguishers look the same). However, the application is supposed to run in real time, i.e. a robot is exploring the environment, and whenever it sees the object of interest it should be able to detect it and give its pixel coordinates.
My question is: which algorithm would be a good choice for this task?
1. Is this a classification problem, and should we use features (SIFT/SURF etc.) + BoW + SVM?
2. Some other solution (no idea yet).
Any kind of input will be appreciated.
Thanks.
(P.S. Bear with me, I am a newbie to computer vision and Stack Overflow.)
Update 1:
The height varies; all the extinguishers are mounted on the wall but at different heights. I tried SIFT features and BoW, but it is expensive to extract the BoW descriptors at test time. Moreover, I have no idea how to locate the object (pixel coordinates) inside the image after it has been classified as positive.
Update 2:
I finally used SIFT + BoW + SVM and am able to classify the object. But using this technique, I only get an output telling me whether the object is present in the scene or not.
How can I detect the object, i.e. get the bounding box or the centre of the object? What approach is compatible with the above method for achieving these results?
Thank you all.
I would suggest using color as the main feature to look for, and only trying other features as needed. The fire-extinguisher red is very distinctive and should not occur too often elsewhere in an office environment. Other, more computationally expensive tests can then be performed only in regions of the right color.
Here is a good tutorial for color detection that also explains how to find good thresholds for your desired color.
I would suggest the following approach:
denoise your image with a median filter
convert the image to HSV format (Hue, Saturation, Value)
select pixels close to that particular shade of red with InRange()
Now you have a binary image that contains only the red pixels.
count the number of red pixels with CountNonZero()
If that number is too small, abort
remove noise from the binary image by morphological opening / closing
find contours of all blobs in your picture with findContours or the CvBlob library
check if there are blobs of the correct width, correct height and correct width/height ratio
since your fire extinguishers are vertical cylinders, the width/height ratio will be constant from every angle. The width and height will of course vary somewhat with distance to the camera.
if the width and height do not match, abort
repeat these steps to find the black-colored part on the bottom of the extinguisher,
abort if there is no black region with correct width/height below the red region
(perhaps also repeat these steps for the metallic top and the yellow rectangle)
These tests should all be very fast. If they are too slow, you could reduce the resolution of your input images.
Depending on your environment, it is possible that this is already a robust enough test. If not, you can proceed with SIFT/SURF feature matching, but only in a small region around the blobs with the correct color. You also do not necessarily have to do that for every frame; every n-th frame should be enough for confirmation.
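A rough sketch of this color-filtering pipeline in Python with OpenCV (the HSV ranges, minimum blob area, and aspect-ratio band are placeholders to be tuned for your camera and lighting):

```python
import cv2
import numpy as np

def find_red_candidates(frame):
    """Return bounding boxes of red, roughly extinguisher-shaped blobs."""
    blurred = cv2.medianBlur(frame, 5)                      # denoise
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)          # Hue, Saturation, Value

    # Red wraps around the hue axis, so combine two InRange masks.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

    if cv2.countNonZero(mask) < 500:                        # too few red pixels: abort
        return []

    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes

    # [-2] keeps this working across OpenCV 3.x and 4.x return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 1000:                                    # blob too small
            continue
        if 0.3 < w / float(h) < 0.7:                        # tall, narrow blob
            boxes.append((x, y, w, h))
    return boxes
```

The black base (and, if needed, the metallic top) can be checked the same way with a second mask over a low-value HSV range just below each red box.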
This is an old question, but I would still like to give my recommendation to use the YOLO algorithm to solve this problem.
YOLO fits this scenario very well.
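If you want to try that route today, here is a minimal sketch using the ultralytics Python package; the package choice, the yolov8n.pt starting weights, and the extinguisher.yaml dataset description are my own assumptions, not something specified in the answer:

```python
from ultralytics import YOLO

# Start from a small pretrained model and fine-tune it on images of
# fire extinguishers annotated with bounding boxes (assumed dataset file).
model = YOLO("yolov8n.pt")
model.train(data="extinguisher.yaml", epochs=50, imgsz=640)

# At run time, each detection gives a box from which the pixel
# coordinates of the object centre can be read off.
results = model("frame.jpg")
for box in results[0].boxes.xyxy:
    x0, y0, x1, y1 = box.tolist()
    print("centre:", ((x0 + x1) / 2, (y0 + y1) / 2))
```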

Image Registration by Manual marking of corresponding points using OpenCV

I have a processed binary image of dimensions 300x300. This processed image contains a few objects (person or vehicle).
I also have another RGB image of the same scene, of dimensions 640x480. It is taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the second image. I went through the OpenCV functions:
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All these functions require corresponding points (coordinates) in the two images.
In the first (binary) image, I only have the information that an object is present; it does not have features exactly similar to the second (RGB) image.
I thought that conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is not feasible, because I think I cannot determine and match features between a binary and an RGB image (am I right?).
If I am wrong, what features could I use, how should I proceed with feature matching, find corresponding points, and estimate the transformation parameters?
The solution I tried is more of a manual-marking approach to estimate the transformation parameters (please correct me if I am wrong):
Note: there is no movement of either camera.
Manually marked rectangles around objects in the processed (binary) image
Noted down the coordinates of the rectangles
Manually marked rectangles around objects in the second RGB image
Noted down the coordinates of the rectangles
Repeated the above steps for different samples of binary and RGB images
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the first image,
I draw a bounding box around it,
transform the coordinates of its vertices using the homography found above,
and finally draw a box in the second RGB image with the transformed coordinates as vertices.
But this doesn't place the box in the second RGB image exactly over the person/object; instead it is drawn somewhere else. Even though I took several sample pairs of binary and RGB images and used several corresponding points to estimate the transformation parameters, they do not seem to be accurate enough.
What do the CV_RANSAC and CV_LMEDS options and ransacReprojThreshold mean, and how do I use them?
Is my approach good? What should I modify/do to make the registration accurate?
Is there any alternative approach I could use?
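On the CV_RANSAC question: in the Python bindings a robust fit plus box transfer looks roughly like the sketch below. RANSAC discards point pairs whose reprojection error exceeds ransacReprojThreshold (in pixels) before fitting, CV_LMEDS is an alternative robust estimator; the value 3.0 is just a common default, not a recommendation specific to this setup:

```python
import cv2
import numpy as np

def fit_homography_and_project(src_pts, dst_pts, box):
    """Robustly fit a homography from manually marked point pairs and map a box.

    src_pts / dst_pts: N x 2 arrays of corresponding points in the binary
    image and the RGB image. box: (x, y, w, h) in the binary image.
    """
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)

    # method = RANSAC, ransacReprojThreshold = 3.0 pixels.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Map the four corners of the detected box into the RGB image.
    x, y, w, h = box
    corners = np.float32([[x, y], [x + w, y],
                          [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
    return H, inlier_mask, projected
```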
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features (or maybe you have this already?).
Identify features in the 2nd image
Match the features using OpenCV FlannBasedMatcher or similar
Highlight matching features in 2nd image or whatever you want to do.
I'd hope that because all your features in the first image should be positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
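A sketch of that pipeline using ORB keypoints with a brute-force matcher (FLANN would work similarly; the image paths and the 0.75 ratio-test value are placeholders):

```python
import cv2

img1 = cv2.imread("binary_view.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rgb_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with a Lowe-style ratio test to drop weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Visual check of the matches before trusting them for a homography.
vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
cv2.imwrite("matches.png", vis)
```

The point pairs behind `good` (via `kp1[m.queryIdx].pt` and `kp2[m.trainIdx].pt`) are exactly what the findHomography sketch above expects.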
It might be a little late to answer this and the asker might not see it, but if the first image is originally grayscale then this could be done:
1.) Convert the 2nd image to grayscale, giving gray2ndimg.
2.) Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.

OpenCV shape matching

I'm new to OpenCV (I'm actually using the Emgu CV C# wrapper) and am attempting to do some object detection.
I'm attempting to determine if an object matches a predefined set of objects (that I will have to define). The background is well lit and does not move. My objects that I am starting with are bottles and cans.
My current approach is:
Do an absDiff with a previously taken background image to separate the background.
Then dilate 4x to make the lighter areas (in the labels) shrink.
Then I do a binary threshold to get a big blob, followed by finding contours in this image.
I then take the largest contour and draw it, which becomes my shape to either save to the accepted set or compare with the accepted set.
Currently I'm using cvMatchShapes, but the double return value seems to vary widely. I'm guessing it is because it doesn't take rotation into account.
Is this approach a good one? It isn't working well for glass bottles since the edges are hard to find...
I've read about Haar classifiers, but I'm thinking that might be overkill for my task.
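For reference, the pipeline described above maps onto the cv2 Python API roughly as follows (the image paths, the threshold value, and the use of a single template image for the accepted set are placeholder assumptions):

```python
import cv2

def largest_contour(binary):
    # [-2] keeps this working across OpenCV 3.x and 4.x return conventions.
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    return max(contours, key=cv2.contourArea)

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("accepted_bottle.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(frame, background)                       # background subtraction
diff = cv2.dilate(diff, None, iterations=4)                 # the question's "dilate 4x" step
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)   # binary threshold -> big blob
_, tmask = cv2.threshold(template, 40, 255, cv2.THRESH_BINARY)

shape = largest_contour(mask)
accepted = largest_contour(tmask)

# Hu-moment comparison; smaller scores mean more similar shapes,
# and the measure is invariant to scale and rotation.
score = cv2.matchShapes(shape, accepted, cv2.CONTOURS_MATCH_I1, 0.0)
print("match score:", score)
```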
Maybe this link is useful too. You have the code and the library for SIFT; you just need to compile it. Good luck.
http://blogs.oregonstate.edu/hess/sift-library-places-2nd-in-acm-mm-10-ossc/#more-176
