TL;DR: How does the RoI cropping method from the Spatial Transformer Network work?
Reading the PyTorch Spatial Transformer Network tutorial, I saw that the network uses a special kind of RoI pooling I hadn't seen before, called RoI cropping.
The docs for F.affine_grid and F.grid_sample didn't explain much of what is happening there, so I tried reading the network's paper in the hope of understanding it, as well as a blog post on Faster R-CNN that elaborates on the method with pictures, but still no luck.
Every source seems to give different details, and I can't form a clear picture of what exactly is happening, even though I understand the usual RoI pooling and RoI align methods well.
Right now, this is the big picture in my head:
1. As usual, map the suggested RoI coordinates to the feature map space.
2. Normalize the coordinates to the range of [-1, 1] (I guess that's for the following affine transformation).
3. Calculate (using the method in the picture below) the transformation values.
4. Now, I assume we apply the transformation to the RoI pixels?
5. And finally, I assume we do an interpolation (e.g. bilinear interpolation) at the final coordinates.
Could someone briefly explain the whole process of the RoI cropping method? I feel I might be missing something.
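For reference, here is a minimal sketch of how I currently picture steps 2-5 with F.affine_grid/F.grid_sample (the feature-map shape, box coordinates and output size are made up for illustration; this is just my understanding, not a confirmed implementation):

import torch
import torch.nn.functional as F

# Made-up feature map and RoI, already mapped to feature-map space (step 1)
feat = torch.randn(1, 256, 50, 50)        # (N, C, H, W)
x1, y1, x2, y2 = 10.0, 5.0, 30.0, 25.0
H, W = feat.shape[2], feat.shape[3]
out_h, out_w = 7, 7                       # fixed output size, as in RoI pooling

# Step 2: express the box as a scale + translation in normalized [-1, 1] coordinates
theta = torch.tensor([[
    [(x2 - x1) / W, 0.0,            (x1 + x2) / W - 1.0],
    [0.0,           (y2 - y1) / H,  (y1 + y2) / H - 1.0],
]])

# Steps 3-4: generate the sampling grid that covers only the RoI
grid = F.affine_grid(theta, size=(1, feat.shape[1], out_h, out_w), align_corners=False)

# Step 5: bilinear interpolation of the feature map at the grid locations
roi = F.grid_sample(feat, grid, mode='bilinear', align_corners=False)
print(roi.shape)   # torch.Size([1, 256, 7, 7])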
Related
When reading about classic computer vision, I am confused about how multiscale feature matching works.
Suppose we use an image pyramid:
(1) How do you deal with the same feature being detected at multiple scales? How do you decide which one to make a descriptor for?
(2) How do you connect features between scales? For example, say a feature is detected and matched to a descriptor at scale 0.5. Is this location then translated back to its location at the initial scale?
I can share something about SIFT that might answer question (1) for you.
I'm not really sure what you mean in question (2) though, so could you please clarify?
SIFT (Scale-Invariant Feature Transform) was designed specifically to find features that remain identifiable across different image scales, rotations, and transformations.
When you run SIFT on an image of some object (e.g. a car), SIFT will try to create the same descriptor for the same feature (e.g. the license plate), no matter what image transformation you apply.
Ideally, SIFT will only produce a single descriptor for each feature in an image.
However, this obviously doesn't always happen in practice, as you can see in an OpenCV example here:
OpenCV illustrates each SIFT descriptor as a circle of different size. You can see many cases where the circles overlap. I assume this is what you meant in question (1) by "the same feature being detected at multiple scales".
And to my knowledge, SIFT doesn't really care about this issue. If by scaling the image enough you end up creating multiple descriptors from "the same feature", then those are distinct descriptors to SIFT.
During descriptor matching, you simply brute-force compare your list of descriptors, regardless of the scale each one was generated at, and try to find the closest match.
The whole point of SIFT as a function is to take in some image feature under different transformations and produce a similar numerical output at the end.
So if you do end up with multiple descriptors of the same feature, you'll just end up doing more computational work, but you will still essentially match the same pair of features across two images regardless.
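To make that concrete, here is a small sketch of the brute-force matching described above with OpenCV (image paths are placeholders; recent OpenCV builds expose SIFT as cv2.SIFT_create, older ones as cv2.xfeatures2d.SIFT_create):

import cv2

img1 = cv2.imread('scene_a.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('scene_b.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints come from all pyramid octaves
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force compare every descriptor pair, regardless of the scale each came from
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test filters ambiguous matches, including near-duplicates that
# originated from the same feature at neighbouring scales
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "matches kept")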
Edit:
If you are asking about how to convert coordinates from the scaled images in the image pyramid back into original image coordinates, then David Lowe's SIFT paper dedicates section 4 to that topic.
The naive approach would be to simply calculate the ratios of the scaled coordinates vs the scaled image dimensions, then extrapolate back to the original image coordinates and dimensions. However, this is inaccurate, and becomes increasingly so as you scale down an image.
Example: You start with a 1000x1000 pixel image, where a feature is located at coordinates (123,456). If you had scaled down the image to 100x100 pixel, then the scaled keypoint coordinate would be something like (12,46). Extrapolating back to the original coordinates naively would give the coordinates (120,460).
So SIFT fits a Taylor expansion of the Difference of Gaussian function to try and locate the original interesting keypoint down to sub-pixel accuracy, which you can then use to extrapolate back to the original image coordinates.
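For reference, the refinement in section 4 of Lowe's paper fits a quadratic Taylor expansion of the DoG function D around the sampled keypoint, D(x) ≈ D + (∂D/∂x)ᵀx + ½ xᵀ(∂²D/∂x²)x, and the sub-pixel offset is the extremum x̂ = -(∂²D/∂x²)⁻¹(∂D/∂x). That's my paraphrase of the paper, so check it against the original.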
Unfortunately, the math for this part is quite beyond me. But if you are fluent in math and C programming, and want to know specifically how SIFT is implemented, I suggest you dive into Rob Hess' SIFT implementation; lines 467 through 648 are probably as detailed as you can get.
I am working on a hand detection project. There are many good projects on the web that do this, but what I need is detection of a specific hand pose: a fully open palm with the whole palm facing outwards, like in the image below:
The first hand faces inwards, so it should not be detected, and the right one faces outwards, so it should be detected. I can already detect a hand with OpenCV, but how do I tell the hand's orientation?
The problem of matching the forehand belongs to texture classification; it's a classic pattern recognition problem. I suggest you try one of the following methods:
Gabor filters: they are good for detecting orientation and pixel intensities (the forehand has different features). OpenCV has a getGaborKernel function; its most important parameters are theta (orientation) and lambd (frequency). To keep things simple, you can apply this process on a cropped zone of the palm (since you have already detected it, it would be easy to crop, for example, the thumb or a rectangular zone around the center of gravity, etc.). Then you can convolve it with a small database of images of the same zone to get a matching rate, or you can use an SVM classifier, where you train the SVM on a set of images by constructing the training matrix it needs (check this question and this paper).
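A rough sketch of the Gabor idea, assuming you already have a cropped palm patch (the file name and parameter values below are placeholders to tune):

import cv2
import numpy as np

patch = cv2.imread('palm_crop.png', cv2.IMREAD_GRAYSCALE)

responses = []
for theta in np.arange(0, np.pi, np.pi / 4):       # four orientations
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0, ktype=cv2.CV_32F)
    kernel /= np.abs(kernel).sum()                 # rough normalisation
    responses.append(cv2.filter2D(patch, cv2.CV_32F, kernel))

# Mean/std of each filter response gives a small feature vector that could be
# compared against a template database or fed to an SVM
features = np.array([[r.mean(), r.std()] for r in responses]).ravel()
print(features)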
Local Binary Patterns (LBP): an important feature descriptor used for texture matching. You can apply it to the whole palm image or to a cropped zone or finger. It's easy to use in OpenCV, and a lot of tutorials with code are available for this method. I recommend you read this paper on Invariant Texture Classification with Local Binary Patterns; here is a good tutorial as well.
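A small sketch of the LBP descriptor on a palm crop (scikit-image used here for brevity; the file name and parameters are placeholders):

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

patch = cv2.imread('palm_crop.png', cv2.IMREAD_GRAYSCALE)

radius = 2
n_points = 8 * radius
lbp = local_binary_pattern(patch, n_points, radius, method='uniform')

# The normalised LBP histogram is the actual feature vector used for matching
hist, _ = np.histogram(lbp.ravel(), bins=n_points + 2, range=(0, n_points + 2))
hist = hist.astype(np.float32) / (hist.sum() + 1e-8)
print(hist)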
Haralick Texture: I've read that it works well when a set of features quantifies the entire image (global feature descriptors). It's not implemented in OpenCV but is easy to implement; check this useful tutorial.
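A hedged sketch of Haralick-style features via a grey-level co-occurrence matrix (scikit-image; older versions spell the functions greycomatrix/greycoprops, and the file name is a placeholder):

import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = cv2.imread('palm_crop.png', cv2.IMREAD_GRAYSCALE)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
# Average each property over the distances/angles to get one global descriptor
features = [graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
print(features)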
Training models: I've already suggested an SVM classifier to be coupled with one of these descriptors; that can work well.
OpenCV also has an interesting FaceRecognizer class for face recognition. It could be an interesting idea to use it, replacing the face images with palm ones (resize and rotate them to get a consistent palm pose). This class offers three models; one of them is Local Binary Pattern Histograms, which is recommended for texture recognition. And why not try the other models (Eigenfaces and Fisherfaces) as well? Check this tutorial.
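A sketch of that FaceRecognizer idea on palm images (requires opencv-contrib-python; the file names, labels and image size are placeholders):

import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

# Grayscale palm images resized to a common size;
# label 0 = palm facing outwards, label 1 = palm facing inwards
samples = [cv2.resize(cv2.imread(f, cv2.IMREAD_GRAYSCALE), (128, 128))
           for f in ('out1.png', 'out2.png', 'in1.png', 'in2.png')]
labels = np.array([0, 0, 1, 1])

recognizer.train(samples, labels)
label, confidence = recognizer.predict(
    cv2.resize(cv2.imread('test_palm.png', cv2.IMREAD_GRAYSCALE), (128, 128)))
print(label, confidence)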
Well, if you go for a MacGyver approach, you can notice that the left hand has bones sticking out in a certain direction, while the right hand shows all the finger lines and a few lines in the palm.
These lines are always more or less the same, so you could try to detect them with OpenCV edge detection or Hough lines. Due to the dark color of the lines, you might even be able to threshold them out. Then gather information from those lines, like angles and regressions, see which features you can collect, and train a simple decision tree.
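A sketch of that line-based idea with OpenCV edge detection and the probabilistic Hough transform (the file name and thresholds are placeholders to tune):

import cv2
import numpy as np

hand = cv2.imread('hand_crop.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(hand, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=5)

angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

# Simple statistics over the detected line angles could feed a decision tree
features = [len(angles), float(np.mean(angles or [0])), float(np.std(angles or [0]))]
print(features)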
That was assuming you do not have enough data; if you do, then go for deep learning: take a basic InceptionV3 model and retrain the last dense layer to classify between the two classes with a softmax, or to predict the probability of the hand being up/down with a sigmoid. Check this link; TensorFlow has your back on the training for this one, with ready-to-run code.
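A sketch of that transfer-learning setup in Keras (the data pipeline is omitted and the head is my own choice, not the exact tutorial code):

import tensorflow as tf

# Frozen InceptionV3 backbone with a new one-unit sigmoid head
base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet',
                                         input_shape=(299, 299, 3), pooling='avg')
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation='sigmoid'),   # probability of palm facing outwards
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)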
Questions? Ask away
Take a look at what Leap Motion has done with the Oculus Rift. I'm not sure what they're using internally to segment hand poses, but there is another paper that produces hand poses effectively. If you have a stereo camera setup, you can use the methods from this paper: https://arxiv.org/pdf/1610.07214.pdf.
The only promising solutions I've seen for a mono camera are trained on large datasets.
Use a Haar cascade classifier.
You can get the classifier model file and then use it as shown here.
Just search Google for 'Haar cascade palm detection' or use the code below.
import cv2

cam = cv2.VideoCapture(0)
# Pre-trained palm cascade; adjust the path to wherever you saved the XML file
ccfr2 = cv2.CascadeClassifier('haar-cascade-files-master/palm.xml')

while True:
    retval, image = cam.read()
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    palm = ccfr2.detectMultiScale(grey, scaleFactor=1.05, minNeighbors=3)
    for x, y, w, h in palm:
        image = cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 2)
    cv2.imshow("Window", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

cam.release()
Best of luck with your experiments using Haar cascades.
EDIT: I've now found this similar question with a very detailed answer:
proportions of a perspective-deformed rectangle
I'm using OpenCV's findHomography() and warpPerspective() methods to de-skew a photograph of a sheet of paper. I have this largely working, but I'm stuck on a detail.
The part I don't understand how to do is to calculate the optimum set of destination points to input to findHomography(). I know that I want my output to be rectangular, but I don't know the ratio of the width to the height of the rectangle. I also want the output rectangle to be sized such that there is minimal scaling of the output image when I apply the transform via warpPerspective(). All I have are the four points that form the quadrilateral I want to transform in the source image. How do I calculate an optimum-sized destination rectangle?
The findHomography() method needs at least four point correspondences (when using the Direct Linear Transform). If you want the optimal set, you need the 4-point set for which the DLT homography gives the minimum reprojection error. In other words, you need a method that detects inliers/outliers for the particular mathematical model of the DLT.
This method is RANSAC, and OpenCV has it implemented. You will find examples of findHomography() combined with RANSAC.
I personally find one problem with this: the number of RANSAC iterations in OpenCV is too high. If you are looking for optimal speed, you will have to dig into the code.
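A minimal sketch of findHomography() with RANSAC (the point arrays and file name are placeholders, e.g. matched points between the photo and the target rectangle):

import cv2
import numpy as np

src_pts = np.float32([[10, 12], [400, 15], [395, 290], [8, 300],
                      [205, 150], [120, 60], [330, 220], [50, 250]]).reshape(-1, 1, 2)
dst_pts = np.float32([[0, 0], [420, 0], [420, 297], [0, 297],
                      [210, 148], [118, 58], [335, 225], [48, 252]]).reshape(-1, 1, 2)

# The last argument is the reprojection threshold in pixels: points farther
# from the model than this are treated as outliers
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)

warped = cv2.warpPerspective(cv2.imread('sheet.jpg'), H, (420, 297))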
I am looking for an efficient way to detect the small boxes around the numbers (see images).
I already tried the Hough transform with no success. Any ideas? I need some hints! I am using OpenCV...
For inspiration, you can have a look at the
Matlab video sudoku solver demo and explanation
Sudoku Grab, an iPhone app, whose author explains the computer vision part on his blog
Alternatively, if you are always hunting for the same grid you could deploy something like this:
Make a perfect artificial template of the grid and detect or save all coordinates from all corners.
In the target image, do the same thing, for example with Harris points. Be creative; you might also be able to use the distinct triangles that can be found in your images.
Using the coordinates from the template and the found Harris points, determine the affine transformation x = Ax' between the template and the target image. That transformation can then be used to map the template grid onto the target image. At the very least this will give you some prior information to help guide further segmentation.
The gist of the idea and examples of estimating the affine matrix A can be found on the website of Zisserman's book Multiple View Geometry in Computer Vision and on Peter Kovesi's site.
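A sketch of step 3 in OpenCV terms, assuming you already have corresponding corner coordinates in the template and the target image (all point values below are placeholders):

import cv2
import numpy as np

template_pts = np.float32([[0, 0], [300, 0], [300, 200], [0, 200], [150, 100]])
image_pts    = np.float32([[12, 18], [310, 25], [305, 228], [8, 215], [160, 120]])

# Estimate the 2x3 affine matrix A with RANSAC to ignore bad correspondences
A, inliers = cv2.estimateAffine2D(template_pts, image_pts, method=cv2.RANSAC)

# Map some template grid corners into the target image with x = A x'
grid_corners = np.float32([[30, 40], [60, 40], [90, 40]])
ones = np.ones((grid_corners.shape[0], 1), dtype=np.float32)
mapped = np.hstack([grid_corners, ones]) @ A.T
print(mapped)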
I'd start by trying to detect the rectangular boundary of the overall sheet, then applying a perspective transform to make it truly rectangular. Crop that portion of the image out. If possible, then try to make the alternating white and grey sub-rectangles have an equal background brightness - maybe try adaptive histogram equalization.
Then the Hough transform might perform better. Alternatively, you could then take an approach that's broadly similar to this demonstration by Robert Bemis on MATLAB Central (it's analysing a DNA microarray image rather than Lotto cards, but it's essentially finding bounding boxes of items arranged in a grid). At a high level, the approach is to calculate the autocorrelation along columns and rows of pixels to detect the periodicity of the items in the grid, and use that to impose a bounding box on each item.
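In OpenCV/NumPy terms, the autocorrelation idea might look roughly like this (a sketch only; the file name is a placeholder and the peak picking is deliberately naive):

import cv2
import numpy as np

img = cv2.imread('rectified_card.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

def first_peak_lag(signal):
    # Autocorrelate a 1-D projection and return the lag of its first local maximum,
    # which should correspond roughly to the grid spacing
    signal = signal - signal.mean()
    ac = np.correlate(signal, signal, mode='full')[signal.size - 1:]
    for i in range(1, ac.size - 1):
        if ac[i] > ac[i - 1] and ac[i] > ac[i + 1]:
            return i
    return None

col_period = first_peak_lag(img.sum(axis=0))   # horizontal box spacing
row_period = first_peak_lag(img.sum(axis=1))   # vertical box spacing
print(col_period, row_period)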
Sorry the above advice is mostly MATLAB-based; I'm afraid I'm not an OpenCV user, but hopefully it will give you some ideas at least.
For a project I have to detect a pattern and track it in space despite rotation, noise, etc.
It's highlighted with IR light and recorded with an IR camera:
Picture: https://i.stack.imgur.com/RJuVS.png
As in this picture, it will only be a very simple shape, and we can choose which one we're going to use.
I need some direction on how to approach recognition of these shapes, please.
What I do currently is thresholding and erosion to get a cleaner shape and then a contour detection and a polygon approximation.
What should I do then? I tried Hu moments but they weren't good at all.
Could you please give me a global approach to recognize and track such pattern in space?
Can you choose which shape to project?
If so, I would recommend using a few concentric circles. Then, using the Hough transform for circles, you can easily find the center of the shape even when tracking is extremely hard (large movement/low frame rate).
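A sketch of the concentric-circles idea with cv2.HoughCircles (the frame source and all parameter values are placeholders to tune):

import cv2

frame = cv2.imread('ir_frame.png', cv2.IMREAD_GRAYSCALE)
frame = cv2.medianBlur(frame, 5)

circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                           param1=100, param2=30, minRadius=5, maxRadius=80)

if circles is not None:
    # Concentric circles share (nearly) the same centre, so averaging the
    # detected centres gives a robust estimate of the marker position
    centres = circles[0, :, :2]
    cx, cy = centres.mean(axis=0)
    print(cx, cy)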
If you must use a rectangular shape, then there is a good open source project which does that. It is part of a project to read street signs and auto-translate them.
Here is a link: http://code.google.com/p/signfinder/
This source is not large and it would be easy to cut out the relevant part.
It uses "good features to track" of openCV in module CornerFinder.
Hope it helped
It is possible; you need the following steps: thresholding the image, some morphological enhancement, blob extraction and normalization of blob size, blob shape analysis, and comparison of the analysis results with the pattern you want to track.
There are many methods for blob shape analysis. Simple methods: geometric dimensions, area, perimeter, circularity measurement, bit quads and others (for example, William K. Pratt, "Digital Image Processing", chapter 18). Complex methods: spatial moments, template matching, neural networks and others.
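As an illustration of the simple measurements, here is a sketch that thresholds the image, extracts blobs as contours, and computes area, perimeter and circularity for each (the file name and threshold value are placeholders):

import cv2
import numpy as np

img = cv2.imread('ir_frame.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# OpenCV 4.x return signature; OpenCV 3.x returns an extra first value
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    print(area, perimeter, circularity)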
In any event, it is very hard to answer exactly without knowledge of the pattern shapes you want to track.
Hope it helped.