How does the Viola-Jones face detector work for multiple sizes of faces? - haar-wavelet

I am implementing a Viola-Jones face detector to detect faces in still images, and it works pretty well for images of the same size as my training images. However, I do not understand how the face detector handles faces of multiple sizes.
If my training images are 24*24 and I want to detect faces in a 30*30 detector window, how do I need to rescale the Haar features so that they work for the 30*30 detector window with the same threshold?
One more thing: does the position of a Haar feature also change with a different detector window size, and if so, how?

Say you're representing a rectangle found inside a Haar wavelet with x, y, w and h variables, where x and y are the top left corner of the rectangle relative to the detector's top left boundary, w is its width and h its height. You can rescale each Haar wavelet rectangle of the whole detector by a factor s with the following pseudo-code:
for all rectangles i in the Haar wavelet do
    tempRectangle = rectangle[i]
    tempRectangle.x = tempRectangle.x * s
    tempRectangle.y = tempRectangle.y * s
    tempRectangle.h = tempRectangle.h * s
    tempRectangle.w = tempRectangle.w * s
    // Read the pixels contained in the tempRectangle region and
    // calculate this rectangle's contribution to the feature value,
    // considering the respective weight of rectangle[i].
end for
So, let's assume that a single Haar-like feature has a base size of 24x24 pixels. Such a feature is composed of 2 rectangles r1=(10, 15, 8, 4) and r2=(4, 8, 8, 4), where r=(x, y, w, h). When you rescale your detector by a factor s=1.25, these feature rectangles become r1=(12.5, 18.75, 10, 5) and r2=(5, 10, 10, 5).
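As a concrete illustration of the rescaling, here is a minimal Python sketch. It assumes the feature is evaluated on an integral image (the usual way Viola-Jones features are computed); the helper names, the rounding of the scaled coordinates and the 1/s² normalisation used to keep the trained threshold comparable are illustrative assumptions, not details taken from a specific implementation.
import numpy as np

def rect_sum(ii, x, y, w, h):
    # Sum of the pixels inside the rectangle, read from the integral image ii
    # (ii has one extra leading row and column of zeros).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def feature_value(ii, rects, weights, s=1.0):
    # rects: (x, y, w, h) rectangles defined for the 24x24 base window.
    # Scale every rectangle by s and round to whole pixels before reading it.
    value = 0.0
    for (x, y, w, h), weight in zip(rects, weights):
        xs, ys = int(round(x * s)), int(round(y * s))
        ws, hs = int(round(w * s)), int(round(h * s))
        value += weight * rect_sum(ii, xs, ys, ws, hs)
    # Rough area normalisation (an assumption) so the value stays comparable
    # to the threshold learned on the 24x24 base size.
    return value / (s * s)

# Example with the two rectangles from the answer above and a 30x30 window:
window = np.random.randint(0, 256, (30, 30)).astype(np.float64)
ii = np.pad(window.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
rects = [(10, 15, 8, 4), (4, 8, 8, 4)]
weights = [1.0, -1.0]
print(feature_value(ii, rects, weights, s=30 / 24))
How rounding and normalisation are handled in practice is an implementation choice; the sketch is only meant to show that the rectangle positions scale together with their sizes.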

Related

Rectangle detection in noisy contours

I'm trying to build an algorithm that calculates the dimensions of slabs (in pixel units as of now). I tried masking, but there is no single HSV color range that works for all the test cases, as the slabs are of varying colors. I tried Otsu thresholding as well, but it didn't work very well...
Now I'm trying my hand at Canny edge detection. The original image and the image after Canny edge detection look like this:
I used dilation to make the central region a uniform white region, and then used contour detection. I identified the contour having the maximum area as the contour of interest. The resulting contours are a bit noisy, because the canny edge detection also included some background stuff that was irrelevant:
I used cv2.boundingRect() to estimate the height and width of the rectangle, but it keeps returning the height and width of the entire image. I presume this is because it works by calculating (max(x)-min(x), max(y)-min(y)) over all (x,y) in the contour, and in my case the resulting contour has some pixels touching the edges of the image, so this calculation simply results in (image width, image height).
I am trying to get better images to work with, but assuming all the images are like this, i.e. have noisy contours, what would be an alternative approach to detect the dimensions of the white rectangular region obtained after dilation?
To get the corner points of the rectangle, use this:
p = cv2.arcLength(cnt, True)  # cnt is the rectangle's contour
appr = cv2.approxPolyDP(cnt, 0.01 * p, True)  # appr contains the 4 corner points
# draw the rect
cv2.drawContours(img, [appr], 0, (0, 255, 0), 2)
The appr variable contains the corner points of the rectangle. You will still need to do some more cleaning to get better results, but cv2.boundingRect() is not a good solution for your case.
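If you also need the slab's dimensions in pixels, one hedged way to continue from the approximated corner points is to measure the side lengths directly. The file name, the Canny thresholds and the dilation kernel below are assumptions, and this only works cleanly when approxPolyDP really returns four corners:
import cv2
import numpy as np

img = cv2.imread('slab.jpg', cv2.IMREAD_GRAYSCALE)    # hypothetical input image
edges = cv2.Canny(img, 50, 150)                       # thresholds are guesses
edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))  # close gaps, as in the question

res = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = res[0] if len(res) == 2 else res[1]
cnt = max(cnts, key=cv2.contourArea)                  # contour with the maximum area

p = cv2.arcLength(cnt, True)
appr = cv2.approxPolyDP(cnt, 0.01 * p, True).reshape(-1, 2)

# Side lengths between consecutive corner points, in pixel units.
sides = [np.linalg.norm(appr[i] - appr[(i + 1) % len(appr)]) for i in range(len(appr))]
print('corners:', appr)
print('longest / shortest side:', max(sides), min(sides))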

How can I select the best set of parameters in the Canny edge detection algorithm implemented in OpenCV?

I am working with OpenCV on the Android platform. With the tremendous help from this community, I am able to successfully detect a sheet of paper in an image.
These are the steps I used:
Imgproc.cvtColor()
Imgproc.Canny()
Imgproc.GaussianBlur()
Imgproc.findContours()
Imgproc.approxPolyDP()
findLargestRectangle()
Find the vertices of the rectangle
Order the vertices of the rectangle anticlockwise starting from the top-left, using a center-of-mass approach
Find the height and width of the rectangle, just to maintain the aspect ratio, and do the warpPerspective transformation.
After applying all these steps I can easily get the document, i.e. the largest rectangle, from an image. But this highly depends on the difference in intensity between the background and the document sheet. As the Canny edge detector works on the principle of intensity gradients, a difference in intensity is always assumed from the implementation side. That is why Canny takes two threshold parameters into account:
Lower threshold
Higher threshold
So if the intensity gradient of a pixel is greater than the higher threshold, it will be added as an edge pixel in the output image. A pixel will be rejected completely if its intensity gradient is lower than the lower threshold. And if a pixel's gradient lies between the two thresholds, it will only be added as an edge pixel if it is connected to another pixel whose gradient is above the higher threshold.
My main purpose is to use Canny edge detection for document scanning. So how can I compute these thresholds dynamically so that it works for both dark and light backgrounds?
I tried adjusting the parameters manually a lot, but I couldn't find a relationship that covers all the scenarios.
You could calculate your thresholds using Otsu’s method.
The (Python) code would look like this:
import cv2
# im is the grayscale input image
high_thresh, thresh_im = cv2.threshold(im, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
lowThresh = 0.5 * high_thresh
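For completeness, here is a self-contained sketch of how these two values could then be passed to cv2.Canny; the file name and the display calls are assumptions added for illustration:
import cv2

im = cv2.imread('page.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
high_thresh, thresh_im = cv2.threshold(im, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
lowThresh = 0.5 * high_thresh
edges = cv2.Canny(im, lowThresh, high_thresh)
cv2.imshow('Edges', edges)
cv2.waitKey()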
Use the following snippet which I obtained from this blog:
import cv2
import numpy as np

sigma = 0.33  # see the explanation below
v = np.median(gray_image)  # gray_image is the grayscale input image
# ---- Apply automatic Canny edge detection using the computed median ----
lower = int(max(0, (1.0 - sigma) * v))
upper = int(min(255, (1.0 + sigma) * v))
edged = cv2.Canny(gray_image, lower, upper)
cv2.imshow('Edges', edged)
So what am I doing here?
I am taking the median value of the grayscale image. A sigma value of 0.33 is used to set the lower and upper thresholds; 0.33 is a value commonly used by statisticians, so it is used here as well.

Trilinear interpolation in HOG

I am currently implementing HOG in Matlab, but I don't understand the binning, especially the trilinear interpolation part.
What I understood is that each pixel in a cell is dropped into a bin to form the histogram for that cell, but that's all I understand at the moment.
How is the magnitude computed?
What are the edges of the cube, and what are the 3D coordinates for one pixel?
Wikipedia describes the gradient (in the context of images) and shows how to obtain its x and y components.
How is the magnitude computed?
r = sqrt(x*x+y*y)
What are the 3D coordinates for one pixel?
When computing the gradient, the image is treated as a height map: a pixel at position (x,y) with gray-scale value z corresponds to the 3D height-map position (x,y,z).
A gradient at (x,y,z) has an orientation and magnitude. The histogram is a discretization of all possible orientations into bins. For example with 8 bins, all orientations from 0 to 45 degrees will be associated to the same bin.
A bin is selected based on the gradient orientation, and a weight proportional to the gradient magnitude is added to that bin.
Wikipedia describes the steps of HOG and gives detailed pointers to the original paper.
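To make the binning concrete, here is a minimal NumPy/OpenCV sketch of a magnitude-weighted orientation histogram for a single cell. It uses simple nearest-bin voting rather than the trilinear interpolation the question asks about, and the 8x8 cell size, the 8 bins over 360 degrees and the file name are assumptions for illustration:
import cv2
import numpy as np

image = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Gradients in x and y (a simple [-1, 0, 1] derivative, one common choice).
gx = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=1)
gy = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=1)

# Magnitude r = sqrt(gx^2 + gy^2) and orientation in degrees.
magnitude = np.sqrt(gx ** 2 + gy ** 2)
orientation = np.rad2deg(np.arctan2(gy, gx)) % 360.0

# Histogram for one 8x8 cell with 8 orientation bins of 45 degrees each.
cell_mag = magnitude[0:8, 0:8]
cell_ori = orientation[0:8, 0:8]
bins = (cell_ori // 45).astype(int)              # nearest-bin assignment, no interpolation
hist = np.zeros(8)
np.add.at(hist, bins.ravel(), cell_mag.ravel())  # each pixel votes with its magnitude
print(hist)
Trilinear interpolation would split each vote between the two nearest orientation bins and the four neighbouring cells instead of assigning it to a single bin, but the weighting by magnitude works the same way.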

How to identify different objects in an image?

I'm intending to write a program to detect and differentiate certain objects from a nearly solid background. The foreground and the background have a high contrast difference which I would further increase to aid in the object identification process. I'm planning to use Hough transform technique and OpenCV.
Sample image
As seen in the above image, I would want to separately identify the circular objects and the square objects (or any other shape out of a finite set of shapes). Since I'm quite new to image processing, I don't know whether such a situation needs a neural network, with each shape learned beforehand. Would a technique such as template matching let me do this without a neural network?
These posts will get you started:
How to detect circles
How to detect squares
How to detect a sheet of paper (advanced square detection)
You will probably have to adjust some parameters in this code to match your circles/squares, but the core of the technique is shown in these examples.
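To give a concrete starting point for the circle case, here is a minimal OpenCV sketch using cv2.HoughCircles; the file name and all the parameter values (dp, minDist, param1, param2 and the radius range) are placeholder guesses that will need tuning for your actual images:
import cv2
import numpy as np

image = cv2.imread('objects.jpg')               # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                  # reduce noise before the transform

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(image, (x, y), r, (0, 255, 0), 2)  # draw each detected circle

cv2.imshow('circles', image)
cv2.waitKey()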
If you intend to detect shapes other than just circles (and from the image I assume you do), I would recommend Chamfer matching for a quick start, especially as you have good contrast.
The basic premise, explained in simple terms, is the following:
You do an edge detection (for example, cvCanny in opencv)
You create a distance image, where the value of each pixel is the distance from the nearest edge.
You take the shapes you would like to detect, define sample points along the edges of each shape, and try to match these points on the distance image. Basically, you just add up the values of the distance image that lie "under" the coordinates of your sample points, given a specific position of your object.
Find a good minimization algorithm; its effectiveness depends on your application.
This basic approach is a general solution that usually works well, but without further refinements it is very slow.
It is usually a good idea to first separate the objects of interest, so you don't always have to do the full search on the whole image. Find a good threshold so you can separate the objects. You still won't know which object it is, but you only have to do the matching itself in the close proximity of this object.
Another good idea is, instead of doing the full search on the high-resolution image, to first do it at a very low resolution. The result will not be very accurate, but you will know the general areas where it's worth searching at a higher resolution, so you don't waste your time on areas where there is nothing of interest.
There are a number of more advanced techniques, but it's still worth taking a look at basic chamfer matching, as it is the basis of a large number of techniques.
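The steps above can be sketched in a few lines of Python. This is only a rough, hedged illustration of the idea (the file names, the edge thresholds and the coarse brute-force search over positions are all assumptions), not an optimized chamfer matcher:
import cv2
import numpy as np

image = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)      # hypothetical scene image
template = cv2.imread('shape.png', cv2.IMREAD_GRAYSCALE)   # hypothetical shape template

# 1) Edge detection on the scene.
edges = cv2.Canny(image, 50, 150)

# 2) Distance image: distance of every pixel from the nearest edge pixel.
dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

# 3) Sample points along the template's edges.
template_edges = cv2.Canny(template, 50, 150)
pts = np.column_stack(np.nonzero(template_edges))          # (row, col) sample points

# 4) Slide the template over the scene and sum the distances "under" its points;
#    the position with the smallest sum is the best match (brute-force minimization).
th, tw = template.shape
best_score, best_pos = np.inf, None
for y in range(0, dist.shape[0] - th, 4):                  # coarse 4-pixel steps
    for x in range(0, dist.shape[1] - tw, 4):
        score = dist[pts[:, 0] + y, pts[:, 1] + x].sum()
        if score < best_score:
            best_score, best_pos = score, (x, y)
print(best_pos, best_score)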
With the assumption that the objects are simple shapes, here's an approach using thresholding + contour approximation. Contour approximation is based on the assumption that a curve can be approximated by a series of short line segments which can be used to determine the shape of a contour. For instance, a triangle has three vertices, a square/rectangle has four vertices, a pentagon has five vertices, and so on.
Obtain binary image. We load the image, convert to grayscale, Gaussian blur, then adaptive threshold to obtain a binary image.
Detect shapes. Find contours and identify the shape of each contour using contour approximation filtering. This can be done using arcLength to compute the perimeter of the contour and approxPolyDP to obtain the actual contour approximation.
Input image
Detected objects highlighted in green
Labeled contours
Code
import cv2

def detect_shape(c):
    # Compute perimeter of contour and perform contour approximation
    shape = ""
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.04 * peri, True)

    # Triangle
    if len(approx) == 3:
        shape = "triangle"
    # Square or rectangle
    elif len(approx) == 4:
        (x, y, w, h) = cv2.boundingRect(approx)
        ar = w / float(h)
        # A square will have an aspect ratio that is approximately
        # equal to one, otherwise, the shape is a rectangle
        shape = "square" if ar >= 0.95 and ar <= 1.05 else "rectangle"
    # Star
    elif len(approx) == 10:
        shape = "star"
    # Otherwise assume as circle or oval
    else:
        shape = "circle"
    return shape

# Load image, grayscale, Gaussian blur, and adaptive threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 31, 3)

# Find contours and detect shape
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

for c in cnts:
    # Identify shape
    shape = detect_shape(c)

    # Find centroid and label shape name
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    cv2.putText(image, shape, (cX - 20, cY), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (36,255,12), 2)

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()

Rotated face detection

Is there a library for detecting faces that have been rotated in the image plane? Or is there some way in which I could use a cascade for upright face detection with opencv to do it?
Here's a simple one I wrote with Python cv2.
It's not the most efficient thing, and it uses the naive way suggested by etarion, but it works fairly well for normal head tilting (it detects anything from -40 to 40 degrees of head tilt, which is roughly as much as you can tilt your head while staying upright).
import cv2
from math import sin, cos, radians

camera = cv2.VideoCapture(0)
face = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")

settings = {
    'scaleFactor': 1.3,
    'minNeighbors': 3,
    'minSize': (50, 50),
    'flags': cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT|cv2.cv.CV_HAAR_DO_ROUGH_SEARCH
}

def rotate_image(image, angle):
    if angle == 0: return image
    height, width = image.shape[:2]
    rot_mat = cv2.getRotationMatrix2D((width/2, height/2), angle, 0.9)
    result = cv2.warpAffine(image, rot_mat, (width, height), flags=cv2.INTER_LINEAR)
    return result

def rotate_point(pos, img, angle):
    if angle == 0: return pos
    x = pos[0] - img.shape[1]*0.4
    y = pos[1] - img.shape[0]*0.4
    newx = x*cos(radians(angle)) + y*sin(radians(angle)) + img.shape[1]*0.4
    newy = -x*sin(radians(angle)) + y*cos(radians(angle)) + img.shape[0]*0.4
    return int(newx), int(newy), pos[2], pos[3]

while True:
    ret, img = camera.read()

    for angle in [0, -25, 25]:
        rimg = rotate_image(img, angle)
        detected = face.detectMultiScale(rimg, **settings)
        if len(detected):
            detected = [rotate_point(detected[-1], img, -angle)]
            break

    # Draw the detection (mapped back to the unrotated frame):
    for x, y, w, h in detected[-1:]:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255,0,0), 2)

    cv2.imshow('facedetect', img)

    if cv2.waitKey(5) != -1:
        break

cv2.destroyWindow("facedetect")
Personally, I don't know of a library. But what I can say is: use an eye-detection Haar cascade and draw a line between the eyes. Then you can use the atan function to find the angle by which the head is rotated. (This assumes that the person's eyes are on the same horizontal level when the head is not rotated.)
deg = atan( (leftEye.y - rightEye.y) / (leftEye.x - rightEye.x) )
Once you have this angle, rotate the image by negative deg degrees and you should get a face that can be detected using the upright Haar cascades.
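A rough sketch of this idea with OpenCV follows. The eye cascade path (loaded via cv2.data.haarcascades from the opencv-python package), the assumption that the two lowest-x detections are the eyes, and the use of atan2 instead of atan (to avoid division by zero) are all my own simplifications:
import math
import cv2

img = cv2.imread('face.jpg')                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)

if len(eyes) >= 2:
    # Take two detections (sorted left to right) and use their centers as eye positions.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes, key=lambda e: e[0])[:2]
    left, right = (x1 + w1 / 2, y1 + h1 / 2), (x2 + w2 / 2, y2 + h2 / 2)

    # Tilt of the eye line in image coordinates (where the y axis points down).
    deg = math.degrees(math.atan2(right[1] - left[1], right[0] - left[0]))

    # The sign depends on the coordinate convention; with getRotationMatrix2D
    # (positive angle = counter-clockwise on screen), passing this angle
    # directly levels the eye line.
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), deg, 1.0)
    upright = cv2.warpAffine(img, rot, (w, h))
    cv2.imwrite('upright.jpg', upright)
The upright image can then be passed to the usual frontal-face cascade.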
Naive way:
Generate list of angles (for example, from -170 to 180 in 10 degree steps)
For each angle n in the list:
Rotate image by n degrees
Run face detector on rotated image
Compute the position of the detected faces in the original image (undo the rotation)
Perform non-maximum suppression on the combined results from all angles (you will likely get multiple detections from neighbouring angles)
You can use a bag-of-words / bag-of-features approach, or constrained AAM/ASM methods.
However, these can also give a suboptimal solution, i.e. they may not converge to the global maximum.
Also, Haar-like features are just one collection of features; you can use rotation-invariant features instead and then feed them into an AdaBoost classifier.
I had been dealing with the same problem of face detection for non-frontal images. Try using a multi-task CNN (MTCNN). It's the best solution for face detection and alignment, and it can deal with problems like varying pose, lighting and occlusion.
The paper is available at Link. The code is available on GitHub at Link. I used the Python implementation and the results are outstanding, although the code is a little slow if the image has a lot of faces.
If you want to stick to OpenCV, a new deep learning model for face detection has been added to OpenCV. The results are not as good as MTCNN's. There's an implementation of the OpenCV deep learning model for face detection at the pyimagesearch Link.
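As a rough illustration, here is how a detection call might look with the standalone mtcnn package from PyPI; the package choice, the file name and the drawing step are assumptions on my part, not something stated in the answer above:
import cv2
from mtcnn import MTCNN

detector = MTCNN()
img = cv2.cvtColor(cv2.imread('faces.jpg'), cv2.COLOR_BGR2RGB)  # MTCNN expects RGB

# Each result holds a bounding box, a confidence score and five facial keypoints.
for face in detector.detect_faces(img):
    x, y, w, h = face['box']
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('detected.jpg', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))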
MTCNN works great. It only seems to have issues when the face is very close to 90 or 180 degrees of rotation. So if the normal detection fails, just rotate the image by 45 degrees and try again; if there is a face in the image, this should detect it.
I am curious, though, why MTCNN fails when the face is rotated exactly 90 degrees or inverted (rotated 180 degrees).
This repo can detect objects as rotated bounding boxes: https://github.com/NVIDIA/retinanet-examples
You could create a dataset from Open Images by randomly rotating images containing the 'human faces' class by -30 to 30 degrees, then train this network to detect those faces.
Methods for face detection based on color histograms are independent of face orientation.
