Interpreting the location output of SSD MultiBox Detector

I'm working through a MobileNetV2-SSDLite PyTorch model and am confused about the output of the SSD's regression headers. To my understanding, each bounding box has 4 location outputs, deltaCx, deltaCy, deltaW, deltaH, expressed with respect to a default bounding box/prior/anchor. However, the model converts the regression location results of the SSD into boxes using the following calculations:
$$predicted\_center \times center\_variance = \frac{real\_center - prior\_center}{prior\_hw}$$
$$\exp(predicted\_hw \times size\_variance) = \frac{real\_hw}{prior\_hw}$$
Why is there a division by prior_hw for the centers if the output of the SSD is the offset of the center of the predicted bounding box relative to the anchor? Why is it not predicted_center * center_variance = real_center - prior_center? Is the output of the SSD actually the offset expressed in units of the anchor's height/width? I've included the entire function for reference:
import torch

def convert_locations_to_boxes(locations, priors, center_variance,
                               size_variance):
    """Convert regressional location results of SSD into boxes in the form of (center_x, center_y, h, w).
    The conversion:
        $$predicted\_center * center\_variance = \frac{real\_center - prior\_center}{prior\_hw}$$
        $$exp(predicted\_hw * size\_variance) = \frac{real\_hw}{prior\_hw}$$
    We do it in the inverse direction here.
    Args:
        locations (batch_size, num_priors, 4): the regression output of SSD. It will contain the outputs as well.
        priors (num_priors, 4) or (batch_size/1, num_priors, 4): prior boxes.
        center_variance: a float used to change the scale of center.
        size_variance: a float used to change the scale of size.
    Returns:
        boxes: priors: [[center_x, center_y, h, w]]. All the values
            are relative to the image size.
    """
    # priors can have one dimension less.
    if priors.dim() + 1 == locations.dim():
        priors = priors.unsqueeze(0)
    return torch.cat([
        # decode centers: the offset is scaled by the prior's width/height
        locations[..., :2] * center_variance * priors[..., 2:] + priors[..., :2],
        # decode sizes: the regression output is a log-scale ratio
        torch.exp(locations[..., 2:] * size_variance) * priors[..., 2:]
    ], dim=locations.dim() - 1)
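For concreteness, here is a small numeric sketch of the decode this function performs (the prior, the regression output, and the 0.1/0.2 variances below are values I made up for illustration):

import torch

# One made-up prior (cx, cy, w, h) and one made-up regression output,
# with 0.1 and 0.2 as the center/size variances.
prior = torch.tensor([[0.50, 0.50, 0.20, 0.20]])
location = torch.tensor([[1.0, -0.5, 0.2, 0.0]])
center_variance, size_variance = 0.1, 0.2

cxcy = location[..., :2] * center_variance * prior[..., 2:] + prior[..., :2]
wh = torch.exp(location[..., 2:] * size_variance) * prior[..., 2:]
print(cxcy)  # tensor([[0.5200, 0.4900]]) -> center shifted by output * variance * prior size
print(wh)    # tensor([[0.2082, 0.2000]]) -> first side scaled by exp(0.2 * 0.2) ≈ 1.0408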
Thanks

Related

How to set the region (and its shape) over which the SIFT descriptor is computed?

A similar question has been asked here. However, I could not understand it clearly.
I understand that SIFT computation has the following steps:
Finding scale space extrema
Keypoint localization (and filtering)
Orientation assignment (using computation of gradient magnitude and orientation)
Create SIFT descriptor
My question is about the fourth step: how is the region over which the SIFT descriptor is computed set? And how is the shape of the region for SIFT computation determined?
Suppose the scale-space extremum was found at scale "s" in the second octave, and I use the gradient orientation to align to a canonical orientation. How do I set the region of computation of the SIFT descriptor using this information? Do I use the scale or the magnitude of the gradient to find the region on which SIFT is to be computed? And how is the shape of the region determined?
So this was surprisingly tricky to find an answer for.
David Lowe's original paper only seemed to provide a vague theoretical explanation of how his algorithm worked.
And as far as I know, his official implementation never had its feature descriptor code open-sourced.
So I'm basing my answer off what I consider the next-most canonical implementation of the SIFT algorithm, Rob Hess' OpenSIFT implementation,
which became the base for OpenCV's official implementation.
Anyway, here is my understanding of how SIFT roughly works:
Once you have located your extremum, you should know which octave & interval of the Gaussian pyramid the extremum belongs to.
Based on Rob's code (these two functions on lines 1026-1112), the feature descriptor is calculated from the blurred image of that octave & interval.
The region for calculating the SIFT descriptor is a square surrounding the keypoint. This Medium article also seems to agree (see illustration).
The SIFT formula for the Gaussian kernel scale, relative to the original image size, is (reference):
base_scale * 2^(octave + interval / intervals_per_octave)
Or this formula if working relative to the halved image in each octave:
base_scale * 2^(interval / intervals_per_octave)
Where the original paper defined the parameters through experiments as:
base_scale = 1.6 and intervals_per_octave = 3
So if your SIFT was set to have 3 intervals per octave, with a base Gaussian scale of 1.6, and the extremum was found on octave 2, interval 3,
the image will have been blurred by a Gaussian kernel of scale: 1.6 * 2^(2 + 3/3) = 12.80 pixels.
Now the actual array size of the Gaussian kernel will depend on the code you use, as the scale and the kernel size can be set independently.
In cases like MATLAB, I've found helpful guidelines in this SO thread.
The selected answer recommends a kernel width of 6 times the scale (i.e. the 3-sigma rule), so our kernel width (and height) is 12.80 * 6 ≈ 77 pixels;
thus, a SIFT descriptor region of size 77x77 pixels.
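As a quick sanity check of that arithmetic, here is a small Python sketch (the function and parameter names are mine; only the scale formula and the 6-sigma rule of thumb come from the references above):

def sift_region_size(octave, interval, base_scale=1.6, intervals_per_octave=3,
                     width_factor=6):
    """Rough descriptor-region size, relative to the original image size."""
    # Gaussian scale at this octave/interval, relative to the original image
    scale = base_scale * 2 ** (octave + interval / intervals_per_octave)
    # rule of thumb: kernel (and region) width of about 6 sigma
    return scale, round(scale * width_factor)

scale, region = sift_region_size(octave=2, interval=3)
print(scale, region)  # 12.8, 77 -> a 77x77 pixel region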
Meanwhile, the OpenCV implementation appears to leave the size of the kernel to be determined by OpenCV's own built-in Gaussian Blur function.
Line 246 from OpenCV's code leaves the Gaussian Blur function parameter ksize as zeroes,
for which the official docs only state that the kernel size will be "computed from sigma", and never define how it is actually calculated...
Finally, for Rob's implementation, I have to admit that I couldn't quite understand what was happening in this final step. ¯\_(ツ)_/¯
From lines 1026-1112 Rob defined the code below, which shows how he calculates the orientation histogram for the SIFT descriptor.
The code shows he defined a radius and used the nested for-loops with i and j to iterate through the square region around the keypoint, located at point (r,c).
Yet what I don't really understand is:
How he defined radius, with the Gaussian scale scl multiplied by some unknown constant SIFT_DESCR_SCL_FCTR = 3.0
As well as hist_width * sqrt(2) * ( d + 1.0 ) * 0.5 + 0.5, where d = SIFT_DESCR_WIDTH = 4
hist_width = SIFT_DESCR_SCL_FCTR * scl;
radius = hist_width * sqrt(2) * ( d + 1.0 ) * 0.5 + 0.5;
for( i = -radius; i <= radius; i++ )
  for( j = -radius; j <= radius; j++ )
    {
      /*
        Calculate sample's histogram array coords rotated relative to ori.
        Subtract 0.5 so samples that fall e.g. in the center of row 1 (i.e.
        r_rot = 1.5) have full weight placed in row 1 after interpolation.
      */
      c_rot = ( j * cos_t - i * sin_t ) / hist_width;
      r_rot = ( j * sin_t + i * cos_t ) / hist_width;
      rbin = r_rot + d / 2 - 0.5;
      cbin = c_rot + d / 2 - 0.5;

      if( rbin > -1.0  &&  rbin < d  &&  cbin > -1.0  &&  cbin < d )
        if( calc_grad_mag_ori( img, r + i, c + j, &grad_mag, &grad_ori ))
          {
            grad_ori -= ori;
            while( grad_ori < 0.0 )
              grad_ori += PI2;
            while( grad_ori >= PI2 )
              grad_ori -= PI2;

            obin = grad_ori * bins_per_rad;
            w = exp( -(c_rot * c_rot + r_rot * r_rot) / exp_denom );
            interp_hist_entry( hist, rbin, cbin, obin, grad_mag * w, d, n );
          }
    }
But regardless of how the exact size of the region is calculated, I think the general concept is the same: the region size is calculated based on the original Gaussian scale.
Besides, given that the features are supposed to be "weighted by a Gaussian window" (original paper, section 6.1, page 15),
as long as the region you define is large enough to contain most of the meaningful orientation histograms, you are fine.
In summary:
The SIFT descriptor is calculated from the halved & blurred image of the same octave/interval as the keypoint (OpenSIFT)
The region for the SIFT descriptor is a square shape surrounding the keypoint (medium)(image)
The region size is calculated based on the Gaussian kernel scale; though the exact method of calculation can vary, an easy rule of thumb is "width of 6 times the kernel scale" (thread)

Rotating image with its bounding boxes yielding worse boxes at 45 degrees with opencv2 and numpy

I have some code, largely taken from various sources linked at the bottom of this post, written in Python, that takes an image of shape [height, width] and some bounding boxes in the [x_min, y_min, x_max, y_max] format, both numpy arrays, and rotates the image and its bounding boxes counterclockwise. Since after rotation the bounding box becomes more of a "diamond shape", i.e. not axis-aligned, I perform some calculations to make it axis-aligned. The purpose of this code is to perform data augmentation for training an object detection neural network with rotated data (where flipping horizontally or vertically is common). Rotations by other angles seem common for image classification, without bounding boxes, but when there are boxes, the resources on how to rotate the boxes as well as the images are relatively sparse/niche.
It seems that when I input an angle of 45 degrees, I get some less-than-"tight" bounding boxes, i.e. the four corners are not a very good annotation, whereas the original one was close to perfect.
The image shown below is the first image in the MS COCO 2014 object detection dataset (training image), and its first bounding box annotation. My code is as follows:
import math
import cv2
import numpy as np

# angle assumed to be in degrees
# bbs a list of bounding boxes in x_min, y_min, x_max, y_max format
def rotateImageAndBoundingBoxes(im, bbs, angle):
    h, w = im.shape[0], im.shape[1]
    (cX, cY) = (w // 2, h // 2)  # original image center
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)  # 2 by 3 rotation matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the dimensions of the rotated image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account the translation to the new centre
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    rotated_im = cv2.warpAffine(im, M, (nW, nH))
    rotated_bbs = []
    for bb in bbs:
        # get the four rotated corners of the bounding box
        vec1 = np.matmul(M, np.array([bb[0], bb[1], 1], dtype=np.float64))  # top left corner transformed
        vec2 = np.matmul(M, np.array([bb[2], bb[1], 1], dtype=np.float64))  # top right corner transformed
        vec3 = np.matmul(M, np.array([bb[0], bb[3], 1], dtype=np.float64))  # bottom left corner transformed
        vec4 = np.matmul(M, np.array([bb[2], bb[3], 1], dtype=np.float64))  # bottom right corner transformed
        x_vals = [vec1[0], vec2[0], vec3[0], vec4[0]]
        y_vals = [vec1[1], vec2[1], vec3[1], vec4[1]]
        x_min = math.ceil(np.min(x_vals))
        x_max = math.floor(np.max(x_vals))
        y_min = math.ceil(np.min(y_vals))
        y_max = math.floor(np.max(y_vals))
        bb = [x_min, y_min, x_max, y_max]
        rotated_bbs.append(bb)
    # my function to resize image and bbs to the original image size
    rotated_im, rotated_bbs = resizeImageAndBoxes(rotated_im, w, h, rotated_bbs)
    return rotated_im, rotated_bbs
The good bounding box looks like:
The not-so-good bounding box looks like:
I am trying to determine whether this is an error in my code or expected behavior. It seems like this problem is less apparent at integer multiples of pi/2 radians (90 degrees), but I would like to achieve tight bounding boxes at any angle of rotation. Any insights at all appreciated.
Sources:
[OpenCV documentation] https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gafbbc470ce83812914a70abfb604f4326
[Data Augmentation Discussion] https://blog.paperspace.com/data-augmentation-for-object-detection-rotation-and-shearing/
[Mathematics of rotation around an arbitrary point in 2 dimensions] https://math.stackexchange.com/questions/2093314/rotation-matrix-of-rotation-around-a-point-other-than-the-origin
It seems that, for the most part, this is expected behavior, as per the comments. I do have a kind of hacky solution to this problem, where you can write a function like:
import numpy as np

# assuming box coords = [x_min, y_min, x_max, y_max]
def cropBoxByPercentage(box_coords, image_width, image_height, x_percentage=0.05, y_percentage=0.05):
    box_xmin = box_coords[0]
    box_ymin = box_coords[1]
    box_xmax = box_coords[2]
    box_ymax = box_coords[3]
    box_width = box_xmax - box_xmin + 1
    box_height = box_ymax - box_ymin + 1
    dx = int(x_percentage * box_width)
    dy = int(y_percentage * box_height)
    # pad the box by dx/dy on each side, clamped to the image bounds
    box_xmin = max(0, box_xmin - dx)
    box_xmax = min(image_width - 1, box_xmax + dx)
    box_ymin = max(0, box_ymin - dy)
    box_ymax = min(image_height - 1, box_ymax + dy)
    # returned in the same [x_min, y_min, x_max, y_max] order as the input
    return np.array([box_xmin, box_ymin, box_xmax, box_ymax])
Here x_percentage and y_percentage can be fixed values, or they could be computed using some heuristic.
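For example, a quick usage sketch of the function above with made-up numbers:

# A made-up 100x60 box inside a 640x480 image, padded by 5% on each side.
box = [200, 150, 299, 209]               # [x_min, y_min, x_max, y_max]
padded = cropBoxByPercentage(box, image_width=640, image_height=480)
print(padded)                            # [195 147 304 212]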

How do I 'fit a line' to a cluster of pixels?

I would like to generate a polynomial 'fit' to the cluster of colored pixels in the image here
(the point being that I would like to measure how much that cluster approximates a horizontal line).
I thought of using grabit or something similar and then treating this as a cloud of points in a graph. But is there a quicker function to do so directly on the image file?
Thanks!
Here is a Python implementation. Basically we find all (xi, yi) coordinates of the colored regions, then set up a regularized least-squares system where we want to find the vector of weights (w0, ..., wd) such that yi = w0 + w1 xi + w2 xi^2 + ... + wd xi^d "as close as possible" in the least-squares sense.
import numpy as np
import matplotlib.pyplot as plt

def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

def feature(x, order=3):
    """Generate polynomial feature of the form
    [1, x, x^2, ..., x^order] where x is the column of x-coordinates
    and 1 is the column of ones for the intercept.
    """
    x = x.reshape(-1, 1)
    return np.power(x, np.arange(order + 1).reshape(1, -1))

I_orig = plt.imread("2Md7v.jpg")
# Convert to grayscale
I = rgb2gray(I_orig)

# Mask out region
mask = I > 20
# Get coordinates of pixels corresponding to marked region
X = np.argwhere(mask)
# Use the value as weights later
weights = I[mask] / float(I.max())
# Convert to diagonal matrix
W = np.diag(weights)

# Column indices
x = X[:, 1].reshape(-1, 1)
# Row indices to predict. Note origin is at top left corner
y = X[:, 0]
We want to find the vector w that minimizes || Aw - y ||^2, so that we can use it to predict y at new x values.
Here are 2 versions. One is vanilla least squares with l2 regularization and the other is weighted least squares with l2 regularization.
# Ridge regression, i.e., least squares with l2 regularization.
# Should probably use a more numerically stable implementation,
# e.g., that in Scikit-Learn
# alpha is regularization parameter. Larger alpha => less flexible curve
alpha = 0.01
# Construct data matrix, A
order = 3
A = feature(x, order)
# w = inv (A^T A + alpha * I) A^T y
w_unweighted = np.linalg.pinv( A.T.dot(A) + alpha * np.eye(A.shape[1])).dot(A.T).dot(y)
# w = inv (A^T W A + alpha * I) A^T W y
w_weighted = np.linalg.pinv( A.T.dot(W).dot(A) + alpha * \
np.eye(A.shape[1])).dot(A.T).dot(W).dot(y)
The result:
# Generate test points
n_samples = 50
x_test = np.linspace(0, I_orig.shape[1], n_samples)
X_test = feature(x_test, order)
# Predict y coordinates at test points
y_test_unweighted = X_test.dot(w_unweighted)
y_test_weighted = X_test.dot(w_weighted)
# Display
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.imshow(I_orig)
ax.plot(x_test, y_test_unweighted, color="green", marker='o', label="Unweighted")
ax.plot(x_test, y_test_weighted, color="blue", marker='x', label="Weighted")
fig.legend()
fig.savefig("curve.png")
For a simple straight-line fit, set the argument order of feature to 1. You can then use the gradient of the line to get a sense of how close it is to a horizontal line (e.g., by checking the angle of its slope).
It is also possible to set this to any degree of polynomial you want. I find that degree 3 looks pretty good. In this case, 6 times the absolute value of the coefficient corresponding to x^3 (w_unweighted[3] or w_weighted[3]) is one measure of the curvature of the line.
See A measure for the curvature of a quadratic polynomial in Matlab for additional details.
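Since the point of the question is measuring how close the cluster is to a horizontal line, here is a small follow-up sketch of that order=1 check. It reuses x, y, alpha and feature from the code above; the names w_line, slope and angle_deg are my own:

# Fit a straight line (order=1) with the same unweighted ridge formula as above,
# then convert its slope into an angle from horizontal.
A_line = feature(x, order=1)
w_line = np.linalg.pinv(A_line.T.dot(A_line) + alpha * np.eye(A_line.shape[1])).dot(A_line.T).dot(y)
slope = w_line[1]                         # dy/dx in pixel coordinates (y axis points down)
angle_deg = np.degrees(np.arctan(slope))
print(f"deviation from horizontal: {angle_deg:.2f} degrees")  # 0 means perfectly horizontal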

How Convexity Defect is calculated in OpenCV?

What is the algorithm used in OpenCV function convexityDefects() to calculate the convexity defects of a contour?
Please describe and illustrate the high-level operation of the algorithm, along with its inputs and outputs.
Based on the documentation, the inputs are two lists of coordinates:
contour defining the original contour (red on the image below)
convexhull defining the convex hull corresponding to that contour (blue on the image below)
The algorithm works in the following manner:
If the contour or the hull contains 3 or fewer points, then the contour is always convex, and no more processing is needed. The algorithm ensures that both the contour and the hull are accessed in the same orientation.
N.B.: In further explanation I assume they are in the same orientation, and ignore the details regarding representation of the floating point depth as an integer.
Then, for each pair of adjacent hull points (H[i], H[i+1]) defining one edge of the convex hull, calculate the distance from the edge for each point on the contour C[n] that lies between H[i] and H[i+1] (excluding C[n] == H[i+1]). If the distance is greater than zero, then a defect is present. When a defect is present, record i, i+1, the maximum distance, and the index (n) of the contour point where the maximum is located.
Distance is calculated in the following manner:
dx0 = H[i+1].x - H[i].x
dy0 = H[i+1].y - H[i].y

if (dx0 is 0) and (dy0 is 0) then
    scale = 0
else
    scale = 1 / sqrt(dx0 * dx0 + dy0 * dy0)

dx = C[n].x - H[i].x
dy = C[n].y - H[i].y

distance = abs(-dy0 * dx + dx0 * dy) * scale
It may be easier to visualize in terms of vectors:
C: defect vector from H[i] to C[n]
H: hull edge vector from H[i] to H[i+1]
H_rot: hull edge vector H rotated 90 degrees
U_rot: unit vector in direction of H_rot
H components are [dx0, dy0], so rotating 90 degrees gives [-dy0, dx0].
scale is used to find U_rot from H_rot, but because divisions are more computationally expensive than multiplications, the inverse is used as an optimization. It's also pre-calculated before the loop over C[n] to avoid recomputing each iteration.
|H| = sqrt(dx0 * dx0 + dy0 * dy0)
U_rot = H_rot / |H| = H_rot * scale
Then, a dot product between C and U_rot gives the perpendicular distance from the defect point to the hull edge, and abs() is used to get a positive magnitude in any orientation.
distance = abs(U_rot.C) = abs(-dy0 * dx + dx0 * dy) * scale
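To make the vector form concrete, here is a small NumPy sketch of that per-edge distance computation. It is my own re-expression of the pseudocode above (function and variable names are mine), not OpenCV's actual code:

import numpy as np

def edge_defect_depths(contour_pts, h_start, h_end):
    """Perpendicular distance of each contour point to the hull edge (h_start -> h_end)."""
    h_start = np.asarray(h_start, dtype=float)
    h_end = np.asarray(h_end, dtype=float)
    edge = h_end - h_start                        # H = [dx0, dy0]
    norm = np.hypot(edge[0], edge[1])
    scale = 0.0 if norm == 0 else 1.0 / norm      # precomputed inverse edge length
    d = np.asarray(contour_pts, dtype=float) - h_start   # C vectors from H[i] to each C[n]
    # dot product with the rotated unit edge vector [-dy0, dx0] * scale
    return np.abs(-edge[1] * d[:, 0] + edge[0] * d[:, 1]) * scale

# contour points between two hull points (made-up coordinates)
depths = edge_defect_depths([[10, 5], [20, 30], [30, 12]], h_start=[0, 0], h_end=[40, 0])
print(depths)            # [ 5. 30. 12.] -> the deepest defect is at the second point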
In the scenario depicted in the image above, in the first iteration the edge is defined by H[0] and H[1]. The contour points to examine for this edge are C[0], C[1], and C[2] (since C[3] == H[1]).
There are defects at C[1] and C[2]. The defect at C[1] is the deepest, so the algorithm will record (0, 1, 1, 50).
The next edge is defined by H[1] and H[2], and corresponding contour point C[3]. No defect is present, so nothing is recorded.
The next edge is defined by H[2] and H[3], and corresponding contour point C[4]. No defect is present, so nothing is recorded.
Since C[5] == H[3], the last contour point can be ignored -- there can't be a defect there.
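For completeness, here is a minimal usage sketch of the cv2.convexityDefects call itself (the contour below is made up; the defect depth is returned as a fixed-point integer, so it is divided by 256 to recover the distance as a float):

import cv2
import numpy as np

# A made-up non-convex contour (a square with a notch at the top)
contour = np.array([[0, 0], [50, 50], [100, 0], [100, 100], [0, 100]],
                   dtype=np.int32).reshape(-1, 1, 2)

hull_indices = cv2.convexHull(contour, returnPoints=False)   # indices into `contour`
defects = cv2.convexityDefects(contour, hull_indices)        # shape (num_defects, 1, 4)

for start_idx, end_idx, farthest_idx, depth in defects[:, 0]:
    # each row: hull edge start/end, deepest contour point, and its fixed-point depth
    print(contour[start_idx, 0], contour[end_idx, 0],
          contour[farthest_idx, 0], depth / 256.0)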

Computing 3D coordinates of keypoints in multiple images

I have multiple images of an object taken by the same calibrated camera. Let's say calibrated means both intrinsic and extrinsic parameters (I can put a checkerboard next to the object, so all parameters can be retrieved). On these images I can find matching keypoints using SIFT or SURF, and some matching algorithm, this is basic OpenCV. But how do I do the 3D reconstruction of these points from multiple images? This is not a classic stereo arrangement, so there are more than 2 images with the same object points on them, and I want to use as many as possible for increased accuracy.
Are there any built-in OpenCV functions that do this?
(Note that this is done off-line, the solution does not need to be fast, but robust)
I guess you are looking for so-called Structure from Motion (SfM) approaches. They use multiple images from different viewpoints and return a 3D reconstruction (e.g. a point cloud). It looks like OpenCV has an SfM module in the contrib package, but I have no experience with it.
However, I used to work with Bundler. It was quite uncomplicated and returns all the information (camera calibration and point positions) as a text file, and you can view the point cloud with Meshlab. Please note that it uses SIFT keypoints and descriptors for correspondence establishment.
I think I have found a solution for this. Structure from motion algorithms deal with the case where the cameras are not calibrated, but in this case all intrinsic and extrinsic parameters are known.
The problem reduces to a linear least-squares problem:
We have to compute the coordinates for a single object point:
X = [x, y, z, 1]'
C = [x, y, z]'
X = [[C], [1]]
We are given n images, which have these transformation matrices:
Pi = Ki * [Ri|ti]
These matrices are already known. The object point is projected on the images at
U = [ui, vi]
We can write this in homogeneous coordinates (the operator * stands for matrix multiplication, dot product, or scalar multiplication, depending on context):
[ui * wi, vi * wi, wi]' = Pi * X
Pi = [[p11i, p12i, p13i, p14i],
[p21i, p22i, p23i, p24i],
[p31i, p32i, p33i, p34i]]
Let's define the following:
p1i = [p11i, p12i, p13i] (the first row of Pi missing the last element)
p2i = [p21i, p22i, p23i] (the second row of Pi missing the last element)
p3i = [p31i, p32i, p33i] (the third row of Pi missing the last element)
a1i = p14i
a2i = p24i
a3i = p34i
Then we can write:
Q = [x, y, z]
wi = p3i * Q + a3i
ui = (p1i * Q + a1i) / wi =
= (p1i * Q + a1i) / (p3i * Q + a3i)
ui * p3i * Q + ui * a3i - p1i * Q - a1i = 0
(ui * p3i - p1i) * Q = a1i - a3i * ui
Similarly for vi:
(vi * p3i - p2i) * Q = a2i - a3i * vi
And this holds for i = 1..n. We can write this in matrix form:
G * Q = b
G = [[u1 * p31 - p11],
[v1 * p31 - p21],
[u2 * p32 - p12],
[v2 * p32 - p22],
...
[un * p3n - p1n],
[vn * p3n - p2n]]
b = [[a11 - a31 * u1],
[a21 - a31 * v1],
[a12 - a32 * u2],
[a22 - a32 * v2],
...
[a1n - a3n * un],
[a2n - a3n * vn]]
Since G and b are known from the Pi matrices, and the image points [ui, vi], we can compute the pseudoinverse of G (call it G_), and compute:
Q = G_ * b
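A minimal NumPy sketch of this construction, assuming you already have the projection matrices Ps (a list of 3x4 arrays, Pi = Ki [Ri|ti]) and the matching image points uvs (a list of (ui, vi)) for one object point. The function and variable names are mine:

import numpy as np

def triangulate_point(Ps, uvs):
    """Least-squares 3D point from n calibrated views.

    Ps  : list of 3x4 projection matrices Pi = Ki [Ri|ti]
    uvs : list of corresponding image points (ui, vi)
    """
    G, b = [], []
    for P, (u, v) in zip(Ps, uvs):
        p1, p2, p3 = P[0, :3], P[1, :3], P[2, :3]
        a1, a2, a3 = P[0, 3], P[1, 3], P[2, 3]
        G.append(u * p3 - p1)
        b.append(a1 - a3 * u)
        G.append(v * p3 - p2)
        b.append(a2 - a3 * v)
    G, b = np.asarray(G), np.asarray(b)
    return np.linalg.pinv(G) @ b      # Q = G_ * b

# Quick check with two made-up views of the point (1, 2, 10):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # camera shifted along x
Q_true = np.array([1.0, 2.0, 10.0])
uvs = []
for P in (P1, P2):
    proj = P @ np.append(Q_true, 1.0)
    uvs.append((proj[0] / proj[2], proj[1] / proj[2]))
print(triangulate_point([P1, P2], uvs))   # approximately [ 1.  2. 10.]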
