Big differences in image filtered with rotated Gabor kernels - opencv

I have calculated 8 Gabor filters with Theta rotation m*PI/8.
Parameters of the Gabor kernel given as input to OpenCv cv2.getGaborKernel:
ksize = 11, theta = m*PI/8, lambd = 16/3, sigma = (5.09030 * 8.0) / (3.0 * PI), gamma = 0.5890, psi = 0
kernel = cv2.getGaborKernel(ksize = (ksize,ksize), sigma = sigma,
theta = theta, lambd = lambd,
gamma = gamma, psi = psi)
The parameters are designed according to "Features Extraction using a Gabor filter family", Zhen, Zhao, Wang.
The formula adopted is the one of the third family of Gabor filters.
The 8 filters obtained are:
The original image is:
The images obtained by filtering the original image are:
They are calculated with cv2.filter2D
fimg = cv2.filter2D(img, cv2.CV_64F, kernel)
Why do the Gabor filters with theta = 0 and theta = PI/2 have a really different continuous (DC) component compared to the others?
It does not really make sense to me.

The reason was the psi parameter, which I had set to 0. The problem is immediately fixed if psi is kept at the default value of PI/2.
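For completeness, a minimal sketch of the fixed kernel construction (same parameters as in the question, only psi left at its default of pi/2):
import math
import cv2

ksize = 11
lambd = 16.0 / 3.0
sigma = (5.09030 * 8.0) / (3.0 * math.pi)
gamma = 0.5890

kernels = []
for m in range(8):
    theta = m * math.pi / 8.0
    # Keeping psi at the default pi/2 removes the large DC offset that
    # appeared with psi = 0, so the 8 rotated filters respond comparably.
    kernels.append(cv2.getGaborKernel(ksize=(ksize, ksize), sigma=sigma,
                                      theta=theta, lambd=lambd,
                                      gamma=gamma, psi=math.pi / 2.0))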

Related

How to set the region (and its shape) over which the SIFT descriptor is computed?

A similar question has been asked here. However I could not understand it clearly.
I understand that SIFT computation has the following steps:
Finding scale space extrema
Keypoint localization (and filtering)
Orientation assignment (using computation of gradient magnitude and orientation)
Create SIFT descriptor
My question is for the fourth step: How to set the region over which the SIFT descriptor is computed? Also how is the shape of the region for SIFT computation determined?
Suppose the scale-space extremum was found at scale "s" in the second octave. I use the gradient orientation to align to a canonical orientation. How do I set the region of computation of the SIFT descriptor using this information? Do I use the scale or the magnitude of the gradient to find the region on which SIFT is to be computed? And how is the shape of the region determined?
So this was surprisingly tricky to find an answer for.
David Lowe's original paper only seemed to provide vague theoretical explanation on how his algorithm worked.
And as far as I know, his official implementation never had its feature descriptor code open-sourced.
So I'm basing my answer on what I consider the next-most canonical implementation of the SIFT algorithm: Rob Hess' OpenSIFT implementation,
which became the basis for OpenCV's official implementation.
Anyway, here is my understanding of how SIFT roughly works:
Once you have located your extremum, you should know which octave & interval of the Gaussian pyramid it belongs to.
Based on Rob's code (these two functions on lines 1026-1112), the feature descriptor is calculated from the blurred image of that octave & interval.
And the region for calculating SIFT is a square shape surrounding the keypoint. This medium article also seems to agree (see illustration).
The SIFT formula for the Gaussian Kernel scale, relative to the original image size is (reference):
base_scale * 2^(octave + interval / intervals_per_octave)
Or this formula if working relative to the halved image in each octave:
base_scale * 2^(interval / intervals_per_octave)
Where the original paper defined the parameters through experiments as:
base_scale = 1.6 and intervals_per_octave = 3
So if your SIFT was set to have 3 intervals per octave, with a base Gaussian scale of 1.6, and the extremum was found at octave 2, interval 3,
the image will have been blurred by a Gaussian kernel of scale 1.6 * 2^(2 + 3/3) = 12.80 pixels.
Now the actual array size of the Gaussian kernel will depend on the code you use, as the scale and the kernel size can be set independently.
In cases like MATLAB, I've found a helpful guideline in this SO thread.
The selected answer recommends a kernel width of 6 times the scale (i.e. the 3-sigma rule), so our kernel width (and height) is 12.80 * 6 ≈ 77 pixels;
thus, a SIFT descriptor region of size 77x77 pixels.
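Putting those numbers into a small Python sketch (just arithmetic; base_scale, intervals_per_octave and the "width = 6 times the scale" rule of thumb are the values quoted above):
base_scale = 1.6
intervals_per_octave = 3

def sift_scale(octave, interval):
    # Gaussian scale relative to the original image size (formula above)
    return base_scale * 2 ** (octave + interval / intervals_per_octave)

scale = sift_scale(octave=2, interval=3)   # 1.6 * 2^3 = 12.8
region = round(6 * scale)                  # 6-times-the-scale rule of thumb
print(scale, region)                       # 12.8 and 77 -> a 77x77 descriptor region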
Meanwhile, the OpenCV implementation appears to leave the size of the kernel to be determined by OpenCV's own built-in Gaussian Blur function.
Line 246 of OpenCV's code leaves the Gaussian blur parameter ksize as zeros;
the official docs only state that the kernel size will be "computed from sigma" and never define how it is actually calculated...
Finally, for Rob's implementation, I have to admit that I couldn't quite understand what was happening in this final step. ¯\_(ツ)_/¯
In lines 1026-1112, Rob defined the code below, which shows how he calculates the orientation histogram for the SIFT descriptor.
The code shows he defined a radius and used the nested for-loops with i and j to iterate through the square region around the keypoint, located at point (r,c).
Yet what I don't really understand is:
How he defined radius, with the Gaussian scale scl multiplied by some constant SIFT_DESCR_SCL_FCTR = 3.0
As well as hist_width * sqrt(2) * ( d + 1.0 ) * 0.5 + 0.5, where d = SIFT_DESCR_WIDTH = 4
hist_width = SIFT_DESCR_SCL_FCTR * scl;
radius = hist_width * sqrt(2) * ( d + 1.0 ) * 0.5 + 0.5;
for( i = -radius; i <= radius; i++ )
    for( j = -radius; j <= radius; j++ )
    {
        /*
        Calculate sample's histogram array coords rotated relative to ori.
        Subtract 0.5 so samples that fall e.g. in the center of row 1 (i.e.
        r_rot = 1.5) have full weight placed in row 1 after interpolation.
        */
        c_rot = ( j * cos_t - i * sin_t ) / hist_width;
        r_rot = ( j * sin_t + i * cos_t ) / hist_width;
        rbin = r_rot + d / 2 - 0.5;
        cbin = c_rot + d / 2 - 0.5;

        if( rbin > -1.0 && rbin < d && cbin > -1.0 && cbin < d )
            if( calc_grad_mag_ori( img, r + i, c + j, &grad_mag, &grad_ori ))
            {
                grad_ori -= ori;
                while( grad_ori < 0.0 )
                    grad_ori += PI2;
                while( grad_ori >= PI2 )
                    grad_ori -= PI2;

                obin = grad_ori * bins_per_rad;
                w = exp( -(c_rot * c_rot + r_rot * r_rot) / exp_denom );
                interp_hist_entry( hist, rbin, cbin, obin, grad_mag * w, d, n );
            }
    }
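To get a feel for the numbers in Rob's radius formula, here is a small Python sketch that just evaluates it for the example scale of 12.80 used earlier (it mirrors the C snippet above; it is not OpenSIFT itself):
import math

SIFT_DESCR_SCL_FCTR = 3.0   # constant quoted above
SIFT_DESCR_WIDTH = 4        # d: the descriptor is a d x d grid of histograms

scl = 12.80                 # example Gaussian scale from earlier in the answer
d = SIFT_DESCR_WIDTH

hist_width = SIFT_DESCR_SCL_FCTR * scl
# sqrt(2) * (d + 1) / 2 cells is roughly half the diagonal of a (d+1)-cell grid,
# which keeps the square window large enough after rotation to the keypoint
# orientation; the final + 0.5 rounds to the nearest integer radius.
radius = int(hist_width * math.sqrt(2) * (d + 1.0) * 0.5 + 0.5)

print(hist_width, radius)   # 38.4 and 136 -> a 273 x 273 sampling window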
But regardless of how the exact size of the region is calculated, I think the general concept is the same: calculate the region size based on the original Gaussian scale.
Besides, given that the features are supposed to be "weighted by a Gaussian window" (original paper, section 6.1, page 15), as long as the region you define is large enough to contain most of the meaningful orientation histograms, you are fine.
In summary:
The SIFT descriptor is calculated from the halved & blurred image of the same octave/interval as the keypoint (OpenSIFT)
The region for the SIFT descriptor is a square shape surrounding the keypoint (medium)(image)
The region size is calculated based on the Gaussian kernel scale; the exact method for calculating it can vary, but an easy rule of thumb is a width of 6 times the kernel scale (thread)

What does CWT amplitude mean?

I've a big issue with continuous wavelet transform. I've created this signal
t = 0:1/2000:1-1/2000;
dt = 1/2000;
x1 = sin(50*pi*t).*exp(-50*pi*(t-0.2).^2);
x2 = sin(50*pi*t).*exp(-100*pi*(t-0.5).^2);
x3 = 2*cos(140*pi*t).*exp(-50*pi*(t-0.2).^2);
x4 = 2*sin(140*pi*t).*exp(-80*pi*(t-0.8).^2);
x = x1+x2+x3+x4;
And its representation in time is
Then I computed its Fourier transform in the classic way
Ts =1/Fs;
N = length(x);
t = 0:Ts:Ts*N-Ts;
FTx = fft(x,N);
S = (abs(FTx).^2)/N; %amplitude
f_FT = (0:Fs/N:Fs-Fs/N);
S = S(1:N/2); % i reject half signal
f_FT = f_FT(1:N/2);
And its representation is
I computed the continuous wavelet transform using the function cwtft
s0 = 2;
a0 = 2^(1/32);
scales = (s0*a0.^(32:7*32)).*dt;
cwtx = cwtft({x,dt},'Scales',scales,'Wavelet',{'bump',[4 0.9]});
figure;
contour(t,cwtx.frequencies,abs(cwtx.cfs))
xlabel('Seconds'), ylabel('Hz');
grid on;
title('Analytic CWT using Bump Wavelet')
hcol = colorbar;
hcol.Label.String = 'Magnitude';
I don't know what the magnitude represents, and especially why it differs so much from the values obtained with the classic FFT. Is there a way to convert it?
Thank you so much
There are two dimensions present here: time and frequency (scale). The third dimension, the magnitude or amplitude, shows the intensity of your frequencies (scales, which in wavelets are inversely related to frequencies) at those local times.
I guess it is much simpler than what I just explained: in the last image of your question, you are basically looking at these three dimensions from the top, similar to this image:
Source: Machine Learning with Signal Processing Techniques
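As a rough, non-MATLAB illustration of what that magnitude is, here is a minimal Python sketch using PyWavelets (assuming pywt is available; it uses a complex Morlet wavelet as a stand-in for MATLAB's bump wavelet, so the numbers will not match cwtft exactly). The quantity on the colorbar is simply the modulus of the complex CWT coefficients at each (time, scale) pair:
import numpy as np
import pywt

# Rebuild the question's test signal (sampling rate 2000 Hz)
fs = 2000.0
t = np.arange(0, 1, 1 / fs)
x = (np.sin(50 * np.pi * t) * np.exp(-50 * np.pi * (t - 0.2) ** 2)
     + np.sin(50 * np.pi * t) * np.exp(-100 * np.pi * (t - 0.5) ** 2)
     + 2 * np.cos(140 * np.pi * t) * np.exp(-50 * np.pi * (t - 0.2) ** 2)
     + 2 * np.sin(140 * np.pi * t) * np.exp(-80 * np.pi * (t - 0.8) ** 2))

# CWT with a complex Morlet wavelet (stand-in for the bump wavelet)
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1 / fs)

# The "Magnitude" on the colorbar is just abs() of the complex coefficients:
# how strongly the wavelet at that scale/frequency matches the signal around
# each instant in time. It is scaled differently from the FFT amplitude,
# which is why the raw values do not agree.
magnitude = np.abs(coefs)
print(magnitude.shape)            # (len(scales), len(x))
print(freqs.min(), freqs.max())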

SURF: How could we get the value of sigma from the keypoint radius

In the SURF technique, and more precisely within the feature description stage, the authors have stated (if I understand correctly) that the description will be performed in an area of 20 times sigma. Sigma represents the scale at which the keypoint was detected.
Sigma = 0.4 x L where L = 2^Octave x level+1. If we use the OpenCV implementation, the DetectAndCompute function computes, with the value of Keypoint.size, the radius of the circle surrounding the keypoint.
My question is : How could we get the value of sigma from the radius value ?
According to these lines:
KeyPoint& kp = (*keypoints)[k];
float size = kp.size;
Point2f center = kp.pt;
/* The sampling intervals and wavelet sized for selecting an orientation
and building the keypoint descriptor are defined relative to 's' */
float s = size*1.2f/9.0f;
This value, s = size*1.2f/9.0f, is not mentioned in Bay's article, where scale = L * 0.4 (or
scale = L * 1.2/3). Can anyone explain this part?
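To make the quantities being compared concrete, here is a small sketch that only restates the two formulas quoted above (the octave/level and KeyPoint.size values are arbitrary, hypothetical examples, not taken from a real detection):
# Hypothetical example values, only to evaluate the two formulas side by side
octave, level = 1, 2

# As quoted above from Bay's article: L = 2^Octave x level + 1, sigma = 0.4 * L
# (parenthesization written exactly as in the question)
L = 2 ** octave * level + 1
sigma_paper = 0.4 * L

# As quoted above from the OpenCV SURF source: s = KeyPoint.size * 1.2 / 9
kp_size = 16.0  # hypothetical KeyPoint.size value
s_opencv = kp_size * 1.2 / 9.0

print(sigma_paper, s_opencv)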

How do I 'fit a line' to a cluster of pixels?

I would like to generate a polynomial 'fit' to the cluster of colored pixels in the image here
(The point being that I would like to measure how closely that cluster approximates a horizontal line.)
I thought of using grabit or something similar and then treating this as a cloud of points in a graph. But is there a quicker function to do so directly on the image file?
thanks!
Here is a Python implementation. Basically we find all (xi, yi) coordinates of the colored regions, then set up a regularized least squares system where we want to find the vector of weights (w0, ..., wd) such that yi = w0 + w1 xi + w2 xi^2 + ... + wd xi^d holds "as close as possible" in the least squares sense.
import numpy as np
import matplotlib.pyplot as plt
def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])

def feature(x, order=3):
    """Generate polynomial feature of the form
    [1, x, x^2, ..., x^order] where x is the column of x-coordinates
    and 1 is the column of ones for the intercept.
    """
    x = x.reshape(-1, 1)
    return np.power(x, np.arange(order+1).reshape(1, -1))
I_orig = plt.imread("2Md7v.jpg")
# Convert to grayscale
I = rgb2gray(I_orig)
# Mask out region
mask = I > 20
# Get coordinates of pixels corresponding to marked region
X = np.argwhere(mask)
# Use the value as weights later
weights = I[mask] / float(I.max())
# Convert to diagonal matrix
W = np.diag(weights)
# Column indices
x = X[:, 1].reshape(-1, 1)
# Row indices to predict. Note origin is at top left corner
y = X[:, 0]
We want to find vector w that minimizes || Aw - y ||^2
so that we can use it to predict y = w . x
Here are 2 versions. One is a vanilla least squares with l2 regularization and the other is weighted least squares with l2 regularization.
# Ridge regression, i.e., least squares with l2 regularization.
# Should probably use a more numerically stable implementation,
# e.g., that in Scikit-Learn
# alpha is regularization parameter. Larger alpha => less flexible curve
alpha = 0.01
# Construct data matrix, A
order = 3
A = feature(x, order)
# w = inv (A^T A + alpha * I) A^T y
w_unweighted = np.linalg.pinv( A.T.dot(A) + alpha * np.eye(A.shape[1])).dot(A.T).dot(y)
# w = inv (A^T W A + alpha * I) A^T W y
w_weighted = np.linalg.pinv( A.T.dot(W).dot(A) + alpha * \
np.eye(A.shape[1])).dot(A.T).dot(W).dot(y)
The result
# Generate test points
n_samples = 50
x_test = np.linspace(0, I_orig.shape[1], n_samples)
X_test = feature(x_test, order)
# Predict y coordinates at test points
y_test_unweighted = X_test.dot(w_unweighted)
y_test_weighted = X_test.dot(w_weighted)
# Display
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.imshow(I_orig)
ax.plot(x_test, y_test_unweighted, color="green", marker='o', label="Unweighted")
ax.plot(x_test, y_test_weighted, color="blue", marker='x', label="Weighted")
fig.legend()
fig.savefig("curve.png")
For simple straight line fit, set the argument order of feature to 1. You can then use the gradient of the line to get a sense of how close it is to a horizontal line (e.g., by checking the angle of its slope).
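For example, a tiny follow-up for the order=1 case, reusing x, y, alpha and feature from the code above, that turns the fitted slope into an angle:
# Degree-1 fit: w_line = [intercept, slope] in pixel coordinates
A1 = feature(x, order=1)
w_line = np.linalg.pinv(A1.T.dot(A1) + alpha * np.eye(A1.shape[1])).dot(A1.T).dot(y)
# Angle of the fitted line relative to horizontal (0 degrees = horizontal).
# Note the image row axis points downward, so the sign is flipped visually.
angle_deg = np.degrees(np.arctan(w_line[1]))
print("Slope angle: %.2f degrees" % angle_deg)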
It is also possible to set this to any degree of polynomial you want. I find that degree 3 looks pretty good. In that case, 6 times the absolute value of the coefficient corresponding to x^3 (w_unweighted[3] or w_weighted[3]) is one measure of the curvature of the line.
See A measure for the curvature of a quadratic polynomial in Matlab for additional details.

Implementing Otsu binarization for faded images of documents

I'm trying to implement Otsu binarization technique on document images such as the one shown:
Could someone please tell me how to implement the code in MATLAB?
Taken from Otsu's method on Wikipedia
I = imread('cameraman.tif');
Step 1. Compute histogram and probabilities of each intensity level.
nbins = 256; % Number of bins
counts = imhist(I,nbins); % Each intensity increments the histogram from 0 to 255
p = counts / sum(counts); % Probabilities
Step 2. Set up initial omega_i(0) and mu_i(0)
omega1 = 0;
omega2 = 1;
mu1 = 0;
mu2 = mean(I(:));
Step 3. Step through all possible thresholds from 0 to maximum intensity (255)
Step 3.1 Update omega_i and mu_i
Step 3.2 Compute sigma_b_squared
for t = 1:nbins
omega1(t) = sum(p(1:t));
omega2(t) = sum(p(t+1:end));
mu1(t) = sum(p(1:t).*(1:t)');
mu2(t) = sum(p(t+1:end).*(t+1:nbins)');
end
sigma_b_squared_wiki = omega1 .* omega2 .* (mu2-mu1).^2; % Eq. (14)
sigma_b_squared_otsu = (mu1(end) .* omega1-mu1) .^2 ./(omega1 .* (1-omega1)); % Eq. (18)
Step 4 Desired threshold corresponds to the location of maximum of sigma_b_squared
[~,thres_level_wiki] = max(sigma_b_squared_wiki);
[~,thres_level_otsu] = max(sigma_b_squared_otsu);
There are some differences between the wiki version, eq. (14), and Otsu's eq. (18), and I don't know why. But thres_level_otsu corresponds to MATLAB's implementation graythresh(I).
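For comparison, the same steps translate almost directly to Python/NumPy. A minimal sketch of the eq. (18) form, where gray is assumed to be a uint8 grayscale image and otsu_threshold is just an illustrative helper name:
import numpy as np

def otsu_threshold(gray):
    # Step 1: histogram and probabilities of each intensity level
    counts, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = counts / counts.sum()
    levels = np.arange(256)
    # Steps 2-3: cumulative class probability and (unnormalized) class mean
    # for every candidate threshold t at once
    omega1 = np.cumsum(p)
    mu1 = np.cumsum(p * levels)
    mu_total = mu1[-1]
    # Between-class variance, eq. (18): (mu_T*omega1 - mu1)^2 / (omega1*(1 - omega1))
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega1 - mu1) ** 2 / (omega1 * (1 - omega1))
    sigma_b = np.nan_to_num(sigma_b)
    # Step 4: the threshold is where the between-class variance peaks
    return int(np.argmax(sigma_b))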
Since the function graythresh in MATLAB implements the Otsu method, what you have to do is convert your image to grayscale and then use the im2bw function to binarize the image using the threshold level returned by graythresh.
To convert your image I to grayscale you can use the following code:
I = im2uint8(I);
if size(I,3) ~= 1
I = rgb2gray(I);
end;
To get the binary image Ib using the Otsu's method, use the following code:
Ib = im2bw(I, graythresh(I));
You should get the following result:
Starting out with your initial question about implementing Otsu thresholding: it is true that MATLAB's graythresh function is based on that method.
Otsu's method takes the threshold value as the valley between two histogram peaks, one formed by the foreground pixels and the other by the background pixels.
Pertaining to your image, which looks like a historical manuscript, I found this paper that compares the methods that can be used for thresholding document images.
You can also download and read up on Sauvola thresholding from here.
Good luck with the implementation =)
Corrected MATLAB Implementation (for 2d matrix)
function [T] = myotsu(I,N);
% create histogram
nbins = N;
[x,h] = hist(I(:),nbins);
% calculate probabilities
p = x./sum(x);
% initialisation
om1 = 0;
om2 = 1;
mu1 = 0;
mu2 = mode(I(:));
for t = 1:nbins,
om1(t) = sum(p(1:t));
om2(t) = sum(p(t+1:nbins));
mu1(t) = sum(p(1:t).*[1:t]);
mu2(t) = sum(p(t+1:nbins).*[t+1:nbins]);
end
sigma = (mu1(nbins).*om1-mu1).^2./(om1.*(1-om1));
idx = find(sigma == max(sigma));
T = h(idx(1));
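As a quick cross-check of the threshold outside MATLAB, OpenCV's built-in Otsu can be called from Python in one line (a sketch assuming img is a grayscale uint8 version of the document image):
import cv2

# With THRESH_OTSU the passed threshold (0) is ignored and chosen automatically
t, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold chosen by OpenCV:", t)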
