How to implement a better sliding window algorithm? - opencv

So I have been writing my own code for HOG and its variant to work with depth images. However, I am stuck testing my trained SVM in the detection-window part.
All I've done right now is to first create image pyramids out of the original image, and run a 64x128 sliding window from the top-left corner to the bottom-right.
Here's a video capture of it: http://youtu.be/3cNFOd7Aigc
Now the issue is that I'm getting more false positives than I expected.
Is there a way that I can remove all these false positives (besides training with more images)? So far I can get the 'score' from the SVM, which is the signed distance to the margin itself. How can I use that to improve my results?
Does anyone have any insight into implementing a good sliding window algorithm?

What you could do is add a processing step to find the locally strongest response from the SVM. Let me explain.
What you appear to be doing right now:
for each sliding window W, record category[W] = SVM.hardDecision(W)
A hard decision means it returns a boolean or integer; for 2-category classification it could be written like this:
hardDecision(W) = bool( softDecision(W) > 0 )
Since you mentioned OpenCV, in CvSVM::predict you should set returnDFVal to true:
returnDFVal – Specifies a type of the return value. If true and the problem is 2-class classification then the method returns the decision function value that is signed distance to the margin, else the function returns a class label (classification) or estimated function value (regression).
from the documentation.
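In OpenCV 2.4's Python bindings, for example, this would look roughly like the following (a sketch only: svm is assumed to be a trained cv2.SVM instance and descriptor a float32 HOG feature row, neither of which appears in the question):

# sketch: `svm` is a trained cv2.SVM, `descriptor` a float32 feature vector
score = svm.predict(descriptor, returnDFVal=True)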
What you could do is:
for each sliding window W, record score[W] = SVM.softDecision(W)
for each W, compute and record:
neighbors = max(score[W_left], score[W_right], score[W_up], score[W_bottom])
local[W] = score[W] > neighbors
powerful[W] = score[W] > threshold.
for each W, you have a positive if local[W] && powerful[W]
Since your classifier will have a positive response for windows close (in space and/or appearance) to your true positives, the idea is to record the scores for each window, and then only keep positives which
are a locally maximal score (greater than their neighbors) --> local
are strong enough --> powerful
You could set threshold to 0 and adjust it until you get satisfying results. Or you could calibrate it automatically using your training set.
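For illustration, here is a minimal NumPy sketch of that local-maximum plus threshold filtering, assuming the decision values for every window position have already been collected in a 2D float array score (one entry per sliding-window position; all names here are illustrative):

import numpy as np

def filter_detections(score, threshold=0.0):
    # pad with -inf so border windows compare only against real neighbours
    padded = np.pad(score, 1, mode='constant', constant_values=-np.inf)
    neighbors = np.maximum.reduce([
        padded[:-2, 1:-1],   # window above (W_up)
        padded[2:, 1:-1],    # window below (W_bottom)
        padded[1:-1, :-2],   # window to the left (W_left)
        padded[1:-1, 2:],    # window to the right (W_right)
    ])
    local = score > neighbors      # locally strongest response
    powerful = score > threshold   # strong enough in absolute terms
    return np.argwhere(local & powerful)

A more common alternative is classic non-maximum suppression over the detected boxes, but the sketch above sticks to the scheme described in this answer.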

Related

Is it mandatory to use cv2.getOptimalNewCameraMatrix() with cv2.undistort?

In the last stage of my thesis, I have to compute 3D bounding boxes through two IP cameras using YOLO. I have already found my exterior orientation with cv2.solvePnP() using some ground-truth values (measured values of points on the wall of the room), and I did the calibration with the rational model enabled because my cameras have wide lenses. All I am asking is: through the whole process, do I have to use cv2.getOptimalNewCameraMatrix() with cv2.undistort() in order to correctly continue towards my 3D bounding boxes, or not? My results without using it, with parameter alpha = 0 and alpha = 1, seem quite correct to me.
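For reference, the two options being compared look roughly like this (a sketch: the file name and calibration values are placeholders, not from the question; dist would hold the 8 coefficients of the rational model):

import cv2
import numpy as np

img = cv2.imread('frame.png')               # hypothetical input frame
K = np.array([[800.0, 0.0, 320.0],          # placeholder camera matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(8)                          # placeholder rational-model coefficients

h, w = img.shape[:2]
# Option 1: undistort using the original camera matrix
undistorted = cv2.undistort(img, K, dist)
# Option 2: compute a new camera matrix first; alpha=0 crops away invalid
# pixels, alpha=1 keeps the full field of view (with black border regions)
newK, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
undistorted_new = cv2.undistort(img, K, dist, None, newK)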

Why does the Wiener filter only reduce noise in my case, and not the amount of blur?

I am implementing Wiener filtering in Python, applied to an image blurred with a disk-shaped point spread function. Below is my code for making the disk-shaped PSF and for the Wiener filter:
import numpy as np

def l2dist(x0, y0, x1, y1):
    # Euclidean distance helper (not shown in the original post, assumed)
    return np.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)

def weinerFiltering(kernel, K_const, image):
    # F(u,v): spectrum of the degraded image
    copy_img = np.copy(image)
    image_fft = np.fft.fft2(copy_img)
    # H(u,v): spectrum of the PSF, zero-padded to the image size
    kernel_fft = np.fft.fft2(kernel, s=copy_img.shape)
    # |H(u,v)|
    kernel_fft_mag = np.abs(kernel_fft)
    # H*(u,v)
    kernel_conj = np.conj(kernel_fft)
    # Wiener filter: H* / (|H|^2 + K)
    f = kernel_conj / (kernel_fft_mag ** 2 + K_const)
    return np.abs(np.fft.ifft2(image_fft * f))

def makeDiskShape(arr, radius, centrX, centrY):
    # set to 1 every pixel within `radius` of the centre, then normalize
    for i in range(centrX - radius, centrX + radius):
        for j in range(centrY - radius, centrY + radius):
            if l2dist(centrX, centrY, i, j) <= radius:
                arr[i][j] = 1
    return arr / np.sum(arr)
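A hypothetical usage of the two functions above, with the sizes mentioned below (20x20 PSF, radius 9); a random array stands in for the actual blurred image:

blurred = np.random.rand(256, 256)                  # stand-in for the degraded image
psf = makeDiskShape(np.zeros((20, 20)), 9, 10, 10)  # disk PSF, radius 9, centred
restored = weinerFiltering(psf, 50, blurred)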
This is the blurred and Gaussian-noised image:
This is the result I get after Wiener filtering with a K value of 50:
The result does not seem very good; can someone help?
It seems the noise is reduced but the amount of blur is not. The disk-shaped PSF matrix has shape 20x20 with radius 9, and looks like this:
Update
Using the power spectrum of the ground-truth image and the noise to calculate the K constant value, I still get strong artifacts.
This is the noised and blurred image:
This is the result after using the power spectrum in place of a constant K value:
Reduce your value of K. You need to play around with it until you get good results: if it's too large it doesn't filter, if it's too small you get strong artifacts.
If you have knowledge of the noise variance, you can use that to estimate the regularization parameter. In the Wiener filter, the constant K is a simplification of N/S, where N is the noise power and S is the signal power. Both these values are frequency-dependent. The signal power S can be estimated by the Fourier transform of the autocorrelation function of the image to be filtered. The noise power is hard to estimate, but if you have such an estimate (or know it because you created the noisy image synthetically), then you can plug that value into the equation. Note that this is the noise power, not the variance of the noise.
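Translated to the NumPy style of the code in the question, that idea looks roughly like this (a sketch assuming white noise of known variance; the function and variable names are illustrative):

import numpy as np

def wiener_deconvolve(image, kernel, noise_var):
    # frequency-dependent regularization: K(u,v) = N / S(u,v)
    F = np.fft.fft2(image)
    H = np.fft.fft2(kernel, s=image.shape)
    S = np.abs(F) ** 2            # crude signal-power estimate (periodogram)
    N = noise_var * image.size    # white-noise power, flat across frequencies
                                  # (matches NumPy's unnormalized FFT convention)
    G = np.conj(H) / (np.abs(H) ** 2 + N / S)
    return np.real(np.fft.ifft2(F * G))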
The following code uses DIPlib (the Python interface we call PyDIP) to demonstrate Wiener deconvolution (disclaimer: I'm an author). I don't think it is hard to convert this code to use other libraries.
import PyDIP as dip
image = dip.ImageRead('trui.ics')
kernel = dip.CreateGauss([3,3]).Pad(image.Sizes())
smooth = dip.ConvolveFT(image, kernel)
smooth = dip.GaussianNoise(smooth, 5.0) # variance = 5.0
H = dip.FourierTransform(kernel)
F = dip.FourierTransform(smooth)
S = dip.SquareModulus(F) # signal power estimate
N = dip.Image(5.0 * smooth.NumberOfPixels()) # noise power (has same value at all frequencies)
Hinv = dip.Conjugate(H) / ( dip.SquareModulus(H) + N / S )
out = dip.FourierTransform(F * Hinv, {"inverse", "real"})
The smooth image looks like this:
The out image that comes from deconvolving the image above looks like this:
Don't expect a perfect result. The regularization term impedes a perfect inverse filtering because such filtering would enhance the noise so strongly that it would swamp the signal and produce a totally useless output. The Wiener filter finds a middle ground between undoing the convolution and suppressing the noise.
The DIPlib documentation for WienerDeconvolution explains some of the equations involved.

How to use the discrete cosine transform (DCT) in OpenCV

dct doesn't do the conversion properly in OpenCV.
import cv2
import numpy as np

imf = np.float32(block)   # block is the 8x8 array shown below
dct = cv2.dct(imf)
[[154,123,123,123,123,123,123,136],
[192,180,136,154,154,154,136,110],
[254,198,154,154,180,154,123,123],
[239,180,136,180,180,166,123,123],
[180,154,136,167,166,149,136,136],
[128,136,123,136,154,180,198,154],
[123,105,110,149,136,136,180,166],
[110,136,123,123,123,136,154,136]]
This is a block of an image; when converting it with the code shown above,
[162.3 ,40.6, 20.0...
[30.5 ,108.4...
this should be the result,
[1186.3 , 40.6, 20.0...
[30.5, 108.4 ....
but this is the result I actually found. (Sample block from https://www.math.cuhk.edu.hk/~lmlui/dct.pdf.)
The DCT is working fine. The difference between what you got and what you expect is because that particular example actually does the DCT on M instead of on the original image I. In this case, as the paper shows, M = I - 128. The only difference in your example is that you don't subtract off that piece, so the values are all larger. In a cosine or Fourier transform, the first coefficient (the "DC offset", as it is sometimes called) has a higher value because your image values are simply greater. But that's also why all the other coefficients are the same: if you add to or subtract from the entire image equally, the coefficients of the transform stay the same, except the very first one.
From the standard definition of the DCT:

X_k = sum_{n=0}^{N-1} x_n * cos( (pi/N) * (n + 1/2) * k ),   k = 0, ..., N-1
You can see that for the first coefficient, with k = 0, the argument of the cosine is just 0, and cos(0) = 1. Thus X_0, as shown above, is just the sum of all the x_n values. Generally this value may be scaled by something relating to N so that it's something like an average. When scaled that way, it relates back to the X_0 term being a "DC offset", which you'll see described as the "mean value of the signal", or in other words, how far the signal is from 0. This is super useful to have as one of the cosine/Fourier transform coefficients, as it can then completely describe a signal; all the other coefficients describe the frequency content and so say nothing about how far the values are from 0, but the first coefficient, the DC offset, does tell you the shift!
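As a quick check of this claim, a small sketch using the sample block from the question: subtracting 128 changes only the first (DC) coefficient of the DCT.

import cv2
import numpy as np

block = np.float32([[154,123,123,123,123,123,123,136],
                    [192,180,136,154,154,154,136,110],
                    [254,198,154,154,180,154,123,123],
                    [239,180,136,180,180,166,123,123],
                    [180,154,136,167,166,149,136,136],
                    [128,136,123,136,154,180,198,154],
                    [123,105,110,149,136,136,180,166],
                    [110,136,123,123,123,136,154,136]])

d_raw = cv2.dct(block)          # DCT of the raw block
d_shift = cv2.dct(block - 128)  # DCT of the mean-shifted block

print(d_raw[0, 0] - d_shift[0, 0])      # 1024.0 == 128 * 8: only the DC term moves
mask = np.ones_like(d_raw, dtype=bool)
mask[0, 0] = False
print(np.allclose(d_raw[mask], d_shift[mask]))   # True: every other coefficient matches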

Do positive and negative samples after flipping have an effect on the result?

I'm using the machine-learning algorithm XGBoost. When I flip the positive label (previously 1) to 0 and the negative label (previously 0) to 1, my AUROC changes a little,
from 0.778~ to 0.779~, and I don't know why.
I used scale_pos_weight;
all other params are the same;
set.seed is also the same.

Handling zero rows/columns in the covariance matrix during the EM algorithm

I tried to implement GMMs, but I have a few problems during the EM algorithm.
Let's say I've got 3D samples (stat1, stat2, stat3) which I use to train the GMMs.
One of my training sets for one of the GMMs has a 0 for stat1 in nearly every sample. During training I get really small numbers (like 1.4456539880060609E-124) in the first row and column of the covariance matrix, which in the next iteration of the EM algorithm leads to 0.0 in the first row and column.
I get something like this:
0.0 0.0 0.0
0.0 5.0 6.0
0.0 2.0 1.0
I need the inverse covariance matrix to calculate the density, but since one column is zero I can't compute it.
I thought about falling back to the old covariance matrix (and mean), or replacing every 0 with a really small number.
Or is there another simple solution to this problem?
Simply put, your data lies in a degenerate subspace of your actual input space, and the GMM in its most generic form is not well suited to such a setting. The problem is that the empirical covariance estimator you use simply fails for such data (as you said, you cannot invert it). What do you usually do? You change the covariance estimator to a constrained/regularized one, such as:
Constant-based shrinking: instead of using Sigma = Cov(X) you use Sigma = Cov(X) + eps * I, where eps is a predefined small constant and I is the identity matrix. Consequently you never have zero values on the diagonal, and it is easy to prove that for a reasonable epsilon this will be invertible.
Nicely fitted shrinking, like the Oracle Approximating Shrinkage (OAS) estimator or the Ledoit-Wolf covariance estimator, which find the best epsilon based on the data itself.
Constraining your Gaussians to, for example, the spherical family, i.e. N(m, sigma * I), where sigma = avg_i( cov(X[:, i]) ) is the mean variance per dimension. This limits you to spherical Gaussians, and also solves the above issue.
There are many more possible solutions, but all are based on the same idea: change the covariance estimator in such a way that you have a guarantee of invertibility.
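For what it's worth, here is a short scikit-learn sketch of the first two options (the data is synthetic, just mimicking a degenerate stat1 dimension as in the question):

import numpy as np
from sklearn.covariance import LedoitWolf
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 0] = 0.0  # stat1 is always zero, so the empirical covariance is singular

# Constant-based shrinking: GaussianMixture adds reg_covar (= eps) to the
# diagonal of every covariance estimate, keeping it invertible
gmm = GaussianMixture(n_components=2, reg_covar=1e-6).fit(X)

# Data-driven shrinkage: Ledoit-Wolf chooses the shrinkage amount itself
sigma = LedoitWolf().fit(X).covariance_
print(np.linalg.cond(gmm.covariances_[0]), np.linalg.cond(sigma))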
