In the last stage of my thesis, I have to compute 3D bounding boxes from two IP cameras using YOLO. I have already found my exterior orientation with cv2.solvePnP() using some ground-truth values (measured points on the walls of the room), and I did the calibration with the rational model enabled because my cameras have wide-angle lenses. All I am asking is: throughout the whole process, do I have to use cv2.getOptimalNewCameraMatrix() with cv2.undistort() in order to correctly continue towards my 3D bounding boxes, or not? Because my results without using it, as well as with alpha = 0 and alpha = 1, all seem quite correct to me.
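For reference, a minimal sketch of the two undistortion variants in question; the intrinsics K and the 8 rational-model distortion coefficients below are placeholders standing in for the output of cv2.calibrateCamera():

import cv2
import numpy as np

img = cv2.imread('frame.png')  # a frame from one of the IP cameras
h, w = img.shape[:2]

# Placeholder intrinsics; in practice these come from calibration
# with cv2.CALIB_RATIONAL_MODEL enabled.
K = np.array([[800.0, 0.0, w / 2], [0.0, 800.0, h / 2], [0.0, 0.0, 1.0]])
dist = np.zeros(8)

# Variant 1: undistort against the original camera matrix.
und_plain = cv2.undistort(img, K, dist)

# Variant 2: compute an optimal new camera matrix first.
# alpha=0 crops to valid pixels only; alpha=1 keeps every source pixel.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
und_new = cv2.undistort(img, K, dist, None, new_K)

Whichever variant is used, the key point is consistency downstream: pixels in the plain undistorted image behave as if projected through K with zero distortion, while pixels in the new_K image correspond to new_K, so the same matrix must be used in solvePnP/triangulation afterwards.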
I am implementing Wiener filtering in Python, applied to an image blurred with a disk-shaped point spread function. I am including the code for making the disk-shaped PSF and for the Wiener filter:
import numpy as np

def weinerFiltering(kernel, K_const, image):
    # F(u,v): spectrum of the degraded image
    copy_img = np.copy(image)
    image_fft = np.fft.fft2(copy_img)
    # H(u,v): transfer function of the PSF, zero-padded to the image size
    kernel_fft = np.fft.fft2(kernel, s=copy_img.shape)
    # |H(u,v)|
    kernel_fft_mag = np.abs(kernel_fft)
    # H*(u,v)
    kernel_conj = np.conj(kernel_fft)
    # Wiener filter: H*(u,v) / (|H(u,v)|^2 + K)
    f = kernel_conj / (kernel_fft_mag**2 + K_const)
    return np.abs(np.fft.ifft2(image_fft * f))
def l2dist(x0, y0, x1, y1):
    # Euclidean distance between (x0, y0) and (x1, y1)
    return np.sqrt((x0 - x1)**2 + (y0 - y1)**2)

def makeDiskShape(arr, radius, centrX, centrY):
    for i in range(centrX - radius, centrX + radius):
        for j in range(centrY - radius, centrY + radius):
            if l2dist(centrX, centrY, i, j) <= radius:
                arr[i][j] = 1
    # Normalize so the PSF sums to 1
    return arr / np.sum(arr)
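A minimal usage sketch for the two functions above (the SciPy convolution, image size, and noise level are illustrative assumptions, not the exact setup from the question):

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
sharp = rng.random((256, 256))                      # stand-in test image
psf = makeDiskShape(np.zeros((20, 20)), 9, 10, 10)  # 20x20 PSF, radius 9
blurred = fftconvolve(sharp, psf, mode='same')
blurred += rng.normal(0.0, 0.01, blurred.shape)     # additive Gaussian noise
restored = weinerFiltering(psf, 0.01, blurred)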
This is the blurred and Gaussian-noised image:
This is the result I am getting after Wiener filtering with a K value of 50:
The result does not seem very good; can someone help?
It seems the noise is reduced, but not the amount of blur. The shape of the disk-shaped PSF matrix is 20x20, with a radius of 9, which looks like this:
Update
Using the power spectra of the ground-truth image and the noise to calculate the K constant, I am still getting strong artifacts.
This is the noised and blurred image:
This is the result after using the power spectrum in place of a constant K value:
Reduce your value of K. You need to play around with it until you get good results. If it's too large, it doesn't filter; if it's too small, you get strong artifacts.
If you have knowledge of the noise variance, you can use that to estimate the regularization parameter. In the Wiener filter, the constant K is a simplification of N/S, where N is the noise power and S is the signal power. Both these values are frequency-dependent. The signal power S can be estimated by the Fourier transform of the autocorrelation function of the image to be filtered. The noise power is hard to estimate, but if you have such an estimate (or know it because you created the noisy image synthetically), then you can plug that value into the equation. Note that this is the noise power, not the variance of the noise.
The following code uses DIPlib (the Python interface we call PyDIP) to demonstrate Wiener deconvolution (disclaimer: I'm an author). I don't think it is hard to convert this code to use other libraries.
import PyDIP as dip
image = dip.ImageRead('trui.ics')
kernel = dip.CreateGauss([3,3]).Pad(image.Sizes())
smooth = dip.ConvolveFT(image, kernel)
smooth = dip.GaussianNoise(smooth, 5.0) # variance = 5.0
H = dip.FourierTransform(kernel)
F = dip.FourierTransform(smooth)
S = dip.SquareModulus(F) # signal power estimate
N = dip.Image(5.0 * smooth.NumberOfPixels()) # noise power (has same value at all frequencies)
Hinv = dip.Conjugate(H) / ( dip.SquareModulus(H) + N / S )
out = dip.FourierTransform(F * Hinv, {"inverse", "real"})
The smooth image looks like this:
The out image that comes from deconvolving the image above looks like this:
Don't expect a perfect result. The regularization term impedes a perfect inverse filtering because such filtering would enhance the noise so strongly that it would swamp the signal and produce a totally useless output. The Wiener filter finds a middle ground between undoing the convolution and suppressing the noise.
The DIPlib documentation for WienerDeconvolution explains some of the equations involved.
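Since converting this to other libraries is straightforward, here is a rough NumPy equivalent of the same construction (my paraphrase, not DIPlib's implementation; the flat noise power assumes white Gaussian noise of known variance):

import numpy as np

def wiener_deconvolve(blurred, kernel, noise_variance):
    H = np.fft.fft2(kernel, s=blurred.shape)  # transfer function H(u,v)
    F = np.fft.fft2(blurred)
    S = np.abs(F)**2                          # signal power estimate |F(u,v)|^2
    N = noise_variance * blurred.size         # flat noise power spectrum
    Hinv = np.conj(H) / (np.abs(H)**2 + N / S)
    return np.real(np.fft.ifft2(F * Hinv))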
cv2.dct() doesn't do the conversion properly in OpenCV.
import numpy as np
import cv2

imf = np.float32(block)  # block: the 8x8 array below
dct = cv2.dct(imf)
[[154,123,123,123,123,123,123,136],
[192,180,136,154,154,154,136,110],
[254,198,154,154,180,154,123,123],
[239,180,136,180,180,166,123,123],
[180,154,136,167,166,149,136,136],
[128,136,123,136,154,180,198,154],
[123,105,110,149,136,136,180,166],
[110,136,123,123,123,136,154,136]]
This is a block of an image. When converting it with the code shown above, this should be the result (for the sample block, see https://www.math.cuhk.edu.hk/~lmlui/dct.pdf):
[162.3 ,40.6, 20.0...
[30.5 ,108.4...
but I found this result instead:
[1186.3 , 40.6, 20.0...
[30.5, 108.4 ....
The DCT is working fine. The difference between what you got and what you expect is because that particular example actually does the DCT on M instead of on the original image I. In this case, as the paper shows, M = I - 128. The only difference in your example is that you don't subtract off that piece, so the values are all larger. In a cosine or Fourier transform, the first coefficient (the "DC offset", as it is sometimes called) has a higher value because your image values are simply greater. And that's also why all the other coefficients are the same: if you add or subtract some amount equally across an entire image, the coefficients of the transform stay the same, except for the very first one.
From the standard definition of the DCT (the DCT-II):
X_k = sum_{n=0}^{N-1} x_n * cos[ (pi/N) * (n + 1/2) * k ],  for k = 0, ..., N-1
You can see here that for the first coefficient, with k = 0, the argument of the cosine function is just 0, and cos(0) = 1. Thus X_0, as defined above, is just the sum of all the x_n values. Generally this value may be scaled by something relating to N so that it's something like an average. When scaled that way, it relates back to the X_0 term being a "DC offset", which you'll see described as the "mean value of the signal", or in other words, how far the signal is from 0. This is super useful to have as one of the cosine/Fourier transform coefficients, as it then can completely describe a signal: all the other coefficients describe the frequency content, so they say nothing about how far the values are from 0, but the first coefficient, the DC offset, does tell you the shift!
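A quick way to verify this with OpenCV: subtracting 128 before the DCT changes only the first coefficient. With OpenCV's orthonormal scaling, the DC term of an 8x8 block shifts by exactly 128 * 8 = 1024, which matches the 1186.3 vs. 162.3 difference above.

import numpy as np
import cv2

# The 8x8 block from the question.
block = np.float32([
    [154, 123, 123, 123, 123, 123, 123, 136],
    [192, 180, 136, 154, 154, 154, 136, 110],
    [254, 198, 154, 154, 180, 154, 123, 123],
    [239, 180, 136, 180, 180, 166, 123, 123],
    [180, 154, 136, 167, 166, 149, 136, 136],
    [128, 136, 123, 136, 154, 180, 198, 154],
    [123, 105, 110, 149, 136, 136, 180, 166],
    [110, 136, 123, 123, 123, 136, 154, 136],
])

d_raw = cv2.dct(block)            # DCT of I
d_shifted = cv2.dct(block - 128)  # DCT of M = I - 128, as in the paper

print(d_raw[0, 0] - d_shifted[0, 0])                    # ~1024.0 (DC term only)
print(np.allclose(d_raw.flat[1:], d_shifted.flat[1:]))  # True: AC terms match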
I'm using the machine learning algorithm xgboost. When I flip the positive label (before it was 1) to 0 and the negative label (before it was 0) to 1, my AUROC changes a little, from ~0.778 to ~0.779, and I don't know why.
I used scale_pos_weight;
all other params are the same;
set.seed is also the same.
I tried to implement GMMs, but I have a few problems during the EM algorithm.
Let's say I've got 3D samples (stat1, stat2, stat3) which I use to train the GMMs.
One of my training sets for one of the GMMs has, in nearly every sample, a 0 for stat1. During training I get really small numbers (like 1.4456539880060609E-124) in the first row and column of the covariance matrix, which leads, in the next iteration of the EM algorithm, to 0.0 in the first row and column.
I get something like this:
0.0 0.0 0.0
0.0 5.0 6.0
0.0 2.0 1.0
I need the inverse covariance matrix to calculate the density, but since one column is zero, I can't do this.
I thought about falling back to the old covariance matrix (and mean), or replacing every 0 with a really small number.
Or is there another simple solution to this problem?
Simply put, your data lies in a degenerate subspace of your actual input space, and the GMM in its most generic form is not well suited to such a setting. The problem is that the empirical covariance estimator you use simply fails for such data (as you said, you cannot invert it). What do you usually do? You change the covariance estimator to a constrained/regularized one, such as:
Constant-based shrinking: instead of using Sigma = Cov(X) you use Sigma = Cov(X) + eps * I, where eps is a predefined small constant and I is the identity matrix. Consequently you never have zero values on the diagonal, and it is easy to prove that for a reasonable epsilon this will be invertible (see the sketch after this list).
Nicely fitted shrinking, like the Oracle Approximating Shrinkage (OAS) estimator or the Ledoit-Wolf estimator, which find the best epsilon based on the data itself.
Constraining your Gaussians to, for example, the spherical family, thus N(m, sigma * I), where sigma = avg_i( cov(X[:, i]) ) is the mean covariance per dimension. This limits you to spherical Gaussians, but it also solves the above issue.
Many more solutions are possible, but they are all based on the same thing: change the covariance estimator in such a way that you have a guarantee of invertibility.
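A minimal NumPy sketch of the constant-based shrinking option (eps here is a hypothetical tuning constant, not a value prescribed by any particular library):

import numpy as np

def regularized_covariance(X, eps=1e-6):
    # Empirical covariance shrunk toward the identity, so the result
    # stays invertible even when a dimension is (nearly) constant.
    sigma = np.cov(X, rowvar=False)
    return sigma + eps * np.eye(sigma.shape[0])

# stat1 is identically 0 here, so plain np.cov(X) is singular.
X = np.zeros((100, 3))
X[:, 1:] = np.random.default_rng(0).normal(size=(100, 2))
inv_sigma = np.linalg.inv(regularized_covariance(X))  # now well defined

For the data-fitted variants, scikit-learn ships ready-made estimators in sklearn.covariance.LedoitWolf and sklearn.covariance.OAS.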