How to use Laplacian of Gaussian for elliptical blobs? - image-processing

Currently, the scikit-image package implements the Laplacian of Gaussian (LoG) to find spherical blobs. The package is here. How can one convert this Laplacian of Gaussian into an elliptical Gaussian kernel? This is required to detect non-spherical blobs in an image.
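As a partial workaround (a minimal sketch, not built-in scikit-image functionality): scipy.ndimage.gaussian_laplace accepts one sigma per axis, which yields an axis-aligned elliptical LoG response; rotated ellipses would additionally require rotating the image or building a rotated kernel. The function name elliptical_log and the sigma_y * sigma_x normalization below are my own choices.

import numpy as np
from scipy import ndimage

def elliptical_log(image, sigma_y, sigma_x):
    # gaussian_laplace takes a per-axis sigma, so unequal values give
    # an elliptical (axis-aligned) LoG kernel. Negate so that bright
    # blobs produce positive peaks; the sigma product is a rough
    # anisotropic analogue of the usual sigma^2 scale normalization.
    return -sigma_y * sigma_x * ndimage.gaussian_laplace(
        image.astype(float), sigma=(sigma_y, sigma_x))

# response = elliptical_log(img, sigma_y=2.0, sigma_x=6.0)
# peaks = skimage.feature.peak_local_max(response, threshold_rel=0.3)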

Related

Fitting a 2d Gaussian with data points on python

I have a 2048x2048 image/array with some blobs, and I am trying to fit a 2D Gaussian to each blob to find its width. Image with blob
I know the position of the blobs, their intensity, and an approximation of the blob radii.
What can I do?
All the code that I found needs the Gaussian width...
Thanks
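For the record, here is a minimal sketch of one way to do this without knowing the width in advance, using scipy.optimize.curve_fit; the initial guesses come from the known position, intensity, and approximate radius. The helper names gauss2d and fit_blob are my own.

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    # Axis-aligned 2D Gaussian, flattened so curve_fit sees a 1D vector.
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 / (2*sx**2) + (y - y0)**2 / (2*sy**2))) + offset
    return g.ravel()

def fit_blob(image, x0, y0, amp, radius, window=15):
    # Fit inside a small window around the known center; assumes the
    # blob is at least `window` pixels away from the image border.
    yc, xc = int(round(y0)), int(round(x0))
    patch = image[yc-window:yc+window+1, xc-window:xc+window+1].astype(float)
    y, x = np.mgrid[yc-window:yc+window+1, xc-window:xc+window+1]
    p0 = (amp, x0, y0, radius, radius, patch.min())
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    return popt  # amp, x0, y0, sigma_x, sigma_y, offset

The fitted sigma_x and sigma_y are the widths you are after (multiply by 2*sqrt(2*ln 2) if you need the FWHM).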

Approximate image as summation of Gaussians

I have an image that looks like the following. I want to approximate the image content as a summation of Gaussian functions, where the Gaussian means represent the most important points in the image and the covariances represent the degree of spread. Is there an algorithm for this purpose?
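One possible algorithm (a sketch, assuming scikit-learn is available): treat the normalized intensity as a probability density, draw pixel coordinates from it, and fit a Gaussian mixture model; the fitted means mark the important points and the covariances their spread. The number of components must be chosen up front (or selected with a criterion such as BIC), and the function name below is my own.

import numpy as np
from sklearn.mixture import GaussianMixture

def image_as_gaussians(image, n_components=5, n_samples=20000, seed=0):
    # Sample (x, y) positions with probability proportional to intensity,
    # then fit a 2D Gaussian mixture to those samples.
    rng = np.random.default_rng(seed)
    img = image.astype(float)
    p = img.ravel() / img.sum()
    idx = rng.choice(img.size, size=n_samples, p=p)
    ys, xs = np.unravel_index(idx, img.shape)
    gmm = GaussianMixture(n_components=n_components).fit(np.column_stack([xs, ys]))
    return gmm.means_, gmm.covariances_, gmm.weights_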

Gaussian kernel in OpenCV to generate multiple scales

I want to implement an OpenCV version of VL_PHOW() (MATLAB src code) from VLFeat. In a few words, it's dense SIFT at multiple scales (increasing the SIFT descriptor bin size) to make it scale invariant.
However, the authors suggest applying a Gaussian kernel to improve the results. In particular, the Magnif parameter describes it:
Magnif 6: The image is smoothed by a Gaussian kernel of standard deviation SIZE / MAGNIF. Note that, in the standard SIFT descriptor, the magnification value is 3; here the default is 6, as it seems to perform better in applications.
And this is the relevant matlab code:
% smooth the image to the appropriate scale based on the size
% of the SIFT bins
sigma = opts.sizes(si) / opts.magnif ;
ims = vl_imsmooth(im, sigma) ;
My question is: how can I implement this in OpenCV? The equivalent function in OpenCV seems to be GaussianBlur, but I can't figure out how to express the code above in terms of that function.
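The detail that makes this work, per the GaussianBlur documentation: if you pass ksize=(0, 0), OpenCV derives the kernel size from sigma automatically, so you only need to supply SIZE / MAGNIF as the sigma. A sketch in Python; the wrapper name smooth_for_sift_bin is my own:

import cv2

def smooth_for_sift_bin(im, size, magnif=6.0):
    # Rough equivalent of: ims = vl_imsmooth(im, opts.sizes(si) / opts.magnif)
    # ksize=(0, 0) makes GaussianBlur compute the kernel size from sigma.
    sigma = size / magnif
    return cv2.GaussianBlur(im, (0, 0), sigmaX=sigma, sigmaY=sigma)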

Are Wavelet Coefficients simply the pixel values of the decomposed image in the 2D Discrete Wavelet Transform?

I've been working with the Discrete Wavelet Transform, and I'm new to this theory. I want to access and modify the wavelet coefficients of the decomposed image. Are those wavelet coefficients simply the pixel values of the decomposed image in the 2D DWT?
This is, for example, the result of a DWT decomposition:
So, when I want to access and modify the wavelet coefficients, can I just iterate through the pixel values of the above image? Thank you for your help.
No. The image is merely illustrative.
The image you are looking at does not exactly correspond to the original coefficients. The original wavelet coefficients are real numbers; what you see are their absolute values, quantized into the range 0 to 255.
It is also not true that the coefficients were calculated as pairwise sums and differences of the input samples; they were calculated using two complementary filters. See the description here. Crucially, because the coefficients were adjusted for display, it is no longer possible to synthesize the original image from them. If you need to synthesize the image, you cannot use the pixels of the referenced image.
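If you compute the transform yourself, you get the real-valued coefficients directly and can modify and invert them. A minimal sketch with PyWavelets (assuming that library is an option; the Haar wavelet and the threshold value are arbitrary choices):

import numpy as np
import pywt

img = np.random.rand(256, 256)            # stand-in for your image

# One-level 2D DWT: approximation plus horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# These arrays are the actual real-valued coefficients; modify them freely.
cD[np.abs(cD) < 0.1] = 0.0                # e.g. threshold the diagonal detail

# Synthesize the image from the (modified) coefficients.
rec = pywt.idwt2((cA, (cH, cV, cD)), 'haar')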

gaussian blur with FFT

I'm trying to implement a Gaussian blur using the FFT and found the following recipe here.
This means that you can take the Fourier transform of the image and the filter, multiply the (complex) results, and then take the inverse Fourier transform.
I've got a kernel K, a 7x7 matrix, and an image I, a 512x512 matrix.
I do not understand how to multiply K by I.
Is the only way to do that to make K as big as I (512x512)?
Yes, you do need to make K as big as I by padding it with zeros. Also, after padding, but before you take the FFT of the kernel, you need to translate it with wraparound, such that the center of the kernel (the peak of the Gaussian) is at (0,0). Otherwise, your filtered image will be translated. Alternatively, you can translate the resulting filtered image once you are done.
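A minimal numpy sketch of the padding and wraparound shift just described (a 7x7 kernel K and a 512x512 image I, as in the question; note that FFT-based filtering wraps around circularly at the image borders):

import numpy as np

def fft_blur(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    # Zero-pad the kernel to the image size...
    padded = np.zeros((H, W))
    padded[:kh, :kw] = kernel
    # ...then circularly shift so the kernel center (the Gaussian peak)
    # lands at (0, 0); otherwise the filtered image is translated.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))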
Another point: for small kernels not using the FFT may actually be faster. A 2D Gaussian kernel is separable, meaning that you can separate it into two 1D kernels for x and y. Then instead of a 2D convolution, you can do two 1D convolutions in x and y directions in the spatial domain. For smaller kernels that may end up being faster than doing the convolution in the frequency domain using the FFT.
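For comparison, a sketch of the separable spatial-domain version (using scipy.ndimage; the 3-sigma truncation radius is a common but arbitrary choice):

import numpy as np
from scipy import ndimage

def separable_gaussian_blur(image, sigma):
    # Build a 1D Gaussian, then run it along rows and columns in turn:
    # two 1D convolutions instead of one 2D convolution.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                          # unit sum, so brightness is preserved
    tmp = ndimage.convolve1d(image.astype(float), g, axis=1)   # along x
    return ndimage.convolve1d(tmp, g, axis=0)                  # along y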
If you are comfortable with pixel shaders, and if the FFT is not your main goal here but convolution with a Gaussian blur kernel is, then I can recommend my tutorial on what convolution is.
Regards.
