I am planning to use the discrete wavelet transform to extract textural features from grayscale images for classification purposes. However, I am not sure which type of wavelet I should choose. Most of the studies I have read use Haar or Daubechies wavelets when extracting features from images.
So, is there a way to determine which wavelet is suitable?
You can test your images and inspect the wavelet coefficients at each resolution (decomposition level, time/scale) that a wavelet transform returns. Based on these, you can design a simple equation as an objective function and let a simple for loop select the basis function (e.g., Haar, Daubechies 4, Daubechies 12, Morlet, Coiflet) for you.
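As a minimal sketch of that loop, assuming PyWavelets (pywt) and using coefficient-energy entropy as one possible objective function (lower entropy means the wavelet concentrates the image's energy in fewer coefficients; this criterion is my own example). Note that Morlet is a continuous wavelet and is not available for the discrete wavedec2:

import numpy as np
import pywt

def coefficient_entropy(image, wavelet, level=3):
    # Shannon entropy of the normalized detail-coefficient energies
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Flatten all detail subbands (horizontal, vertical, diagonal) into one vector
    details = np.concatenate([np.ravel(d) for lvl in coeffs[1:] for d in lvl])
    energy = details ** 2
    p = energy / (energy.sum() + 1e-12)
    return -np.sum(p * np.log2(p + 1e-12))

candidates = ['haar', 'db4', 'db12', 'coif1']  # discrete wavelets available in pywt
image = np.random.rand(128, 128)               # stand-in for your grayscale image

scores = {w: coefficient_entropy(image, w) for w in candidates}
best = min(scores, key=scores.get)
print(scores, '-> best:', best)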
Apologies if this is a naive or foolish question, but I am trying to learn a bit more about image processing techniques. I had an intuition about Gabor filters but can't seem to find an answer.
If I calculate a bank of Gabor filters for a set of images and reduce them to N features that a machine learning algorithm has determined to be indicative of a specific texture, can these N features be applied to a novel image to "transfer" the texture to the novel image? Perhaps via an inverse Gabor transform? For example, if I have 10 Gabor filters that can accurately classify a texture as "brick", can these 10 filters be applied to a "wood" texture image (picture of a 2x4) to approximate the brick texture on the wood surface?
If this is possible, can it be easily implemented in Python?
As far as I understand, this is not directly possible. "When working with Gabor filters, it is common to work with the magnitude response of each filter." https://www.mathworks.com/help/images/texture-segmentation-using-gabor-filters.html
That is, the retained information describes only the magnitude of the signal; the phase information is lost.
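To make that concrete, here is a small sketch with scikit-image's gabor filter: the filter returns real and imaginary parts, but typical texture features keep only the magnitude, so the phase needed to invert the filtering is discarded. The per-filter statistics at the end are just one illustrative choice:

import numpy as np
from skimage.filters import gabor
from skimage.data import camera  # sample grayscale image

image = camera().astype(float)

# Responses at one frequency/orientation; real + imaginary parts carry phase
real, imag = gabor(image, frequency=0.2, theta=0)

# Typical texture features use only the magnitude...
magnitude = np.sqrt(real**2 + imag**2)
# ...so the phase, which would be needed to invert the filtering and
# "paint" the texture onto another image, is thrown away here:
phase = np.arctan2(imag, real)  # not retained in standard feature pipelines

features = [magnitude.mean(), magnitude.var()]  # e.g., per-filter statistics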
I have a deep learning model (transfer learning based, in Keras) for a regression problem on medical images. Does it help, or is there any logical justification for, applying image enhancements such as edge sharpening or histogram equalization before feeding the inputs to the CNN?
It is possible to train the model more accurately using the kinds of preprocessing you describe.
When training a CNN, image augmentation is almost always used in the pre-processing phase.
Here is a list of transformations commonly used for augmentation (a Keras sketch follows the list):
color noise
transform
rotate
whitening
affine
crop
flip
etc...
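As a minimal sketch, here is how several of these map onto Keras's ImageDataGenerator (the parameter values are arbitrary examples, not recommendations):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Each argument corresponds to one of the augmentations listed above
datagen = ImageDataGenerator(
    rotation_range=15,          # rotate
    width_shift_range=0.1,      # translation
    height_shift_range=0.1,
    shear_range=0.1,            # affine
    zoom_range=0.1,             # crop-like zoom
    horizontal_flip=True,       # flip
    zca_whitening=True,         # whitening (requires datagen.fit on the data)
    channel_shift_range=20.0,   # colour noise
)

# datagen.fit(x_train)  # needed to compute ZCA whitening statistics
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)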
I have some images of tags with shapes on them (circle, rectangle, and blank). After processing the images with a median blur and Gabor filters, I can eliminate most of the effect that variable illumination had on them.
I've tried training an SVM using HOG, LDA, PCA, and the pixels themselves, but I can barely get past 40-60% accuracy. What I really want to do is use the information in the shapes in the images. I had Fourier descriptors recommended to me, and while I've found a good tutorial about applying the Fourier transform to images using NumPy and OpenCV, I'm not sure how to go about extracting Fourier descriptors from an image and then identifying the ones that are unique to the different shapes. Does anyone know how to do this, or can anyone recommend an alternative technique to get features from these images that would allow an SVM to distinguish between them?
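For reference, one common recipe for Fourier descriptors is: trace the shape's boundary, treat the boundary points as complex numbers x + iy, take the FFT, and keep the magnitudes of a few low-frequency coefficients. A hedged OpenCV/NumPy sketch (the function name and parameters are my own, and the invariance normalizations are one conventional choice, not guaranteed to separate these particular shapes):

import cv2
import numpy as np

def fourier_descriptors(binary_image, n_descriptors=10):
    # Magnitude-based Fourier descriptors of the largest contour.
    # Translation invariance: drop the DC coefficient.
    # Scale invariance:       divide by the first harmonic's magnitude.
    # Rotation invariance:    keep magnitudes only, discarding phase.
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.zeros(n_descriptors)  # blank tags: no contour found
    contour = max(contours, key=cv2.contourArea).squeeze()
    # Boundary as a complex signal: x + iy
    signal = contour[:, 0].astype(float) + 1j * contour[:, 1].astype(float)
    spectrum = np.fft.fft(signal)
    mags = np.abs(spectrum[1:n_descriptors + 1])  # skip the DC term
    return mags / (mags[0] + 1e-12)               # scale-normalize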
I have a dataset consisting of fMRI images (from mice) which are divided into 4 groups (different drug dose levels applied). Each fMRI image is 4D, meaning each voxel is a time series. For each fMRI image I want to extract one feature vector.
Now I want to use wavelet decomposition for feature extraction. In Matlab there is no 4D wavelet decomposition, so I turn the 4D images into 3D by averaging each voxel's time series. Then I can apply a 3D wavelet decomposition and take the lowpass (approximation) component as features, i.e., something like this:
% 3D wavelet decomposition: 4 levels, Daubechies-4 wavelet
WT = wavedec3(fMRI, 4, 'db4');
% dec{1} is the lowpass (approximation) subband at the coarsest level
LL = WT.dec{1};
% flatten into a single column feature vector
feature_vector = LL(:);
Of course, feature selection algorithms (like recursive feature elimination) could afterwards be applied to reduce the dimensionality.
What do you think of this approach? Are there better approaches?
I need to make an iPhone application that calculates noise, geometric deformation, and other distortions in an image. How can I do this? I have done some image processing with OpenCV on the iPhone, but I don't know how to calculate these parameters.
1) How can I calculate the noise in an image?
2) What is geometric deformation, and how can I calculate it for an image?
3) Are geometric deformation and distortion the same parameter in terms of image filtering, or are there other distortions I can compute to judge whether an image is of good quality?
Input: my image is a face image in a live video stream.
I advise you to read some literature about image processing, for example Gonzalez & Woods.
1) The simplest method of noise estimation from a single image is to compute the standard deviation of the difference between the image and a smoothed copy of it. For smoothing I recommend a simple median filter with a window of 3x3 pixels (or larger). The median is insensitive to outliers in the data, so noise like salt-and-pepper won't worsen the statistics.
With overexposed or underexposed images this method can give bad results; in that case you can compute the FFT of the image and use the high-frequency components for noise estimation. A sketch of the median-based estimate follows.
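Here, as an assumption on my part, is how that might look with OpenCV in Python (the 3x3 window and the grayscale input match the suggestion above; the filename is a placeholder):

import cv2
import numpy as np

def estimate_noise(gray_image, ksize=3):
    # Std. deviation of (image - median-smoothed image). The median filter
    # suppresses salt-and-pepper outliers, so the residual is dominated by
    # the noise we want to measure.
    smoothed = cv2.medianBlur(gray_image, ksize)
    residual = gray_image.astype(np.float64) - smoothed.astype(np.float64)
    return residual.std()

# img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
# print(estimate_noise(img))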
2), 3) Calculating geometric deformation is possible only if you know what should be in the image. For example, if you use a test target (an optical reference, or "mire") with a square grid, you can find the lines in your image (for example with a Canny edge detector) and compute distortion, astigmatism, and some other aberrations. This can also be done if you are sure the image contains some straight lines.
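A rough sketch of the straight-lines variant, assuming OpenCV (the Canny thresholds and minimum contour length are illustrative guesses): detect edges, fit a straight line to each long contour, and use the RMS deviation of the contour points from that line as a crude deformation score.

import cv2
import numpy as np

def line_straightness(gray_image, min_points=50):
    # Crude distortion score: RMS deviation of long edge contours from
    # their best-fit straight lines. Near zero for an undistorted image
    # of straight lines; grows with geometric deformation.
    edges = cv2.Canny(gray_image, 50, 150)            # thresholds: guesses
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    deviations = []
    for c in contours:
        if len(c) < min_points:
            continue
        pts = c.squeeze().astype(np.float64)
        vx, vy, x0, y0 = cv2.fitLine(pts.astype(np.float32),
                                     cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        # Perpendicular distance of each point to the fitted line
        d = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)
        deviations.append(np.sqrt(np.mean(d ** 2)))
    return float(np.mean(deviations)) if deviations else 0.0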
Defocus can be estimated by analyzing the edges in the image or with the help of a wavelet transform of the image.
There are also many other methods for image analysis. For example, by analyzing a colour image you can estimate chromatic aberration, and so on.
But I repeat: in the general case these operations are impossible; each applies only in particular circumstances.
Read about image quality: there is no standard definition for this term; in each particular case you can use one or more simple characteristics to decide whether an image is good or not.
In your case I'd advise you to take a lot of photos with different kinds of artefacts and quality levels, then do a simple analysis of their statistics, wavelet decompositions, and R-G-B component correlations. By the way, to make the analysis of a colour image less sensitive to its brightness, I recommend working in the HSV colorspace (but to estimate chromatic aberration you need to work with the RGB components directly).
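As a small illustration of that last suggestion (the filename is a placeholder, and these statistics are only examples of the kind of analysis meant):

import cv2
import numpy as np

img = cv2.imread('photo.png')                    # hypothetical test photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue/saturation statistics are largely brightness-independent
h, s, v = cv2.split(hsv)
print('hue mean/std:', h.mean(), h.std())
print('sat mean/std:', s.mean(), s.std())

# For chromatic aberration, stay in BGR and compare channel correlations
b, g, r = cv2.split(img)
corr_rg = np.corrcoef(r.ravel(), g.ravel())[0, 1]
corr_rb = np.corrcoef(r.ravel(), b.ravel())[0, 1]
print('R-G corr:', corr_rg, 'R-B corr:', corr_rb)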