I have a signal with added white Gaussian noise, and I am applying a mode decomposition to it. After the decomposition, the noise no longer looks random; it has a streak-like appearance in the time domain. The mode decomposition is a Wiener filter that performs a linear decomposition, so it transforms the white noise into linear streaks. Please see the noise in the frequency domain: the red curve is the signal, and the output mode in blue has noise in it. I would appreciate any assistance.
Thanks,
Sal
I have done some pre-processing, including N4 bias field correction, noise removal, and scaling, on 3D medical MRIs, and I was asked one question:
How can one evaluate the influence of noise on the effectiveness and robustness of medical image segmentation? When the image structure is corrupted with various kinds of noise, the extracted features deteriorate. This effect should be exploited when assessing the method's effectiveness at different noise intensities.
How should the noise effect be evaluated, and how can the noise removal method used in a scientific manuscript be justified?
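One common way to make this concrete is to re-run the segmentation on copies of the image corrupted with increasing noise levels and track an overlap metric, such as the Dice coefficient, against the segmentation of the clean image. Below is a minimal sketch in Python/NumPy; the `segment` function is a toy intensity threshold standing in for a real segmentation pipeline, and the image, noise levels, and threshold are all illustrative assumptions:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def segment(img, thresh=0.5):
    """Stand-in segmenter: a simple intensity threshold."""
    return img > thresh

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0           # synthetic "organ"
reference = segment(clean)          # segmentation of the clean image

# Re-segment at increasing Gaussian noise levels and record Dice overlap.
scores = {}
for sigma in (0.05, 0.2, 0.5):
    noisy = clean + rng.normal(0.0, sigma, clean.shape)
    scores[sigma] = dice(segment(noisy), reference)
```

Reporting such a curve (Dice versus noise sigma) with and without the noise removal step is one way to justify the preprocessing in a manuscript.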
I don't know if this will be helpful, but I did something similar once in a classroom exercise with nuclear magnetic resonance.
In that case we used the Shepp-Logan phantom with the FFT. We then added noise to the picture (by adding random numbers with a Gaussian distribution).
When you transform the image back to the phantom you can see the effects of the noise, and sometimes artifacts (mostly due to the FFT algorithm and the chosen window function).
What I did was check the mean intensity of the image before and after; then, on the edges of the phantom (the skull), you can see how sharp the transition from white to black (and vice versa) is.
This can easily be tested with MATLAB code and the phantom. Once you reach the accuracy you need, you can apply your chosen algorithm to real images.
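For readers without MATLAB, the same experiment can be sketched in Python/NumPy. Here a plain disk stands in for the Shepp-Logan phantom (which MATLAB's `phantom` or scikit-image would normally provide), and the noise level and image size are arbitrary assumptions:

```python
import numpy as np

# Simple disk as a stand-in for the Shepp-Logan phantom.
n = 128
y, x = np.ogrid[:n, :n]
phantom = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# Simulate acquisition in frequency space, add Gaussian noise, reconstruct.
rng = np.random.default_rng(1)
kspace = np.fft.fft2(phantom)
kspace = kspace + rng.normal(0, 5.0, kspace.shape) \
                + 1j * rng.normal(0, 5.0, kspace.shape)
recon = np.abs(np.fft.ifft2(kspace))

# Mean intensity before and after, as described above.
mean_before = phantom.mean()
mean_after = recon.mean()

# Edge sharpness: largest jump along a row through the disk center.
edge_before = np.abs(np.diff(phantom[n // 2])).max()
edge_after = np.abs(np.diff(recon[n // 2])).max()
```

Comparing `mean_before`/`mean_after` and the edge jumps gives the two checks described in the answer: a global intensity shift and the clarity of the white-to-black transition.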
I am using OpenCV to process a video to identify frames containing objects using a rudimentary algorithm (background subtraction → brightness threshold → dilation → contour detection).
The video is shot over the course of a day, so lighting conditions change gradually; I expect my results would improve if I first applied some sort of brightness and/or contrast normalization step.
Answers to this question suggest using convertScaleAbs, or contrast optimization with histogram clipping, or histogram equalization.
Of these techniques or others, what would be the best preprocessing step to normalize the frames?
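Of the options mentioned, plain histogram equalization is the simplest to reason about. Here is a hedged NumPy sketch of what `cv2.equalizeHist` does on a uint8 grayscale frame (the frame below is synthetic, squeezed into a narrow dark range to mimic a low-contrast shot):

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for a uint8 grayscale frame.
    cv2.equalizeHist does the same; this NumPy version shows the idea."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                          # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)   # map each level via the CDF
    return lut[gray]

# A dark, low-contrast frame: values squeezed into [40, 80].
rng = np.random.default_rng(2)
frame = rng.integers(40, 81, size=(120, 160)).astype(np.uint8)
out = equalize_hist(frame)
```

For a video whose lighting drifts slowly, applying this (or OpenCV's clip-limited local variant CLAHE, via `cv2.createCLAHE`) to each frame's luminance channel is a common normalization step before background subtraction.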
I wish to use Stein's Unbiased Risk Estimate (SURE) for denoising signals.
I have a one-dimensional signal. I am using wavelets to decompose the signal into multiple levels of approximation and detail coefficients.
To denoise the original signal, do I need to apply thresholding to every level of detail coefficients, or will thresholding only the last level of detail coefficients do the job?
Thresholding is usually applied across all frequencies of the signal, because the procedure exploits the fact that the wavelet transform maps white noise (purely random, uncorrelated, with constant power spectral density) in the signal domain to white noise in the transform domain, spread across all scales. Thus, while the signal's energy becomes concentrated in a few coefficients in the transform domain, the noise energy does not. Noises with different spectral properties map differently, and this is where the choice of thresholding procedure becomes important.
Thresholding only the highest decomposition level (the lowest frequencies) while leaving the lower levels (higher frequencies) undenoised sounds a little strange if you intend to reconstruct the signal.
However, you could also extract a level and denoise only its associated frequency range (e.g. levels 1 to 2) if there is a particular band you are interested in.
Regarding the thresholding function, be aware in any case that SURE gives different results depending on the types of noise in the signal. For example, it will reduce the spread of white noise in the horizontal components but will only attenuate large amplitudes. For signals where, together with white noise, you have other noise colors such as random walk and flicker noise, SURE is not an effective procedure.
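To make per-level detail thresholding concrete, here is a minimal one-level Haar sketch in plain NumPy. In practice you would use PyWavelets (`pywt.wavedec`/`pywt.waverec`) and threshold each detail level; the universal threshold below is a simple stand-in for a SURE-selected one, and the signal and noise level are arbitrary assumptions:

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: approx + detail."""
    pairs = x.reshape(-1, 2)
    s = np.sqrt(2.0)
    return (pairs[:, 0] + pairs[:, 1]) / s, (pairs[:, 0] - pairs[:, 1]) / s

def haar_inverse(approx, detail):
    """Invert one Haar level."""
    s = np.sqrt(2.0)
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / s
    out[1::2] = (approx - detail) / s
    return out

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(3)
n = 1024
clean = np.sin(2 * np.pi * np.arange(n) / 128)
noisy = clean + rng.normal(0, 0.3, n)

approx, detail = haar_level(noisy)
# Universal threshold with sigma estimated from the detail band via MAD;
# SURE would instead pick t by minimizing the risk estimate.
sigma = np.median(np.abs(detail)) / 0.6745
t = sigma * np.sqrt(2 * np.log(n))
denoised = haar_inverse(approx, soft(detail, t))

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

With a full multilevel decomposition, the same soft-threshold step would be applied to every detail level, which is the point made above about thresholding across all frequencies.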
This is my attempt at skin color detection using OpenCV 2, after reading this cool tutorial:
detect a face with a Haar cascade
build a 2D histogram of the face ROI (over hue and saturation) to model the skin color, with calcHist
use this model to evaluate a new image with calcBackProject
apply dilate, erode, and blur filters to the resulting mask.
The best case is this one:
but there is no background clutter and no artificial light (only ambient sunlight in the room).
In other cases I get much worse results: there is a lot of noise in the background, the fingers come out black or noisy, and so on. And when I try to get a 0-1 mask to select only the skin, the final result is not very good.
Maybe I can apply other filters, like thresholding, or another technique (some other clustering or filling method? I have looked at floodFill, but I don't have a seed point), or combine more histograms (an RGB histogram, for example)... but how?
All kinds of brainstorming are welcome.
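For reference, the core of steps 2 and 3 (calcHist on hue/saturation, then calcBackProject) can be sketched in plain NumPy. The bin count, the toy ROI, and the toy image below are all assumptions for illustration; OpenCV's own functions are the ones to use in the real pipeline:

```python
import numpy as np

def backproject(hs_model, hs_image, bins=32):
    """NumPy sketch of calcHist + calcBackProject on hue/saturation.
    hs_model, hs_image: (..., 2) int arrays, H in [0,180), S in [0,256)."""
    # 2D H-S histogram of the model region (the face ROI in the pipeline).
    hq = (hs_model[..., 0] * bins // 180).ravel()
    sq = (hs_model[..., 1] * bins // 256).ravel()
    hist = np.zeros((bins, bins))
    np.add.at(hist, (hq, sq), 1)
    hist /= hist.max()                       # normalize, like cv2.normalize
    # Back projection: look up each image pixel's bin probability.
    hq_i = hs_image[..., 0] * bins // 180
    sq_i = hs_image[..., 1] * bins // 256
    return (hist[hq_i, sq_i] * 255).astype(np.uint8)

# Toy "face ROI" dominated by one skin-like hue/saturation value.
roi = np.full((4, 4, 2), (10, 120), dtype=np.int64)
# Toy 2x2 target image: three skin-like pixels, one not.
img = np.array([[[10, 120], [90, 40]],
                [[11, 125], [10, 122]]], dtype=np.int64)
mask = backproject(roi, img)
```

Pixels whose hue/saturation fall in the ROI's populated bins come back bright; everything else comes back dark, which is the raw mask you then dilate/erode/blur.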
This link suggests using thresholds in HSV space. You could create a mask from these thresholds and combine it with the back projection using an AND operation.
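A minimal NumPy sketch of that combination follows. The HSV range below is a guess you would tune for your lighting, and the tiny image and backprojection values are toy data; in OpenCV proper this is `cv2.inRange` followed by `cv2.bitwise_and`:

```python
import numpy as np

def in_range(hsv, lower, upper):
    """NumPy equivalent of cv2.inRange: 255 where all channels fall
    inside [lower, upper], 0 elsewhere."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    ok = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return ok.astype(np.uint8) * 255

# Toy 2x2 HSV image (OpenCV convention: H in [0,179], S and V in [0,255]).
hsv = np.array([[[10, 120, 200], [90, 40, 200]],
                [[15, 150, 180], [12, 130, 30]]], dtype=np.uint8)

# A typical-looking skin range in HSV; treat these bounds as a starting
# point to tune, not as ground truth.
skin_mask = in_range(hsv, (0, 48, 80), (20, 255, 255))

# Pretend backprojection result from calcBackProject (values 0..255),
# thresholded into a binary mask.
backproj = np.array([[255, 200], [0, 255]], dtype=np.uint8)
bp_mask = (backproj > 50).astype(np.uint8) * 255

# Combine the two with an AND, like cv2.bitwise_and(skin_mask, bp_mask).
combined = np.minimum(skin_mask, bp_mask)
```

Only pixels that pass both the fixed HSV gate and the histogram backprojection survive, which tends to suppress the background noise described in the question.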
I am looking for input on an image noise filtering method. A 9-pixel median filter does not work very well with dense noise. The noise is periodic (with a period of 50 lines) and additive.
Thanks,
Bi
What about filtering in the Fourier domain? If the noise is periodic, then with any luck it will appear as a pair of sharp, pointy peaks in Fourier space, where you can suppress them with a couple of Gaussian notches, then transform back to real space; your periodic noise should be gone.
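A sketch of that idea in NumPy, using a synthetic image and a 50-line-period sinusoid as the additive noise. The peak positions follow from the chosen sizes (200 rows / 50-line period = ±4 cycles on the vertical frequency axis); the notch width is an assumption to tune:

```python
import numpy as np

rng = np.random.default_rng(4)
h, w = 200, 200
image = rng.normal(0.5, 0.05, (h, w))       # stand-in image content

# Additive periodic noise with a period of 50 lines, as in the question.
rows = np.arange(h)[:, None]
noisy = image + 0.5 * np.sin(2 * np.pi * rows / 50)

# In the (shifted) Fourier plane the periodic component is a pair of
# peaks on the vertical frequency axis at +/- h/50 = +/- 4.
F = np.fft.fftshift(np.fft.fft2(noisy))
cy, cx = h // 2, w // 2

# Gaussian notch filter: multiply by (1 - Gaussian) at each peak.
yy, xx = np.mgrid[:h, :w]
notch = np.ones((h, w))
for dv in (+4, -4):
    d2 = (yy - (cy + dv)) ** 2 + (xx - cx) ** 2
    notch *= 1.0 - np.exp(-d2 / (2 * 1.0 ** 2))   # sigma=1 bin, an assumption

cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(F * notch)))

err_before = np.mean((noisy - image) ** 2)
err_after = np.mean((cleaned - image) ** 2)
```

Because the noise is exactly periodic, its energy lands in just those two bins, so a narrow notch removes it almost completely while barely touching the rest of the spectrum.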
I like to use selective blurs, which average only the surrounding pixels whose values are within a certain range of the center pixel's value.
GIMP has a weighted version of this called "Selective Gaussian Blur"; you could try it to see what the result looks like.
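For reference, a naive NumPy version of such a selective blur (slow pure-Python loops, just to show the rule; the `radius` and `delta` values are arbitrary assumptions):

```python
import numpy as np

def selective_blur(img, radius=1, delta=0.1):
    """Average each pixel with only those window neighbors whose value
    lies within `delta` of the center pixel (a simple selective blur)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = img[y0:y1, x0:x1]
            keep = np.abs(window - img[y, x]) <= delta
            out[y, x] = window[keep].mean()
    return out

# A sharp step edge with mild noise: the selective blur smooths each
# side of the step without mixing the two levels across the edge.
rng = np.random.default_rng(5)
img = np.where(np.arange(16)[None, :] < 8, 0.0, 1.0) \
      + rng.normal(0, 0.02, (16, 16))
smoothed = selective_blur(img, radius=1, delta=0.1)
```

Since pixels across the step differ by far more than `delta`, they are excluded from each other's averages, so the edge stays sharp while the flat regions are denoised.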