Gaussian noise generation - signal-processing

I want to generate Gaussian noise in GNU Radio Companion. I have read that feeding a VCO with a sawtooth wave can do that. Can anybody explain how the signal from the VCO will have a Gaussian distribution? I think it would have a uniform distribution, because the frequency is first increased and then decreased, so all frequencies have an equal probability of occurrence.
Regards,
Ali

The easiest way would be to use the channel model blocks available in recent GNU Radio releases. There's information on this page: http://gnuradio.org/doc/doxygen/page_channels.html
For simple Additive White Gaussian Noise (AWGN), simply set the noise voltage and ignore the other parameters (which are useful for emulating a more realistic channel, but are not related to your immediate question). Bear in mind that you are adding noise, so the number you use for noise voltage is specifying the 'noise floor', as opposed to specifying e.g. the desired SNR.
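If it helps to see what that 'noise voltage' corresponds to numerically, here is a minimal NumPy sketch (outside GNU Radio, with made-up signal and amplitude values) of adding complex AWGN to a signal; the resulting SNR then follows from the two powers rather than being set directly:

import numpy as np

rng = np.random.default_rng(0)

# A unit-amplitude complex tone standing in for the signal of interest.
n = np.arange(4096)
signal = np.exp(2j * np.pi * 0.05 * n)

# 'noise_voltage' plays the same role as the channel-model parameter:
# it sets the noise amplitude (the noise floor), not a target SNR.
noise_voltage = 0.1
noise = noise_voltage * (rng.standard_normal(n.size)
                         + 1j * rng.standard_normal(n.size)) / np.sqrt(2)

noisy = signal + noise

snr_db = 10 * np.log10(np.mean(np.abs(signal)**2) / np.mean(np.abs(noise)**2))
print(f"resulting SNR ~ {snr_db:.1f} dB")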

Related

How to estimate optimal gamma parameter for gamma correction?

Is it possible to estimate an optimal gamma parameter for gamma correction algorithmically, using some image statistics? By 'optimal' I mean that, on average, the image should 'look good' to a human after correction.
If your image pixels are scaled on the range 0..255, you could use:
gamma = log(mean)/log(128)
where mean is the mean of your image pixels. If they are scaled on the range 0..1:
gamma = log(mean)/log(0.5)
Note that this is the technique that ImageMagick uses, documentation here, and you can test it yourself on the command line with:
magick input.jpg -auto-gamma result.jpg
Nothing is perfect though and that may not look good if there are heavy shadows or bright areas in your images.
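As a rough illustration of the formula above (a sketch, assuming an 8-bit grayscale image already loaded into a NumPy array), you could estimate the gamma from the mean and then apply the usual power-law correction:

import numpy as np

def auto_gamma(img_u8):
    # Assumes img_u8 is an 8-bit grayscale image with values 0..255.
    x = img_u8.astype(np.float64) / 255.0
    gamma = np.log(x.mean()) / np.log(0.5)   # same idea as log(mean)/log(128) on 0..255
    corrected = np.power(x, 1.0 / gamma)     # pushes the mean toward mid-grey
    return (corrected * 255.0).clip(0, 255).astype(np.uint8), gamma

The corrected image then has a mean close to 50% grey, which is the same target that ImageMagick's -auto-gamma aims for.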
The so-called gamma correction is a weird beast, which exists for historical reasons. It was initially implemented by TV broadcasters to deal with the fact that cathode ray tubes did not have a linear response to the signal amplitude. And rather than compensate in every TV set, i.e. in the receiver, they decided to compensate in the emitter. This also had a nice dynamic compression effect.
As time passed, the pre-compensation remained in the standards, and for modern devices that have a linear response, the pre-compensation must be cancelled by applying a gamma correction with the inverse exponent. So when you get an image from an unknown source, it is unclear whether it needs to be gamma de-corrected, and with which exponent.
That said, a gamma exponent is also used in a purely empirical way to strengthen or weaken the dark tones (and, conversely, the bright ones). A priori, the concept of an "optimal" gamma exponent is quite subjective and will differ depending on the atmosphere you want to give your picture and on the particular subject.
I don't know of any technique to choose a gamma value automatically. If I had to, I would choose some feature drawn from the image histogram (such as mean, deviation, coefficient of variation...) and adjust the gamma until that criterion reached a particular value. As the histogram does not have an analytic form, a trial and error process (such as a dichotomic search) is necessary.
Also have a look at the "histogram specification" technique.
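A minimal sketch of that trial-and-error idea, assuming the criterion is "the mean of the corrected image reaches some target value" and using a simple bisection (dichotomic search) on the exponent:

import numpy as np

def fit_gamma(img, target_mean=0.5, lo=0.1, hi=10.0, iters=40):
    # img is assumed to be a float array scaled to 0..1; the mean is just one
    # possible histogram feature, any other criterion could be plugged in.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(img ** mid) > target_mean:
            lo = mid      # still too bright: raise the exponent to darken
        else:
            hi = mid      # too dark: lower the exponent
    return 0.5 * (lo + hi)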

Semantic Segmentation: How to evaluate the influence of noise on the effectiveness and robustness of medical image segmentation?

I have done some pre-processing, including N4 bias correction, noise removal and scaling, on medical 3D MRIs, and I was asked one question:
How to evaluate the influence of noise on the effectiveness and robustness of the medical image segmentation? When the image structure is affected by various kinds of noise, the extracted features deteriorate. This effect should be exploited when assessing the method's effectiveness for different noise intensities.
How do I evaluate the effect of the noise, and how do I justify the noise removal method used in the scientific manuscript?
I don't know if this is helpful, but I did something similar once in a classroom setting with nuclear magnetic resonance.
In that case we used the Shepp-Logan phantom with the FFT, then added noise to the picture (by adding random numbers with a Gaussian distribution).
When you transform the image back to the phantom you can see the effects of the noise and sometimes artifacts (mostly due to the FFT algorithm and the window function chosen).
What I did was check the mean colour value in the image before and after; then, on the edges of the phantom (the skull), you can see how clear the transition from white to black (and vice versa) remains.
This can be easily tested with MATLAB code and the phantom. Once you reach the accuracy you need, you can apply the algorithm you chose to real images.
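If you want to reproduce that kind of classroom experiment in Python instead of MATLAB, here is a rough sketch (assuming scikit-image and NumPy are available; the noise level is an arbitrary choice) that adds Gaussian noise to the FFT of the Shepp-Logan phantom and transforms it back:

import numpy as np
from skimage.data import shepp_logan_phantom  # assumes scikit-image is installed

rng = np.random.default_rng(0)

phantom = shepp_logan_phantom()                 # float image in [0, 1]
kspace = np.fft.fftshift(np.fft.fft2(phantom))  # simulated acquisition

# Complex Gaussian noise added in the transform domain.
sigma = 0.05 * np.sqrt(np.mean(np.abs(kspace) ** 2))
noise = sigma * (rng.standard_normal(kspace.shape)
                 + 1j * rng.standard_normal(kspace.shape))

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace + noise)))

# Compare the mean intensity before and after, as described above.
print(phantom.mean(), recon.mean())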

Effect of variance (sigma) at gaussian smoothing

I know about Gaussians, variance and image blurring, and I think I understand the concept of variance in Gaussian blur, but I am still not 100% sure.
I just want to know the role of sigma (or variance) in Gaussian smoothing. I mean, what happens when we increase the value of sigma for the same window size, and why does it happen?
It would be really helpful if somebody could point me to some good literature about it. (I already tried a few sources but couldn't find what I am looking for.)
Major confusion:
Higher frequency -> details (e.g. noise),
Lower frequency -> kind of overview of the image.
By increasing sigma, we are allowing some higher frequencies... so we should get more detail with increasing sigma, but the case is the opposite: when we increase sigma, the image becomes more blurry.
I think this should be explained in a few steps, first from the signal-processing point of view:
A Gaussian filter is a low-pass filter. Low-pass filters, as their name implies, pass (keep) the low frequencies and attenuate the high ones. When we look at the image in the frequency domain, the highest frequencies occur at edges (places where there is a large change in intensity).
The role of sigma in the Gaussian filter is to control the spread around its mean value: the larger the sigma, the more variation is allowed around the mean; the smaller the sigma, the less.
Filtering in the spatial domain is done through convolution: it simply means that we apply a kernel at every pixel in the image. There is one rule for smoothing kernels: their weights have to sum to one (it is derivative/high-pass kernels whose weights sum to zero).
Now, putting it all together: when we apply a Gaussian filter to an image, we are doing low-pass filtering. But as you know this happens in the discrete domain (image pixels), so we have to sample the Gaussian filter in order to make a Gaussian kernel. When the Gaussian has a small sigma it has the steepest peak, so more of the weight is concentrated at the center and less around it; when the sigma is large, the weight spreads out over the neighbours, the filter averages over a wider area, and the result is blurrier.
In the sense of natural image statistics: researchers in this field have shown that our visual system responds to images somewhat like a Gaussian filter. For example, take in a broad scene without paying attention to any specific point: you see a scene with lots of things in it, but the details are not clear. Now look at a specific point in that scene: you see details that you did not see before. This is where sigma comes in: when you increase sigma you are looking at the broad scene without paying attention to the details, and when you decrease it you get more of the details.
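To see the effect numerically, here is a small sketch (assuming only NumPy; the kernel size and the two sigma values are arbitrary choices) that builds sampled 1-D Gaussian kernels and compares their frequency responses; the larger sigma spreads the weights in space and narrows the pass band:

import numpy as np

def gaussian_kernel(sigma, size=21):
    # Sampled, normalized 1-D Gaussian (weights sum to one).
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

for sigma in (1.0, 4.0):
    k = gaussian_kernel(sigma)
    # Magnitude of the DFT gives the (low-pass) frequency response.
    H = np.abs(np.fft.rfft(k, 256))
    half_power = np.argmax(H < 0.5) / 256   # first bin where the response drops below 0.5
    print(f"sigma={sigma}: center weight={k.max():.3f}, "
          f"response falls below 0.5 at ~{half_power:.3f} cycles/sample")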
I think Wikipedia can help more than me: Low-Pass Filter, Gaussian Blur.
Put simply, increasing the sigma term casts a broader net over the neighbouring pixels and decreases the impact of the pixels nearest the pixel of interest, i.e. it makes a blurrier image.

Stein's Unbiased Risk Estimate (SURE) to denoise signals

I wish to use Stein's Unbiased Risk Estimate (SURE) for denoising signals.
I have a 1-dimensional signal. I am using wavelets to decompose the signal into multiple levels of approximation and detail coefficients.
For denoising the original signal, do I need to apply thresholding to every level of detail coefficients, or will doing it on the last level of detail coefficients do the job?
Thresholding is usually applied to all the frequencies of a signal, because the procedure exploits the fact that the wavelet transform maps white noise (purely random, uncorrelated, constant power spectral density) in the signal domain to white noise in the transform domain, where it remains spread across the different frequencies. Thus, while the signal energy becomes concentrated into fewer coefficients in the transform domain, the noise energy does not. Other noises with different spectral properties will map differently, and this is where the selection of the type of thresholding procedure becomes important.
Thresholding only the highest decomposition level (lowest frequencies) while leaving the lower levels (higher frequencies) untouched sounds a little strange if you want to reconstruct the whole signal.
However, you could also extract a level and denoise only its related range of frequencies (e.g. from level 1 to level 2) if there is a particular frequency range you are interested in.
Speaking about the thresholding function, be aware in any case that SURE gives different results depending on the types of noise the signal contains. For example, it will reduce the spread of white noise in the horizontal components but will only decrease the large amplitudes. For signals where, together with white noise, you also have other noise colours such as random walk and flicker noise, SURE is not an effective procedure.
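For reference, a minimal per-level thresholding sketch with PyWavelets (the wavelet, decomposition level and the universal threshold are stand-ins; a SURE-based rule would replace the single `thresh` line):

import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4):
    # Soft-threshold every detail level, keep the approximation, reconstruct.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from the finest level
    thresh = sigma * np.sqrt(2 * np.log(len(x)))     # universal threshold (placeholder for SURE)
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)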

Primary FFT Coefficients vs Low-Pass Filter

I'm working on signal-processing issues. I'm extracting some features for feeding a classifier. Among these features, there is the sum of the first 5 FFT coefficients. As you know, the first FFT coefficients indicate how dominant the low-frequency components of a signal are. This is very close to what a low-pass filter gives.
Here I'm suspicious that computing the FFT just to take those first 5 coefficients is an unnecessary task. I think applying a low-pass filter would just remove everything except the low-frequency components and would not have a significant effect on the primary FFT coefficients. However, there may be some other way, in combination with a low-pass filter, to extract the same information (that is contained in the first five FFT coefficients) without using the FFT.
Do you have any ideas or suggestions regarding this issue?
Thanks in advance.
If you just need an indicator for the low-frequency part of a signal, I suggest doing something really simple. Just take an ordinary low-pass filter, for instance a 2nd-order Butterworth, with the cutoff frequency set appropriately (5 Hz in your case, if I understood correctly). Then compute the energy (sum of squared values) or RMS value over your window (length 100). Or perhaps take the ratio of the low-frequency energy to the overall energy of the window, to get a relative measure. That should give you a pretty good indicator of the low-frequency contributions in your signal.
People tend to overuse the FFT for all kinds of really simple tasks. In 90% of the use cases an FFT can be replaced by a simpler algorithm.
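Something along those lines, as a sketch with SciPy (the sampling rate, cutoff and window handling are assumptions to adapt to your data):

import numpy as np
from scipy.signal import butter, filtfilt

def low_freq_ratio(window, fs=100.0, cutoff=5.0):
    # Ratio of low-pass-filtered energy to total energy over one window.
    b, a = butter(2, cutoff / (fs / 2.0))  # 2nd-order Butterworth low-pass
    low = filtfilt(b, a, window)           # zero-phase filtering over the window
    return np.sum(low**2) / np.sum(window**2)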
It seems you should take a look at the Goertzel algorithm; for the seemingly limited number of frequencies you need, it should take less computation. After updating the feedback parts on each sample, you can select how often to generate your "feature metric", or a little additional weighting of the results can yield a respectable low-pass filter.
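For reference, a compact sketch of the Goertzel recursion for a single DFT bin (k is the bin index; evaluating k = 0..4 gives the same magnitudes as the first five FFT coefficients):

import numpy as np

def goertzel_power(x, k):
    # Power of DFT bin k of the signal x, computed with the Goertzel recursion.
    n = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 from the final two state values
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2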
