Find High Frequencies with Discrete Fourier Transform [OpenCV]

I want to determine image sharpness by the amount of high frequencies within the image. As far as I understand, the dft() function from OpenCV returns the spectrum as two planes holding the real and imaginary parts of the result.
This is where I am stuck. How can I determine the amount of high frequencies from this data?
I am thankful for every hint/link which could provide me with a better understanding.
Greetings

Make the Fourier transform.
Calculate the magnitude of the result.
Now you have a 2D matrix. Consider the upper-left quadrant (the others are mirror images for a real-valued source).
Here the Magn[0][0] entry corresponds to zero frequency, and the Magn[(n-1)/2][(n-1)/2] entry corresponds to the highest frequency.
The upper-left part of this submatrix contains the low-frequency samples, so you can calculate the sum of values in this part and the sum over the rest of the quadrant, and compare the two sums. For example (pseudocode, using integral images for the region sums):
cvIntegral(Magn, Rect(0..n/4, 0..n/4)) compared with
cvIntegral(Magn, Rect(0..n/2, 0..n/2)) - cvIntegral(Magn, Rect(0..n/4, 0..n/4))
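For reference, a rough Python sketch of those steps with OpenCV and NumPy; the file name and the quarter-size low-frequency block are illustrative choices, not prescribed by the answer:

import numpy as np
import cv2

def high_frequency_ratio(path):
    # Load as grayscale and take the 2D DFT (channel 0 = real, channel 1 = imaginary)
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
    magn = cv2.magnitude(dft[:, :, 0], dft[:, :, 1])

    # Upper-left quadrant; the rest mirrors it for a real-valued source
    h, w = magn.shape
    quad = magn[:h // 2, :w // 2]

    # Compare the low-frequency block against the rest of the quadrant
    low = quad[:h // 4, :w // 4].sum()
    total = quad.sum()
    return (total - low) / total  # higher ratio -> more high-frequency energy -> sharper

print(high_frequency_ratio("test.png"))  # hypothetical file name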

Related

Is pixel value normalization needed in medical image segmentation?

I have a dataset of CT scans of hips. I'm currently not normalizing the pixel values, because in a CT scan the pixel value indicates what kind of tissue was scanned (bone ≈ +1000, water ≈ 0, air ≈ −1000, etc.). Also, the range of pixel values changes from scan to scan (e.g. −500:1500, −400:1200).
I'm wondering whether normalizing the pixel values to [0,1] would be a plus for my training, or whether I would lose the information carried by the relation between the raw pixel values and the segmentation ground truth.
Thanks for the answers
It depends a little on your data. What you are describing are so-called Hounsfield units (worth reading up on): you basically express every intensity relative to that of water.
Bone density (and with it the corresponding intensity) can vary greatly, not to mention when metal is present.
Your HU range depends strongly on the body region and, above all, on the patient.
https://images.app.goo.gl/WNLCs8eENTdbXWwM7
CT scans are usually uint16 grayscale. I would definitely normalize, as long as you can ensure that your float range is sufficient to accommodate the 2^16 different grayscale values.
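As a sketch of one common approach consistent with this answer, clipping to a fixed HU window before scaling (the window limits below are illustrative assumptions, not a recommendation from the answer):

import numpy as np

def normalize_ct(volume_hu, hu_min=-1000.0, hu_max=1000.0):
    # Clip to one fixed HU window for the whole dataset and scale to [0, 1].
    # A fixed window (unlike per-scan min/max) keeps HU values comparable
    # across scans, preserving the tissue/intensity relation.
    vol = np.clip(volume_hu.astype(np.float32), hu_min, hu_max)
    return (vol - hu_min) / (hu_max - hu_min)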

Reconstruct image from eigenvectors obtained from solving the eigenfunction of Hamiltonian operator in matrix form

I have an image I, and I am trying to do automatic object extraction using quantum mechanics.
Each pixel in the image is considered as a potential field V(x,y), and hence each wave (eigen)function represents a meaningful region.
The 2D time-independent Schrödinger equation is

$$-\frac{\hbar^2}{2m}\nabla^2\psi(x,y) + V(x,y)\,\psi(x,y) = E\,\psi(x,y).$$

Multiplying both sides by $-\frac{2m}{\hbar^2}$, we get

$$\nabla^2\psi = \frac{2m}{\hbar^2}\,(V - E)\,\psi.$$

Rewriting the Laplacian using a finite-difference approach,

$$\nabla^2\psi_i \approx \sum_{j\in N_i}\psi_j - |N_i|\,\psi_i,$$

where $N_i$ is the set of neighbours of the pixel with index $i$, and $|N_i|$ is the cardinality of $N_i$, i.e. the number of elements in $N_i$.

Combining the above two equations, we get

$$\sum_{j\in N_i}\psi_j - |N_i|\,\psi_i = \frac{2m}{\hbar^2}\,(V_i - E)\,\psi_i, \qquad i = 1,\dots,M,$$

where $M$ is the number of pixels in the image.

Now, the left-hand side of the equation is a measure of how similar the labels in a neighbourhood are, i.e. a measure of spatial coherence. For applying this to images, the potential $V$ is given by the pixel intensities. The right-hand side is a measure of how close the pixel values in a segment are to a constant value $E$.
Now, the wave functions can be calculated numerically by solving for the eigenvectors of the Hamiltonian operator in matrix form, which is

$$H_{ij} = \begin{cases} V_i + \dfrac{\hbar^2}{2m}\,|N_i| & \text{for } i = j,\\[4pt] -\dfrac{\hbar^2}{2m} & \text{for } j \in N_i,\\[4pt] 0 & \text{elsewhere.} \end{cases}$$
Now, in this paper it is said that we first have to find the maximum and minimum eigenvalues, and then calculate the eigenvectors whose eigenvalues are closest to a number of values regularly selected between the minimum and maximum eigenvalues; here that number is 300.
I have calculated the 300 eigenvectors.
The absolute squares of the eigenvectors are then thresholded to obtain the segments.
Everything is fine up to this point.
Now, how do I reconstruct the eigenvectors into a 2D image so as to get the potential segments in the image?
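One way to approach this, as a SciPy sketch under assumed shapes: if H was assembled over the M = rows × cols pixels in row-major order, every eigenvector has one entry per pixel, so reshaping it to (rows, cols) gives back a 2D map. The matrix, sizes, and threshold below are stand-ins:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rows, cols = 64, 64                              # hypothetical image size
M = rows * cols
H = sp.random(M, M, density=1e-3, format="csr")  # stand-in for the real Hamiltonian
H = (H + H.T) / 2                                # symmetrize so eigsh applies

# Extreme eigenvalues, then targets regularly spaced between them
e_min = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
e_max = eigsh(H, k=1, which="LA", return_eigenvectors=False)[0]
targets = np.linspace(e_min, e_max, 302)[1:-1]   # 300 interior values

segments = []
for sigma in targets:
    # Shift-invert mode returns the eigenpair with eigenvalue closest to sigma
    val, vec = eigsh(H, k=1, sigma=sigma, which="LM")
    psi = vec[:, 0].reshape(rows, cols)          # eigenvector -> 2D image
    prob = np.abs(psi) ** 2                      # absolute square, as in the post
    segments.append(prob > prob.mean())          # illustrative threshold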

What is the PSD unit when using the FFT method

I'm just doing a power spectral density analysis of a signal in the time domain. I'm following the FFT method described in:
http://www.mathworks.com/support/tech-notes/1700/1702.html
It gives the real physical unit for the PSD. However, the unit is "power"; does that mean V^2/Hz?
If I take 10*log10(power), i.e. 10*log10(V^2/Hz), do I get the unit dB/Hz?
And how can I then convert it to dBm/MHz?
It depends on the unit of your timeseries. Often we think of this as just "amplitude", but if your timeseries is a series of voltage amplitude vs. time, then your PSD estimate will be Volts^2/Hz. This is because the PSD is the Fourier Transform of the autocorrelation of your original signal: The autocorrelation has units of Volts^2, and running it through the Fourier Transform decomposes these units over frequency, instead of time, resulting in units of Volts^2/Hz. This is commonly referred to as Watts/Hz, but the conversion from Volts^2 to Watts is not very physically meaningful, as W = V^2/R.
10*log10(power) will result in a unit of dB/Hz, but remember that decibels are always a comparison between two power levels; you are quantifying a ratio of powers. A better definition of decibels is 10*log10(P1/P0), as explained here. If you simply plug a PSD bin estimate into this equation, you are setting your PSD bin to P1 and implicitly comparing it to a P0 value of 1. This may be what you want, and it may not be. For visualization purposes, this is fairly typical, but if you have a standard reference power you should be comparing to, you should use that for P0 instead.
Assuming that you are attempting to plot a dB power spectral density estimate, to go from Hz to MHz you simply rescale the x-axis of your frequency graph. Remember that a MHz is just 1 million Hz, so the only difference is that 240,000 Hz = 0.24 MHz. (If you instead want the density itself quoted per MHz rather than per Hz, note that 1 MHz = 10^6 Hz, so a value in dBm/Hz increases by 60 dB when expressed in dBm/MHz.)
EDIT
The point brought up by mtrw is a very valid one; if you are dealing with large amounts of data and are averaging FFT vectors, I highly suggest the Multitaper method; it's a much more statistically sound method of sacrificing frequency resolution for greater confidence on your PSD estimate.
If you have a PSD in W/Hz, e.g. 100 W/Hz, then you have 50 dBm/Hz. dB/Hz is often used vaguely and generically instead of dBm/Hz. Audacity uses dB as shorthand for dBFS (not dBFS/Hz, because it is computing a DFT, and discrete frequencies use a power spectrum, not a density). A digital signal that reaches 50% of the maximum level has an amplitude of −6 dBFS, which is 6 dB below full scale: the removal of the MSB, hence the 6 dB/bit figure (50% of the maximum level is 25% of the maximum power, and 1/4 = −6 dB).
dBm is the logarithmic ratio of the power with respect to 1 mW: you divide the power by 1 mW to get a unitless ratio and then take the logarithm to get dB units, which in this case are more precisely written as dBm.
dBc/Hz is the ratio with respect to the carrier power, which is a ratio of two dBm/Hz values, meaning you subtract them and you get dBc/Hz; you get the same result if you divide the two linear power levels in W and then convert the ratio to dB (or more appropriately dBc).
dB-Hz is a logarithmic measure of bandwidth with respect to 1Hz and
dBJ is a measure of spectral density as a logarithmic ratio to 1 joule, seeing as W/Hz is indeed J.
Power spectral density is a density function, so you need to integrate it to get the actual quantity, like a line integral of an electric field in V/m, or a probability density in probability per unit x. This does not make sense for discrete quantities, where the power spectrum is used instead, akin to a probability mass function. If you see dB (which should be used for the discrete frequency domain) instead of dBm/Hz, then it's wrong; but if you see it instead of dBm, then it's right, as long as it's made clear what the reference is.
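To make the unit bookkeeping concrete, here is a SciPy sketch for a voltage timeseries; the sample rate, tone, and the 50-ohm load are assumptions (the PSD itself knows nothing about the load):

import numpy as np
from scipy import signal

fs = 1e6                                       # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
v = 0.5 * np.sin(2 * np.pi * 100e3 * t)        # 100 kHz tone, 0.5 V amplitude

f, psd = signal.welch(v, fs=fs, nperseg=4096)  # PSD in V^2/Hz for a voltage input

R = 50.0                                       # assumed load; W = V^2/R
psd_w = psd / R                                # now W/Hz

psd_dbm = 10 * np.log10(psd_w / 1e-3)          # dBm/Hz: ratio to 1 mW

print(f[np.argmax(psd)], psd_dbm.max())        # peak near 100 kHz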

Comparison metric for two open contours

I'm validating an image segmentation algorithm applied to 2D images. The algorithm generates a contour segment, i.e. a set of connected pixels that form a free curve in 2D space. The idea is to compare this set of pixels with a ground truth, in my case another contour segment manually traced by an expert. An image showing what would be a segmentation result and the corresponding manual (ground-truth) segmentation is shown below:
[Figure: segmentation result vs. manually traced ground-truth contour]
I'm trying to think of an adequate comparison metric to validate the segmentation results. Ideally the best metric would be the point-to-point Euclidean distance between corresponding pairs of pixels on each segment; however (as seen in the previous figure), the segments don't have the same length (i.e. they differ in the total number of pixels), so pixel-to-pixel comparisons have to be discarded.
Can you suggest me an adequate metric for validating my algorithm? Thanks for any suggestion!
For each pixel in the ground truth, take the distance to the nearest pixel in the segmentation result. Then take the sum of that for all ground truth pixels as the total error.
That's basically recall weighted by distance. If you start with the pixels in the result, it would resemble precision instead.
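A small SciPy sketch of that metric; the two toy contours below are stand-ins for the real pixel sets:

import numpy as np
from scipy.spatial import cKDTree

ground_truth = np.array([[10, 12], [11, 13], [12, 14]], dtype=float)  # toy data
result = np.array([[10, 12], [12, 15]], dtype=float)                  # toy data

# For every ground-truth pixel, distance to the nearest result pixel;
# the sum is the distance-weighted, recall-like error described above
d_gt, _ = cKDTree(result).query(ground_truth)
recall_error = d_gt.sum()

# Swap the roles to get the precision-like counterpart
d_res, _ = cKDTree(ground_truth).query(result)
precision_error = d_res.sum()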
If the curves are closed, you can compute the area between the curves. If you can tell which pixels belong to a segment, that is as easy as computing the XOR of the two pixel sets.
Here is an example of that which I've created using Matlab:
[Figure: XOR of the two pixel sets, visualizing the area between the curves (Matlab)]
You could divide each line into n segments of equal length, then compute the Euclidean distance between each segment and its pair on the other line.
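A NumPy sketch of that idea, resampling both contours by arc length first (the contours here are toy stand-ins):

import numpy as np

def resample(curve, n):
    # curve: (k, 2) array of pixel coordinates along the contour;
    # resample to n points evenly spaced by arc length
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(curve, axis=0), axis=1))]
    s = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(s, d, curve[:, 0]),
                            np.interp(s, d, curve[:, 1])])

contour_a = np.array([[0, 0], [1, 2], [3, 3], [6, 4]], dtype=float)  # toy data
contour_b = np.array([[0, 1], [2, 2], [5, 4]], dtype=float)          # toy data

a, b = resample(contour_a, 100), resample(contour_b, 100)
errors = np.linalg.norm(a - b, axis=1)   # pointwise Euclidean distances
print(errors.mean(), errors.max())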

U-matrix and self-organizing maps

I am trying to understand SOMs. I am confused about the images people post that represent data mapped to the map space using a SOM. It is said that the U-matrix is used, but we have a finite grid of neurons, so how do you get a "continuous" image?
For example, starting with a 40x40 grid there are 1600 neurons. Now I compute the U-matrix, but how do I plot these numbers to get a visualization?
Links:
SOM tutorial with visualization
SOM from Wikipedia
The U-matrix stands for unified distance matrix and contains, in each cell, the Euclidean distance (in the input space) between neighboring cells. Small values in this matrix mean that the SOM nodes are close together in the input space, whereas larger values mean that the SOM nodes are far apart, even if they are close in the output space. As such, the U-matrix can be seen as a summary of the probability density function of the input matrix in a 2D space. Usually, those distance values are discretized, color-coded by intensity, and displayed as a kind of heatmap.
Quoting the Matlab SOM toolbox,
Compute and return the unified distance matrix of a SOM.
For example a case of 5x1 -sized map:
m(1) m(2) m(3) m(4) m(5)
where m(i) denotes one map unit. The u-matrix is a 9x1 vector:
u(1) u(1,2) u(2) u(2,3) u(3) u(3,4) u(4) u(4,5) u(5)
where u(i,j) is the distance between map units m(i) and m(j)
and u(k) is the mean (or minimum, maximum or median) of the
surrounding values, e.g. u(3) = (u(2,3) + u(3,4))/2.
Apart from the SOM toolbox, you may have a look at the kohonen R package (see help(plot.kohonen) and use type="dist.neighbours").
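A small NumPy sketch of the quoted 5x1 example (random codebook vectors as stand-ins):

import numpy as np

m = np.random.rand(5, 3)                           # hypothetical 5x1 map, 3-D inputs

pair = np.linalg.norm(np.diff(m, axis=0), axis=1)  # u(1,2) ... u(4,5)

umat = np.empty(2 * len(m) - 1)                    # the 9x1 u-matrix
umat[1::2] = pair                                  # distances between neighbouring units
umat[2:-2:2] = (pair[:-1] + pair[1:]) / 2          # u(k) = mean of surrounding values
umat[0], umat[-1] = pair[0], pair[-1]              # edge units have a single neighbour
# For a 2D map, the same interleaving yields a (2*rows-1) x (2*cols-1) grid
# that can be displayed as a heatmap.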
