I'm reading an image segmentation paper in which the problem is approached using the paradigm "signal separation", the idea that a signal (in this case, an image) is composed of several signals (objects in the image) as well as noise, and the task is to separate out the signals (segment the image).
The output of the algorithm is a matrix S, which represents a segmentation of the image into M components. T is the total number of pixels in the image, and s_ij is the value of source component (signal/object) i at pixel j.
In the paper I'm reading, the authors wish to select the component s_m which matches certain smoothness and entropy criteria. But I'm failing to understand what entropy is in this case.
Entropy is defined as follows:
H(s_m) = - sum (from n=1 to n=256) of p_n * log(p_n)
and they say that the p_n are "probabilities associated with the bins of the histogram of s_m".
The target component is a tumor and the paper reads: "the tumor related component with "almost" constant values is expected to have the lowest value of entropy."
But what does low entropy mean in this context? What does each bin represent? What does a vector with low entropy look like?
link to paper
They are talking about Shannon's entropy. One way to view entropy is to relate it to the amount of uncertainty about an event associated with a given probability distribution. Entropy can serve as a measure of 'disorder'. As the level of disorder rises, the entropy rises and events become less predictable.
Back to the definition of entropy in the paper:
H(s_m) is the entropy of the random variable s_m. Here p_n is the probability of observing the n-th possible outcome (gray level) of s_m. The probability density p_n is calculated using the gray-level histogram, which is the reason why the sum runs from 1 to 256. The bins represent the possible states.
So what does this mean? In image processing, entropy can be used to classify textures: a certain texture might have a certain entropy because certain patterns repeat themselves in approximately certain ways. In the context of the paper, low entropy H(s_m) means low disorder, low variance within the component m. A component with low entropy is more homogeneous than a component with high entropy, which they use in combination with the smoothness criterion to classify the components.
Another way of looking at entropy is to view it as the measure of information content. A vector with relatively 'low' entropy is a vector with relatively low information content. It might be [0 1 0 1 1 1 0]. A vector with relatively 'high' entropy is a vector with relatively high information content. It might be [0 242 124 222 149 13].
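Here is a minimal sketch (not from the paper) of how such a histogram-based entropy could be computed, applied to the two example vectors above; the bin count and value range are assumptions:

import numpy as np

def shannon_entropy(values, bins=256, value_range=(0, 255)):
    """Entropy estimated from the histogram of the values, as in the paper's formula."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    p = hist / hist.sum()        # probabilities associated with the histogram bins
    p = p[p > 0]                 # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0, 1, 0, 1, 1, 1, 0]))        # ~0.99 bits: few states, low disorder
print(shannon_entropy([0, 242, 124, 222, 149, 13]))  # ~2.58 bits: many states, high disorder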
It's a fascinating and complex subject which really can't be summarised in one post.
Entropy was introduced by Shannon (1948), where a higher value of entropy means more detailed information.
Entropy is a measure of image information content, interpreted as the average uncertainty of the information source.
In an image, entropy is defined in terms of the states of intensity levels that individual pixels can take.
It is used in the quantitative analysis and evaluation of image detail; the entropy value provides a better comparison of the image details.
Perhaps another way to think about entropy and information content in an image is to consider how much an image can be compressed. Independent of the compression scheme (run-length encoding being one of many), you can imagine that a simple image with little information (low entropy) can be encoded with fewer bytes of data, while completely random images (like white noise) cannot be compressed much, if at all.
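As a rough illustration of that idea, one can compare how well a flat image and a noise image compress with a general-purpose compressor (zlib here is just a convenient stand-in for any compression scheme):

import zlib
import numpy as np

flat = np.zeros((64, 64), dtype=np.uint8)                     # simple image, little information
noise = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # white noise, maximal information

print(len(zlib.compress(flat.tobytes())))    # a few dozen bytes
print(len(zlib.compress(noise.tobytes())))   # stays close to the raw 4096 bytes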
I have some geographical trajectories sampled to analyze, and I calculated the histogram of data in spatial and temporal dimension, which yielded a time domain based feature for each spatial element. I want to perform a discrete FFT to transform the time domain based feature into frequency domain based feature (which I think maybe more robust), and then do some classification or clustering algorithms.
But I'm not sure which descriptor to use as the frequency-domain feature, since a signal has an amplitude spectrum, a power spectrum, and a phase spectrum; I've read some references but am still confused about their significance. And what distance (similarity) function should be used when running learning algorithms on the frequency-domain feature vectors (Euclidean distance? Cosine distance? A Gaussian kernel? A chi-squared kernel, or something else?)
I hope someone can give me a clue or point me to some material I can refer to. Thanks!
Edit
Thanks to @DrKoch, I chose the spatial element with the largest L1 norm and plotted its log power spectrum in Python; it did show some prominent peaks. Below are my code and the figure.
import numpy as np
import matplotlib.pyplot as plt

# `signal` is the 1-D time-domain histogram feature for the chosen spatial element
sp = np.fft.fft(signal)
freq = np.fft.fftfreq(signal.shape[-1], d=1.)  # time slot of the histogram is 1 hour
plt.plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
And I have several trivial questions to ask to make sure I totally understand your suggestion:
In your second suggestion, you said "ignore all these values."
Do you mean that the horizontal line represents the threshold, and all values below it should be set to zero?
"you may search for the two, three largest peaks and use their location and probably widths as 'Features' for further classification."
I'm a little bit confused about the meaning of "location" and "width": does "location" refer to the log value of the power spectrum (y-axis) and "width" to the frequency (x-axis)? If so, how do I combine them into a feature vector, and how do I compare two feature vectors with "a similar frequency and a similar width"?
Edit
I replaced np.fft.fft with np.fft.rfft to calculate the positive part and plot both power spectrum and log power spectrum.
code:
sp = np.fft.rfft(signal)                        # positive-frequency half of the spectrum
freq = np.fft.rfftfreq(signal.shape[-1], d=1.)
f, axarr = plt.subplots(2, sharex=True)         # plt.subplots, not plt.subplot
axarr[0].plot(freq, np.abs(sp) ** 2)
axarr[1].plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
figure:
Please correct me if I'm wrong:
I think I should keep the last four peaks in the first figure, using power = np.abs(sp) ** 2 and power[power < threshold] = 0, because the log power spectrum reduces the differences among the components. Then I'd use the log spectrum of the thresholded power as the feature vector to feed the classifiers.
I've also seen references suggest applying a window function (e.g. a Hamming window) before the FFT to avoid spectral leakage. My raw data are sampled every 5 ~ 15 seconds and I've applied a histogram on the sampling times; is that equivalent to applying a window function, or do I still need to apply one to the histogram data?
Generally you should extract just a small number of "Features" out of the complete FFT spectrum.
First: use the log power spectrum.
Complex values and phase are useless in these circumstances, because they depend on where you start/stop your data acquisition (among many other things).
Second: you will see a "noise level", i.e. most values are below a certain threshold; ignore all these values.
Third: if you are lucky, i.e. your data has some harmonic content (cycles, repetitions), you will see a few prominent peaks.
If there are clear peaks, it is even easier to detect the noise: everything between the peaks should be considered noise.
Now you may search for the two or three largest peaks and use their locations and probably their widths as "features" for further classification.
Location is the x-value of the peak, i.e. the 'frequency'. It says something about how "fast" your cycles are in the input data.
If your cycles don't have a constant frequency during the measuring interval (or you use a window before calculating the FFT), the peak will be broader than one bin. So the width of the peak says something about the 'stability' of your cycles.
Based on this: two patterns are similar if the biggest peaks of both have a similar frequency and a similar width, and so on.
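For example, a rough sketch of how such peak locations and widths could be extracted in Python (the scipy peak functions and the factor of 10 over the median as the noise threshold are just one possible choice, not a prescription):

import numpy as np
from scipy.signal import find_peaks, peak_widths

# `signal` is the 1-D time-domain histogram feature for one spatial element
sp = np.fft.rfft(signal)
power = np.abs(sp) ** 2
freq = np.fft.rfftfreq(signal.shape[-1], d=1.0)

noise_level = np.median(power)                          # crude "noise level" estimate
peaks, _ = find_peaks(power, height=10 * noise_level)   # ignore everything below the threshold
widths = peak_widths(power, peaks, rel_height=0.5)[0]   # width of each peak in bins

largest = np.argsort(power[peaks])[::-1][:3]            # two or three largest peaks
features = np.concatenate([freq[peaks][largest], widths[largest]])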
EDIT
Very interesting to see a logarithmic power spectrum of one of your examples.
Now it is clear that your input contains a single harmonic (periodic, oscillating) component with a frequency (repetition rate, cycle duration) of about f0 = 0.04.
(This is a relative frequency, proportional to your sampling frequency, the inverse of the time between individual measurement points.)
It is not a pure sine wave, but some "interesting" waveform. Such waveforms produce peaks at 1*f0, 2*f0, 3*f0 and so on.
(So using an FFT for further analysis turns out to be a very good idea.)
At this point you should produce spectra of several measurements and see what makes measurements similar and how different measurements differ. What are the "important" features that distinguish your measurements? Things to look out for:
Absolute amplitude: Height of the prominent (leftmost, highest) peaks.
Pitch (main cycle rate, speed of changes): this is the position of the first peak, or the distance between consecutive peaks.
Exact Waveform: Relative amplitude of the first few peaks.
If your most important feature is absolute amplitude, you're better off calculating the RMS (root mean square) level of your input signal.
If pitch is important, you're better off calculating the ACF (auto-correlation function) of your input signal.
Don't focus on the rightmost peaks; these come from the high-frequency components in your input and tend to vary as much as the noise floor.
Windows
For a high-quality analysis it is important to apply a window to the input data before applying the FFT. This reduces the influence of the "jump" between the end of your input vector and the beginning of your input vector, because the FFT considers the input as a single cycle.
There are several popular windows which represent different choices in an unavoidable trade-off: precision of a single peak vs. the level of the sidelobes:
You chose a "rectangular window" (equivalent to no window at all, just start/stop your measurement). This gives excellent precision of your peaks, which now have a width of just one sample. Your sidelobes (the small peaks left and right of your main peaks) are at -21 dB, very tolerable given your input data. In your case this is an excellent choice.
A Hanning window is a single cosine wave. It makes your peaks slightly broader but reduces side-lobe levels.
The Hamming window (a cosine wave slightly raised above 0.0) produces even broader peaks, but suppresses the side-lobes to -42 dB. This is a good choice if you expect further weak (but important) components between your main peaks, or generally if you have complicated signals like speech, music and so on.
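For example, a minimal sketch of applying a window before the FFT (assuming `signal` is the time-domain histogram feature from the question):

import numpy as np

window = np.hanning(len(signal))                 # or np.hamming(len(signal))
sp = np.fft.rfft(signal * window)
log_power = np.log10(np.abs(sp) ** 2 + 1e-12)    # small offset avoids log10(0)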
Edit: Scaling
Correct scaling of a spectrum is a complicated thing, because the values of the FFT bins depend on many things like the sampling rate, the length of the FFT, the window, and even implementation details of the FFT algorithm (there exist several different accepted conventions).
Ultimately, the FFT should reflect the underlying conservation of energy: the RMS of the input signal should be the same as the RMS (energy) of the spectrum.
On the other hand, if used for classification it is enough to maintain relative amplitudes. As long as the parameters mentioned above do not change, the result can be used for classification without further scaling.
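As a quick sanity check of that energy conservation (Parseval's relation) under NumPy's unscaled FFT convention:

import numpy as np

x = np.random.randn(1024)                       # arbitrary test signal
X = np.fft.fft(x)

time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)   # 1/N factor for NumPy's convention
print(np.allclose(time_energy, freq_energy))    # True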
I am doing research in stereo vision and I am interested in the accuracy of depth estimation in this question. It depends on several factors like:
Proper stereo calibration (rotation, translation and distortion extraction),
image resolution,
camera and lens quality (less distortion, proper color capture),
matching features between two images.
Let's say we are not using low-cost cameras and lenses (no cheap webcams, etc.).
My question is, what is the accuracy of depth estimation we can achieve in this field?
Does anyone know a real stereo vision system that works with some accuracy?
Can we achieve 1 mm depth estimation accuracy?
My question also concerns systems implemented in OpenCV. What accuracy did you manage to achieve?
Q. Does anyone know a real stereo vision system that works with some accuracy? Can we achieve 1 mm depth estimation accuracy?
Yes, you definitely can achieve 1mm (and much better) depth estimation accuracy with a stereo rig (heck, you can do stereo recon with a pair of microscopes). Stereo-based industrial inspection systems with accuracies in the 0.1 mm range are in routine use, and have been since the early 1990's at least. To be clear, by "stereo-based" I mean a 3D reconstruction system using 2 or more geometrically separated sensors, where the 3D location of a point is inferred by triangulating matched images of the 3D point in the sensors. Such a system may use structured light projectors to help with the image matching, however, unlike a proper "structured light-based 3D reconstruction system", it does not rely on a calibrated geometry for the light projector itself.
However, most (likely, all) such stereo systems designed for high accuracy use either some form of structured lighting, or some prior information about the geometry of the reconstructed shapes (or a combination of both), in order to tightly constrain the matching of points to be triangulated. The reason is that, generally speaking, one can triangulate more accurately than they can match, so matching accuracy is the limiting factor for reconstruction accuracy.
One intuitive way to see why this is the case is to look at the simple form of the stereo reconstruction equation: z = f b / d. Here "f" (focal length) and "b" (baseline) summarize the properties of the rig, and they are estimated by calibration, whereas "d" (disparity) expresses the match of the two images of the same 3D point.
Now, crucially, the calibration parameters are "global" ones, and they are estimated based on many measurements taken over the field of view and depth range of interest. Therefore, assuming the calibration procedure is unbiased and that the system is approximately time-invariant, the errors in each of the measurements are averaged out in the parameter estimates. So it is possible, by taking lots of measurements, and by tightly controlling the rig optics, geometry and environment (including vibrations, temperature and humidity changes, etc), to estimate the calibration parameters very accurately, that is, with unbiased estimated values affected by uncertainty of the order of the sensor's resolution, or better, so that the effect of their residual inaccuracies can be neglected within a known volume of space where the rig operates.
However, disparities are point-wise estimates: one states that point p in left image matches (maybe) point q in right image, and any error in the disparity d = (q - p) appears in z scaled by f b. It's a one-shot thing. Worse, the estimation of disparity is, in all nontrivial cases, affected by the (a-priori unknown) geometry and surface properties of the object being analyzed, and by their interaction with the lighting. These conspire - through whatever matching algorithm one uses - to reduce the practical accuracy of reconstruction one can achieve. Structured lighting helps here because it reduces such matching uncertainty: the basic idea is to project sharp, well-focused edges on the object that can be found and matched (often, with subpixel accuracy) in the images. There is a plethora of structured light methods, so I won't go into any details here. But I note that this is an area where using color and carefully choosing the optics of the projector can help a lot.
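To make that scaling concrete, here is a back-of-the-envelope sketch based on z = f b / d (the focal length, baseline, and disparities below are made-up numbers, not from any particular rig):

def depth(f_px, baseline_m, disparity_px):
    """Simple rectified-stereo model: z = f * b / d."""
    return f_px * baseline_m / disparity_px

f_px, b_m = 2000.0, 0.10                 # hypothetical: 2000 px focal length, 10 cm baseline
for d_px in (200.0, 20.0):               # near point vs. far point
    z = depth(f_px, b_m, d_px)
    dz = z ** 2 / (f_px * b_m) * 0.1     # depth change caused by a 0.1 px disparity error
    print(f"z = {z:5.1f} m -> 0.1 px matching error ~ {dz * 1000:.1f} mm of depth error")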
So, what you can achieve in practice depends, as usual, on how much money you are willing to spend (better optics, lower-noise sensor, rigid materials and design for the rig's mechanics, controlled lighting), and on how well you understand and can constrain your particular reconstruction problem.
I would add that using color is a bad idea even with expensive cameras; just use the gradient of the gray intensity. Some producers of high-end stereo cameras (for example Point Grey) used to rely on color and then switched to gray. Also consider bias and variance as two components of the stereo matching error. This matters because, for example, correlation stereo with a large correlation window averages depth (i.e. models the world as a bunch of fronto-parallel patches), which reduces the variance while increasing the bias, and vice versa. So there is always a trade-off.
More than the factors you mentioned above, the accuracy of your stereo will depend on the specifics of the algorithm. It is up to the algorithm to validate depth (an important step after stereo estimation) and gracefully patch the holes in textureless areas. For example, consider back-and-forth validation (matching R to L should produce the same candidates as matching L to R), blob noise removal (non-Gaussian noise, typical for stereo matching, removed with a connected-component algorithm), texture validation (invalidate depth in areas with weak texture), and uniqueness validation (a uni-modal matching score without strong second and third candidates; this is typically a shortcut to back-and-forth validation), etc. The accuracy will also depend on sensor noise and the sensor's dynamic range.
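As an illustration, here is a minimal sketch of the back-and-forth (left-right) validation, assuming two dense disparity maps and the usual convention that a left pixel at column x matches the right pixel at column x - d:

import numpy as np

def lr_consistency_mask(disp_lr, disp_rl, tol=1.0):
    """Keep only pixels whose L->R and R->L disparities agree within `tol` pixels."""
    h, w = disp_lr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(np.round(xs - disp_lr).astype(int), 0, w - 1)   # matched column in the right image
    return np.abs(disp_lr - disp_rl[ys, xr]) <= tol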
Finally, you have to consider accuracy as a function of depth, since d = f*B/z, where B is the baseline between the cameras, f is the focal length in pixels, and z is the distance along the optical axis. Thus there is a strong dependence of accuracy on the baseline and the distance.
Kinect will provide 1 mm accuracy (bias) with quite a large variance up to 1 m or so. Beyond that it degrades sharply. Kinect also has a dead zone up to 50 cm, since there is not sufficient overlap of the two cameras at close distance. And yes, Kinect is a stereo camera where one of the cameras is simulated by an IR projector.
I am sure with probabilistic stereo such as Belief Propagation on Markov Random Fields one can achieve a higher accuracy. But those methods assume some strong priors about smoothness of object surfaces or particular surface orientation. See this for example, page 14.
If you want to know a bit more about the accuracy of the different approaches, take a look at this site; although it is no longer very active, the results are pretty much state of the art. Take into account that a couple of the papers presented there went on to create companies. What do you mean by a real stereo vision system? If you mean commercial, there aren't many; most commercial reconstruction systems work with structured light or directly with scanners. This is because (you missed one important factor in your list) texture is a key factor for accuracy (or, even before that, correctness); a white wall cannot be reconstructed by a stereo system unless texture or structured light is added. Nevertheless, in my own experience, systems that involve variational matching can be very accurate (subpixel accuracy in image space), which is generally not achieved by probabilistic approaches. One last remark: the distance between cameras is also important for accuracy. Very close cameras will find a lot of correct matches quickly, but the accuracy will be low; more distant cameras will find fewer matches and will probably take longer, but the results could be more accurate. There is an optimal conic region defined in many books.
After all this blabla, I can tell you that using OpenCV, one of the best things you can do is an initial camera calibration, then use Brox's optical flow to find matches and reconstruct.
This may be a naive question, but I didn't find exact details in searching.
In FFT with window overlapping, after we've applied window functions to overlapping sequences of the data set and obtained the FFT results, how do we combine those FFT results for the overlapping sequences?
Do we just add them together, treating the frequency-domain results as non-overlapping parts?
Are the magnitudes of these complex-valued results the frequency magnitudes?
Thank you.
For each FFT you typically calculate the magnitude of each complex output bin; this gives you a spectrum (magnitude versus frequency) for one window. The sequence of magnitude spectra for all time windows is effectively a 3D data set or graph (magnitude versus frequency versus time), which is typically plotted as a spectrogram, waterfall, or time-varying 2D spectrum.
In the specific case where the data is statistically stationary and you just want to reduce the variance you can average the successive magnitude spectra - this is called ensemble averaging. Normally though for time-varying signals such as speech or music you would not want to do this.
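For illustration, a small sketch of both ideas, assuming a 1-D signal x (the frame length and hop size are arbitrary choices):

import numpy as np

def magnitude_spectra(x, win_len=256, hop=128):
    """Magnitude spectrum of each overlapping, Hann-windowed frame."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=-1))   # shape: (n_frames, n_bins)

x = np.random.randn(4096)               # stand-in for a stationary signal
spectra = magnitude_spectra(x)          # plot as a spectrogram: time vs. frequency vs. magnitude
ensemble_avg = spectra.mean(axis=0)     # ensemble average (only sensible for stationary data)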
I've read several image processing books and websites, but I'm still not sure of the true definition of the term "energy" in image processing. I've found several definitions, and sometimes they just don't match.
When we say "energy" in Image processing, what are we implying?
The energy is a measure of the localized change of the image.
Energy gets a bunch of different names in a lot of different contexts, but it tends to refer to the same thing: the rate of change in the color/brightness/magnitude of the pixels over local areas. This is especially true for the edges of the things inside the image, and because of the nature of compression, these areas are the hardest to compress; it's therefore a solid guess that they are more important, since they are often edges or quick gradients. These are different contexts, but they refer to the same thing.
The seam carving algorithm uses a determination of energy (the gradient magnitude) to find the seams that will be least noticed if removed. JPEG represents a local cluster of pixels relative to the energy of the first one. The snake algorithm uses energy to find the local contoured edge of an object in the image. So there are a lot of different definitions, but they all refer to the sort of oomph of the image, whether that's the sum of the local pixels in terms of the square of absolute brightness, or the hard-to-compress bits in a JPEG, or the edges in Canny edge detection, or the gradient magnitude:
The important bit is that energy is where the stuff is.
The energy of an image more broadly is the distances of some quality between the pixels of some locality.
We can take the sum of the LAB dE2000 color distances within a properly weighted 2D Gaussian kernel. Here the distances are summed together, the locality is defined by a Gaussian kernel, the quality is color, and the distance is the LAB Delta E formula from the year 2000. (Errata: this previously claimed that E stood for Euclidean; the standard Delta E distance is Euclidean, but the 94 and 00 formulas are not strictly Euclidean, and the 'E' stands for Empfindung, German for "sensation".) We could also add up a local 3x3 kernel of the local differences in brightness, or the square of brightness, etc. We need to measure the localized change of the image.
In this example, local is defined as a 2d gaussian kernel and the color distance as LabDE2000 algorithm.
If you took an image and sorted all the pixels by color for some reason, you would reduce the energy of the image. You could take a collection of 50% black pixels and 50% white pixels and arrange them as random noise for maximal energy, or put them on two sides of the image for minimum energy. Likewise, if you had 100% white pixels, the energy would be 0 no matter how you arranged them.
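As a toy illustration of that claim, using the sum of squared gradients as the energy measure (the 64x64 size and this particular gradient definition are arbitrary choices):

import numpy as np

def gradient_energy(img):
    """Sum of squared finite-difference gradients: total localized change."""
    gy, gx = np.gradient(img.astype(float))
    return np.sum(gx ** 2 + gy ** 2)

pixels = np.random.randint(0, 2, (64, 64)) * 255         # roughly 50% black, 50% white noise
two_sides = np.sort(pixels, axis=None).reshape(64, 64)    # same pixels, sorted into two regions

print(gradient_energy(pixels))                    # high energy
print(gradient_energy(two_sides))                 # much lower energy
print(gradient_energy(np.full((64, 64), 255)))    # 0: all white, no arrangement changes it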
It depends on the context, but in general, in signal processing, "energy" corresponds to the mean squared value of the signal (typically measured with respect to the global mean value). This concept is usually associated with Parseval's theorem, which allows us to think of the total energy as distributed along "frequencies" (so one can say, for example, that an image has most of its energy concentrated in low frequencies).
Another, related, use is in image transforms: for example, the DCT transform (the basis of the JPEG compression method) transforms a block of pixels (an 8x8 image) into a matrix of transformed coefficients; for typical images it turns out that, while the original 8x8 image has its energy evenly distributed among the 64 pixels, the transformed image has its energy concentrated in the upper-left "pixels" (which, again, correspond to "low frequencies", in some analogous sense).
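A small sketch of that energy compaction with an orthonormal 2-D DCT (the smooth test block is made up):

import numpy as np
from scipy.fft import dctn

block = np.outer(np.linspace(0, 255, 8), np.ones(8))   # smooth 8x8 gradient block
coeffs = dctn(block, norm='ortho')

total_energy = np.sum(coeffs ** 2)        # equals np.sum(block ** 2) for an orthonormal DCT
low_freq_energy = np.sum(coeffs[:2, :2] ** 2)
print(low_freq_energy / total_energy)     # close to 1: energy packed into the upper-left corner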
Energy is a fairly loose term used to describe any user-defined function (in the image domain).
The motivation for using the term 'energy' is that typical object detection/segmentation tasks are posed as an energy minimization problem. We define an energy that captures the solution we desire and perform gradient descent to compute its lowest value, resulting in a solution for the image segmentation.
There is more than one definition of "energy" in image processing, so it depends on the context in which it is used.
Energy is used to describe a measure of "information" when formulating an operation under a probability framework such as MAP (maximum a posteriori) estimation in conjunction with Markov random fields. Sometimes the energy is a negative measure to be minimised and sometimes it is a positive measure to be maximized.
If you consider that (for natural images captured by cameras) the light is an energy, you may call energy the value of the pixel on some channel.
However, I think that by energy the books are referring to the spectral density. From wikipedia:
The energy spectral density describes how the energy (or variance) of a signal or a time series is distributed with frequency
http://en.wikipedia.org/wiki/Spectral_density
Going back to my chemistry - Energy and Entropy are closely related terms. And Entropy and Randomness are also closely related. So in Image Processing, Energy might be similar to Randomness. For example, a picture of a plain wall has low energy, while the picture of a city taken from a helicopter might have high energy.
Image "energy" should be inversely proportional to Shannon entropy of image. But as already said image energy is loosely coupled term, it is better use "compressibility" term instead. That is - high image "energy" should correspond to high image compressibility.
http://lcni.uoregon.edu/~mark/Stat_mech/thermodynamic_entropy_and_information.html
Energy is like the "information present in the image". Compression of images causes energy loss. I guess it's something like that.
Energy is defined based on a normalized histogram of the image. Energy shows how the gray levels are distributed: when the number of gray levels is low, the energy is high.
The snake algorithm is an image processing technique used to determine the contour of an object. The snake is nothing but a vector of (x, y) points with some constraints; its final goal is to surround the object, describe its shape (contour), and then track or represent the object by its shape.
The algorithm has two kinds of energies, internal and external.
Internal energy (the snake energy) (IE) is a user-defined energy which acts on the snake (internally) to impose constraints on the smoothness of the snake. Without such a force, the snake shape would end up as the exact shape of the object; this is not desirable, because the exact shape of an object is very difficult to obtain due to light conditions, image quality, noise, etc.
The external energy (EE) arises from the data (the image intensities); it is nothing but the absolute difference of the intensities in the x and y directions (the intensity gradient) multiplied by -1, so that it can be summed with the internal energy, because the total energy must be minimized. So the total energy over all of the snake points should be minimized. Ideally, this happens at edges, because the gradient at an edge (the EE) is maximized, and since it is multiplied by -1, the total energy of the snake around the nearest object is minimized, and thus the algorithm converges to a solution, which is hopefully the true contour of the studied object.
Because this algorithm relies on EE, which is not only high at edges but also high at noisy points, sometimes the snake algorithm does not converge to an optimal solution; that is why it is an approximate, greedy algorithm.
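For illustration, a rough sketch of the two energy terms described above (the weights alpha and beta are hypothetical; a real snake implementation minimizes the combined energy iteratively):

import numpy as np

def external_energy(img):
    """External energy: negative gradient magnitude, lowest on strong edges."""
    gy, gx = np.gradient(img.astype(float))
    return -np.sqrt(gx ** 2 + gy ** 2)

def internal_energy(snake, alpha=0.1, beta=0.1):
    """Internal energy: penalizes stretching (1st differences) and bending (2nd differences)."""
    d1 = np.diff(snake, axis=0)          # snake is an (N, 2) array of (x, y) points
    d2 = np.diff(snake, n=2, axis=0)
    return alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)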
I found this in an image processing book:
Energy: S_N = sum (from b=0 to b=L-1) of |P(b)|^2
where P(b) = N(b) / M,
M is the total number of pixels in a neighborhood window centered about (j, k), and N(b) is the number of pixels of amplitude b in the same window.
It may give us a better understanding if we see this equation together with entropy:
Entropy: S_E = - sum (from b=0 to b=L-1) of P(b) * log2(P(b))
source: pp. 538-539, Digital Image Processing by William K. Pratt (4th edition)
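For illustration, a small sketch of both descriptors computed from the histogram of a neighborhood window (extraction of the window itself is left out; L = 256 gray levels is assumed):

import numpy as np

def window_energy_and_entropy(window, levels=256):
    """Histogram-based energy and entropy of a neighborhood window, per the formulas above."""
    N, _ = np.histogram(window, bins=levels, range=(0, levels - 1))
    P = N / window.size                      # P(b) = N(b) / M
    energy = np.sum(P ** 2)                  # S_N
    P_nz = P[P > 0]
    entropy = -np.sum(P_nz * np.log2(P_nz))  # S_E
    return energy, entropy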
For my current imaging project, which is rendering a diffuse light source, I'd like to consider energy as light energy, or radiation energy. The question I had initially: does an RGB "pixel value" represent light energy? This could be tested using a light intensity meter and generating successive screens with gray pixel values (n, n, n) for n = 0..255. According to the MATLAB forum, the radiated energy of one grayscale pixel is always proportional to its pixel value, but from pixel to pixel it will vary slightly.
There is another assumption regarding energy: while performing forward ray tracing, I record a ray count at each sampled position that is hit. This ray count is, or preferably should be, proportional to the radiation energy that would hit the target at this position. In order to compare it to actual photographs, I'd have to normalize the ray count to some pixel value range (?). I enclose an example below; the energy source is a diffuse light emitter inside a dark cylinder.
Energy in signal processing is the integral of the squared signal within the signal boundaries. An analogy can be made for two-dimensional signals: you can square the pixel values and sum over all the pixels.
Image energy can be calculated in MATLAB from a gray-level co-occurrence matrix (graycoprops expects the output of graycomatrix rather than the image itself):
glcm = graycomatrix(i1);
image_energy = graycoprops(glcm, {'Energy'})