Why should physically realizable systems have Hermitian symmetry? - signal-processing

I have read that for any physically realizable system, H(f) has Hermitian symmetry, that is, H(f) = H*(−f), and that this fact can be used to show that its magnitude is an even function and its phase can be expressed as an odd function.
I came across this statement in many books, but what is the physical explanation for it? Why must the magnitude response be an even function, and the phase response an odd function, for physically realizable systems?

This is likely due to the standard practice of reporting all measurements of observables of a physically realizable system in terms of strictly real values. And the DFT of any strictly real set of measurements is conjugate symmetric.
If you assume that some measurement or behavior of a physical system is an imaginary component of a complex value, then conjugate symmetry of the DFT or FT is no longer guaranteed.
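To illustrate with a minimal NumPy sketch (the sample values are made up): the DFT of a strictly real vector satisfies X[k] = conj(X[(N - k) mod N]), which is H(f) = H*(−f) on the DFT grid, and the magnitude is therefore even in frequency.
import numpy as np
# Strictly real measurements of some physical quantity (the usual case).
x = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 0.0, 2.5, 1.5])
X = np.fft.fft(x)
N = len(x)
# Conjugate (Hermitian) symmetry: X[k] == conj(X[(N - k) % N]), i.e. H(f) = H*(-f).
print(np.allclose(X, np.conj(X[(-np.arange(N)) % N])))         # True
# One consequence: the magnitude spectrum is an even function of frequency.
print(np.allclose(np.abs(X), np.abs(X[(-np.arange(N)) % N])))  # True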

Related

Time Series DFT Signals Clustering

I have a number of time series data sets that I want to transform with the DFT in order to reduce dimensionality. After transforming, I want to cluster the resulting DFT data sets using the k-means algorithm.
Since DFT coefficients are complex numbers, how can one cluster them?
You could simply treat the imaginary part as another component in your vectors. In other applications, you will want to ignore it!
But you'll be facing other, more severe challenges.
Data mining, and clustering in particular, rarely is as easy as applying function A (DFT) and function B (k-means) and then, hooray, you have the result. Sorry - that is not how exploratory data mining works.
First of all, for many time series, DFT will not be helpful at all. On others, you will first have to do appropriate resampling, or segmentation, or get rid of uninteresting effects such as seasonality. Even if DFT works, it may emphasize artifacts such as the sampling frequency or some interferences.
And then you'll run into one major problem: k-means is based on the assumption that all attributes have the same importance. And DFT is based on the very opposite idea: the first components capture most of the signal, the later ones only minor deviations from it (and that is the very motivation for using this as dimensionality reduction).
So based on this intuition, you should probably never apply k-means to DFT coefficients at all. At the same time, data mining has repeatedly shown that approaches that are "statistical nonsense" can nevertheless provide useful results... so you can try, but verify your results with care, and avoid being too enthusiastic or optimistic.
With the help of the FFT, the data set is converted into DFT signals; the FFT is used to calculate the DFT of each small data set.
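For what it's worth, here is a minimal sketch of the mechanics being discussed (not an endorsement of the approach): keep the first few DFT coefficients as features and hand their real and imaginary parts to k-means. Scikit-learn's KMeans and a synthetic toy data set are assumptions made for this sketch.
import numpy as np
from sklearn.cluster import KMeans  # assumed available
rng = np.random.default_rng(42)
# Toy data: 100 series of length 256, two groups with different dominant frequencies.
t = np.arange(256)
series = np.vstack([np.sin(2 * np.pi * f * t / 256) + 0.3 * rng.normal(size=256)
                    for f in rng.choice([3, 12], size=100)])
# Keep only the first few DFT coefficients as a crude dimensionality reduction.
n_coeffs = 8
dft = np.fft.rfft(series, axis=1)[:, :n_coeffs]
# k-means needs real-valued features: stack real and imaginary parts side by side.
features = np.hstack([dft.real, dft.imag])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])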

what does Maximum Likelihood Estimation exactly mean?

When we are training our model, we usually use MLE to estimate its parameters. I know it means that the most probable data for such a learned model is our training set. But I'm wondering whether its probability is exactly 1 or not?
You almost have it right. The Likelihood of a model (theta) for the observed data (X) is the probability of observing X, given theta:
L(theta|X) = P(X|theta)
For Maximum Likelihood Estimation (MLE), you choose the value of theta that provides the greatest value of P(X|theta). This does not necessarily mean that the observed value of X is the most probable for the MLE estimate of theta. It just means that there is no other value of theta that would provide a higher probability for the observed value of X.
In other words, if T1 is the MLE estimate of theta, and if T2 is any other possible value of theta, then P(X|T1) >= P(X|T2). However, there could still be another possible value of the data (Y), different from the observed data (X), such that P(Y|T1) > P(X|T1).
The probability of X for the MLE estimate of theta is not necessarily 1 (and probably never is except for trivial cases). This is expected since X can take multiple values that have non-zero probabilities.
To build on what bogatron said with an example, the parameters learned from MLE are the ones that explain the data you see (and nothing else) the best. And no, the probability is not 1 (except in trivial cases).
As an example (one that has been used billions of times) of what MLE does:
If you have a simple coin-toss problem, and you observe the results of 5 coin tosses (H, H, H, T, H) and you do MLE, you will end up giving p(coin_toss == H) a high value (0.80, i.e. 4 heads out of 5) because you see heads far more often. There are good and bad things about MLE, obviously...
Pros: It is an optimization problem, so it is generally quite fast to solve (even if there isn't an analytical solution).
Cons: It can overfit when there isn't a lot of data (like our coin-toss example).
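A small numerical sketch of that coin-toss example (the grid search is only for illustration; the Bernoulli MLE has the closed form heads/n): it recovers p = 0.80, and even at the maximum the probability of the observed sequence is far from 1, which also answers the original question.
import numpy as np
# Observed tosses: H, H, H, T, H  ->  4 heads out of 5.
heads, n = 4, 5
# Probability of the observed sequence as a function of p = P(coin_toss == H).
p = np.linspace(0.01, 0.99, 9801)
likelihood = p**heads * (1 - p)**(n - heads)
p_mle = p[np.argmax(likelihood)]
print(p_mle)             # ~0.80, the MLE
print(likelihood.max())  # ~0.082, nowhere near 1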
The example I got in my stat classes was as follows:
A suspect is on the run! Nothing is known about them, except that they're approximately 1m80 tall. Should the police look for a man or a woman?
The idea here is that you have a parameter for your model (M/F), and probabilities given that parameter. There are tall men, tall women, short men and short women. However, in the absence of any other information, the probability of a man being 1m80 is larger than the probability of a woman being 1m80. Likelihood (as bogatron very well explained) is a formalisation of that, and maximum likelihood is the estimation method based on favouring parameters which are more likely to result in the actual observations.
But that's just a toy example, with a single binary variable... Let's expand it a bit: I threw two identical dice, and the sum of their values is 7. How many sides do my dice have? Well, we all know that the probability of two D6 summing to 7 is quite high. But it might as well be two D4, D20, D100, ... However, P(7 | 2D6) > P(7 | 2D20), and P(7 | 2D6) > P(7 | 2D100), ..., so you might estimate that my dice are 6-faced. That doesn't mean it's true, but it's a reasonable estimate in the absence of any additional information.
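A quick sanity check of those dice probabilities (brute-force enumeration, purely for illustration):
from itertools import product
def p_sum(target, sides):
    # Probability that two fair dice with `sides` faces sum to `target`.
    rolls = list(product(range(1, sides + 1), repeat=2))
    return sum(a + b == target for a, b in rolls) / len(rolls)
for sides in (6, 20, 100):
    print(sides, p_sum(7, sides))
# D6: 6/36 ~= 0.167, D20: 6/400 = 0.015, D100: 6/10000 = 0.0006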
That's better, but we're not in machine-learning territory yet... Let's get there: if you want to fit your umptillion-layer neural network on some empirical data, you can consider all possible parameterisations, and how likely each of them is to return the empirical data. That's exploring an umptillion-dimensional space, each dimension having infinitely many possibilities, but you can map every single one of these points to a likelihood. It is then reasonable to fit your network using the parameters with the highest likelihood: given that the empirical data did occur, it is reasonable to assume that they should be likely under your model.
That doesn't mean that your parameters are likely! Just that under these parameters, the observed value is likely. Statistical estimation is usually not a closed problem with a single solution (like solving an equation might be, and where you would have a probability of 1); instead, we need to find a best solution according to some metric. Likelihood is such a metric, and it is used widely because it has some interesting properties:
It makes intuitive sense
It's reasonably simple to compute, fit and optimise, for a large family of models
For normal variables (which tend to crop up everywhere), MLE gives the same results as other methods, such as least-squares estimation (see the sketch after this list)
Its formulation in terms of conditional probabilities makes it easy to use/manipulate it in Bayesian frameworks
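As a toy illustration of the least-squares point in the list above (synthetic data and a crude grid search instead of a proper optimiser, so only a sketch): minimising the Gaussian negative log-likelihood of a line is, up to constants, minimising the sum of squared residuals, so it agrees with an ordinary least-squares fit.
import numpy as np
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=50)
# Under Gaussian noise, the negative log-likelihood of a line y = a*x + b is,
# up to constants, the sum of squared residuals; grid-searching it reproduces
# the least-squares fit.
a_grid = np.linspace(1.0, 3.0, 201)
b_grid = np.linspace(0.0, 1.0, 101)
nll = [((y - (a * x + b))**2).sum() for a in a_grid for b in b_grid]
best = int(np.argmin(nll))
a_mle, b_mle = a_grid[best // len(b_grid)], b_grid[best % len(b_grid)]
print(a_mle, b_mle)          # close to the true 2.0 and 0.5
print(np.polyfit(x, y, 1))   # the least-squares fit agrees (slope, intercept)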

Can GLSL perform a recursion formula calculation? Or how can I speed up this formula?

I want to implement this formula in my iOS app. Is there any way to use GLSL to speed this formula up? Or can I use Metal or something to speed it up?
imageOut[0] = imageIn[0] * b;  /* first sample has no previous output to feed back */
for (k = 1; k < imageSize; k++) {
    imageOut[k] = imageOut[k - 1] * a + imageIn[k] * b;
}
OpenCL is not available.
This is a classic IIR filter, and the data dependencies cause problems when converting it to SIMD code. This means that you can't do the operation as a simple transform feedback or render-to-texture operation. In other words, the GPU is designed to work on a bunch of data in parallel, but your formula forces the output to be computed serially (you can't compute out[k] without computing out[k-1] first).
I see two ways to optimize this:
You can use SIMD on the CPU. For iOS, this means ARM NEON. See articles like Optimising IIR Filters Using ARM NEON or Parallelization of IIR Filters using SIMD Extensions.
You can redesign the filter as an FIR filter, completely eliminating data dependencies.
Unfortunately, there is no easy translation to GLSL. Maybe you could use Metal instead of NEON, I'm not sure.
What you have there, as Dietrich Epp already pointed out, is an IIR filter. Now on a computer there's no such thing as "infinite"; you're always limited by number precision, memory, available computation time, etc. Even if you executed that loop ad infinitum, due to the limited precision of your typical number representation you'd lose anything meaningful to round-off errors quite early on.
So let's be honest about it and call it an FIR filter with a very long response time. Can those be parallelized? Yes, they can, but for that we have to leave the time domain and look at it from the frequency domain.
Assume you can record the response of a system (= filter) to every possible signal; then "playing back" the appropriate response for a given signal gives you the desired output. In the frequency domain, a full characterization would be a "recording" of the system's response to a broadband signal covering all frequencies. In the time domain, that signal is just a simple impulse. That's where the terms FIR and IIR get their middle I (impulse) from.
Applying the impulse response of the system to an arbitrary signal by means of a convolution gives you the system's response to that signal. Calculating a convolution in the time domain is the same as multiplying the Fourier transform of the signal with the Fourier transform of the impulse response and transforming the result back, i.e.
s * r = F^-1(F(s) · F(r))
And Fourier transforms are one of the things that parallelize well and that GPUs are really quite good at.
Now there are GLSL based Fourier transform codes, but normally these are written in OpenCL or CUDA to run on GPUs.
Anyway, here's the recipe for you:
Determine the cutoff k for which your IIR becomes indistinguishable from an FIR.
Determine the Fourier transform of the impulse response (= complex spectral response, CSR).
Fourier transform the signal (= image), multiply with the CSR, and transform back.
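A rough NumPy sketch of that recipe (in Python rather than GLSL, just to show the numerics; the 1e-8 truncation threshold and the sample values are arbitrary choices): the impulse response of out[k] = a*out[k-1] + b*in[k] is b*a^k, so truncating it and applying it by FFT-based convolution reproduces the serial IIR to within the truncation error.
import numpy as np
a, b = 0.9, 0.1
signal = np.random.default_rng(0).normal(size=4096)  # stand-in for one image row
# Reference: the serial IIR from the question.
iir = np.empty_like(signal)
iir[0] = b * signal[0]
for k in range(1, len(signal)):
    iir[k] = a * iir[k - 1] + b * signal[k]
# Impulse response of the recurrence is b * a**k; truncate once it is negligible.
cutoff = int(np.ceil(np.log(1e-8) / np.log(abs(a))))
fir = b * a**np.arange(cutoff)
# Apply the truncated response by zero-padded, FFT-based convolution.
n = len(signal) + cutoff - 1
out = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(fir, n), n)[:len(signal)]
print(np.max(np.abs(out - iir)))  # tiny: the truncated FIR matches the IIR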

Complex interpolation on an FPGA

I have a problem in that I need to implement an algorithm on an FPGA that requires a large array of data that is too large to fit into block or distributed memory. The array contains complex fixed-point values, and it turns out that I can do a good job by reducing the total number of stored values through decimation and then linearly interpolating the interim values on demand.
Though I have DSP blocks (and so fixed-point hardware multipliers) which could be used trivially for real and imaginary part interpolation, I actually want to do the interpolation on the amplitude and angle (of the polar form of the complex number) and then convert the result to real-imaginary form. The data can be stored in polar form if it improves things.
I think my question boils down to this: How should I quickly convert between polar complex numbers and real-imaginary complex numbers (and back again) on an FPGA (noting availability of DSP hardware)? The solution need not be exact, just close, but be speed optimised. Alternatively, better strategies are gladly received!
edit: I know about CORDIC techniques, so this would be how I would do it in the absence of a better idea. Are there refinements specific to this problem I could invoke?
Another edit: Following from #mbschenkel's question, and some more thinking on my part, I wanted to know if there were any known tricks specific to the problem of polar interpolation.
In my case, the dominant variation between samples is a phase rotation, with a slowly varying amplitude. Since the sampling grid is known ahead of time and is regular, one trick could be to precompute some complex interpolation factors. So, for two complex values a and b, if we wish to find (N-1) intermediate equally spaced values, we can precompute the factor
scale = (abs(b)/abs(a))**(1/N) * exp(1j*(angle(b) - angle(a))/N)
and then find each intermediate value iteratively as val[n] = scale * val[n-1] where val[0] = a.
This works well for me as I need the samples in order and I compute them all. For small variations in amplitude (i.e. abs(b)/abs(a) ~= 1) and 0 < n < N, (abs(b)/abs(a))**(n/N) is approximately linear (though linear is not necessarily better).
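A quick NumPy check of that trick (toy values, not FPGA code): precompute the single complex factor, multiply iteratively, and after N steps you land back exactly on b.
import numpy as np
def polar_interp(a, b, N):
    # Yields a, N-1 intermediate samples, then b: amplitude ratio and phase stepped evenly.
    scale = (abs(b) / abs(a))**(1.0 / N) * np.exp(1j * (np.angle(b) - np.angle(a)) / N)
    val = a
    for _ in range(N + 1):
        yield val
        val = val * scale  # one complex multiply per output sample
a, b, N = 1.0 + 0.2j, -0.5 + 0.9j, 8
vals = np.array(list(polar_interp(a, b, N)))
print(np.allclose(vals[0], a), np.allclose(vals[-1], b))  # True True
print(np.abs(vals))  # amplitudes step geometrically from |a| to |b|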
The above is all very good, but still results in a complex multiplication. Are there other options for approximating this? I'm interested in resource and speed constraints, not accuracy. I know I can do the rotation with CORDIC, but still need a pair of multiplications for the scaling, so I'm adding lots of complexity and resource usage for potentially limited results. I don't really have a feel for the convergence of CORDIC, so perhaps I just truncate early, or use lots of resources to converge quickly.

What is the computational complexity of the EM algorithm?

In general, and more specifically for Bernoulli mixture model (aka Latent Class Analysis).
EM is pretty much the same as Lloyd's variant of k-means. So each iteration takes O(n*k) distance computations. They are more complex distance computations, but in the end they are some variant of the Mahalanobis distance.
However, k-means has quite nice termination behavior. There are only k^n possible cluster assignments. If in each step you choose a better one, it will have to terminate at the latest after trying out all k^n. But in reality, it usually terminates after at most a few dozen steps.
For EM, this is not as easy. Objects are not assigned to a single cluster, but, as in fuzzy c-means, are assigned to all clusters with relative weights. And that's when you lose this termination guarantee.
So without any stopping threshold, EM would infinitely optimize the cluster assignments, up to an infinite precision (assuming you would implement it with infinite precision).
As such, the theoretical runtime of EM is: infinite.
Any threshold (even if it's just hardware floating-point precision) will make it finish earlier. But it will be hard to get a theoretical limit here other than O(n*k*i), where i is the number of iterations (which could be infinite, but which you can also set to 0 if you don't want to do a single iteration).
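To make the per-iteration cost concrete, here is a rough sketch of one EM step for a Bernoulli mixture (synthetic binary data; a fixed iteration cap stands in for the stopping rule): both the E-step and the M-step do O(n*k*d) work per iteration.
import numpy as np
rng = np.random.default_rng(0)
n, d, k = 1000, 20, 3                          # points, binary features, components
X = rng.integers(0, 2, size=(n, d))            # toy binary data
pi = np.full(k, 1.0 / k)                       # mixture weights
theta = rng.uniform(0.25, 0.75, size=(k, d))   # per-component Bernoulli parameters
def em_step(X, pi, theta):
    # E-step: responsibilities of each component for each point, O(n*k*d).
    log_p = (X[:, None, :] * np.log(theta) + (1 - X[:, None, :]) * np.log(1 - theta)).sum(-1)
    log_p += np.log(pi)
    resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and parameters, again O(n*k*d).
    nk = resp.sum(axis=0)
    return nk / n, (resp.T @ X) / nk[:, None]
for _ in range(25):                            # fixed iteration cap as the stopping rule
    pi, theta = em_step(X, pi, theta)
print(pi)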
Since the EM algorithm is by nature iterative, you must decide on a termination criterion. If you fix an upper bound on the number of steps, runtime analysis is obviously easy. For other termination criteria (like convergence up to a constant difference), the situation must be analyzed specifically.
Long story short, the description "EM" does not include a termination criterion, so the question can't be answered as such.
This is what I think:
If we presume that the EM algorithm uses linear algebra, which it does, then its complexity should be O(m*n^3), where m is the number of iterations and n is the number of parameters.
The number of iterations will be influenced by the quality of the starting values. Good starting values mean a small m.
Not-so-good starting values mean a larger m, or even failure because of convergence to a local solution. Local solutions can exist because EM is used on likelihood functions which are nonlinear.
Good starting values mean that you start in the convex zone of attraction that surrounds the locus of the globally optimal solution.

Resources