Kalman Filter for iOS

I am trying to get smooth RSSI values from Bluetooth Low Energy beacons deployed on the ceiling of my lab. I used a weighted-mean filter and a moving-average filter but couldn't get good results. From various journal papers I learned that a Kalman filter can be used for this purpose, but I couldn't find a proper mathematical formulation to code in Objective-C. Can somebody provide any hint regarding the mathematical equations or a Kalman filter implementation? Thanks a lot.

A one-dimensional case like this means that all of the matrices are actually just scalar values. You need to know two things:
R, the measurement variance. You can measure this directly by recording a series of RSSI values (in a fixed location) exactly as you normally would and then computing their variance. You can do this easily with Excel or Python, or even write your own code from scratch.
Q, the process variance. This is how much you expect RSSI to actually change in the same amount of time (between measurements). You can also measure this, or you can reason about it.
If you look at the Kalman filter equations you'll notice that P does not depend on your actual measurements, only on the two values above. As a result, since those are constant, P will converge to a fixed value. And since K (the Kalman gain) depends only on those values, it will also converge. For an application like yours, it is usually sufficient to find the steady-state K and use it all the time.
At that point the filter is just a complicated (but optimal in a least-squares sense) way of deriving the weight of a simple moving-average-style smoother.
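To make this concrete, here is a minimal sketch in Python (the question asks about Objective-C, but the scalar arithmetic translates line for line). The R and Q values you pass in are the measured variances described above; everything else is just the standard scalar Kalman recursion.

```python
def steady_state_gain(R, Q, iters=200):
    """Iterate the scalar Riccati recursion for a random-walk model
    (state transition = 1, measurement model = 1) until P converges,
    and return the converged Kalman gain K."""
    P = R  # any positive starting value works
    K = 0.0
    for _ in range(iters):
        P_pred = P + Q              # predict: variance grows by Q
        K = P_pred / (P_pred + R)   # Kalman gain
        P = (1.0 - K) * P_pred      # update: variance shrinks
    return K

def smooth_rssi(readings, R, Q):
    """Filter a sequence of raw RSSI readings with the steady-state gain."""
    K = steady_state_gain(R, Q)
    estimate = readings[0]
    out = []
    for z in readings:
        estimate = estimate + K * (z - estimate)  # constant-gain update
        out.append(estimate)
    return out
```

Note that with a constant gain K, the update `estimate += K * (z - estimate)` is exactly an exponentially weighted moving average, which is why the steady-state filter behaves like a (well-tuned) moving-average smoother.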

If you are looking for a Swift implementation of the Kalman filter, it is worth looking at this framework. It is a generic implementation of the conventional Kalman filter algorithm, and it also provides a Matrix struct and all the matrix operations used in the Kalman filter.

Related

Kalman Filtering need

I have a swarm robotics project. The localization system uses ultrasonic and infrared transmitters/receivers, with an accuracy of ±7 cm. I was able to implement a follow-the-leader algorithm. However, I was wondering: why do I still have to use a Kalman filter if the sensors' raw data are good? What will it improve? Won't it just delay the coordinates being sent to the robots (the coordinates won't be updated instantly, as it takes time to do the Kalman filter math while each robot sends its coordinates four times a second)?
Sensor data is NEVER the truth, no matter how good the sensors are. It will always be perturbed by some noise and limited by finite precision. So sensor data is nothing but an observation that you make, and what you want to do is estimate the true state based on these observations. In mathematical terms, you want to estimate a likelihood or joint probability based on those measurements. You can do that with different tools depending on the context. One such tool is the Kalman filter, which in the simplest case is just a moving average, but which is usually combined with dynamic models and some assumptions on error/state distributions in order to be useful. The dynamic models describe the state propagation (e.g. motion given previous states) and the observation (measurements), and in robotics/SLAM one often assumes the error is Gaussian. A very important and useful product of such filters is an estimate of the uncertainty in terms of covariances.
Now, what are the potential improvements? Basically, you make sure that your sensor measurements are coherent with a mathematical model and that they are "smooth". For example, if you want to estimate the position of a moving vehicle, the kinematic equations will tell you where you expect the vehicle to be, and you have an associated covariance. Your measurements also come with a covariance. So, if you get measurements that have low certainty, you will end up trusting the mathematical model instead of trusting the measurements, and vice-versa.
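The "trust whichever source is more certain" behaviour falls out of the update equations directly. A scalar sketch (the function name and arguments are illustrative, not from any particular library):

```python
def fuse(model_mean, model_var, meas_mean, meas_var):
    """Minimum-variance fusion of two scalar Gaussian estimates.
    The source with the larger variance (less certainty) gets the
    smaller weight, which is exactly the behaviour described above."""
    K = model_var / (model_var + meas_var)   # weight on the measurement
    mean = model_mean + K * (meas_mean - model_mean)
    var = (1.0 - K) * model_var              # fused estimate is more certain
    return mean, var
```

With equal variances the result is the plain average; as the measurement variance grows, the fused mean slides toward the model prediction.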
Finally, if you are worried about the delay: note that the complexity of a standard extended Kalman filter is roughly O(N^3), where N is the size of the state (in SLAM, dominated by the number of landmarks). So if you really don't have enough computational power, you can reduce the state to just pose and velocity, and the overhead will be negligible.
In general, a Kalman filter helps to improve sensor accuracy by summing (with the right coefficients) the measurement (sensor output) and a prediction of the sensor output. The prediction is the hardest part, because you need to create a model that somehow predicts the sensors' output. And I think in your case it is unnecessary to spend time creating such a model.
Although you are getting accurate data from your sensors, it will not always be consistent. A Kalman filter will not only identify outliers in the measurement data but can also predict the state when a measurement is missing. However, if you are really looking for something with lower computational requirements, you can go for a complementary filter.
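For reference, a complementary filter in its textbook form just blends a fast-but-drifting estimate with a slow-but-stable one using a fixed weight. A sketch using the classic gyro-plus-accelerometer attitude example (the sensor names are illustrative, not the swarm sensors from the question):

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse a fast-but-drifting rate sensor with a slow-but-stable
    absolute sensor. alpha close to 1 trusts the integrated rate
    (high-frequency part); the remainder corrects drift with the
    absolute measurement (low-frequency part)."""
    angle = accel_angles[0]
    out = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * accel_angle
        out.append(angle)
    return out
```

There is no matrix algebra and no covariance bookkeeping, which is why it is so much cheaper than a Kalman filter; the price is that alpha has to be tuned by hand and you get no uncertainty estimate.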

What is the future of filtering methods vs incremental-SFM in visual-SLAM

In the visual SLAM area, there are the well-known EKF/UKF/particle-filter SLAM solutions, like MonoSLAM.
Recently, the field has moved toward local bundle adjustment methods, like LSD-SLAM or ORB-SLAM.
My question is :
Do the filtering approaches still have a future or steady usage? In what applications? What are the pros/cons?
I read these papers but failed to extract a final answer (mostly out of misunderstanding):
Visual SLAM: why filter?
Past, Present, and Future of Simultaneous Localization and Mapping
P.S.: I know the first says local BA is somehow better, and the second rarely mentions filtering, so... that's it... is it the end of the awesome Kalman filter in the visual SLAM area?!
No, the Kalman filter still has its uses. Although "Visual SLAM: Why Filter?" is interesting in that it is the first (to my knowledge) paper to conduct a mathematically sound comparison, you should note that it only compares bundle adjustment to one very specific Kalman filter, which for example includes the points in the filter state, while state-of-the-art EKF-based odometry/SLAM methods seem to indicate that this is not a good idea. Also, you can argue that a recursive Kalman filter is more or less the same as bundle adjustment.
A Kalman filter, despite its computational disadvantage in some cases, has the advantage of easily providing you with uncertainty estimates. Obtaining non-local uncertainties in bundle adjustment is not trivial and adds significant overhead (see for example this paper, which is actually the only uncertainty propagation in bundle adjustment I know of).
Another advantage of Kalman filters is that sensor fusion is straightforward: you more or less just add the parameters to estimate to the state vector. For an example of a state-of-the-art Kalman filter for IMU/vision fusion that is actually being used in many applications, see this paper.
But yes, there is a clear tendency in the SLAM community to move away from Kalman-based methods, except in specific areas (experimental sensors, or large sensor graphs where having global covariances is mandatory, etc.), but the arguments are usually a little weak: people mumble something about better empirical results and then cite "Visual SLAM: Why Filter?". I recommend you read the thesis of that paper's author. Although his theoretical arguments on entropy are convincing, I still think we have to be very cautious when citing that paper, because of the aforementioned particularities of the filter.
No, the second paper does not describe the end of the Kalman filter in visual SLAM. The Kalman filter is a special case of the maximum likelihood estimator for Gaussian noise. I want to direct your attention to page 4, paragraph 3 of the second paper: there, the authors could have stated explicitly that the Kalman filter and MAP are both extensions of maximum likelihood estimation; as written, that insight is only implicit.

Time Series DFT Signals Clustering

I have a number of time-series data sets which I want to transform with the DFT in order to reduce dimensionality. After the transform, I want to cluster the resulting DFT representations using the k-means algorithm.
Since DFT coefficients are complex numbers, how can one cluster them?
You could simply treat the imaginary part as another component in your vectors. In other applications, you will want to ignore it!
But you'll be facing other, more severe challenges.
Data mining, and clustering in particular, is rarely as easy as applying function A (DFT) and function B (k-means) and then you have the result, hooray. Sorry, that is not how exploratory data mining works.
First of all, for many time series, DFT will not be helpful at all. On others, you will first have to do appropriate resampling, or segmentation, or get rid of uninteresting effects such as seasonality. Even if DFT works, it may emphasize artifacts such as the sampling frequency or some interferences.
And then you'll run into one major problem: k-means assumes that all attributes have the same importance, while the DFT is built on the very opposite idea: the first coefficients capture most of the signal, and the later ones only minor deviations from it (which is the very motivation for using it as dimensionality reduction).
So based on this intuition, you maybe should never apply k-means to DFT coefficients at all. At the same time, data mining has repeatedly shown that approaches that are "statistical nonsense" can nevertheless provide useful results... so you can try, but verify your results with care, and avoid being too enthusiastic or optimistic.
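If you do want to try it anyway, the usual first step is the one suggested above: keep only the first few DFT coefficients and split them into real and imaginary parts, so that a Euclidean method like k-means can consume them. A naive sketch (a real implementation would use an FFT such as numpy.fft.fft rather than this O(N^2) loop):

```python
import cmath

def dft_features(series, n_coeffs):
    """Compute the first n_coeffs DFT coefficients of a real-valued
    time series and flatten them into a real vector (real and
    imaginary parts side by side) for use as k-means input."""
    N = len(series)
    feats = []
    for k in range(n_coeffs):
        c = sum(series[t] * cmath.exp(-2j * cmath.pi * k * t / N)
                for t in range(N))
        feats.extend([c.real, c.imag])
    return feats
```

Keep in mind the caveat above: these low-order coefficients carry most of the energy, so feeding them to k-means unweighted bakes in exactly the equal-importance assumption that the DFT violates.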
The FFT is simply a fast algorithm for computing the DFT; it converts each small data set into its DFT coefficients efficiently.

Complex interpolation on an FPGA

I have a problem in that I need to implement an algorithm on an FPGA that requires a large array of data that is too large to fit into block or distributed memory. The array contains complex fixed-point values, and it turns out that I can do a good job by reducing the total number of stored values through decimation and then linearly interpolating the interim values on demand.
Though I have DSP blocks (and so fixed-point hardware multipliers) which could be used trivially for real and imaginary part interpolation, I actually want to do the interpolation on the amplitude and angle (of the polar form of the complex number) and then convert the result to real-imaginary form. The data can be stored in polar form if it improves things.
I think my question boils down to this: How should I quickly convert between polar complex numbers and real-imaginary complex numbers (and back again) on an FPGA (noting availability of DSP hardware)? The solution need not be exact, just close, but be speed optimised. Alternatively, better strategies are gladly received!
edit: I know about CORDIC techniques, so that is how I would do it in the absence of a better idea. Are there refinements specific to this problem that I could invoke?
Another edit: Following on from #mbschenkel's question, and some more thinking on my part, I wanted to know if there are any known tricks specific to the problem of polar interpolation.
In my case, the dominant variation between samples is a phase rotation, with a slowly varying amplitude. Since the sampling grid is known ahead of time and is regular, one trick could be to precompute some complex interpolation factors. So, for two complex values a and b, if we wish to find (N-1) intermediate equally spaced values, we can precompute the factor
scale = (abs(b)/abs(a))**(1/N) * exp(1j*(angle(b)-angle(a))/N)
and then find each intermediate value iteratively as val[n] = scale * val[n-1] where val[0] = a.
This works well for me as I need the samples in order and I compute them all. For small variations in amplitude (i.e. abs(b)/abs(a) ~= 1) and 0 < n < N, (abs(b)/abs(a))**(n/N) is approximately linear (though linear is not necessarily better).
The above is all very good, but it still costs a complex multiplication per sample. Are there other options for approximating this? I'm interested in resource and speed constraints, not accuracy. I know I can do the rotation with CORDIC, but I still need a pair of multiplications for the scaling, so I'm adding a lot of complexity and resource usage for potentially limited results. I don't really have a feel for the convergence of CORDIC, so perhaps I just truncate early, or use lots of resources to converge quickly.
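For what it's worth, the precomputed-factor recurrence above can be sanity-checked in a few lines of Python before committing it to hardware (this models the arithmetic only, not the fixed-point quantization; using phase(b/a) rather than phase(b) - phase(a) is a small variant that always takes the short way around the circle):

```python
import cmath

def polar_interp(a, b, N):
    """Interpolate between complex a and b in polar form: each step
    multiplies by one fixed precomputed complex factor, so only a
    single complex multiply is needed per intermediate sample."""
    scale = (abs(b) / abs(a)) ** (1.0 / N) * cmath.exp(
        1j * cmath.phase(b / a) / N)
    vals = [a]
    for _ in range(N):
        vals.append(vals[-1] * scale)  # val[n] = scale * val[n-1]
    return vals  # vals[0] == a, vals[N] == b (up to rounding)
```

The recurrence interpolates magnitude geometrically and angle linearly, which matches the "dominant phase rotation, slowly varying amplitude" description above.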

What algorithm would you use for clustering based on people attributes?

I'm pretty new in the field of machine learning (even if I find it extremely interesting), and I wanted to start a small project where I'd be able to apply some stuff.
Let's say I have a dataset of persons, where each person has N different attributes (only discrete values, each attribute can be pretty much anything).
I want to find clusters of people who exhibit the same behavior, i.e. who have a similar pattern in their attributes ("look-alikes").
How would you go about this? Any thoughts to get me started?
I was thinking about using PCA: since we can have an arbitrary number of dimensions, it could be useful to reduce them. K-means? I'm not sure in this case. Any ideas on what would be best suited to this situation?
I do know how to code all those algorithms, but I'm truly missing some real world experience to know what to apply in which case.
K-means using the n-dimensional attribute vectors is a reasonable way to get started. You may want to play with your distance metric to see how it affects the results.
The first step for pretty much any clustering algorithm is to find a suitable distance function. Many algorithms, such as DBSCAN, can then be parameterized with this distance function (at least in a decent implementation; some of course only support Euclidean distance...).
So start with considering how to measure object similarity!
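For purely discrete attributes like these, one simple candidate is the matching (Hamming-style) distance: the fraction of attributes on which two people differ. A sketch:

```python
def matching_distance(a, b):
    """Fraction of attributes on which two attribute vectors disagree.
    0 means identical, 1 means no attribute in common. Suitable for
    discrete/categorical attributes where Euclidean distance is
    meaningless."""
    assert len(a) == len(b), "attribute vectors must have equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

A distance like this can be plugged straight into an algorithm such as DBSCAN that accepts an arbitrary metric; for attributes with very different numbers of possible values you would likely want to weight the components, which is exactly the kind of similarity-design decision the answer above is pointing at.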
In my opinion you should also try the expectation-maximization algorithm (also called EM). On the other hand, you must be careful when using PCA, because it may discard the very dimensions that are relevant to clustering.
