Finding out distance between router and receiver? - wifi

A general question: is it possible to retrieve information about how far away a device, e.g. a computer, is from a wifi router? For instance, I want to get data on my computer about whether I'm 10 meters away from my home wifi spot or 2 meters.
Any idea if that is even possible?
Edit: How about bluetooth? Is it possible to get information about how far away bluetooth-connected devices are from one another?

I would recommend a measuring line or just good-old-fashioned guesstimating.
There is no "simple" way to do it (complex ways may involve building "accurate" signal maps ahead of time, or trying to fit a better equation, which is still subject to a number of the limitations of the naive rule), and the "1/r^2" rule of thumb is just that -- a general rule of thumb. On the other hand, perhaps there is some existing software that will show you your received signal strength and make the task feel accomplished :-)
You will find useful links if you google "RSS signal distance". This kind of task seems to be quite a common topic in academia for small wireless devices ("motes") as well, and there have been some interesting approaches to the problem, such as fitting secondary low-frequency acoustic sensors.
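To make the naive rule concrete, here is a minimal sketch of the usual log-distance path-loss form of the "1/r^2" idea; the reference power at 1 m and the path-loss exponent are assumptions that would need calibrating for a real router and room:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.0):
    """Naive log-distance path-loss estimate.

    tx_power_dbm: assumed RSSI at 1 m from the router (device-specific,
    needs calibration). path_loss_exponent: ~2 in free space, higher indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# e.g. an RSSI of -60 dBm with the assumed defaults gives roughly 10 m
print(round(rssi_to_distance(-60.0), 1))
```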

You can query the signal strength, which is some kind of indication of distance, obstructions and a few other factors all rolled into one measure. With plain wifi, though, getting the distance directly isn't possible.

Try measuring the response time of the router to pings, with the data rate set to a constant to avoid that affecting the response time. Take lots of samples and remove outliers to reduce errors, but you will still have a substantial quantization error. Subtract the latency of the router and computer, divide by 6, then multiply by the speed of light, and hopefully you will have the distance to a resolution of a few metres.
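A rough sketch of that measurement loop, assuming Linux-style ping output, a hypothetical router address, an assumed fixed processing latency, and a simple symmetric model where the one-way time is half the corrected round trip; note that standard ping only reports milliseconds, so in practice the quantization error mentioned above dominates:

```python
import re
import statistics
import subprocess

ROUTER_IP = "192.168.1.1"        # hypothetical router address
SPEED_OF_LIGHT = 299_792_458.0   # m/s
DEVICE_LATENCY_S = 200e-6        # assumed router + NIC processing time; must be calibrated

def sample_rtts(n=200):
    """Collect round-trip times by shelling out to ping (Linux-style output assumed)."""
    rtts = []
    for _ in range(n):
        out = subprocess.run(["ping", "-c", "1", ROUTER_IP],
                             capture_output=True, text=True).stdout
        m = re.search(r"time=([\d.]+) ms", out)
        if m:
            rtts.append(float(m.group(1)) / 1000.0)  # seconds
    return rtts

rtts = sorted(sample_rtts())
k = max(len(rtts) // 10, 1)
trimmed = rtts[k:-k]                                  # drop fastest/slowest 10% as outliers
one_way_time = (statistics.median(trimmed) - DEVICE_LATENCY_S) / 2  # symmetric round trip assumed
print(f"rough distance: {one_way_time * SPEED_OF_LIGHT:.1f} m")
```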

What is the best strategy for determining a GPS tracked user's location, once they have stopped moving for a reasonable period?

What is the best strategy for determining a user's location from a series of GPS fixes, once they are considered not to be moving?
When tracking a user, if they stop moving there will subsequently be a sequence of fixes in roughly the same location.
If possible I would like to use not just the last fix, but also take previous fixes into consideration so as to calculate a more accurate position for them.
Factors that I would have thought need to be considered:
The best way to determine that a user is stationary (from experience, the speed from the GPS fixes is not sufficiently reliable)
Each fix has an accuracy, how can this be factored in?
Are there well established algorithms/libraries that could be used?
Any suggestions greatly appreciated
What's wrong with taking the average?
If you want to take uncertainty into account, use a weighted average. Or a trimmed one, discarding those measurements that deviate most.
But it is a known fact that other factors, such as reflections off buildings, can have a much larger effect on the accuracy.
Nevertheless, this is not so much a programming question, but one that needs GPS expertise. You are better off reading expert literature than asking random internet users for their opinion.
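That said, here is a minimal sketch of the weighted-average suggestion above, assuming each fix reports an accuracy value that can be treated as a standard deviation in metres (hence weights of 1/accuracy^2) and that the fixes are close enough together for direct averaging of latitude and longitude to be safe:

```python
def weighted_position(fixes):
    """fixes: list of (lat, lon, accuracy_m) tuples recorded while stationary.
    Each fix is weighted by 1/accuracy^2, treating accuracy as a standard deviation."""
    weights = [1.0 / (acc * acc) for _, _, acc in fixes]
    total = sum(weights)
    lat = sum(w * f[0] for w, f in zip(weights, fixes)) / total
    lon = sum(w * f[1] for w, f in zip(weights, fixes)) / total
    return lat, lon

# e.g. three fixes at roughly the same spot; the 5 m fix dominates the 30 m one
fixes = [(51.50135, -0.14189, 5.0), (51.50141, -0.14170, 12.0), (51.50088, -0.14251, 30.0)]
print(weighted_position(fixes))
```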

Separation of instruments' audio from a single-channel non-MIDI musical file

My friend Prasad Raghavendra and I were trying to experiment with machine learning on audio.
We were doing it to learn and to explore interesting possibilities at any upcoming get-togethers.
I decided to see whether deep learning, or any machine learning, could be fed audio clips rated by humans (for evaluation).
To our dismay, we found that the problem had to be split up to accommodate the dimensionality of the input.
So, we decided to discard the vocals and assess the accompaniments, with the assumption that vocals and instruments are always correlated.
We tried to look for an mp3/wav-to-MIDI converter. Unfortunately, the ones on SourceForge and GitHub only handle single instruments, and the other options are paid (Ableton Live, Fruity Loops etc.). We decided to take this as a sub-problem.
We thought of FFTs, band-pass filters and a moving window to handle this.
But we do not understand how to go about splitting the instruments when chords are played and there are 5-6 instruments in the file.
1. What are the algorithms that I can look for?
2. My friend knows how to play keyboard, so I will be able to get MIDI data. But are there any datasets meant for this?
3. How many instruments can these algorithms detect?
4. How do we split the audio? We do not have multiple recordings or the mixing matrix.
We were also thinking about finding the patterns of the accompaniments and using those accompaniments in real time while singing along. I guess we will be able to think about that once we get answers to 1, 2, 3 and 4. (We are thinking about both chord progressions and Markovian dynamics.)
Thanks for all help!
P.S.: We also tried FFT and we are able to see some harmonics. Is it due to the sinc() in the FFT when a rectangular wave is the input in the time domain? Can that be used to determine timbre?
We were able to formulate the problem roughly, but we are still finding it difficult to formulate it precisely. If we use the frequency domain for a certain frequency, the instruments are indistinguishable: a trombone playing at 440 Hz and a guitar playing at 440 Hz have the same frequency and differ only in timbre, and we still do not know how to determine timbre. We decided to go by the time domain, considering notes. If a note exceeds a certain octave, we would use that as a separate dimension: +1 for the next octave, 0 for the current octave and -1 for the previous octave.
If notes are represented by letters such as 'A', 'B', 'C' etc, then the problem reduces to mixing matrices.
O = MI during training.
M is the mixing matrix that has to be found using the known output O and the input I from the MIDI file.
During prediction though, M must be replaced by a probability matrix P which would be generated using previous M matrices.
The problem reduces to I_predicted = P^-1 O. The error would then be reduced to the LMSE of I. We can use a DNN to adjust P using back-propagation.
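A small numpy sketch of that formulation, with stand-in data and assumed shapes (notes x frames for I, features x frames for O); the pseudo-inverse plays the role of the training and prediction steps described above:

```python
import numpy as np

# Assumed shapes: I is (num_notes, num_frames) note activations from the MIDI file,
# O is (num_features, num_frames) observed audio features for the same frames.
rng = np.random.default_rng(0)
I = rng.random((12, 500))          # stand-in MIDI note activations
M_true = rng.random((32, 12))      # unknown mixing matrix
O = M_true @ I                     # observed output under the O = M I model

# Training: recover M by least squares, M = O I+ (pseudo-inverse of I)
M_est = O @ np.linalg.pinv(I)
print("mixing-matrix error:", np.abs(M_est - M_true).max())

# Prediction: given a (possibly learned) matrix P, invert it to recover note activations
P = M_est
I_pred = np.linalg.pinv(P) @ O
print("reconstruction LMSE:", np.mean((I_pred - I) ** 2))
```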
But in this approach we assume that the notes 'A', 'B', 'C' etc. are known. How do we detect them instantaneously, or over a small duration like 0.1 seconds? Template matching may not work because of the harmonics. Any suggestions would be much appreciated.
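For the note-detection step, a very rough sketch that names the strongest FFT peak in a 0.1 s window; it assumes a roughly monophonic signal, and, as the answers below point out, it will mislabel notes whenever a harmonic is louder than the fundamental:

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def dominant_note(frame, sample_rate):
    """Name the strongest spectral peak in one ~0.1 s frame (Hann-windowed FFT)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
    midi = int(round(69 + 12 * np.log2(peak_hz / 440.0)))
    return NOTE_NAMES[midi % 12], midi // 12 - 1     # (note name, octave)

# e.g. a synthetic 440 Hz tone in a 0.1 s window should come out as ('A', 4)
sr = 44100
t = np.arange(int(0.1 * sr)) / sr
print(dominant_note(np.sin(2 * np.pi * 440.0 * t), sr))
```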
Splitting out the different parts is a machine learning problem all of its own. Unfortunately, you can't look at this problem in audio land only. You must consider the music.
You need to train something to understand musical patterns and progressions in the context of the type of music you give it. It needs to understand what the different instruments sound like, both mixed and not mixed. It needs to understand how these instruments are often played together, if it's going to have any chance at all at separating what's going on.
This is a very, very difficult problem.
This is a very hard problem, mainly because converting audio to pitch isn't simple: Nyquist folding aliases harmonics above 22 kHz back down, and other sources such as saturators/distortion and other analogue equipment introduce additional harmonics.
The fundamental isn't always the loudest harmonic, which is why your plan will not work.
The hardest thing to measure would be a distorted guitar. The harmonics some pedals/plugins can produce are crazy.

Recommended local search optimization algorithm for control domain

Background: I am trying to find a list of floating-point parameters for a low-level controller that will keep a robot balanced while it is walking.
Question: Can anybody recommend me any local search algorithms that will perform well for the domain I just described? The main criteria for me is the speed of convergence to the right solution.
Any help will be greatly appreciated!
P.S. Also, I conducted some research and found out that "Evolutionary Strategy" algorithms are a good fit for continuous state spaces. However, I am not entirely sure whether they will fit my particular problem well.
More info: I am trying to optimize 8 parameters (although it is possible for me to reduce the number of parameters to 4). I do have a simulator, and the criterion for me is speed in terms of the number of trials, because simulation resets are costly (they take 10-15 seconds on average).
One of the best local search algorithms for a low number of dimensions (up to about 10 or so) is the Nelder-Mead simplex method. By the way, it is used as the default optimizer in MATLAB's fminsearch function. I personally used this method for finding the parameters of a textbook 2nd or 3rd degree dynamic system (though a very simple one).
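A minimal SciPy sketch of Nelder-Mead on an 8-parameter problem; simulate_walk is a hypothetical stand-in for one simulator trial returning a cost (here just a quadratic), and the evaluation budget is capped because each real trial is expensive:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_walk(params):
    """Hypothetical wrapper: run one walking trial in your simulator with the
    8 controller parameters and return a cost (e.g. negative distance walked
    before falling). A cheap quadratic stands in for it here."""
    target = np.array([0.3, -1.2, 0.8, 0.0, 2.1, -0.5, 1.0, 0.4])
    return float(np.sum((np.asarray(params) - target) ** 2))

x0 = np.zeros(8)   # initial controller parameters
result = minimize(simulate_walk, x0, method="Nelder-Mead",
                  options={"maxfev": 400, "xatol": 1e-3, "fatol": 1e-3})
print(result.x, result.fun)
```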
Another option is the already-mentioned evolutionary strategies. Currently the best one is the Covariance Matrix Adaptation ES, or CMA-ES. There are variations of this algorithm, e.g. BI-POP CMA-ES, that are probably better than the vanilla version.
You just have to try what works best for you.
In addition to evolutionary algorithms, I recommend you also look into reinforcement learning.
The right method depends a lot on the details of your problem. How many parameters? Do you have a simulator? Do you work in simulation only, or also with real hardware? Is speed measured in the number of trials, or in CPU time?

Classifying human activities from accelerometer-data with neural network

I've been tasked with carrying out a benchmark of an existing classifier for my company. The biggest problem currently is differentiating between different types of transportation, like recognizing whether I'm currently on a train, driving a car or bicycling, so this is the main focus.
I've been reading a lot about LSTMs, http://en.wikipedia.org/wiki/Long_short_term_memory , and their recent success in handwriting and speech recognition, where the time between significant events can be quite long.
So my first thought about the train/bus problem is that there probably isn't as clear and short a cycle as there is when walking or running, for instance, so long-term memory is probably crucial.
Has anyone tried anything similar with decent results?
Or are there other techniques that could potentially solve this problem better?
I've worked on mode of transportation detection using smartphone accelerometers. The main result I've found is that almost any classifier will do; the key problem is then the set of features. (This is no different from many other machine learning problems.) My feature set ended up containing time-domain and frequency-domain values, both taken from time-series sliding-window segmentation.
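A small sketch of that kind of sliding-window feature extraction; the window length, overlap and the particular features are illustrative assumptions rather than the exact set used above:

```python
import numpy as np

def window_features(accel, sample_rate=50, window_s=5.0, overlap=0.5):
    """Extract simple time- and frequency-domain features from a 3-axis
    accelerometer trace (shape (n_samples, 3)) using a sliding window."""
    win = int(window_s * sample_rate)
    step = int(win * (1 - overlap))
    magnitude = np.linalg.norm(accel, axis=1)   # orientation-independent signal
    feats = []
    for start in range(0, len(magnitude) - win + 1, step):
        seg = magnitude[start:start + win]
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        freqs = np.fft.rfftfreq(win, d=1.0 / sample_rate)
        feats.append([
            seg.mean(), seg.std(), seg.min(), seg.max(),   # time domain
            freqs[np.argmax(spectrum)],                    # dominant frequency
            float(np.sum(spectrum ** 2)),                  # spectral energy
        ])
    return np.array(feats)

# e.g. feed these feature rows to any off-the-shelf classifier
features = window_features(np.random.randn(3000, 3))
print(features.shape)
```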
Another problem is that the accelerometer can be placed anywhere. On the body, it can be anywhere and in any orientation. If the user is driving, is the phone in a pocket, in a bag, on a car seat, attached to a suction-cup window mount, etc.?
If you want to avoid these problems, use GPS instead of the accelerometer. You can make relatively accurate classifications with that sensor, but the cost is the battery usage.

Determining location with Wi Fi

Is there a way that I can determine a location of a laptop/phone connected to my router via a wireless network access point? (I do not want to use GPS... only the access point).
No. But let's examine why.
If you can get the metrics from the router, which may or may not be possible, you can get the signal strength. That will give you a circle. But this is limited, as you also need to know how strong the WiFi card is to determine a rough distance. And you probably already know the rough range your router works over, i.e. the maximum circle, so this is not very useful.
If you have more than one access point, however, you can use triangulation. With two, the information is limited; three or more will give you a more accurate distance and allow you to extrapolate the strength of the signal.
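If you do manage to turn signal strengths from three or more APs into rough distance estimates, the position fix itself is a small least-squares problem (strictly trilateration rather than triangulation); a sketch, assuming the AP coordinates are known and the distance estimates already exist:

```python
import numpy as np

def trilaterate(aps, dists):
    """aps: (n, 2) known AP positions; dists: (n,) estimated distances to each AP.
    Linearize against the first AP and solve the resulting system by least squares."""
    aps = np.asarray(aps, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x1, y1 = aps[0]
    A = 2 * (aps[0] - aps[1:])                    # rows: [2(x1-xi), 2(y1-yi)]
    b = (dists[1:] ** 2 - dists[0] ** 2
         - aps[1:, 0] ** 2 + x1 ** 2
         - aps[1:, 1] ** 2 + y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# e.g. three APs at known spots; distances derived (noisily) from signal strength
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5]))
```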
Nope. You might be able to estimate its distance away, but even that is not likely if you're inside a building. Various building materials attenuate the signal, so the response is non-linear. If your router has two separate antennas, and you can measure the signal strength from each independently, then you might have a chance of getting a feel for the direction, but I doubt the signal resolution will be high enough to give you any meaningful data.
Yes. However you'll need more than one Access Point and some serious software.
There are a number of solutions available and in development for Location Based Services in Wi-Fi networks. As Gregory mentioned above, a single AP is not enough to do anything but poor range estimation; however, multiple-AP systems do not typically use triangulation to determine the location solution, they use a trained Hidden Markov Model.
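A toy sketch of that idea using hmmlearn's GaussianHMM, where each observation is a vector of RSSI readings from several APs and the hidden states stand in for physical zones; the fingerprints and the walk are fabricated stand-ins, and mapping decoded states back to real zones would still need labeled calibration data:

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

# Assumed setup: each observation is a vector of RSSI readings (dBm) from 3 APs,
# recorded while walking through the building; hidden states ~ physical zones.
rng = np.random.default_rng(1)
zone_means = np.array([[-40, -70, -80],    # stand-in fingerprints for 3 zones
                       [-70, -45, -75],
                       [-80, -72, -42]])
states = np.repeat([0, 1, 2, 1, 0], 200)                  # a walk through the zones
rssi = zone_means[states] + rng.normal(0, 3, (len(states), 3))

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(rssi)                   # unsupervised: decoded states must later be
decoded = model.predict(rssi)     # matched to real zones with labeled calibration data
print(decoded[:10], decoded[-10:])
```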

Resources