Determining location with Wi-Fi geolocation

Is there a way that I can determine the location of a laptop/phone connected to my router via a wireless network access point? (I do not want to use GPS... only the access point.)

No. But let's examine why.
If you can get the metrics from the router, which may or may not be possible, you can read the signal strength. That gives you a circle of possible positions, but it is of limited use: to turn strength into a rough distance you also need to know how strongly the client's Wi-Fi card transmits. And since you probably already know the rough range your router covers, a single strength reading tells you little more than that maximum circle.
If you have more than one access point, however, you can use triangulation. With two, the position is still ambiguous; three or more let you estimate the position much more accurately and cross-check the signal-strength-to-distance conversion.
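To make the triangulation idea concrete, here is a rough sketch of my own (not from this answer): convert RSSI to distance with a log-distance path-loss model, then solve for a 2-D position from three access points by linear least squares. The path-loss constants are assumptions and vary a lot indoors.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.5):
    # Log-distance path loss: rssi = tx_power - 10 * n * log10(d)
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(aps, distances):
    """aps: list of (x, y) access-point positions; distances: range to each.
    Subtracting the first circle equation from the rest linearizes the fit."""
    (x0, y0), d0 = aps[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(aps[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [rssi_to_distance(r) for r in (-55.0, -60.0, -63.0)]
print(trilaterate(aps, dists))            # rough (x, y) estimate in meters
```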

Nope. You might be able to estimate its distance, but even that is unlikely if you're inside a building. Various building materials attenuate the signal, so the response is non-linear. If your router has two separate antennas, and you can measure the signal strength from each independently, then you might have a chance of getting a feel for the direction, but I doubt the signal resolution will be high enough to give you any meaningful data.

Yes. However, you'll need more than one access point and some serious software.
There are a number of solutions, both available and in development, for location-based services in Wi-Fi networks. As Gregory mentioned above, a single AP is only enough for poor range estimation. Multiple APs, however, do not typically use triangulation to determine the location solution; they use a trained Hidden Markov Model.
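As a toy illustration only (my own sketch, not any production system): treat map cells as hidden states, RSSI fingerprints as roughly Gaussian emissions, and run Viterbi to recover the most likely sequence of cells. The fingerprint means and transition matrix below stand in for what training would produce.

```python
import numpy as np

# Per-cell mean RSSI fingerprint for two APs (invented "training" output).
means = np.array([[-40.0, -70.0],
                  [-55.0, -55.0],
                  [-70.0, -40.0]])
trans = np.array([[0.80, 0.15, 0.05],   # walking moves you between neighbors
                  [0.10, 0.80, 0.10],
                  [0.05, 0.15, 0.80]])
sigma = 4.0                             # assumed RSSI noise, dBm

def emission_logp(obs):
    # Log-likelihood of the observed fingerprint under each cell's Gaussian.
    return -np.sum((obs - means) ** 2, axis=1) / (2 * sigma ** 2)

def viterbi(observations):
    logp = np.log(1.0 / len(means)) + emission_logp(observations[0])
    back = []
    for obs in observations[1:]:
        scores = logp[:, None] + np.log(trans)   # prev cell -> next cell
        back.append(np.argmax(scores, axis=0))
        logp = scores.max(axis=0) + emission_logp(obs)
    path = [int(np.argmax(logp))]
    for bp in reversed(back):                    # trace the best path back
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Noisy walk from cell 0 to cell 2:
print(viterbi(np.array([[-42.0, -68.0], [-54.0, -56.0], [-68.0, -42.0]])))
```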

Related

Image analysis technique to determine approximate change in view over a short period of time?

I am working on an open source package for robot owners. I want to do a decent job of detecting when the robot is having movement problems. One problem the robot commonly has is that the back wheel gets "tucked underneath" in a bad way, making it turn very slowly on carpet. I believe that with a combination of accelerometer-value inspection and (I hope) a relatively simple yet robust vision-analysis technique, I will be able to tell when the robot is having this specific problem.
What I need is to be able to analyze two images, separated by about 1/2 second in time, and get a numerical value that tells about how close they are, but in a way that has some intelligence about the objects in the screen instead of just a simple color/hue/etc. analysis. I've heard of an algorithm called optical flow that is used in object and scene tracking, but I'm hoping I don't need something heavyweight.
Is there a machine vision algorithm/function that can analyze two JPEGs, tell whether they belong to the same scene and viewpoint, and also deliver a monotonically increasing numerical value that tells me roughly how different they are? If I could compare that numerical value to the number of milliseconds elapsed, while examining the current accelerometer activity, I believe I could detect when the robot is having the "slow turn of death" problem.
If so, please tell me the basic technique involved, and if you know of a machine vision library that implements it, which one it is.
"but in a way that has some intelligence about the objects in the screen instead of just a simple color/hue/etc. analysis"
What you are suggesting is a complex problem in itself, so forget about "lightweight" solutions; you are probably going to need something like optical flow.
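For example, OpenCV's Farneback dense optical flow can be boiled down to a single "how much did the view change" number by averaging the per-pixel flow magnitude. A minimal sketch (the file names are placeholders and the Farneback parameters are commonly used defaults, not tuned values):

```python
import cv2
import numpy as np

def scene_change_score(path_a, path_b):
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(a, b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel displacement length
    return float(magnitude.mean())             # larger = more apparent motion

# Two frames ~0.5 s apart; compare the score against the commanded motion.
print(scene_change_score("frame_t0.jpg", "frame_t1.jpg"))
```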
Other options I would recommend looking into are:
Vanishing point detection, and its variation from image to image. This fits your problem domain quite well (Wikipedia).
Disparity maps: related to optical flow, and used for stereoscopic vision, but I think you can use them for the kind of application you are looking at. Take a look at this.

Graph-SLAM: will it still run with only odometry information, and what is the outcome?

This is a somewhat difficult question.
I know about EKF-SLAM, which runs as an online filter, using the state from the previous time step to estimate the next state.
I also know about Graph-SLAM, which does full SLAM: it uses all past states, represents them as a collection of nodes and edges, and optimizes that structure of nodes and edges by minimizing error to produce better state estimates.
Now,
I know there is no point in running EKF-SLAM with odometry information only, since what the EKF does is estimate the next state by weighing odometry information against landmark observations; both are needed.
My question is: is it possible to run Graph-SLAM with only odometry information and no landmark observations whatsoever?
It seems like Graph-SLAM could gather all odometry states up to the current one, convert them to nodes and edges just as it does when both odometry and observations are provided, and then optimize the structure of nodes and edges.
Is that possible?
And what would the output mean? "Optimized" odometry?
Any thoughts?
Thank you in advance. :)
Preface: I am not 100% certain; these are just my assumptions/opinions.
The point of SLAM is Simultaneous Localization And Mapping. In order to do any mapping you need observations of landmarks, or of some other feature; otherwise you are only performing localization.
Think of it this way: if I dropped you into a building you've never been in before and said "create a map for me, BUT you may ONLY count your footsteps" (no other senses: eyes closed, ears plugged, etc.), you would quickly realize this is a nearly impossible task. If you use only odometry, something like a Kalman filter or an EKF should work nicely, but again, that is only localization, not mapping.
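To make the degenerate case concrete, here is a minimal 1-D pose-graph sketch of my own (not from any SLAM library): with only odometry edges plus a prior anchoring the first pose, the dead-reckoned trajectory already has zero residual, so the optimizer has nothing to correct and the "optimized" output is just the odometry chain.

```python
# Minimal odometry-only pose graph: nodes x0..x3, edges u_i = x_{i+1} - x_i.
# All numbers are made up for illustration.
import numpy as np

odometry = [1.0, 1.2, 0.8]              # hypothetical step measurements
n = len(odometry) + 1

x = np.zeros(n)                         # dead reckoning as the initial guess
for i, u in enumerate(odometry):
    x[i + 1] = x[i] + u

# One Gauss-Newton step: solve H dx = -b around the current guess.
H = np.zeros((n, n))
b = np.zeros(n)
H[0, 0] = 1.0                           # prior fixing x0; otherwise the graph floats
for i, u in enumerate(odometry):
    r = (x[i + 1] - x[i]) - u           # residual of this odometry edge
    J = np.zeros(n)
    J[i], J[i + 1] = -1.0, 1.0          # Jacobian of the edge w.r.t. its two nodes
    H += np.outer(J, J)
    b += J * r

dx = np.linalg.solve(H, -b)
print(dx)                               # all zeros: nothing to correct
```

So yes, it runs, but the answer it returns is exactly the dead-reckoned trajectory; only a loop closure or landmark observation would give the optimizer a non-zero residual to distribute.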
hope that helps

Core Location constants meaning

I am very confused about the meaning of the Core Location constants. For example, for my app I would like to get accuracy readings within 100 meters, and it looks like kCLLocationAccuracyHundredMeters would be the appropriate choice. However, with this setting I often get points with accuracy worse than ±1000 meters, especially when I disable Wi-Fi. Are these Core Location constants only relevant when Wi-Fi is enabled, or does it sound like I am doing something wrong? It seems weird that Apple wants developers not to have to worry about the underlying hardware (i.e., whether it is using GPS, Wi-Fi, or cell towers) but to have the accuracy depend entirely on Wi-Fi being enabled.
Thanks for your help.
GPS readings depend on a LOT more than just your accuracy setting. For example, if you are not using Wi-Fi and you take a reading indoors, I have seen GPS be several thousand meters off consistently until you go outside. If you are planning on making your app accurate indoors, I would not plan on relying on typical GPS. If your major use case is outdoors, then GPS is VERY accurate.
The accuracy constants are how you "request" a specific accuracy. What you actually get depends on what is available. Core Location will try to give you your requested accuracy (or better), but it will hand you whatever it has, even at worse accuracy, while it works toward a better fix.
If you wait long enough (and ignore the locations that aren't good enough), you will eventually get a more accurate location, unless it can't be done (such as when GPS satellites aren't visible or there is no Wi-Fi).
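The waiting-and-filtering logic looks roughly like the sketch below. This is generic Python illustrating the pattern, not the Core Location API; get_next_location and the fix.accuracy attribute are hypothetical stand-ins for your platform's location callback.

```python
# Keep reading fixes, ignoring ones worse than the desired accuracy,
# and fall back to the best fix seen if we hit a timeout.
import time

DESIRED_ACCURACY_M = 100.0   # analogous to kCLLocationAccuracyHundredMeters
TIMEOUT_S = 30.0

def wait_for_accurate_fix(get_next_location):
    deadline = time.monotonic() + TIMEOUT_S
    best = None
    while time.monotonic() < deadline:
        fix = get_next_location()             # blocks until the next update
        if best is None or fix.accuracy < best.accuracy:
            best = fix                        # remember the best fix so far
        if fix.accuracy <= DESIRED_ACCURACY_M:
            return fix                        # good enough: stop early
    return best                               # timed out: return best effort
```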

Finding out distance between router and receiver?

A general question: is it possible to retrieve information about how far away a device, e.g. a computer, is from a Wi-Fi router? For instance, I want my computer to tell whether I'm 10 meters away from my home Wi-Fi spot or 2 meters.
Any idea if that is even possible?
Edit: How about Bluetooth? Is it possible to get information about how far away Bluetooth-connected devices are from one another?
I would recommend a measuring line or just good-old-fashioned guesstimating.
There is no "simple" way to do it (complex ways may involve building "accurate" signal maps ahead of time, or trying to fit a better equation, which is still subject to a number of the limitations of the naive rule), and the "1/r^2" rule of thumb is just that: a general rule of thumb. On the other hand, perhaps there is some existing software that will show you your RSS strength and make the task feel accomplished :-)
You will find useful links if you google for "RSS signal distance". This kind of task seems to be quite a common topic in academia with respect to small wireless devices ("motes") as well, and there have been some interesting approaches to the problem, such as fitting secondary low-frequency acoustic sensors.
You can query the signal strength, which is some kind of indication of distance, obstructions, and a few other factors all rolled into one measure. With plain Wi-Fi, though, getting distance directly isn't possible.
Try measuring the response time of the router to pings, with the data rate set constant to avoid it affecting the response time. Take lots of samples and remove outliers to reduce errors, but you will still have a substantial quantization error. Subtract the latency of the router and computer, divide by two (round trip to one way), then multiply by the speed of light, and hopefully you will have the distance to a resolution of a few metres.
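For what it's worth, here is a hedged sketch of that experiment. User-space ping timing is dominated by OS scheduling and router processing, so treat it as a curiosity rather than a rangefinder; the fixed router latency being subtracted is an assumption, and the output parsing is deliberately simplistic (assumes a Unix-like ping).

```python
import re
import statistics
import subprocess

C = 3.0e8                       # speed of light, m/s
ROUTER_LATENCY_S = 200e-6       # hypothetical processing latency to subtract

def ping_rtts(host, count=100):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # Parse "time=1.23" entries (milliseconds) into seconds.
    return [float(ms) / 1000.0 for ms in re.findall(r"time=([\d.]+)", out)]

def estimate_distance_m(host):
    rtts = sorted(ping_rtts(host))
    fastest = rtts[: max(1, len(rtts) // 10)]   # fastest 10% as outlier removal
    one_way_s = (statistics.mean(fastest) - ROUTER_LATENCY_S) / 2.0
    return max(0.0, one_way_s) * C

print(estimate_distance_m("192.168.1.1"))       # assumed router address
```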

Game network physics collision

How do you simulate two client-controlled vehicles colliding (sensibly) in a typical client/server setup for a network game? I did read this excellent blog post on how to do distributed network physics in general (without traditional client prediction), but this question is specifically about how to handle collisions of owned objects.
Example
Say client A is 20 ms ahead of the server and client B is 300 ms ahead of the server (counting both latency and maximum jitter). This means that when the two vehicles collide, each client will see the other as 320 ms behind, displaced opposite to the other vehicle's velocity. Head-to-head on a Swedish highway (roughly 90 km/h each, a 50 m/s closing speed) that means a difference of 16 meters/17.5 yards!
What not to try
It is virtually impossible to extrapolate the positions, since I also have very complex vehicles with joints and bodies all over, which in turn have linear and angular positions, velocities and accelerations, not to mention states from user input.
I don't know of a perfect solution, and I have a feeling that one does not exist. Even if you could accurately predict the future position of the vehicle, you would be unable to predict the way the user will operate the controls. So the problem comes down to minimizing the negative effects of client/server lag. With that in mind, I would approach this from the position of the principle of least astonishment (paraphrased from Wikipedia):
In user interface design, the principle of least astonishment (or surprise) states that, when two elements of an interface conflict, or are ambiguous, the behaviour should be that which will least surprise the human user at the time the conflict arises.
In your example, each user sees two vehicles: their own, and that of the other player. The user expects their own vehicle to behave exactly the way they control it, so we are unable to play with that aspect of the simulation. However, the user cannot know exactly how the other user is controlling their vehicle, and I would use this ambiguity to hide the lag from the user.
Here is the basic idea:
The server has to make the decision about an impending collision. The collision detection algorithm doesn't have to be 100% perfect, it just has to be close enough to avoid obvious inconsistencies.
Once the server has determined that two vehicles will collide, it sends each of the two users a message indicating that a collision is imminent.
On client A, the position of vehicle B is adjusted (realistically) to guarantee that the collision occurs.
On client B, the position of vehicle A is adjusted (realistically) to guarantee that the collision occurs.
During the aftermath of the collision, the position of each vehicle can be adjusted, as necessary, so that the end result is in keeping with the rest of the game. This part is exactly what MedicineMan proposed in his answer.
In this way, each user is still in complete control of their own vehicle. When the collision occurs, it will not be unexpected: each user will see the other vehicle move towards them, and they will still have the feeling of a real-time simulation. The nice thing is that this method degrades gracefully: if both clients have low-latency connections to the server, the amount of adjustment will be small. The end result will, of course, get worse as the lag increases, but that is unavoidable. If someone is playing a fast-paced action game over a connection with several seconds' worth of lag, they simply aren't going to get the full experience.
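Here is a sketch of the two position-adjustment steps above, in my own code rather than anything from a real engine: each frame, client A steers its *displayed* copy of vehicle B toward the server-predicted collision point so the hit actually happens on screen. The linear velocity blend and all names are assumptions.

```python
def adjust_remote_vehicle(b_pos, b_vel, collision_point, time_left, dt):
    """Return B's displayed position/velocity for the next frame, bending
    its path so it reaches collision_point in time_left seconds."""
    if time_left <= dt:
        return collision_point, b_vel         # out of time: land the hit
    # Velocity that would reach the collision point exactly on time.
    needed_vel = [(c - p) / time_left for c, p in zip(collision_point, b_pos)]
    # Blend from the reported velocity toward the needed one, ramping the
    # correction in so the vehicle doesn't visibly snap.
    alpha = min(1.0, dt / time_left)
    vel = [v + alpha * (nv - v) for v, nv in zip(b_vel, needed_vel)]
    pos = [p + v * dt for p, v in zip(b_pos, vel)]
    return pos, vel
```

Called once per frame with a decreasing time_left, the blend factor grows toward 1, so the displayed vehicle converges on the collision point at the moment the server predicted.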
Perhaps the best thing that you can do is not to show the actual collision in real time, but to give the illusion that things are happening in real time.
Since the client is behind the server (lag), and the server needs to show the result of the collision, perhaps what you can do client-side is show a flash, explosion, or some other graphic to distract the user and buy enough time on the server side to calculate the result of the collision. When the server is finished with the prediction, it ships the result back to the client side for presentation.
Sorry to answer with "What not to try", but I've never heard of a solution that doesn't involve predicting the outcome on client side. Consider a simplified example:
Client A is stationary, and watching client B's vehicle approach a cliff. Client B's vehicle is capable of reducing speed to 0 instantly, and does so at the last possible moment before going over the cliff.
If Client A is attempting to show Client B's state in real time, Client A has no choice but to predict that Client B fell off the cliff. You see this a lot in MMORPGs designed such that a player's character is capable of stopping immediately when running full-speed. Otherwise, Client A could just show Client B's state as the state updates come in, but this isn't viable, as Client A needs to be able to interact with Client B in real time in your scenario (I assume).
Could you try simplifying the collision models so that extrapolation is possible for real-time prediction? Maybe give your "joints and bodies all over" less processor-intensive physical models, like a few cubes or spheres. I'm not too familiar with how to improve the efficiency of collision detection, but I assume it's done by detecting collisions between models that are less complex than the visual models.
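A minimal sketch of that proxy-volume idea, under the assumption of constant-velocity extrapolation and invented vehicle parameters: step cheap bounding spheres forward and report the first overlap.

```python
import math

def spheres_collide(p1, r1, p2, r2):
    return math.dist(p1, p2) <= r1 + r2       # cheap proxy test

def predict_collision(pos_a, vel_a, r_a, pos_b, vel_b, r_b,
                      horizon=0.5, dt=1.0 / 60.0):
    """Extrapolate both proxies with constant velocity and return the first
    time within `horizon` seconds that the spheres overlap, else None."""
    t = 0.0
    while t <= horizon:
        pa = [p + v * t for p, v in zip(pos_a, vel_a)]
        pb = [p + v * t for p, v in zip(pos_b, vel_b)]
        if spheres_collide(pa, r_a, pb, r_b):
            return t
        t += dt
    return None

# Two vehicles 20 m apart, closing head-on at 25 m/s each:
print(predict_collision((0, 0), (25, 0), 1.5, (20, 0), (-25, 0), 1.5))
```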
Regarding "What not to try": you are assuming that you need to predict perfectly, but you are never going to find a perfect solution in a game with complex physics. An approximation is probably the best you can do (for example, most commercial physics engines can cast a shape into the physics scene and return the first point of collision).
For example, I implemented some critical parts of the network physics for Mercenaries 2 under the guidance of Glenn (the blog poster you mentioned). It was impossible to push all of the necessary physics state across the wire for even a single rigid body. Havok physics gradually generates contact points each frame, so the current "contact manifold" is a necessary part of the physics state to keep the simulation deterministic. It's also way too much data. Instead, we sent over the desired transform and velocities and used forces and torques to gently push bodies into place. Errors are inevitable, so you need a good error correction scheme.
What I eventually ended up doing was skipping prediction altogether and simply doing this:
The client has very much the final say about its own position.
The server (almost) only overrides the owning client's position when a "high energy" collision has happened with another dynamic object (i.e. not static environment).
The client takes meshoffset = meshpos - physpos when receiving a positional update from the server, then sets meshpos = physpos + meshoffset each frame and gradually decreases meshoffset (see the sketch below).
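A minimal version of that last step as I read it (the decay constant and the bare-tuple math are my own choices):

```python
DECAY = 0.9   # per-frame shrink factor for the visual offset (assumed)

class SmoothedBody:
    def __init__(self):
        self.mesh_offset = (0.0, 0.0, 0.0)

    def on_server_update(self, mesh_pos, new_phys_pos):
        # Snap the physics to the server's position but keep the mesh where
        # it was last drawn; the difference becomes a purely visual offset.
        self.mesh_offset = tuple(m - p for m, p in zip(mesh_pos, new_phys_pos))

    def mesh_position(self, phys_pos):
        # Each frame: draw at physics + offset, shrinking the offset so the
        # mesh glides back onto the corrected physics position.
        self.mesh_offset = tuple(o * DECAY for o in self.mesh_offset)
        return tuple(p + o for p, o in zip(phys_pos, self.mesh_offset))
```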
It looks quite good most of the time (in low-latency situations); I don't even have to slerp my quaternions to get smooth transitions.
Skipping prediction probably gives high-latency clients an awful experience, but I don't have time to dwell on this if I'm ever going to ship this indie game. Once in a while it's nice to create a half-assed solution that is merely good enough rather than the best. ;)
Edit: I eventually ended up adding the "ownership" feature that Glenn Fiedler (the blogger mentioned in the question) implemented for Mercenaries 2: each client gets ownership of (dynamic) objects that they collide with for a while. This was necessary since penetration otherwise becomes deep in high-latency, high-speed situations. That solution works just as well as you'd think when you see the GDC video presentation; I can definitely recommend it!
A few thoughts.
Peer-to-peer is better at dealing with latency and high speeds.
So if this is your own engine, switch to peer-to-peer. You then extrapolate the other peer's vehicle, based on their button input, forward to where it is now. You then set up collision such that you collide against the other vehicle as if it were part of the world, i.e. you take the hit.
This means that as you collide against the other vehicle, you bounce off; on the peer's machine they bounce off you, so it looks roughly correct. The lower the latency, the better it works.
If you want to go client/server, then it will be inferior to p2p.
Things to attempt:
o) Extrapolate clients forward, as in p2p, to perform collision detection.
o) Send collision results back to clients and extrapolate forward.
Note: this is NEVER going to be as good as p2p. Fundamentally, high speed & latency = error, so removing latency is the best strategy, and p2p does that.
In addition to predicting on the client side where the other user might be and sending the collision information (and how you handled it) to the server, what most MMOs do to deal with lag is have the server run "in the past", as it were. Basically they buffer the recent inputs but only react to what happened 0.1 s in the past. This lets the server "peek into the future" when it needs to (i.e. when a collision is about to happen in its time frame, it can look at the buffered input to see what will happen and decide whether the collision is real).
Of course, this adds an extra layer of complexity to your program, as you have to consider what data to send to your clients and how they should react to it. For example, you could send the entire "future" buffer to the clients and let them see which possible collisions will actually happen and which won't.
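A rough sketch of that buffering scheme (the 0.1 s delay and the data layout are assumptions, not from any particular engine):

```python
from collections import deque

DELAY_S = 0.1   # how far "in the past" the server simulates

class DelayedInputBuffer:
    def __init__(self):
        self.buffer = deque()                # (arrival_time, player_input)

    def push(self, now, player_input):
        self.buffer.append((now, player_input))

    def pop_due(self, now):
        """Inputs old enough to act on; newer ones stay buffered and form
        the "future" the server can peek at for collision decisions."""
        due = []
        while self.buffer and self.buffer[0][0] <= now - DELAY_S:
            due.append(self.buffer.popleft()[1])
        return due

    def peek_future(self):
        return [inp for _, inp in self.buffer]
```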
Ross has a good point. You could simplify the model you use to detect collisions by abstracting it to some simpler volume (i.e. the rough boxy outline of the vehicle). Then you can do the predictions based on the simple volume and the detailed calculations on the exact volumes while you have the user distracted by the "explosion". It may not be perfect but would allow you to speed up your collision detection.
