I've been playing around with Estimote Beacons for the last few days. I'm starting to doubt the effectiveness of iBeacons because of the high latency they have when it comes to determining a beacon's position.
When you move 2-3 meters, it takes a few seconds until the reported position catches up.
A use case like capturing a person walking past a beacon can therefore be quite hard to implement.
Is it possible to manipulate the update/refresh rate of a CLLocationManager or a CLBeaconRegion? E.g. every 0.1 seconds?
The reason you are seeing it take so long for the iOS distance measurement (what they call "accuracy" in the CLBeacon object) to stabilize is that it is based on a running average of the RSSI -- the received signal strength. This signal strength measurement is inherently noisy and bounces all around, which is why collecting multiple samples is necessary to smooth it out.
But because of this averaging, there is a lag. The most recent estimate is based on measurements from several seconds ago.
You cannot change the refresh rate of the CLLocationManager or the CLBeaconRegion, but you may be able to get an iBeacon that transmits more often than the 1 s baseline. More transmissions give you more RSSI measurements to work with, which may help smooth out the noise. Because I am not sure of the internal implementation of CoreLocation, I am not positive whether a higher iBeacon transmission rate would reduce the noise on the distance measurement.
You can always calculate your own distance estimate, too, based on RSSI and the txPower calibration value sent out by an iBeacon. If you use a single RSSI sample, there will be no lag from averaging in earlier measurements, but you will see a high degree of variability. You basically have to accept a tradeoff between filtering out noise and filtering out old measurements taken at different positions.
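To make that tradeoff concrete, here is a minimal smoothing sketch in Swift (the struct name and the alpha value are mine, not from any framework): an exponential moving average where a high alpha reacts quickly but stays noisy, and a low alpha is smooth but lags.

    import Foundation

    // Exponential moving average over raw RSSI samples. alpha tunes the
    // tradeoff described above: high alpha = responsive but noisy,
    // low alpha = smooth but lagging behind real position changes.
    struct RSSISmoother {
        let alpha: Double              // 0 < alpha <= 1
        private var value: Double?

        init(alpha: Double) { self.alpha = alpha }

        mutating func add(_ rssi: Double) -> Double {
            let next = value.map { alpha * rssi + (1 - alpha) * $0 } ?? rssi
            value = next
            return next
        }
    }

    var smoother = RSSISmoother(alpha: 0.3)
    for sample in [-68.0, -75.0, -62.0, -70.0] {
        print(smoother.add(sample))    // smoothed values trail the raw samples
    }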
If you want to try your own calculation, you can use something like the formula below (see my answer to this question for details):

    distanceInMeters = 0.89976 * pow(rssi / txPower, 7.7095) + 0.111
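For completeness, here is that formula wrapped as a runnable Swift helper. The function name is mine; the extra branch for ratios below 1 and the -1 return for an unreadable RSSI are common conventions rather than part of the formula above.

    import Foundation

    // Curve-fit distance estimate from RSSI and the beacon's calibrated
    // txPower (its expected RSSI at 1 m, e.g. -59 dBm).
    func distanceInMeters(rssi: Double, txPower: Double) -> Double {
        guard rssi != 0 else { return -1.0 }   // convention: -1 means unknown
        let ratio = rssi / txPower
        if ratio < 1.0 {
            return pow(ratio, 10.0)            // very close to the beacon
        }
        return 0.89976 * pow(ratio, 7.7095) + 0.111
    }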
You have to set realistic expectations for how accurate this estimate is going to be. Apple generally recommends against relying on the "accuracy" measurement inside CLBeacon, unless it is used in combination with the rougher "proximity" measurement, which bucketizes the distance into "immediate", "near" and "far" groupings.
Apple iPhones now have a U1 chip, described as "Ultra Wideband technology for spatial awareness". I've heard the technology can do time-of-flight calculations to determine range, but that doesn't explain how it determines relative position. How does the positioning work?
How does ultra-wideband work?
Travelling at the speed of light
The idea is to send radio waves from one module to another and measure the time of flight (TOF), or in other words, how long it takes. Because radio waves travel at the speed of light (c = 299792458 m/s), we can simply multiply the time of flight by this speed to get the distance.
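In numbers (a quick, purely illustrative Swift sketch; the 10 ns flight time is an invented example):

    import Foundation

    // Distance from time of flight: d = c * t.
    let c = 299_792_458.0            // speed of light in m/s
    let tof = 10e-9                  // example: a 10 ns one-way flight time
    print("distance: \(c * tof) m")  // ~3 m; a 1 ns timing error shifts this by ~30 cm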
However, perhaps you've noticed that radio waves travel fast. Very fast! In a single nanosecond, which is a billionth of a second, a wave has travelled almost 30 cm. So if we want to perform centimetre-accurate ranging, we have to be able to measure the timing very, very accurately! So now the question is, how can we do this? How can we even measure the timing of... a wave?
It's all about the bandwidth
In physics, there is something called Heisenberg's uncertainty principle. Applied to signals, it states that it is impossible to know both the frequency and the timing of a signal precisely. Consider for example a sinusoid: a signal with a well-known frequency but a very ill-determined timing, since the signal has no beginning or end. However, if we combine multiple sinusoidal signals with slightly different frequencies, we can create a 'pulse' with more defined timing, i.e., the peak of the pulse. This is seen in the following figure from Wikipedia, which sequentially adds sinusoids to a signal to obtain a sharper pulse:
fig. 1: sinusoids of slightly different frequencies summed to produce an increasingly sharp pulse (from Wikipedia)
The range of frequencies used for this signal is called the bandwidth Δf. Using Heisenberg's uncertainty principle, we can roughly determine the width Δt of the pulse, given a certain bandwidth Δf:
Δf · Δt ≥ 1/(4π)
From this simple formula we can see that if we want a narrow pulse, which is necessary for accurate timing, we need a large bandwidth. For example, using the bandwidth Δf = 20 MHz available to wifi systems, we obtain a pulse width of Δt ≥ 4 ns. At the speed of light this corresponds to a pulse 1.2 m 'long', which is too much for accurate ranging: firstly because it is hard to accurately determine the peak of such a wide pulse, and secondly because of reflections. Reflections come from the signal bouncing off objects (walls, ceilings, closets, desks, etc.) in the surrounding environment. These reflections are also captured by the receiver and may overlap with the line-of-sight pulse, which makes it very hard to measure the true peak of the pulse. With pulses 4 ns wide, any object within 1.2 m of the receiver or the transmitter will cause an overlapping pulse. Because of this, time-of-flight ranging with wifi is not suitable for indoor applications.
Ultra-wideband signals typically have a bandwidth of 500 MHz, resulting in pulses around 0.16 ns wide! This timing resolution is so fine that, at the receiver, we are able to distinguish several reflections of the signal. Hence, accurate ranging remains possible even in places with a lot of reflectors, such as indoor environments.
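A quick sketch of the numbers above, using the Δt ≥ 1/(4π·Δf) bound (purely illustrative):

    import Foundation

    let c = 299_792_458.0                     // speed of light, m/s

    // Minimum pulse width allowed by a given bandwidth, per the
    // uncertainty relation above.
    func pulseWidth(bandwidthHz: Double) -> Double {
        1.0 / (4.0 * Double.pi * bandwidthHz)
    }

    for bw in [20e6, 500e6] {                 // wifi vs. UWB bandwidth
        let dt = pulseWidth(bandwidthHz: bw)
        print("Δf = \(bw / 1e6) MHz -> Δt ≈ \(dt * 1e9) ns ≈ \(dt * c) m of pulse length")
    }
    // 20 MHz -> ~4 ns (~1.2 m); 500 MHz -> ~0.16 ns (~5 cm)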
Where to put all this bandwidth?
So we need a lot of bandwidth. Unfortunately, everybody wants it: in wireless communication systems, more bandwidth means faster downloads. But if everybody transmitted on the same frequencies, all the signals would interfere and nobody would be able to receive anything meaningful. Because of this, the use of the frequency spectrum is highly regulated.
So how is it possible that UWB gets 500 MHz of precious bandwidth while most other systems have to make do with a lot less? Well, UWB systems are only allowed to transmit at very low power (the power spectral density must stay below -41.3 dBm/MHz). This very strict power constraint means that a single pulse cannot reach far: at the receiver, the pulse will likely be below the noise level. To solve this, the transmitter sends a train of pulses (typically 128 or 1024) to represent a single bit of information. The receiver accumulates the received pulses, and with enough of them, the power of the 'accumulated pulse' rises above the noise level and reception becomes possible. Hooray!
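The accumulation step itself is simple; here is a toy Swift sketch (the array-of-samples representation is my own simplification): the pulse adds up coherently across copies while zero-mean noise averages out.

    // Accumulate N noisy copies of the same pulse: the pulse shape adds
    // up coherently while zero-mean noise averages toward zero, so the
    // result rises above the noise floor.
    func accumulate(_ pulses: [[Double]]) -> [Double] {
        guard let n = pulses.first?.count else { return [] }
        var sum = [Double](repeating: 0, count: n)
        for pulse in pulses {
            for i in 0..<n { sum[i] += pulse[i] }
        }
        return sum.map { $0 / Double(pulses.count) }
    }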
The IEEE 802.15.4 standard for Low-Rate Wireless Personal Area Networks has defined a number of UWB channels, each at least 500 MHz wide. Depending on the country you're in, only some of these channels are allowed. In general, the lower-band channels (1 to 4) can be used in most countries under some limitations on update rate (using mitigation techniques). Channel 5 is accepted in most parts of the world without any limitations, with the notable exception of Japan. Purely from physics: the lower the channel's center frequency, the better the range.
A note on the received signal strength (RSS)
There exists another way to measure the distance between two points using radio waves, and that is the received signal strength. The further apart the two points are, the smaller the received signal strength will be. Hence, from this RSS value, we should be able to derive the distance. Unfortunately, it's not that simple. In general, the received signal strength is a combination of the power of all the reflections, not only of the desired line-of-sight path. Because of this, it becomes very hard to relate the RSS value to the true distance. The figure below shows just how bad it is.
In this figure, the RSS value of a Bluetooth signal is measured at certain distances. At every distance, the error bars show how much the RSS value varies at that distance. Clearly, the variation in the RSS value is very large, which makes RSS unsuitable for accurate ranging or positioning.
Source
I am sorry if this has been asked in one way, shape or form before. I have started working with beacons in Xcode (Swift), using CoreLocation. I really need a more accurate determination of the distance between the device and a beacon, though. So far I have been using the standard proximity region values (Far, Near, and Immediate), but this just isn't cutting it at all. It seems far too unstable for the solution I am looking for, which is a simple one at best.
My scenario:
I need to display notifications, adverts, images etc. on the user's device when it is approximately 4 meters away from the beacon. This sounds simple enough, but when I found out that the only real solutions for beacons are the aforementioned proximity regions, I started to get worried, because I need to display only to devices that are 3-5 meters away, no more.
I am aware of the accuracy property of the CLBeacon class, but Apple states it should not be used for accurate positioning of beacons, which I believe is what I am trying to achieve.
Is there a solution to this? Any help is appreciated!
Thanks,
Olly
There are limitations of physics when it comes to estimating distance with Bluetooth radio signals. Radio noise, signal reflections, and obstructions all affect the ability to estimate distance based on radio signal strength. It's OK to use beacons for estimating distance, but you must set your expectations appropriately.
Apple's algorithms in CoreLocation take a running average of the measured signal strength over 20 seconds or so, then convert it to a distance estimate in meters, which is put into the CLBeacon accuracy field. That field in turn is used to derive the proximity field (0.5 meters or less means immediate, 0.5-3 meters means near, etc.)
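For reference, a minimal Swift ranging sketch that reads both fields (this uses the iOS 13+ constraint-based API; the UUID below is a placeholder, not a real beacon's):

    import CoreLocation

    // Requires NSLocationWhenInUseUsageDescription in Info.plist.
    final class BeaconRanger: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()
        // Placeholder proximity UUID -- substitute your own beacon's UUID.
        private let constraint = CLBeaconIdentityConstraint(
            uuid: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!)

        func start() {
            manager.delegate = self
            manager.requestWhenInUseAuthorization()
            manager.startRangingBeacons(satisfying: constraint)
        }

        func locationManager(_ manager: CLLocationManager,
                             didRange beacons: [CLBeacon],
                             satisfying constraint: CLBeaconIdentityConstraint) {
            for beacon in beacons {
                // accuracy: smoothed distance estimate in meters (-1 if unknown)
                // proximity: the immediate/near/far bucket derived from it
                print("accuracy: \(beacon.accuracy) m, proximity: \(beacon.proximity.rawValue)")
            }
        }
    }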
When Apple recommends against using the accuracy field, it is simply trying to protect you against unrealistic expectations. This will never be an exact estimate in meters. Best results will come with a phone out of a pocket, with no obstructions between the beacon and the phone, and the phone relatively stationary. Under best conditions, you might expect to get distance estimates of +/- 1 meter at close distances of 3 meters or less. The further you get away, the more variation you will see.
You have to decide if this is good enough for your use case. If you can control the beacons there are a few things you can do to make the results as good as possible:
Turn the beacon transmitter power setting up as high as possible. This gives you a higher signal to noise ratio, hence better distance estimates.
Turn the advertising rate up as high as possible. This gives you more statistical samples, hence better distance estimates.
Place your beacons in locations where there will be as few obstructions as possible.
Always calibrate your beacon after making changes like the above. Calibration involves measuring the signal level at 1 meter and storing this as a calibration constant inside the beacon. Consult your beacon manufacturer's instructions for details of how to do this calibration; a sketch of the averaging step follows this list.
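The calibration constant itself is just an average of RSSI samples captured at 1 m. A Swift sketch (collecting the samples is assumed to happen elsewhere, e.g. via ranging callbacks):

    // Average many RSSI samples taken with the phone held 1 m from the
    // beacon; the result is the calibration constant (a.k.a. Measured
    // Power / txPower) to store in the beacon.
    func calibrationConstant(samplesAtOneMeter: [Double]) -> Double {
        precondition(!samplesAtOneMeter.isEmpty, "need at least one sample")
        return samplesAtOneMeter.reduce(0, +) / Double(samplesAtOneMeter.count)
    }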
I want to use an iPhone running a beacon-scanning app to detect an iBeacon and measure how my distance to it changes. However, I found the scanning rate is once per second, which is too slow for my movement speed. Some documents say the once-per-second scan rate is fixed by the API and cannot be changed.
So is there any chance to speed up the scanning rate?
There are two issues with ranging beacons via CoreLocation for fast-moving mobile devices:
As you mention, updates come only once per second.
The distance estimate in the CLBeacon accuracy field is based on the running average of the RSSI over 20 seconds, so it effectively gives you the average distance over that interval.
Unfortunately, you cannot change this -- it is how the API works. An alternative is to use the CoreBluetooth APIs, which can give you a callback once for each Bluetooth packet -- 10 times per second for a beacon advertising at that rate. There are three obstacles with this (a minimal scanning sketch follows the list below):
You do not get a distance estimate with CoreBluetooth callbacks, just an RSSI measurement, so you must calculate your own distance from RSSI.
There is a lot of noise on RSSI, so a distance estimate calculated from a single reading will be very inaccurate.
An iBeacon transmission cannot be parsed by iOS using CoreBluetooth, so you must use an alternate beacon format like AltBeacon.
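Here is the minimal CoreBluetooth scanning sketch mentioned above; per the last point, it only makes sense with a non-iBeacon format such as AltBeacon (the class name is mine):

    import CoreBluetooth

    final class RSSIScanner: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            // Allow duplicates to get one callback per advertisement packet.
            central.scanForPeripherals(withServices: nil,
                options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
        }

        func centralManager(_ central: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            // One raw RSSI sample per packet -- noisy, so apply your own
            // smoothing and distance calculation on top.
            print("\(peripheral.identifier): RSSI \(RSSI) dBm")
        }
    }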
You have to decide if these obstacles are acceptable for your use case.
I am developing an iOS application in which I need to know the exact distance and direction of the device from the beacon. I am using an Estimote beacon.
I have used iOS's CLLocation as well as Estimote's framework, but both of them give an incorrect value for the distance. Moreover, the values fluctuate a lot; the beacon even goes into an unknown state (accuracy -1.0) a lot of the time.
I have also tried to use the formula given here:
Understanding ibeacon distancing
but in iOS, it seems there is no way to get the txPower or measured power of a beacon.
I have searched a lot, but nowhere have I found a satisfactory way to determine the distance accurately.
Is there any other way to accurately find the distance and direction of an iOS device from a beacon?
The distance is computed by comparing the received signal strength (RSSI) with the advertised transmit power (txPower) of the beacon, as the signal strength is, in theory, inversely proportional to the square of the distance.
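As an illustration of that relationship, here is the textbook log-distance path-loss model in Swift (this is a generic approximation, not Estimote's or Apple's actual algorithm; n is an environment-dependent exponent):

    import Foundation

    // Log-distance path-loss model: txPower is the calibrated RSSI at 1 m,
    // n is the path-loss exponent (~2 in free space, 3-4 indoors).
    func estimatedDistance(rssi: Double, txPower: Double, n: Double = 2.0) -> Double {
        pow(10.0, (txPower - rssi) / (10.0 * n))
    }

    // Example: txPower = -59 dBm, measured RSSI = -71 dBm, n = 2
    // -> 10^((-59 - (-71)) / 20) = 10^0.6 ≈ 4 m
    print(estimatedDistance(rssi: -71, txPower: -59))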
But there are lots of other things that can affect RSSI, including obstacles, orientation of the antennas, and possibly multi-path (reflections). So it's difficult to accurately measure distance based on this information.
Another way of measuring distance is using round-trip-time (RTT): you send something to the beacon, and you measure how long it takes to come back. But this requires a fixed response time, and on this sort of scale (meters), there are probably enough variable delays here and there that it might severely affect the calculation.
Direction would require either triangulation or multiple directional antennas, and I don't believe either is available in this scenario.
In short, you can get a rough idea of the distance (which is why it's good for proximity alerts), but accurate distance or direction would require different technologies.
Why do you need them? There may be alternatives based on your specific scenario.
EDIT
If you have a large number of beacons around, and you know their exact positions, it might be possible to pull off the following:
use at least 3 beacon distances to compute your exact position by trilateration (often loosely called triangulation)
from there, as you know the position of the beacons, you can compute the distance and direction of any of the beacons (or anything else, really)
Of course, depending on the actual accuracy of the beacon distance measurement provided by the SDK, the result may be more or less accurate. The more beacons you have, the more precise you should be able to get (by picking only those that return a distance, or by eliminating those that are not "compatible" with the others when computing solutions). A minimal 2D sketch of the idea follows.
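Here is a closed-form 2D trilateration sketch in Swift, assuming exactly three beacons with known positions and reasonably consistent distance estimates (the type and function names are mine):

    import Foundation

    struct Beacon2D { let x, y, r: Double }   // known position + estimated distance (m)

    // Solve the linearized circle-intersection system with Cramer's rule.
    // Returns nil when the beacons are (nearly) collinear: no unique fix.
    func trilaterate(_ b1: Beacon2D, _ b2: Beacon2D, _ b3: Beacon2D) -> (x: Double, y: Double)? {
        let a = 2 * (b2.x - b1.x), b = 2 * (b2.y - b1.y)
        let c = b1.r * b1.r - b2.r * b2.r - b1.x * b1.x + b2.x * b2.x - b1.y * b1.y + b2.y * b2.y
        let d = 2 * (b3.x - b1.x), e = 2 * (b3.y - b1.y)
        let f = b1.r * b1.r - b3.r * b3.r - b1.x * b1.x + b3.x * b3.x - b1.y * b1.y + b3.y * b3.y
        let det = a * e - b * d
        guard abs(det) > 1e-9 else { return nil }
        return ((c * e - b * f) / det, (a * f - c * d) / det)
    }

    // Beacons at (0,0), (4,0), (0,4), each ~2.83 m away -> fix near (2, 2)
    if let fix = trilaterate(Beacon2D(x: 0, y: 0, r: 2.83),
                             Beacon2D(x: 4, y: 0, r: 2.83),
                             Beacon2D(x: 0, y: 4, r: 2.83)) {
        print("position ≈ (\(fix.x), \(fix.y))")
    }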
Even with 3 or more beacons at fixed positions, you still won't get very accurate positioning without some serious and complex noise reduction. That's because radio waves are prone to diffraction, multipath propagation, interference and absorption -- mostly by metal objects and water (therefore human bodies are strong signal blockers). Even the phone's alignment (antenna position) can have a significant impact on the proximity readings. Therefore, without implementing algorithms for noise reduction, trilateration can give you an accuracy of about 5 meters.
You can find some examples in Obj-C (https://github.com/MatVre/MiBeaconTrilaterationDemo) and Swift (https://github.com/a34729t/TriangulatorSwift) and check how they work for you.
Cheers.
We are having an issue with iBeacons.
The app sometimes makes a wrong guess as to which proximity region it is in before eventually correcting itself. It sometimes shows Far when the proximity is Near, and then later flips back to Near.
The problem occurs when we need to fire an event upon entering the Near/Far/Immediate region; this doesn't happen reliably. The app is also likely to lose range of the beacons for brief instants.
Is there any other way to solve this issue?
It is normal for the Proximity estimate to fluctuate with radio noise, but your experience sounds extreme. What iBeacon brand are you using?
Make sure you are using an iBeacon with a fast enough transmission rate. Different iBeacons transmit advertisements at different frequencies, from 30 times per second to once per second or less. Generally, faster transmission rates give you less noisy distance estimates because they give iOS more radio signal strength measurements to work with.
If an iBeacon transmits less than once per second, you may get intermittent exit/entry events.
For your testing, try an iOS-based iBeacon app like Locate for iBeacon or EZBeacon to see if it helps. It is known to transmit 30 times per second.
The proximity issue can be affected by advertising frequency, as David has already said. The reason is that iOS takes an average of the RSSI readings over time and uses these to produce a final value; if you hold an iOS device in an ideal location (i.e., clear line of sight to the beacon), the result settles down over a few seconds of holding the device still. Apple describes the averaging as: "This value is the average RSSI value of the samples received since the range of the beacon was last reported to your app."
However, a bigger factor can be fluctuations in the environment: the RSSI will change dramatically if an obstruction appears between the iBeacon and the iOS device. If the iBeacon and iOS device are both at a low level, this could be a person walking past. I have published some initial results using Estimote iBeacons that show changes in measured distance as the device operator rotates 360 degrees. A distance change of +/- 2 m is not uncommon in this circumstance and could result in the behaviour you observed if the iOS device is near the proximity region boundary.
This is Wojtek Borowicz, I'm a community evangelist at Estimote.
Calculating the exact proximity of a beacon based solely on the radio waves it's broadcasting is really hard. You encounter factors like multipath propagation, wave diffraction, absorption and interference. That's exactly why the iBeacon standard does not try to calculate the exact distance between a beacon and the receiving device. Instead, it uses a value called RSSI (received signal strength indicator), which allows you to estimate proximity based on signal power. For calibration purposes, a metric called Measured Power is also included -- but it's nothing more than the RSSI measured 1 meter away from the beacon. Even calibrated, RSSI might fluctuate heavily due to the factors mentioned above.
The stability of the beacon's signal also depends on two main factors: the advertising interval (the shorter the interval, i.e. the more frequent the advertising, the better the signal) and the broadcasting power (the higher, the better the signal). Improving them will allow for much better proximity readings, but will also strongly affect battery life.