iBeacon Trilateration and showing user's position on a map (iOS) [closed]

I want to build a simple iOS Application with iBeacons.
I have four iBeacons, and my goal is to calculate my position in a room using trilateration. I want to show the room on the display, set the fixed positions of the iBeacons, and then calculate my position and show it on the display.
My problem is that I don't know how to start.

Even though iBeacons are relatively simple to use, trilateration with them is far from simple. The standard is meant for determining your current location zone from the nearest beacon. The zones are: immediate (0-0.5 m), near (0.5-2 m), and far (2-20 m). Due to the instability of the signal, it is difficult to obtain more precise location data.
That being said, there are a couple of companies I know of that have worked on this problem: Estimote (Estimote Indoor SDK) and Steerpath. Looking at those two solutions could help you get started with your project.
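If you still want to attempt it yourself, the core math is small. Below is a minimal sketch of 2D trilateration from three ranged beacons, assuming you already get distance estimates (e.g. from CLBeacon's `accuracy` property) and have measured fixed room coordinates for each beacon yourself; the `RangedBeacon` type is just an illustrative container, not an API.

```swift
import CoreGraphics

/// A beacon with a known, fixed position in room coordinates (meters)
/// and a measured distance to the device (e.g. CLBeacon.accuracy).
struct RangedBeacon {
    let position: CGPoint   // fixed position in the room, set by you
    let distance: Double    // rough distance estimate in meters
}

/// Solves 2D trilateration from exactly three beacons by intersecting
/// their distance circles. Returns nil when the geometry is degenerate
/// (beacons collinear or measurements inconsistent).
func trilaterate(_ b1: RangedBeacon, _ b2: RangedBeacon, _ b3: RangedBeacon) -> CGPoint? {
    let (x1, y1, r1) = (Double(b1.position.x), Double(b1.position.y), b1.distance)
    let (x2, y2, r2) = (Double(b2.position.x), Double(b2.position.y), b2.distance)
    let (x3, y3, r3) = (Double(b3.position.x), Double(b3.position.y), b3.distance)

    // Subtracting pairs of circle equations yields two linear equations:
    // a*x + b*y = c  and  d*x + e*y = f.
    let a = 2 * (x2 - x1), b = 2 * (y2 - y1)
    let c = r1 * r1 - r2 * r2 - x1 * x1 + x2 * x2 - y1 * y1 + y2 * y2
    let d = 2 * (x3 - x2), e = 2 * (y3 - y2)
    let f = r2 * r2 - r3 * r3 - x2 * x2 + x3 * x3 - y2 * y2 + y3 * y3

    let det = a * e - b * d
    guard abs(det) > 1e-9 else { return nil }   // collinear beacons: no unique solution
    return CGPoint(x: (c * e - b * f) / det, y: (a * f - c * d) / det)
}
```

In practice you would smooth the raw distance estimates (moving average, Kalman filter) before feeding them in, because they fluctuate by meters from one reading to the next.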

Related

How can I locate people with a gyroscope and accelerometer? [closed]

I want to build an app that locates and tracks people on an iPhone using the gyroscope and accelerometer. No need for GPS here.
How should I approach this problem?
Unfortunately, it's not feasible to track a device's position based on the accelerometer and gyro.
To calculate position from accelerometer data, double integration needs to be applied. Integration amplifies noise and turns it into drift, so even a small measurement error would create a huge position drift. A similar problem appears for the gyro as well.
You can find more details here:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k&t=23m20s
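To make the drift concrete, here is a small self-contained simulation (the bias value is made up for illustration): a stationary device whose accelerometer reads a tiny constant bias instead of exactly zero, double-integrated into position.

```swift
import Foundation

let bias = 0.05        // m/s^2: a small, plausible sensor bias (assumed value)
let dt = 0.01          // 100 Hz sample rate
var velocity = 0.0     // m/s, from the first integration
var position = 0.0     // m,   from the second integration

for step in 1...6000 { // simulate 60 seconds
    velocity += bias * dt        // integrate acceleration -> velocity
    position += velocity * dt    // integrate velocity -> position
    if step % 1000 == 0 {
        print(String(format: "t = %2.0f s  drift = %6.2f m", Double(step) * dt, position))
    }
}
// After 60 s the computed position has drifted by roughly
// 0.5 * bias * t^2 = 90 m, even though the device never moved.
```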
You don't!
The values retrieved from the gyroscope and accelerometer are "relative" to the device. They have no absolute meaning, which is what you would need to derive any kind of location.
You can theoretically measure or calculate the path a device has taken, but you do not know whether the user took that path in Germany, China, or the USA. You know they went right, then left, then 200 m straight ahead, but that is of no help if you do not know where they started.
That being said, if you do have the initial position, you can theoretically calculate the new position based on the measured values. But that calculation is far too error-prone and far too inexact. If you measure the values over the course of a few minutes or even hours, you will probably get a result that is many meters or even kilometers off.
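For completeness, this is roughly what such a dead-reckoning attempt would look like with CoreMotion, starting from a known position. It is a sketch to show the idea, not something to rely on: as explained above, the doubly integrated error dominates within seconds, and a real attempt would also have to rotate `userAcceleration` from the device frame into the reference frame via `data.attitude`, which is omitted here for brevity.

```swift
import CoreMotion

final class DeadReckoner {
    private let motion = CMMotionManager()
    private var velocity = (x: 0.0, y: 0.0)      // m/s
    private(set) var position: (x: Double, y: Double)

    init(startingAt start: (x: Double, y: Double)) {
        position = start                          // the known initial position
    }

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 0.01
        motion.startDeviceMotionUpdates(using: .xArbitraryZVertical,
                                        to: .main) { [weak self] data, _ in
            guard let self = self, let data = data else { return }
            let dt = self.motion.deviceMotionUpdateInterval
            // Naive double integration of userAcceleration (in g).
            // Every bias and noise term here accumulates into
            // unbounded position drift.
            self.velocity.x += data.userAcceleration.x * 9.81 * dt
            self.velocity.y += data.userAcceleration.y * 9.81 * dt
            self.position.x += self.velocity.x * dt
            self.position.y += self.velocity.y * dt
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```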

Recognising complex objects in an image [closed]

I'm going to be more specific about the situation:
I've captured a screenshot from the game DotA. The information I want to extract is which objects, e.g. heroes (including name, HP, ...), creeps (including which side), towers, etc., are visible in the image and where they are. A complication is that in DotA 2 many of these objects can be viewed from many perspectives, so let's reduce the problem and assume that every object has only one orientation. How might this problem be solved quickly enough to recognise all objects in real time at about 30 fps? Any help or suggestions are welcome.
I think you have the right tags: a CNN for image segmentation. My point is that with so many different objects from different points of view and scales (because I guess you can zoom in/out on your heroes/objects), the easiest way (but the heaviest in terms of computation) is to build one CNN for each type of object.
Sample images would help a lot to get a better understanding of the problem.
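If you end up running inference on an Apple platform, a sketch of the deployment side with Vision and Core ML could look like the following. The detection model itself ("DotaObjects") is hypothetical; you would have to train and bundle it yourself from labelled screenshots. The Vision calls are real API.

```swift
import Vision
import CoreML
import CoreGraphics

/// Runs a Core ML object detector over a captured frame and returns
/// labelled bounding boxes. `model` wraps a hypothetical detector
/// trained on DotA screenshots (heroes, creeps, towers, ...).
func detectObjects(in screenshot: CGImage,
                   model: VNCoreMLModel,
                   completion: @escaping ([VNRecognizedObjectObservation]) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(observations)
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: screenshot, options: [:])
    // Run off the main thread; at 30 fps you only have ~33 ms per frame,
    // which constrains how many separate per-class models you can afford.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}

// Usage (sketch): each observation carries labels and a normalized bounding box.
// detectObjects(in: frame, model: dotaModel) { observations in
//     for obs in observations {
//         print(obs.labels.first?.identifier ?? "?", obs.boundingBox)
//     }
// }
```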

How can I measure short distances inside buildings [closed]

I'm new here. I'm learning Xcode and Swift by myself, and things are going well.
I wanted to ask what would be the best (and most exact) way to measure short distances, let's say up to 10 meters, inside a building, so I can't use GPS.
I want to get results in millimeters or centimeters.
Thank you for your time, guys.
Calculating distances based on device movement using the gyroscope, accelerometer, and other internal sensors is impossible.
There are a few reasons why, but see this link for an explanation:
https://www.youtube.com/watch?v=_q_8d0E3tDk&list=UUj_UmpoD8Ph_EcyN_xEXrUQ&spfreload=10
Use iBeacon and the CoreLocation framework. Here is a video from last WWDC that touches on this subject:
Taking CoreLocation Indoors
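As a starting point, here is a minimal sketch of ranging a beacon with CoreLocation and reading its rough distance (this uses the classic pre-iOS 13 ranging API; newer systems use CLBeaconIdentityConstraint instead). The UUID is a placeholder: substitute the proximity UUID your own beacons advertise. Be aware that `CLBeacon.accuracy` is an estimate with error on the order of meters, so millimeter or centimeter precision is not achievable this way.

```swift
import CoreLocation

final class BeaconRanger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID: replace with the one your beacons broadcast.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "my-room")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // needs a usage string in Info.plist
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // `accuracy` is a rough distance in meters; it fluctuates by
            // meters, so expect zone-level ("near"/"far") resolution only.
            print("major \(beacon.major) minor \(beacon.minor): ~\(beacon.accuracy) m")
        }
    }
}
```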

Radar View like LOVOO [closed]

Hi, I am developing a location-based iOS application with a feature to show the user's friends in a radar view. I am getting the friends' latitude and longitude from the backend.
I have tried many things to show them in a radar view that behaves like a compass (meaning that when I rotate the device, the friends' spots in the radar also rotate).
The client needs the feature to work exactly like the app LOVOO.
How can I do this?
Please help me.
I've created a similar LOVOO-like radar view for iOS. It's available on GitHub:
https://github.com/abm-adnan/Radar
If you are not worried about the curvature of the earth's surface, then this turns into something really easy.
Convert all of the latitudes and longitudes into a polar coordinate system (angle and magnitude from your own latitude and longitude). Then get the device's compass bearing and adjust each friend's angle based on your device's angle. Plot everything out on your radar (translating the polar coordinates back to Cartesian), and update whenever the device's compass bearing changes (or a friend's location updates).
It's a lot of math, but the process is pretty straightforward, and you should be able to get all of the conversion information with a single Google search.
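A sketch of that math in Swift, using a flat-earth approximation that is fine for radar ranges of a few kilometers (the radar radius and meters-per-radius scale are parameters you choose; the values in the comments are made up):

```swift
import CoreLocation
import CoreGraphics

/// Projects a friend's coordinate onto a circular radar view centered on me.
/// Flat-earth approximation: valid for short distances only.
func radarPoint(me: CLLocationCoordinate2D,
                friend: CLLocationCoordinate2D,
                deviceHeadingDegrees: CLLocationDirection,
                radarRadius: CGFloat,           // radius of the radar view in points
                metersPerRadarRadius: Double    // e.g. 1000 m maps to the radar edge
               ) -> CGPoint {
    // Approximate meters east/north from me to the friend.
    let metersPerDegreeLat = 111_320.0
    let dNorth = (friend.latitude - me.latitude) * metersPerDegreeLat
    let dEast  = (friend.longitude - me.longitude) * metersPerDegreeLat
               * cos(me.latitude * .pi / 180)

    // Polar coordinates: distance in meters, bearing from true north.
    let distance = (dNorth * dNorth + dEast * dEast).squareRoot()
    let bearing = atan2(dEast, dNorth)                       // radians, 0 = north

    // Rotate by the device heading so "up" on the radar is where I'm facing.
    let angle = bearing - deviceHeadingDegrees * .pi / 180

    // Back to Cartesian, clamped to the radar's edge.
    let r = min(distance / metersPerRadarRadius, 1.0) * Double(radarRadius)
    return CGPoint(x: r * sin(angle), y: -r * cos(angle))   // y flipped: UIKit's y grows downward
}
```

Call this for each friend whenever the location manager delivers a heading update, and offset the returned point by the radar view's center.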
In case you also require the live camera image and augmentations on top of it, you can use the Wikitude SDK, which includes a customizable radar widget (see the example here). You can also use it without the camera image; however, the library could be too much for your case.

iOS and Basic offline app [closed]

I know about mapbox/routeme and routeme, but I need something basic. I need a basic map with all the countries, where the only details would be the names of the countries and their borders.
Can I do that with Mapbox? Is there any easier way to do it?
The app should work offline and preload the maps.
iOS 7 supports offline maps.
You just need to add tiles for the specific area.
A tile contains an overlay of the map, so we just need to add that overlay to the map.
For the tiles, you need to calculate the area for which you need an offline map and then get that area from OpenStreetMap.
That's it.
Here is a detailed tutorial for this: Link.
After getting the idea from the tutorial, you can easily understand the map structure and move ahead.
Cheers.
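Concretely, with MapKit (iOS 7+) this means subclassing MKTileOverlay to load tiles bundled with the app instead of from the network. A minimal sketch, assuming you have pre-downloaded OpenStreetMap tiles into the app bundle under a `tiles/{z}/{x}/{y}.png` layout (that folder layout is an assumption):

```swift
import MapKit

/// Serves map tiles that were bundled with the app, so the map works offline.
final class OfflineTileOverlay: MKTileOverlay {
    override func loadTile(at path: MKTileOverlayPath,
                           result: @escaping (Data?, Error?) -> Void) {
        // Assumed bundle layout: tiles/{z}/{x}/{y}.png, pre-downloaded
        // from OpenStreetMap for the area you need offline.
        let url = Bundle.main.url(forResource: "\(path.y)",
                                  withExtension: "png",
                                  subdirectory: "tiles/\(path.z)/\(path.x)")
        if let url = url, let data = try? Data(contentsOf: url) {
            result(data, nil)
        } else {
            result(nil, nil)   // no tile for this path; MapKit leaves it blank
        }
    }
}

// In the view controller that owns the MKMapView:
// let overlay = OfflineTileOverlay(urlTemplate: nil)
// overlay.canReplaceMapContent = true        // hide Apple's own map underneath
// mapView.addOverlay(overlay, level: .aboveLabels)
//
// Rendered via MKMapViewDelegate:
// func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
//     if let tiles = overlay as? MKTileOverlay {
//         return MKTileOverlayRenderer(tileOverlay: tiles)
//     }
//     return MKOverlayRenderer(overlay: overlay)
// }
```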
