I want to build an app that locates and tracks people with an iPhone using the gyroscope and accelerometer; GPS is not needed here.
How should I approach this problem?
Unfortunately, it's not feasible to track device position based on the accelerometer and gyroscope alone.
To calculate position from accelerometer data, double integration must be applied. Integration amplifies noise and turns it into drift, so even a small measurement error creates a huge position drift. A similar problem affects the gyroscope.
You can find more details here:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k&t=23m20s
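To get a feel for the scale of the problem, here is a minimal Swift sketch that double-integrates nothing but zero-mean sensor noise (the noise level is a made-up value, roughly in the range of consumer IMUs). Even though the true motion is zero, the estimated position wanders without bound:

```swift
import Foundation

// Minimal sketch: double-integrate zero-mean Gaussian noise standing in
// for accelerometer error. The true acceleration is always zero, so any
// position output is pure drift. sigma is a made-up noise level.
let dt = 0.01       // 100 Hz sample interval, seconds
let sigma = 0.01    // hypothetical noise standard deviation, m/s^2

// Box-Muller transform: turn two uniform samples into one Gaussian one
func gaussianNoise(sigma: Double) -> Double {
    let u1 = Double.random(in: Double.leastNonzeroMagnitude...1)
    let u2 = Double.random(in: 0..<1)
    return sigma * sqrt(-2 * log(u1)) * cos(2 * .pi * u2)
}

var velocity = 0.0  // m/s
var position = 0.0  // m
for step in 1...6000 {                               // one minute of samples
    let measuredAccel = gaussianNoise(sigma: sigma)  // true acceleration is 0
    velocity += measuredAccel * dt                   // first integration
    position += velocity * dt                        // second integration
    if step % 1000 == 0 {
        print("t = \(Double(step) * dt) s, drift = \(position) m")
    }
}
```

Each run drifts by a different amount and in a different direction, but the error grows with time, which is exactly the problem described above.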
You don't!
The values retrieved from the gyroscope and accelerometer are "relative" to the device. They have no absolute meaning, which is what you would need to derive any kind of location.
You can theoretically measure and calculate which path a device has taken, but you do not know whether the user took that path in Germany, China, or the USA. You know they went right, then left, then 200 m straight ahead, but that is of no help if you do not know where they started.
That being said, if you do have the initial position, you can theoretically calculate the new position based on the measured values. But that calculation is far too error-prone and inexact. If you integrate the values over the course of a few minutes or even hours, you will probably end up with a position that is off by many meters or even kilometers.
I'm going to be more specific about the situation:
I've captured a screenshot from the game DotA 2. The information I want to extract is which objects, e.g. heroes (along with their name, HP, ...), creeps (including which side), towers, etc., are visible in the image and where they are. One complication is that in DotA 2 many of these objects can be viewed from many perspectives, so let's reduce the problem and assume that every object has only one orientation. How might this problem be solved quickly enough to recognise all objects in real time at about 30 fps? Any help or suggestions are welcome.
I think you are on the right track: a CNN for image segmentation. My point is that with so many different objects seen from different viewpoints and scales (because I guess you can zoom in/out on your heroes/objects), the easiest way (but the heaviest in terms of computation) is to build one CNN for each type of object.
But images would help a lot to get a better understanding of the problem.
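The question is not tied to any platform, but as one concrete illustration of running a trained detector on each frame, here is a minimal Swift sketch using Apple's Vision framework with a Core ML model. `DotaDetector` is a hypothetical model name standing in for whatever network(s) you train:

```swift
import Vision
import CoreML

// Sketch: run a (hypothetical) Core ML object-detection model on one
// frame and print each detected object's label and bounding box.
func detectObjects(in image: CGImage) throws {
    // DotaDetector is an assumed, auto-generated Core ML model class
    let model = try VNCoreMLModel(
        for: DotaDetector(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // boundingBox is in normalized image coordinates (0...1)
            let label = observation.labels.first?.identifier ?? "unknown"
            print("\(label) at \(observation.boundingBox)")
        }
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```

Whether one model per object type or a single multi-class detector hits 30 fps depends entirely on the model size and hardware, so treat this purely as a structural sketch.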
I want to build a simple iOS application with iBeacons.
I have four iBeacons, and my goal is to calculate my position in the room by trilateration. I want to show the room on the display, set the fixed positions of the iBeacons, and then calculate my position and show it as well.
My problem is, I don't know how to start.
Even though iBeacons are relatively simple to use, trilateration with them is far from simple. The standard is meant for determining your current location zone from the nearest beacon; the zones are immediate (0–0.5 m), near (0.5–2 m), and far (2–20 m). Due to the instability of the signal, it is difficult to obtain more precise location data.
That being said, there are a couple of companies I know of that have worked on this problem: Estimote (with its Estimote Indoor SDK) and Steerpath. Looking at those two solutions could help you get started with your project.
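For the trilateration step itself, here is a minimal 2D Swift sketch that linearizes the three circle equations and solves them directly. The beacon positions and distances in the example are made up, and with real CLBeacon distance estimates you should expect meter-level error:

```swift
import Foundation

// Minimal 2D trilateration sketch: the circle equations
// (x - xi)^2 + (y - yi)^2 = di^2 are linearized by subtracting the
// first from the other two, then solved with Cramer's rule.
struct Beacon {
    let x, y: Double       // fixed room position in meters
    let distance: Double   // estimated distance (e.g. CLBeacon.accuracy)
}

func trilaterate(_ b1: Beacon, _ b2: Beacon, _ b3: Beacon) -> (x: Double, y: Double)? {
    let a11 = 2 * (b2.x - b1.x), a12 = 2 * (b2.y - b1.y)
    let c1 = b1.distance * b1.distance - b2.distance * b2.distance -
             b1.x * b1.x + b2.x * b2.x - b1.y * b1.y + b2.y * b2.y
    let a21 = 2 * (b3.x - b1.x), a22 = 2 * (b3.y - b1.y)
    let c2 = b1.distance * b1.distance - b3.distance * b3.distance -
             b1.x * b1.x + b3.x * b3.x - b1.y * b1.y + b3.y * b3.y
    let det = a11 * a22 - a21 * a12
    guard abs(det) > 1e-9 else { return nil }   // beacons are collinear
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)
}

// Example with made-up values: beacons at three room corners
let position = trilaterate(Beacon(x: 0, y: 0, distance: 2.5),
                           Beacon(x: 4, y: 0, distance: 2.9),
                           Beacon(x: 0, y: 5, distance: 3.6))
```

With four beacons you can run this on different triples and average, which dampens the signal instability a little.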
For university, I'm working on a project in which I have to teach a robot (a Nao robot) to play nine men's morris. Unfortunately, I'm fairly new to the field of robotics and need some tips on how to solve a few problems. Currently I'm working on the localization/orientation of the robot, and I'm wondering which localization approach would fit my project best.
A short explanation of the project:
The robot has a fixed starting position and has to walk around on a board of about 3x3 meters (I will post a picture of the board when I reach 10 reputation). There are no obstacles on the field except the game tokens, and the game lines are marked in yellow on the board. For orientation I use the robot's two cameras.
I found some approaches like
Monte Carlo Localization
SLAM (Simultaneous Localization and Mapping)
but these approaches seem quite complex for a beginner like me, and I would really appreciate some ideas for a simpler way to solve this problem. Functionality has a far higher priority for me than performance.
I have only vague knowledge of the nine men's morris game itself, but I will try to give you a simpler idea.
First things first: you need a map of your board. This should be easy in your case, because your environment is static. There are a few techniques for building this map; for your case I would suggest a metric map, specifically an occupancy grid. Assign coordinates to each cell in the grid; this will be helpful for robot navigation.
As you have mentioned, your robot starts from a fixed position. On startup, initialize your robot with this reference location and orientation (with respect to the X-Y axes of the grid; maybe you don't even need the cameras, I am not sure!). By initialization I mean: mark your position on the grid.
Use dead reckoning for localization, and keep updating the position and orientation of your robot as it moves across the board. I would hope your robot gets some feedback from its servos, like the number of rotations and so forth. Do the math and update your robot's position coordinates as it moves into different cells of the grid.
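A minimal Swift sketch of one such update step, assuming the robot reports a travelled distance and a turn angle per step (these feedback values are an assumption; the Nao SDK's actual odometry interface may differ):

```swift
import Foundation

// Sketch of a dead-reckoning update. Each step the robot reports how far
// it walked and how much it turned; both values are assumed inputs here.
struct Pose {
    var x: Double       // meters, grid X
    var y: Double       // meters, grid Y
    var theta: Double   // heading in radians, 0 = along +X
}

func update(_ pose: Pose, distance: Double, turn: Double) -> Pose {
    var p = pose
    p.theta += turn                      // apply the rotation first
    p.x += distance * cos(p.theta)       // then translate along the heading
    p.y += distance * sin(p.theta)
    return p
}

// Example: start at the fixed corner, walk 0.5 m, turn 90°, walk 0.5 m
var pose = Pose(x: 0, y: 0, theta: 0)
pose = update(pose, distance: 0.5, turn: 0)
pose = update(pose, distance: 0.5, turn: .pi / 2)
print(pose)  // x ≈ 0.5, y ≈ 0.5, theta ≈ π/2
```

Note that dead reckoning accumulates error over time, so you may still want to re-anchor occasionally using the yellow board lines seen by the cameras.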
You can use the A* algorithm to find a path for your robot. You need to do the path planning before navigating, and you also have to mark the game tokens on the grid to avoid collisions when planning the path.
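Here is a compact A* sketch on such an occupancy grid; the grid contents in the example are made up, with blocked cells standing for game tokens:

```swift
import Foundation

// A* on an occupancy grid (true = blocked). Manhattan distance is the
// heuristic, and movement is restricted to the four grid neighbours.
struct Point: Hashable { let x, y: Int }

func aStar(grid: [[Bool]], start: Point, goal: Point) -> [Point]? {
    let rows = grid.count, cols = grid[0].count
    func h(_ p: Point) -> Int { abs(p.x - goal.x) + abs(p.y - goal.y) }
    var open: Set<Point> = [start]
    var cameFrom: [Point: Point] = [:]
    var g: [Point: Int] = [start: 0]
    // Pick the open node with the lowest f = g + h
    while let current = open.min(by: { (g[$0]! + h($0)) < (g[$1]! + h($1)) }) {
        if current == goal {                      // reconstruct the path
            var path = [current], p = current
            while let prev = cameFrom[p] { path.append(prev); p = prev }
            return Array(path.reversed())
        }
        open.remove(current)
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let n = Point(x: current.x + dx, y: current.y + dy)
            guard n.x >= 0, n.x < cols, n.y >= 0, n.y < rows,
                  !grid[n.y][n.x] else { continue }   // off-grid or token
            let tentative = g[current]! + 1
            if tentative < g[n, default: Int.max] {   // found a better route
                cameFrom[n] = current
                g[n] = tentative
                open.insert(n)
            }
        }
    }
    return nil  // no path exists
}

// Example: 5x5 grid with one token blocking the center
var grid = Array(repeating: Array(repeating: false, count: 5), count: 5)
grid[2][2] = true
let path = aStar(grid: grid, start: Point(x: 0, y: 0), goal: Point(x: 4, y: 4))
```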
I'm new here. I'm learning Xcode and Swift by myself, and things are going well.
I wanted to ask what would be the best (and most exact) way to measure short distances, say up to 10 meters, inside a building, so I can't use GPS.
I want to get results in millimeters or centimeters.
Thank you for your time guys
Calculating distances based on device movement using the gyroscope, accelerometer, and other internal sensors is impossible.
There are a few reasons why, but see this link for an explanation:
https://www.youtube.com/watch?v=_q_8d0E3tDk&list=UUj_UmpoD8Ph_EcyN_xEXrUQ&spfreload=10
Use iBeacons and the CoreLocation framework. Here is a video from last WWDC that touches on this subject:
Taking CoreLocation Indoors
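A minimal Swift sketch of ranging beacons with CoreLocation (iOS 13+ API; the UUID is a placeholder). Note that a CLBeacon's accuracy value is a rough estimate in meters, nowhere near millimeter or centimeter precision:

```swift
import CoreLocation

// Sketch: range nearby iBeacons and print a rough distance estimate.
// The UUID below is a placeholder; use your own beacons' UUID.
class BeaconRanger: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()
    let constraint = CLBeaconIdentityConstraint(
        uuid: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!)

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // needs Info.plist usage string
        manager.startRangingBeacons(satisfying: constraint)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        for beacon in beacons {
            // accuracy is a coarse estimate in meters, not a precise distance
            print("proximity zone: \(beacon.proximity.rawValue), ~\(beacon.accuracy) m")
        }
    }
}
```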
Hi, I am developing a location-based iOS application with a feature to find the user's friends in a radar view. I am getting the friends' latitudes and longitudes from the backend.
I have tried many things to show them in a radar view that behaves like a compass (meaning that when I rotate the device, the friends' spots in the radar rotate as well).
The client needs this feature to work exactly like the app LOVOO.
How can I do this ?
Please help me.
I've created a similar LOVOO-like radar view for iOS. It's available on GitHub:
https://github.com/abm-adnan/Radar
If you are not worried about the curvature of the earth's surface, then this turns into something really easy.
You change all of the latitudes and longitudes into a polar-coordinate-based system (angle and magnitude from your own latitude and longitude). Then you get the device's compass bearing and adjust all of the friends' angles by the device's angle. Then plot everything out (translating your polar coordinates back to Cartesian), and update whenever the device's compass bearing changes (or a friend's location updates).
It's a lot of math, but the process is pretty straightforward, and you should be able to find all of the conversion formulas with a single Google search.
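A minimal Swift sketch of that projection, assuming a flat earth over radar-scale distances (the radar radius and range parameters are made-up display values):

```swift
import CoreLocation
import CoreGraphics

// Sketch: project a friend's coordinate onto a radar view centered on me,
// rotated so that "up" matches the device's compass heading.
func radarPoint(me: CLLocationCoordinate2D,
                friend: CLLocationCoordinate2D,
                deviceHeading: CLLocationDirection,  // degrees clockwise from north
                radarRadius: Double = 100,           // made-up view radius, points
                rangeMeters: Double = 1000) -> CGPoint {
    // Approximate meters per degree near `me` (equirectangular projection)
    let dLat = (friend.latitude - me.latitude) * 111_320
    let dLon = (friend.longitude - me.longitude) * 111_320 * cos(me.latitude * .pi / 180)
    // Polar coordinates relative to me: distance and bearing from north
    let distance = min(sqrt(dLat * dLat + dLon * dLon), rangeMeters)
    let bearing = atan2(dLon, dLat)                  // radians, clockwise from north
    // Rotate by the device heading so "up" is where the phone points
    let angle = bearing - deviceHeading * .pi / 180
    let r = distance / rangeMeters * radarRadius
    // Back to Cartesian screen coordinates: x grows right, y grows down
    return CGPoint(x: r * sin(angle), y: -r * cos(angle))
}
```

Recompute all the points in the heading-update and location-update callbacks and the radar will rotate like a compass.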
In case you also need the live camera image with augmentations on top of it, you can use the Wikitude SDK, which includes a customizable radar widget (see the example here). You can use it without the camera image as well, but the library could be too much for your case.