Hi, I am developing a location-based iOS application that includes a feature for finding the user's friends in a radar view. I am getting the friends' latitudes and longitudes from the backend.
I have tried many things to show them in a radar view that behaves like a compass (meaning that when I rotate the device, the friends' spots in the radar rotate as well).
The client needs this feature to work exactly like the app LOVOO.
How can I do this?
Please help me.
I've created a similar LOVOO-like radar view for iOS. It's available on GitHub:
https://github.com/abm-adnan/Radar
If you are not worried about the curvature of the Earth's surface, then this turns into something really easy:
1) Convert all of the latitudes and longitudes into polar coordinates (angle and distance from your own latitude and longitude).
2) Get the device's compass bearing.
3) Adjust all of the friends' angles by your device's angle.
4) Plot everything on the radar view (translating the polar coordinates back to Cartesian).
5) Update whenever the device's compass bearing changes (or a friend's location updates).
It's a fair amount of math, but the process is pretty straightforward, and you should be able to find all of the conversion formulas with a single Google search.
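For illustration, here is a minimal Swift sketch of that pipeline under the flat-Earth approximation; the function name, the meters-per-degree constant, and the parameters are illustrative, not from any particular SDK:

```swift
import CoreLocation
import UIKit

// Maps a friend's coordinate to a point on a circular radar view.
func radarPoint(me: CLLocationCoordinate2D,
                friend: CLLocationCoordinate2D,
                deviceHeadingDegrees: Double,
                radarRadius: Double,
                maxRangeMeters: Double) -> CGPoint {
    // Approximate meters per degree: ~111,320 for latitude;
    // longitude shrinks by cos(latitude).
    let dy = (friend.latitude - me.latitude) * 111_320
    let dx = (friend.longitude - me.longitude) * 111_320 * cos(me.latitude * .pi / 180)

    // Cartesian -> polar: bearing measured clockwise from north, like a compass.
    let distance = sqrt(dx * dx + dy * dy)
    let bearing = atan2(dx, dy)

    // Rotate by the device heading so "up" on the radar is where the phone points.
    let angle = bearing - deviceHeadingDegrees * .pi / 180

    // Polar -> Cartesian on the radar, clamping friends beyond range to the edge.
    let r = min(distance / maxRangeMeters, 1.0) * radarRadius
    return CGPoint(x: r * sin(angle), y: -r * cos(angle))  // y flipped for UIKit
}
```

You would call this for every friend each time CLLocationManager delivers a new heading, which gives the compass-like rotation for free.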
In case you also require the live camera image with augmentations on top of it, you can use the Wikitude SDK, which includes a customizable radar widget (see the example here). You can also use it without the camera image; however, the library could be overkill for your case.
I want to build an app that locates and tracks people on an iPhone using the gyroscope and accelerometer. No GPS is needed here.
How should I approach this problem?
Unfortunately, it's not feasible to track device position based on accelerometer and gyroscope data alone.
To calculate position from accelerometer data, double integration must be applied. Integration amplifies noise and turns it into drift, so even a small measurement error produces a huge position error over time. A similar problem appears for the gyroscope as well.
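For illustration, here is a minimal, sensor-free Swift sketch of the effect: a tiny constant bias (well within spec for phone IMUs), double-integrated, makes a stationary device appear to wander off within a minute. All values are illustrative:

```swift
import Foundation

let dt = 0.01                // 100 Hz sample rate
let bias = 0.005             // constant accelerometer error in m/s^2
var velocity = 0.0
var position = 0.0

for step in 1...6000 {       // 60 seconds of samples
    let noise = Double.random(in: -0.02...0.02)  // white measurement noise
    let measuredAccel = 0.0 + bias + noise       // true acceleration is zero
    velocity += measuredAccel * dt               // first integration
    position += velocity * dt                    // second integration
    if step % 1500 == 0 {
        print(String(format: "t = %2.0f s, apparent drift = %5.2f m",
                     Double(step) * dt, position))
    }
}
// After 60 s the estimate has drifted by roughly 0.5 * bias * t^2 ≈ 9 m,
// even though the device never moved.
```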
You can find more details here:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k&t=23m20s
You don't!
The values retrieved from the gyroscope and accelerometer are "relative" to the device. They have no absolute meaning, which is what you would need to retrieve any kind of location.
You can theoretically measure/calculate the path a device has taken, but you do not know whether the user took that path in Germany, China, or the USA. You know the user went right, then left, then 200 m straight, but that is of no help if you do not know where they started.
That being said, if you do have the initial position, you can theoretically calculate the new position based on the measured values. But that calculation is far too error-prone and far too inexact. If you measure the values over the course of a few minutes or even hours, you will probably get an estimate that is off by many meters or even kilometers.
I'm stuck on a problem and need some help or guidance toward a possible solution.
Basically in my application there will be a map with several zones.
The user can select any of these areas, and at that moment the area is filled with a color.
Imagine a map like this one; I need to be able to change the color of only one country.
Something like what happens in coloring-book apps (https://itunes.apple.com/pt/app/colorfly-best-coloring-book/id1020187921?mt=8), or the Paint Bucket command in Photoshop.
Any idea how to achieve something like this on iOS?
Thanks in advance
The paint bucket technique you're looking for is a family of graphics algorithms usually called "flood fill". There are different approaches to the implementation depending on the circumstances and performance needs. (There is more detail in the Wikipedia article on flood fill.)
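For illustration, here is a minimal queue-based flood-fill sketch in Swift over a raw RGBA pixel buffer; `pixels`, `width`, and `height` are assumptions (you would obtain them by drawing your UIImage into a CGContext and reading back its data):

```swift
struct RGBA: Equatable {
    var r: UInt8, g: UInt8, b: UInt8, a: UInt8
}

func floodFill(pixels: inout [RGBA], width: Int, height: Int,
               startX: Int, startY: Int, fill: RGBA) {
    let target = pixels[startY * width + startX]
    guard target != fill else { return }   // nothing to do; avoids an infinite loop

    var stack = [(startX, startY)]
    while let (x, y) = stack.popLast() {
        guard x >= 0, x < width, y >= 0, y < height else { continue }
        let index = y * width + x
        guard pixels[index] == target else { continue }  // stop at borders/other colors
        pixels[index] = fill
        stack.append((x + 1, y))
        stack.append((x - 1, y))
        stack.append((x, y + 1))
        stack.append((x, y - 1))
    }
}
```

In practice you would compare colors with a tolerance rather than exact equality, since anti-aliased borders rarely match the target color exactly.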
I have no experience with it, but here is a library on GitHub that purports to implement this for iOS given a UIImage object: https://github.com/Chintan-Dave/UIImageScanlineFloodfill
Re: your question about doing this without user touch: yes, you'll want to keep a map from countries to (x, y) points so you can re-flood countries when required. That said, the intricacies of the country borders might make an algorithmic fill inexact without more careful normalization of the original source. If your overall map only has a small set of possible states, there are other ways of achieving this goal, like keeping a complete set of possible images (created in e.g. Photoshop) and switching them out, or keeping a set of per-country "overlay" images that you swap in as needed. (But if the flood fill is accurate on that source image, and performant for your needs, then great.)
I want to build a simple iOS application with iBeacons.
I have four iBeacons, and my goal is to calculate my position in the room by trilateration. I want to show the room on the display, set the fixed positions of the iBeacons, and then calculate the position and show it on the display.
My problem is, I don't know how to start.
Even though iBeacons are relatively simple to use, trilateration with them is far from simple. The standard is meant for simply determining your current proximity zone relative to the nearest beacon. The zones are: immediate (0-0.5 m), near (0.5-2 m), and far (2-20 m). Due to the instability of the signal, it is difficult to obtain more precise location data.
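If you do want to attempt it anyway, here is a minimal Swift sketch of the 2D trilateration math itself, assuming you already have distance estimates (which would come from heavily smoothed RSSI readings, the noisy part described above); the struct and function names are illustrative:

```swift
import Foundation

struct Beacon { let x, y: Double }

func trilaterate(b1: Beacon, d1: Double,
                 b2: Beacon, d2: Double,
                 b3: Beacon, d3: Double) -> (x: Double, y: Double)? {
    // Subtracting the three circle equations pairwise gives a 2x2 linear system.
    let a11 = 2 * (b2.x - b1.x), a12 = 2 * (b2.y - b1.y)
    let r1 = d1 * d1 - d2 * d2 - b1.x * b1.x + b2.x * b2.x - b1.y * b1.y + b2.y * b2.y
    let a21 = 2 * (b3.x - b2.x), a22 = 2 * (b3.y - b2.y)
    let r2 = d2 * d2 - d3 * d3 - b2.x * b2.x + b3.x * b3.x - b2.y * b2.y + b3.y * b3.y

    let det = a11 * a22 - a21 * a12
    guard abs(det) > 1e-9 else { return nil }   // beacons must not be collinear
    return ((r1 * a22 - r2 * a12) / det, (a11 * r2 - a21 * r1) / det)
}

// Example: three beacons in the corners of a 3 m x 3 m room.
if let p = trilaterate(b1: Beacon(x: 0, y: 0), d1: 2.12,
                       b2: Beacon(x: 3, y: 0), d2: 2.12,
                       b3: Beacon(x: 0, y: 3), d3: 2.12) {
    print("Estimated position: \(p)")   // ≈ (1.5, 1.5)
}
```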
That being said, there are a couple of companies I know of that have worked on this problem: Estimote (with its Estimote Indoor SDK) and Steerpath. Looking at those two solutions could help you get started with your project.
For university, I'm working on a project in which I have to teach a robot (a Nao robot) to play nine men's morris. Unfortunately, I'm fairly new to the field of robotics and need some tips on how to solve a few problems. Currently I'm working on the localization/orientation of the robot, and I'm wondering which approach to localization would fit my project best.
A short explanation of the project:
The robot has a fixed starting position and has to walk around on a board which has a size of about 3x3 meters (I will post a picture of the board when I reach 10 reputation). There are no obstacles on the field except the game tokens, and the game lines are marked yellow on the board. For orientation I use the robot's two cameras.
I found some approaches like
Monte Carlo Localization
SLAM (Simultaneous Localization and Mapping)
but these approaches seem quite complex for a beginner like me, and I would really appreciate it if someone has ideas about a simpler way to solve this problem. Functionality is a far higher priority for me than performance.
I have only vague knowledge of the nine men's morris game itself, but I will try to give you my simpler idea.
First things first: you need a map of your board. This should be easy in your case, because your environment is static. There are a few techniques for building such a map; for your case I would suggest a metric map, namely an occupancy grid. Assign coordinates to each cell in the grid; this will be helpful for robot navigation.
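A minimal sketch of such a grid in Swift, with illustrative names and an arbitrarily chosen 10 cm cell size:

```swift
struct OccupancyGrid {
    let cols: Int, rows: Int
    let resolution: Double            // cell size in meters
    private var occupied: [Bool]

    init(widthMeters: Double, heightMeters: Double, resolution: Double) {
        self.resolution = resolution
        self.cols = Int(widthMeters / resolution)
        self.rows = Int(heightMeters / resolution)
        self.occupied = Array(repeating: false, count: cols * rows)
    }

    // Convert a world coordinate (meters) to a grid cell.
    func cell(x: Double, y: Double) -> (col: Int, row: Int) {
        (Int(x / resolution), Int(y / resolution))
    }

    subscript(col: Int, row: Int) -> Bool {
        get { occupied[row * cols + col] }
        set { occupied[row * cols + col] = newValue }
    }
}

// A 3 m x 3 m board at 10 cm resolution becomes a 30 x 30 grid.
var board = OccupancyGrid(widthMeters: 3, heightMeters: 3, resolution: 0.1)
let tokenCell = board.cell(x: 1.25, y: 0.8)
board[tokenCell.col, tokenCell.row] = true    // mark a game token as an obstacle
```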
As you have mentioned, your robot starts from a fixed position. On startup, initialize your robot with this reference location and orientation (with respect to the X-Y axes of the grid; maybe you don't even need the cameras, I am not sure!). By initialization I mean: mark your position on the grid.
Use dead reckoning for localization, and keep updating the position and orientation of your robot as it moves across the board. I would hope that your robot gets some feedback from its servos, such as the number of rotations and so forth. Do that math and update the robot's position coordinates as it moves into a different cell of the grid.
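A minimal dead-reckoning sketch; the odometry values are assumptions (on a Nao you would derive distance and heading change from joint feedback or the walk API):

```swift
import Foundation

struct Pose {
    var x: Double, y: Double      // meters, in grid/world coordinates
    var theta: Double             // heading in radians, w.r.t. the grid's X axis
}

// Integrate one odometry step (distance travelled, heading change) into the pose.
func updatePose(_ pose: inout Pose, distance: Double, headingChange: Double) {
    pose.theta += headingChange
    pose.x += distance * cos(pose.theta)
    pose.y += distance * sin(pose.theta)
}

// Start at the fixed, known starting position.
var pose = Pose(x: 0.2, y: 0.2, theta: 0)
updatePose(&pose, distance: 0.5, headingChange: 0)        // walk 0.5 m straight
updatePose(&pose, distance: 0.3, headingChange: .pi / 2)  // turn left, walk 0.3 m
print("x: \(pose.x), y: \(pose.y), heading: \(pose.theta)")
```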
You can use the A* algorithm to find a path for your robot. You need to do the path planning before navigating. You also have to mark the game tokens on the grid, to avoid collisions when planning the path.
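For illustration, here is a compact A* sketch over a boolean obstacle grid (4-connected cells, Manhattan-distance heuristic, uniform step cost); all names are illustrative. A linear scan of the open set is plenty for a board this small; use a priority queue for larger maps.

```swift
struct Cell: Hashable { let col: Int, row: Int }

func aStar(blocked: [[Bool]], start: Cell, goal: Cell) -> [Cell]? {
    let rows = blocked.count, cols = blocked[0].count
    func h(_ c: Cell) -> Int { abs(c.col - goal.col) + abs(c.row - goal.row) }

    var open: Set<Cell> = [start]
    var cameFrom: [Cell: Cell] = [:]
    var g: [Cell: Int] = [start: 0]

    while !open.isEmpty {
        // Pick the open cell with the lowest f = g + h (linear scan).
        let current = open.min { g[$0]! + h($0) < g[$1]! + h($1) }!
        if current == goal {                       // reconstruct the path
            var path = [current], c = current
            while let prev = cameFrom[c] { path.append(prev); c = prev }
            return Array(path.reversed())
        }
        open.remove(current)

        for (dc, dr) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let n = Cell(col: current.col + dc, row: current.row + dr)
            guard n.col >= 0, n.col < cols, n.row >= 0, n.row < rows,
                  !blocked[n.row][n.col] else { continue }   // skip tokens/edges
            let tentative = g[current]! + 1
            if tentative < g[n] ?? Int.max {
                cameFrom[n] = current
                g[n] = tentative
                open.insert(n)
            }
        }
    }
    return nil   // goal unreachable
}
```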
I am trying to develop a virtual fitting room app with the Microsoft Kinect SDK. I want to show a dress on the tracked skeleton.
Can anyone tell me which of the following approaches is the better one?
1) Draw the whole dress on the user's skeleton
2) Draw a texture on each joint of the skeleton
I tried the first option, but I also want to show or adjust the dress when the user turns to the right or left side.
Can anyone help with displaying the clothing on the user's skeleton when they turn as well? If the user turns right or left, the clothing should stay aligned. Is this possible with a normal JPEG image, or do I have to create some other special type of image (perhaps some kind of 3D asset, I'm not sure)?
Regards,
Jayakumar Natarajan
To do what you want, you need to render a skinned, skeletally animated 3D model that can attach different parts corresponding to clothing items, similar to what the Xbox Live avatar does.
For flexible clothing that needs to billow and react to movement, you will also have to use some sort of cloth physics to move the fabric around properly.
It is impossible to explain all the necessary concepts here. You will probably have to work your way from displaying a skinned model and animating it based on the Kinect skeleton, to attaching different meshes based on the clothing outline (and possibly changing the material to enable color/material variations), to adding elements that can flex and behave realistically.
Using XNA is definitely the best answer. There's a very good example in the Microsoft Kinect Developer Toolkit named "Avateering-XNA". Have a look at it.
Also, if you need a skeleton to skin 3D-modeled clothes, you can try the skeleton that comes with the model (dude.FBX) in that sample application. You can download the Kinect Toolkit here: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx