Any way at all of improving iOS current location? [closed] - ios

I am searching for ultra-accurate GPS for iOS development. CoreLocation's best is still about 10 metres off most of the time, and it jumps around. Within the commercial bounds of iOS development, is this definitely the best accuracy we have? Are there any workarounds? I'm interested to know how close to perfection I can get it.
I know this is not a normal code question, but it's relevant and will help many others too.

Unfortunately, 10 meters is about the ideal accuracy for an iPhone; in practice it may be even worse. If you are developing some kind of fitness application, take a look at the Kalman filter. It lets you get a pretty accurate track out of the iPhone's data.
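For illustration, here is a minimal sketch of one way to apply that idea in Swift: a simple one-dimensional Kalman-style filter run on each coordinate, weighted by the reported horizontalAccuracy. The class name and the process-noise value are my own assumptions for the example, not anything CoreLocation provides.

```swift
import CoreLocation

// A minimal 1-D Kalman-style filter applied independently to latitude and longitude.
// The process noise (metres per second) is a tuning guess, not an Apple-provided value.
final class SimpleLocationKalmanFilter {
    private let processNoise: Double            // how fast we assume the user can move
    private var variance: Double = -1           // negative means "not initialised yet"
    private var timestamp = Date.distantPast
    private var latitude = 0.0
    private var longitude = 0.0

    init(processNoise: Double = 3.0) {
        self.processNoise = processNoise
    }

    /// Feed each CLLocation from the delegate; returns a smoothed coordinate.
    func process(_ location: CLLocation) -> CLLocationCoordinate2D {
        // horizontalAccuracy can be negative (invalid), so clamp it to at least 1 metre.
        let accuracy = max(location.horizontalAccuracy, 1)
        let measurementVariance = accuracy * accuracy

        if variance < 0 {
            // First fix: take it as-is.
            latitude = location.coordinate.latitude
            longitude = location.coordinate.longitude
            variance = measurementVariance
        } else {
            // Grow the uncertainty with elapsed time, then blend in the new fix.
            let dt = location.timestamp.timeIntervalSince(timestamp)
            if dt > 0 {
                variance += dt * processNoise * processNoise
            }
            let gain = variance / (variance + measurementVariance)
            latitude += gain * (location.coordinate.latitude - latitude)
            longitude += gain * (location.coordinate.longitude - longitude)
            variance = (1 - gain) * variance
        }
        timestamp = location.timestamp
        return CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
    }
}
```

Run every fix you receive in locationManager(_:didUpdateLocations:) through process(_:); the returned track jumps around far less than the raw readings, at the cost of a slight lag behind sudden movements.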

Related

Sound Localization with a single microphone [closed]

I am trying to determine the direction of an audio signal using the microphone on an iPhone. Is there any way to do this? As far as I have read and attempted, it isn't possible. I have made extensive models with Keras, and even then determining the location of the sound is shaky at best due to the number of variables. So, not including any ML aspects, is there a library or method to determine audio direction from an iOS microphone?
No, in general it isn't possible (even with machine learning): you need at least two capture points (and excellent timing) to determine a direction. You might be able to do something with multiple iPhones, but that would require very tight timing and some calibration to determine where the phones are in relation to each other, and I doubt such a library already exists for the iPhone (existing libraries could be ported or adapted, though).
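To make the "at least two points" part concrete, here is a tiny illustrative calculation (the microphone spacing, sample rate and measured delay are made-up numbers): with two capture points a known distance apart, the time difference of arrival pins down the bearing, which is exactly the information a single microphone cannot give you.

```swift
import Foundation

// Bearing from the time difference of arrival (TDOA) between two microphones.
// All the numbers below are illustrative assumptions.
let speedOfSound = 343.0          // metres per second, at room temperature
let micSpacing = 0.10             // metres between the two microphones
let sampleRate = 48_000.0         // samples per second
let delayInSamples = 8.0          // measured offset between the two channels

// Path-length difference implied by the delay, clamped to the physically possible range.
let pathDifference = min(max(delayInSamples / sampleRate * speedOfSound, -micSpacing), micSpacing)
let bearing = asin(pathDifference / micSpacing)
print("Source bearing ≈ \(bearing * 180 / .pi)° off the broadside axis")   // ≈ 35°
```

With a single microphone there is no second channel to measure that delay against, which is why the direction is not recoverable no matter how the one signal is processed.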

SWIFT ARKIT detect the hand and nails, and place the object on the nails? [closed]

I would like to know how to detect a person's hand and nails, and then use ARKit to place an object on the nails. Frankly, I have been searching Google for information about this for several days and I haven't found anything that could help me. I would really appreciate it if you could help me! Thanks a lot in advance!
You may have to create a machine learning model using Apple's CreateML, trained on images of fingernails and hands, so that your app can recognize them, and then use CoreML to bring that recognition into ARKit, where you can place the object on the nails. I understand that can be a lot to do, so for a simpler start Apple has native image recognition functions that you can begin experimenting with. I'm not sure that solves your exact problem of recognizing fingernails and hands, but at least it's a start.
Check below
https://developer.apple.com/documentation/arkit/tracking_and_altering_images
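As a starting point that avoids training a custom model, here is a minimal sketch that runs Vision's built-in hand-pose request on an ARKit camera frame to locate a fingertip. The joint choice, the confidence cut-off and the image orientation are assumptions for the example, and mapping the 2D point onto the nail in 3D (for instance via a raycast) is left out.

```swift
import ARKit
import Vision

// Finds the index fingertip in an ARKit camera frame using Vision's hand-pose request.
final class FingertipDetector {
    private let request = VNDetectHumanHandPoseRequest()

    /// Returns the index fingertip in normalised image coordinates (origin top-left), if found.
    func indexFingertip(in frame: ARFrame) -> CGPoint? {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right,   // portrait assumption
                                            options: [:])
        try? handler.perform([request])
        guard let observation = request.results?.first,
              let tip = try? observation.recognizedPoint(.indexTip),
              tip.confidence > 0.5 else { return nil }
        // Vision returns normalised points with the origin at the bottom-left; flip Y.
        return CGPoint(x: tip.location.x, y: 1 - tip.location.y)
    }
}
```

From there you would still need a custom model (or ARKit's image tracking from the link above) if you want to distinguish the nail itself from the rest of the finger.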

Should webgl be used for simple websites? [closed]

Should WebGL be used for simple websites?
I'm not sure whether it is wise to use WebGL for a simple website just to give it a better look. Will this work on all devices?
WebGL is widely supported today: https://www.caniuse.com/#feat=webgl
Whether you "should" use it or not is a broad question. Remember that the aim is to improve the user experience. People are forgiving when they play video games, but they don't want to hear their computer fans spin, watch their battery drain quickly, or feel their device getting hot when all they wanted was to read a cooking recipe. Try to be user friendly.
You may, for instance, want to cap the framerate and/or reduce the resolution on high-definition devices, and pause the animation loop when the window loses focus (which is not the default behaviour of requestAnimationFrame) or when nothing on the screen is changing (if the WebGL element is interactive, for example). Also, try to write efficient algorithms: it's easy to start doing things in the fragment shader or on the CPU when they should be done in the vertex shader. There are many ways to accomplish the same thing, and they don't put the same stress on the computer.

Robotics Project based on slam algorithm [closed]

I am a complete beginner in robotics. I want to make a robotics project based on SLAM algorithms. I know many algorithms and I am confident I can implement them in any language, but I don't have any background in image processing or hardware. So, can anyone point me to a tutorial on SLAM-based robotics projects (including how the hardware is organized and how the image processing is done for that project), so that after going through it I can build a SLAM-based robotics project on my own?
In addition, if anyone can point me to a video lecture series on the topic, that would be very helpful.
Thanks in advance.
I tried to do something similar last year. I created two systems. The first used a camera and a laser to detect objects and determine their location relative to the system itself. The second was a little robot with tracks (wheels would be better) that used dead reckoning to keep track of its own location relative to its starting location. The techniques worked really well, but unfortunately I did not have the time to combine the two systems. I can, however, provide you with some documentation that was incredibly useful for me at that time.
These tutorials provide information on both the hardware and the software.
Optical Triangulation (detection of objects with a camera and laser) :
http://www.seattlerobotics.org/encoder/200110/vision.htm
Dead Reckoning (a technique to keep track of one's own location) :
http://www.seattlerobotics.org/encoder/200010/dead_reckoning_article.html
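For a feel of what the dead-reckoning part boils down to, here is a minimal sketch (the wheel geometry and encoder numbers are illustrative, not taken from the article): wheel-encoder distances from a differential-drive robot are integrated into an (x, y, heading) estimate.

```swift
import Foundation

// Dead reckoning for a differential-drive robot: accumulate encoder ticks into a pose.
struct Pose {
    var x = 0.0, y = 0.0, heading = 0.0         // metres, metres, radians
}

let wheelBase = 0.20                             // distance between the wheels, metres
let metresPerTick = 0.002                        // wheel circumference / encoder resolution

func updatePose(_ pose: inout Pose, leftTicks: Int, rightTicks: Int) {
    let dLeft = Double(leftTicks) * metresPerTick
    let dRight = Double(rightTicks) * metresPerTick
    let dCentre = (dLeft + dRight) / 2           // distance travelled by the robot's centre
    let dTheta = (dRight - dLeft) / wheelBase    // change in heading

    // Advance along the average heading over this interval.
    pose.x += dCentre * cos(pose.heading + dTheta / 2)
    pose.y += dCentre * sin(pose.heading + dTheta / 2)
    pose.heading += dTheta
}

var pose = Pose()
updatePose(&pose, leftTicks: 95, rightTicks: 105)   // right wheel travelled slightly further
print(pose)                                          // moved ~0.2 m forward, turned ~0.1 rad
```

The usual caveat applies: dead-reckoning errors accumulate over time, which is why a real SLAM system combines this with the camera or laser measurements.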

Detect a knock/clap to iPhone/iPad [closed]

I am planning to develop a gyroscope-based project like TipSkip: handle a knock on the device from behind, or detect a clap. I searched Google but I didn't find anything except the Core Motion guide and the event handling guide.
Thanks for any help.
Detecting a clap requires audio recognition, i.e. frequency analysis. There is no better source than Apple's own aurioTouch example for FFT. There is fairly good material about FFT and aurioTouch online as well, like this.
As for the knocks, the accelerometer is the way to go; you just need value recognition for the kind of movement your knocks generate.
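To illustrate the accelerometer side, here is a minimal sketch with Core Motion that treats a sharp spike in the user-acceleration magnitude as a knock. The 1.5 g threshold and the 100 Hz update rate are tuning assumptions you would adjust on a real device.

```swift
import CoreMotion

// Treats a short, sharp spike in user acceleration as a knock on the device.
final class KnockDetector {
    private let motionManager = CMMotionManager()
    private let threshold = 1.5                 // in g; tune this on a real device

    func start(onKnock: @escaping () -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let a = motion?.userAcceleration else { return }
            // userAcceleration already has gravity removed, so a resting device reads ~0.
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            if magnitude > self.threshold {
                onKnock()
            }
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```

In practice you would also debounce (ignore further spikes for a fraction of a second) so a single knock does not fire the callback several times.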
