Track location with camera - iOS (Objective-C)

I want to display real-time video on the iPhone screen.
I know the GPS coordinates of my house.
I want to display a vertical line on the screen that shows me the direction of my house.
If I move my phone, turning left or right, I want the vertical line to move left and right to show where my house is.
If my house is behind me, I do not want to see the vertical line.
Do you think this kind of application is easy to make?

For this you need to use augmented reality. There is plenty of sample code available for it.
Please check the links below:
Location Based AR
Augmented Reality Toolkit
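Even without an AR library, the geometry behind that vertical line is straightforward: compute the bearing from your position to the house, compare it with the device's compass heading, and map the signed difference onto the screen, hiding the line when the house falls outside the camera's field of view. A minimal sketch of that math (the field-of-view and screen-width values are assumed examples, not real device constants):

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def line_offset(device_heading, target_bearing, fov_deg=60.0, screen_width=320.0):
    """x position of the vertical line, or None when the house is outside
    the camera's horizontal field of view (e.g. behind you)."""
    delta = (target_bearing - device_heading + 180) % 360 - 180  # signed angle in [-180, 180)
    if abs(delta) > fov_deg / 2:
        return None  # hide the line
    return screen_width / 2 + (delta / fov_deg) * screen_width
```

In an app, `device_heading` would come from the compass (CLLocationManager heading updates) and the two coordinate pairs from Core Location and your stored house location.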

Related

How to find how much the mobile screen went from the eye's straight line

I have a requirement to find out whether the user is looking down at or up at the iPhone screen. For example, if the user has his iPhone on a desk, he needs to look down at the screen; if the same user is taking a photo over his head, how do I detect that?
Are there any sensors we need to use?
There is no sensor in the iPhone that can directly recognise where your eyes are looking, but you can use the front camera and machine learning to achieve this. For more, refer to recognize gaze direction.
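As a cheaper complement to the camera approach: for the two situations described (phone flat on a desk vs. held overhead), the gravity vector from Core Motion (`CMDeviceMotion.gravity`) already distinguishes the device's pose, without knowing anything about the eyes. A rough sketch of that classification; the 0.8 threshold is an assumed example value to tune:

```python
def device_pose(gx, gy, gz):
    """Classify device pose from a normalized gravity vector in the device frame
    (Core Motion convention: z is about -1 when the device lies flat screen-up,
    about +1 when the screen faces down, e.g. held overhead)."""
    if gz < -0.8:
        return "flat, screen up"    # on a desk; the user looks down at it
    if gz > 0.8:
        return "flat, screen down"  # over the head; the user looks up at it
    return "upright"
```

On-device you would feed this the `gravity` property of a `CMMotionManager` device-motion update.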

Create track motion app in iOS

I just want some hints on how I can create an iOS app which can do the following.
When the user is at point X, the user will tap a start button, so the app will start a timer and track the movement. The user will be on a horse and needs to ride in a full circle. When the user comes back to point X, the app should draw the route taken by the user on the horse.
Aim is to ride completely in a circle. I want to make this app to
practice and see how close to a circle I ride.
I looked at the GPS locator, but I am not sure whether it will give me accurate enough results, because the circle I ride can be as small as 60 m or less in radius.
I don't know if iOS GPS can be this accurate. I read an article on motion sensors and how to track rotation and acceleration,
but I am not quite sure how to use that to my advantage.
I just need some tips like which API to use etc.
Using the Standard Positioning Service one can achieve 15 meter horizontal accuracy 95% of the time. This means that 95% of the time, the coordinates you read from your GPS receiver display will be within 15 meters of your true position on the earth.
More information: click here
To integrate a map and draw a path from the user's current position, Google Maps is a good option on iOS.
For a small range like this, an indoor positioning system can give more accurate results.
For more information about indoor positioning systems (IPS), click here:
http://developer.estimote.com/
and the GitHub iOS demo: get code
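Whichever positioning source you use, "how close to a circle did I ride" is a post-processing question: project the recorded fixes into local meters, fit a circle by least squares, and report the radial deviation. A sketch of that step using the Kasa algebraic fit (function names are illustrative, not from any iOS API):

```python
import numpy as np

EARTH_RADIUS = 6_371_000.0  # meters

def to_local_meters(lats, lons):
    """Equirectangular projection around the first fix: accurate enough
    for a track only tens of meters across."""
    lat0, lon0 = np.radians(lats[0]), np.radians(lons[0])
    x = EARTH_RADIUS * (np.radians(lons) - lon0) * np.cos(lat0)
    y = EARTH_RADIUS * (np.radians(lats) - lat0)
    return x, y

def fit_circle(xs, ys):
    """Kasa least-squares fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c,
    where (a, b) is the center and r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

def ride_report(xs, ys):
    """Fitted radius and RMS radial deviation in meters: how circular the ride was."""
    a, b, r = fit_circle(xs, ys)
    dev = np.hypot(xs - a, ys - b) - r
    return float(r), float(np.sqrt(np.mean(dev ** 2)))
```

Note that with 15 m horizontal GPS accuracy, the reported deviation on a 60 m circle will be dominated by GPS noise, which supports the answer's suggestion to look at higher-accuracy positioning.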

Image tracking - tracking a screen with a camera

I want to track the relative position of a camera aimed at a computer screen.
I can’t control what is displayed on the computer screen but I can receive screen dumps whenever something changes on the screen. Those screen dumps can hopefully be used to find the screen when analyzing the video from the camera.
I see many videos on YouTube for face, logo, or single-colored-object tracking using OpenCV, but I'm unsure those methods would work for finding and tracking a more detailed image like a screen dump.
Maybe Template Matching is the way to go? But I need to find the screen even at an angle.
Basically I don’t know where to begin and need help from people with experience in this field to find the best way for achieving what I want.
Thanks
Using feature matching should do the trick (SIFT/SURF/ORB/...).
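In that pipeline you would match keypoints (e.g. ORB) between the latest screen dump and each camera frame, estimate a homography from the matches (e.g. with OpenCV's `findHomography`), then map the screen dump's corners through it to outline the screen in the frame, even when it is viewed at an angle. Just that last projection step, sketched with NumPy:

```python
import numpy as np

def project_corners(H, width, height):
    """Map the four corners of a width x height screen dump through a 3x3
    homography H into camera-frame pixel coordinates. The resulting
    quadrilateral outlines the tracked screen in the video frame."""
    corners = np.array([[0.0,   0.0,    1.0],
                        [width, 0.0,    1.0],
                        [width, height, 1.0],
                        [0.0,   height, 1.0]]).T  # homogeneous coordinates
    p = H @ corners
    return (p[:2] / p[2]).T                       # perspective divide
```

An off-angle view simply produces a non-affine H, so the outline becomes a general quadrilateral rather than a rectangle.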

camera overlay change with bearing and elevation

Folks,
I am trying to build a utility like the one shown in the picture below. Basically, the camera display window covers part of the device's screen, and a list of points connected by a curve or straight line is presented over the camera view as an overlay. I understand this can be drawn using Quartz, but that is less than half of my problem.
The real issue is that the overlay should present different points as the bearing and elevation changes.
For example:
if the bearing has to change +5 degrees and elevation +2 degrees, then PT1 will be next to the right edge of the camera view, PT2 will also move to the right and PT3 will be visible.
Another movement that changes the bearing +10 degrees would make PT1 not visible, PT2 at the right, PT3 middle and PT4 on the left edge of the camera view.
My questions after the picture:
Is it possible to have a view that is substantially larger than the size of the camera view (as shown below) and use some methods (I need to research these) to move the view when bearing/elevation changes? Is it recommended performance wise?
Is Quartz the way to go here? What else do I need (other than, of course, AVFoundation for the camera and Core Location/Core Motion)? Since my application is iOS 7 only, I can use any new methods/APIs exclusive to iOS 7.
Aside from Ray Wenderlich's tutorial on the augmented reality game, are there any tutorials that you know of that could help me with this endeavor?
Have a look at the following; each article or link has different key pieces required to build your final product. You will eventually be using a combination of geolocation, the compass, and the iPhone's incoming gyroscope data.
Reading all the references and implementing them one by one in separate projects will give you a solid start on combining it all to create your application. First, though, you need a solid understanding of how to apply what you learn to your own project.
References:
A cool project from Ray Wenderlich teaching you how to use GPS coordinates in your application:
Augmented reality location based tutorial
The next two links show you how to grab gyroscope data to find out the pitch, yaw, and rotation, and to determine the device's current position in space.
Apple gyroscope example app
Another core motion gyroscope example
This will teach you how to use the compass:
Ray Wenderlich's augmented reality compass tutorial for iOS
Here's some more augmented reality material on overlaying content on the camera view:
iPhone AR Toolkit
Augmented reality marker tracking tutorial
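Whichever drawing API you choose, the overlay motion itself reduces to translating the oversized overlay view by an amount proportional to the bearing/elevation change relative to the camera's field of view. A minimal sketch of that mapping; the field-of-view values, view size, and sign conventions are all assumptions to verify against your actual camera:

```python
def overlay_offset(d_bearing, d_elevation,
                   h_fov=60.0, v_fov=45.0, view_w=320.0, view_h=240.0):
    """Translation (in points) to apply to the overlay when the camera
    direction changes by d_bearing / d_elevation degrees. Sign conventions
    are assumptions: flip them if the overlay moves the wrong way on-device."""
    dx = -(d_bearing / h_fov) * view_w    # turning right slides overlay content left
    dy = (d_elevation / v_fov) * view_h   # looking up slides overlay content down
    return dx, dy
```

Feeding this the deltas from compass heading and Core Motion pitch each frame, and applying the result as the overlay view's transform, gives the PT1..PT4 sliding behavior described above.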

iPad Object Detection

I want to build an iPad app that detects a physical alphabet-letter shape placed on the iPad screen and prints the letter to the screen after the object detection finishes. Is this doable?
I am trying to find a way to implement this, but I could not find any article or online resource that guides me toward it.
Thanks,
I would imagine you could start by looking at the various pens and styluses that are available for iPads, and at how they work. Then you would need to see if you can make an object that will activate the touch mechanism over a defined area in the same way, for example a line, and see if you can detect the touch points along the line. Sorting all that out will effectively get you started.
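If the object exposes several conductive contact points, the detection step described above amounts to reading the touch coordinates and testing their geometric arrangement, e.g. whether they fall along a straight line. A small sketch of such a straightness test; the tolerance is an assumed example value to tune on-device:

```python
import math

def max_line_deviation(points):
    """Largest perpendicular distance from any touch point to the line
    through the first and last points: a cheap straightness measure."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    return max(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
               for x, y in points)

def looks_like_line(points, tol=3.0):
    """True when the object's touch footprint is a straight line segment,
    to within tol points of deviation."""
    return max_line_deviation(points) <= tol
```

Distinguishing different letters would then mean comparing the full arrangement of contact points (distances and angles between them) against a template per letter.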
