Right now I am developing an app for testing the human eye by reading letters and symbols; for that, the user has to maintain a distance of 2 feet from the device. So I need to detect the distance between the human face and the iOS device using the front camera.
Regarding this I have some doubts to clarify.
For detecting the human face I planned to use the Core Image framework. With that, is it possible to detect the face in the background, without showing any camera UI?
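Something like this minimal sketch is what I have in mind: running a CIDetector on individual frames (for example frames delivered by an AVCaptureVideoDataOutput delegate), so no camera preview UI would be needed. The option values are just a starting point.

```swift
import CoreImage
import UIKit

// Run Core Image face detection on a single frame (e.g. a CIImage made from a
// CMSampleBuffer delivered by AVCaptureVideoDataOutput); no preview layer is needed.
func detectFaces(in image: CIImage) -> [CIFaceFeature] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let faces = detector?.features(in: image) as? [CIFaceFeature] ?? []
    for face in faces {
        // face.bounds is in the image's coordinate space (origin at the bottom-left).
        print("Face at \(face.bounds), both eyes found: \(face.hasLeftEyePosition && face.hasRightEyePosition)")
    }
    return faces
}
```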
For calculating the distance I planned to use the formula below:
distance (mm) = focal length (mm) × real height of object (mm) × image height (px) / (object height in the image (px) × sensor height (mm))
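For example, a quick sketch of that calculation (the focal length and sensor height below are placeholder values I assumed; the real ones differ per device and per camera):

```swift
// Similar-triangles / pinhole estimate of the subject distance.
// focalLengthMM and sensorHeightMM are assumed placeholder values here.
func estimatedDistanceMM(focalLengthMM: Double,
                         realFaceHeightMM: Double,
                         imageHeightPx: Double,
                         faceHeightPx: Double,
                         sensorHeightMM: Double) -> Double {
    return (focalLengthMM * realFaceHeightMM * imageHeightPx)
         / (faceHeightPx * sensorHeightMM)
}

// Example: an assumed 220 mm face that appears 300 px tall in a 1280 px-high frame,
// with an assumed 2.9 mm lens and 3.4 mm sensor height:
// (2.9 * 220 * 1280) / (300 * 3.4) ≈ 800 mm ≈ 2.6 ft
let distance = estimatedDistanceMM(focalLengthMM: 2.9, realFaceHeightMM: 220,
                                   imageHeightPx: 1280, faceHeightPx: 300,
                                   sensorHeightMM: 3.4)
```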
Also, I have seen a few apps in the App Store that use the back camera to calculate the distance between the device and an object, so I am a little confused about whether this can be made to work with the front camera.
Please help me figure out how to achieve this, or tell me whether this is the right approach or not.
Related
I'm working on an iOS project that can create scaled photos using only the center-point LiDAR distance information on newer iOS devices. I have a Spike by IkeGPS that achieves this result using a Bluetooth laser device to obtain a distance measurement. I am looking for a formula for this that I can reproduce in Xcode. So far, my assumptions are that I will need the resolution of the photos, the focal length of the lens, and the distance to the surface.
So far, all that I have achieved is real-time distance information from the center point, but I would like to use this to create a scaled image.
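As a starting point, this is the kind of pinhole-model scale calculation I am assuming (the focal length and sensor height are placeholder values, not the real ones for any particular device):

```swift
// Real-world size per pixel at the measured distance, using a simple pinhole model:
// mm per pixel = (distance * sensor height) / (focal length * image height in px).
// focalLengthMM and sensorHeightMM are assumed placeholder values.
func millimetersPerPixel(distanceMM: Double,
                         focalLengthMM: Double,
                         sensorHeightMM: Double,
                         imageHeightPx: Double) -> Double {
    return (distanceMM * sensorHeightMM) / (focalLengthMM * imageHeightPx)
}

// Example: 3 m to the surface, an assumed 4.2 mm lens and 4.8 mm sensor, 3024 px tall photo:
// (3000 * 4.8) / (4.2 * 3024) ≈ 1.13 mm per pixel
let scale = millimetersPerPixel(distanceMM: 3000, focalLengthMM: 4.2,
                                sensorHeightMM: 4.8, imageHeightPx: 3024)
```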
I am writing an app that will determine the angle at which the iOS device is tilted off vertical. Specifically, a window will come up with crosshairs (similar to a rifle scope). When the target object is placed in the center of the crosshairs, I would like to get a reading of the up or down angle as referenced from the device. I suppose it would be data similar to a surveyor's transit. It just needs to be accurate to 1/2 degree. I've read about the accelerometer and gyroscope sensors. Both seem relevant, but I'm not sure which is the best way to go. Any insights would be appreciated. Thanks.
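For reference, this is the kind of reading I have been considering: a minimal sketch that assumes the target is sighted along the back camera (the device's -Z axis) and uses the gravity vector from Core Motion's fused device-motion output.

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()

// Report the elevation of the back-camera boresight above or below the horizon.
// The back camera looks along the device's -Z axis, so its elevation is asin(gravity.z).
func startTiltUpdates() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let g = motion?.gravity else { return }
        let elevationDegrees = asin(max(-1.0, min(1.0, g.z))) * 180.0 / .pi
        print(String(format: "Angle off horizontal: %+.1f°", elevationDegrees))
    }
}
```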
Right now I'm exploring the features of the iOS depth camera, and I want to obtain the distance in real-world units between two points (for example, between two eyes).
I have successfully hooked up the depth-camera functionality and I have AVDepthData in hand, but I'm not quite sure how I can get the real-world distance between two specific points.
I believe I could calculate it if I had the depth and the viewing angle, but I don't see the latter exposed as a parameter. I also know that this task could be handled with ARKit, but I'm really curious how I can implement it myself. I mean, ARKit uses the depth camera as well, so there must be an algorithm where the depth map is all I need to calculate the real distance.
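To make it concrete, this is the kind of unprojection I have in mind: a minimal sketch that assumes I can read a depth value (in meters) for each of the two pixels and that the camera intrinsics are delivered alongside the depth data (depthData.cameraCalibrationData?.intrinsicMatrix, with the pixel coordinates scaled to intrinsicMatrixReferenceDimensions).

```swift
import simd

// Back-project two pixels into 3-D camera space with the pinhole model,
// then take the Euclidean distance between them.
func worldDistance(pixelA: simd_float2, depthA: Float,
                   pixelB: simd_float2, depthB: Float,
                   intrinsics: matrix_float3x3) -> Float {
    let fx = intrinsics.columns.0.x   // focal length in px (x)
    let fy = intrinsics.columns.1.y   // focal length in px (y)
    let cx = intrinsics.columns.2.x   // principal point x
    let cy = intrinsics.columns.2.y   // principal point y

    func unproject(_ p: simd_float2, _ depth: Float) -> simd_float3 {
        return simd_float3((p.x - cx) * depth / fx,
                           (p.y - cy) * depth / fy,
                           depth)
    }

    return simd_distance(unproject(pixelA, depthA), unproject(pixelB, depthB))
}
```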
Could you please give me some advice on how to tackle this task? Thanks in advance!
I'm making an app in which the user can move their device left, right, up, and down in order to view an image. The way I'm doing this is by reading the accelerometer data to calculate how far in each direction the user moved the device. The issue is that these readings are heavily influenced by the tilt of the device, because the axes change slightly with every tiny tilt. I know the gyroscope measures rotation rates, so is there a way I could use those readings to take tilt out of the picture and fix the axes so they are the same at whatever angle the user holds the device? I realize this is a rather elaborate problem, so if you don't understand something, feel free to ask.
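One minimal sketch of what I mean (assuming Core Motion's fused device motion is acceptable): userAcceleration is gravity-free but expressed in device coordinates, so rotating it by the attitude's rotation matrix re-expresses it in the reference frame fixed at startup, and tilting the device then no longer shifts the axes.

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()

func startFrameFixedAccelerationUpdates() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
    motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { motion, _ in
        guard let m = motion else { return }
        let r = m.attitude.rotationMatrix   // rotation between device and reference frame
        let a = m.userAcceleration          // gravity-free, in device coordinates (g's)
        // Rotate the acceleration into the reference frame.
        // (If the result looks inverted, use the transpose; the row/column
        // convention of CMRotationMatrix is easy to get backwards.)
        let ax = r.m11 * a.x + r.m12 * a.y + r.m13 * a.z
        let ay = r.m21 * a.x + r.m22 * a.y + r.m23 * a.z
        let az = r.m31 * a.x + r.m32 * a.y + r.m33 * a.z
        print(String(format: "reference-frame accel: %+.3f %+.3f %+.3f", ax, ay, az))
    }
}
```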
I'm currently developing an iPhone app (on an iPhone 5, iOS 7, Xcode 5) which requires a very accurate determination of the current attitude. The attitude provided by CMDeviceMotion does not fulfil these requirements, because Apple's sensor fusion algorithm seems to rely too much on the gyroscope, which drifts away rather fast (in my experience). That's why I decided to read out the raw sensor data and later combine it in a sensor fusion algorithm of my own.
When asking for magnetometer data one has two possibilities:
1. via CMMagnetometerData in CMMotionManager
2. via CMCalibratedMagneticField in CMDeviceMotion, about which Apple says:
The CMCalibratedMagneticField returned by this property gives you the total magnetic field in the device’s vicinity without device bias. Unlike the magneticField property of the CMMagnetometer class, these values reflect the earth’s magnetic field plus surrounding fields, minus device bias.
In principle, (2) is exactly what I want.
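For reference, this is how I read both: a minimal sketch with a single CMMotionManager (the update intervals are arbitrary).

```swift
import CoreMotion

let motionManager = CMMotionManager()

func startMagnetometerReadings() {
    // (1) Raw readings, still containing the device bias: CMMagnetometerData
    if motionManager.isMagnetometerAvailable {
        motionManager.magnetometerUpdateInterval = 1.0 / 50.0
        motionManager.startMagnetometerUpdates(to: .main) { data, _ in
            guard let b = data?.magneticField else { return }
            print("raw:        \(b.x) \(b.y) \(b.z) µT")
        }
    }
    // (2) Bias-compensated readings: CMDeviceMotion.magneticField
    //     (requires a reference frame that uses the magnetometer)
    if motionManager.isDeviceMotionAvailable {
        motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
        motionManager.startDeviceMotionUpdates(using: .xArbitraryCorrectedZVertical, to: .main) { motion, _ in
            guard let field = motion?.magneticField, field.accuracy != .uncalibrated else { return }
            print("calibrated: \(field.field.x) \(field.field.y) \(field.field.z) µT")
        }
    }
}
```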
There is a very simple test of whether the magnetometer data is calibrated properly. For simplicity one can restrict oneself to two dimensions. When the device lies on its back, the quantity B_x^2 + B_y^2 must be constant, independent of the direction the device is pointing in; its square root must simply equal the horizontal component of the Earth's magnetic field (assuming no other fields in the vicinity of the device). Thus, when performing a 360-degree turn of the device while it lies on its back, the measured data B_y plotted over B_x should trace a circle. See here for details.
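Concretely, the quantity I log during the turn is just the horizontal magnitude (a minimal sketch):

```swift
import CoreMotion

// Horizontal magnitude of the field while the device lies flat on its back.
// If the calibration is good, this should stay (roughly) constant during a
// full 360° turn, and the (x, y) pairs should trace a circle centred on the origin.
func horizontalFieldMagnitude(_ field: CMMagneticField) -> Double {
    return (field.x * field.x + field.y * field.y).squareRoot()   // µT
}
```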
Now to the point: the data from CMCalibratedMagneticField does NOT result in a circle!
Does anyone have an explanation for that? Or does anyone know how the CMCalibratedMagneticField comes about? Is the magnetometer calibrated in the sense of the link above when performing the "eight-shaped" movement of the device, or what is that movement good for?
By the way, why the "eight-shaped" movement and not flipping the device around its three axes, which would allow a calibration as described in the link above?
I would be very glad for any clarification of this issue. Thanks!
There is a problem with the magnetometer in iOS 7: it has an error of ±7°. Try using the iOS 7.1 beta version.
EDIT
The magnetometer has zero drift over time, but it is pretty inaccurate for sudden changes in position. The accelerometer and gyroscope, on the other hand, adjust quickly to sudden changes, but, being inertial sensors, they lose accuracy over time.
So when CMCalibratedMagneticField tries to compensate for your rotational motion, it uses data from the gyroscope and accelerometer. This is when the accelerometer's and gyroscope's ±7° error creeps in and throws your circle off track. Check this answer and this Wikipedia article for more info.
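To illustrate the general idea of blending the two kinds of sensor (this is a textbook complementary filter, not Apple's actual algorithm):

```swift
// A complementary filter for heading: trust the fast-but-drifting gyro
// integration for quick changes, and let the slow-but-drift-free magnetometer
// heading pull the estimate back over time. Angle wrap-around at ±π is
// ignored here for brevity.
struct HeadingFilter {
    var heading: Double = 0      // radians
    let alpha: Double = 0.98     // closer to 1 = trust the gyro more

    mutating func update(gyroRateZ: Double,           // rad/s around the vertical axis
                         magnetometerHeading: Double, // radians, e.g. atan2(By, Bx)
                         dt: Double) {                // seconds since the last update
        let gyroHeading = heading + gyroRateZ * dt
        heading = alpha * gyroHeading + (1 - alpha) * magnetometerHeading
    }
}
```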
Regarding the figure of eight:
Both do the same thing: they point the device's "north" in every direction in the hope of cancelling out magnetic interference. Flipping the device around all three axes would work better, but it is harder to perform and not as easily understood by the user.
Hope this helps.