How to measure user distance from wall - ios

I need to measure the distance from the user to a wall. When the user opens the camera and points it at any surface, I need to get the distance. I have read the related question "Is it possible to measure distance to object with camera?" and I used the code below to find the iPhone camera angle, from http://blog.sallarp.com/iphone-accelerometer-device-orientation.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // Get the current device angle
    float xx = -[acceleration x];
    float yy = [acceleration y];
    float angle = atan2(yy, xx);
}
d = h * tan(angle)
But nothing happens in the NSLog output or in the camera view.

In the comments, you shared a link to a video: http://youtube.com/watch?v=PBpRZWmPyKo.
That app is not doing anything particularly sophisticated with the camera; rather, it appears to calculate distances using basic trigonometry, and it accomplishes this by constraining the business problem in several critical ways:
First, the app requires the user to specify the height at which the phone's camera lens is being held.
Second, the user is measuring the distance to something sitting on the ground and aligning the bottom of that to some known location on the screen (meaning you have a right triangle).
Those two constraints, combined with the accelerometer and the camera's lens focal length, would allow you to calculate the distance.
If your target cross-hair was in the center of the screen, it greatly simplifies the problem and it becomes a matter of simple trigonometry, i.e. your d = h * tan(angle).
BTW, the "angle" code in the question appears to measure the rotation about the z-axis, the clockwise/counter-clockwise rotation as the device faces you. For this problem, though, you want to measure the rotation of the device about its x-axis, the forward/backward tilt. See https://stackoverflow.com/a/16555778/1271826 for an example of how to capture the device orientation in space. Also, that answer uses CoreMotion, whereas the article referenced in your question uses an API that has since been deprecated.
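To illustrate, here is a minimal CoreMotion sketch of that calculation. The height value, the update interval and the assumption that the cross-hair sits at the centre of the screen are placeholders for this example, not anything taken from the question:
import CoreMotion

let motionManager = CMMotionManager()   // keep a strong reference to this
let cameraHeight = 1.5                  // metres above the floor; supplied by the user

motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    // Forward/backward tilt about the x-axis. With the back camera, a pitch of 0
    // (phone flat) looks straight down and π/2 (phone upright) looks at the horizon,
    // so the pitch is also the angle between the line of sight and the vertical.
    let angle = motion.attitude.pitch
    let distance = cameraHeight * tan(angle)    // d = h * tan(angle)
    print("estimated distance: \(distance) m")
}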

The only way this would be possible is if you could read out the setting of the auto-focus mechanism in the lens. To my knowledge this is not possible.

Related

Is there a way to shift the principal point of a Scene Kit camera?

I'd like to simulate the shift of a tilt-shift/perspective-control lens in SceneKit on macOS.
Imagine the user has the camera facing a tall building at ground level, I'd like to be able to shift the 'lens' so that the projective distortion shifts (see e.g. Wikipedia).
Apple provides lots of physically-based parameters for SCNCamera (sensor height, aperture blade count), but I can't see anything obvious for this. It seems to exist in Unity.
Crucially I'd like to shift the lens so that the object stays in the same position relative to the camera. Obviously I could move the camera to get the effect, but the object needs to stay centred in the viewport (and I can't see a way to modify the viewport either). I've tried to modify the .projectionTransform matrix directly, but it was unsuccessful.
Thanks!
There is no API on SCNCamera that does that out of the box. As you guessed, one has to create a custom projection matrix and set it on the projectionTransform property.
I finally worked out the correct adjustment to the projection matrix. The maths is quite confusing to follow, because it is a 4x4 matrix rather than the 3x4 or 4x3 you'd use for a plain camera projection matrix, which also makes it hard to work out whether it expects row vectors or column vectors.
Anyway, the correct element is .m32 for the y axis:
let camera = SCNNode()
camera.camera = SCNCamera()
let yShift: CGFloat = 1.0
camera.camera!.projectionTransform.m32 = yShift
Presumably .m31 will shift in the x axis, but I have to admit I haven't tested this.
When I thought about it a bit more, I also realised that the effect I actually wanted involves moving the camera too. Adjusting .m32 simulates moving the sensor, which will appear to move the subject relative to the camera, as if you had a wide angle lens and you were moving the crop. To keep the subject centred in frame, you need to move the camera's position too.
With a bit (a lot) of help from this blog post and in particular this code, I implemented this too:
let distance: CGFloat = 1.0 // calculate distance from subject here
let fovRadians = camera.camera!.fieldOfView * CGFloat.pi / 180.0
let yAdjust = tan(fovRadians / 2) * distance * yShift
// SCNVector3 has no built-in arithmetic operators, so use the simd accessors instead
camera.simdPosition = camera.simdPosition - camera.simdWorldUp * Float(yAdjust)
(any interested readers could presumably work out the x axis shift from the source above)

iPhone augmented reality Euler angles rotation – roll issue

I’m working on an iOS augmented reality application.
It is location-based, not marker-based.
I use the GPS, compass and accelerometers to get latitude, longitude, altitude and the 3 Euler angles: yaw, pitch and roll. I know, using NSLog(), that those 6 variables contain valid data.
My application shows some 3d objects over the camera view.
It works fine as long as I use everything but the roll angle.
If I add that third angle, the rotation applied to my OpenGL world is wrong. I do it this way in the main OpenGL draw method:
glRotatef(pitch, 1, 0, 0);
glRotatef(yaw, 0, 1, 0);
//glRotatef(roll, 0, 0, 1);
I think there is something wrong with this approach but am certainly not a specialist. Maybe I should create some sort of unique rotation matrix rather than 3 different ones?
Maybe that's not easily possible? After all, most desktop video games, FPS and the like, just let the user change the yaw and the pitch using the mouse, so only 2 angles, not 3. But unlike the mouse, which is a 2D device, a phone used for augmented reality can rotate about all three axes.
But then again, none of the AR tutorials I have seen online handle 'roll' properly. Rolling your phone would either completely mess up the AR overlay or do nothing at all, thanks to some roll-compensation strategy.
So my question is: assuming I have my 3 Euler angles from the phone sensors, how should I apply them to my 3D OpenGL view?
I think you're likely talking about gimbal lock.
The essence of the problem is that if you rotate with Eulers then there's always a sequence to it. For example, you rotate around x, then around y, then around z. But one axis can always become ambiguous, because a preceding rotation can move it onto a different axis.
Suppose the rotation were 0 degrees around x, 90 degrees around y, then 20 degrees around z. So you do the x rotation and nothing has changed. You do the y rotation and everything moves 90 degrees. But now you've moved the z axis onto where the x axis was previously. So the z rotation will appear to be around x.
No matter what most people's instincts tell them, there's no way to avoid the problem. The kneejerk reaction is that you'll always rotate around the global axes rather than the local ones. That doesn't resolve the problem, it just reverses the order: the z rotation could then turn the y rotation, which has already occurred, into an x rotation.
You're right that you should aim to create a unique description of rotation separated from measuring angles.
For augmented reality it's actually not all that difficult.
The accelerometer tells you which way down is. The compass tells you which way north is. The two may not be orthogonal though: the compass vector should vary from being roughly parallel to the floor at the equator to being nearly parallel to the accelerometer (gravity) vector at the poles.
So:
just accept the accelerometer vector as down;
get the cross product of down and the compass vector to get your side vector — it should point along a line of longitude;
then get the cross product of your side vector and your down vector to get a north vector that is suitably perpendicular.
You could equally use the dot product to remove the portion of the compass vector that lies along the gravity direction, and take the cross products from there.
You'll want to normalise everything.
That gives you three basis vectors, so just put them directly into a matrix. No further work required.
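As a hedged illustration of that recipe, here is a small simd sketch. The function name is made up, and whether the basis vectors go into the columns or the rows of the matrix depends on your convention, so treat it as a sketch rather than drop-in code:
import simd

// Build an orthonormal basis from the gravity ("down") and compass vectors.
// On iOS these would come from, e.g., CMDeviceMotion.gravity and the magnetometer.
func rotationMatrix(down rawDown: SIMD3<Double>, compass rawCompass: SIMD3<Double>) -> double3x3 {
    let down = simd_normalize(rawDown)
    // Side vector: cross product of down and the compass reading.
    let side = simd_normalize(simd_cross(down, simd_normalize(rawCompass)))
    // Re-derive north so it is exactly perpendicular to the other two.
    let north = simd_normalize(simd_cross(side, down))
    // The three basis vectors go straight into the matrix.
    return double3x3(columns: (north, side, down))
}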

Using CMDeviceMotion and CMAttitude to isolate vertical or horizontal acceleration

I'm trying to isolate either a vertical or horizontal acceleration component assuming device orientation may be continuously changing.
Prior to having gyroscope data and CMAttitude, this was impossible because we only had the acceleration data. Now that we have both acceleration via userAcceleration and orientation via CMAttitude, it seems it should be possible to adjust the acceleration data by the attitude data in order to isolate a particular absolute direction of acceleration. This is a bit different from using a reference frame because I'm expecting the device orientation to be constantly changing. Think armband, etc... In my case,
I'd like to be able to capture either strictly vertical, or strictly horizontal acceleration values regardless of how the device orientation may be changing. The geometry for this is a little beyond me and I'd appreciate some advice.
I'm not familiar with the iOS APIs but I can give the math answer.
Determine the orientation with gyroscope+accelerometer sensor fusion. I hope iOS has an API for that as it's relatively complicated (if not let us know and I will elaborate). This seems to be what you call CMAttitude. The orientation should be expressed as a quaternion or a matrix, it represents the rotation between a fixed reference frame (most probably North-East-Down, but it depends on the API) and the local reference frame attached to your device.
Take the vector read from the accelerometer (in the local reference frame), and rotate it with the opposite rotation of your orientation. The opposite of a rotation is the quaternion conjugate or the matrix transpose. Rotating a vector is done via quaternion multiplication or matrix multiplication. This gives you the acceleration vector in the fixed reference frame.
The Z-component of the acceleration in the fixed reference frame is the vertical acceleration. The norm of the XY-components (sqrt(x^2+y^2)) is the horizontal acceleration. Don't forget to subtract the gravity from vertical acceleration. This assumes a North-East-Down reference frame again, but most other fixed reference frames would only require you to swap X,Y,Z appropriately.
The implementation shall be trivial if iOS has the right APIs. If you have the choice prefer quaternions over matrices as the implementation would run faster. Let us know how it goes.
I just implemented marcv81's answer:
- (void)isolateHorizontalMotionFromMotionData:(CMDeviceMotion *)newMotion
{
    // Quaternion conjugation (the opposite rotation of the attitude)
    CMQuaternion quaternion = newMotion.attitude.quaternion;
    GLKQuaternion originalQuaternion = GLKQuaternionMake(quaternion.x, quaternion.y, quaternion.z, quaternion.w);
    GLKQuaternion conjugatedQuaternion = GLKQuaternionConjugate(originalQuaternion);

    // Rotate the accelerometer vector into the fixed reference frame
    GLKVector3 accelerationVector = GLKVector3Make(newMotion.userAcceleration.x, newMotion.userAcceleration.y, newMotion.userAcceleration.z);
    GLKVector3 accelerationInReferenceFrame = GLKQuaternionRotateVector3(conjugatedQuaternion, accelerationVector);

    // Horizontal acceleration: the norm of the x and y components
    float horizontalAcceleration = sqrtf(powf(accelerationInReferenceFrame.x, 2) + powf(accelerationInReferenceFrame.y, 2));
    NSLog(@"horizontal acceleration: %f", horizontalAcceleration);
}
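For completeness, here is a hedged Swift sketch of the same idea that also pulls out the vertical component. Note that CMDeviceMotion's userAcceleration already has gravity removed, so no extra subtraction is needed here; and whether you rotate with the quaternion or its inverse depends on the attitude convention, so verify against a known motion:
import CoreMotion
import simd

func isolatedAcceleration(from motion: CMDeviceMotion) -> (vertical: Double, horizontal: Double) {
    // The attitude quaternion; its inverse plays the role of the conjugate above.
    let q = motion.attitude.quaternion
    let toReferenceFrame = simd_quatd(ix: q.x, iy: q.y, iz: q.z, r: q.w).inverse

    // userAcceleration is expressed in the device frame and is already gravity-free.
    let a = motion.userAcceleration
    let aRef = toReferenceFrame.act(SIMD3(a.x, a.y, a.z))

    // With a Z-vertical reference frame: z is vertical, the xy-norm is horizontal.
    return (vertical: aRef.z, horizontal: sqrt(aRef.x * aRef.x + aRef.y * aRef.y))
}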

How can I measure the distance of a detected object from the camera in video using OpenCV?

All I know is the height and width of an object in the video. Can someone guide me on how to calculate the distance of a detected object from the camera in video using C or C++? Is there any algorithm or formula to do that?
Thanks in advance.
Martin Ch was correct in saying that you need to calibrate your camera, but as vasile pointed out, it is not a linear change. Calibrating your camera means finding this matrix
camera_matrix = [ fx,  0, cx,
                   0, fy, cy,
                   0,  0,  1 ];
This matrix operates on a 3-dimensional coordinate (x, y, z) and converts it into a 2-dimensional homogeneous coordinate. To convert to your regular Euclidean (x, y) coordinate, just divide the first and second components by the third. So now what are those variables doing?
cx/cy: They exist to let you change coordinate systems if you like. For instance you might want the origin in camera space to be in the top left of the image and the origin in world space to be in the center. In that case
cx = -width/2;
cy = -height/2;
If you are not changing coordinate systems just leave these as 0.
fx/fy: These specify your focal length in units of x pixels and y pixels. They are very often close to the same value, so you may be able to just give them the same value f. These parameters essentially define how strong perspective effects are. The mapping from a world coordinate to a screen coordinate (as you can work out for yourself from the above matrix), assuming no cx and cy, is
xsc = fx*xworld/zworld;
ysc = fy*yworld/zworld;
As you can see, the important quantity that makes things bigger close up and smaller farther away is the ratio f/z. It is not linear, but by using homogeneous coordinates we can still use linear transforms.
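To make the f/z behaviour concrete, here is a tiny numeric check (all numbers are made up for illustration):
let fx = 800.0, fy = 800.0                        // focal lengths in pixels
let (xworld, yworld, zworld) = (0.5, 0.25, 2.0)   // a point 2 units in front of the camera

let xsc = fx * xworld / zworld                    // 200 pixels
let ysc = fy * yworld / zworld                    // 100 pixels

// The same point twice as far away projects to half the pixel offset:
let xscFar = fx * xworld / (2 * zworld)           // 100 pixels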
In short: with a calibrated camera and a known object size in world coordinates, you can calculate its distance from the camera. If you are missing either one of those, it is impossible. Without knowing the object size in world coordinates, the best you can do is map its screen position to a ray in world coordinates by determining the ratio xworld/zworld (knowing fx).
I don't think it is easy if you have to use the camera only. Consider using a third device/sensor such as a Kinect or a stereo camera; then you will get the depth (z) from that data.
https://en.wikipedia.org/wiki/OpenNI

Finding distance from camera to object of known size

I am trying to write a program using OpenCV to calculate the distance from a webcam to a one-inch white sphere. I feel like this should be pretty easy, but for whatever reason I'm drawing a blank. Thanks for the help ahead of time.
You can use triangle similarity to calibrate the camera angle and find the distance.
You know your ball's size: D units (e.g. cm). Place it at a known distance Z, say 1 meter = 100cm, in front of the camera and measure its apparent width in pixels. Call this width d.
The focal length of the camera f (which is slightly different from camera to camera) is then f=d*Z/D.
When you see this ball again with this camera, and its apparent width is d' pixels, then by triangle similarity, you know that f/d'=Z'/D and thus: Z'=D*f/d' where Z' is the ball's current distance from the camera.
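As a hedged numeric walk-through of that recipe (all measurements invented for the example):
let D = 2.54                  // real ball diameter in cm (one inch)
let Z = 100.0                 // calibration distance in cm
let d = 28.0                  // apparent width in pixels measured at that distance
let f = d * Z / D             // focal length in pixels, about 1102

// Later the same ball appears 14 pixels wide:
let dPrime = 14.0
let zPrime = D * f / dPrime   // = d * Z / dPrime = 200 cm, i.e. twice as far away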
To my mind you will need a camera model, i.e. a calibration model, if you want to measure distances or other things in the real world.
The pinhole camera model is simple and linear and gives good results, but it won't correct distortions, whether they are radial or tangential.
If you don't use that, you'll still be able to compute a disparity/depth map (for instance if you use stereo vision), but it is relative and doesn't give you an absolute measurement, only what is behind and what is in front of another object.
Therefore, I think the answer is: you will need to calibrate it somehow. Maybe you could ask the user to bring the sphere towards the camera until it perfectly fills the image plane; with prior knowledge of the ball's measurements, you'll then be able to compute the distance.
Julien,
