I am working on a camera app that supports iOS 8 and above. Due to my requirements I need to change the lens values (not the focus point). My lens position values are something like 36 mm, 45 mm, and so on. How can I apply these values to the camera, or are there any other default values available? I am using AVCapture for taking photos. Any help would be appreciated.
Not sure if I understand your question, because when you move the lens position you are in fact changing the camera's focal point (the camera's optical properties), which is not the same thing as the focus point of interest, i.e. the area of interest in the image.
So, to change the lens position you can do it with this method:
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition
completionHandler:(void (^)(CMTime syncTime))handler
You can find it in the AVCaptureDevice class reference (this will be the device you selected as the camera for your app). The lensPosition value is between 0 and 1; it is a relative positioning system, so you can't use those values (36 mm and 45 mm) directly.
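For example, here is a minimal Swift sketch of locking the lens at a relative position (a sketch only, assuming you already have a configured AVCaptureDevice; error handling kept short):

import AVFoundation

// Locks focus at a relative lens position in the range 0.0...1.0
// (0.0 is the shortest focus distance the lens supports, 1.0 the farthest).
func lockLens(on device: AVCaptureDevice, at position: Float) {
    guard device.isFocusModeSupported(.locked) else { return }
    do {
        try device.lockForConfiguration()
        device.setFocusModeLocked(lensPosition: position) { syncTime in
            // Called once the lens has physically reached the requested position.
            print("Lens locked at \(device.lensPosition), sync time \(syncTime)")
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}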
I am working on a machine vision project and need to determine the angle of an object in x and y relative to the center of the frame (center in my mind being where the camera is pointed). I originally did NOT do a camera calibration (calculated angle per pixel by taking a picture of a dense grid and doing some simple math). While doing some object tracking I was noticing some strange behaviour which I suspected was due to some distortion. I also noticed that an object that should be dead center of my frame was not, the camera had to be shifted or the angle changed for that to be true.
I performed a calibration in OpenCV and got a principal point of (363.31, 247.61) with a resolution of 640x480. The angle per pixel obtained from cv2.calibrationMatrixValues() was very close to what I had calculated, but up to this point I was assuming the center of the frame was at (640/2, 480/2). I'm hoping that someone can confirm: going forward, do I assume that my (0,0) in cartesian coordinates is now at the principal point? Perhaps I can use my new camera matrix to correct the image so my original assumption is true? Or am I out to lunch and in need of some direction on how to achieve this?
Also, was my assumption of 640/2 correct, or should it technically have been (640-1)/2? Thanks all!
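For what it's worth, the computation involved is just a pinhole-model angle relative to the principal point. A minimal Swift sketch (the focal lengths fx and fy below are placeholders and would come from your calibrated camera matrix; cx and cy are the principal point quoted above):

import Foundation

// Calibrated intrinsics (placeholder focal lengths; principal point from the calibration above).
let fx = 800.0, fy = 800.0
let cx = 363.31, cy = 247.61

// Viewing angle of pixel (u, v) relative to the optical axis under a pinhole model,
// with the principal point -- not (640/2, 480/2) -- as the angular origin.
func viewingAngles(u: Double, v: Double) -> (xDeg: Double, yDeg: Double) {
    let ax = atan((u - cx) / fx) * 180.0 / Double.pi
    let ay = atan((v - cy) / fy) * 180.0 / Double.pi
    return (ax, ay)
}

print(viewingAngles(u: 320, v: 240))   // the geometric image centre is not at angle (0, 0)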
I have 2D image data with the respective camera location in latitude and longitude. I want to translate pixel coordinates to 3D world coordinates. I have access to the intrinsic calibration parameters and to yaw, pitch and roll. Using yaw, pitch and roll I can derive the rotation matrix, but I don't understand how to calculate the translation matrix. As I am working on a data set, I don't have access to the camera physically. Please help me derive the translation matrix.
Cannot be done at all if you don't have the elevation of the camera with respect to the ground (AGL or ASL) or another way to resolve the scale from the image (e.g. by identifying in the image an object of known size, for example a soccer stadium in an aerial image).
Assuming you can resolve the scale, the next question is how precisely you can (or want to) model the terrain. For a first approximation you can use a standard geodetic ellipsoid (e.g. WGS-84). For higher precision - especially for images shot from lower altitudes - you will need to use a DTM and register it to the images. Either way, it is a standard back-projection problem: you compute the ray from the camera centre to the pixel, transform it into world coordinates, then intersect it with the ellipsoid or DTM.
There are plenty of open source libraries to help you do that in various languages (e.g. GeographicLib).
Edited to add suggestions:
Express your camera location in ECEF.
Transform the ray from the camera into ECEF as well, taking into account the camera rotation. You can do both transformations using a library, e.g. nVector.
Then proceed to intersect the ray with the ellipsoid, as explained in this answer (a minimal sketch of this step follows below).
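To illustrate just the intersection step, here is a minimal Swift sketch of intersecting a ray with the WGS-84 ellipsoid. It assumes the camera position and the ray direction are already expressed in ECEF coordinates (metres); the lat/lon-to-ECEF conversion and the yaw/pitch/roll rotation would be done beforehand, e.g. with nVector or GeographicLib:

import simd

// WGS-84 semi-axes in metres.
let a = 6378137.0            // equatorial radius
let b = 6356752.314245       // polar radius

// Intersects the ray origin + t * direction (ECEF, metres) with the WGS-84 ellipsoid
// and returns the nearest intersection in front of the camera, or nil if the ray misses.
func intersectEllipsoid(origin o: SIMD3<Double>, direction d: SIMD3<Double>) -> SIMD3<Double>? {
    // Scale coordinates so the ellipsoid becomes the unit sphere, then solve the quadratic in t.
    let s = SIMD3<Double>(1 / a, 1 / a, 1 / b)
    let os = o * s
    let ds = d * s
    let A = simd_dot(ds, ds)
    let B = 2 * simd_dot(os, ds)
    let C = simd_dot(os, os) - 1
    let disc = B * B - 4 * A * C
    guard disc >= 0 else { return nil }         // ray misses the ellipsoid
    let t = (-B - disc.squareRoot()) / (2 * A)  // nearest root
    return t >= 0 ? o + t * d : nil             // must lie in front of the camera
}

The resulting ECEF point can then be converted back to latitude/longitude with the same library you used for the forward transform.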
From AVFoundation objects like AVCaptureDevice I can read a value of fieldOfView using
captureDevice.activeFormat.videoFieldOfView
And this gives me a value of about 58 degrees on an iPhone 6s. But what I need is the actual fieldOfView displayed in the current frame while the AVCaptureSession is running. What I mean by this is that the value read from activeFormat does not reflect the real fieldOfView when the lens moves to get the image in focus. For example, when I point the camera at a distant object, the lens moves to keep it in focus and the fieldOfView increases a little, to about 61 degrees. You can observe this behavior in the ARKit camera - even while the camera is catching focus, displayed objects get slightly bigger and smaller along with the camera preview. I would like to know how I can achieve such behavior and adjust the fieldOfView to the current lens position. I need to apply this to a SceneKit camera.
In short: how do I find the current exact fieldOfView of the camera, taking into consideration the current focus position (or lens position)?
To help understand what I mean please look at this gif:
Focusing on iPhone
At the beginning you can see it focusing, and the video zooms in and out a little as it catches the correct lens position. So when the lens moves, the actual visible fieldOfView changes a little.
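I don't think AVFoundation exposes a focus-adjusted field of view directly, but the two inputs you would need are both readable: the nominal videoFieldOfView and the current lensPosition, which is key-value observable while the session runs. A minimal Swift sketch of observing them (the mapping from lens position to an adjusted FoV is not provided by the API and would have to be estimated for your device, so the print below is only a placeholder):

import AVFoundation

final class LensPositionObserver {
    private var observation: NSKeyValueObservation?

    init(device: AVCaptureDevice) {
        // lensPosition (0.0...1.0) changes as the camera refocuses.
        observation = device.observe(\.lensPosition, options: [.new]) { device, _ in
            let nominalFov = device.activeFormat.videoFieldOfView   // e.g. ~58 degrees on iPhone 6s
            let lens = device.lensPosition
            // Placeholder: derive an adjusted FoV from (nominalFov, lens) here,
            // e.g. via an empirically fitted correction, and feed it to your SceneKit camera.
            print("nominal FoV \(nominalFov), lens position \(lens)")
        }
    }
}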
I have camera parameters and I know the distance between the camera and a flat region (for example, a wall). The roll and pitch values of the camera are constant (assume as in this). But the yaw value can be anywhere between -60 and 60 degrees, and I know it as well. Is it possible to calculate the distance of any point in the image to the camera location?
No, not without additional information. An object that's not on the "flat region" can be anywhere. To convince yourself that this is the case, note that, given an image of the object, you can always "shrink" it and move it closer to the camera to produce the same image.
If the object has known size and shape, then you can trivially find its distance from its apparent magnification in the image.
I have solved the problem. In my scenario, there is no object. Thank you.
I am trying to write a program using OpenCV to calculate the distance from a webcam to a one-inch white sphere. I feel like this should be pretty easy, but for whatever reason I'm drawing a blank. Thanks for the help ahead of time.
You can use triangle similarity to calibrate the camera angle and find the distance.
You know your ball's size: D units (e.g. cm). Place it at a known distance Z, say 1 meter = 100cm, in front of the camera and measure its apparent width in pixels. Call this width d.
The focal length of the camera f (which is slightly different from camera to camera) is then f=d*Z/D.
When you see this ball again with this camera, and its apparent width is d' pixels, then by triangle similarity, you know that f/d'=Z'/D and thus: Z'=D*f/d' where Z' is the ball's current distance from the camera.
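A minimal Swift sketch of that arithmetic (the pixel widths would come from your own detection step, e.g. a circle or blob detector; the numbers here are placeholders):

// One-off calibration: a ball of known diameter D, placed at a known distance Z,
// appears d pixels wide in the image.
let D = 2.54            // ball diameter in cm (one inch)
let Z = 100.0           // calibration distance in cm
let d = 38.0            // apparent width in pixels measured at that distance (placeholder)

let f = d * Z / D       // effective focal length in pixels

// Distance to the ball when it later appears dPrime pixels wide.
func distance(apparentWidth dPrime: Double) -> Double {
    return D * f / dPrime
}

print(distance(apparentWidth: 19.0))   // half the apparent width -> twice the distance (200 cm)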
To my mind you will need a camera model, i.e. a calibration model, if you want to measure distances or other things in the real world.
The pinhole camera model is simple, linear and gives good results (but won't correct distortions, whether they are radial or tangential).
If you don't use that, then you'll only be able to compute a disparity/depth map (for instance if you use stereo vision), but it is relative and doesn't give you an absolute measurement, only what is behind and what is in front of another object.
Therefore, I think the answer is: you will need to calibrate it somehow. Maybe you could ask the user to move the sphere towards the camera until the image plane is completely filled with the ball; with prior knowledge of the ball's size, you would then be able to compute the distance.
Julien,