From AVFoundation objects such as AVCaptureDevice I can read a fieldOfView value using
captureDevice.activeFormat.videoFieldOfView
This gives me a value of about 58 degrees on an iPhone 6s. But what I need is the actual field of view shown in the current frame while the AVCaptureSession is running. What I mean is that the value read from activeFormat does not reflect the real field of view when the lens moves to bring the image into focus. For example, when I point the camera at a distant object, the lens moves to keep it in focus and the field of view increases a little, to about 61 degrees. You can observe this behaviour with the ARKit camera: while the camera is hunting for focus, displayed objects get slightly bigger and smaller along with the camera preview. I would like to know how I can achieve the same behaviour and adjust the field of view to the current lens position. I need to apply this to a SceneKit camera.
In short: how do I find the exact current field of view of the camera, taking the current focus position (lens position) into account?
To help you understand what I mean, please look at this GIF:
Focusing on iPhone
At the beginning you can see it focusing: the video zooms in and out a little as it finds the correct lens position. So when the lens moves, the actual visible field of view changes slightly.
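To make it concrete, this is the kind of heuristic I have in mind (a minimal sketch only: FieldOfViewTracker and the maxBoost constant are my own invention, lensPosition is assumed to be KVO-observable while the session runs, and the 3-degree boost is a guess that would need calibration, not an Apple API for the effective field of view):

import AVFoundation
import SceneKit

final class FieldOfViewTracker: NSObject {
    private let device: AVCaptureDevice
    private let sceneCamera: SCNCamera
    private var observation: NSKeyValueObservation?

    // Guessed calibration constant: how many extra degrees the visible FOV
    // seems to gain as the lens travels across its focus range. Needs tuning.
    private let maxBoost: CGFloat = 3.0

    init(device: AVCaptureDevice, sceneCamera: SCNCamera) {
        self.device = device
        self.sceneCamera = sceneCamera
        super.init()

        let baseFOV = CGFloat(device.activeFormat.videoFieldOfView)
        sceneCamera.fieldOfView = baseFOV

        // lensPosition (0 = nearest focus, 1 = farthest) is key-value observable,
        // so every focus move triggers an update here.
        observation = device.observe(\.lensPosition, options: [.new]) { [weak self] device, _ in
            guard let self = self else { return }
            let adjusted = baseFOV + CGFloat(device.lensPosition) * self.maxBoost
            DispatchQueue.main.async {
                self.sceneCamera.fieldOfView = adjusted
            }
        }
    }
}

The open question is what the correct mapping from lensPosition to the effective field of view actually is.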
Related
I am working on a machine vision project and need to determine the angle of an object in x and y relative to the center of the frame (the center, in my mind, being where the camera is pointed). I originally did NOT do a camera calibration (I calculated the angle per pixel by taking a picture of a dense grid and doing some simple math). While doing some object tracking I noticed some strange behaviour, which I suspected was due to distortion. I also noticed that an object that should have been dead center of my frame was not; the camera had to be shifted, or the angle changed, for that to be true.
I performed a calibration in OpenCV and got a principal point of (363.31, 247.61) at a resolution of 640x480. The angle per pixel obtained from cv2.calibrationMatrixValues() was very close to what I had calculated, but up to this point I had been assuming the center of the frame was at (640/2, 480/2). I'm hoping someone can confirm: going forward, do I assume that my (0,0) in Cartesian coordinates is now at the principal point? Perhaps I can use my new camera matrix to correct the image so that my original assumption holds? Or am I out to lunch and in need of some direction on how to achieve this?
Also, was my assumption of 640/2 correct, or should it technically have been (640-1)/2? Thanks all!
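To illustrate the convention in question (angles are measured from the principal point, not from width/2 and height/2), here is a small sketch; it is written in Swift to stay consistent with the rest of this page, and the fx/fy values are placeholders rather than your actual calibration output:

import Foundation

// Placeholder intrinsics; substitute the fx, fy from your own camera matrix.
let fx = 530.0, fy = 530.0
let cx = 363.31, cy = 247.61        // principal point from the calibration above

/// Angle of pixel (u, v) relative to the optical axis, in degrees.
/// The principal point, not (640/2, 480/2), plays the role of (0, 0).
func angleFromAxis(u: Double, v: Double) -> (x: Double, y: Double) {
    let ax = atan((u - cx) / fx) * 180.0 / Double.pi
    let ay = atan((v - cy) / fy) * 180.0 / Double.pi
    return (ax, ay)
}

print(angleFromAxis(u: 320.0, v: 240.0))   // the nominal image centre is not at 0°, 0°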
I'm writing an application in C++ which gets the camera pose using fiducial markers, takes a lat/lon coordinate in the real world as input, and as output streams a video with an X marker showing the location of that coordinate on the screen.
When I move my head, the X stays in the same place spatially (because I know how to move it on the screen based on the camera pose, or even hide it when I look away).
My only problem is converting the real-world coordinate to a coordinate on the screen.
I know my own gps coordinate and the target gps coordinate.
I also have the screen size (height / width) .
How can I translate all of this to an x,y pixel on the screen using OpenCV?
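I imagine the projection going roughly like this (a sketch only, written in Swift rather than C++ to match the rest of this page; Intrinsics, projectToPixel and the flat-Earth ENU conversion are my own simplifications, and it assumes the marker-derived pose is expressed in the same local frame as the target, which in practice also requires a known heading):

import Foundation
import simd

struct Intrinsics { var fx, fy, cx, cy: Double }   // from your camera calibration

/// East/north offset (metres) of the target from the observer,
/// using a small-distance equirectangular approximation.
func localOffset(observerLat: Double, observerLon: Double,
                 targetLat: Double, targetLon: Double) -> SIMD3<Double> {
    let metersPerDegLat = 111_320.0
    let metersPerDegLon = 111_320.0 * cos(observerLat * .pi / 180)
    let east  = (targetLon - observerLon) * metersPerDegLon
    let north = (targetLat - observerLat) * metersPerDegLat
    return SIMD3(east, north, 0)        // assumes the target is at the observer's height
}

/// Pinhole projection of a world point into pixel coordinates,
/// given the camera pose (rotation + translation, world -> camera).
func projectToPixel(worldPoint: SIMD3<Double>,
                    rotation: simd_double3x3, translation: SIMD3<Double>,
                    k: Intrinsics) -> (u: Double, v: Double)? {
    let p = rotation * worldPoint + translation
    guard p.z > 0 else { return nil }   // behind the camera: hide the X
    let u = k.fx * p.x / p.z + k.cx
    let v = k.fy * p.y / p.z + k.cy
    return (u, v)                       // drawable only if 0 <= u < width, 0 <= v < height
}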
In my view, your question isn't very clear.
OpenCV is an image processing library.
OpenCV won't do this conversion for you; you'll need a solution based on your own algorithms. So I have some advice and an experiment to explain a few things.
You can simulate showing your real-life position on screen with any programming language. Imagine you want to develop measurement software that measures a house-plan image on screen by drawing lines along the edges of all the walls (you know the lengths of some walls thanks to an image like the one below).
If you want to measure the wall of the WC at the bottom, you must know how many pixels correspond to how many feet, so first you should draw a line from the start to the end of a known length and count how many pixels wide it is. For example, suppose 12'4" corresponds to a width of 9 pixels. You can then calculate the length of the bottom WC wall with a basic proportion. Of course, this is only a basic ratio.
I know this is not exactly what you need, but I hope this answer gives you some ideas.
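For what it's worth, the proportion described above in code form (a throwaway sketch; the 9-pixel and 14-pixel figures are just example numbers, not measurements):

// The known reference: a 12'4" wall that spans 9 pixels in the plan image.
let knownLengthFt = 12.0 + 4.0 / 12.0
let knownLengthPx = 9.0
let ftPerPixel = knownLengthFt / knownLengthPx

// Hypothetical: the WC wall at the bottom measures 14 pixels on screen.
let wcWallPx = 14.0
let wcWallFt = wcWallPx * ftPerPixel
print(wcWallFt)   // ≈ 19.2 ft for these example numbers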
I have the camera parameters and I know the distance between the camera and a flat region (for example, a wall). The roll and pitch values of the camera are constant (assume them as given here). However, the yaw value can be anything between -60 and 60 degrees, and I know it as well. Is it possible to calculate the distance of any point in the image to the camera location?
No, not without additional information. An object that's not on the "flat region" can be anywhere. To convince yourself that this is the case, note that, given an image of the object, you can always "shrink" it and move it closer to the camera to produce the same image.
If the object has known size and shape, then you can trivially find its distance from its apparent magnification in the image.
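A minimal sketch of that known-size case, assuming a simple pinhole model and made-up numbers (the focal length in pixels would come from a calibration):

// distance = focal_length_in_pixels * real_size / size_in_pixels
let focalLengthPx = 1000.0   // assumed fx from a calibration
let realHeightM   = 1.80     // known physical height of the object, metres
let imageHeightPx = 120.0    // measured height of the object in the image, pixels

let distanceM = focalLengthPx * realHeightM / imageHeightPx
print(distanceM)             // 15 m for these made-up numbers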
I have solved the problem. In my scenario, there is no object. Thank you.
I am working on a camera app that supports iOS 8 and above. Due to my requirements, I need to change the lens values (not the focus point). My lens position values are something like 36 mm, 45 mm and so on. How can I apply these values to the camera, or are there any other default values available? I am using AVCapture for taking photos. Any help would be appreciated.
I'm not sure I understand your question, because when you move the lens position you are in fact changing the camera's focal point (its optical properties); this is not the same thing as the focus point of interest, which is the area of interest in the image.
So, to change the lens position you can do it with this method:
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition
completionHandler:(void (^)(CMTime syncTime))handler
You can find it in the AVCaptureDevice class reference (this will be the device you selected as the camera for your app). The lensPosition value is between 0 and 1; it's a relative positioning system, so you can't use those values (36 mm and 45 mm) directly.
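In Swift the same call looks roughly like this (a sketch; the 0.7 is an arbitrary normalized position, not a mapping of 36 mm, and error handling is minimal):

import AVFoundation

func lockLens(on device: AVCaptureDevice, at position: Float) {
    guard device.isFocusModeSupported(.locked) else { return }
    do {
        try device.lockForConfiguration()
        device.setFocusModeLocked(lensPosition: position) { syncTime in
            // syncTime is when the lens actually reached the requested position
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}

// Example: lock the lens somewhere past the middle of its travel.
// lockLens(on: captureDevice, at: 0.7)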
I need to display an OpenGL cubemap (a 360° panoramic image used as a texture on a cube) 'aligned' with North on an iPhone.
0) The panoramic image is split into six images, applied onto the faces of the cube as a texture.
1) Since the 'front' face of the cubemap does not point towards North, I rotate the look-at matrix by theta degrees (found manually). This way when the GL view is displayed it shows the face containing the North view.
2) I rotate the OpenGL map using the attitude from CMDeviceMotion of a CMMotionManager. The view moves correctly. However, it is not yet 'aligned' with the North.
So far everything is fine. I need only to align the front face with North and then rotate it according to the phone motion data.
3) So I access the heading (compass heading) from a CLLocationManager. I read just one heading (the first update I receive) and use this value in step 1 when building the look-at matrix.
After step 3, the OpenGL view is aligned with the surrounding environment. The view is kept (more or less) aligned at step 2, by the CMMotionManager. If I launch the app facing South, the 'back' face of the cube is shown: it is aligned.
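To make steps 1–3 concrete, this is roughly how the rotations compose (a Swift/simd sketch of the idea rather than my exact GL code; whether the attitude matrix needs transposing and the sign of the heading offset depend on conventions I'm glossing over here):

import CoreMotion
import simd

// Yaw rotation (about the world up axis) for the fixed North alignment:
// theta found manually in step 1 plus the compass heading from step 3.
func yawRotation(degrees: Double) -> simd_double4x4 {
    let radians = degrees * .pi / 180
    return simd_double4x4(simd_quatd(angle: radians, axis: SIMD3(0, 1, 0)))
}

// Per-frame view rotation: the device attitude applied on top of the fixed heading offset.
func viewRotation(attitude: CMAttitude, headingOffsetDegrees: Double) -> simd_double4x4 {
    let m = attitude.rotationMatrix
    let att = simd_double4x4(rows: [
        SIMD4(m.m11, m.m12, m.m13, 0),
        SIMD4(m.m21, m.m22, m.m23, 0),
        SIMD4(m.m31, m.m32, m.m33, 0),
        SIMD4(0,     0,     0,     1)
    ])
    return att * yawRotation(degrees: headingOffsetDegrees)
}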
However, sometimes the first compass reading is not very accurate. Furthermore, its accuracy improves as the user moves the phone. The idea is to continuously modify the rotation applied to the look-at matrix, taking into account the continuous readings of the compass heading.
So I have also implemented step 4.
4) Instead of using only the first heading reading, I keep receiving updates from the CLLocationManager and use them to continuously align the look-at matrix, which is now rotated by the angle theta (found manually at step 1) and by the angle returned by the compass service.
After step 4 nothing works: the view is stuck in one position and moving the phone does not change it. The cube rotates with the phone, meaning that I always see the same face of the cube.
From my point of view (but I am clearly wrong), first rotating the look-at matrix to align with North and then applying the rotation computed from the DeviceMotion attitude should change nothing with respect to step 3.
Which step of my reasoning is wrong?