Compensate screen rotation by modifying GL_PROJECTION in OpenGL - ios

Is it possible to set up GL_PROJECTION in OpenGL to compensate screen rotations?
I think there are a lot of applications for this, in augmented reality or stereoscopic views, for instance.
Particularly, I would like to make a "fake" change of perspective when the mobile device is tilted.
This effect is shown in the image.

Your particular case actually requires adjusting both the projection and the modelview. The modelview is responsible for setting the point of view: with an angled view, the vantage point shifts. However, the lens also gets shifted (literally, it's just like a shift lens on a real camera), and that requires a shift term in the projection.
Now, your sketch is a bit unclear on what is actually desired. What I can clearly say is that it's not rotated, but shifted. Suggestion: download Blender, set up a simple scene and tinker with the "Shift" parameters of the camera object; as you'll see, you will have to apply a combination of lens shift and camera shift.
But generally speaking: Yes, adjustment of the projection matrix is required in some situations.
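For what it's worth, here is a minimal numpy sketch (matrix math only, not iOS code) of how such a shift term enters a glFrustum-style projection: the near-plane window is slid sideways instead of rotating the camera. The shift_x/shift_y convention is an assumption made just for this example.

```python
import numpy as np

def off_axis_projection(fov_y_deg, aspect, near, far, shift_x=0.0, shift_y=0.0):
    # Build a glFrustum-style projection matrix whose near-plane window is
    # slid sideways/up by shift_x / shift_y, given as fractions of the full
    # window width / height (a convention chosen for this sketch only).
    top = near * np.tan(np.radians(fov_y_deg) / 2.0)
    right = top * aspect
    dx, dy = shift_x * 2.0 * right, shift_y * 2.0 * top

    l, r = -right + dx, right + dx
    b, t = -top + dy, top + dy

    # Conventional row-major notation of glFrustum(l, r, b, t, near, far).
    return np.array([
        [2 * near / (r - l), 0.0,                (r + l) / (r - l),            0.0],
        [0.0,                2 * near / (t - b), (t + b) / (t - b),            0.0],
        [0.0,                0.0,               -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,                0.0,               -1.0,                          0.0],
    ])

# Example: compensate a tilt by shifting the window by ~10% of its height.
P = off_axis_projection(60.0, 16 / 9, 0.1, 100.0, shift_y=0.1)
print(P)
```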

Related

Why do we use fixed focus for AR tracking in ARCore?

I am using ARCore to track an image. Based on the following reference, the FOCUSMODE of the camera should be set to FIXED for better AR tracking performance.
Since we can get the camera's intrinsic focal length for each frame, why do we need to use a fixed focus?
With a fixed camera focus, ARCore can better calculate parallax (neither near nor distant real-world objects should be out of focus), so camera tracking will be reliable and accurate. At the tracking stage, your device must be able to clearly distinguish the textures of surrounding objects and their feature points in order to build a correct 3D scene.
The scene-understanding stage requires a fixed focus as well (to correctly detect planes, estimate lighting intensity and direction, etc.). That's what you expect from ARCore, isn't it?
A fixed focus also guarantees that your "in-focus" rendered 3D model is placed in the scene next to real-world objects that are "in focus" too. However, if you're using the Depth API, you can deliberately defocus both real-world and virtual objects.
P.S.
In the future ARCore engineers may change the aforementioned behaviour of camera focus.

Is camera calibration required if I change the height of the camera?

I am doing single-camera calibration with a checkerboard, using one fixed camera position for the calibration. My question is: if I keep the same position but change the height of the camera, do I need to calibrate again? If not, will I get the same result at the different height?
In my case, I changed the height of the camera while everything else stayed the same, and I got a different result. So I was wondering whether I need to recalibrate the camera or not.
Please help me out.
Generally speaking, and to achieve the greatest accuracy, you will need to recalibrate the camera whenever it is moved. However, if the lens mount is rigid enough w.r.t. the sensor, you may get away with only updating the extrinsic calibration, especially if your accuracy requirements are modest.
To see why this is the case, notice that, unless you have a laboratory-grade rig holding and moving the camera, you can't just change the height only. With a standard tripod, for example, there will in general be a motion in all three axes amounting to a significant fraction of the sensor's size, which will be reflected in a visible motion of several pixels with respect to the scene.
Things get worse / more complicated when you also add rotation to re-orient the field of view, since a mechanical mount will not, in general, rotate the camera around its optical center (i.e. the exit pupil of the lens), and therefore every rotation necessarily comes with an additional translation.
In your specific case, since you are only interested in measurements on a plane, and therefore can compute everything using homographies, refining the extrinsic calibration amounts to just recomputing the world-to-image scale. This can easily be achieved by taking one or more images of objects of known size on the plane - a calibration checkerboard is just such an object.
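A rough sketch of that scale refinement with OpenCV's Python bindings might look like this; the board geometry and file names are made up for the example, and it assumes the intrinsics (K, dist) from the original calibration are still valid.

```python
import cv2
import numpy as np

PATTERN = (7, 5)      # inner corners of the hypothetical board
SQUARE_MM = 30.0      # square size in millimetres

# Intrinsics kept from the original calibration (file names made up).
K = np.load("camera_matrix.npy")
dist = np.load("dist_coeffs.npy")

img = cv2.imread("board_on_plane.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN)
assert found, "checkerboard not detected"
corners = cv2.cornerSubPix(
    img, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# Undistort with the old intrinsics, then relate the plane (in mm) to the
# undistorted image; this homography carries the updated world-to-image scale.
und = cv2.undistortPoints(corners, K, dist, P=K).reshape(-1, 2)
plane_mm = np.array([[c * SQUARE_MM, r * SQUARE_MM]
                     for r in range(PATTERN[1]) for c in range(PATTERN[0])],
                    dtype=np.float32)
H, _ = cv2.findHomography(und, plane_mm)   # undistorted pixels -> mm on the plane

# Any undistorted image point can now be mapped to plane coordinates:
print(cv2.perspectiveTransform(und[:1].reshape(1, 1, 2), H).ravel())
```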

How to remove "Fish Eye" effect on SceneKit camera?

I'm using SceneKit. I have created and assigned my own camera to the scene and I have adjusted its xFov and yFov. When I set a value higher than 50, there starts to be some distortion. Everything near the edges of the screen is stretched – almost like the camera suddenly becomes a "Fish Eye."
I need the xFov and yFov to be above 50 (I actually need it to be 100), but I can't have that distortion. What do I do?
What you're asking for isn't theoretically impossible per se, but it is at least theoretically interesting.
What happens to a physical camera when you increase the field of view? The wider it gets, the more "fisheye" it looks. The projection matrix and perspective divide of a 3D graphics pipeline like SceneKit work in a similar way. It looks a little different because it's a rectilinear transformation rather than the effect of a spherical lens, but it's the same general idea — it maps a volume (called a frustum) of 3D space "seen" by the camera onto the viewing plane. This is a general aspect of 3D graphics, not something specific to SceneKit, so you can find plenty of good tutorials that cover the underlying math pretty well.
That frustum projection fixes a certain relationship between the amount of viewing angle something takes up and its width on the viewing plane. You can't really change that relationship and still have a linear (well, rational, but mostly linear) transformation that 3D hardware can apply with a single matrix multiplication (and perspective divide).
You could, in theory, define a different relationship — say, one where a large angular size corresponds to a much larger part of the viewing plane near the center of the view, but to a much smaller part farther away from the center. But you can't do that in the camera transform... You'd have to do such calculations pixel by pixel in some kind of post-processing shader. (In fact, this is generally how rendering for the lenses of a VR headset works.)
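To put a number on that fixed relationship: in the rectilinear model, a point at angle theta from the optical axis lands at r = f * tan(theta) on the viewing plane, so its local footprint grows like 1/cos^2(theta). A tiny numpy illustration (not SceneKit code):

```python
import numpy as np

fov_deg = 100.0   # field of view along one axis
f = 1.0           # focal length in arbitrary units

for theta_deg in (0.0, 25.0, fov_deg / 2.0):
    theta = np.radians(theta_deg)
    r = f * np.tan(theta)              # where the point lands on the viewing plane
    stretch = f / np.cos(theta) ** 2   # d(r)/d(theta): local stretching factor
    print(f"angle {theta_deg:5.1f} deg -> plane position {r:4.2f}, stretch {stretch:4.2f}x")

# At the edge of a 100-degree view the stretch factor is about 2.4 times the
# value at the center -- exactly the "stretched near the edges" look described
# in the question.
```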

Extrinsic Camera Calibration Using OpenCV's solvePnP Function

I'm currently working on an augmented reality application using a medical imaging program called 3DSlicer. My application runs as a module within the Slicer environment and is meant to provide the tools necessary to use an external tracking system to augment a camera feed displayed within Slicer.
Currently, everything is configured properly so that all that I have left to do is automate the calculation of the camera's extrinsic matrix, which I decided to do using OpenCV's solvePnP() function. Unfortunately this has been giving me some difficulty as I am not acquiring the correct results.
My tracking system is configured as follows:
The optical tracker is mounted in such a way that the entire scene can be viewed.
Tracked markers are rigidly attached to a pointer tool, the camera, and a model that we have acquired a virtual representation for.
The pointer tool's tip was registered using a pivot calibration. This means that any values recorded using the pointer indicate the position of the pointer's tip.
Both the model and the pointer have 3D virtual representations that augment a live video feed as seen below.
The pointer and camera (Referred to as C from hereon) markers each return a homogeneous transform that describes their position relative to the marker attached to the model (Referred to as M from hereon). The model's marker, being the origin, does not return any transformation.
I obtained two sets of points, one 2D and one 3D. The 2D points are the coordinates of a chessboard's corners in pixel coordinates, while the 3D points are the corresponding world coordinates of those same corners relative to M. These were recorded using OpenCV's findChessboardCorners() function for the 2D points and the pointer for the 3D ones. I then transformed the 3D points from M space to C space by multiplying them by C inverse. This was done because the solvePnP() function requires that the 3D points be described relative to the world coordinate system of the camera, which in this case is C, not M.
Once all of this was done, I passed the point sets into solvePnP(). However, the transformation I got was completely incorrect. I am honestly at a loss as to what I did wrong. Adding to my confusion is the fact that OpenCV uses a different coordinate format from OpenGL, which is what 3DSlicer is based on. If anyone can provide some assistance in this matter I would be exceptionally grateful.
Also if anything is unclear, please don't hesitate to ask. This is a pretty big project so it was hard for me to distill everything to just the issue at hand. I'm wholly expecting that things might get a little confusing for anyone reading this.
Thank you!
UPDATE #1: It turns out I'm a giant idiot. I recorded only collinear points because I was too impatient to record the entire checkerboard. Of course this meant that there were nearly infinite solutions to the least-squares regression, as I had only constrained the solution in 2 dimensions! My values are much closer to my ground truth now, and in fact the rotation columns seem correct, except that they're all completely out of order. I'm not sure what could cause that, but it seems that my rotation matrix was mirrored across the center column. In addition, my translation components are negative when they should be positive, although their magnitudes seem to be correct. So now I've basically got all the right values in all the wrong order.
Mirror/rotational ambiguity.
You basically need to reorient your coordinate frames by imposing the constraints that (1) the scene is in front of the camera and (2) the checkerboard axes are oriented as you expect them to be. This boils down to multiplying your calibrated transform by an appropriate ("hand-built") rotation and/or mirroring.
The basic problem is that the calibration target you are using, even when all of its corners are seen, has at least a 180-degree rotational ambiguity unless color information is used. If some corners are missed, things can get even weirder.
You can often use prior information about the camera orientation w.r.t. the scene to resolve this kind of ambiguity, as I was suggesting above. However, in more dynamic situations, or if a further degree of automation is needed when the target may be only partially visible, you'd be much better off using a target in which each small chunk of corners can be individually identified. My favorite is Matsunaga and Kanatani's "2D barcode" one, which uses sequences of square lengths with unique cross-ratios. See the paper here.
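For illustration, here is a sketch of the basic solvePnP call plus the kind of hand-built correction described above, using OpenCV's Python bindings. The file names and intrinsics are placeholders, and which flip is right for you depends on how your tracker's axes are actually oriented.

```python
import cv2
import numpy as np

# Corner correspondences (the file names are placeholders).
object_pts = np.loadtxt("corners_3d.txt", dtype=np.float32)   # Nx3, in C space
image_pts = np.loadtxt("corners_2d.txt", dtype=np.float32)    # Nx2, in pixels

# Assumed intrinsics for this sketch; use your real camera matrix instead.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)

# OpenCV's camera frame is x-right, y-down, z-forward; OpenGL's (and hence
# 3DSlicer's) is x-right, y-up, z-backward.  Converting between the two is a
# fixed 180-degree rotation about x:
cv_to_gl = np.diag([1.0, -1.0, -1.0])
R_gl = cv_to_gl @ R
t_gl = cv_to_gl @ tvec

# Assemble the 4x4 extrinsic matrix.  If the result still looks mirrored,
# impose the expected checkerboard orientation with one more hand-built
# correction, e.g. flipping the sign of the board's x axis (example only --
# the correct flip depends on your tracker's conventions).
T = np.eye(4)
T[:3, :3] = R_gl
T[:3, 3] = t_gl.ravel()
print(T)
```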

OpenCV - calibrate camera using static images in water

I have a camera mounted vertically under water in a tank, looking downwards.
There is a flat grid on the bottom of the tank (approx. 2 m away from the camera).
I want to be able to place markers on the bottom and use computer vision to determine their exact real-world positions.
So, I need to map from pixels to mm.
If I am not mistaken, cv::calibrateCamera(...) does just this, but is dependent on moving a pattern in front of the camera.
I have just static pictures of the scene, and the camera never moves in relation to the grid. Thus, I have only a "single" image to find the parameters.
How can I do this using the grid?
Thank you.
Interesting problem! The "cute" part is the effect of the refraction at the water-glass interface on the intrinsic parameters, namely that it increases the focal length (or, conversely, reduces the field of view) compared to the same lens in air. In theory, you could calibrate in air and then correct for the difference in refraction index, but calibrating directly in water is likely to give you more accurate results.
Do you know your accuracy requirements? And have you verified that your lens/sensor combination is adequate to meet them (with an adequate margin)? To answer that question you need to estimate (either by calculation from the lens and sensor specifications, or experimentally using a resolution chart) whether you can resolve in an image the minimal distances required by your application.
From the wording of your question I think that you are interested only in measurements on a single plane. So you only need to (a) remove the nonlinear (barrel or pincushion) lens distortion and (b) estimate the homography between the plane of interest and the image. Once you have the latter, you can directly convert from undistorted image coordinates to world ones by matrix multiplication. Additionally if (as I imagine) the plane of interest is roughly parallel to the image plane, you should not have any problem keeping the entire field-of-view in focus.
Of course, for all of this to work as expected, you should make sure that the tank bottom is really flat, within the measurement tolerances of your application. Otherwise you are really dealing with a 3D problem, and need to modify your procedures accordingly.
The actual procedure depends a lot on the size of the tank, which you don't indicate clearly. If it's small enough that it is practical to manufacture a chessboard-like movable calibration target, by all means go for it. You may want to take a look at this other answer for suggestions. In the following I'll discuss the more interesting case in which your tank is large, e.g. the size of a swimming pool.
I'd proceed by sticking calibration markers in a regular grid on the pool bottom. I'd probably choose checker-like markers like these, maybe printing them myself with a good laser printer on plastic with an adhesive backing (assuming you can leave them in place forever). You should plan on having quite a few of them, say an 8x8 or 10x10 grid, covering as much as possible of the field of view of the camera in its operating position and pose. To help with lining up the grid nicely you might use a laser line projector of suitable fan angle, or a laser pointer attached to a rotating support. Note carefully that it is not necessary that they be affixed in a precise X-Y grid (which may be complicated, depending on the size of your pool), only that their positions with respect to any arbitrarily chosen (but fixed) three of them be known. In other words, you can attach them to the bottom approximately in a grid, then measure the distances of three extreme corners from each other as accurately as you can, thus building a base triangle, then measure the distances of all the other corners from the vertices of the triangle, and finally reconstruct their true positions with a bit of trigonometry. It's basically a surveying problem and, depending on your accuracy requirements and budget, you may want to enroll a friendly local professional surveyor (and their tools) to get it done as precisely as necessary.
Once you have your grid, you can fill the pool, get your camera, and focus and f-stop the lens as needed for the application. From now on you may not touch the focus and f-stop ever again, under penalty of miscalibrating - exposure can only be controlled by the exposure time, so make sure to have enough light. Disable any and all auto-focus and auto-iris functions, if present. If the camera has a non-rigid lens mount (e.g. a DSLR), you'll need some kind of mechanical rig to ensure that the lens-body pair stays rigid. Stop down the aperture as far as you can, given the available lighting and sensor, so as to have a fair bit of depth of field available. Then take several photos (~10) of the grid, moving and rotating the camera, and going a bit closer and farther away than your expected operating distance from the plane. You'll want to "see" in some images some significant perspective foreshortening of the grid - this is needed to accurately calibrate the focal length. Avoid JPG and any other lossy compression format when storing the images - use lossless PNG or TIFF.
Once you have the images, you can manually mark and identify the checker markers in them. For a once-off project like this I would not bother with automatic identification, just do it manually (e.g. in Matlab, or even in Photoshop or Gimp). To help identify the markers, you could, for example, print a number next to each of them. Once you have the manual marks, you can refine them automatically to subpixel accuracy, e.g. using cv::cornerSubPix.
You're almost done. Feed the "reference" measured positions of the real corners, and the observed ones in all images, to your favorite camera calibration routine, e.g. cv::calibrateCamera. Use the nominal focal length of the camera (converted to pixels) as an initial estimate, along with zero distortion. If all goes well, you will obtain the camera's intrinsic parameters, which you will keep, and the camera poses for all images, which you'll throw away.
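A sketch of that calibration step with OpenCV's Python bindings might look like the following; the file names, sensor resolution and nominal focal-length figures are placeholders, and it assumes every marker was identified in every image.

```python
import cv2
import numpy as np

# Surveyed marker positions on the pool bottom, in metres (z = 0 on the plane).
world_xy = np.loadtxt("surveyed_markers_m.txt", dtype=np.float32)      # Nx2
obj_pts = np.hstack([world_xy, np.zeros((len(world_xy), 1), np.float32)])

# Manually marked + subpixel-refined image positions in each of ~10 photos.
img_pts = [np.loadtxt(f"marks_{i:02d}.txt", dtype=np.float32)          # Nx2 each
           for i in range(10)]

image_size = (4000, 3000)                    # sensor resolution (assumed)
f_px = 12e-3 / 1.85e-6                       # nominal focal length / pixel pitch
K0 = np.array([[f_px, 0.0, image_size[0] / 2],
               [0.0, f_px, image_size[1] / 2],
               [0.0, 0.0, 1.0]])

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [obj_pts] * len(img_pts), img_pts, image_size,
    K0, np.zeros(5), flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("RMS reprojection error (px):", rms)
# Keep K and dist; the per-image poses (rvecs, tvecs) get thrown away.
```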
Now you can mount the camera in your final setup, as needed by your application, and take one further image of the grid. Mark and refine the corner positions as before. Undistort their image positions using the distortion parameters returned by the calibration. Finally compute the homography between the reference positions of the real markers (in meters) and their undistorted positions, and you're done.
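And the final mapping step, sketched under the same assumptions (intrinsics saved from the calibration above, surveyed marker positions in metres, made-up file names):

```python
import cv2
import numpy as np

# Intrinsics saved from the calibration step (placeholder file names).
K = np.load("camera_matrix.npy")
dist = np.load("dist_coeffs.npy")

# One image taken from the operating pose: marker_px are the marked and
# refined image positions, marker_m the surveyed plane positions in metres.
marker_px = np.loadtxt("operating_image_marks.txt", dtype=np.float32)  # Nx2
marker_m = np.loadtxt("surveyed_markers_m.txt", dtype=np.float32)      # Nx2

# Undistort with the calibrated intrinsics, then estimate the homography
# from undistorted pixel coordinates to plane coordinates.
und = cv2.undistortPoints(marker_px.reshape(-1, 1, 2), K, dist, P=K)
H, _ = cv2.findHomography(und.reshape(-1, 2), marker_m)

def pixel_to_metres(u, v):
    # Map a raw image point to pool-bottom coordinates in metres.
    p = cv2.undistortPoints(np.array([[[u, v]]], np.float32), K, dist, P=K)
    return cv2.perspectiveTransform(p, H).reshape(2)

print(pixel_to_metres(1234.0, 987.0))
```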
HTH
To calibrate the camera you do need multiple images of the checkerboard (or one of the other patterns found here). What you can do is calibrate the camera outside of the water, or do a calibration sequence once.
Once you have that information (focal length, center of the lens, distortion, etc.), you can use the solvePnP function to estimate the orientation of a single board. This estimate gives you the distance from the camera to the board.
A completely different alternative could be to find what kind of lens the camera uses and manually fill in the data. I've not tried this, so I'm uncertain how well this would work.
