Reconstructing a non-planar polygon in 3D given a 2D projection and known polygon dimensions

I have a non-planar object with 9 points and known dimensions in 3D, i.e. the length of every side is known. Given a 2D projection of this shape, I want to reconstruct its 3D model, that is, recover the shape of the object in the real world: the angles between its different sides in 3D. For example, given all the dimensions of every part of a table and a 2D image of it, I'm trying to reconstruct its 3D model.
I've read about homographies, perspective transforms, Procrustes analysis and the fundamental/essential matrix so far, but haven't found a solution that applies here. I'm new to this, so I might have missed something. Any direction on this would be really helpful.

In your question, you mention that you want to achieve this using only a single view of the object. In that case, homographies and essential/fundamental matrices won't help you, because they require at least two views of the scene to make sense. If you don't have any priors on the shape of the objects that you want to reconstruct, the key information that you'll be missing is (relative) depth, and in that case I think these are the two possible solutions:
Leverage a learning algorithm. There is a rich literature on 6-DoF object pose estimation with deep networks, see this paper for example. You won't have to deal with depth directly if you use those, since such networks are trained end-to-end to estimate a pose in SE(3).
Add many more images and use a dense photometric SLAM/SfM pipeline, such as ElasticFusion. However, in that case you will need to segment the resulting models, since the estimate they produce covers the entire environment, which can be difficult depending on the scene.
However, as you mentioned in your comment, it is possible to reconstruct the model up to scale if you have very strong priors on its geometry. In the case of a planar object (a cuboid is just an extension of that), you can use this simple algorithm (more or less what they do here; there are other methods, but I find them a bit messy equation-wise):
//let's note A,B,C,D the rectangle in 3D that we are after, such that
//AB is parallel with CD. Let's also note a,b,c,d their respective
//projections in the image, i.e. a=KA (up to scale) where K is the calibration matrix, and so on.
1) Compute the common vanishing point of AB and CD. This is just the intersection
of ab and cd in the image plane. Let's call it v_1.
2) Do the same for the two other edges, i.e. bc and da. Let's call this
vanishing point v_2.
3) Now you can compute the vanishing line, which is just
crossproduct(v_1, v_2), i.e. the line going through both v_1 and v_2. This gives
you the orientation of your plane: its normal is N = K^T * l, where l is the vanishing line.
4) All you need to find now are the boundaries of the rectangle. To do
that, just consider any plane with normal N that doesn't go through
the camera center. Now find the intersections of the rays K^{-1}a, K^{-1}b,
K^{-1}c, K^{-1}d with that plane.
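A minimal NumPy sketch of those steps, assuming a known calibration matrix K and the four image points a, b, c, d given in pixel coordinates (function and variable names are just placeholders):

import numpy as np

def reconstruct_rectangle(K, a, b, c, d):
    # K: 3x3 calibration matrix; a, b, c, d: 2D pixel coordinates of the corners,
    # in order, with AB parallel to CD. Returns the 3D corners up to scale.
    to_h = lambda p: np.array([p[0], p[1], 1.0])      # homogeneous image point
    line = lambda p, q: np.cross(to_h(p), to_h(q))    # line through two points

    # 1) + 2) vanishing points of the two pairs of parallel edges
    v1 = np.cross(line(a, b), line(c, d))
    v2 = np.cross(line(b, c), line(d, a))

    # 3) vanishing line of the plane and the plane normal N = K^T * l
    l = np.cross(v1, v2)
    N = K.T @ l
    N = N / np.linalg.norm(N)

    # 4) intersect the back-projected rays K^{-1}x with the plane N . X = 1
    #    (any plane with normal N that doesn't pass through the camera centre will do)
    Kinv = np.linalg.inv(K)
    corners = []
    for p in (a, b, c, d):
        ray = Kinv @ to_h(p)
        corners.append(ray / (N @ ray))
    return np.array(corners)

The result is only determined up to scale (and overall sign); with the known side lengths of the rectangle you can rescale it afterwards.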
If you need a refresher on vanishing points and lines, I suggest you take a look at pages 213 and 216 of Hartley-Zisserman's book.

Related

How to 3d reconstruct robustly from multiple images with known poses in OpenCV

The traditional solution, e.g. for high-resolution images, is:
extract (dense) features for all images
match features to find tracks through the images
triangulate features to 3D points.
There are two problems with this in my case (many 640x480 images with small movements between them). First: matching is very slow, especially when the number of images is large, so a better solution could be optical-flow tracking, but that gets sparse with large moves (a mix of the two could solve the problem).
Second: triangulating the tracks. Although it is an over-determined problem, I find it hard to code a solution (here I am asking for a simplification of what I read in the references).
I have searched quite a bit for libraries in that direction, with no useful result.
Again, I have ground-truth camera matrices and only need the 3D positions as a first estimate (without bundle adjustment).
An existing software solution would be of great help, as I don't want to reinvent the wheel, though detailed instructions would also be helpful.
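For the triangulation step specifically, here is a minimal sketch of standard linear (DLT) triangulation, assuming the 3x4 projection matrices P_i = K_i [R_i | t_i] are available for every view that observes the track (all names are placeholders):

import numpy as np

def triangulate_dlt(projections, points2d):
    # projections: list of 3x4 projection matrices P_i = K_i [R_i | t_i]
    # points2d:    list of the corresponding observations (x_i, y_i), one per view
    A = []
    for P, (x, y) in zip(projections, points2d):
        # each observation contributes two linear equations in the homogeneous X
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    # least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]            # dehomogenise to a 3D point

For exactly two views, OpenCV's cv::triangulatePoints does the same thing; the SVD version above accepts any number of views, which is what makes the over-determined case straightforward.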
The underlying geometry for estimating the depth is the following.
As you said, we have the camera poses Q_L and Q_R, and we pick a point X in the world; X_L is its projection on the left image. With Q_L, Q_R and X_L we can set up the epipolar plane; the rest of the job is easy: we search along the ray through (Q_L, X), which exactly parameterizes the depth of X_L. With different depth hypotheses X1, X2, ..., we get different projections onto the right image.
Now we compare the pixel intensity difference between X_L and each reprojected point in the right image, pick the smallest one, and the corresponding depth is exactly what we want.
Pretty easy, eh? The truth is it's much harder: the image intensity along the epipolar line is never strictly convex.
This makes matching extremely hard, since the non-convexity means any distance function will have multiple critical points (candidate matches). How do you decide which one is correct?
To handle this problem, patch-based matching was proposed: methods like SAD, SSD and NCC were introduced to make the distance function as convex as possible, but they still cannot handle large-scale repeated-texture and low-texture problems.
To address this, one searches over a longer range along the epipolar line and describes the whole distribution of matching scores as a function of depth.
Plotting the matching score (vertical axis) against depth (horizontal axis) leads to the depth filter: this distribution is usually modelled as a Gaussian, a.k.a. a Gaussian depth filter, which describes the uncertainty of the depth; combined with patch matching, this gives a rough depth proposal.
Finally, an optimization tool such as Gauss-Newton or gradient descent is used to refine the depth estimate.
To sum up, the overall depth-estimation process consists of the following steps:
assume the depth of every pixel follows an initial Gaussian distribution
search along the epipolar line and reproject points into the target frame
triangulate the depth and compute its uncertainty with the depth filter
run steps 2 and 3 again to get a new depth distribution and merge it with the previous one (a sketch of this merge is shown below); if it has converged, stop, otherwise start again from step 2.
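A minimal sketch of the merge in the last step, treating the current estimate and the new triangulated depth as two Gaussians and fusing them with the standard product-of-Gaussians update (all numbers below are made up):

def fuse_depth(mu_prior, var_prior, mu_meas, var_meas):
    # product of two Gaussians: the fused estimate and its (smaller) variance
    var_post = var_prior * var_meas / (var_prior + var_meas)
    mu_post = (var_meas * mu_prior + var_prior * mu_meas) / (var_prior + var_meas)
    return mu_post, var_post

# usage: start from a broad prior and keep fusing new measurements until converged
mu, var = 5.0, 4.0                                                # e.g. 5 m mean, large variance
for mu_meas, var_meas in [(4.2, 0.5), (4.4, 0.4), (4.3, 0.3)]:    # made-up measurements
    mu, var = fuse_depth(mu, var, mu_meas, var_meas)
    if var < 0.01:                                                # convergence threshold
        break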

Computer Vision 3D transform from Images

Here is a simplified version (but it gives the essence) of the problem. Suppose I have a box in space in some reference position/orientation and a calibrated camera C with known position/orientation. I take a picture of the box and can identify N feature points x_i on the projected image B.
Now suppose someone moves the box (rigid-body transform) a relatively small amount. I take a picture of the box and can again identify N feature points x*_i. I want to solve for the rigid-body transform T.
My strategy is to equivalently suppose the box did not move, and suppose I have another camera C* that is found by transforming camera C by the inverse of the transform T. So the N points x_i are the projected feature points on image B relative to camera C*.
So then I believe I can solve for the essential matrix E from the two sets of projected image points (provided I have enough; I think I need 8). (Since the cameras are calibrated, I think I can just use the essential matrix, not the fundamental matrix?) From there I can use matrix decomposition to extract the rotation and translation that describe how the cameras differ. The inverse of that is the transform I want.
Does that sound like it will work? What happens if I can't find 8 feature points, but say only 3? Will I be able to get an estimate of the essential matrix or will it totally be wrong?
Yes, it will work, and it is possible to solve it even with some lost features, as long as you are able to tell which is which. As far as I know you need at least 8 points, like you said. What you described is how "structure from motion" algorithms work. Please look at slide 3 in this lecture, and then at slide 20: the 8-point algorithm. It relates exactly to what you are talking about. If you realized all this without even knowing about structure from motion, then I am really impressed.
Here's the link to the lecture:
https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws14-15/3DCV_lec06_SFM1.pdf
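A minimal OpenCV (Python) sketch of that estimation, assuming pts1 and pts2 are matched, undistorted pixel coordinates from the two pictures and K is the calibration matrix (names are placeholders):

import cv2

def relative_pose(pts1, pts2, K):
    # pts1, pts2: Nx2 float arrays of matched image points, K: 3x3 calibration matrix
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # decompose E and keep the physically valid (R, t) via the cheirality check
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t     # rotation and unit-length translation of camera 2 w.r.t. camera 1

Note the recovered translation is only defined up to scale. With only 3 correspondences the essential matrix is under-determined (the minimal solver needs 5 points), but since the 3D positions of the features on the box are known in your setup, a PnP approach (e.g. cv2.solvePnP) can still recover the pose from as few as 3-4 points.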

How to determine distance of objects from camera using Epipolar Plane Image?

I am working on converting 2D images into a 3D environment. The images were collected from a video taken with a lateral camera motion. Then the images were placed one behind the other, so it would be easy to find the correspondences between the images. This is called a spatio-temporal volume.
Next I take a slice from the spatiotemporal volume. That slice is called the Epipolar Plane Image.
Using the Epipolar Plane Image, I want to calculate the depth of the objects in the scene and build a 3D environment. I have listed the reference, but I have not been able to figure out the math described in the paper. Can someone help me figure this out? Any help is appreciated.
Reference
Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion
The math in this situation is easy and straightforward.
First, let's define the coordinate systems of two overlapping images taken by the same camera with focal length f, using the following setup.
Let us say that the first camera position is defined as:
X_01 = Y_01 = Z_01 = 0
while its orientation, expressed with three Euler angles, is:
omega_1 = phi_1 = kappa_1 = 0
With this definition the corresponding rotation matrix is the identity matrix.
The second camera is simply shifted along the base direction by the baseline B:
X_02 = B, Y_02 = Z_02 = 0
and since its orientation is the same as that of the first camera, all Euler angles remain zero:
omega_2 = phi_2 = kappa_2 = 0
which again means that the corresponding rotation matrix is the identity matrix.
If the images overlap and the orientation is the same, the same point is measured at (x', y') in the first image and at (x'', y'') in the second image (each with some measurement accuracy), and the only difference between the two measurements is the parallax p = x' - x''.
This geometrical situation can be described by using the intercept theorem (similar triangles):
Z = f * B / p,  X = Z * x' / f,  Y = Z * y' / f
As you see, it's not complicated. But be aware that this solution is certainly not the best, since its base assumption that all orientation angles are the same cannot be fulfilled in reality.
If you need to be accurate, then you have to perform a bundle adjustment. However, these equations are often used to determine an approximate solution for this geometric situation, whose values are then used to linearize the collinearity equations.
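A tiny sketch of those normal-case formulas, assuming the focal length f is expressed in pixels, B is the baseline, and (x1, y1), (x2, y2) are the measured image coordinates of the same point in the two images (names are placeholders):

def normal_case_point(f, B, x1, y1, x2, y2):
    # f: focal length in pixels, B: baseline, (x1, y1)/(x2, y2): image measurements
    p = x1 - x2          # parallax along the baseline direction
    Z = f * B / p        # distance from the baseline (intercept theorem)
    X = Z * x1 / f       # back-project the point measured in the first image
    Y = Z * y1 / f
    return X, Y, Z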

Project a 2D point from one camera view onto the corresponding 2D point in another camera view of the same scene

I'm using OpenCV in C++ in a multi-view setup with two cameras. I have the intrinsic and extrinsic parameters for both cameras.
I would like to map an (X, Y) point in view 1 to the corresponding point in the second view. I'm slightly unsure how I should use the intrinsic and extrinsic matrices to convert the point to 3D world coordinates and finally end up with the new 2D point in view 2.
It is (normally) not possible to take a 2D coordinate in one image and map it into another 2D coordinate without some additional information.
The main problem is that a single point in the left image will map to a line in the right image (an epipolar line). There are an infinite number of possible corresponding locations because depth is a free parameter. Secondly it's entirely possible that the point doesn't exist in the right image i.e. it's occluded. Finally it may be difficult to determine exactly which point is the right correspondence, e.g. if there is no texture in the scene or if it contains lots of repeating features.
Although the fundamental matrix (which you get out of cv::stereoCalibrate anyway) gives you a constraint between points in the two images, x'^T F x = 0, for a given x' there will be a whole family of x's which satisfy the equation.
Some possible solutions are as follows:
You know the 3D location of a 2D point in one image. Provided that 3D point is in a common coordinate system, you just use cv::projectPoints with the calibration parameters of the other camera you want to project into.
You do some sparse feature detection and matching using something like SIFT or ORB. Then you can calculate a homography to map the points from one image to the other. This makes a few assumptions about things being planes. If you Google panorama homography, there are plenty of lecture slides detailing this.
You calibrate your cameras, perform an epipolar rectification (cv::stereoRectify, cv::initUndistortRectifyMap, cv::remap) and then run them through a stereo matcher. The output is a disparity map which gives you exactly what you want: a per-pixel mapping from one camera to the other, i.e. left[y, x] corresponds to right[y, x - disparity_map[y, x]].
(1) is by far the easiest, but it's unlikely you have that information already. (2) is often doable and might be suitable, and as another commenter pointed out will be poor where the planarity assumption fails. (3) is the general (ideal) solution, but has its own drawbacks and relies on the images being amenable to dense matching.
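As an illustration of option (1), here is a short Python sketch assuming you already know the 3D location of the point in a common world frame, plus the second camera's intrinsics/distortion (K2, dist2) and pose (rvec2, tvec2); all values and names below are made-up placeholders:

import numpy as np
import cv2

# known 3D location (world frame) of the point seen in view 1 -- made-up value
X_world = np.array([[0.3, 0.1, 2.0]])

# pose of camera 2 (world -> camera) and its intrinsics/distortion -- made-up values
rvec2 = np.zeros(3)                         # rotation as a Rodrigues vector
tvec2 = np.array([0.1, 0.0, 0.0])
K2 = np.array([[700.0, 0.0, 320.0],
               [0.0, 700.0, 240.0],
               [0.0, 0.0, 1.0]])
dist2 = np.zeros(5)

pts2d, _ = cv2.projectPoints(X_world, rvec2, tvec2, K2, dist2)
print(pts2d.reshape(-1, 2))                 # pixel coordinates of the point in view 2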

Finding simple shapes in 2D point clouds

I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes I was hoping that there might be a simpler approach.
By saying you don't want to do any training I am guessing that you mean you don't want to do any feature matching; feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and would be applicable along with RANSAC to your problem for guessing and verifying good hypotheses about object pose.
The simplest approach is template matching, but it may be too computationally expensive (it depends on your use case). In template matching you simply loop over the possible locations, orientations and scales of the object and check how well the template (a cloud that looks like an L or a T at that location, orientation and scale) matches (or you sample possible locations, orientations and scales randomly). Checking the template can be made fairly fast if your points are organised (or if you organise them, e.g. by converting them into pixels).
If this is too slow there are many methods for making template matching faster and I would recommend to you the Generalised Hough Transform.
Here, before starting the search, you loop over the boundary of the shape you are looking for (T or L) and, for each boundary point, record the gradient direction there, together with the angle and the distance from that point to a chosen origin of the object template. You add these to a table (let us call it Table A) for each boundary point, and you end up with a table that maps gradient direction to the set of possible locations of the object origin. Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B), where each cell contains the number of votes for the object being at that location. Then, for each point in the target image (point cloud), you compute the gradient, look up the set of possible object locations in Table A for that gradient direction, and add one vote to each of the corresponding locations in Table B (the Hough space).
This is a very terse explanation but knowing to look for Template Matching and Generalised Hough transform you will be able to find better explanations on the web. E.g. Look at the Wikipedia pages for Template Matching and Hough Transform.
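As a rough sketch of the brute-force template-matching idea, assuming the point cloud has already been rasterised into an 8-bit image cloud_img and template_img is a rendering of the L/T shape at a known scale (both hypothetical names):

import cv2

def match_template_rotations(cloud_img, template_img, angle_step=5):
    # brute-force search over orientations; returns (score, top-left corner, angle)
    best = (-1.0, None, None)
    h, w = template_img.shape[:2]
    for angle in range(0, 360, angle_step):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(template_img, M, (w, h))
        scores = cv2.matchTemplate(cloud_img, rotated, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val > best[0]:
            best = (max_val, max_loc, angle)
    return best

A scale loop works the same way; the Generalised Hough Transform avoids this brute force by voting in a parameter space instead.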
You may need to:
1- extract some features from the image in which you are looking for the object,
2- extract another set of features from the image of the object itself,
3- match the features (this is possible using methods like SIFT),
4- when you find a match, apply the RANSAC algorithm; it provides you with a transformation matrix (including translation and rotation information).
To get started with SIFT, start from here; it is actually one of the best source codes written for SIFT. It includes the RANSAC algorithm, so you do not need to implement it yourself.
You can read about RANSAC here.
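A compact sketch of steps 1-4, assuming both the object and the scene are available as grayscale images obj_img and scene_img (hypothetical names); note that sparse, texture-less point-cloud renderings may yield few usable keypoints:

import numpy as np
import cv2

def estimate_pose_sift_ransac(obj_img, scene_img):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(obj_img, None)
    kp2, des2 = sift.detectAndCompute(scene_img, None)

    # match descriptors and keep the good ones via Lowe's ratio test
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fit of a similarity transform: rotation, translation (and scale)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return angle, M[:, 2]                     # orientation and translation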
Two common ways of detecting the shapes (L, T, ...) in your 2D point-cloud data would be using OpenCV or the Point Cloud Library. I'll explain the steps you may take for detecting those shapes in OpenCV. You can use the following three methods, and the choice of the right one depends on the shape (size, area of the shape, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step would be converting your points to a grayscale Mat object; by doing that you basically make an image of your 2D point-cloud data, so you can use the other OpenCV functions. Then you may smooth the image in order to reduce noise; the result will be a somewhat blurry image that still contains the real edges. If your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
The next step is choosing the method. If the shape consists of orthogonal lines (such as L or T), you can use the Hough Line Transformation to detect the lines, and after detection you can loop over the lines and compute the dot products of their direction vectors (since the lines are orthogonal, the result should be 0). You can find more information about the Hough Line Transformation here.
Another way would be detecting your shape using Template Matching. Basically, you should make a template of your shape (L or T) and use it in the matchTemplate function. You should keep in mind that the size of the template should match the scale of the shape in your image; otherwise you may need to resize your image. More information about the algorithm can be found here.
If the shapes enclose an area, you can find the contours of the shape using findContours; it will give you the polygons around the shapes you want to detect. For instance, if your shape is an L, its approximated contour would be a polygon with roughly 6 sides. You can also use other filters along with findContours, such as computing the area of the shape.
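A short sketch of the contour route, again assuming the point cloud has been rasterised into an 8-bit grayscale image cloud_img (hypothetical name):

import cv2

def shape_poses_from_contours(cloud_img, min_area=100.0):
    # returns centre, orientation and vertex count of each sufficiently large blob
    blurred = cv2.GaussianBlur(cloud_img, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    poses = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue
        # approximate the contour by a polygon: an L gives roughly 6 vertices
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
        poses.append({"centre": (cx, cy), "angle": angle, "vertices": len(approx)})
    return poses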
