First off, I'd like to state that I'm very new to this field and apologize if the question is a little too repetitive. I've looked around but in vain. I'm working on reading Hartley and Zisserman's book but it's taking me a while.
My problem is that I've got 3 video sources of an area, and I need to find the camera position at each frame of the video. I do not have any information about the cameras that took the videos (i.e., no intrinsics).
Looking for a solution, I came across SfM and tried existing software, namely Bundler and VSfM, and they both seem to have worked quite well. However, I've got a couple of questions about it.
1) Is SfM really required in my case? SfM produces a sparse reconstruction, and the common points between images are also an output; since camera positions are all I really need, is the full pipeline necessary, or are there less complex, more suitable methods I could use instead?
2) From what I've read, I need to calibrate the camera and find its intrinsics and extrinsics. How can I do this without knowing either? I've looked at the 5-point problem and others, but most of them require you to know the camera's intrinsic parameters, which I don't have, and I cannot use a pattern such as a chessboard to calibrate the cameras since the videos come from a source outside my control.
Thanks for your time!
Based on my experience, the short answer is:
1) You cannot reliably estimate the 3D pose of the cameras independently of the 3D structure of the scene. Moreover, since your cameras are moving independently, I think SfM is the right way to approach your problem.
2) You need to estimate the cameras' intrinsics in order to estimate useful (i.e. Euclidean) poses and scene reconstruction. If you cannot use the standard calibration procedure with a chessboard and the like, you can have a look at autocalibration techniques (see also chapter 19 of Hartley and Zisserman's book). This calibration procedure is done independently for each camera and only requires several images taken at different positions, which seems appropriate in your case.
You can actually accomplish your task in one massive bundle adjustment procedure, up to a scale parameter. But it is a very complicated thing even if you aren't a novice. You don't need a 3D reconstruction, just an essential matrix, which can be obtained from 2D point correspondences and decomposed into a rotation and a translation, but this does require the intrinsic parameters. To get them you have to have at least three frames.
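To make the essential-matrix route concrete, here is a minimal Python/OpenCV sketch, assuming you already have matched 2D points between two frames and an intrinsic matrix K (the file names below are placeholders, not anything from the question):

```python
import cv2
import numpy as np

# Placeholder inputs: matched pixel coordinates from two frames and an
# intrinsic matrix K obtained from some calibration/autocalibration step.
pts1 = np.load("pts_frame1.npy")   # shape (N, 2), hypothetical file
pts2 = np.load("pts_frame2.npy")   # shape (N, 2), hypothetical file
K = np.load("intrinsics_K.npy")    # shape (3, 3), hypothetical file

# Estimate the essential matrix with RANSAC to reject outlier matches.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)

# Decompose E into a rotation and a unit-length translation direction.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

print("Rotation:\n", R)
print("Translation direction (up to scale):\n", t.ravel())
```

Note that the recovered translation is only a direction; the scale has to come from elsewhere, which is the "up to a scale parameter" caveat above.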
Finally, drop the Zisserman book; it will drive you crazy. Read Simon Prince's "Computer Vision" instead.
I want to do something like this but in reverse, so that the cameras are outside and pointing inward. Let's start with the abstract and get specific:
1) Are there any TOOLS that will do this for me? How close can I get using existing software?
2) Say the nearest tool is a graphics library like OpenCV. I've taken linear algebra and have an undergraduate degree in CS but without any special training in graphics. Where should I go from there?
3) If I really am undergoing a decade-long spiritual quest of a self-teaching+programming exercise to make this happen, are there any papers or other resources that you are aware of that might aid me?
I think the demo you linked uses a 360° camera (see the black circle on the bottom) and does not involve stitching in any way.
About your question, are you aware of this work? They don't do stitching either, just blending between different views.
If you use inward views, then the objects you will observe will probably be quite close to the cameras, while standard stitching assumes that objects are far away. Close 3D objects mean high distortion when you change the viewpoint (i.e. parallax & occlusions), which makes it difficult to interpolate between two views. Hence, if you want stitching, then your main problem is to correctly handle parallax effects & occlusions between the views.
In my opinion, the most promising approach would be to do live stereo matching (i.e. dense 3D reconstruction) between the two camera images closest to your current viewpoint, and then interpolate the estimated disparities to generate an expected image. However, it's not likely to run in real-time, as demonstrated in the demo you linked, and the result could be quite ugly...
EDIT
You can also have a look at this paper, which uses a different but interesting approach, however maybe not directly useful in your case since it requires the new viewpoint to be visible in the available images.
I am looking at camera calibration techniques with OpenCV and saw the chessboard and circle-grid methods, but I want to calibrate the camera with something that already exists in the real world and doesn't have to be printed (printers are also not very accurate in what they print).
Is it possible to do calibration with complex shapes like the Coca Cola logo on the cans? Is it a problem that the surface is curved?
Thanks
Depending on what you want to achieve, this is not necessarily a bad idea, and you are not the first one to have it. There was a technology that used a CD, a strongly standardised object that at least used to exist in most households, for a simple camera calibration task. (There is little technical material to be found online about this, as the technology was proprietary. This is a business document where the use of the CD is mentioned. Algorithmically, however, it is not difficult if you know camera calibration.)
The question is whether the precision you get is sufficient for your application. Don't expect any miracles here. Generally you can use almost any object you like to learn something about a camera, as long as you can detect it reliably and you know its geometry. Almost certainly you will have to take several pictures of the object. Curved surfaces are no problem per se. I regularly used a cylinder (larger than a beverage can, though, with a simple-to-detect pattern) to calibrate a complete camera rig of 12 SLRs.
Don't expect to find out-of-the-box solutions and don't expect the implementation to be trivial. You will have to work your way through the math. I recommend the book by Hartley and Zisserman, Multiple View Geometry in Computer Vision. This paper describes an analysis-by-synthesis approach to calibration, which is the way to go here (it does not describe exactly what you want, but the approach should generalise to arbitrary objects as long as you can detect them).
I can understand your wish, but it's a bad idea.
The calibration algorithm works by comparing real-world points seen by the camera with a synthetic model (yes, you have to supply that, too!). So, while it's easy to compute a 2D chessboard grid on the fly and use that, it will be very hard to do for your tin can, or any arbitrary household item you grab.
Just give in, and print a rectangular chessboard grid on a piece of paper
(OpenCV comes with a PDF for that already).
Don't use a real-life chessboard: a square one is ambiguous under 90° rotations.
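For reference, the standard chessboard calibration this answer recommends is only a few lines with OpenCV. This is a minimal sketch, assuming a board with 9x6 inner corners and a hypothetical folder of calibration photos:

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed chessboard (assumed 9x6) and the square
# size in arbitrary units (it only affects the scale of the extrinsics).
pattern_size = (9, 6)
square_size = 1.0

# The "synthetic model" mentioned above: a planar grid of corners at Z = 0.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
image_size = None

for path in glob.glob("calib_images/*.jpg"):   # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Recover the intrinsic matrix and lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsics:\n", K)
```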
Interesting idea.
What about displaying a checkerboard pattern (or something else) on an LCD screen and using that display as the calibration pattern? You would have to know the displayed size of the pattern, though.
Googling I found this paper:
CAMERA CALIBRATION BASED ON LIQUID CRYSTAL DISPLAY (LCD)
ZHAN Zongqian
http://www.isprs.org/proceedings/XXXVII/congress/3b_pdf/04.pdf
Comment: this doesn't answer the question about the Coca-Cola can, but it gives an idea for a solution to the underlying problem: camera calibration with a common object.
I have a very specific application in which I would like to try structure from motion to get a 3D representation. For now, all the software/code samples I have found for structure from motion work like this: "a fixed object is photographed from all angles to create the 3D model". This is not my case.
In my case, the camera is moving in the middle of a corridor and looking forward. Sometimes, the camera can look in other directions (left, right, up, down). The camera never goes back or looks back; it always moves forward. Since the corridor is small, almost everything is visible (no hidden spots). The corridor can be very long sometimes.
I have tried this software and it doesn't work in my particular case (but it's fantastic with normal use). Can anybody suggest a library/software/tool/paper that could address my specific needs? Or have you ever needed to implement something like that? Any help is welcome!
Thanks!
What kind of corridors are you talking about and what kind of precision are you aiming for?
A priori, I don't see why your corridor would not be a fixed object photographed from different angles. The quality of your reconstruction might suffer if you only look forward and you can't get many different views of the scene, but standard methods should still work. Are you sure that the programs you used aren't failing because of your picture quality, arrangement or other reasons?
If you have to do the reconstruction yourself, I would start by
1) Calibrating your camera
2) Undistorting your images
3) Matching feature points in subsequent image pairs
4) Extracting a 3D point cloud for each image pair
You can then orient the point clouds with respect to one another, for example via ICP between two subsequent clouds. More sophisticated methods might not yield much difference if you don't have any closed loops in your dataset (as your camera is only moving forward).
OpenCV and the Point Cloud Library should be everything you need for these steps. Visualization might be more of a hassle, but the pretty pictures are what you pay for in commercial software after all.
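To make steps 2-4 above more concrete, here is a rough Python/OpenCV sketch for one image pair, assuming the intrinsics and distortion coefficients from step 1 are already available (the file names are placeholders, and ORB is just one possible feature detector):

```python
import cv2
import numpy as np

# Assumed inputs: intrinsics/distortion from step 1 and two consecutive frames.
K = np.load("K.npy")                               # hypothetical files
dist = np.load("dist.npy")
img1 = cv2.undistort(cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE), K, dist)
img2 = cv2.undistort(cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE), K, dist)

# Step 3: detect and match ORB features between the pair.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Step 4: relative pose and triangulation, giving a point cloud up to scale.
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T    # N x 3 points for this image pair
```

Aligning consecutive clouds with ICP (e.g. PCL's IterativeClosestPoint) would then give the camera trajectory along the corridor, as described above.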
Edit (2017/8): I haven't worked on this in the meantime, but I feel like this answer is missing some pieces. If I had to answer it today, I would definitely suggest looking into the keyword monocular SLAM, which has recently seen a lot of activity, not least because of drones with cameras. Notably, LSD-SLAM is open source and may not be as vulnerable to feature-deprived views, as it operates directly on the intensity. There even seem to be approaches combining inertial/odometry sensors with the image matching algorithms.
Good luck!
FvD is right in the sense that your corridor is a static object. Your scenario is the same as moving around an object and taking images from multiple views. Your views are just not arranged to provide a 360-degree view of the object.
I see you mentioned in your previous comment that the data is coming from a video? In that case, the problem could very well be the camera calibration. A camera calibration tells the SfM algorithm about the internal parameters of the camera (focal length, principal point, lens distortion, etc.). In the absence of knowledge about these, the bundler in VSfM uses information from the image's EXIF data. However, I don't think video stores any EXIF information (not 100% sure). As a result, I think the entire algorithm is running with bad focal-length information and cannot solve for the orientation.
Can you extract a few frames from the video and see if there is any EXIF information?
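As a quick check, something like the following sketch (using Pillow; the sensor width, image width, and file names are assumptions) can tell you whether extracted frames carry a focal length and roughly what it would be in pixels:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_focal_length(path):
    """Return the focal length (in mm) stored in EXIF, or None if absent."""
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    named = {TAGS.get(k, k): v for k, v in exif.items()}
    return named.get("FocalLength")

# Frames dumped from the video, e.g. with `ffmpeg -i video.mp4 frame_%03d.jpg`.
focal_mm = exif_focal_length("frame_001.jpg")
if focal_mm is None:
    print("No EXIF focal length; VSfM would have to guess the intrinsics.")
else:
    # Converting mm to pixels needs the sensor width, which is an assumption
    # here (6.17 mm is a common 1/2.3" sensor); adjust if the camera is known.
    sensor_width_mm = 6.17
    image_width_px = 1920
    f_px = float(focal_mm) * image_width_px / sensor_width_mm
    print("Approximate focal length in pixels:", f_px)
```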
If I take a picture with a camera, so that I know the distance from the camera to an object such as a scale model of a house, I would like to turn this into a 3D model that I can maneuver around so I can comment on different parts of the house.
If I sit down and think about taking more than one picture, labeling direction and distance, I should be able to figure out how to do this, but I thought I would ask if someone has a paper that might help explain it more.
What language you explain in doesn't matter, as I am looking for the best approach.
Right now I am considering showing the house and letting the user provide some assistance for height, such as the distance from the camera to the top of that part of the model. Given enough of this, it should be possible to start calculating heights for the rest, especially if there is a top-down image plus pictures taken at angles from the four sides to calculate relative heights.
Then I expect that parts will also need to differ in color to help separate out the various parts of the model.
As mentioned, the problem is very hard and is often also referred to as multi-view object reconstruction. It is usually approached by solving the stereo-view reconstruction problem for each pair of consecutive images.
Performing stereo reconstruction requires that pairs of images are taken that have a good amount of visible overlap of physical points. You need to find corresponding points such that you can then use triangulation to find the 3D co-ordinates of the points.
Epipolar geometry
Stereo reconstruction is usually done by first calibrating your camera setup so you can rectify your images using the theory of epipolar geometry. This simplifies finding corresponding points as well as the final triangulation calculations.
If you have:
the intrinsic camera parameters (requiring camera calibration),
the camera's position and rotation (its extrinsic parameters), and
8 or more physical points with matching known positions in two photos (when using the eight-point algorithm)
you can calculate the fundamental and essential matrices using only matrix theory and use these to rectify your images. This requires some theory about co-ordinate projections with homogeneous co-ordinates and also knowledge of the pinhole camera model and camera matrix.
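As a rough illustration of that step, here is a hedged OpenCV sketch that estimates the fundamental matrix from point correspondences and rectifies an uncalibrated image pair (the point files, image names, and image size below are placeholders):

```python
import cv2
import numpy as np

# Placeholder inputs: N >= 8 matched pixel coordinates per image (e.g. from a
# SIFT/SURF matching step) and the image size.
pts1 = np.load("left_points.npy").astype(np.float32)    # shape (N, 2)
pts2 = np.load("right_points.npy").astype(np.float32)   # shape (N, 2)
h, w = 720, 1280                                        # assumed image size

# Fundamental matrix from the correspondences (eight-point inside RANSAC).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
pts1 = pts1[inlier_mask.ravel() == 1]
pts2 = pts2[inlier_mask.ravel() == 1]

# Uncalibrated rectification: homographies H1, H2 that map epipolar lines to
# corresponding horizontal scanlines in the two images.
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
if ok:
    rect1 = cv2.warpPerspective(cv2.imread("left.jpg"), H1, (w, h))
    rect2 = cv2.warpPerspective(cv2.imread("right.jpg"), H2, (w, h))
```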
If you want a method that doesn't need the camera parameters and works for unknown camera set-ups you should probably look into methods for uncalibrated stereo reconstruction.
Correspondence problem
Finding corresponding points is the tricky part that requires you to look for points of the same brightness or colour, or to use texture patterns or some other features to identify the same points in pairs of images. Techniques for this either work locally by looking for a best match in a small region around each point, or globally by considering the image as a whole.
If you already have the fundamental matrix, it will allow you to rectify the images such that corresponding points in two images will be constrained to a line (in theory). This helps you to use faster local techniques.
There is currently still no ideal technique to solve the correspondence problem, but possible approaches could fall in these categories:
Manual selection: have a person hand-select matching points.
Custom markers: place markers or use specific patterns/colours that you can easily identify.
Sum of squared differences: take a region around a point and find the closest whole matching region in the other image.
Graph cuts: a global optimisation technique based on optimisation using graph theory.
For specific implementations you can use Google Scholar to search through the current literature. Here is one highly cited paper comparing various techniques:
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms.
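As one concrete example from the local end of that spectrum, OpenCV ships a semi-global block matcher that computes a dense disparity map on a rectified pair. The sketch below assumes the images are already rectified, and the parameter values are reasonable defaults rather than tuned ones:

```python
import cv2

# Rectified grayscale image pair (assumed file names).
left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: local matching costs plus a smoothness term,
# i.e. somewhere between the purely local (SSD) and global (graph cut) ideas.
block_size = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,                 # must be a multiple of 16
    blockSize=block_size,
    P1=8 * block_size ** 2,             # penalty for small disparity changes
    P2=32 * block_size ** 2,            # penalty for large disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2)

# Disparities come back as fixed-point values scaled by 16.
disparity = matcher.compute(left, right).astype(float) / 16.0
```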
Multi-view reconstruction
Once you have the corresponding points, you can then use epipolar geometry theory for the triangulation calculations to find the 3D co-ordinates of the points.
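For illustration, here is a self-contained toy example of that triangulation step: a made-up 3D point is projected into two hypothetical cameras and then recovered with OpenCV's triangulation routine (all numbers are invented for the example):

```python
import cv2
import numpy as np

# Invented intrinsics and relative pose for two cameras.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))   # small rotation about Y
t = np.array([[-0.5], [0.0], [0.0]])                    # baseline of 0.5 units

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # first camera at origin
P2 = K @ np.hstack([R, t])                              # second camera

# A known 3D point (homogeneous) and its projections into the two views.
X = np.array([[0.2], [0.1], [4.0], [1.0]])
x1 = P1 @ X
x1 = x1[:2] / x1[2]
x2 = P2 @ X
x2 = x2[:2] / x2[2]

# Triangulate back from the two 2D observations; should recover ~[0.2, 0.1, 4.0].
X_est = cv2.triangulatePoints(P1, P2, x1, x2)
print("Recovered point:", (X_est[:3] / X_est[3]).ravel())
```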
This whole stereo reconstruction would then be repeated for each pair of consecutive images (implying that you need an order to the images or at least knowledge of which images have many overlapping points). For each pair you would calculate a different fundamental matrix.
Of course, due to noise or inaccuracies at each of these steps you might want to consider how to solve the problem in a more global manner. For instance, if you have a series of images that are taken around an object and form a loop, this provides extra constraints that can be used to improve the accuracy of earlier steps using something like bundle adjustment.
As you can see, both stereo and multi-view reconstruction are far from solved problems and are still actively researched. The less you want to do in an automated manner the more well-defined the problem becomes, but even in these cases quite a bit of theory is required to get started.
Alternatives
If it's within the constraints of what you want to do, I would recommend considering dedicated hardware sensors (such as the XBox's Kinect) instead of only using normal cameras. These sensors use structured light, time-of-flight or some other range imaging technique to generate a depth image which they can also combine with colour data from their own cameras. They practically solve the single-view reconstruction problem for you and often include libraries and tools for stitching/combining multiple views.
Epipolar geometry references
My knowledge is actually quite thin on most of the theory, so the best I can do is to further provide you with some references that are hopefully useful (in order of relevance):
I found a PDF chapter on Multiple View Geometry that contains most of the critical theory. In fact the textbook Multiple View Geometry in Computer Vision should also be quite useful (sample chapters available here).
Here's a page describing a project on uncalibrated stereo reconstruction that seems to include some source code that could be useful. They find matching points in an automated manner using one of many feature detection techniques. If you want this part of the process to be automated as well, then SIFT feature detection is commonly considered to be an excellent non-real-time technique (since it's quite slow).
A paper about Scene Reconstruction from Multiple Uncalibrated Views.
A slideshow on Methods for 3D Reconstruction from Multiple Images (it has some more references below its slides towards the end).
A paper comparing different multi-view stereo reconstruction algorithms can be found here. It limits itself to algorithms that "reconstruct dense object models from calibrated views".
Here's a paper that goes into lots of detail for the case that you have stereo cameras that take multiple images: Towards robust metric reconstruction via a dynamic uncalibrated stereo head. They then find methods to self-calibrate the cameras.
I'm not sure how helpful all of this is, but hopefully it includes enough useful terminology and references to find further resources.
Research has made significant progress, and these days it is possible to obtain pretty good-looking 3D shapes from 2D images. For instance, our recent research work, titled "Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks", took a big step towards solving the problem of obtaining 3D shapes from 2D images. In our work, we show that you can not only go from 2D to 3D directly and get a good, approximate 3D reconstruction, but also learn a distribution of 3D shapes in an efficient manner and generate/synthesize 3D shapes. Below is an image of our work showing that we are able to do 3D reconstruction even from a single silhouette or depth map (on the left). The ground-truth 3D shapes are shown on the right.
The approach we took has some contributions related to cognitive science or the way the brain works: the model we built shares parameters for all shape categories instead of being specific to only one category. Also, it obtains consistent representations and takes the uncertainty of the input view into account when producing a 3D shape as output. Therefore, it is able to naturally give meaningful results even for very ambiguous inputs. If you look at the citation to our paper you can see even more progress just in terms of going from 2D images to 3D shapes.
This problem is known as Photogrammetry.
Google will supply you with endless references, just be aware that if you want to roll your own, it's a very hard problem.
Check out The Deadalus Project. Although the website does not contain a gallery with illustrative information about the solution, it posts several papers and info about the working method.
I watched a lecture by one of the main researchers of the project (Roger Hubbold), and the image results are quite amazing! It is a complex and long problem, though, with a lot of tricky details to take into account to get an approximation of the 3D data. Take, for example, the 3D information from wall surfaces, for which the heuristic works as follows: take a photo of the scene with normal illumination, then retake the picture from the same position with the flash at full power, subtract the two images and divide the result by a pre-taken flash calibration image, apply a box filter to this result, and then post-process it to estimate depth values. The whole process is explained in detail in this paper (which is also posted/referenced on the project website).
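Just to illustrate the image arithmetic in that heuristic (not the paper's exact post-processing), a rough sketch with assumed file names might look like this:

```python
import cv2
import numpy as np

# Hypothetical input files following the flash/no-flash heuristic above.
ambient = cv2.imread("scene_ambient.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
flash = cv2.imread("scene_flash.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
flash_calib = cv2.imread("flash_calibration.png",
                         cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Isolate the flash contribution and normalise by the calibration image.
flash_only = flash - ambient
ratio = flash_only / np.maximum(flash_calib, 1e-3)   # avoid division by zero

# Box filter before post-processing into depth estimates.
smoothed = cv2.boxFilter(ratio, ddepth=-1, ksize=(15, 15))

# Flash intensity falls off roughly with 1/d^2, so a brighter normalised
# response suggests a closer surface; the paper's actual post-processing is
# more involved than this crude inversion.
depth_proxy = 1.0 / np.sqrt(np.maximum(smoothed, 1e-6))
```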
Google Sketchup (free) has a photo matching tool that allows you to take a photograph and match its perspective for easy modeling.
EDIT: It appears that you're interested in developing your own solution. I thought you were trying to obtain a 3D model of an image in a single instance. If this answer isn't helpful, I apologize.
Hope this helps if you are trying to construct a 3D volume from a 2D stack of images! You can use an open-source tool such as ImageJ Fiji, which comes with a 3D viewer plugin.
https://quppler.com/creating-a-classifier-using-image-j-fiji-for-3d-volume-data-preparation-from-stack-of-images/
Having searched around on SO and also checked the OpenCV list without finding an answer, I'm posting my query here.
Problem: Match 2 photos of the same scene, shot from 2 slightly different camera angles, and with slightly different lens distortions, with slightly different zoom-levels, and shot under slightly different lighting conditions.
Constraints:
"Slightly different" in the above statements can be taken to mean a maximum of about 10% in most cases.
The scene in question is to be considered an indoor scene, or an outdoor scene with limited detail.
Matching accuracy of 75% would be acceptable.
The photos aren't high-resolution (shot with mostly budget consumer-grade or cell-phone cameras).
What gives me the hope that this problem might be solvable is the existence of software that stitches photos to create panoramas. They seem to figure out the overlapping sections auto-magically. They do so even when the horizon orientations don't match exactly, slight differences exist in exposure level or background illumination, and minor zoom level differences exist. I think, what I need is a very similar workflow and set of algorithms.
Note that while my question might seem similar to one here, actually it is not.
You need to compute the homography between the images, which requires point correspondences such as SURF interest points.
Once you have the homography, you can do a projective transformation of the images so that they match up. Following that, you can try some sort of blending at the seams to make it look seamless.
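As a rough sketch of that pipeline (using ORB instead of SURF, since SURF sits in the contrib modules; the file names are placeholders), the matching, homography estimation, and warping steps look like this:

```python
import cv2
import numpy as np

# Hypothetical input photos of the same scene from slightly different angles.
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# ORB is used here instead of SURF (SURF is patented and lives in the
# opencv-contrib modules); any detector with descriptors will do.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust homography estimation; the inlier mask says which matches agree
# with it, which is itself a rough measure of how well the photos match.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
print("Inlier ratio:", mask.sum() / len(mask))
```

The RANSAC inlier ratio is a cheap proxy for how well the two photos match, which may already be enough for the 75% accuracy target mentioned in the question.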
This paper describes it pretty well. You can replace the Multi-scale Oriented Patches used by Szeliski with SURF interest points. Here are some more resources to get you started:
CMU Lecture on homography and mosaics
Same thing with a bit about feathering
Detailed PPT