OpenCV translation vector in a known unit

Is there any way to compute the translation vector in a known unit like 'mm' or 'cm' between two planar objects, using corresponding feature points? If yes, please let me know how I can do that.
So far I've tried Essential Matrix computation (meant for non-planar objects) and didn't get the translation in a known unit.
Thanks in advance!

Only if you know the physical distance between some of those points (at least two) on the plane, so as to fix the scale. Otherwise you are out of luck.
Without a reference distance, there is no way to tell whether you are looking at two buildings hundreds of yards away or two matchboxes nearby: this is how miniature visual effects in movies are done.
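To make the "fix the scale" step concrete, here is a minimal Python/OpenCV sketch. The intrinsic matrix K, the point matches, and the 50 mm reference distance are all assumptions for illustration, not from the question: it decomposes a plane-to-plane homography (whose translation only comes out up to the unknown plane distance d) and recovers d from one known distance between two points on the plane.

```python
import numpy as np
import cv2

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # assumed camera intrinsics

# Matched feature points on the plane (pixels), e.g. from SIFT + RANSAC;
# synthetic placeholders here.
pts1 = np.float32([[100, 120], [400, 110], [390, 380], [110, 400], [250, 240]])
pts2 = np.float32([[118, 131], [421, 124], [406, 396], [129, 414], [267, 254]])

H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)

# decomposeHomographyMat returns each candidate translation divided by the
# unknown distance d from camera 1 to the plane, i.e. only up to scale.
num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)

known_dist_mm = 50.0  # assumed: points 0 and 1 are 50 mm apart on the plane
Kinv = np.linalg.inv(K)

for R, t, n in zip(Rs, ts, normals):
    n = n.ravel()
    # Back-project the two reference pixels as rays in camera-1 coordinates
    # and intersect them with the plane n.X = d, evaluated at d = 1.
    rays = [Kinv @ np.array([u, v, 1.0]) for (u, v) in (pts1[0], pts1[1])]
    denoms = [n @ r for r in rays]
    if min(abs(x) for x in denoms) < 1e-9:
        continue  # ray nearly parallel to the plane; skip this candidate
    X0, X1 = (r / den for r, den in zip(rays, denoms))  # plane points at d = 1
    dist_at_unit_d = np.linalg.norm(X0 - X1)
    d = known_dist_mm / dist_at_unit_d      # actual plane distance in mm
    print("candidate translation (mm):", (t.ravel() * d))
```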

This tutorial, Measuring size of objects in an image with OpenCV, can help you as a reference, I feel.
Hope it helps others as well.

Related

Orienting VTK PolyData objects in the same direction

I have different objects coming from DICOM files (isolated bones) loaded with VTK as meshes (vtkPolyData). Each one has a different orientation, and I'm trying to rotate them by the appropriate angles so that all of them share the same maximum-variance direction (which I expect to be the longest dimension of the bone). The idea is to align the bones in parallel. I was thinking of getting the maximum-variance direction with PCA or a similar technique and rotating each bone by the corresponding angle to match a particular direction (for example the Z-axis).
I have no idea how to compute the maximum-variance direction of a vtkPolyData object. Any ideas? Could I extract this information from the cell data normals? Any other proposals for re-orienting the bones?
Any suggestion will be highly appreciated.
Thanks a lot.
Found this, but it's extremely resource-demanding for high-resolution meshes:
https://kitware.github.io/vtk-examples/site/Python/PolyData/AlignTwoPolyDatas/
But I can do the job if no alternatives are found.
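A minimal sketch of the PCA idea described in the question, assuming a vtkPolyData loaded from a placeholder 'bone.vtp' file: take the eigenvector with the largest eigenvalue of the point covariance (the maximum-variance direction) and rotate the mesh so that axis lies along Z.

```python
import numpy as np
import vtk
from vtk.util.numpy_support import vtk_to_numpy

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("bone.vtp")  # placeholder filename
reader.Update()
poly = reader.GetOutput()

pts = vtk_to_numpy(poly.GetPoints().GetData())     # (N, 3) vertex array
centroid = pts.mean(axis=0)

# Maximum-variance direction = eigenvector of the largest eigenvalue of the
# covariance matrix of the centred points.
eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
major = eigvecs[:, np.argmax(eigvals)]

# Rotation bringing 'major' onto the Z-axis, pivoting about the centroid so
# the bone stays in place.
z = np.array([0.0, 0.0, 1.0])
axis = np.cross(major, z)
if np.linalg.norm(axis) < 1e-9:
    axis = np.array([1.0, 0.0, 0.0])               # already aligned with Z
angle = np.degrees(np.arccos(np.clip(major @ z, -1.0, 1.0)))

tf = vtk.vtkTransform()
tf.Translate(*centroid)
tf.RotateWXYZ(angle, *axis)
tf.Translate(*(-centroid))

f = vtk.vtkTransformPolyDataFilter()
f.SetInputData(poly)
f.SetTransform(tf)
f.Update()
aligned = f.GetOutput()    # mesh with its long axis along Z
```

This only needs the point coordinates, not the cell normals, and is cheap even for large meshes since it reduces to one 3x3 eigendecomposition.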

Finding displacement between two camera frames

I'm currently working on a visual odometry project. So far I've implemented everything up to the Essential Matrix decomposition stage. But the resulting translation vector is normalized, so I cannot plot the actual movement.
Now how can I compute the displacement at some scale? I have seen suggestions to use planar homography to compute the absolute translation. I don't quite see how to do that, as the outdoor environment is not simply planar. At least, by treating the ground as planar, how can I obtain its translation? I've seen a suggestion here. Is it possible to use this approach to get the displacement between two frames?
What you are referring to is called registration. This is a vast field. There are methods for linear transformations across the entire image, and per-pixel methods (the two ends of the spectrum). Naturally, per-pixel methods are typically far slower and have many local errors.
Typically two frames have very little transformation between them, and a simple homography will do to find the general scaling between them, especially if you are talking about aerial photos. If your scene is very far from planar, then you may want to use something closer to a pixel-wise method, for example spline fitting: https://www.mathworks.com/matlabcentral/fileexchange/20057-b-spline-grid--image-and-point-based-registration
You cannot recover scale, generally speaking, unless you can recognize in the scene one or more objects of known physical size.
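For reference, a hedged sketch of the essential-matrix route from the question. K, the synthetic scene, and the 0.3 m known baseline are illustrative assumptions; in practice the matches come from a feature tracker and the scale from an external source (object of known size, GPS, wheel odometry, ...).

```python
import numpy as np
import cv2

K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])  # assumed intrinsics

# Synthetic 3D points seen from two poses, so the example is self-contained.
rng = np.random.default_rng(0)
X = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.05], [0.0]]))  # small yaw
t_true = np.array([[0.3], [0.0], [0.0]])                     # 0.3 m sideways

p1 = (K @ X.T).T
pts1 = (p1[:, :2] / p1[:, 2:]).astype(np.float32)
p2 = (K @ (R_true @ X.T + t_true)).T
pts2 = (p2[:, :2] / p2[:, 2:]).astype(np.float32)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# recoverPose returns a unit-length t; the metric scale must come from
# outside the two-view geometry.
known_baseline_m = 0.3                # assumed external measurement
print("metric translation (m):", (t * known_baseline_m).ravel())
```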

How to calculate the similarity of two line drawing images in Swift

We need to compare two hand-drawn images; the images are drawn in SpriteKit. We need to see whether the pictures roughly match or not.
For example, if someone draws a smiley face, we need to check whether a redrawn smiley face looks like the first one or not. We need to know whether the two images look alike, and to calculate an accuracy percentage of how similar they are. Please suggest some solutions. Thanks in advance.
You could try drawing each of the paths into bitmaps and comparing them. Here are a few suggestions for doing the comparison. If nothing else this will put you on the right track toward a resolution. The following project can give you a head start, but needs to be translated to Objective-C or Swift. This answer on Code Review may also prove useful.
One suggestion that seems intriguing is trying to use kCGBlendModeDestinationOver to draw the bitmaps as a trace over each other and comparing the results.
There is a mathematical tool for this called the Hausdorff distance.
The Wikipedia entry for Hausdorff distance may help you understand how it works. I can also suggest a scientific paper about comparing images with it: Comparing images with Hausdorff distance.
You may also consider using Euclidean distance for this; have a look at Euclidean distance of images.
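A small sketch of the Hausdorff suggestion using OpenCV's shape module, in Python for brevity (the question is about Swift, so treat this as pseudocode for the steps; the filenames are placeholders).

```python
import numpy as np
import cv2

img1 = cv2.imread("drawing1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("drawing2.png", cv2.IMREAD_GRAYSCALE)

# Binarize (strokes assumed dark on light) and collect the stroke contours.
_, bw1 = cv2.threshold(img1, 127, 255, cv2.THRESH_BINARY_INV)
_, bw2 = cv2.threshold(img2, 127, 255, cv2.THRESH_BINARY_INV)
c1, _ = cv2.findContours(bw1, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
c2, _ = cv2.findContours(bw2, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

pts1 = np.vstack([c.reshape(-1, 2) for c in c1]).reshape(-1, 1, 2)
pts2 = np.vstack([c.reshape(-1, 2) for c in c2]).reshape(-1, 1, 2)

# Lower distance means more similar drawings; normalize by image size if a
# percentage-style score is needed.
extractor = cv2.createHausdorffDistanceExtractor()
print("Hausdorff distance:", extractor.computeDistance(pts1, pts2))
```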

Position Estimation From Multiple Images

First off, I'd like to state that I'm very new to this field and apologize if the question is a little too repetitive. I've looked around, but in vain. I'm working on reading Hartley and Zisserman's book, but it's taking me a while.
My problem is that I've got 3 video sources of an area and I need to find the camera position at each frame of the video. I do not have any information about the cameras that took the videos (i.e. no intrinsics).
Looking for a solution, I came across SfM and tried existing software, namely Bundler and VisualSFM, and they both seem to have worked quite well. However, I've got a couple of questions about it.
1) Is SfM really required in my case? Since SfM does a sparse reconstruction and the common points between images are also an output, is it fully necessary? Are there more suitable methods that can do without it, since positions are all I really need? Or are there less complex methods I could use instead?
2) From what I've read, I need to calibrate the cameras and find their intrinsics and extrinsics. How can I do this without knowing either? I've looked at the 5-point problem and others, but most of them require you to know the intrinsic properties of the camera, which I don't have, and I cannot use a pattern such as a chessboard to calibrate them, since the videos come from a source outside my control.
Thanks for your time!
Based on my experience, the short answer is:
1) You cannot reliably estimate the 3D pose of the cameras independently of the 3D structure of the scene. Moreover, since your cameras are moving independently, I think SfM is the right way to approach your problem.
2) You need to estimate the cameras' intrinsics in order to estimate useful (i.e. Euclidean) poses and scene reconstruction. If you cannot use the standard calibration procedure, with a chessboard and co., you can have a look at autocalibration techniques (see also chapter 19 in Hartley & Zisserman's book). This calibration procedure is done independently for each camera and only requires several image samples at different positions, which seems appropriate in your case.
You can actually accomplish your task in a massive bundle adjustment procedure, up to a scaling parameter. But it is a very complicated thing even if you aren't a novice. You don't need 3D reconstruction, just an essential matrix that can be obtained from 2D projections and decomposed into rotation and translation, but this does require intrinsic parameters. To get them you have to have at least three frames.
Finally, drop the Hartley & Zisserman book, it will drive you crazy. Read Simon Prince's "Computer Vision" instead.

Histogram comparison

Is it possible to compare two intensity histograms (derived from gray-scale images) and obtain a likeness factor? In other words, I'm trying to detect the presence or absence of a soccer ball in an image. I've tried feature detection algorithms (such as SIFT/SURF), but they are not reliable enough for my application. I need something very reliable and robust.
Many thanks for your thoughts everyone.
This answer (Comparing two histograms) might help you. Generally, intensity comparisons are quite sensitive, as e.g. white during daytime is different from white during nighttime.
I think you should be able to derive something from compareHist() in OpenCV (http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html) to suit your needs, if compareHist() fits your purpose.
If not, this paper http://www.researchgate.net/publication/222417618_Tracking_the_soccer_ball_using_multiple_fixed_cameras/file/32bfe512f7e5c13133.pdf
tracks the ball from multiple cameras and you might get some more ideas from that even though you might not be using multiple cameras.
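If compareHist() does fit, here is a minimal sketch of that comparison; the filenames are placeholders for the query image and a reference image.

```python
import cv2

img1 = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# 256-bin intensity histograms, normalized so image size doesn't matter.
h1 = cv2.calcHist([img1], [0], None, [256], [0, 256])
h2 = cv2.calcHist([img2], [0], None, [256], [0, 256])
cv2.normalize(h1, h1)
cv2.normalize(h2, h2)

# Correlation metric: 1.0 means identical histograms, lower means less alike.
score = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
print("histogram similarity:", score)
```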
As kkuilla has mentioned, there are available methods to compare histograms, such as compareHist() in OpenCV.
But I am not certain it's really applicable for your program. I think you may want to use the Hough transform to detect circles.
More details can be seen in this paper:
https://files.nyu.edu/jb4457/public/files/research/bristol/hough-report.pdf
Look for the part with coins for the circle detection in the paper. I recall reading somewhere before about how to do ball detection using the Hough transform too. I can't find it now, but it should work similarly for your soccer ball.
This method should work. Hope this helps. Good luck(:
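A minimal sketch of the Hough-circle suggestion; the parameter values are illustrative guesses (they need tuning per camera setup) and 'pitch.png' is a placeholder filename.

```python
import numpy as np
import cv2

img = cv2.imread("pitch.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)          # smoothing noticeably helps HoughCircles

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

if circles is None:
    print("no ball-like circle found")
else:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate ball at ({x}, {y}), radius {r} px")
```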
