I'm trying to get the 3D coordinates of a point from the triangulation of two views.
I'm not really sure if my results are correct or not.
For example, I'm not sure whether the signs of the coordinates are correct, because I'm not sure how the camera frame is oriented.
Does the positive z axis point into or out of the image plane?
And what about x and y? Do they follow the right-hand rule?
Thanks in advance for a clarification!
The coordinate system is set according to the image and the description on this webpage
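For reference, in OpenCV's camera frame x points right, y points down, and z points forward out of the camera into the scene (a right-handed frame), so a triangulated point in front of the camera should come back with positive z. A minimal sketch with made-up intrinsics and a synthetic point:

```python
import numpy as np
import cv2

# Minimal sketch with made-up values: triangulate one synthetic point
# from two views and check the sign of its z coordinate.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
t2 = np.array([[-0.1], [0.0], [0.0]])               # second camera 0.1 m to the right
P2 = K @ np.hstack([np.eye(3), t2])

X_true = np.array([[0.2], [0.1], [5.0], [1.0]])     # homogeneous 3D point, 5 m in front
pts1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]         # 2x1 pixel coordinates in view 1
pts2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]         # 2x1 pixel coordinates in view 2

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous result
X = Xh[:3] / Xh[3]
print(X.ravel())  # ~[0.2, 0.1, 5.0]; z > 0 means the point lies in front of the camera
```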
I am working on a machine vision project and need to determine the angle of an object in x and y relative to the center of the frame (center in my mind being where the camera is pointed). I originally did NOT do a camera calibration (I calculated the angle per pixel by taking a picture of a dense grid and doing some simple math). While doing some object tracking I noticed some strange behaviour, which I suspected was due to some distortion. I also noticed that an object that should be dead center of my frame was not; the camera had to be shifted or the angle changed for that to be true.
I performed a calibration in OpenCV and got a principal point of (363.31, 247.61) with a resolution of 640x480. The angle per pixel obtained from cv2.calibrationMatrixValues() was very close to what I had calculated, but up to this point I was assuming the center of the frame was at (640/2, 480/2). I'm hoping someone can confirm: going forward, do I assume that my (0,0) in Cartesian coordinates is now at the principal point? Perhaps I can use my new camera matrix to correct the image so my original assumption holds? Or am I out to lunch and in need of some direction on how to achieve this?
Also, was my assumption of 640/2 correct, or should it technically have been (640-1)/2? Thanks all!
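For reference, this is the kind of calculation I have in mind going forward (the focal lengths below are placeholders; the principal point is from my calibration):

```python
import numpy as np

# Rough sketch of how I would get the angle of a pixel from the optical axis.
fx, fy = 600.0, 600.0          # placeholder focal lengths in pixels
cx, cy = 363.31, 247.61        # principal point from my calibration

def pixel_to_angles(u, v):
    """Angles (in degrees) of pixel (u, v) left/right and up/down of the optical axis."""
    ang_x = np.degrees(np.arctan2(u - cx, fx))
    ang_y = np.degrees(np.arctan2(v - cy, fy))
    return ang_x, ang_y

# The optical axis hits the image at (cx, cy), not at (640/2, 480/2):
print(pixel_to_angles(cx, cy))     # (0.0, 0.0)
print(pixel_to_angles(320, 240))   # small but non-zero angles
```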
I'm writing an application in C++ which gets the camera pose using fiducial markers, takes a lat/lon coordinate in the real world as input, and outputs a video stream with an X marker that shows the location of that coordinate on the screen.
When I move my head, the X stays in the same place spatially (because I know how to move it on the screen based on the camera pose, or even hide it when I look away).
My only problem is converting the real-world coordinate to a coordinate on the screen.
I know my own GPS coordinate and the target GPS coordinate.
I also have the screen size (height / width).
How can I translate all of this in OpenCV to an x,y pixel on the screen?
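To illustrate, here is a rough sketch of the projection step I imagine, assuming the target lat/lon has already been converted into the same metric world frame as the fiducial-marker pose (all values below are placeholders):

```python
import numpy as np
import cv2

# Rough sketch of the projection step (all numbers are placeholders).
# Assumes the target lat/lon has already been converted to metres in the same
# world frame that the fiducial-marker pose is expressed in (e.g. a local ENU frame).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])        # camera matrix from calibration
dist = np.zeros(5)                           # distortion coefficients

rvec = np.zeros(3)                           # world-to-camera rotation (from the marker pose)
tvec = np.zeros(3)                           # world-to-camera translation (from the marker pose)

target_world = np.array([[12.0, 3.0, 40.0]])  # target position in metres in the world frame

img_pts, _ = cv2.projectPoints(target_world, rvec, tvec, K, dist)
u, v = img_pts.ravel()
# Draw the X at (u, v) only if it lies inside the screen and the point is in front of the camera.
print(u, v)
```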
In my opinion, your question isn't very clear.
OpenCV is an image-processing library.
OpenCV by itself won't do this conversion for you; you need a solution built on your own algorithms. So I have some advice and an experiment to explain a few things.
You can simulate showing your real-life position on screen in any programming language. Imagine you want to develop measurement software that measures a house-plan image on screen by drawing lines along the edges of the walls (you know the lengths of some walls, thanks to an image like the one below).
If you want to measure the wall of the WC at the bottom, you need to know how many pixels correspond to how many feet, so first draw a line from start to end over a wall of known length and see how many pixels wide it is. For example, say 12'4" corresponds to a width of 9 pixels. Then you can calculate the length of the WC wall at the bottom with a basic proportion. Of course this is just a basic ratio.
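A rough sketch of that proportion (the numbers are the ones above plus a made-up measurement):

```python
# Rough sketch of the proportion (numbers from above plus a made-up measurement).
known_length_ft = 12 + 4 / 12           # the 12'4" wall, about 12.33 ft
known_length_px = 9                     # its measured width on screen, in pixels

ft_per_px = known_length_ft / known_length_px

wc_wall_px = 5                          # hypothetical measured width of the WC wall in pixels
wc_wall_ft = wc_wall_px * ft_per_px     # about 6.85 ft
print(wc_wall_ft)
```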
I know this is not exactly what you need, but I hope this answer gives you some ideas.
I have the camera parameters and I know the distance between the camera and a flat region (for example, a wall). The roll and pitch values of the camera are constant (assume as shown). The yaw value can be anything between -60 and 60 degrees, and I know it as well. Is it possible to calculate the distance of any point in the image to the camera location?
No, not without additional information. An object that's not on the "flat region" can be anywhere. To convince yourself that this is the case, note that, given an image of the object, you can always "shrink" it and move it closer to the camera to produce the same image.
If the object has known size and shape, then you can trivially find its distance from its apparent magnification in the image.
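For example, with a pinhole model (a rough sketch, assuming the object's real width and the focal length in pixels are known; the numbers are placeholders):

```python
# Pinhole-model sketch: distance from apparent size (all numbers are placeholders).
focal_length_px = 800.0      # fx from the camera matrix
real_width_m = 0.50          # known physical width of the object
width_in_image_px = 40.0     # measured width of the object in the image

distance_m = focal_length_px * real_width_m / width_in_image_px   # = 10.0 m
print(distance_m)
```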
I have solved the problem. In my scenario, there is no object. Thank you.
So I have my code posted below and it seems to be detecting faces properly. As I was attempting to build a rectangle around the detected face, I saw the options for .bounds.origin.x and .bounds.origin.y, and I had a few questions I was hoping you could help me answer:
Background: I have an image view that displays the AVVideoOutput, with the CIDetector working on the image view.
Are the x and y points the middle of the face that the CIDetector has found? Or are they the distance of the face from the parent view's x & y? Or the relationship of the detected face to the center of the image view?
What is the layout of the x and y plane that the values are drawn from? (For example, if the detected face is in the bottom right of the front camera, the values for x and y are smaller, close to 0 even.)
Is the plane (or planes) documented somewhere?
Can this plane be manipulated, or is it standard?
Are there positive and negative values? I haven't found any.
Are these values / is this plane ever subject to change if the camera moves, or do the values change entirely based on the center of the detected face?
Thank you for any help or documentation posted! I'll be searching as well, but I was hoping someone could help out because I haven't found much, considering I (obviously) don't really know what I am asking for...
I am looking for an efficient way to calculate the position of an object on a surface based on an image taken from a certain perspective.
Let me explain a little further.
There is an object on a rectangular flat surface.
I have a picture taken of this setup with the camera positioned at one of the corners of the surface area at a rather low angle.
On the picture I will thus see a somewhat distorted, diamond-shaped view of the surface area and somewhere on it the object.
Through some image processing I do have the coordinates of the object on the picture but now have to calculate the actual position of the object on the surface.
So I do know that the center of the object is at the pixel-coordinates (x/y) on the picture and I know the coordinates of the 4 reference points that represent the corners of the area.
How can I now calculate the "real world" position of the object most efficiently (x and y coordinates on the surface)?
Any input is highly appreciated since I have worked so hard on this I can't even think straight anymore.
Best regards,
Tom
You have to find a perspective transformation.
Here you may find an explanation and code in Matlab
HTH!
How good is your linear algebra? A perspective transformation can be described by a homography matrix. You can estimate that matrix using the four corner points, invert it, and then calculate the world coordinates of every pixel in your image.
Or you can just let OpenCV do that for you.
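For example, a minimal sketch with OpenCV (the corner pixels and surface dimensions below are placeholders):

```python
import numpy as np
import cv2

# Sketch: map the object's pixel coordinates to surface coordinates using a
# homography estimated from the four known corners (all numbers are placeholders).
corners_px = np.float32([[102,  85],    # the surface corners as seen in the image
                         [540,  70],
                         [630, 420],
                         [ 60, 410]])
surface = np.float32([[0.0, 0.0],       # the same corners in surface coordinates (e.g. metres)
                      [2.0, 0.0],
                      [2.0, 1.0],
                      [0.0, 1.0]])

H = cv2.getPerspectiveTransform(corners_px, surface)

object_px = np.float32([[[350.0, 260.0]]])            # object centre found by image processing
object_on_surface = cv2.perspectiveTransform(object_px, H)
print(object_on_surface.ravel())                      # (x, y) position on the surface
```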