I have a 3D object (a helmet) with a bunch of ArUco markers on it. I'd like to treat these markers as a board. The markers are not co-planar with each other, but that is fine, per my understanding of ArUco boards. The problem is: how do I initialize the board's object coordinates (objPoints)?
It's not easy to take a ruler and measure their relative locations, since they do not all exist in the same plane. I could take a photo, detect markers, estimate the pose for each marker, and then figure out their relative locations from that. But I think doing this with a single photo wouldn't be very precise, nor would a single photo necessarily capture every marker.
Is there a common way to obtain objPoints from multiple photos for higher precision? Or is there any better way to do it?
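In case a sketch helps: the single-photo approach described above can be written with OpenCV's aruco module by estimating each marker's pose in the camera frame and then expressing every marker's corners in the frame of one chosen reference marker; those per-marker corner arrays are what a Board object takes as objPoints. This is only a sketch under assumptions: the dictionary, marker size, file name and intrinsics below are placeholders, and estimatePoseSingleMarkers is the pre-4.7 aruco API.

    import cv2
    import numpy as np

    # Assumed inputs: a calibrated camera, one photo that sees several of the
    # helmet markers, the aruco dictionary you printed, and the marker side length.
    camera_matrix = np.array([[1000.0, 0, 960.0], [0, 1000.0, 540.0], [0, 0, 1.0]])  # placeholder
    dist_coeffs = np.zeros(5)                                                        # placeholder
    marker_length = 0.03                                                             # metres, assumed

    img = cv2.imread("helmet_view.jpg")                        # hypothetical file name
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(img, dictionary)

    # Pose of every detected marker in the camera frame (pre-4.7 aruco API;
    # with newer OpenCV, run cv2.solvePnP per marker instead).
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)

    def to_matrix(rvec, tvec):
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T

    # Pick one marker as the board origin and express every marker's corners in
    # that marker's frame: T_ref_marker = inv(T_cam_ref) @ T_cam_marker.
    ref = 0                                       # assumes the reference marker is detection 0
    T_cam_ref = to_matrix(rvecs[ref], tvecs[ref])

    half = marker_length / 2.0
    local_corners = np.array([[-half,  half, 0, 1],   # corner order used by the aruco module
                              [ half,  half, 0, 1],
                              [ half, -half, 0, 1],
                              [-half, -half, 0, 1]]).T  # 4x4, homogeneous columns

    obj_points = {}                                # marker id -> 4x3 corners in the board frame
    for i, marker_id in enumerate(ids.ravel()):
        T_cam_marker = to_matrix(rvecs[i], tvecs[i])
        T_ref_marker = np.linalg.inv(T_cam_ref) @ T_cam_marker
        obj_points[int(marker_id)] = (T_ref_marker @ local_corners)[:3].T

Repeating this over several photos and averaging the corner estimates (chaining through shared markers when the reference marker is not visible), or refining all poses jointly with a bundle adjustment, would be one way to address both the precision and the coverage concerns.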
I am trying to figure out how to roughly project the geographic position of an annotated object in an image.
The Setup
A picture with a known object in it, i.e. we know its width/height.
A bounding box highlighting where that object is in frame. X,Y,Width,Height.
The precise longitude and latitude of the camera that took the picture. The Origin.
The heading of the camera.
The focal length of the camera.
The camera sensor size.
The height of the camera off the ground.
Can anyone point me toward a solution for roughly projecting the object's location from the origin (camera) location, given those data points?
The solution is simple if you assume an ellipsoidal surface for the Earth. If you need to use a Digital Terrain Model (DTM), things quickly get more complicated. For example, your object may be visible in the image but occluded on the DTM because of various sources of error. In the following I assume you work with the ellipsoid.
Briefly, what you need to do is backproject the vertices of the image bounding box, obtaining four vectors (rays) in camera coordinates. You then transform them into Earth-Centered Earth-Fixed (ECEF) coordinates and solve for the intersection of the rays with the WGS-72 (or WGS-84) ellipsoid as explained here.
I recommend using the nvector library to help with this kind of calculation. You didn't specify the language you work with, but there are ports of nvector to many common languages, including Python and Matlab.
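If it helps to see the geometry end to end, here is a bare numpy sketch of that pipeline (nvector would replace the hand-rolled geodetic conversions): back-project a pixel through the intrinsics to a ray in camera coordinates, rotate it into the local East-North-Up frame using the heading, convert to ECEF, and intersect with the WGS-84 ellipsoid. It assumes a level camera (no pitch or roll); all concrete numbers in the usage lines at the bottom are made up.

    import numpy as np

    # WGS-84 ellipsoid semi-axes (metres).
    A_EQ = 6378137.0          # equatorial radius
    B_POL = 6356752.314245    # polar radius

    def geodetic_to_ecef(lat_deg, lon_deg, h):
        """Convert geodetic latitude/longitude/height to ECEF coordinates."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        e2 = 1.0 - (B_POL / A_EQ) ** 2
        N = A_EQ / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
        x = (N + h) * np.cos(lat) * np.cos(lon)
        y = (N + h) * np.cos(lat) * np.sin(lon)
        z = (N * (1.0 - e2) + h) * np.sin(lat)
        return np.array([x, y, z])

    def enu_basis(lat_deg, lon_deg):
        """Columns are the local East, North, Up unit vectors expressed in ECEF."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        east  = np.array([-np.sin(lon), np.cos(lon), 0.0])
        north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
        up    = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
        return np.column_stack([east, north, up])

    def pixel_ray_enu(u, v, fx, fy, cx, cy, heading_deg):
        """Back-project a pixel to a ray in the local ENU frame.
        Assumes the camera looks horizontally along its heading, x right, y down."""
        d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # ray in camera coords (z forward)
        d_cam /= np.linalg.norm(d_cam)
        h = np.radians(heading_deg)
        fwd   = np.array([np.sin(h),  np.cos(h), 0.0])          # camera forward in ENU
        right = np.array([np.cos(h), -np.sin(h), 0.0])          # camera right in ENU
        down  = np.array([0.0, 0.0, -1.0])                      # camera down in ENU
        return d_cam[0] * right + d_cam[1] * down + d_cam[2] * fwd

    def intersect_ellipsoid(origin_ecef, dir_ecef):
        """Smallest positive t with origin + t*dir on the WGS-84 ellipsoid."""
        ox, oy, oz = origin_ecef
        dx, dy, dz = dir_ecef
        a2, b2 = A_EQ ** 2, B_POL ** 2
        A = dx * dx / a2 + dy * dy / a2 + dz * dz / b2
        B = 2.0 * (ox * dx / a2 + oy * dy / a2 + oz * dz / b2)
        C = ox * ox / a2 + oy * oy / a2 + oz * oz / b2 - 1.0
        disc = B * B - 4.0 * A * C
        if disc < 0:
            return None                        # ray misses the ellipsoid
        t = (-B - np.sqrt(disc)) / (2.0 * A)   # nearest intersection
        return origin_ecef + t * dir_ecef if t > 0 else None

    # Usage with made-up numbers: camera 35 m above the ellipsoid, bottom-centre of a bbox.
    cam_ecef = geodetic_to_ecef(48.8584, 2.2945, 35.0)
    ray_enu = pixel_ray_enu(u=960, v=700, fx=1400, fy=1400,
                            cx=960, cy=540, heading_deg=120.0)
    ray_ecef = enu_basis(48.8584, 2.2945) @ ray_enu
    hit = intersect_ellipsoid(cam_ecef, ray_ecef)   # ECEF point where the ray meets the ground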
I have two images of the same object from different views. I want to perform a camera calibration, but from what I have read so far I need 3D world points to get the camera matrix.
I am stuck at this step. Can anyone explain it to me?
Popular camera calibration methods use 2D-3D point correspondences to determine the projective properties (intrinsic parameters) and the pose of a camera (extrinsic parameters). The simplest approach is the Direct Linear Transformation (DLT).
You might have seen that planar chessboards are often used for camera calibration. The 3D coordinates of their corners can be chosen by the user. Many people place the chessboard in the x-y plane, so the corners have coordinates [x, y, 0]'. However, the 3D coordinates need to be consistent.
Coming back to your object: define your own 3D coordinate system over the object and find at least six spots whose 3D positions you can determine easily. Once you have those, you have to find their corresponding 2D (pixel) positions in your two images.
There are complete examples in OpenCV. Maybe you get a better picture when reading the code.
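For instance, here is a rough Python/OpenCV sketch of the DLT route described above. The 3D points stand in for the six spots measured on your object, and their pixel positions are simulated with an invented camera purely so the example is self-contained; in practice they are the pixels you measure in one of your images.

    import cv2
    import numpy as np

    def dlt_projection_matrix(points_3d, points_2d):
        """Estimate the 3x4 projection matrix P from >= 6 non-coplanar point
        correspondences by solving the homogeneous system A p = 0 (DLT)."""
        A = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
        return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value

    # Stand-in for "six spots you measured on the object" (your own frame, metres).
    pts3d = np.array([[0, 0, 0], [0.2, 0, 0], [0, 0.2, 0],
                      [0, 0, 0.2], [0.2, 0.2, 0.1], [0.1, 0.2, 0.2]], dtype=np.float64)

    # Simulate their pixel positions with an assumed camera so the snippet runs on its own.
    K_true = np.array([[1000.0, 0, 640.0], [0, 1000.0, 360.0], [0, 0, 1.0]])
    rvec, tvec = np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 2.0])
    pts2d, _ = cv2.projectPoints(pts3d, rvec, tvec, K_true, None)
    pts2d = pts2d.reshape(-1, 2)

    P = dlt_projection_matrix(pts3d, pts2d)
    K, R, c, *_ = cv2.decomposeProjectionMatrix(P)   # intrinsics, rotation, camera centre
    K /= K[2, 2]                                     # normalise so K[2, 2] == 1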
I have a set of 3-d points and some images with the projections of these points. I also have the focal length of the camera and the principal point of the images with the projections (resulting from previously done camera calibration).
Is there any way, given these parameters, to find the correspondence between the 3-d points and the image projections automatically? I've looked through some OpenCV documentation but I didn't find anything suitable so far. I'm looking for a method that does the automatic labelling of the projections and thus establishes the correspondence between them and the 3-d points.
The question is not very clear, but I think you mean to say that you have the intrinsic calibration of the camera, but not its location and attitude with respect to the scene (the "extrinsic" part of the calibration).
This problem does not have a unique solution for a general 3d point cloud if all you have is one image: just notice that the image does not change if you move the 3d points anywhere along the rays projecting them into the camera.
If you have one or more images, know everything about the 3D cloud of points (e.g. the points belong to an object of known shape and size and are at known locations on it), and have matched them to their images, then it is a standard "camera resectioning" problem: you just solve for the camera extrinsic parameters that make the 3D points project onto their images.
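For that resectioning case, OpenCV's solvePnP solves exactly this problem. A minimal sketch, with placeholder intrinsics and with the matched pixel coordinates simulated from an invented pose so the snippet runs on its own:

    import cv2
    import numpy as np

    # Known 3D points on the object (your frame, metres); placeholders for illustration.
    object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                              [0, 0, 0.1], [0.1, 0.1, 0], [0.1, 0, 0.1]], dtype=np.float64)

    # Intrinsics from your existing calibration (placeholder values here).
    camera_matrix = np.array([[1000.0, 0, 640.0], [0, 1000.0, 480.0], [0, 0, 1.0]])
    dist_coeffs = np.zeros(5)                     # assume negligible lens distortion

    # Stand-in for the matched pixel coordinates: simulated with a known pose here;
    # in practice they come from your matching step.
    rvec_true, tvec_true = np.array([0.2, -0.1, 0.0]), np.array([0.0, 0.0, 1.5])
    image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true,
                                        camera_matrix, dist_coeffs)
    image_points = image_points.reshape(-1, 2)

    # Resectioning: recover the camera's rotation (Rodrigues vector) and translation.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)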
If you have multiple images and you know that the scene is static while the camera is moving, and you can match "enough" 3d points to their images in each camera position, you can solve for the camera poses up to scale. You may want to start from David Nister's and/or Henrik Stewenius's papers on solvers for calibrated cameras, and then look into "bundle adjustment".
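For the calibrated multi-view case, OpenCV wraps a five-point solver of the kind described in those papers; here is a sketch on synthetic matches (the scene, intrinsics and relative pose are all invented so the example is self-contained):

    import cv2
    import numpy as np

    # Synthetic stand-in for matched pixel coordinates between two views of a static scene.
    rng = np.random.default_rng(0)
    scene = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))   # 3D points in front of camera 1

    K = np.array([[1000.0, 0, 640.0], [0, 1000.0, 360.0], [0, 0, 1.0]])
    rvec_true = np.array([0.0, 0.1, 0.0])                       # small rotation about y
    tvec_true = np.array([0.5, 0.0, 0.0])                       # baseline along x

    pts1, _ = cv2.projectPoints(scene, np.zeros(3), np.zeros(3), K, None)
    pts2, _ = cv2.projectPoints(scene, rvec_true, tvec_true, K, None)
    pts1, pts2 = pts1.reshape(-1, 2), pts2.reshape(-1, 2)

    # Five-point solver inside RANSAC estimates the essential matrix ...
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # ... and recoverPose decomposes it into R and a unit-length t (scale stays unknown).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)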
If you really want to learn about this (vast) subject, Zisserman and Hartley's book is as good as any. For code, look into libmv, vxl, and the ceres bundle adjuster.
I'm trying to write a C++ program using the OpenCV library that will reconstruct 3D points from corresponding 2D markers placed on a human model.
But I have a question: how does the commercial mocap (motion capture) industry figure out which markers belong to which bone structure?
What I mean by my last question is: let's suppose there are three markers placed on the left upper arm. What method do they use to associate these three markers with the left upper arm from frame to frame?
Because a marker could just as well belong to the right upper arm, or to any other bone like the chest or femur.
So what process do they implement to differentiate between markers and assign each marker to the proper bone structure?
Do they use optical flow or SIFT to track the markers, with the markers labelled with the proper bones in frame 1? But even if the mocap industry uses such methods, aren't they very time consuming? I saw a video on YouTube where they associate and reconstruct markers in real time.
Could someone kindly tell me what procedure the commercial mocap industry follows to correspond points to individual parts of the skeleton structure?
After all, you need to do this because you have to write the xRot, yRot and zRot (rotations about the x, y and z axes) of the bones to a .bvh file so that you can view the 2D motion in 3D.
So what's the secret?
For motion capture, or tracking marked objects in general, the way to go is to track the markers themselves between consecutive frames and to keep track of the distances between the markers. A combination of this information is used to determine whether a marker is the same as one close to it in the previous frame.
These systems also often use multiple cameras, together with calibration objects whose marker positions are known, so that the relationship between the cameras can be determined. The algorithms for this detection are highly advanced in commercial mocap solutions.
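To make the frame-to-frame association concrete, here is a deliberately simplified nearest-neighbour (Hungarian) matching between the reconstructed 3D marker positions of two consecutive frames. Real systems add things like motion prediction and rigid inter-marker distance constraints on top of this; the positions and threshold below are purely illustrative.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate_markers(prev_positions, curr_positions, max_jump=0.05):
        """Match each marker in the current frame to the closest marker in the
        previous frame (Hungarian assignment on pairwise distances).
        Returns (prev_index, curr_index) pairs; matches whose distance exceeds
        max_jump (metres) are treated as new or lost markers."""
        dist = np.linalg.norm(prev_positions[:, None, :] - curr_positions[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(dist)
        return [(r, c) for r, c in zip(rows, cols) if dist[r, c] <= max_jump]

    # Hypothetical 3D marker positions (metres) in two consecutive frames.
    frame0 = np.array([[0.10, 1.40, 0.30], [0.12, 1.20, 0.31], [0.40, 1.35, 0.28]])
    frame1 = np.array([[0.11, 1.39, 0.30], [0.41, 1.34, 0.29], [0.13, 1.19, 0.31]])
    matches = associate_markers(frame0, frame1)
    # Bone labels assigned in frame 0 (e.g. "left upper arm, marker 2") carry over via `matches`.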
I need to find a marker like the ones used in Augmented Reality.
Like this:
I have a solid background in algebra and calculus, but no experience whatsoever with image processing. My thing is PHP, SQL and the like.
I just want this to work; I've read the theory behind it, but it's extremely hard for me to see how it translates into code.
The main idea is to do this as a batch process, so no interactivity is needed. What do you suggest?
Input: the sample image.
Output: the marker's coordinates and normal vector in 3D.
The use for this will be linking images that share the same marker in order to spatialize them; a primitive version of Photosynth, we could say. Just a carousel of pinned images, with the marker acting as the pin.
You can always look at open-source libraries such as ARToolKit and see how they work, but generally, in order to get the 3D coordinates of a marker, you would need to:
Do the camera calibration.
Find the marker in the image, using local features for example.
Using the calibrated camera parameters and the marker's 2D coordinates, approximate its 3D coordinates.
I've never implemented something similar myself, but I think this is the general approach you should apply to your problem.
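As one concrete (OpenCV-based, not ARToolKit-internal) way to implement those steps: detect a fiducial marker with the aruco module, then recover its pose with solvePnP; the marker's z axis is the normal vector you asked for. The intrinsics, dictionary, file name and marker size below are assumptions that would come from your own calibration and printed markers.

    import cv2
    import numpy as np

    marker_length = 0.05                                                 # metres, assumed
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])    # placeholder intrinsics
    dist = np.zeros(5)                                                   # placeholder distortion

    img = cv2.imread("sample.jpg")                                       # hypothetical file name
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    # Classic aruco API; OpenCV >= 4.7 uses cv2.aruco.ArucoDetector(dictionary).detectMarkers(img).
    corners, ids, _ = cv2.aruco.detectMarkers(img, dictionary)

    if ids is not None:
        # Marker corners in the marker's own frame (z = 0 plane), matching the
        # detection order: top-left, top-right, bottom-right, bottom-left.
        half = marker_length / 2.0
        obj = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2).astype(np.float64), K, dist)

        R, _ = cv2.Rodrigues(rvec)
        position = tvec.ravel()   # marker centre in camera coordinates
        normal = R[:, 2]          # marker plane normal (its z axis) in camera coordinates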
Your problem can be solved by perspective-n-point (PnP) camera pose estimation. When you can reasonably assume that all correspondences are correct, a linear algorithm should do.
Since the marker is planar, you can also recover the displacement from the homography between the model plane and the image plane (link). As usual, best results are obtained by iterative algorithms (link).
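A minimal sketch of the homography route, with placeholder corner coordinates and intrinsics: estimate the plane-to-image homography from the four marker corners, then recover rotation, translation and the plane normal with the usual z = 0 plane decomposition (undo K, fix scale and sign, re-orthonormalise).

    import cv2
    import numpy as np

    # Placeholder data: the marker's corner coordinates in its own plane (metres, z = 0)
    # and the pixel positions where those corners were detected in the image.
    plane_pts = np.array([[0, 0], [0.05, 0], [0.05, 0.05], [0, 0.05]], dtype=np.float64)
    image_pts = np.array([[210, 180], [300, 185], [295, 270], [205, 268]], dtype=np.float64)
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])    # assumed intrinsics

    H, _ = cv2.findHomography(plane_pts, image_pts)

    # For a z = 0 model plane, H ~ K [r1 r2 t] up to scale.
    B = np.linalg.inv(K) @ H
    if B[2, 2] < 0:
        B = -B                               # keep the marker in front of the camera
    scale = 1.0 / np.linalg.norm(B[:, 0])
    r1, r2, t = scale * B[:, 0], scale * B[:, 1], scale * B[:, 2]
    R_approx = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt                               # nearest proper rotation matrix
    normal = R[:, 2]                         # marker plane normal in camera coordinates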