OpenCV: Camera motion detection

I have a whiteboard with a black line on it. I want to find the angle made by the line. I could do this part using background subtraction and Hough line transforms, and even when I rotate the board the detected angle is correct. The problem I face is that if the camera is rotated, the value obtained varies. What should my approach be to obtain the original angle even when the camera is rotated?
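One way to approach this (a sketch, not from the original question) is to measure the line's angle relative to a reference feature on the board itself, such as the board's edge, instead of relative to the image axes: rotating the camera rotates both angles by the same amount, so their difference stays constant. A minimal sketch, assuming you can produce separate edge images for the board outline and the black line (the function names and Hough threshold are hypothetical):

#include <opencv2/opencv.hpp>
#include <cmath>

// Angle (in degrees) of the strongest line found by the standard Hough
// transform; OpenCV returns lines sorted by accumulator votes, so lines[0]
// is the strongest.
static double strongestLineAngle(const cv::Mat& edges)
{
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180.0, 80); // threshold is a guess
    if (lines.empty()) return 0.0;
    return lines[0][1] * 180.0 / CV_PI; // theta of the line's normal
}

// Angle of the black line relative to the board's edge: invariant to camera
// roll because rotating the camera rotates both angles by the same amount.
double relativeLineAngle(const cv::Mat& boardEdges, const cv::Mat& lineEdges)
{
    double board = strongestLineAngle(boardEdges);
    double line  = strongestLineAngle(lineEdges);
    return std::fmod(line - board + 180.0, 180.0); // undirected lines live mod 180
}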

Related

Image pixel coordinates to world coordinate transformations

I'm asking this question from the perspective of a non-mathematician, so please dumb down answers as much as possible.
I'm using a microscope which has a camera and also a confocal scanning mode. The camera is slightly rotated counter clockwise relative to the physical stage orientation (0.53 degrees).
Furthermore, the camera has a slight lateral translation compared to the center of the stage. In other words, the center of the field of view (FOV) of the camera is offset compared to the center of the stage.
Specifically, my camera image has pixel dimensions of 2560, 2160. So the center of the camera FOV is 1280, 1080.
However, the center of the stage is actually at the image pixel coordinates 1355, 980.
My goal is to map objects detected in the image to their physical stage coordinates. We can assume the stage starts at physical coordinates 0,0 um.
The camera image has a pixel size of 65 nm.
I'm not sure how to apply the transformations (I know how to apply a simpler rotation matrix).
Could someone show me how to do this with a few example pixel coordinates in the camera image?
Schematic representation of the shifts. WF means Widefield Camera
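A minimal sketch of the full mapping, using the numbers given above (0.53 degree counter-clockwise rotation, stage centre at pixel (1355, 980), 65 nm per pixel); the sign of the rotation and the stage's axis conventions are assumptions you may need to flip:

#include <opencv2/opencv.hpp>
#include <cmath>

// Map a camera pixel coordinate to physical stage coordinates in micrometres,
// measured from the stage centre. Order of operations: translate so the stage
// centre is the origin, rotate to undo the camera's 0.53 degree CCW rotation,
// then scale pixels to micrometres.
cv::Point2d pixelToStage(const cv::Point2d& px)
{
    const double theta = -0.53 * CV_PI / 180.0;     // undo CCW rotation (sign is an assumption)
    const double umPerPx = 0.065;                   // 65 nm pixel size
    const cv::Point2d stageCenterPx(1355.0, 980.0); // stage centre in pixels

    const double x = px.x - stageCenterPx.x;        // 1) translate
    const double y = px.y - stageCenterPx.y;
    const double xr = x * std::cos(theta) - y * std::sin(theta); // 2) rotate
    const double yr = x * std::sin(theta) + y * std::cos(theta);
    return cv::Point2d(xr * umPerPx, yr * umPerPx); // 3) scale
}

For example, the camera's FOV centre (1280, 1080) would land at roughly (-4.8, 6.5) um from the stage centre under these assumptions.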

How to detect whether an object is perpendicular to the camera's optical axis?

I want to detect whether an object is perpendicular to the optical axis.
The sample input will be an image that has an object in it (a cylinder shape, similar to a bottle) against a mostly plain background.
The output will be a boolean determining whether that object is perpendicular to the camera axis. This does not require much accuracy.
For example, the following image is not perpendicular to the camera axis.
The following image is not perpendicular either, because the camera angle is a little bit from above.
The next image is perpendicular and in the correct position.
Is it possible to do this in steps using OpenCV? If so, what are the steps (a high-level overview is enough)?
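One possible high-level pipeline (a sketch, not from the original post): extract the cylinder's two long silhouette edges and test whether they are parallel; if the object is tilted toward or away from the camera, perspective makes the edges converge. All thresholds below are guesses to tune on real images:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Rough perpendicularity test for a cylinder on a plain background.
bool looksPerpendicular(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    std::vector<cv::Vec4i> segs;
    cv::HoughLinesP(edges, segs, 1, CV_PI / 180.0, 60, 80, 10);

    // collect angles of long, roughly vertical segments (the silhouette edges)
    std::vector<double> angles;
    for (const cv::Vec4i& s : segs) {
        double ang = std::atan2(s[3] - s[1], s[2] - s[0]) * 180.0 / CV_PI;
        if (ang < 0) ang += 180.0;          // undirected angle in [0, 180)
        if (std::abs(ang - 90.0) < 30.0)    // keep near-vertical edges only
            angles.push_back(ang);
    }
    if (angles.size() < 2) return false;

    auto mm = std::minmax_element(angles.begin(), angles.end());
    return (*mm.second - *mm.first) < 5.0;  // parallel within a few degrees
}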

OpenCV, dlib landmarks rotation

I am new to OpenCV and dlib, and I am not sure if my design is correct. I want to write a C++ face detector for an Android phone which should detect faces with different phone orientations and rotation angles, let's say portrait and landscape. I am using OpenCV to rotate/edit the image and dlib to detect faces. The dlib shape predictor is initialized with shape_predictor_68_face_landmarks.dat and it can detect a face only in the correct phone orientation (meaning if I rotate the phone by 90 degrees it cannot detect the face).
To make face detection possible, I read the axes from the accelerometer and rotate the source image to the correct orientation before sending it to the dlib face detector, and it then detects fine. But the output coordinates in the dlib::full_object_detection shape of course match the rotated picture, not the original. So it means I have to convert (rotate) the landmarks back to the original image.
Is there any existing API in dlib or OpenCV to rotate the landmarks (dlib::full_object_detection) by a specified angle? It would be good if you could provide an example.
For iPhone apps, EXIF data in images captured using iPhone cameras can be used to rotate images first. But I can't guarantee this for Android phones.
In most practical situations, it is easier to rotate the image and perform face detection when face detection in the original image does not return any results (or returns strange results like very small faces). I have seen this done in several Android apps, and have used it myself on a couple of projects.
As I understand it, you want to rotate the detected landmarks back to the coordinate system of the original image. If so, you can use getRotationMatrix2D and cv::transform to rotate the list of points.
For example:
Say your image was rotated 90 degrees to the right around the center point (the middle of the image); now you need to rotate the landmark points back by -90 degrees around the same center point. The code is:
// the center point of the image
cv::Point2f center(width / 2.0f, height / 2.0f);
// the angle to rotate back, in degrees (getRotationMatrix2D expects degrees)
// in your case it is -90 degrees
double theta_deg = -90.0;
// get the 2x3 affine matrix for that rotation
cv::Mat rotateMatrix = cv::getRotationMatrix2D(center, theta_deg, 1.0);
// the vectors holding the landmark points
// (float points keep sub-pixel precision through the rotation)
std::vector<cv::Point2f> inputLandmark;
std::vector<cv::Point2f> outputLandmark;
// apply the same rotation matrix to the points with cv::transform
cv::transform(inputLandmark, outputLandmark, rotateMatrix);
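To connect this to dlib, the landmark points first have to be copied out of the dlib::full_object_detection into the OpenCV vector; a minimal sketch (assuming shape came from the dlib shape predictor, and toCvPoints is a hypothetical helper name):

#include <dlib/image_processing.h>
#include <opencv2/opencv.hpp>

// copy the dlib landmarks into the OpenCV point vector used above
std::vector<cv::Point2f> toCvPoints(const dlib::full_object_detection& shape)
{
    std::vector<cv::Point2f> pts;
    for (unsigned long i = 0; i < shape.num_parts(); ++i)
        pts.push_back(cv::Point2f(shape.part(i).x(), shape.part(i).y()));
    return pts;
}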

Determining the angle in which to rotate the robot in respect to another object

I am currently working on a project where I need to determine whether a robot, with an ArUco marker on top of it, needs to rotate in a certain direction in order to point with its front towards a particular object whose centre point is known. So basically, what I've got is the centre point of the ball and the 4 corner points of the marker.
I'm including an example of what I mean as an image.
Note the little arrow drawn on the marker cardboard. It shows the front side of the robot.
Lastly: I have a camera that captures frames, and the program prints out the rotation vector. For some reason, the values are different in every frame, even though I intentionally left the robot in the same position. Could anyone please explain why that might be?
Thanks a lot.
EDIT: I've got the issue with the fluctuating rotation vector sorted; now I just need to figure out how to use its output to get the orientation of the robot with respect to the ball (of which I have the centre point), which apparently is done through the X-axis.
I'm adding another image, which shows the x-axis as red, the y-axis as blue and the z-axis as green. The vectors are of type cv::Vec3d.
First, some code:
// estimate the pose of each detected marker (marker side length 0.05 m)
std::vector<cv::Vec3d> rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(corners, 0.05, CAMERA_MATRIX, DISTORTION_COEFFICIENTS, rvecs, tvecs);
And the image showing what I mean:
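For the orientation question in the edit, one hedged approach (assuming the camera looks roughly straight down and the marker's X-axis is the robot's front): convert the rotation vector to a rotation matrix with cv::Rodrigues, read the heading from the X-axis column, and compare it with the bearing from the marker centre to the ball centre. turnAngle, markerCenter, and ballCenter are hypothetical names:

#include <opencv2/opencv.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>

// Signed angle (radians) the robot must turn so that the marker's X-axis
// points at the ball. rvec comes from estimatePoseSingleMarkers; both
// centre points are in pixel coordinates.
double turnAngle(const cv::Vec3d& rvec,
                 const cv::Point2f& markerCenter,
                 const cv::Point2f& ballCenter)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R); // rotation vector -> 3x3 rotation matrix
    // the first column of R is the marker's X-axis in camera coordinates;
    // its projection onto the image plane gives the robot's heading
    double heading = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
    // bearing from the marker centre to the ball centre in the image
    double bearing = std::atan2(ballCenter.y - markerCenter.y,
                                ballCenter.x - markerCenter.x);
    // normalise the difference to (-pi, pi]
    return std::remainder(bearing - heading, 2.0 * CV_PI);
}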

How do you counter a rotated camera?

We are currently using OpenCV to track a planar rectangular target. While pointing directly straight ahead (no pitch), this works perfectly using findContours with solvePnP and returns a very accurate location of the target.
The problem is that we obviously get different results once we increase the pitch. We know the pitch of the camera at all times.
How would I "cancel out" the pitch of the camera and obtain coordinates as if the camera were facing straight ahead?
In the general case you need a perspective (homography) transform to map the quadrilateral seen by the camera back to the original rectangle; an affine transform is only exact when that quadrilateral is a parallelogram. In your case the quadrilateral seen by the camera may be a good approximation of a parallelogram, since only one angle is changing, but in real-world applications you should generally assume that the camera can have non-zero values for each of the three rotations (pitch, yaw, and roll).
http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html
The transform allows you to calculate the matching coordinates (x,y) within the rectangle's plane given coordinates (x', y') in the image of the rectangle.
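A minimal sketch of that mapping, assuming you already have the quadrilateral's four corner points from findContours and know the rectangle's real dimensions (the corner ordering and output size are assumptions):

#include <opencv2/opencv.hpp>

// Warp the quadrilateral seen by the camera back to the original rectangle.
// srcCorners must be ordered top-left, top-right, bottom-right, bottom-left.
cv::Mat rectifyTarget(const cv::Mat& image,
                      const std::vector<cv::Point2f>& srcCorners,
                      float targetW, float targetH)
{
    std::vector<cv::Point2f> dstCorners = {
        {0.0f, 0.0f}, {targetW, 0.0f}, {targetW, targetH}, {0.0f, targetH}
    };
    // 3x3 homography from image coordinates to rectangle coordinates
    cv::Mat H = cv::getPerspectiveTransform(srcCorners, dstCorners);
    cv::Mat rectified;
    cv::warpPerspective(image, rectified, H,
                        cv::Size((int)targetW, (int)targetH));
    return rectified;
}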