OpenCV, dlib landmarks rotation

I am new to OpenCV and dlib, and I am not sure if my design is correct. I want to write a C++ face detector for an Android phone which should detect faces with different phone orientations and rotation angles. Let's say the phone orientation is portrait or landscape. I am using OpenCV to rotate/edit the image and dlib to detect faces. The dlib shape predictor is initialized with shape_predictor_68_face_landmarks.dat and it can detect a face only in the correct phone orientation (meaning that if I rotate the phone by 90 degrees it cannot detect the face).
To make detection possible I read the axes from the accelerometer and rotate the source image to the correct orientation before sending it to the dlib face detector, and then it detects fine. But the output coordinates in the dlib::full_object_detection shape of course match the rotated picture, not the original. So I have to convert (rotate) the landmarks back to the original image.
Is there any existing API in dlib or OpenCV that makes it possible to rotate the landmarks (dlib::full_object_detection) by a specified angle? It would be good if you could provide an example.

For iPhone apps, the EXIF data in images captured with iPhone cameras can be used to rotate the images first. But I can't guarantee this works for Android phones.
In most practical situations, it is easier to rotate the image and rerun face detection when detection on the original image returns no results (or strange results such as very small faces). I have seen this done in several Android apps, and have used it myself on a couple of projects.
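A minimal sketch of that fallback loop (my own illustration of the idea, not code from any of those apps): detector is assumed to be a dlib::frontal_face_detector, frame an 8-bit BGR cv::Mat, and cv::rotate needs OpenCV 3.2 or later.

#include <dlib/opencv.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <opencv2/opencv.hpp>
#include <vector>

// Try detection on the original frame first, then on rotated copies.
// On success, usedRotation holds the cv::RotateFlags code that worked
// (-1 means the original orientation); on failure the result is empty.
std::vector<dlib::rectangle> detectWithRotations(dlib::frontal_face_detector& detector,
                                                 const cv::Mat& frame, int& usedRotation)
{
    const int codes[] = { -1, cv::ROTATE_90_CLOCKWISE,
                          cv::ROTATE_180, cv::ROTATE_90_COUNTERCLOCKWISE };
    for (int code : codes) {
        cv::Mat candidate;
        if (code == -1) candidate = frame;
        else            cv::rotate(frame, candidate, code);
        dlib::cv_image<dlib::bgr_pixel> dlibImg(candidate);
        std::vector<dlib::rectangle> faces = detector(dlibImg);
        if (!faces.empty()) { usedRotation = code; return faces; }
    }
    return {};
}

Note that on success the returned rectangles are in the rotated image's coordinates, which is exactly where the landmark-rotation question below comes in.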

As I understand it, you want to rotate the detected landmarks back to the coordinate system of the original image. If so, you can use getRotationMatrix2D and transform to rotate the list of points.
For example:
Say your image was rotated 90 degrees to the right around the center point (the middle of the image); now you need to rotate the landmark points back by -90 degrees around the center point. The code is:
// the center point (the middle of the image)
cv::Point2f center(width / 2.0f, height / 2.0f);
// the angle to rotate back, in degrees (getRotationMatrix2D expects degrees,
// positive = counter-clockwise); in your case it is -90 degrees
double theta_deg = -90.0;
// get the 2x3 affine matrix for the rotation
cv::Mat rotateMatrix = cv::getRotationMatrix2D(center, theta_deg, 1.0);
// the vectors holding the landmark points (floating-point for cv::transform)
std::vector<cv::Point2f> inputLandmark;
std::vector<cv::Point2f> outputLandmark;
// apply the same rotation matrix to the points with cv::transform
cv::transform(inputLandmark, outputLandmark, rotateMatrix);
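To fill inputLandmark in the first place, you can copy the parts out of the dlib::full_object_detection. A minimal sketch, assuming the detection is stored in a variable named shape:

// copy the detected dlib parts into OpenCV points
for (unsigned long i = 0; i < shape.num_parts(); ++i)
    inputLandmark.push_back(cv::Point2f(shape.part(i).x(), shape.part(i).y()));

One caveat: for a 90-degree rotation of a non-square image the width and height swap, so the center used above has to be the center of the image the landmarks were detected in, and you may need an extra translation to land inside the original image's bounds; verify with one known point.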

Related

Image pixel coordinates to world coordinate transformations

I'm asking this question from the perspective of a non-mathematician, so please dumb down answers as much as possible.
I'm using a microscope which has a camera and also a confocal scanning mode. The camera is slightly rotated counter-clockwise relative to the physical stage orientation (by 0.53 degrees).
Furthermore, the camera has a slight lateral translation compared to the center of the stage. In other words, the center of the field of view (FOV) of the camera is offset compared to the center of the stage.
Specifically, my camera image has pixel dimensions of 2560, 2160. So the center of the camera FOV is 1280, 1080.
However, the center of the stage is actually at the image pixel coordinates 1355, 980.
My goal is to map objects detected in the image to their physical stage coordinates. We can assume the stage starts at physical coordinates 0,0 µm.
The camera image has a pixel size of 65 nm.
I'm not sure how to apply the transformations (I know how to apply a simpler rotation matrix).
Could someone show me how to do this with a few example pixel coordinates in the camera image?
[Schematic representation of the shifts; WF means Widefield Camera]
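For what it's worth, here is a self-contained sketch of the mapping under the numbers stated above (stage center at pixel (1355, 980), 0.065 µm per pixel, camera rotated 0.53 degrees counter-clockwise relative to the stage). The example pixel is arbitrary, and the sign of the rotation depends on your axis conventions (image y usually points down), so check it against a known object:

#include <cmath>
#include <cstdio>

int main() {
    const double PI = std::acos(-1.0);
    const double cx = 1355.0, cy = 980.0;      // stage center in pixel coordinates
    const double umPerPixel = 0.065;           // 65 nm pixel size
    const double theta = -0.53 * PI / 180.0;   // undo the camera's 0.53 deg CCW rotation

    double px = 2000.0, py = 500.0;            // example pixel coordinate
    // 1) translate so the stage center becomes the origin
    double dx = px - cx, dy = py - cy;
    // 2) rotate into the stage's orientation
    double rx = dx * std::cos(theta) - dy * std::sin(theta);
    double ry = dx * std::sin(theta) + dy * std::cos(theta);
    // 3) scale from pixels to micrometers
    std::printf("stage coords: (%.3f um, %.3f um)\n", rx * umPerPixel, ry * umPerPixel);
    return 0;
}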

OpenCV: Camera motion detection

I have a white board with a black line on it. I want to find the angle made by the line. I could do this part using background subtraction and Hough line transforms. Even when I rotate the board, the detected angle is correct. The problem I face is that if the camera is rotated, the value obtained varies. I want to obtain the original angle even when the camera is rotated. What should my approach be to obtain the original angle when the camera is rotated?
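One common approach, assuming you can read the camera's in-plane rotation (roll) from an IMU or from a fixed reference visible in the scene, is to subtract that roll from the angle your Hough step reports. A sketch (detectedAngleDeg and cameraRollDeg are hypothetical names for values you already compute):

#include <cmath>

// Undo a known camera roll so the reported line angle stays board-relative.
double boardAngleDeg(double detectedAngleDeg, double cameraRollDeg) {
    double corrected = detectedAngleDeg - cameraRollDeg;
    // normalize into [0, 180) since a line has no direction
    corrected = std::fmod(corrected, 180.0);
    if (corrected < 0) corrected += 180.0;
    return corrected;
}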

Calculating transformation of an object in an image using OpenCV

I have two images.
Say one is a 10x10 chessboard which we call trainImage, and then there is another queryImage which is the same chessboard photographed using a phone camera. Now I have to find the position of the camera in (x,y,z) coordinates. Using OpenCV and feature detection I have been able to identify the chessboard in the photographed image, but how do I go ahead with calculating the transformation of the chessboard so that I can eventually calculate the position of the camera? Any pointers on where to start looking would also be really appreciated. Thanks.
Edit:
Reframing the problem statement: I have two images, trainImage and queryImage. I need to find the position of the camera, i.e. (x,y,z), in queryImage if we assume that trainImage is at (0,0,0). From some reading I found that I need rvec (rotation vector) and tvec (translation vector).
When I use the findHomography() function on the two images I get a 3x3 homography matrix, with which I can find the pixel points (x,y) in queryImage by multiplying the pixel points (x,y) in trainImage. How can I use this homography matrix to calculate tvec and rvec?
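If you have the camera intrinsic matrix K (from calibration), OpenCV 3.x can decompose the homography directly; note that the translations come out only up to scale, and up to four candidate solutions are returned which you must disambiguate (e.g. by requiring the board to lie in front of the camera). A sketch:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void poseFromHomography(const cv::Mat& H, const cv::Mat& K)
{
    std::vector<cv::Mat> Rs, ts, normals;
    int n = cv::decomposeHomographyMat(H, K, Rs, ts, normals);
    for (int i = 0; i < n; ++i) {
        cv::Mat rvec;
        cv::Rodrigues(Rs[i], rvec);  // rotation matrix -> rotation vector
        std::cout << "candidate " << i << " rvec: " << rvec.t()
                  << " tvec (up to scale): " << ts[i].t() << std::endl;
    }
}

That said, since the chessboard's real geometry is known, running findChessboardCorners and solvePnP on the query image usually gives rvec and tvec more directly, and with metric scale.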

iOS, cubemap, compass and DeviceMotion attitude

I need to display an OpenGL cubemap (a 360-degree panoramic image used as a texture on a cube) 'aligned' with North on an iPhone.
0) The panoramic image is split into six images, applied onto the faces of the cube as a texture.
1) Since the 'front' face of the cubemap does not point towards North, I rotate the look-at matrix by theta degrees (found manually). This way when the GL view is displayed it shows the face containing the North view.
2) I rotate the OpenGL cubemap using the attitude from the CMDeviceMotion of a CMMotionManager. The view moves correctly. However, it is not yet 'aligned' with North.
So far everything is fine. I only need to align the front face with North and then rotate it according to the phone motion data.
3) So I access the heading (compass heading) from a CLLocationManager. I read just one heading (the first update I receive) and use this value in step 1 when building the look-at matrix.
After step 3, the OpenGL view is aligned with the surrounding environment. The view is kept (more or less) aligned at step 2, by the CMMotionManager. If I launch the app facing South, the 'back' face of the cube is shown: it is aligned.
However, sometimes the first compass reading is not very accurate. Furthermore, its accuracy improves as the user moves the phone. The idea is to continuously modify the rotation applied to the look-at matrix by taking into account the continuous readings of the compass heading.
So I have also implemented step 4.
4) Instead of using only the first reading of the heading, I keep reading updates from the CLLocationManager and use them to continuously align the look-at matrix, which is now rotated both by the angle theta (found manually in step 1) and by the angle returned by the compass service.
After step 4 nothing works: the view is fixed in one position, and moving the phone does not change the view. The cube rotates with the phone, meaning that I always see the same face of the cube.
From my point of view (but I am clearly wrong), first rotating the look-at matrix to align with North and then applying the rotation computed from the DeviceMotion attitude should change nothing with respect to step 3.
Which step of my reasoning is wrong?
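Without seeing the code it is hard to say definitively, but one classic pitfall here is that 3-D rotations do not commute: multiplying the heading correction on the other side of the attitude matrix applies it in the device's local frame instead of the world frame, which would make the correction 'follow' the phone. A self-contained illustration of the non-commutativity (just the math, not your app's code):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <cmath>

int main() {
    // hypothetical heading correction: 30 deg about the world up axis (y)
    double a = 30.0 * CV_PI / 180.0;
    cv::Mat Rh = (cv::Mat_<double>(3, 3) <<  std::cos(a), 0, std::sin(a),
                                             0,           1, 0,
                                            -std::sin(a), 0, std::cos(a));
    // a sample device attitude: 45 deg about x
    double b = 45.0 * CV_PI / 180.0;
    cv::Mat Ra = (cv::Mat_<double>(3, 3) << 1, 0,            0,
                                            0, std::cos(b), -std::sin(b),
                                            0, std::sin(b),  std::cos(b));
    // the two products differ, so the order of composition matters
    std::cout << "Rh*Ra:\n" << Rh * Ra << "\n\nRa*Rh:\n" << Ra * Rh << std::endl;
    return 0;
}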

How do you counter a rotated camera?

We are currently using OpenCV to track a planar rectangular target. When facing it straight on (no pitch), this works perfectly using findContours with solvePnP and returns a very accurate location of the target.
The problem is that we obviously get different results once we increase the pitch. We know the pitch of the camera at all times.
How would I 'cancel out' the pitch of the camera and obtain coordinates as if the camera were facing straight ahead?
In the general case you can use a perspective transform to map the quadrilateral seen by the camera back to the original rectangle (an affine transform only covers parallelograms). In your case the quadrilateral seen by the camera may be a good approximation of a parallelogram, since only one angle is changing, but in real-world applications you can generally assume that the camera can have non-zero values for each of the three rotations (pitch, yaw, and roll).
http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html
The transform allows you to calculate the matching coordinates (x,y) within the rectangle's plane given coordinates (x', y') in the image of the rectangle.
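As a concrete sketch of that mapping, assuming you already have the four corners of the target from findContours (the corner order and the 400x200 target size here are made-up values for illustration), a perspective warp covers the general quadrilateral case:

#include <opencv2/opencv.hpp>
#include <vector>

// Map the pitched view of the target back to an upright rectangle.
cv::Mat rectifyTarget(const cv::Mat& frame,
                      const std::vector<cv::Point2f>& observedCorners) // TL, TR, BR, BL
{
    std::vector<cv::Point2f> upright = {
        {0, 0}, {400, 0}, {400, 200}, {0, 200}
    };
    cv::Mat H = cv::getPerspectiveTransform(observedCorners, upright);
    cv::Mat rectified;
    cv::warpPerspective(frame, rectified, H, cv::Size(400, 200));
    return rectified;
}

The same H (or its inverse) can also be applied to individual points with cv::perspectiveTransform instead of warping the whole image.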
