Computer Vision - detecting face angle based on extracted face keypoints - opencv

I have an application which extracts the basic keypoints of the face: the corners of the eyes, the corners of the mouth, the nose, and the face border.
Is there any existing application or algorithm I can apply to detect the face angle, i.e. how far the face is turned to the left or to the right?

You will have to detect the face plane in 3D. This involves finding the extrinsic parameters (orientation and pose) of the face plane with respect to the camera. Based on how the face plane is related to the image sensor plane, you can determine how far the face is turned to the left or right.
There are numerous posts about extrinsic parameters on Stack Overflow; a quick search will help you find them.
P.S.: You will have to calibrate your camera before you can find the extrinsic parameters.
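One common way to realize this idea is to feed the extracted 2D keypoints and a generic 3D face model to cv2.solvePnP and read the left/right angle (yaw) off the recovered rotation. Here is a minimal sketch; the 3D model coordinates, pixel coordinates, and intrinsics below are illustrative placeholders, not values from the question.

```python
import numpy as np
import cv2

# Illustrative 3D model points (in mm) of a generic face: nose tip, chin,
# eye corners, mouth corners. Real values depend on the face model you use.
model_points = np.array([
    (0.0,    0.0,    0.0),     # nose tip
    (0.0,  -330.0,  -65.0),    # chin
    (-225.0, 170.0, -135.0),   # left eye, outer corner
    (225.0,  170.0, -135.0),   # right eye, outer corner
    (-150.0, -150.0, -125.0),  # mouth, left corner
    (150.0,  -150.0, -125.0),  # mouth, right corner
], dtype=np.float64)

# 2D pixel coordinates of the same landmarks, in the same order, taken from
# your keypoint extractor (the values here are placeholders).
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype=np.float64)

# Rough intrinsics if the camera is not calibrated: focal length ~ image width,
# principal point at the image centre. Calibrate properly for accurate angles.
w, h = 640, 480
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the face w.r.t. the camera
yaw = np.degrees(np.arctan2(-R[2, 0], np.sqrt(R[2, 1]**2 + R[2, 2]**2)))
print("approximate left/right rotation (yaw):", yaw)
```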

Related

How to rectify my own image to the cameras of the KITTI dataset using OpenCV

Based on the documentation of stereoRectify in OpenCV, one can rectify an image given two camera matrices, their distortion coefficients, and a rotation-translation from one camera to the other.
I would like to rectify an image I took with my own camera to the stereo setup of the KITTI dataset. From their calibration files, I know the camera matrix and the image size before rectification for all of their cameras. All of their data is calibrated to their camera_0.
From this PNG, I know the position of each of their cameras relative to the front wheels of the car and relative to ground.
I can also do a monocular calibration on my camera and get a camera matrix and distortion coefficients.
I am having trouble coming up with the rotation and translation matrix/vector between the coordinate systems of the first and the second cameras, i.e. from their camera to mine or vice-versa.
I positioned my camera on top of my car at almost exactly the same height and almost exactly the same distance from the center of the front wheels, as shown in the PNG.
However, now I am at a loss as to how to create the joint rotation-translation matrix. In a normal stereo calibration, these are returned by the stereoCalibrate function.
I looked at some references on coordinate transformations, but I don't have enough practice with them to figure it out on my own.
Any suggestions or references are appreciated!
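To make the setup concrete, here is a rough sketch of the call this would boil down to, assuming the rotation between the two cameras is negligible (both mounted level, facing forward) and the translation is measured by hand from the mounting positions. All numbers are illustrative placeholders, not real KITTI or calibration values.

```python
import numpy as np
import cv2

K_kitti = np.array([[720.0, 0.0, 610.0],
                    [0.0, 720.0, 175.0],
                    [0.0,   0.0,   1.0]])  # placeholder; take it from the KITTI calib file
dist_kitti = np.zeros(5)                   # assume already-undistorted KITTI images

K_mine = np.array([[800.0, 0.0, 640.0],
                   [0.0, 800.0, 360.0],
                   [0.0,   0.0,   1.0]])   # placeholder; from my monocular calibration
dist_mine = np.array([-0.1, 0.05, 0.0, 0.0, 0.0])

R = np.eye(3)                              # assumption: the two cameras are aligned
T = np.array([[0.06], [0.0], [0.0]])       # hand-measured offset in metres (placeholder)

image_size = (1242, 375)                   # placeholder image size
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K_kitti, dist_kitti, K_mine, dist_mine, image_size, R, T)
```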

Getting 3D coordinates with two known correlating points in OpenCV

I am tracking a moving vehicle with a stereo camera system. In both images I use background segmentation to get only the moving parts of the picture, then put a rectangle around the biggest object.
Now I want to get the 3D coordinates of the center of the rectangle. The identified centers in the two 2D pictures are almost corresponding points (I know they are not exact). I did a stereo calibration with MATLAB, so I have the intrinsic parameters of both cameras and the extrinsic parameters of the stereo system.
As far as I know, OpenCV doesn't provide a function for this, and to be honest, reading Zisserman didn't really help me, but maybe I am just blind to the obvious.
This should work:
1. For both cameras, compute a ray from the camera origin through the rectangle's center.
2. Convert the rays to world coordinates.
3. Compute the intersection of the two rays (or the closest point between them, in case they do not intersect exactly); a numpy sketch follows below.
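A minimal numpy sketch of step 3, assuming the ray origins (camera centres) and directions are already expressed in world coordinates (for a pixel (u, v), origin = -R.T @ t and direction = R.T @ inv(K) @ [u, v, 1] with that camera's intrinsics K and extrinsics [R|t]):

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + t*d.
    o1, o2: ray origins (camera centres in world coords); d1, d2: directions."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = np.dot(d1, d2)
    denom = 1.0 - c ** 2
    if denom < 1e-9:                      # rays (almost) parallel
        return None
    t1 = (np.dot(b, d1) - np.dot(b, d2) * c) / denom
    t2 = (np.dot(b, d1) * c - np.dot(b, d2)) / denom
    p1 = o1 + t1 * d1                     # closest point on ray 1
    p2 = o2 + t2 * d2                     # closest point on ray 2
    return 0.5 * (p1 + p2)                # midpoint = estimated 3D position
```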

Nose Tip Detection from 3D point cloud

I'm trying to implement a head pose estimation algorithm and I'm using a Time-of-Flight camera. I need to detect the nose tip in the point cloud data I get from the camera.
Once I know where the nose tip is, I would sample the N nearest-neighbour points around it and do a least-squares plane fit on that part of the point cloud to retrieve the yaw and pitch angles.
The nose detection should work for different head poses, not just for a full frontal pose.
I have implemented the plane fitting and it works fine, but I don't know how to detect the nose tip from the 3D data.
Any advice on how this could be done would be much appreciated.
Regards,
V.
I used to work with Kinect images, which have a depth limit of z > 0.5 m (see the link below); I hope you don't have this restriction with your ToF camera. The nose is not a very pronounced object, but it can probably be detected with connected components on the depth image: you have to find it as a blob on an otherwise flat face. You can further confirm that it is a nose by comparing the face depth with the nose depth and by checking the nose position relative to the face. This of course doesn't apply to non-frontal poses, where the nose has to be found differently.
I also suggest inverting your processing chain: instead of finding the nose and then the face, start by looking for the head (a larger object with possibly better depth contrast) and then for the nose. The head is well defined by its size and shape in 3D, and a 2D face detection can help too; you can also fit a rough head model to your 3D point cloud using a 3D similarity transform.
link to Kinect depth map
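A very rough sketch of the head-first idea for a near-frontal pose, using connected components on the depth image: isolate the head as the largest close blob, then take the point nearest the camera inside it as the nose-tip candidate. The threshold and the synthetic test data are illustrative only.

```python
import numpy as np
import cv2

def nose_tip_candidate(depth, max_head_depth=1.5):
    """Near-frontal heuristic: largest blob closer than max_head_depth metres is
    assumed to be the head; its closest point is the nose-tip candidate."""
    mask = ((depth > 0) & (depth < max_head_depth)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n < 2:
        return None
    head = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # largest foreground blob
    head_depth = np.where(labels == head, depth, np.inf)
    v, u = np.unravel_index(np.argmin(head_depth), head_depth.shape)
    return u, v  # pixel coordinates of the closest point on the head

# Toy usage with a synthetic depth map (a "head" at 1 m with a bump at 0.9 m).
depth = np.full((240, 320), 3.0, dtype=np.float32)
depth[80:200, 120:220] = 1.0
depth[130:140, 165:175] = 0.9
print(nose_tip_candidate(depth))
```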

Volume of the camera calibration

I am dealing with a problem that concerns camera calibration. I need calibrated cameras to take measurements of 3D objects. I am using OpenCV to carry out the calibration, and I am wondering how I can predict or calculate the volume in which the camera is well calibrated. Is there a way to increase this volume, especially in the direction of the optical axis? Is it sufficient to increase the movement range of the calibration target in the z direction?
I think you are confusing a few key things in your question:
Camera calibration - finding the matrices (intrinsic and extrinsic) that describe the camera's position, rotation, up vector, distortion, optical center, and so on.
Epipolar rectification - virtually "rotating" the image planes so that they become coplanar (parallel). This simplifies the stereo reconstruction algorithms.
For camera calibration you do not need to care about any volume - there is no volume in which the camera is well or badly calibrated. If you use chessboard-pattern calibration, your cameras are either calibrated or they are not.
When dealing with rectification, you want to know which areas of the rectified images correspond, and you want to maximize those areas. OpenCV lets you choose between two extremes: either keep only the pixels that fit into a valid rectangular area and cut out the rest, or keep all pixels, including invalid ones.
OpenCV documentation has some nice, more detailed descriptions here: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
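In OpenCV's Python API, the choice between those two extremes is made through the alpha parameter of cv2.stereoRectify, roughly as sketched below; the calibration values are placeholders.

```python
import numpy as np
import cv2

# Placeholder calibration data; in practice these come from cv2.stereoCalibrate.
K1 = K2 = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0,   0.0,   1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)
T = np.array([[0.1], [0.0], [0.0]])  # 10 cm baseline
image_size = (640, 480)

# alpha=0: crop/zoom so that every pixel in the rectified images is valid.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T, alpha=0)

# alpha=1: keep all original pixels; invalid (black) regions remain, and
# roi1/roi2 tell you which rectangles are actually valid.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T, alpha=1)
```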

Camera Calibration

I am using OpenCV and am a newbie to the entire thing.
I have a scenario: I am projecting onto a wall, and I am building a kind of robot which has a camera. I want to know how I can process the camera image so that I get the real-world coordinates of the blobs tracked by my camera.
First of all, you need to calibrate the intrinsics of the camera. Use checkerboard patterns printed onto cardboard to do this; OpenCV has methods for it, and there are ready-made tools as well.
To get an idea, I have written some Python code to calibrate from a live video stream; move the cardboard in front of the camera at several different angles and distances. Take a look here: http://svn.ioctl.eu/pub/opencv/py-camera_intrinsic/
Then you need to calibrate the extrinsics of the camera, i.e. the position of the camera with respect to your world coordinates. You can place some markers on the wall, define the 3D positions of those markers, and let OpenCV compute the extrinsics from them (cvFindExtrinsicCameraParams2).
In my sample code, I calculate the extrinsic wrt. the checkerboard so I can render a Teapot in the correct perspective of the camera. You have to adjust this to your needs.
I assume you project only onto a flat surface, and you have to know that surface's geometry to get the 3D coordinates of your detected blobs. You can then find the blobs in your camera image and, knowing the intrinsics, the extrinsics, and the geometry, cast a ray for each blob from the camera and compute the intersection of that ray with the known geometry. That intersection is the 3D point in world space where the blob is projected.
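A minimal sketch of that last step, assuming the wall is the plane z = 0 of the world frame and that the extrinsics (rvec, tvec) have already been obtained from the wall markers (e.g. with cv2.solvePnP); the numbers and the blob pixel are placeholders.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])          # intrinsics from calibration (placeholder)
rvec = np.array([[0.1], [-0.2], [0.05]])     # extrinsics from marker calibration (placeholder)
tvec = np.array([0.3, 0.1, 2.0])

R, _ = cv2.Rodrigues(rvec)
cam_origin = -R.T @ tvec                     # camera centre in world coordinates

def blob_to_wall_point(u, v):
    """Cast a ray through pixel (u, v) and intersect it with the wall plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera coords
    ray_world = R.T @ ray_cam                           # rotate into the world frame
    t = -cam_origin[2] / ray_world[2]                   # solve origin.z + t * dir.z = 0
    return cam_origin + t * ray_world                   # 3D point on the wall

print(blob_to_wall_point(400, 250))
```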
