Is there a way to use the face detection API and somehow modify it so that it can estimate how far a person is from the camera?
Is it possible to find the speed or direction of ships that are moving by, using a camera mounted on another ship?
The information I know is the speed, heading (true north), roll, pitch, and camera parameters of the ship where the camera is installed.
You could of course calculate the speed and direction of objects in terms of pixels per frame.
To get the speed of the real objects, however, you would need something like a calibrated stereo camera to know the distance of the objects from the camera.
Once the distance of the objects in the images is known, the parameters of the moving camera could be included in the calculation.
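As a rough illustration, assuming a pinhole model with a known focal length in pixels and a known distance Z to the object (all names below are placeholders, not a full implementation), the conversion from pixel displacement per frame to lateral speed could look like this:

```python
import numpy as np

# Minimal sketch: convert the pixel displacement of a tracked object between
# two consecutive frames into an approximate lateral speed, assuming a pinhole
# camera with known focal length in pixels (fx, fy) and a known distance Z to
# the object (e.g. from a calibrated stereo pair). All names are illustrative.
def lateral_speed_mps(p_prev, p_curr, fx, fy, Z, frame_rate):
    """p_prev, p_curr: (u, v) pixel positions of the object in consecutive frames."""
    du = p_curr[0] - p_prev[0]          # horizontal displacement in pixels
    dv = p_curr[1] - p_prev[1]          # vertical displacement in pixels
    # At distance Z, one pixel spans roughly Z / f metres, so the displacement
    # in metres is:
    dx = du * Z / fx
    dy = dv * Z / fy
    distance_m = np.hypot(dx, dy)       # metres moved between the two frames
    return distance_m * frame_rate      # metres per second

# Example: a 12 px shift at Z = 50 m, fx = fy = 1000 px, 25 fps
print(lateral_speed_mps((640, 360), (652, 360), 1000.0, 1000.0, 50.0, 25.0))
```

The ego-motion of the ship carrying the camera (known speed, heading, roll, pitch) would then still have to be subtracted from this apparent motion.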
I am working on a project which requires me to find the angle of a person's body with respect to the camera. I already have the pose coordinates of the person, but I am not sure how to find the angle. Do you think I need to use camera parameters such as the focal length and the intrinsic parameters of the camera? I would appreciate any suggestions or references. Thank you.
The circle in the center is the person's head, and the camera is on the other side.
Try this link: https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
To make a long story short, you have to know the 3D coordinates of some points of the person and their 2D projections on the camera sensor plane, and then use a PnP (Perspective-n-Point) algorithm to estimate the camera pose relative to the person. It is better to know the calibration of the camera, but you can use the DLT (Direct Linear Transform) algorithm if you don't have this data.
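As a rough sketch of that PnP step (the 3D model points, 2D landmarks, and image size below are only illustrative placeholders; OpenCV's solvePnP does the actual work):

```python
import cv2
import numpy as np

# Illustrative sketch of the PnP step: a few 3D points on a generic head model
# and their detected 2D landmarks. The numeric values are placeholders, not
# measured data -- substitute your own landmarks and model.
model_points = np.array([
    (0.0,    0.0,    0.0),      # nose tip
    (0.0,  -330.0,  -65.0),     # chin
    (-225.0, 170.0, -135.0),    # left eye corner
    (225.0,  170.0, -135.0),    # right eye corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0,  -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

image_points = np.array([       # 2D landmarks from your detector (pixels)
    (320, 240), (325, 330), (270, 180),
    (370, 185), (290, 290), (355, 292),
], dtype=np.float64)

# Intrinsics; without a calibration, approximate the focal length with the
# image width and put the principal point at the image centre.
w, h = 640, 480
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)      # rotation of the model in camera coordinates
print(ok, rvec.ravel(), tvec.ravel())
```

The yaw component of that rotation is essentially the angle between the person and the camera in the ground plane.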
I am currently working on pose estimation of one camera with respect to another using OpenCV, in a setup where camera1 is fixed and camera2 is free to move. I know the intrinsics of both cameras. I have a pose-estimation module that uses epipolar geometry, computing the essential matrix with the five-point algorithm to get the R and t of camera2 with respect to camera1, but I would like to obtain the metric translation. To help achieve this, I have two GPS modules, one on camera1 and one on camera2. For now, assume camera1's GPS is flawless and accurate, while camera2's GPS exhibits some XY noise; I need a way to combine the OpenCV pose estimate with this noisy GPS to get the final, accurate translation.
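For reference, the relative-pose module is roughly equivalent to the following sketch (variable names are illustrative; pts1 and pts2 are matched feature locations in pixels, as float N x 2 arrays):

```python
import cv2
import numpy as np

# Sketch of the relative-pose step: matched pixel coordinates are normalized
# with each camera's intrinsics, then the essential matrix is estimated with
# the five-point algorithm (RANSAC) and decomposed into R, t. Note that
# recoverPose returns t as a unit vector, so the translation is only known
# up to scale.
def relative_pose(pts1, pts2, K1, dist1, K2, dist2):
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, dist1)  # normalized coords
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, dist2)
    I = np.eye(3)  # intrinsics are identity after normalization
    E, inliers = cv2.findEssentialMat(n1, n2, I, method=cv2.RANSAC,
                                      prob=0.999, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, n1, n2, I, mask=inliers)
    return R, t  # pose of camera2 w.r.t. camera1, t only up to scale
```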
Given that info, my question has two parts:
Because the extrinsics between the cameras keep changing, would it be possible to use bundle adjustment to refine my pose?
And can I somehow incorporate my (noisy) GPS measurements in a bundle adjustment framework as an initial estimate, and obtain a more accurate estimate of metric translation as my end result?
1) No. Bundle adjustment serves a different purpose, and you would not be able to use it here anyway, because every pair you process with the five-point algorithm has its own unknown scale. You should instead use a perspective-n-point algorithm after the first pair of images.
2) Yes, it's called sensor fusion, and you first need to calibrate (or know) the transformation between your GPS sensor coordinates and your camera coordinates. There is an open source framework you can use.
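As a minimal illustration of the scale part only (this is not a full sensor-fusion pipeline, just a possible initial estimate), the GPS baseline between the two cameras can fix the unknown scale of the five-point translation:

```python
import numpy as np

# Illustration only: scale the up-to-scale translation from the five-point
# algorithm using the (noisy) GPS baseline between the two cameras, e.g. as
# an initial estimate for later refinement. gps1_xy / gps2_xy are the planar
# GPS positions of the cameras, already expressed in a common local metric
# frame (e.g. after a UTM conversion).
def metric_translation(t_unit, gps1_xy, gps2_xy):
    baseline = np.linalg.norm(np.asarray(gps2_xy) - np.asarray(gps1_xy))
    t_unit = np.asarray(t_unit, dtype=float).ravel()
    return t_unit / np.linalg.norm(t_unit) * baseline  # metric, in camera1 frame

# Example: unit translation from recoverPose, GPS says the cameras are ~3.2 m apart
print(metric_translation([0.9, 0.1, 0.42], (10.0, 4.0), (12.4, 6.1)))
```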
I have a problem in which I have a stationary video camera in a room and several videos from it, and I need to transform image coordinates into world coordinates.
What I know:
1. All the measurements of the room.
2. 16 image coordinates and their corresponding world coordinates.
The problem I encounter:
At first I thought I just needed to create a geometric transformation (according to http://xenia.media.mit.edu/~cwren/interpolator/), but the edges of the room are distorted in the image, and I can't calibrate the camera because I can't get access to the room or the camera.
Is there any way I can overcome these difficulties and measure distances in the room with some accuracy?
Thanks
You can calibrate the camera's lens distortion by first extracting the edges of your room and then finding the set of distortion parameters that minimizes the edge distortion.
There are a few works that implement this approach:
you can find a skeleton of the distortion-estimation procedure in R. Szeliski's book, but without an implementation;
alternatively, you can find a method + implementation (+ an online demo where you can upload your images) on IPOL.
Regarding the perspective distortion: after removing the lens distortion, just proceed with the link you found, applying that method to the image of the four corners of the room floor.
This will give you the mapping between an image pixel and a ground pixel (and thus the object's world coordinates, assuming you only want the X-Y coordinates). If you need a height measurement, then you need to find an object with a known height in your images to calibrate that too.
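Roughly, in OpenCV that ground-plane mapping could look like the following sketch (the corner coordinates below are placeholders, not real measurements):

```python
import cv2
import numpy as np

# Sketch of the ground-plane mapping: after undistortion, estimate a homography
# from the image of the room floor to floor (X, Y) coordinates. The values below
# are placeholders -- substitute your own measured correspondences. With all 16
# correspondences you can pass them together and add method=cv2.RANSAC to
# down-weight bad ones.
img_corners = np.array([[102, 645], [1180, 630],
                        [410, 300], [880, 305]], dtype=np.float64)    # pixels
floor_corners = np.array([[0.0, 0.0], [6.0, 0.0],
                          [0.0, 4.5], [6.0, 4.5]], dtype=np.float64)  # metres

H, _ = cv2.findHomography(img_corners, floor_corners)

def pixel_to_floor(u, v):
    p = np.array([[[float(u), float(v)]]], dtype=np.float64)
    return cv2.perspectiveTransform(p, H)[0, 0]   # (X, Y) on the floor, metres

print(pixel_to_floor(640, 500))
```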
I am working on an algorithm to estimate the height of detected people in a video, and I'm stuck.
The part that I have working is the detection of people using the HoG algorithm, so I have a bounding box for every person in the frame. And I have calibrated the camera, so I have my intrinsic and extrinsic camera parameters.
The problem is that now I have a formula for the perspective projection with 2 unknowns: height of the object and the distance from the object to the camera. I am using one mono web camera to detect people so I have no information about the distance from the object to the camera. And the height is what I'm trying to estimate, so I don't have that as well.
I know this problem is solvable if I use a kinect or a stereo camera in order to get the distance, but I'm limited to only one mono web camera.
Does anyone have an idea on how to approach this problem? I have read about using reference objects, but I can't figure out how to use them to solve my problem.
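To make the ambiguity concrete, with a simple pinhole model (f_px is the focal length in pixels from my calibration) the projected height depends only on the ratio of the person's height to their distance:

```python
# Pinhole-model illustration of the ambiguity described above (not a solution):
# the projected height in pixels depends only on the ratio H / Z, so a taller
# person farther away produces exactly the same bounding-box height.
def projected_height_px(H_m, Z_m, f_px):
    return f_px * H_m / Z_m

f_px = 1000.0
print(projected_height_px(1.80, 4.0, f_px))   # 450.0 px
print(projected_height_px(3.60, 8.0, f_px))   # 450.0 px -- same box, different height
```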