OpenCV perspectiveTransform correct target width and height? - image-processing

I am currently working with OpenCV and its perspective transformation functions. I'd like to find a way to accurately determine the target rectangle based on the data (the source image) I have.
I already found this thread: https://stackoverflow.com/questions/7199116/perspective-transform-how-to-find-a-target-width-and-height
It states that it is not possible to determine the correct aspect ratio from the data contained in the source image alone, but is there at least a good algorithm to get a reasonable estimate?

No, there isn't a way to do it from just the image alone. Imagine you were taking a picture of an A4 sheet of paper resting on a table, but viewing it from a near-horizontal angle. If you used the aspect ratio as it appears in the image, you'd end up with a really long, thin rectangle.
However, if you know the pose of the camera relative to the target (i.e. the rotation matrix) and the camera intrinsic parameters, then you can recover the aspect ratio.
Have a look at this paper (it's actually really interesting, though the English isn't the best): equation (20) is the key one. Also, look at this blog post where someone's implemented the approach.
If you don't know the orientation of the camera, then the best bet is to put in some sort of aspect ratio that is at least in the right ballpark. If you have any other information about the rectangle, use it (for example, if I were always taking photos of A0/A1/A2/... pieces of paper, these have a known fixed aspect ratio).
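If you do end up warping with a ballpark or known ratio, a minimal sketch in Python/OpenCV of the warp itself might look like this (the corner ordering and the warp_to_rectangle helper are my own assumptions for illustration, not something from the linked paper):

import cv2
import numpy as np

def warp_to_rectangle(image, corners, aspect_ratio=None):
    # corners: 4x2 array ordered top-left, top-right, bottom-right, bottom-left.
    # aspect_ratio: width/height if known (e.g. 210.0/297 for A4); if None,
    # fall back to measured edge lengths, which is only a rough estimate.
    tl, tr, br, bl = corners.astype(np.float32)
    width = max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl))
    height = max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr))
    if aspect_ratio is not None:
        height = width / aspect_ratio  # trust the known ratio instead
    dst = np.float32([[0, 0], [width - 1, 0],
                      [width - 1, height - 1], [0, height - 1]])
    M = cv2.getPerspectiveTransform(np.float32([tl, tr, br, bl]), dst)
    return cv2.warpPerspective(image, M, (int(round(width)), int(round(height))))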
good luck!

Related

Fit CVPixelBuffer into a square image with resizing and aspect ratio saving

I have an image that should be preprocessed before passing it to CoreML: resized to a 640x640 square while preserving the aspect ratio (check the image). I found a lot of helpful links about resizing using vImageScale_*, but haven't found anything similar about adding coloured padding to the resized image.
I know that Vision has the scaleFit option, but its final output is a bit different, so I'm trying to centre the image myself.
The vImageScale_* functions scale to fit the destination, so the aspect ratio will change if the source and destination are different ratios.
vImage provides affine transform operations (with support for a background colour!). Take a look at https://developer.apple.com/documentation/accelerate/applying_geometric_transforms_to_images for more information. The final example, Apply a Complex Affine Transform to a vImage Buffer, does exactly what you need (you just need to remove the rotate step).
The colored padding is just going to be something like vImageBufferFill_ARGB8888, and the middle bits will be vImageScale_ARGB8888. Since the scale operation works with integer heights, widths and origin, you won't need to worry about antialiasing the content at the seams between regions.
Note that this is also relatively easy to do with CGBitmapContextRef and CGContextDrawImage(CGContextRef, CGRect, CGImageRef), though CG just uses a Lanczos2 resampler. You will have fewer details to micromanage with image orientation and colorspace conversion, and it allows for fractional pixel placement.
You can also get fractional pixel placement with vImage, but you will need to use AffineWarp or the shear functions. For ML, I'd probably not attempt to do too much image manipulation in case the ringing has some odd effect. A simple low-pass filter + bilinear interpolation might be something to try. This will be a bit blurrier to the human eye but might avoid feeding artifacts to the training workload. Presumably there is ample literature about ML image preprocessing to guide you.
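For what it's worth, here is the same scale-then-pad geometry sketched in Python/OpenCV rather than vImage, just to illustrate the idea (the 640x640 target and the grey padding colour are assumptions):

import cv2
import numpy as np

def letterbox(image, size=640, pad_color=(114, 114, 114)):
    # Resize to fit inside a size x size square, preserving aspect ratio,
    # then centre the result on a solid-colour canvas.
    h, w = image.shape[:2]
    scale = size / max(h, w)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((size, size, 3), pad_color, dtype=np.uint8)
    x0, y0 = (size - new_w) // 2, (size - new_h) // 2
    canvas[y0:y0 + new_h, x0:x0 + new_w] = resized
    return canvas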

How to find the distance of the camera to an object with known physical size on iOS?

Assume that you have a square, red-coloured piece of paper with a side length of 5 centimetres.
I can detect its size (a bounding box, which is good enough) in pixels in the image that the camera takes.
I know the physical size.
On the other hand,
I do not know how to use the camera image's pixel size in the formula with the actual physical size of the paper. To do this, I would perhaps need a constant that will come from the specifications of the camera.
I believe each iOS model has a different calibration (and probably even individual units of the same model differ slightly from each other?).
I think, if somehow I can get that information from the camera, I can map certain things and use the ratio to find the distance to a specific object.
Do you see any problems in the above ideas?
What would be the best way to get that constant from the camera? iOS Camera API? Announced specifications of the lens? Individually measuring and comparing size of the box on the image at the same camera distance on different iOS device models and saving those values per model type?
If none of these ideas makes sense, what would you recommend I look into? I appreciate the time you have taken to read this question and comment on it.
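For reference, the constant being described is essentially the focal length in pixels from the camera's intrinsic matrix; a back-of-the-envelope sketch of the pinhole relation, with made-up numbers, would be:

# Pinhole model: object_size_px / focal_px = real_size / distance
# => distance = focal_px * real_size / object_size_px
focal_px = 3000.0        # focal length in pixels (from the intrinsics; placeholder)
real_size_cm = 5.0       # known side length of the red square
object_size_px = 150.0   # measured side length in the image (placeholder)
distance_cm = focal_px * real_size_cm / object_size_px
print(distance_cm)       # 100.0 cm with these made-up numbers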

Measuring an object from a picture using a known object size

So what I need to do is measure a foot's length from an image taken by an ordinary user. That image will contain a foot wearing a black sock, a coin (or another object of known size), and a white sheet of paper (e.g. A4) on which the other two objects will rest.
What do I already have?
- I have already worked with OpenCV, but only on simple projects;
- I have already started reading some articles about camera calibration ("Learning OpenCV"), but still don't know whether I need to go that far.
What I need now is some orientation, because I still don't understand whether I'm on the right track to solve this problem. I have some questions: will I really need to calibrate the camera to get two or three measurements of the foot? How can I find the points of interest that define the line to measure? Is each picture a different case, or are there techniques to follow?
PS: sorry about my English, I really have to improve it :-/
First, some image acquisition things:
Can you count on the black sock and white background? The colors don't matter as much as the high contrast between the sock and background.
Can you standardize the viewing angle? Looking directly down at the foot will reduce perspective distortion.
Can you standardize the lighting of the scene? That will ease a lot of the processing discussed below.
Lastly, you'll get a better estimate if you zoom (or position the camera closer) so that the foot fills more of the image frame.
Analysis. (Note that this discussion is directed at your question of identifying the axes of the foot. Identifying and analyzing the coin would use a similar process, but some differences would arise.)
The next task is to isolate the region of interest (ROI). If your camera is looking down at the foot, then the ROI can be limited to the white rectangle. My answer to this Stack Overflow post is a good start to square/rectangle identification: What is the simplest *correct* method to detect rectangles in an image?
If the foot lies completely within the white rectangle, you can clip the image to the rect found in step #1. This will limit the image analysis to the region inside the white paper.
"Binarize" the image using a threshold function: http://opencv.willowgarage.com/documentation/cpp/miscellaneous_image_transformations.html#cv-threshold. If you choose the threshold parameters well, you should be able to reduce the image to a black region (sock pixels) and white regions (non-sock pixels).
Now the fun begins: you might try matching contours, but if this were my problem, I would use bounding boxes for a quick solution or moments for a more interesting (and possibly robust) solution.
Use cvFindContours to find the contours of the black (sock) region: http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
Use cvApproxPoly to convert the contour to a polygonal shape http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#approxpoly
For the simple solution, use cvMinAreaRect2 to find an arbitrarily oriented bounding box for the sock shape. The short axis of the box should correspond to the line in largura.jpg (width) and the long axis of the box should correspond to the line in comprimento.jpg (length).
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#minarearect2
If you want more (possible) accuracy, you might try cvMoments to compute the moments of the shape. http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#moments
Use cvGetSpatialMoment to determine the axes of the foot. More information on the spatial moment may be found here: http://en.wikipedia.org/wiki/Image_moments#Examples_2 and here http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#getspatialmoment
With the axes known, you can then rotate the image so that the long axis is axis-aligned (i.e. vertical). Then, you can simply count pixels horizontally and vertically to obtain the lengths of the lines. Note that there are several assumptions in this moment-oriented process. It's a fun solution, but it may not provide any more accuracy - especially since the accuracy of your size measurements is largely dependent on the camera positioning issues discussed above.
Lastly, I've provided links to the older C interface. You might take a look at the new C++ interface (I simply have not gotten around to migrating my code to 2.4)
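For reference, a rough sketch of the binarize/contour/minAreaRect steps in the modern Python bindings (the threshold value, file name, and the assumption that the sock is the largest dark blob inside the paper are mine, not part of the original steps):

import cv2

roi = cv2.imread("foot_on_paper.jpg", cv2.IMREAD_GRAYSCALE)  # already clipped to the paper
# Binarize: sock becomes foreground (white), paper becomes background.
_, mask = cv2.threshold(roi, 100, 255, cv2.THRESH_BINARY_INV)
# Find the largest contour, assumed to be the sock/foot (OpenCV 4.x signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
foot = max(contours, key=cv2.contourArea)
# Arbitrarily oriented bounding box: long side ~ foot length,
# short side ~ foot width (both still in pixels here).
(cx, cy), (w, h), angle = cv2.minAreaRect(foot)
length_px, width_px = max(w, h), min(w, h)
print(length_px, width_px)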
Antonio Criminisi likely wrote the last word on this subject years ago. See his "Single View Metrology" paper, and his PhD thesis if you have time.
You don't have to calibrate the camera if you have a known-size object in your image. Well... at least if your camera doesn't distort too much and if you're not expecting high quality measurements.
A simple approach would be to detect a white (perspective-distorted) rectangle, mapping the corners to an undistorted rectangle (using e.g. cv::warpPerspective()) and use the known size of that rectangle to determine the size of other objects in the picture. But this only works for objects in the same plane as the paper, preferably not too far away from it.
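As an illustration of that approach (the A4 size, the pixels-per-millimetre resolution, and the corner coordinates below are placeholders):

import cv2
import numpy as np

PX_PER_MM = 4                                  # resolution of the rectified image (assumption)
W, H = 210 * PX_PER_MM, 297 * PX_PER_MM        # A4 sheet in rectified pixels

paper_corners = np.float32([[412, 318], [1605, 301],     # TL, TR (made up)
                            [1688, 1922], [355, 1940]])  # BR, BL (made up)
dst = np.float32([[0, 0], [W - 1, 0], [W - 1, H - 1], [0, H - 1]])

M = cv2.getPerspectiveTransform(paper_corners, dst)
rectified = cv2.warpPerspective(cv2.imread("photo.jpg"), M, (W, H))
# Any pixel distance measured in `rectified` between points that lie on the
# paper plane converts to millimetres by dividing by PX_PER_MM.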
I am not sure if you need to build this yourself, but if you just need to do it rather than code it, you can use KLONK Image Measurement for this. There are free and paid versions.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing a, in my opinion, rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (it could probably be better, because the photo this one originated from was quite poorly lit):
(Note how the template was scaled to roughly fit the size of the image.)
The ultimate goal is a tool which determines whether the user is showing a thumbs-up or thumbs-down gesture, and also angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think a smaller pattern with a smaller white area fits fully into the white area of the picture, thus creating the best matching score, although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original blob (the one including the thumb) from it. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to its original approximate size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction (a rough code sketch of this idea appears at the end of this answer).
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might also use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
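As a rough sketch of the erode/dilate idea above (the kernel size, iteration count, and the assumption of a clean binary hand mask are mine):

import cv2
import numpy as np

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # binary hand blob
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
# Morphological opening: erode until the thumb disappears, then dilate the fist back.
fist = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=3)
# Thumb = original blob minus the reconstructed fist.
thumb = cv2.subtract(mask, fist)
# Direction: vector from the fist's centre of mass to the thumb's.
m_fist, m_thumb = cv2.moments(fist), cv2.moments(thumb)
fist_c = np.array([m_fist["m10"] / m_fist["m00"], m_fist["m01"] / m_fist["m00"]])
thumb_c = np.array([m_thumb["m10"] / m_thumb["m00"], m_thumb["m01"] / m_thumb["m00"]])
dx, dy = thumb_c - fist_c
print(np.degrees(np.arctan2(dy, dx)))  # angle of the thumb relative to the fist centre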
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains values that indicate how well the template fits the image at each location. If you use CV_TM_SQDIFF then the lowest value in the result array is at the location of best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit value you find for each template image, and the template image that has the best fit overall is the one you want to select.
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar feature based classifiers, or one of the others available in OpenCV
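A sketch of that brute-force search over rotated templates (the angle step, the TM_CCOEFF_NORMED choice, and the file names are assumptions; the template is assumed to be pre-scaled and no larger than the image):

import cv2

image = cv2.imread("hand_binary.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("thumb_template.png", cv2.IMREAD_GRAYSCALE)

best_angle, best_score = None, -1.0
for angle in range(0, 360, 15):
    h, w = template.shape
    # Rotate the template about its centre (background filled with black).
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    result = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_angle = max_val, angle

print(best_angle, best_score)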
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
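A sketch of that idea on a heavily downsampled mask so Floyd-Warshall stays tractable (the 64x64 grid, 8-connectivity, and SciPy usage are my assumptions):

import cv2
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)
small = cv2.resize(mask, (64, 64), interpolation=cv2.INTER_NEAREST) > 0

# Sample the white pixels and build an 8-connected grid graph over them.
ys, xs = np.nonzero(small)
points = list(zip(ys, xs))
index = {p: i for i, p in enumerate(points)}
graph = lil_matrix((len(points), len(points)))
for i, (y, x) in enumerate(points):
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            j = index.get((y + dy, x + dx))
            if j is not None and j != i:
                graph[i, j] = np.hypot(dy, dx)

# All-pairs shortest paths (Floyd-Warshall), then take the farthest-apart pair.
dist = shortest_path(graph.tocsr(), method="FW", directed=False)
dist[~np.isfinite(dist)] = -1.0
i, j = np.unravel_index(np.argmax(dist), dist.shape)
print(points[i], points[j])  # endpoints (row, col) of the longest geodesic through white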

How would you find the height of objects given an image?

This isn't exactly a programming question. I just want to know what your approach would be to a common problem in digital image processing.
Let's say you have an image of a few trees in say jpg format. How would you go about finding the heights of each of these trees? The photo is the only input you have.
I want to know the approaches you would take, not code. So it doesn't matter if your answers are vague or not DIP-ish.
Small correction:
The height need not be the actual height of the tree. The height can be taken to any scale, but it should be consistent for all objects in the picture.
Yes, it is possible. What you are describing has an entire industry around it, called Photogrammetry.
There is a fair amount of computer vision research in this area. Assuming you don't know the camera constraints, you'll have to make assumptions about the scene and camera to determine the heights up to a scale factor. Note that without camera constraints or a reference height in the image it is impossible to tell the difference between a tall tree photographed from a distance or a short tree photographed up close. A great start is the Single View Metrology work by Criminisi.
It is simple to find the size of an object from images using Photogrammetry.
Photogrammetry is the science of making measurements from photographs.
For this we need to know two things,
the distance between the camera and the object (the camera-to-object distance), and
the focal length (in mm, and in pixels per mm) or the physical size of the image sensor.
Following are the steps:
Calibrate the Camera
Use OpenCV to calibrate the camera. You can use the OpenCV calibrate.py tool and the chessboard pattern PNG provided in the source code to generate a calibration matrix. Camera calibration is done to find the camera parameters. I took about a dozen photos of the chessboard from as many angles as I could with my webcam (to calibrate it). For more details, check the OpenCV camera calibration documentation.
We will get f_x, f_y, c_x, c_y from the calibration matrix.
Checking the details of the photos you took, you will find the native resolution of the photos (height x width), and in their EXIF headers you can find the focal length value (f). These items may vary depending on your camera.
Pixels per millimeter
We need to know the pixels per millimeter(px/mm) on the image sensor.
f_x = f * m_x
f_y = f * m_y
Since we know two of the variables in each formula, we can solve for m_x and m_y. I just averaged f_x and f_y to get f_xy.
m = f_xy / focal_length_of_camera
Insert the image
Take the image from which you need to find the actual size of the object. You should know the distance between the object and the camera. Find the dimensions of this image (height1 x width1).
Find the Object size in pixels
Determine the size of the object in pixels. I simply used the distance formula to find the length of a selected line. You can adopt any other method.
Convert px/mm to the lower resolution
pxpermm_in_lower_resolution = (width1*m)/width
Size of object in the image sensor
size_of_object_in_image_sensor = object_size_in_pixels/(pxpermm_in_lower_resolution)
Actual size of object
The actual size of the object can then be found from the above data as:
real_size = (dist*size_of_object_in_image_sensor)/focal_length
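Putting the formulas together with made-up numbers (swap in your own calibration output, EXIF focal length, and measurements; every value below is a placeholder):

f_x, f_y = 3200.0, 3210.0          # from the calibration matrix (pixels)
focal_length = 4.2                 # focal length from the EXIF headers (mm)
width, height = 4032, 3024         # native resolution used for calibration
width1 = 1920                      # width of the lower-resolution measurement photo
dist = 500.0                       # known camera-to-object distance (mm)
object_size_in_pixels = 640.0      # measured size of the object in that photo

f_xy = (f_x + f_y) / 2.0
m = f_xy / focal_length                                   # pixels per mm on the sensor
pxpermm_in_lower_resolution = (width1 * m) / width
size_of_object_in_image_sensor = object_size_in_pixels / pxpermm_in_lower_resolution
real_size = (dist * size_of_object_in_image_sensor) / focal_length
print(real_size)                   # millimetres, under these assumptions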
Assuming they're all the same distance away and all to scale, you'd want to find a single unit of measurement you can guarantee. For example, if there's a person in the photo (again at the same scale) and you know they're exactly 6 feet tall, you use that as your measure. You then take that and count how many of them, stacked, make up the tree. For example, if you need 3.5 of this person, then:
3.5 * 6 = 21
gives you a 21 foot tall tree.
Without a single point of reference for everything, or if they're all on different scales, you would need a lot more information than you could easily get without having been there.
I would rely on an object of known dimensions to be present in the picture. For instance, a man.
Or perhaps we could use the EXIF data to reverse-engineer the size of the object based on the camera's sensor dimensions, the lens, and the focal length used. This again depends on the angle. We should get the most accurate results when the camera has been held perpendicular to the subject.
If your image is 3x3 and you want to find out its size: 3x3 = 9 pixels, and at one bit per pixel that is 9/8 = (___) bytes; divide by 1024 to get the size in KB.
If you want to find the size of the image in MB, following the example above, divide by 1024 again: ((9/8)/1024)/1024 = (----) MB.
So you will get the result in MB.
