Crop half of an image in OpenCV

How can I crop an image and only keep the bottom half of it?
I tried:
Mat croppedFrame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work either.

Here's the Python version for any beginners out there.
def crop_bottom_half(image):
    # integer division so the slice index is an int
    cropped_img = image[image.shape[0] // 2:image.shape[0]]
    return cropped_img

The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));

To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image, you simply select the start and end pixels of the image. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right |
" : " means from the value on the left of the : to the value on the right
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image down to the bottom; x runs from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image, and increasing the Y value means moving downwards.
To get the width and height of an image you can use image.shape, which gives three values:
yMax, xMax, and the number of channels (which you probably won't need). To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest, or ROI.
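Putting the pieces together, here is a minimal runnable sketch of the same ROI crop; the file name sample.jpg is just a placeholder for your own image:

import cv2

image = cv2.imread("sample.jpg")  # placeholder file name
height, width = image.shape[0:2]
bottom_half = image[height // 2:height, 0:width]  # ROI: the bottom half
cv2.imshow("bottom half", bottom_half)
cv2.waitKey(0)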

Related

How do I convert pixel/screen coordinates to cartesian coordinates?

How do I convert pixel/screen coordinates to cartesian coordinates(x,y)?
The info I have on the pictures is (see image below):
vFov in degrees, hFov in degrees, pixel width, pixel height
Basically what I want is to take any pixel on the image, and calculate the pixel position from image center in degrees.
For my answer I assume that your image represents a projection onto a planar surface.
Then a virtual camera can be constructed such that it sees the width/height of the image at exactly the right field of view. To get the distance d between the image and the camera (in pixels), a right triangle can be constructed:
tan(FOV/2) = width/2 / d → d = width/(2tan(FOV/2))
The same equation should hold for the height.
In a similar way the angle of the pixel can be calculated (assuming the center of the image is (0, 0)):
tan(angleX) = x/d → angleX = arctan(x/d) = arctan(x/width * 2tan(hFov/2))
tan(angleY) = y/d → angleY = arctan(y/d) = arctan(y/height * 2tan(vFov/2))
In case the image is warped, the d's for the horizontal and vertical directions can differ, and therefore you should not precompute a single d.
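As a small illustration of the formulas above, here is a Python sketch under the same planar-projection assumption; the function name and the choice to work in degrees are mine, not the answer's:

import math

def pixel_to_angles(px, py, width, height, hfov_deg, vfov_deg):
    # offsets from the image centre (y grows downwards in image coordinates)
    x = px - width / 2.0
    y = py - height / 2.0
    # distance between the image plane and the virtual camera, in pixels
    d_h = width / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    d_v = height / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))
    angle_x = math.degrees(math.atan(x / d_h))
    angle_y = math.degrees(math.atan(y / d_v))
    return angle_x, angle_y

print(pixel_to_angles(1000, 200, 1280, 720, 60, 40))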

Image auto cropping when rotate in OpenCV.js

I'm using OpenCV.js to rotate an image to the left and right, but it gets cropped when I rotate it.
This is my code:
let src = cv.imread('img');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
let center = new cv.Point(src.cols/2, src.rows/2);
let M = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete();
Here is an example:
This is my source image:
This is what I want:
But it returned like this:
What should I do to fix this problem?
P.S.: I don't know any languages other than JavaScript.
A bit late but given the scarcity of opencv.js material I'll post the answer:
The function cv.warpAffine crops the image because it only applies the mathematical transformation, as documented by OpenCV and other sources. If you wish to rotate to an arbitrary angle, you'll need to calculate the padding required to compensate for that (a sketch of that idea follows this answer).
If you wish to only rotate in multiples of 90 degrees you could use cv.rotate as follows:
cv.rotate(src, dst, cv.ROTATE_90_CLOCKWISE);
Here src is the matrix with your source image, dst is the destination matrix (which can be created empty with let dst = new cv.Mat();), and cv.ROTATE_90_CLOCKWISE is the rotate flag indicating the angle of rotation. There are three options:
cv.ROTATE_90_CLOCKWISE
cv.ROTATE_180
cv.ROTATE_90_COUNTERCLOCKWISE
You can find which OpenCV functions are implemented in OpenCV.js in the repository's opencv_js.config.py file; if a function is listed as whitelisted there, it works in opencv.js even if it is not covered in the opencv.js tutorial.
Information on how to use each function is in the general documentation: the order of the parameters generally follows the C++ signature (don't be distracted by the obscure C++ vector-type syntax), and the names of the flags (like the rotate flag) are usually shown in the Python section.
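For arbitrary angles, the padding mentioned in this answer amounts to enlarging the output canvas and shifting the rotation matrix so the result stays inside it. A minimal Python sketch of that idea (illustrative only; it is not the OpenCV.js code from the question):

import cv2
import numpy as np

def rotate_without_cropping(image, angle_degrees):
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    M = cv2.getRotationMatrix2D(center, angle_degrees, 1.0)
    # bounding box of the rotated image
    cos, sin = abs(M[0, 0]), abs(M[0, 1])
    new_w = int(h * sin + w * cos)
    new_h = int(h * cos + w * sin)
    # shift the rotation so the result is centred in the enlarged canvas
    M[0, 2] += new_w / 2 - center[0]
    M[1, 2] += new_h / 2 - center[1]
    return cv2.warpAffine(image, M, (new_w, new_h))

rotated = rotate_without_cropping(np.zeros((200, 300, 3), np.uint8), 45)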
I was also experiencing this issue, so I had a look at #fernando-garcia's answer. However, I couldn't see that rotate had been implemented in opencv.js, so it seems the fix in the post #dan-mašek links to is the best solution here, although the functions required are slightly different.
This is the solution I came up with (note: I haven't tested this exact code and there is probably a more elegant/efficient way of writing it, but it gives the general idea; also, it will only work for images rotated by multiples of 90°):
const canvas = document.getElementById('canvas');
const image = cv.imread(canvas);
let output = new cv.Mat();
const size = new cv.Size();
size.width = image.cols;
size.height = image.rows;
// To add transparent borders
const scalar = new cv.Scalar(0, 0, 0, 0);
let center;
let padding;
let height = size.height;
let width = size.width;
if (height > width) {
  center = new cv.Point(height / 2, height / 2);
  padding = (height - width) / 2;
  // Pad out the left and right before rotating to make the width the same as the height
  cv.copyMakeBorder(image, output, 0, 0, padding, padding, cv.BORDER_CONSTANT, scalar);
  size.width = height;
} else {
  center = new cv.Point(width / 2, width / 2);
  padding = (width - height) / 2;
  // Pad out the top and bottom before rotating to make the height the same as the width
  cv.copyMakeBorder(image, output, padding, padding, 0, 0, cv.BORDER_CONSTANT, scalar);
  size.height = width;
}
// Do the rotation
const rotationMatrix = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(
  output,
  output,
  rotationMatrix,
  size,
  cv.INTER_LINEAR,
  cv.BORDER_CONSTANT,
  new cv.Scalar()
);
let rectangle;
if (height > width) {
  rectangle = new cv.Rect(0, padding, height, width);
} else {
  /* These arguments might not be in the right order as my solution only needed height
   * > width, so I've just assumed this is the order they'll need to be for width >=
   * height.
   */
  rectangle = new cv.Rect(padding, 0, height, width);
}
// Crop the image back to its original dimensions
output = output.roi(rectangle);
cv.imshow(canvas, output);

iOS - Image scaling and positioning in larger image

My question is about calculating a position. When the INNER IMAGE is scaled in the FRONT END, I need to find the corresponding position in the BACKGROUND PROCESS. If anyone has experience with this, please share an equation or something similar.
The FRONT END FRAME IMAGE has a size of 188x292 (width x height)
and the larger FRAME IMAGE has a size of 500x750 (width x height).
The INNER IMAGE is 75x75 (width x height) and the larger INNER IMAGE is
199.45x199.45 (width x height).
Question: when I scale the INNER IMAGE in the FRONT END, say from 75x75 to 100x100, I have its x and y position. I need to calculate the exact corresponding position in the BACKGROUND PROCESS so that I can scale that image programmatically.
After scaling the INNER IMAGE you will have its x and y position; now convert it to a percentage.
If the position of the INNER IMAGE is (x, y):
relativeX = (x * 100)/frameImageWidth;
relativeY = (y * 100)/frameImageHeight;
then the position of the INNER IMAGE for the background will be:
x = (relativeX * backgroundFrameImageWidth)/100;
y = (relativeY * backgroundFrameImageHeight)/100;
CGPoint foregroundLocation = CGPointMake(x, y);
static float xscale = BACKGROUND_IMAGE_WIDTH / FOREGROUND_IMAGE_WIDTH;
static float yscale = BACKGROUND_IMAGE_HEIGHT / FOREGROUND_IMAGE_HEIGHT;
CGPoint backgroundLocation = CGPointApplyAffineTransform(foregroundLocation, CGAffineTransformMakeScale(xscale, yscale));
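The same proportional mapping as a tiny Python sketch; the helper name map_point is made up, and the sizes come from the question (front-end frame 188x292, background frame 500x750):

def map_point(x, y, src_size, dst_size):
    # proportionally map a point from the front-end frame to the background frame
    return x * dst_size[0] / src_size[0], y * dst_size[1] / src_size[1]

bg_x, bg_y = map_point(40.0, 60.0, (188, 292), (500, 750))
print(bg_x, bg_y)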

OpenCV: warpPerspective on whole image

I'm detecting markers in images captured by my iPad. Because I want to calculate translations and rotations between them, I want to change the perspective of these images so it looks like I'm capturing them from directly above the markers.
Right now I'm using
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));
Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));
Which gives me these results (look at the bottom-right corner for the result of warpPerspective):
As you can probably see, the result image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping) so I can detect other markers in it later.
How can I do that? Maybe I should use rotation/translation vectors from solvePnP function?
EDIT:
Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the image.
For example, when I doubled the size using:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
I received these images:
Your code doesn't seem to be complete, so it is difficult to say what the problem is.
In any case, the warped image might have completely different dimensions compared to the input image, so you will have to adjust the size parameter you are using for warpPerspective.
For example try to double the size:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
Edit:
To make sure the whole input image ends up inside the output, all corners of your original image must be warped to lie inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.
To make it more clear some sample code:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);
// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);
cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);
cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);
cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);
// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));
// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;
// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}
// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);
// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
I implemented littleimp's answer in Python in case anyone needs it. Note that this will not work properly if the vanishing points of the polygons fall within the image.
import cv2
import numpy as np
from PIL import Image, ImageDraw
import math
def get_transformed_image(src, dst, img):
    # calculate the transformation
    mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))
    # new source: image corners
    corners = np.array([
        [0, img.size[0]],
        [0, 0],
        [img.size[1], 0],
        [img.size[1], img.size[0]]
    ])
    # Transform the corners of the image
    corners_transformed = cv2.perspectiveTransform(
        np.array([corners.astype("float32")]), mat)
    # These transformed corners seem completely wrong/inverted x-axis
    print(corners_transformed)
    x_mn = math.ceil(min(corners_transformed[0].T[0]))
    y_mn = math.ceil(min(corners_transformed[0].T[1]))
    x_mx = math.ceil(max(corners_transformed[0].T[0]))
    y_mx = math.ceil(max(corners_transformed[0].T[1]))
    width = x_mx - x_mn
    height = y_mx - y_mn
    analogy = height / 1000
    n_height = height / analogy
    n_width = width / analogy
    dst2 = corners_transformed
    dst2 -= np.array([x_mn, y_mn])
    dst2 = dst2 / analogy
    mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                       dst2.astype("float32"))
    img_warp = Image.fromarray(
        cv2.warpPerspective(np.array(img),
                            mat2,
                            (int(n_width),
                             int(n_height))))
    return img_warp
# image coordinates
src= np.array([[ 789.72, 1187.35],
[ 789.72, 752.75],
[1277.35, 730.66],
[1277.35,1200.65]])
# known coordinates
dst=np.array([[0, 1000],
[0, 0],
[1092, 0],
[1092, 1000]])
# Create the image (the canvas size below is an assumed value, chosen just large enough
# to contain the src points above; the original post left img_width and img_height undefined)
img_width, img_height = 1600, 1300
image = Image.new('RGB', (img_width, img_height))
image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
draw = ImageDraw.Draw(image)
draw.line(((src[0][0],src[0][1]),(src[1][0],src[1][1]), (src[2][0],src[2][1]),(src[3][0],src[3][1]), (src[0][0],src[0][1])), width=4, fill="blue")
#image.show()
warped = get_transformed_image(src, dst, image)
warped.show()
There are two things you need to do:
Increase the size of the output of cv2.warpPerspective
Translate the warped source image such that the center of the warped source image matches the center of the cv2.warpPerspective output image
Here is how the code will look:
import cv2
import numpy as np

# center of the source image in homogeneous (x, y, 1) coordinates
h, w = image.shape[:2]
si_c = [w // 2, h // 2, 1]
# find where the center of the source image lands after warping, without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]
# warping output image size (width, height)
stitched_frame_size = (2 * w, 2 * h)
# center of the warping output image
wf_c = (w, h)
# calculate offset for translation of the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)

opencv: stitch images together

I am trying to stitch two images together, but only the first one can be seen in the final image.
Here's my code:
Mat result(1000, 1000, CV_8UC3);
Mat firstPart = result(Rect(0, 0, image1.cols, image1.rows));
Mat secondPart = result(Rect(deltaX, deltaY, image2.cols+deltaX, image2.rows+deltaY));
image1.copyTo(firstPart);
image2.copyTo(secondPart);
imshow("result", result);
image2 is only visible in the result if deltaX and deltaY are zero, and I can't figure out why (image2.cols + deltaX < 1000, same for deltaY).
Coming from Android, I assumed the parameters of Rect would be left, top, right, bottom, but they are left and top paired with width and height. Therefore it has to be
Rect(deltaX, deltaY, image2.cols, image2.rows)
instead of
Rect(deltaX, deltaY, image2.cols+deltaX, image2.rows+deltaY).
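For Python readers, the same idea expressed with NumPy slicing; image1, image2, dx and dy below are dummy stand-ins for the variables in the question:

import cv2
import numpy as np

image1 = np.full((200, 300, 3), 255, dtype=np.uint8)  # stand-in for image1
image2 = np.full((150, 250, 3), 128, dtype=np.uint8)  # stand-in for image2
dx, dy = 120, 80  # deltaX, deltaY

result = np.zeros((1000, 1000, 3), dtype=np.uint8)
result[0:image1.shape[0], 0:image1.shape[1]] = image1
# slice extents are start + size, not the bottom-right corner
result[dy:dy + image2.shape[0], dx:dx + image2.shape[1]] = image2
cv2.imshow("result", result)
cv2.waitKey(0)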
