OpenCV: stitch images together

I am trying to stitch two images together, but only the first one can be seen in the final image.
Here's my code:
Mat result(1000, 1000, CV_8UC3);
Mat firstPart = result(Rect(0, 0, image1.cols, image1.rows));
Mat secondPart = result(Rect(deltaX, deltaY, image2.cols+deltaX, image2.rows+deltaY));
image1.copyTo(firstPart);
image2.copyTo(secondPart);
imshow("result", result);
image2 is only visible in the result if deltaX and deltaY are zero, and I can't figure out why (image2.cols + deltaX < 1000, and the same holds for deltaY).

Coming from Android, I assumed the parameters of Rect would be left, top, right, bottom, but they are left and top paired with width and height. Therefore it has to be
Rect(deltaX, deltaY, image2.cols, image2.rows)
instead of
Rect(deltaX, deltaY, image2.cols+deltaX, image2.rows+deltaY)
.
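For completeness, a minimal sketch of the corrected code (assuming image1 and image2 are CV_8UC3 and both ROIs fit inside the 1000x1000 canvas; the Scalar::all(0) just zero-initializes the background):
Mat result(1000, 1000, CV_8UC3, Scalar::all(0)); // black background
// ROI for image1 at the top-left corner: x, y, width, height
Mat firstPart = result(Rect(0, 0, image1.cols, image1.rows));
// ROI for image2 at the offset; width and height, not right/bottom coordinates
Mat secondPart = result(Rect(deltaX, deltaY, image2.cols, image2.rows));
image1.copyTo(firstPart);
image2.copyTo(secondPart);
imshow("result", result);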

Related

Image auto-cropping when rotating in OpenCV.js

I'm using OpenCV.js to rotate an image to the left and right, but it gets cropped when I rotate it.
This is my code:
let src = cv.imread('img');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
let center = new cv.Point(src.cols/2, src.rows/2);
let M = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete();
Here is an example (images omitted): my source image, the result I want, and what was actually returned, where the rotated image is cropped.
What should I do to fix this problem?
P.S.: I don't know how to use any language other than JavaScript.
A bit late but given the scarcity of opencv.js material I'll post the answer:
The function cv.warpAffine crops the image because it only applies the mathematical transformation, as documented by OpenCV and other sources; it does not grow the output to fit. If you wish to rotate to an arbitrary angle, you'll need to calculate the padding to compensate for that.
If you wish to only rotate in multiples of 90 degrees you could use cv.rotate as follows:
cv.rotate(src, dst, cv.ROTATE_90_CLOCKWISE);
Where src is the matrix with your source image, dst is the destination matrix (which can be defined empty, e.g. let dst = new cv.Mat();), and cv.ROTATE_90_CLOCKWISE is the rotate flag indicating the angle of rotation. There are three different options:
cv.ROTATE_90_CLOCKWISE
cv.ROTATE_180
cv.ROTATE_90_COUNTERCLOCKWISE
You can find which OpenCV functions are implemented in OpenCV.js in the repository's opencv_js.config.py file; if a function is marked as whitelisted there, it works in opencv.js even if it is not included in the opencv.js tutorial.
The info about how to use each function can be found in the general documentation. The order of the parameters generally follows the C++ signature (don't be distracted by the obscure C++ vector type syntax), and the names of the flags (like the rotate flag) are usually listed under the Python notes.
I was also experiencing this issue, so I had a look into #fernando-garcia's answer. However, I couldn't see that rotate had been implemented in opencv.js, so it seems the fix in the post #dan-mašek linked is the best solution for this, although the functions required are slightly different.
This is the solution I came up with (note: I haven't tested this exact code and there is probably a more elegant/efficient way of writing it, but it gives the general idea; also, it will only work for images rotated by multiples of 90°):
const canvas = document.getElementById('canvas');
const image = cv.imread(canvas);
let output = new cv.Mat();
const size = new cv.Size();
size.width = image.cols;
size.height = image.rows;
// To add transparent borders
const scalar = new cv.Scalar(0, 0, 0, 0);
let center;
let padding;
let height = size.height;
let width = size.width;
if (height > width) {
  center = new cv.Point(height / 2, height / 2);
  padding = (height - width) / 2;
  // Pad out the left and right before rotating to make the width the same as the height
  cv.copyMakeBorder(image, output, 0, 0, padding, padding, cv.BORDER_CONSTANT, scalar);
  size.width = height;
} else {
  center = new cv.Point(width / 2, width / 2);
  padding = (width - height) / 2;
  // Pad out the top and bottom before rotating to make the height the same as the width
  cv.copyMakeBorder(image, output, padding, padding, 0, 0, cv.BORDER_CONSTANT, scalar);
  size.height = width;
}
// Do the rotation
const rotationMatrix = cv.getRotationMatrix2D(center, 90, 1);
cv.warpAffine(
  output,
  output,
  rotationMatrix,
  size,
  cv.INTER_LINEAR,
  cv.BORDER_CONSTANT,
  new cv.Scalar()
);
let rectangle;
if (height > width) {
  rectangle = new cv.Rect(0, padding, height, width);
} else {
  /* These arguments might not be in the right order, as my solution only needed
   * height > width, so I've just assumed this is the order they'll need to be
   * for width >= height.
   */
  rectangle = new cv.Rect(padding, 0, height, width);
}
// Crop the image back to its original dimensions
output = output.roi(rectangle);
cv.imshow(canvas, output);

Flutter - How to rotate an image around the center with canvas

I am trying to implement a custom painter that can draw an image (scaled down version) on the canvas and the drawn image can be rotated and scaled.
I've learned that to scale the image I have to scale the canvas using the scale method.
Now the question is how to rotate the scaled image around its center (or any other point), since the rotate method of Canvas only allows rotation around the top-left corner.
Here is my implementation that can be extended
Had the same problem. The solution was simply to write your own rotation method in three lines:
void rotate(Canvas canvas, double cx, double cy, double angle) {
  canvas.translate(cx, cy);
  canvas.rotate(angle);
  canvas.translate(-cx, -cy);
}
We thus first move the canvas to the point you want to pivot around. We then rotate around the top-left corner (the default for Flutter), which in the shifted coordinate space is exactly the pivot you want, and then translate the canvas back to the desired position with the rotation applied. The method is very efficient, requiring only four additions for the translations, while the rotation cost is identical to the original one.
This can be achieved by shifting the coordinate space as illustrated in figure 1.
The translation is the difference in coordinates between C1 and C2, which is exactly the same as between A and B in figure 2.
With some geometry formulas, we can calculate the desired translation and produce the rotated image as in the method below.
ui.Image rotatedImage({ui.Image image, double angle}) {
  var pictureRecorder = ui.PictureRecorder();
  Canvas canvas = Canvas(pictureRecorder);
  final double r = sqrt(image.width * image.width + image.height * image.height) / 2;
  final alpha = atan(image.height / image.width);
  final beta = alpha + angle;
  final shiftY = r * sin(beta);
  final shiftX = r * cos(beta);
  final translateX = image.width / 2 - shiftX;
  final translateY = image.height / 2 - shiftY;
  canvas.translate(translateX, translateY);
  canvas.rotate(angle);
  canvas.drawImage(image, Offset.zero, Paint());
  return pictureRecorder.endRecording().toImage(image.width, image.height);
}
alpha, beta, and angle are all in radians.
Here is the repo of the demo app
If you don't want to rotate the image around its center, you can use this approach. You won't have to care about what the offset of the canvas should be in relation to the image rotation, because the canvas is moved back to its original position after the image is drawn.
void rotate(Canvas c, Image image, Offset focalPoint, Size screenSize, double angle) {
  c.save();
  c.translate(screenSize.width / 2, screenSize.height / 2);
  c.rotate(angle);
  // To rotate around the center of the image, the focal point is the
  // image width and height divided by 2
  c.drawImage(image, focalPoint * -1, Paint());
  c.translate(-screenSize.width / 2, -screenSize.height / 2);
  c.restore();
}

Crop half of an image in OpenCV

How can I crop an image and only keep the bottom half of it?
I tried:
Mat croppedFrame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work as well.
Here's the Python version for any beginners out there.
def crop_bottom_half(image):
    cropped_img = image[image.shape[0]//2:image.shape[0]]
    return cropped_img
The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));
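Note that frame(Rect(...)) only creates a header that shares the pixel data with frame; if you need an independent copy of the bottom half (so later changes to frame don't affect it), call clone() on the ROI. A small sketch:
// the ROI shares memory with frame; clone() makes a deep copy
Mat bottomHalfCopy = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2)).clone();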
To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image, you can simply select the start and end pixels. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right |
" : " means from the value on the left of the : to the value on the right
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image to the bottom. x goes from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image and increasing the Y value means downwards.
To get the width and height of an image you can use image.shape, which gives 3 values:
yMax, xMax, and the number of channels (which you probably won't use). To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest, or ROI.

How to count the pixels in a ROI in OpenCV

I have a cropped image of a coin, and I've already applied a mask so I can focus on the coin itself. Next, I want to count the number of pixels of this coin. I've already read similar posts, but they just don't seem to work for me.
here is the original image:
http://s30.postimg.org/eeh3lp99d/IMG_20150414_121300.jpg
and the cropped coin:
http://s3.postimg.org/4k2pdst73/cropped.png
Here is my code so far:
// get the number of pixels of the coin
// STEP 1: CROP THE COIN
// get the Rect containing the circle
Rect rectCircle(center.x - radius, center.y - radius, radius * 2, radius * 2);
// obtain the image ROI:
Mat roi(src_gray, rectCircle);
// make a black mask, same size:
Mat mask(roi.size(), roi.type(), Scalar::all(0));
// with a white, filled circle in it:
circle(mask, Point(radius, radius), radius, Scalar::all(255), -1);
// combine roi and mask:
cv::Mat coin_cropped = roi & mask;
How do i count the number of pixels of the cropped coin?
You need to use countNonZero.
countNonZero
Counts non-zero array elements.
C++: int countNonZero(InputArray src)
Use this on the mask matrix and it will return the number of non-zero pixels as an int. In your code it will look like this:
int numberOfPixelsInMask = cv::countNonZero(mask);
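If you want the number of non-black pixels of the masked coin itself rather than the area of the circular mask, the same function works on the masked image; a sketch, assuming coin_cropped is the single-channel grayscale result from the question (countNonZero only accepts single-channel arrays):
// counts every coin pixel whose grayscale value is not exactly zero
int coinPixels = cv::countNonZero(coin_cropped);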

OpenCV: warpPerspective on whole image

I'm detecting markers on images captured by my iPad. Because I want to calculate translations and rotations between them, I want to change the perspective of these images so that it looks like I'm capturing them from directly above the markers.
Right now I'm using
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));
Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));
Which gives me these results (look at the bottom-right corner for the result of warpPerspective):
As you can probably see, the result image contains the recognized marker in its top-left corner.
How can I do that? Maybe I should use rotation/translation vectors from solvePnP function?
EDIT:
Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the image.
For example, when I doubled the size using:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
I received these images:
Your code doesn't seem to be complete, so it is difficult to say what the problem is.
In any case, the warped image might have completely different dimensions compared to the input image, so you will have to adjust the size parameter you are using for warpPerspective.
For example try to double the size:
cv::warpPerspective(*_image, *_undistortedImage, M, 2*cv::Size(_image->cols, _image->rows));
Edit:
To make sure the whole image is inside this image, all corners of your original image must be warped to be inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.
To make it clearer, here is some sample code:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);
// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);
cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);
cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);
cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);
// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));
// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;
// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}
// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);
// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
I implemented littleimp's answer in Python in case anyone needs it. It should be noted that this will not work properly if the vanishing points of the polygons fall within the image.
import cv2
import numpy as np
from PIL import Image, ImageDraw
import math
def get_transformed_image(src, dst, img):
    # calculate the transformation
    mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))
    # new source: image corners
    corners = np.array([
        [0, img.size[0]],
        [0, 0],
        [img.size[1], 0],
        [img.size[1], img.size[0]]
    ])
    # transform the corners of the image
    corners_transformed = cv2.perspectiveTransform(
        np.array([corners.astype("float32")]), mat)
    # these transformed corners seem completely wrong / inverted on the x-axis
    print(corners_transformed)
    x_mn = math.ceil(min(corners_transformed[0].T[0]))
    y_mn = math.ceil(min(corners_transformed[0].T[1]))
    x_mx = math.ceil(max(corners_transformed[0].T[0]))
    y_mx = math.ceil(max(corners_transformed[0].T[1]))
    width = x_mx - x_mn
    height = y_mx - y_mn
    # scale so the output height is 1000 pixels
    analogy = height / 1000
    n_height = height / analogy
    n_width = width / analogy
    dst2 = corners_transformed
    dst2 -= np.array([x_mn, y_mn])
    dst2 = dst2 / analogy
    mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                       dst2.astype("float32"))
    img_warp = Image.fromarray(
        cv2.warpPerspective(np.array(img), mat2, (int(n_width), int(n_height))))
    return img_warp
# image coordinates
src= np.array([[ 789.72, 1187.35],
[ 789.72, 752.75],
[1277.35, 730.66],
[1277.35,1200.65]])
# known coordinates
dst=np.array([[0, 1000],
[0, 0],
[1092, 0],
[1092, 1000]])
# Create the image
image = Image.new('RGB', (img_width, img_height))
image.paste( (200,200,200), [0,0,image.size[0],image.size[1]])
draw = ImageDraw.Draw(image)
draw.line(((src[0][0],src[0][1]),(src[1][0],src[1][1]), (src[2][0],src[2][1]),(src[3][0],src[3][1]), (src[0][0],src[0][1])), width=4, fill="blue")
#image.show()
warped = get_transformed_image(src, dst, image)
warped.show()
There are two things you need to do:
Increase the size of the output of cv2.warpPerspective
Translate the warped source image such that its center matches the center of the cv2.warpPerspective output image
Here is how the code will look:
# center of source image in homogeneous (x, y, 1) coordinates
h, w = image.shape[:2]
si_c = [w // 2, h // 2, 1]
# find where the center of the source image lands after warping,
# without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]
# warping output image size (dsize is width, height)
stitched_frame_size = (2 * w, 2 * h)
# center of the warping output image
wf_c = (w, h)
# calculate offset for translation of the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
