I am having an input image like this
Cropping at the red points is easy since they form a rectangle. How can I crop if the red points 2, 3, 6 and 7 are moved to the green points dynamically? These points may change, so how can I crop dynamically in the program?
The result may look like this
I tried warpPerspective but I was unable to get the expected result.
The program was like this:
import matplotlib.pyplot as plt
import numpy as np
import cv2
img = cv2.imread('sudoku_result.png')
pts1 = np.float32([[100,60],[260,60],[100,180],[260,180],[100,300],[260,300]])
pts2 = np.float32([[20,60],[340,60],[60,180],[300,180],[100,300],[260,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(360,360))
plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(dst),plt.title('Output')
plt.show()
I am new to image processing and would like to know which is the best method.
Crop the enclosing rectangle (the one created by (minX, minY, maxX, maxY)), then for each pixel in the cropped image check whether it lies inside the polygon created by the original points; for the pixels outside the original shape, put zero.
The code:
import cv2
import numpy as np

# Read the image
I = cv2.imread('i.png')

# Define the polygon coordinates to use for the crop
polygon = [[[20,110],[450,108],[340,420],[125,420]]]

# First find the minX, minY, maxX and maxY of the polygon
minX = I.shape[1]
maxX = -1
minY = I.shape[0]
maxY = -1
for point in polygon[0]:
    x = point[0]
    y = point[1]
    if x < minX:
        minX = x
    if x > maxX:
        maxX = x
    if y < minY:
        minY = y
    if y > maxY:
        maxY = y

# Go over the points in the image; if they are outside of the enclosing rectangle put zero,
# otherwise check if they are inside the polygon or not
cropedImage = np.zeros_like(I)
for y in range(0, I.shape[0]):
    for x in range(0, I.shape[1]):
        if x < minX or x > maxX or y < minY or y > maxY:
            continue
        if cv2.pointPolygonTest(np.asarray(polygon, dtype=np.int32), (x, y), False) >= 0:
            cropedImage[y, x, 0] = I[y, x, 0]
            cropedImage[y, x, 1] = I[y, x, 1]
            cropedImage[y, x, 2] = I[y, x, 2]

# Now we can crop again just the enclosing rectangle
finalImage = cropedImage[minY:maxY, minX:maxX]
cv2.imwrite('finalImage.png', finalImage)
The final image:
If you want to stretch the cropped image:
# Now stretch the polygon to a rectangle. We take the corners of the final image as the target points
polygonStrecth = np.float32([[0,0],[finalImage.shape[1],0],[finalImage.shape[1],finalImage.shape[0]],[0,finalImage.shape[0]]])

# Convert the polygon coordinates to the new rectangle
polygonForTransform = np.zeros_like(polygonStrecth)
i = 0
for point in polygon[0]:
    x = point[0]
    y = point[1]
    newX = x - minX
    newY = y - minY
    polygonForTransform[i] = [newX, newY]
    i += 1

# Find the perspective transform
M = cv2.getPerspectiveTransform(np.asarray(polygonForTransform).astype(np.float32), np.asarray(polygonStrecth).astype(np.float32))

# Warp one image to the other
warpedImage = cv2.warpPerspective(finalImage, M, (finalImage.shape[1], finalImage.shape[0]))
cv2.imshow('a', warpedImage)
cv2.waitKey(0)
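As a side note, the per-pixel pointPolygonTest loop above can be replaced by a filled-polygon mask, which is much faster. A minimal sketch, assuming the same I and polygon variables as above:
# Build a mask of the polygon and zero out everything outside it
mask = np.zeros(I.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, np.asarray(polygon, dtype=np.int32), 255)
masked = cv2.bitwise_and(I, I, mask=mask)

# Crop to the bounding rectangle of the polygon (equivalent to finalImage above)
x, y, w, h = cv2.boundingRect(np.asarray(polygon[0], dtype=np.int32))
finalImage = masked[y:y+h, x:x+w]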
Looks like the coordinates you mentioned aren't accurate. So, tweaking the coordinates to match the shape and using the Cloudinary distort function complemented by custom-shapes cropping, here's the result:
http://res.cloudinary.com/demo/image/fetch/e_distort:20:60:450:60:340:410:140:410,l_sample,fl_cutter,g_north_west/e_trim/http://i.stack.imgur.com/oGSKW.png
If you'd like to play around with these Cloudinary functions, here are some samples:
http://cloudinary.com/blog/how_to_dynamically_distort_images_to_fit_your_graphic_design
http://cloudinary.com/cookbook/custom_shapes_cropping
I have collected images of S9 phones, added labels with labelImg, and trained for a few hours in Google Colab. I had minimal loss, so I thought it was enough. I only selected the rectangles where the phone is displayed and nothing else. What I don't understand is that it draws a lot of rectangles on the phone. I only want 1 or 2 rectangles drawn on the phone itself. Did I do something wrong?
def detect_img(self, img):
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416,416), (0,0,0), True, crop=False)
    input_img = self.net.setInput(blob)
    output = self.net.forward(self.output)

    height, width, channel = img.shape
    boxes = []
    trusts = []
    class_ids = []

    for out in output:
        for detect in out:
            total_scores = detect[5:]
            class_id = np.argmax(total_scores)
            trust_factor = total_scores[class_id]
            if trust_factor > 0.5:
                x_center = int(detect[0] * width)
                y_center = int(detect[1] * height)
                w = int(detect[2] * width)
                h = int(detect[3] * height)
                x = int(x_center - w / 2)
                y = int(y_center - h / 2)

                boxes.append([x, y, w, h])
                trusts.append(float(trust_factor))
                class_ids.append(class_id)

                cv2.rectangle(img, (x, y), (x + w, y + h), (0,255,0), 2)
When I set the trust_factor to 0.8, a lot of the rectangles are gone, but there are still rectangles outside the phone, even though I only selected the phone itself in labelImg and not the background.
You can use non-maximum suppression (NMS), which removes rectangles that overlap a higher-scoring box. Here is code for NMS:
def NMS(boxes, overlapThresh=0.4):
    # Return an empty list, if no boxes given
    if len(boxes) == 0:
        return []

    boxes = np.asarray(boxes)
    x1 = boxes[:, 0]  # x coordinate of the top-left corner
    y1 = boxes[:, 1]  # y coordinate of the top-left corner
    x2 = boxes[:, 2]  # x coordinate of the bottom-right corner
    y2 = boxes[:, 3]  # y coordinate of the bottom-right corner

    # Compute the area of the bounding boxes
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)  # We add 1, because the pixel at the start as well as at the end counts

    # The indices of all boxes at start. We will remove redundant indices one by one.
    indices = np.arange(len(x1))
    for i, box in enumerate(boxes):
        # Create temporary indices
        temp_indices = indices[indices != i]
        # Find out the coordinates of the intersection box
        xx1 = np.maximum(box[0], boxes[temp_indices, 0])
        yy1 = np.maximum(box[1], boxes[temp_indices, 1])
        xx2 = np.minimum(box[2], boxes[temp_indices, 2])
        yy2 = np.minimum(box[3], boxes[temp_indices, 3])
        # Find out the width and the height of the intersection box
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        # Compute the ratio of overlap
        overlap = (w * h) / areas[temp_indices]
        # If the actual bounding box has an overlap bigger than the threshold with any other box, remove its index
        if np.any(overlap > overlapThresh):
            indices = indices[indices != i]

    # Return only the boxes at the remaining indices
    return boxes[indices].astype(int)
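A possible usage sketch (untested) with the boxes and trusts lists from the detect_img code in the question: convert the [x, y, w, h] boxes to corner format before calling NMS, or use OpenCV's built-in cv2.dnn.NMSBoxes, which takes [x, y, w, h] boxes and scores directly:
# Boxes from detect_img are [x, y, w, h]; convert to [x1, y1, x2, y2] corners
corner_boxes = np.array([[x, y, x + w, y + h] for (x, y, w, h) in boxes])
kept = NMS(corner_boxes, overlapThresh=0.4)

# Alternative: OpenCV's built-in NMS returns the indices of the boxes to keep
keep_idx = cv2.dnn.NMSBoxes(boxes, trusts, 0.5, 0.4)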
I have binarized images like this one:
I need to determine the center and radius of the inner solid disk. As you can see, it is surrounded by a textured area which touches it, so that simple connected component detection doesn't work. Anyway, there is a void margin on a large part of the perimeter.
A possible cure could be by eroding until all the texture disappears or disconnects from the disk, but this can be time consuming and the number of iterations is unsure. (In addition, in some unlucky cases there are tiny holes in the disk, which will grow with erosion.)
Any better suggestion to address this problem in a robust and fast way? (I tagged OpenCV, but this is not mandatory; what matters is the approach.)
You can:
Invert the image
Find the largest axis-aligned rectangle containing only zeros (I used my C++ code from this answer). The algorithm is pretty fast.
Get the center and radius of the circle from the rectangle
Code:
#include <opencv2\opencv.hpp>
using namespace std;
using namespace cv;
// https://stackoverflow.com/a/30418912/5008845
cv::Rect findMaxRect(const cv::Mat1b& src)
{
    cv::Mat1f W(src.rows, src.cols, float(0));
    cv::Mat1f H(src.rows, src.cols, float(0));

    cv::Rect maxRect(0, 0, 0, 0);
    float maxArea = 0.f;

    for (int r = 0; r < src.rows; ++r)
    {
        for (int c = 0; c < src.cols; ++c)
        {
            if (src(r, c) == 0)
            {
                H(r, c) = 1.f + ((r > 0) ? H(r - 1, c) : 0);
                W(r, c) = 1.f + ((c > 0) ? W(r, c - 1) : 0);
            }

            float minw = W(r, c);
            for (int h = 0; h < H(r, c); ++h)
            {
                minw = std::min(minw, W(r - h, c));
                float area = (h + 1) * minw;
                if (area > maxArea)
                {
                    maxArea = area;
                    maxRect = cv::Rect(cv::Point(c - minw + 1, r - h), cv::Point(c + 1, r + 1));
                }
            }
        }
    }

    return maxRect;
}

int main()
{
    cv::Mat1b img = cv::imread("path/to/img", cv::IMREAD_GRAYSCALE);

    // Binarize the image
    img = img > 127;

    // Largest rectangle of zeros in the inverted image
    cv::Rect r = findMaxRect(~img);

    // Center of the rectangle; half of its diagonal as the radius
    cv::Point center(std::round(r.x + r.width / 2.f), std::round(r.y + r.height / 2.f));
    int radius = std::sqrt(r.width * r.width + r.height * r.height) / 2;

    cv::Mat3b out;
    cv::cvtColor(img, out, cv::COLOR_GRAY2BGR);
    cv::rectangle(out, r, cv::Scalar(0, 255, 0));
    cv::circle(out, center, radius, cv::Scalar(0, 0, 255));

    return 0;
}
My method is to use morph-open, findContours, and minEnclosingCircle, as follows:
#!/usr/bin/python3
# 2018/11/29 20:03
import cv2
fname = "test.png"
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, threshed = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
morphed = cv2.morphologyEx(threshed, cv2.MORPH_OPEN, kernel, iterations = 3)
cnts = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = max(cnts, key=cv2.contourArea)
pt, r = cv2.minEnclosingCircle(cnt)
pt = (int(pt[0]), int(pt[1]))
r = int(r)
print("center: {}\nradius: {}".format(pt, r))
The final result:
center: (184, 170)
radius: 103
My second attempt on this case. This time I am using a morphological opening operation (erode then dilate) to weaken the noise and maintain the signal. This is followed by a simple threshold and a connected-component analysis. I hope this code can run faster.
Using this method, I can find the centroid with subpixel accuracy:
('center : ', (184.12244328746746, 170.59771290442544))
Radius is derived from the area of the circle.
('radius : ', 101.34704439389715)
Here is the full code
import cv2
import numpy as np
# load image in grayscale
image = cv2.imread('radius.png',0)
r,c = image.shape
# remove noise
blured = cv2.blur(image,(5,5))
# Morphological opening (erode then dilate)
morph = cv2.erode(blured,None,iterations = 3)
morph = cv2.dilate(morph,None,iterations = 3)
cv2.imshow("morph",morph)
cv2.waitKey(0)
# Get the strong signal
th, th_img = cv2.threshold(morph,200,255,cv2.THRESH_BINARY)
cv2.imshow("th_img",th_img)
cv2.waitKey(0)
# Get connected components
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(th_img)
print(num_labels)
print(stats)
# display labels
labels_disp = np.uint8(255*labels/np.max(labels))
cv2.imshow("labels",labels_disp)
cv2.waitKey(0)
# Find the label of the component at the image center
cnt_label = labels[r//2, c//2]
# Find circle center and radius
# Radius calculated from the blob area (a bounding-box based estimate is left commented out)
area = stats[cnt_label][4]
radius = np.sqrt(area / np.pi)  # (stats[cnt_label][2]/2 + stats[cnt_label][3]/2)/2
cnt_pt = ((centroids[cnt_label][0]),(centroids[cnt_label][1]))
print('center : ',cnt_pt)
print('radius : ',radius)
# Display final result
edges_color = cv2.cvtColor(image,cv2.COLOR_GRAY2BGR)
cv2.circle(edges_color,(int(cnt_pt[0]),int(cnt_pt[1])),int(radius),(0,0,255),1)
cv2.circle(edges_color,(int(cnt_pt[0]),int(cnt_pt[1])),5,(0,0,255),-1)
x1 = stats[cnt_label][0]
y1 = stats[cnt_label][1]
w1 = stats[cnt_label][2]
h1 = stats[cnt_label][3]
cv2.rectangle(edges_color,(x1,y1),(x1+w1,y1+h1),(0,255,0))
cv2.imshow("edges_color",edges_color)
cv2.waitKey(0)
Here is an example of using Hough circles. It can work if you set the min and max radius to a proper range.
import cv2
import numpy as np
# load image in grayscale
image = cv2.imread('radius.png',0)
r , c = image.shape
# remove noise
dst = cv2.blur(image,(5,5))
# Morphological opening (erode then dilate)
dst = cv2.erode(dst,None,iterations = 3)
dst = cv2.dilate(dst,None,iterations = 3)
# Find Hough Circle
circles = cv2.HoughCircles(dst
,cv2.HOUGH_GRADIENT
,2
,minDist = 0.5* r
,param2 = 150
,minRadius = int(0.5 * r / 2.0)
,maxRadius = int(0.75 * r / 2.0)
)
# Display
edges_color = cv2.cvtColor(image,cv2.COLOR_GRAY2BGR)
for i in circles[0]:
    print(i)
    cv2.circle(edges_color, (int(i[0]), int(i[1])), int(i[2]), (0,0,255), 1)
cv2.imshow("edges_color",edges_color)
cv2.waitKey(0)
Here is the result
[185. 167. 103.6]
Have you tried something along the lines of the Circle Hough Transform?
I see that OpenCv has its own implementation. Some preprocessing (median filtering?) might be necessary here, though.
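A rough sketch of that suggestion (median filtering followed by cv2.HoughCircles); the parameter values are placeholders to tune, and the file name is reused from the other answers:
import cv2

img = cv2.imread('radius.png', 0)   # file name taken from the other answers; adjust as needed
blur = cv2.medianBlur(img, 5)       # median filtering as preprocessing
h, w = img.shape
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2, minDist=h,
                           param1=100, param2=100,
                           minRadius=h // 4, maxRadius=h // 2)
if circles is not None:
    x, y, r = circles[0][0]
    print('center: ({}, {}), radius: {}'.format(x, y, r))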
Here is a simple approach:
Erode the image (using a large, circular SE), then find the centroid of the result. This should be really close to the centroid of the central disk.
Compute the mean as a function of the radius of the original image, using the computed centroid as the center.
The output looks like this:
From here, determining the radius is quite simple.
Here is the code. I'm using PyDIP (we don't yet have a binary distribution, you'll need to download and build from sources):
import matplotlib.pyplot as pp
import PyDIP as dip
import numpy as np
img = dip.Image(pp.imread('/home/cris/tmp/FDvQm.png')[:,:,0])
b = dip.Erosion(img, 30)
c = dip.CenterOfMass(b)
rmean = dip.RadialMean(img, center=c)
pp.plot(rmean)
r = np.argmax(rmean < 0.5)
Here, r is 102, the radius in integer number of pixels; I'm sure it's possible to interpolate to improve precision. c is [184.02, 170.45].
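For readers without PyDIP, here is a rough OpenCV/NumPy sketch of the same idea (erode with a large circular structuring element, take the centroid, then threshold the radial mean); the structuring-element size is an assumption:
import cv2
import numpy as np

img = cv2.imread('FDvQm.png', 0).astype(np.float32) / 255.0  # same input image as above
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))  # large circular SE (size is a guess)
b = cv2.erode(img, se)

# Centroid (center of mass) of the eroded image; should be close to the disk center
yy, xx = np.indices(img.shape)
cy = (yy * b).sum() / b.sum()
cx = (xx * b).sum() / b.sum()

# Mean intensity as a function of distance from the centroid (radial mean)
dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2).astype(int)
rmean = np.bincount(dist.ravel(), weights=img.ravel()) / np.bincount(dist.ravel())

# First radius at which the radial mean drops below 0.5
r = np.argmax(rmean < 0.5)
print('center:', (cx, cy), 'radius:', r)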
Question: How can one position a polygon relative to one of its known vertex points?
In other words, how could I calculate where the auto-generated center of the polygon is relative to one of the known vertices (i.e. the ones used in the path)?
e.g. Imagine placing a specific shape on a map, which you make a polygon; you then want to position it on the map, however you can't do this accurately without knowing where its Corona-engine-created centre is. Extract from the API: "The local origin is at the center of the polygon and the anchor point is initialized to this local origin."
PS: Actually wondering if I should be using a line and appending points to effectively create a polygon, however perhaps you can't add a background color in that case(?)
The center calculated by corona is the center of the bounding box of the polygon.
I assume you have a table with all the points of your polygon stored like that:
local polygon = {x1,y1,x2,y2,...,xn,yn}
1) To find the bounding box of your original points, loop through all the points; the smallest x and smallest y values give you the coordinates of the top-left point, and the largest x and y values give the bottom-right point:
local minX = math.huge
local minY = math.huge
local maxX = -math.huge
local maxY = -math.huge

for i = 1, #polygon, 2 do
    local px = polygon[i]
    local py = polygon[i+1]
    if px > maxX then maxX = px end
    if py > maxY then maxY = py end
    if px < minX then minX = px end
    if py < minY then minY = py end
end
2) find the center of this bounding box, relative to its top-left corner:
local centerX = (maxX - minX)/2
local centerY = (maxY - minY)/2
3) add this relative center to the top-left point to get the absolute center:
local offsetX = centerX + minX
local offsetY = centerY + minY
4) add this offset to the corona polygon to place it in the same position as the original polygon.
It should work, but I have not tested it. Let me know.
I used a variant on the solution above as I couldn't get it to work. Essentially I found the minimum vertex coordinates in each dimension and added them to the polygon position. By comparing them to the contentBounds positions, I can compute the difference between where I thought the minimums would be and where they are.
local min_x = math.huge
local min_y = math.huge
for v = 1, #vertices, 2 do
    min_x = math.min(min_x, vertices[v])
    min_y = math.min(min_y, vertices[v + 1])
end

local poly = display.newPolygon(x, y, vertices)
local offset_x = (x + min_x) - poly.contentBounds.xMin
local offset_y = (y + min_y) - poly.contentBounds.yMin
poly:translate(offset_x, offset_y)
I'm detecting markers on images captured by my iPad. Because of that I want to calculate translations and rotations between them, and I want to change the perspective of these images so it looks like I'm capturing them from directly above the markers.
Right now I'm using
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));

Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));
Which gives me these results (look at the bottom-right corner for the result of warpPerspective):
As you can probably see, the result image contains the recognized marker in the top-left corner. My problem is that I want to capture the whole image (without cropping) so I can detect other markers in that image later.
How can I do that? Maybe I should use rotation/translation vectors from solvePnP function?
EDIT:
Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker sits in the top-left corner of the image.
For example when I've doubled size using:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
I've received these images:
Your code doesn't seem to be complete, so it is difficult to say what the problem is.
In any case the warped image might have completely different dimensions compared to the input image, so you will have to adjust the size parameter you are using for warpPerspective.
For example try to double the size:
cv::warpPerspective(*_image, *_undistortedImage, M, 2*cv::Size(_image->cols, _image->rows));
Edit:
To make sure the whole image is inside this image, all corners of your original image must be warped to be inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.
To make it more clear some sample code:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);
// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);
cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);
cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);
cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);
// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));
// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;
// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}
// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);
// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
I implemented littleimp's answer in Python in case anyone needs it. It should be noted that this will not work properly if the vanishing points of the polygons fall within the image.
import cv2
import numpy as np
from PIL import Image, ImageDraw
import math
def get_transformed_image(src, dst, img):
    # Calculate the transformation
    mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))

    # New source: image corners
    corners = np.array([
        [0, img.size[0]],
        [0, 0],
        [img.size[1], 0],
        [img.size[1], img.size[0]]
    ])

    # Transform the corners of the image
    corners_tranformed = cv2.perspectiveTransform(
        np.array([corners.astype("float32")]), mat)

    # These transformed corners seem completely wrong/inverted x-axis
    print(corners_tranformed)

    x_mn = math.ceil(min(corners_tranformed[0].T[0]))
    y_mn = math.ceil(min(corners_tranformed[0].T[1]))
    x_mx = math.ceil(max(corners_tranformed[0].T[0]))
    y_mx = math.ceil(max(corners_tranformed[0].T[1]))

    width = x_mx - x_mn
    height = y_mx - y_mn

    analogy = height / 1000
    n_height = height / analogy
    n_width = width / analogy

    dst2 = corners_tranformed
    dst2 -= np.array([x_mn, y_mn])
    dst2 = dst2 / analogy

    mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                       dst2.astype("float32"))

    img_warp = Image.fromarray((
        cv2.warpPerspective(np.array(img),
                            mat2,
                            (int(n_width),
                             int(n_height)))))

    return img_warp
# image coordinates
src= np.array([[ 789.72, 1187.35],
[ 789.72, 752.75],
[1277.35, 730.66],
[1277.35,1200.65]])
# known coordinates
dst=np.array([[0, 1000],
[0, 0],
[1092, 0],
[1092, 1000]])
# Create the image (the canvas size here is illustrative; it just needs to contain the src points)
img_width, img_height = 1500, 1300
image = Image.new('RGB', (img_width, img_height))
image.paste((200,200,200), [0, 0, image.size[0], image.size[1]])
draw = ImageDraw.Draw(image)
draw.line(((src[0][0],src[0][1]),(src[1][0],src[1][1]), (src[2][0],src[2][1]),(src[3][0],src[3][1]), (src[0][0],src[0][1])), width=4, fill="blue")
#image.show()
warped = get_transformed_image(src, dst, image)
warped.show()
There are two things you need to do:
Increase the size of the output of cv2.warpPerspective
Translate the warped source image such that the center of the warped source image matches with the center of cv2.warpPerspective output image
Here is how the code will look:
# center of source image, in (x, y, 1) homogeneous form
si_c = [image.shape[1] // 2, image.shape[0] // 2, 1]
# find where the center of the source image will be after warping, without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]

# warping output image size, as (width, height)
stitched_frame_size = (2 * image.shape[1], 2 * image.shape[0])
# center of warping output image
wf_c = (image.shape[1], image.shape[0])

# calculate offset for translation of warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]

# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])

# translate the homography matrix
translated_H = np.dot(T, H)

# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
I was wondering if someone could help me understand how to convert the top image to the bottom image.
The images are available at the following link. The top image is in Cartesian coordinates. The bottom image is the converted image in polar coordinates.
This is a basic rectangular to polar coordinate transform. To do the conversion, scan across the output image and treat x and y as if they were r and theta. Then use them as r and theta to look up the corresponding pixel in the input image. So something like this:
int x, y;
for (y = 0; y < outputHeight; y++)
{
    Pixel* outputPixel = outputRowStart (y); // <- get a pointer to the start of the output row

    for (x = 0; x < outputWidth; x++)
    {
        float r = y;
        float theta = 2.0 * M_PI * x / outputWidth;
        float newX = r * cos (theta);
        float newY = r * sin (theta);

        *outputPixel = getInputPixel ( newX, newY ); // <- Should probably do at least bilinear resampling in this function
        outputPixel++;
    }
}
Note that you may want to handle wrapping depending on what you're trying to achieve. The theta value wraps at 2pi.
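If OpenCV is an option, cv2.warpPolar performs essentially this mapping. A minimal sketch (the input file name is an assumption, and the axes of the result may need transposing to match the desired orientation):
import cv2

img = cv2.imread('cartesian.png')        # hypothetical input image
h, w = img.shape[:2]
center = (w / 2, h / 2)                  # assume the polar origin is the image center
max_radius = min(w, h) / 2

# Map the image around 'center' into (radius, angle) space
polar = cv2.warpPolar(img, (w, h), center, max_radius,
                      cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
cv2.imwrite('polar.png', polar)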