Is there a way to adjust xy coordinates to fit within a bounding box in Prawn PDF if they are larger than the height of the box?
I'm using the gem 'signature-pad-rails' to capture signatures which then stores the following:
[{"lx":98,"ly":23,"mx":98,"my":22},{"lx":98,"ly":21,"mx":98,"my":23},{"lx":98,"ly":18,"mx":98,"my":21}, ... {"lx":405,"ly":68,"mx":403,"my":67},{"lx":406,"ly":69,"mx":405,"my":68}]
I have the following to show the signature in my PDF:
bounding_box([0, cursor], width: 540, height: 100) do
  stroke_bounds
  @witness_signature.each do |e|
    stroke { line [e["lx"], 100 - e["ly"]],
                  [e["mx"], 100 - e["my"]] }
  end
end
But the signature runs off the page in some cases, isn't centred, and just generally runs amok.
Your question is pretty vague, so I'm guessing what you mean.
To rescale a sequence of coordinates (x[i], y[i]), i = 1..n to fit in a given bounding box of size (width, height) with origin (0, 0) as in PostScript, first decide whether to preserve the aspect ratio of the original image. Fitting to a box won't generally do that. Since you probably don't want to distort the signature, let's say the answer is "yes."
When scaling an image into a box while preserving aspect ratio, either the x- or y-axis determines the scale factor unless the box happens to have exactly the image's aspect ratio. So the next step is to decide what to do with the "extra space" on the other axis. E.g. if the image is tall and thin compared to the bounding box, the extra space will be on the x-axis; if short and fat, it's on the y-axis.
Let's say center the image within the extra space; that seems appropriate for a signature.
Then here is pseudocode to re-scale the points to fit the box:
x_min = y_min = +infty, x_max = y_max = -infty
for i in 1 to n
    if x[i] < x_min, x_min = x[i]
    if x[i] > x_max, x_max = x[i]
    if y[i] < y_min, y_min = y[i]
    if y[i] > y_max, y_max = y[i]
end for
dx = x_max - x_min
dy = y_max - y_min
x_scale = width / dx
y_scale = height / dy
if x_scale < y_scale then
    // extra space is on the y-dimension
    scale = x_scale
    x_org = 0
    y_org = 0.5 * (height - dy * scale)  // equal top and bottom extra space
else
    // extra space is on the x-dimension
    scale = y_scale
    x_org = 0.5 * (width - dx * scale)   // equal left and right extra space
    y_org = 0
end
for i in 1 to n
    x[i] = x_org + scale * (x[i] - x_min)
    y[i] = y_org + scale * (y[i] - y_min)
end
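For concreteness, here is a minimal Python sketch of the same rescaling, assuming the points are dicts with "lx"/"ly"/"mx"/"my" keys as in the question; the names rescale_points, box_w and box_h are just illustrative, and the same logic ports directly to Ruby before drawing the lines in Prawn:

def rescale_points(points, box_w, box_h):
    """Fit the signature segments into a box_w x box_h box, preserving aspect ratio and centering."""
    xs = [p[k] for p in points for k in ("lx", "mx")]
    ys = [p[k] for p in points for k in ("ly", "my")]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    dx = max(x_max - x_min, 1e-9)  # guard against a degenerate (zero-extent) signature
    dy = max(y_max - y_min, 1e-9)
    scale = min(box_w / dx, box_h / dy)   # the tighter axis sets the scale factor
    x_org = 0.5 * (box_w - dx * scale)    # centre the leftover space on each axis
    y_org = 0.5 * (box_h - dy * scale)    # (the constraining axis has zero leftover)
    return [{
        "lx": x_org + scale * (p["lx"] - x_min),
        "ly": y_org + scale * (p["ly"] - y_min),
        "mx": x_org + scale * (p["mx"] - x_min),
        "my": y_org + scale * (p["my"] - y_min),
    } for p in points]

# Example: fit the captured signature into the 540 x 100 Prawn bounding box
# scaled = rescale_points(witness_signature, 540, 100)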
I have some code, largely taken from various sources linked at the bottom of this post, written in Python, that takes an image of shape [height, width] and some bounding boxes in the [x_min, y_min, x_max, y_max] format, both numpy arrays, and rotates the image and its bounding boxes counterclockwise. Since after rotation the bounding box becomes more of a "diamond shape", i.e. not axis aligned, I perform some calculations to make it axis aligned. The purpose of this code is to perform data augmentation when training an object detection neural network through the use of rotated data (where flipping horizontally or vertically is common). It seems flips at other angles are common for image classification without bounding boxes, but when there are boxes, the resources for how to flip the boxes as well as the images are relatively sparse/niche.
It seems when I input an angle of 45 degrees, that I get some less than "tight" bounding boxes, as in the four corners are not a very good annotation, whereas the original one was close to perfect.
The image shown below is the first image in the MS COCO 2014 object detection dataset (training image), and its first bounding box annotation. My code is as follows:
import math
import cv2
import numpy as np

# angle assumed to be in degrees
# bbs is a list of bounding boxes in x_min, y_min, x_max, y_max format
def rotateImageAndBoundingBoxes(im, bbs, angle):
    h, w = im.shape[0], im.shape[1]
    (cX, cY) = (w // 2, h // 2)  # original image center
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)  # 2 by 3 rotation matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # compute the dimensions of the rotated image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))

    # adjust the rotation matrix to take into account translation to the new centre
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY

    rotated_im = cv2.warpAffine(im, M, (nW, nH))

    rotated_bbs = []
    for bb in bbs:
        # get the four rotated corners of the bounding box
        vec1 = np.matmul(M, np.array([bb[0], bb[1], 1], dtype=np.float64))  # top left corner transformed
        vec2 = np.matmul(M, np.array([bb[2], bb[1], 1], dtype=np.float64))  # top right corner transformed
        vec3 = np.matmul(M, np.array([bb[0], bb[3], 1], dtype=np.float64))  # bottom left corner transformed
        vec4 = np.matmul(M, np.array([bb[2], bb[3], 1], dtype=np.float64))  # bottom right corner transformed
        x_vals = [vec1[0], vec2[0], vec3[0], vec4[0]]
        y_vals = [vec1[1], vec2[1], vec3[1], vec4[1]]
        x_min = math.ceil(np.min(x_vals))
        x_max = math.floor(np.max(x_vals))
        y_min = math.ceil(np.min(y_vals))
        y_max = math.floor(np.max(y_vals))
        bb = [x_min, y_min, x_max, y_max]
        rotated_bbs.append(bb)

    # my function to resize image and bbs to the original image size
    rotated_im, rotated_bbs = resizeImageAndBoxes(rotated_im, w, h, rotated_bbs)

    return rotated_im, rotated_bbs
The good bounding box looks like:
The not-so-good bounding box looks like:
I am trying to determine if this is an error of my code, or this is expected behavior? It seems like this problem is less apparent at integer multiples of pi/2 radians (90 degrees), but I would like to achieve tight bounding boxes at any angle of rotation. Any insights at all appreciated.
Sources:
[OpenCV documentation] https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gafbbc470ce83812914a70abfb604f4326
[Data Augmentation Discussion] https://blog.paperspace.com/data-augmentation-for-object-detection-rotation-and-shearing/
[Mathematics of rotation around an arbitrary point in 2 dimensions] https://math.stackexchange.com/questions/2093314/rotation-matrix-of-rotation-around-a-point-other-than-the-origin
It seems for the most part this is expected behavior as per the comments. I do have a kind of hacky solution to this problem, where you can write a function like
# assuming box coords = [x_min, y_min, x_max, y_max]
def cropBoxByPercentage(box_coords, image_width, image_height, x_percentage=0.05, y_percentage=0.05):
    box_xmin = box_coords[0]
    box_ymin = box_coords[1]
    box_xmax = box_coords[2]
    box_ymax = box_coords[3]
    box_width = box_xmax - box_xmin + 1
    box_height = box_ymax - box_ymin + 1
    dx = int(x_percentage * box_width)
    dy = int(y_percentage * box_height)
    box_xmin = max(0, box_xmin - dx)
    box_xmax = min(image_width - 1, box_xmax + dx)
    box_ymin = max(0, box_ymin - dy)
    box_ymax = min(image_height - 1, box_ymax + dy)
    return np.array([box_xmin, box_ymin, box_xmax, box_ymax])
Here x_percentage and y_percentage can be fixed values, or they could be computed using some heuristic.
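A minimal usage sketch for the function above, with a hypothetical box and image size just for illustration:

import numpy as np

# Hypothetical box inside a 640 x 480 image, adjusted by 5% per axis.
box = np.array([100, 120, 300, 360])   # [x_min, y_min, x_max, y_max]
adjusted = cropBoxByPercentage(box, image_width=640, image_height=480,
                               x_percentage=0.05, y_percentage=0.05)
print(adjusted)  # still in [x_min, y_min, x_max, y_max] format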
I have collected images of S9 phones, added labels with labelImg and trained for a few hours in Google Colab. I had minimal loss so I thought it was enough. I only selected the rectangles where the phone is displayed and nothing else. What I don't understand is, it draws a lot of rectangles on the phone. I only want 1 or 2 rectangles drawn on the phone itself. Did I do something wrong?
def detect_img(self, img):
    blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    self.net.setInput(blob)
    output = self.net.forward(self.output)

    height, width, channel = img.shape
    boxes = []
    trusts = []
    class_ids = []

    for out in output:
        for detect in out:
            total_scores = detect[5:]
            class_id = np.argmax(total_scores)
            trust_factor = total_scores[class_id]
            if trust_factor > 0.5:
                x_center = int(detect[0] * width)
                y_center = int(detect[1] * height)
                w = int(detect[2] * width)
                h = int(detect[3] * height)
                x = int(x_center - w / 2)
                y = int(y_center - h / 2)
                boxes.append([x, y, w, h])
                trusts.append(float(trust_factor))
                class_ids.append(class_id)
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
When I set the trust_factor to 0.8, a lot of the rectangles are gone, but there are still rectangles outside the phone, even though I only selected the phone itself in labelImg and not the background.
You can use "non maximum suppression" (NMS), which removes rectangles that have a lower score. Here is code for NMS:
import numpy as np

def NMS(boxes, overlapThresh=0.4):
    boxes = np.asarray(boxes)
    # Return an empty list, if no boxes given
    if len(boxes) == 0:
        return []

    x1 = boxes[:, 0]  # x coordinate of the top-left corner
    y1 = boxes[:, 1]  # y coordinate of the top-left corner
    x2 = boxes[:, 2]  # x coordinate of the bottom-right corner
    y2 = boxes[:, 3]  # y coordinate of the bottom-right corner

    # Compute the area of the bounding boxes
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)  # We add 1, because the pixel at the start as well as at the end counts

    # The indices of all boxes at start. We will remove redundant indices one by one.
    indices = np.arange(len(x1))
    for i, box in enumerate(boxes):
        # Create temporary indices
        temp_indices = indices[indices != i]
        # Find out the coordinates of the intersection box
        xx1 = np.maximum(box[0], boxes[temp_indices, 0])
        yy1 = np.maximum(box[1], boxes[temp_indices, 1])
        xx2 = np.minimum(box[2], boxes[temp_indices, 2])
        yy2 = np.minimum(box[3], boxes[temp_indices, 3])
        # Find out the width and the height of the intersection box
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        # compute the ratio of overlap
        overlap = (w * h) / areas[temp_indices]
        # if the current bounding box overlaps any other box by more than the threshold, remove its index
        if np.any(overlap > overlapThresh):
            indices = indices[indices != i]

    # return only the boxes at the remaining indices
    return boxes[indices].astype(int)
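A hedged usage sketch: the boxes collected in detect_img above are in [x, y, w, h] format, so convert them to corner format before calling the NMS function (the example values below are made up). OpenCV also provides cv2.dnn.NMSBoxes, which takes the confidence scores into account as well.

import numpy as np

# Hypothetical detections in [x, y, w, h] format, plus their scores.
boxes_xywh = [[120, 80, 200, 340], [125, 85, 198, 330], [400, 50, 60, 90]]
trusts = [0.91, 0.88, 0.55]

# Convert to [x1, y1, x2, y2] corners for the NMS function above.
boxes_xyxy = np.array([[x, y, x + w, y + h] for (x, y, w, h) in boxes_xywh])
kept = NMS(boxes_xyxy, overlapThresh=0.4)

# Alternative with OpenCV's built-in NMS (arguments: boxes, scores,
# score threshold, NMS/IoU threshold); it returns the indices of the boxes to keep.
# keep_idx = cv2.dnn.NMSBoxes(boxes_xywh, trusts, 0.5, 0.4)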
How can I crop an image and only keep the bottom half of it?
I tried:
Mat cropped frame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work either.
Here's the Python version for any beginners out there.
def crop_bottom_half(image):
    cropped_img = image[image.shape[0] // 2:image.shape[0]]
    return cropped_img
The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));
To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image, you can simply select the start and end pixels from the image. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right |
" : " means from the value on the left of the : to the value on the right
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image to the bottom. x goes from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image and increasing the Y value means downwards.
To get the width and height of an image you can do image.shape, which gives 3 values:
yMax, xMax, and the number of channels (which you probably won't use). To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest or ROI
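Putting it together, a minimal sketch assuming a hypothetical image file input.jpg:

import cv2

image = cv2.imread("input.jpg")       # hypothetical path; use your own image
height, width = image.shape[0:2]

# Keep only the bottom half: rows from height//2 downwards, all columns.
bottom_half = image[height // 2:height, 0:width]
cv2.imwrite("bottom_half.jpg", bottom_half)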
I am working on this Lua script and I need to be able to find the largest 16:9 rectangle within another rectangle that doesn't have a specific aspect ratio. So can you tell me how I can do that? You don't have to write Lua - pseudocode works too.
Thanks!
I have tried the following, and can confirm it won't work on outer rects with lower aspect ratios.
if wOut > hOut then
    wIn = wOut
    hIn = (wIn / 16) * 9
else
    hIn = hOut
    wIn = (hIn / 9) * 16
end
heightCount = originalHeight / 9;
widthCount = originalWidth / 16;

if (heightCount == 0 || widthCount == 0)
    throw "No 16/9 rectangle";

recCount = min(heightCount, widthCount);
targetHeight = recCount * 9;
targetWidth = recCount * 16;
So far, any rectangle with left = 0..(originalWidth - targetWidth) and top = 0..(originalHeight - targetHeight) and width = targetWidth and height = targetHeight should satisfy your requirements.
Well, your new rectangle can be described as:
h = w / (16/9)
w = h * (16/9)
Your new rectangle should then be based on the width of the outer rectangle, so:
h = w0 / (16/9)
w = w0
Depending on how Lua works with numbers, you might want to make sure it is using real division as opposed to integer division - last time I looked was 2001, and my memory is deteriorating faster than coffee gets cold, but I seem to remember all numbers being floats anyway...
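As a sketch of one way to cover both cases (outer rect wider or narrower than 16:9), here is some illustrative Python; the name fit_16_9 is made up and the same comparison ports directly to Lua:

def fit_16_9(w_out, h_out):
    """Return the size of the largest 16:9 rectangle that fits inside w_out x h_out."""
    if w_out / h_out >= 16 / 9:
        # Outer rect is at least as wide as 16:9, so height is the limiting dimension.
        h_in = h_out
        w_in = h_in * 16 / 9
    else:
        # Outer rect is narrower than 16:9, so width is the limiting dimension.
        w_in = w_out
        h_in = w_in * 9 / 16
    return w_in, h_in

# Example: a 1000 x 1000 outer rect gives a 1000 x 562.5 inner rect.
print(fit_16_9(1000, 1000))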
I got the contours of the source image. I have drawn 4 lines to approximate these contours:
from minimum width to minimum height of contour.
from minimum width to maximum height of contour.
from maximum width to minimum height of contour.
from maximum width to maximum height of contour.
I'd like to rotate this rectangle such that it is aligned to width (i.e. x-coordinate of image).
This may help you:
rect = cv2.minAreaRect(yourcontour)
angle = rect[2]

if angle < -45:
    angle = (90 + angle)
else:
    # otherwise, just take the inverse of the angle to make
    # it positive
    angle = -angle

# rotate the image to deskew it
(h, w) = img.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(img, M, (w, h),
                         flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
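For context, a minimal end-to-end sketch assuming a hypothetical binary image mask.png with the shape drawn white on black (OpenCV 4.x findContours signature):

import cv2
import numpy as np

img = cv2.imread("mask.png")  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Take the largest contour and its minimum-area (rotated) rectangle.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
yourcontour = max(contours, key=cv2.contourArea)
rect = cv2.minAreaRect(yourcontour)
angle = rect[2]
if angle < -45:
    angle = 90 + angle
else:
    angle = -angle

# Rotate the whole image so the rectangle is aligned with the x-axis.
(h, w) = img.shape[:2]
M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
rotated = cv2.warpAffine(img, M, (w, h),
                         flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
cv2.imwrite("deskewed.png", rotated)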