How do I get my plug-in to work with BIMP?

This plug-in works fine when I run it by itself, but when I add it as a procedure to BIMP, it only outputs the image at 412x316, without the border that should pad it to 640x360.
from gimpfu import *

def resize_canvas(image, drawable):
    # Resize the image
    pdb.gimp_image_scale(image, 412, 316)
    # Get the new width and height of the image
    width, height = pdb.gimp_image_width(image), pdb.gimp_image_height(image)
    # Change the canvas size
    pdb.gimp_image_resize(image, 640, 360, 0, 0)
    # Calculate the x and y coordinates to center the image
    x = (640 - width) / 2
    y = (360 - height) / 2
    # Center the image on the canvas
    pdb.gimp_layer_set_offsets(drawable, x, y)

register(
    "python_fu_resize_canvas",
    "Resize and center an image",
    "Resize an image and center it on a canvas",
    "Name",
    "Name",
    "2023",
    "Resize and Center",
    "",
    [
        (PF_IMAGE, "image", "Input image", None),
        (PF_DRAWABLE, "drawable", "Input drawable", None)
    ],
    [],
    resize_canvas,
    menu="<Image>/Filters/Misc"
)

main()
I'm trying to add a transparent border around the 412x316 image so that the final image is 640x360.
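A hedged sketch of one thing worth trying, using only standard GIMP PDB calls (the helper name resize_canvas_bimp is purely illustrative, and the assumption that BIMP keeps only the layer's own extent is unverified): give the layer an alpha channel and grow it to cover the whole 640x360 canvas after centering, so the transparent border is part of the saved layer.

def resize_canvas_bimp(image, drawable):
    pdb.gimp_image_scale(image, 412, 316)
    pdb.gimp_image_resize(image, 640, 360, 0, 0)
    # Re-fetch the active layer in case the drawable BIMP passed in is stale
    layer = pdb.gimp_image_get_active_layer(image)
    # Give the layer an alpha channel so the added border can be transparent
    pdb.gimp_layer_add_alpha(layer)
    # Center the layer on the enlarged canvas
    pdb.gimp_layer_set_offsets(layer, (640 - 412) // 2, (360 - 316) // 2)
    # Grow the layer boundary to cover the whole 640x360 canvas
    pdb.gimp_layer_resize_to_image_size(layer)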

Related

How to calculate dimension of image by knowing pixels and distance between camera and object?

I have written code in opencv-python for calculating the number of pixels in the height and width of an object.
Let's say:
height = 567 pixels
width = 324 pixels
I also have the KNOWN_FOCAL_LENGTH of the camera and the distance between the original object and the camera.
# Getting the biggest contour
cnt = max(contours, key = cv2.contourArea)
cv2.drawContours(image, cnt, -1, (0, 255, 0), 3)
peri = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.005* peri, True)
# get the bounding rect
x, y, w, h = cv2.boundingRect(approx)
print("Width in pixels : {}, Height in pixels : {}".format(w,h))
# draw a green rectangle to visualize the bounding rect
cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
WidthData = "Width : {} mm".format(round(w/scaling_factor,2))
HeightData = "Height : {} mm".format(round(h/scaling_factor,2))
textData = WidthData + ", " + HeightData
cv2.putText(image, textData, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 0), 2)
print(WidthData)
print(HeightData)
cv2.imshow('Result', image)
How can I calculate the factor that converts number_of_pixels to the original_length of the object? How do I calculate scaling_factor?
You need a way to convert a measurement in pixels to one in physical units. Possible ways to do that, in order of accuracy:
Look up the physical dimensions of the camera sensor, e.g. from the spec sheet of the camera or, if you are lucky, from the metadata stored in the image along with the pixels (EXIF header); a sketch of this approach follows the list.
Place an object of known size in the scene.
Directly scale the focal length from pixels to mm as given by the lens.
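As an illustration of the first approach: by similar triangles in the pinhole model, real_size = number_of_pixels * distance / focal_length_in_pixels, where the focal length in pixels comes from the sensor size. A minimal sketch, with hypothetical values for the sensor height and the other constants (replace them with your camera's actual numbers):

# Hypothetical values -- substitute your camera's real numbers
KNOWN_FOCAL_LENGTH_MM = 4.0   # focal length from the lens spec (assumption)
SENSOR_HEIGHT_MM = 4.8        # sensor height from the spec sheet (assumption)
IMAGE_HEIGHT_PX = 3000        # pixel height of the captured image (assumption)
DISTANCE_MM = 500.0           # camera-to-object distance (assumption)

# Express the focal length in pixels
focal_length_px = KNOWN_FOCAL_LENGTH_MM * IMAGE_HEIGHT_PX / SENSOR_HEIGHT_MM

# scaling_factor as used in the question: pixels per millimetre
scaling_factor = focal_length_px / DISTANCE_MM

# Similar triangles: real size = pixels * distance / focal_length_px
height_mm = 567 / scaling_factor
width_mm = 324 / scaling_factor
print("Height : {} mm, Width : {} mm".format(round(height_mm, 2), round(width_mm, 2)))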

Opencv find the coordinates of a roi image

I have this image and need the coordinates of the starting point and ending point of the head (down to the neck).
I use the code below to crop the image, but I get the following error:
import cv2
img = cv2.imread("/Users/pr/images/dog.jpg")
print img.shape
crop_img = img[400:500, 500:400] # Crop from x, y, w, h -> 100, 200, 300, 400
# NOTE: its img[y: y + h, x: x + w] and *not* img[x: x + w, y: y + h]
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)
Error:-
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /Users/travis/build/skvark/opencv-python/opencv/modules/highgui/src/window.cpp, line 325
Question:-
How can I find the coordinates of region of interest items?
If you want to pick the rectangle x = 100, y = 200, w = 300, h = 400, you should use:
crop_img = img[200:600, 100:400]
and if you want to cut out the dog's head you need:
crop_img = img[0:230, 250:550]
If you are trying to find the pixel coordinates of the image to use in img[], you can simply use MS Paint to find the pixel location. For example, in img[y1:y2, x1:x2], to find the values of x1, x2, y1 and y2 you can open the image in MS Paint and place your cursor on the location whose coordinates you need. Paint will display the coordinates of that pixel at the bottom left corner of your MS Paint window. Consider this location as (x, y).
[Screenshot of using MS Paint to get a pixel location.]
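If you would rather stay inside OpenCV instead of MS Paint, a minimal sketch (the window name "pick" and the callback name are arbitrary) that prints the coordinates of every pixel you click:

import cv2

img = cv2.imread("/Users/pr/images/dog.jpg")

def on_click(event, x, y, flags, param):
    # Print the (x, y) position of every left click
    if event == cv2.EVENT_LBUTTONDOWN:
        print("x = {}, y = {}".format(x, y))

cv2.namedWindow("pick")
cv2.setMouseCallback("pick", on_click)
cv2.imshow("pick", img)
cv2.waitKey(0)
cv2.destroyAllWindows()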

Read circular text using OCR

I want to read text on the object, but the OCR program can't recognize it. When I give it a small part, it can recognize it. I have to transform the circular text to linear text. How can I do this? Thanks.
You can transform the image from the Cartesian coordinate system to the polar coordinate system to prepare a circular-path text image for the OCR program. The function logPolar() can help.
Here are some steps to prepare circle path text image:
Find the circles' centers using HoughCircles().
Take the mean of the centers and apply a small offset to get the true center.
(Optional) Crop a square region of the image around the center.
Apply logPolar(), then rotate the result if necessary.
After detecting the circles, taking the mean of the centers, and applying the offset:
The cropped image:
After logPolar() and rotate():
My Python3 / OpenCV 3.3 code is presented here; maybe it helps.
#!/usr/bin/python3
# 2017.10.10 12:44:37 CST
# 2017.10.10 14:08:57 CST
import cv2
import numpy as np

## (1) Read and resize the original image (too big)
img = cv2.imread("circle.png")
H, W = img.shape[:2]
img = cv2.resize(img, (W//4, H//4))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

## (2) Detect circles
circles = cv2.HoughCircles(gray, method=cv2.HOUGH_GRADIENT, dp=1, minDist=3, circles=None, param1=200, param2=100, minRadius=200, maxRadius=0)

## Make a canvas to draw on
canvas = img.copy()

## (3) Get the mean of the centers and apply the offset
circles = np.int0(np.array(circles))
x, y, r = 0, 0, 0
for ptx, pty, radius in circles[0]:
    cv2.circle(canvas, (ptx, pty), radius, (0, 255, 0), 1, 16)
    x += ptx
    y += pty
    r += radius
cnt = len(circles[0])
x = x//cnt
y = y//cnt
r = r//cnt
x += 5
y -= 7

## (4) Draw concentric circles in red for visualization
for r in range(100, r, 20):
    cv2.circle(canvas, (x, y), r, (0, 0, 255), 3, cv2.LINE_AA)
cv2.circle(canvas, (x, y), 3, (0, 0, 255), -1)

## (5) Crop the image around the center
dr = r + 20
cropped = img[y-dr:y+dr+1, x-dr:x+dr+1].copy()

## (6) logPolar and rotate
polar = cv2.logPolar(cropped, (dr, dr), 80, cv2.WARP_FILL_OUTLIERS)
rotated = cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)

## (7) Display the result
cv2.imshow("Canvas", canvas)
cv2.imshow("cropped", cropped)
cv2.imshow("polar", polar)
cv2.imshow("rotated", rotated)
cv2.waitKey(); cv2.destroyAllWindows()
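From there the rotated strip can be handed to an OCR engine; for example, assuming pytesseract is installed (this step is not part of the original answer):

import pytesseract
# Tesseract expects RGB ordering, while OpenCV images are BGR
text = pytesseract.image_to_string(cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB))
print(text)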

Crop half of an image in OpenCV

How can I crop an image and only keep the bottom half of it?
I tried:
Mat cropped frame = frame(Rect(frame.cols/2, 0, frame.cols, frame.rows/2));
but it gives me an error.
I also tried:
double min, max;
Point min_loc, max_loc;
minMaxLoc(frame, &min, &max, &min_loc, &max_loc);
int x = min_loc.x + (max_loc.x - min_loc.x) / 2;
Mat croppedframe = = frame(Rect(x, min_loc.y, frame.size().width, frame.size().height / 2));
but it doesn't work either.
Here's the Python version for any beginners out there.
def crop_bottom_half(image):
    cropped_img = image[image.shape[0] // 2:image.shape[0]]
    return cropped_img
The Rect function arguments are Rect(x, y, width, height). In OpenCV, the data are organized with the first pixel being in the upper left corner, so your rect should be:
Mat croppedFrame = frame(Rect(0, frame.rows/2, frame.cols, frame.rows/2));
To quickly copy paste:
image = YOURIMAGEHERE #note: image needs to be in the opencv format
height, width, channels = image.shape
croppedImage = image[int(height/2):height, 0:width] #this line crops
Explanation:
In OpenCV, to select a part of an image you simply select the start and end pixels of the region. The meaning is:
image[yMin:yMax, xMin:xMax]
In human speak: yMin = top | yMax = bottom | xMin = left | xMax = right |
":" means from the value on the left of the : to the value on the right.
To keep the bottom half we simply do [int(yMax/2):yMax, xMin:xMax], which means from half the image height down to the bottom; x runs from 0 to the max width.
Keep in mind that OpenCV starts from the top left of an image, and increasing the Y value means moving downwards.
To get the width and height of an image you can use image.shape, which gives 3 values: yMax, xMax, and the number of channels (which you probably won't use). To get just the height and width you can also do:
height, width = image.shape[0:2]
This is also known as getting the Region of Interest, or ROI.
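One subtlety worth a quick sketch (variable names here are arbitrary): these slices are NumPy views, so writing into a cropped region also changes the original image unless you take a copy.

import numpy as np

image = np.zeros((360, 640, 3), dtype=np.uint8)  # stand-in for a real frame
height, width = image.shape[0:2]

bottom_half = image[height // 2:height, 0:width]  # a view into image
bottom_half_copy = bottom_half.copy()             # an independent copy

bottom_half[:] = 255     # this also turns the bottom half of image white
bottom_half_copy[:] = 0  # this leaves image untouched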

OpenCV : wrapPerspective on whole image

I'm detecting markers on images captured by my iPad. Based on that I want to calculate the translations and rotations between them, and I want to change the perspective of these images so it looks like I'm capturing them directly above the markers.
Right now I'm using
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));
Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));
Which gives me these results (look at the bottom-right corner for the result of warpPerspective):
As you can probably see, the result image contains the recognized marker in the top-left corner. My problem is that I want to capture the whole image (without cropping) so I can detect other markers on that image later.
How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?
EDIT:
Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker sits in the top-left corner of the image.
For example, when I doubled the size using:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
I received these images:
Your code doesn't seem to be complete, so it is difficult to say what the problem is.
In any case, the warped image might have completely different dimensions compared to the input image, so you will have to adjust the size parameter you are using for warpPerspective.
For example try to double the size:
cv::warpPerspective(*_image, *_undistortedImage, M, 2*cv::Size(_image->cols, _image->rows));
Edit:
To make sure the whole image is inside this image, all corners of your original image must be warped to be inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.
To make it more clear some sample code:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);
// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);
cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);
cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);
cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);
// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));
// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;
// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}
// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);
// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
I implemented littleimp's answer in Python in case anyone needs it. Note that this will not work properly if the vanishing points of the polygons fall within the image.
import cv2
import numpy as np
from PIL import Image, ImageDraw
import math

def get_transformed_image(src, dst, img):
    # Calculate the transformation
    mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))
    # New source: image corners (note that PIL's img.size is (width, height))
    corners = np.array([
        [0, img.size[0]],
        [0, 0],
        [img.size[1], 0],
        [img.size[1], img.size[0]]
    ])
    # Transform the corners of the image
    corners_transformed = cv2.perspectiveTransform(
        np.array([corners.astype("float32")]), mat)
    # These transformed corners seem completely wrong / inverted x-axis
    print(corners_transformed)
    x_mn = math.ceil(min(corners_transformed[0].T[0]))
    y_mn = math.ceil(min(corners_transformed[0].T[1]))
    x_mx = math.ceil(max(corners_transformed[0].T[0]))
    y_mx = math.ceil(max(corners_transformed[0].T[1]))
    width = x_mx - x_mn
    height = y_mx - y_mn
    analogy = height / 1000
    n_height = height / analogy
    n_width = width / analogy
    dst2 = corners_transformed
    dst2 -= np.array([x_mn, y_mn])
    dst2 = dst2 / analogy
    mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                       dst2.astype("float32"))
    img_warp = Image.fromarray(
        cv2.warpPerspective(np.array(img),
                            mat2,
                            (int(n_width), int(n_height))))
    return img_warp

# Image coordinates
src = np.array([[ 789.72, 1187.35],
                [ 789.72,  752.75],
                [1277.35,  730.66],
                [1277.35, 1200.65]])
# Known coordinates
dst = np.array([[   0, 1000],
                [   0,    0],
                [1092,    0],
                [1092, 1000]])

# Create the image (the canvas size was undefined in the original;
# any size large enough to contain the src points works)
img_width, img_height = 1500, 1300
image = Image.new('RGB', (img_width, img_height))
image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
draw = ImageDraw.Draw(image)
draw.line(((src[0][0], src[0][1]), (src[1][0], src[1][1]),
           (src[2][0], src[2][1]), (src[3][0], src[3][1]),
           (src[0][0], src[0][1])), width=4, fill="blue")
#image.show()
warped = get_transformed_image(src, dst, image)
warped.show()
There are two things you need to do:
Increase the size of the output of cv2.warpPerspective.
Translate the warped source image so that its center matches the center of the cv2.warpPerspective output image.
Here is how the code could look (image and the homography H are assumed to come from the surrounding context):
# Center of the source image in homogeneous (x, y, 1) coordinates
h, w = image.shape[:2]
si_c = [w // 2, h // 2, 1]
# Find where the center of the source image lands after warping,
# without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]
# Warping output image size, as (width, height)
stitched_frame_size = (2 * w, 2 * h)
# Center of the warping output image
wf_c = (w, h)
# Calculate the offset for translating the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# Translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# Translate the homography matrix
translated_H = np.dot(T, H)
# Warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
