Vips - add text on top of image, after resize in Ruby - ruby-on-rails

I'm using Vips to resize images via Shrine, and I'm hoping it's possible to use the Vips library to merge a layer of text on top of the image.
ImageProcessing::Vips.source(image).resize_to_fill!(width, height)
This code works great; how can I add a layer of text after resize_to_fill?
The goal is to write 'Hello world' in white text, with a CSS-style text-shadow, in the center of the image.
I've tried writing something like this, but I'm only getting errors so far:
Vips\Image::text('Hello world!', ['font' => 'sans 120', 'width' => $image->width - 100]);

Your example looks like PHP -- in Ruby you'd write something like:
text = Vips::Image.text 'Hello world!', font: 'sans 120', width: image.width - 100
I made a demo for you:
#!/usr/bin/ruby
require "vips"
image = Vips::Image.new_from_file ARGV[0], access: :sequential
text_colour = [255, 128, 128]
shadow_colour = [128, 255, 128]
h_shadow = 2
v_shadow = 5
blur_radius = 10
# position to render the top-left of the text
text_left = 100
text_top = 200
# render some text ... this will make a one-band uchar image, with 0
# for black, 255 for white and intermediate values for anti-aliasing
text_mask = Vips::Image.text "Hello world!", dpi: 300
# we need to enlarge the text mask before we blur so that the soft edges
# don't get clipped
shadow_mask = text_mask.embed(blur_radius, blur_radius,
    text_mask.width + 2 * blur_radius,
    text_mask.height + 2 * blur_radius)
# gaussblur() takes sigma as a parameter -- approximate as radius / 2
shadow_mask = shadow_mask.gaussblur(blur_radius / 2) if blur_radius > 0.1
# make an RGB image the size of the text mask with each pixel set to the
# constant, then attach the text mask as the alpha
rgb = text_mask.new_from_image(text_colour).copy(interpretation: "srgb")
text = rgb.bandjoin(text_mask)
rgb = shadow_mask.new_from_image(shadow_colour).copy(interpretation: "srgb")
shadow = rgb.bandjoin(shadow_mask)
# composite the three layers together
image = image.composite([shadow, text], "over",
    x: [text_left + h_shadow, text_left],
    y: [text_top + v_shadow, text_top])
image.write_to_file ARGV[1]
Run like this:
$ ./try319.rb ~/pics/PNG_transparency_demonstration_1.png x.png
This produces the output image (the text and its blurred drop shadow composited over the input photo).

Related

Is it common for sift.compute to eliminate almost all key points generated by sift.detect?

I am trying to align multispectral drone images using OpenCV, and when I try to use homography to align the images I get an error stating I need at least 4 matching points. I went back and broke up the sift.detectAndCompute function into two separate lines and printed the number of detected points after each. After sift.detect I had over 100,000 points; when I ran sift.compute, it cut that number down to just two. Is there a way to make it less restrictive?
I've included my code below in case it helps.
import cv2
import numpy as np
import os
def final_align(file_paths):
    # Load the images from the file paths
    images = [cv2.imread(file_path) for file_path in file_paths]
    # Define the calibrated optical centers for each image
    calibrated_optical_centers = {
        "1": (834.056702, 643.766418),
        "2": (836.952271, 631.696899),
        "3": (832.183411, 642.485901),
        "4": (795.311279, 680.615906),
        "5": (807.490295, 685.338379),
    }
    # Create a list to store the aligned images
    aligned_images = []
    for file_path in file_paths:
        # Get the 5th from last character in the file path
        image_id = file_path[-5]
        # Get the calibrated optical center for the image
        calibrated_optical_center = calibrated_optical_centers[image_id]
        # Load the image
        image = cv2.imread(file_path)
        # Get the shape of the image
        height, width = image.shape[:2]
        # Calculate the center of the image
        center_x = width // 2
        center_y = height // 2
        # Calculate the shift needed to align the image
        shift_x = float(calibrated_optical_center[0] - center_x)
        shift_y = float(calibrated_optical_center[1] - center_y)
        # Create a translation matrix
        M = np.float32([[1, 0, shift_x], [0, 1, shift_y]])
        # Apply the translation to the image
        aligned_image = cv2.warpAffine(image, M, (width, height))
        # Add the aligned image to the list of aligned images
        aligned_images.append(aligned_image)
    return aligned_images
file_paths = [
"G:\Shared\Mulitband\\flights\\flight_1\\2611_DJI_0011.TIF",
"G:\Shared\Mulitband\\flights\\flight_1\\2612_DJI_0012.TIF",
"G:\Shared\Mulitband\\flights\\flight_1\\2613_DJI_0013.TIF",
"G:\Shared\Mulitband\\flights\\flight_1\\2614_DJI_0014.TIF",
"G:\Shared\Mulitband\\flights\\flight_1\\2615_DJI_0015.TIF",
]
# Call the final_align function
final_aligned_images = final_align(file_paths)
# Get the center of the first image
height, width = final_aligned_images[0].shape[:2]
center_y = height // 2
center_x = width // 2
# Specify the crop size in the y and x direction
crop_y = 1220
crop_x = 1520
#crop function
def crop_images(final_aligned_images, center_y, center_x, crop_y, crop_x):
    cropped_images = []
    for image in final_aligned_images:
        height, width = image.shape[:2]
        start_y = center_y - crop_y // 2
        end_y = center_y + crop_y // 2 + 1
        start_x = center_x - crop_x // 2
        end_x = center_x + crop_x // 2 + 1
        cropped_image = image[start_y:end_y, start_x:end_x]
        cropped_images.append(cropped_image)
    return cropped_images
cropped_images = crop_images(final_aligned_images, center_y, center_x, crop_y, crop_x)
#print(cropped_images)
for i, final_complete_image in enumerate(cropped_images):
    # Create the Results/aligned directory if it doesn't exist
    os.makedirs("G:\Shared\Mulitband\Results\\aligned", exist_ok=True)
    # Construct the file path for the aligned image
    final_aligned_image_path = "G:\Shared\Mulitband\Results\\aligned\\aligned_{}.tif".format(i)
    # Save the final aligned image to the file path
    cv2.imwrite(final_aligned_image_path, final_complete_image)
"""
# TEST OF FUNCTION
img = cropped_images[1]
# Call the sift_align function
sift = cv2.xfeatures2d.SIFT_create()
kp = sift.detect(cropped_images[1], None)
img=cv2.drawKeypoints(cropped_images[1] ,
kp ,
img,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('G:\Shared\Mulitband\Results\\aligned\image-with-keypoints.jpg', img)"""
#Create the SIFT Function
def sift_align(cropped_images):
    # Create the SIFT detector and descriptor
    sift = cv2.SIFT_create()
    # Create a list to store the aligned images
    aligned_images = []
    # Choose the first image as the reference image
    reference_image = cropped_images[0]
    # reference_image = cv2.convertScaleAbs(reference_image, alpha=(255.0/65535.0))
    # Detect the keypoints and compute the descriptors for the reference image ", reference_descriptors"
    reference_keypoints = sift.detect(reference_image, None)
    reference_keypoints = sift.compute(reference_image, reference_keypoints)
    print("Number of keypoints in reference image:", len(reference_keypoints))
    # Iterate over the remaining images
    for i, image in enumerate(cropped_images[1:]):
        # Detect the keypoints and compute the descriptors for the current image
        image_keypoints, = sift.detect(image, None)
        # Use the BFMatcher to find the best matches between the reference and current image descriptors
        bf = cv2.BFMatcher()
        # matches = bf.match(image_descriptors, image_descriptors)
        # Sort the matches based on their distances
        matches = sorted(matches, key=lambda x: x.distance)
        # Use the best matches to estimate the homography between the reference and current image
        src_pts = np.float32([reference_keypoints[m.queryIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
        dst_pts = np.float32([image_keypoints[m.trainIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
        homography, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        # Use the homography to align the current image with the reference image
        aligned_image = cv2.warpPerspective(image, homography, (reference_image.shape[1], reference_image.shape[0]))
        # Add the aligned image to the list of aligned images
        aligned_images.append(aligned_image)
    # Stack the aligned images along the third dimension
    aligned_images = np.stack(aligned_images, axis=-1)
    return aligned_images
final_complete_images = sift_align(cropped_images)
"""# Save the final aligned images to the Results/aligned directory
for i, final_complete_image in enumerate(final_complete_images):
# Create the Results/aligned directory if it doesn't exist
os.makedirs("G:\Shared\Mulitband\Results\\aligned", exist_ok=True)
# Construct the file path for the aligned image
final_aligned_image_path = "G:\Shared\Mulitband\Results\\aligned\\aligned_{}.tif".format(i)
# Save the final aligned image to the file path
cv2.imwrite(final_aligned_image_path, final_complete_image)"""

error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize' OpenCV

I have this old code that used to run fine in Python 2.7 a while ago. I just updated it to run in Python 3.8, but when I try to execute the code in Python 3.8 and OpenCV 3.4 I get a resize error and a warning (below)!
Here is the link to the two tif images that are required to run this code.
It's worth noting that both tif images are in the same folder as the Python code.
import cv2
import matplotlib.pyplot as plt
import numpy as np
## Code for C_preferred Mask and C_images##
## There are three outputs to this code:
#"Block_order_C.PNG"
#"Out_img.PNG"
#"Output_C.txt"
## Change the image name here
filename_image = '2.tif'
filename_mask = '1.tif'
## OpenCV version checking
#print 'OpenCV version used', cv2.__version__
filename = open("Output_C.txt","w")
filename.write("Processing Image : " + str(filename_image) + '\n\n')
## Function to sort the contours. Parameters that you can tune: tolerance_factor and the size of the image. Here, I have used a fixed size of
## (800,800)
def get_contour_precedence(contour, cols):
    tolerance_factor = 10
    origin = cv2.boundingRect(contour)
    return ((origin[1] // tolerance_factor) * tolerance_factor) * cols + origin[0]
## Loading the colored mask, resizing it to (800,800) and converting it from RGB to HSV space, so that the color values are emphasized
p_mask_c = cv2.cvtColor(cv2.resize(cv2.imread(filename_mask),(800,800)),cv2.COLOR_RGB2HSV);
# Loading the original Image
b_image_1 = cv2.resize(cv2.imread(filename_image),(800,800));
cv2.imshow("c_mask_preferred",p_mask_c)
cv2.waitKey();
# convert the target color to HSV, As our target mask portion to be considered is green. So I have chosen target color to be green
b = 0;
g = 255;
r = 0;
# Converting target color to HSV space
target_color = np.uint8([[[b, g, r]]])
target_color_hsv = cv2.cvtColor(target_color, cv2.COLOR_BGR2HSV)
# boundaries for Hue define the proper color boundaries, saturation and values can vary a lot
target_color_h = target_color_hsv[0,0,0]
tolerance = 20
lower_hsv = np.array([max(0, target_color_h - tolerance), 10, 10])
upper_hsv = np.array([min(179, target_color_h + tolerance), 250, 250])
# apply threshold on hsv image
mask = cv2.inRange(p_mask_c, lower_hsv, upper_hsv)
cv2.imshow("mask",mask)
cv2.waitKey()
# Eroding the binary mask, so that every white portion (grid) is separated from the others, to avoid overlapping and mixing of
# adjacent grids
b_mask = mask;
kernel = np.ones((5,5))
#kernel = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
sharp = cv2.erode(b_mask,kernel, iterations=2)
# Finding all the grids (from binary image)
contours, hierarchy = cv2.findContours(sharp,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print (' Number of contours', len(contours))
# Sorting contours
contours.sort(key=lambda x:get_contour_precedence(x, np.shape(b_mask)[0]))
#cv2.drawContours(b_image_1, contours, -1, (0,255,0), 1)
# Label variable for each grid/panel
label = 1;
b_image = b_image_1.copy();
temp =np.zeros(np.shape(b_image_1),np.uint8)
print (' size of temp',np.shape(temp), np.shape(b_image))
out_img = b_image_1.copy()
# Processing in each contour/label one by one
for cnt in contours:
    cv2.drawContours(b_image_1, [cnt], 0, (255,255,0), 1)
    ## Just to draw labels in the center of each grid
    ((x, y), r) = cv2.minEnclosingCircle(cnt)
    x = int(x)
    y = int(y)
    r = int(r)
    cv2.putText(b_image_1, "#{}".format(label), (int(x) - 10, int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    ##
    cv2.drawContours(temp, [cnt], 0, (255,255,255), -1)
    #crop_img = np.bitwise_and(b_image,temp)
    r = cv2.boundingRect(cnt)
    crop_img = b_image[r[1]:r[1]+r[3], r[0]:r[0]+r[2]]
    mean = cv2.mean(crop_img);
    mean = np.array(mean).reshape(-1,1)
    print (' Mean color', mean, np.shape(mean))
    if mean[1] < 50:
        cv2.putText(out_img, "M", (int(x) - 10, int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 1)
        filename.write("Block number #" + str(label) + ' is : ' + 'Magenta' + '\n');
    else:
        cv2.putText(out_img, "G", (int(x) - 10, int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 1)
        filename.write("Block number #" + str(label) + ' is : ' + 'Gray' + '\n');
    label = label + 1;
cv2.imwrite("Block_order_C.PNG",b_image_1)
cv2.imwrite("Out_img.PNG",out_img)
filename.close()
cv2.imshow("preferred",b_image_1)
cv2.waitKey()
Error
[ WARN:0] global C:\projects\opencv-python\opencv\modules\imgcodecs\src\grfmt_tiff.cpp (449) cv::TiffDecoder::readData OpenCV TIFF: TIFFRGBAImageOK: Sorry, can not handle images with IEEE floating-point samples
Traceback (most recent call last):
File "Processing_C_preferred.py", line 32, in
p_mask_c = cv2.cvtColor(cv2.resize(cv2.imread(filename_mask),(800,800)),cv2.COLOR_RGB2HSV);
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\imgproc\src\resize.cpp:4045: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
When you read in the image, pass the cv2.IMREAD_ANYDEPTH flag (= 2) as the second parameter to cv2.imread().
Changing your lines to
p_mask_c = cv2.cvtColor(cv2.resize(cv2.imread(filename_mask, 2),(800,800)),cv2.COLOR_RGB2HSV);
and
b_image_1 = cv2.resize(cv2.imread(filename_image, 2),(800,800));
removes the resize error you're seeing.
But then you get another error when converting the color, since your TIFF image apparently has only one channel, so cv2.COLOR_RGB2HSV won't work.
You could also combine multiple flags, e.g. cv2.IMREAD_COLOR (= 1),
p_mask_c = cv2.cvtColor(cv2.resize(cv2.imread(filename_mask, 2 | 1),(800,800)),cv2.COLOR_BGR2HSV);
to read in a color image. But you get a different error. Perhaps you understand this image better than I do and can solve the problem from here on out.
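As a minimal sketch of the flag-combining idea (using the file names from the question; the print is only there so you can inspect the depth and channel count before choosing a cvtColor conversion):
import cv2
# Combine IMREAD_ANYDEPTH and IMREAD_COLOR with a bitwise OR (the same as the "2 | 1" above)
p_mask = cv2.imread('1.tif', cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
if p_mask is None:
    raise IOError('1.tif could not be read')
# Check the dtype and number of channels before picking a color conversion
print(p_mask.dtype, p_mask.shape)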

How to split image of table at vertical lines into three images?

I want to split an image of a table at the vertical lines into three images as shown below. Is it possible? The width of each column is variable. And the sad thing is that the left vertical line is drawn down from the header as you can see.
Input image (input.png)
Output image (output1.png)
Output image (output2.png)
Output image (output3.png)
Update 1
And the sad thing is that the left vertical line is drawn down from the header as you can see.
I guess that means the following image B would be easier to split, but my case is A.
Update 2
I am trying to do it the way @HansHirse suggested. My expectation is that sub_image_1.png, sub_image_2.png and sub_image_3.png are stored in the out folder, but no luck so far. I'm looking into it.
https://github.com/zono/ocr/blob/16fd0ec9a2c7d2e26279ec53947fe7fbab9f526d/src/opencv.py
$ git clone https://github.com/zono/ocr.git
$ cd ocr
$ git checkout 16fd0ec9a2c7d2e26279ec53947fe7fbab9f526d
$ docker-compose up -d
$ docker exec -it ocr /bin/bash
$ python3 opencv.py
Since your table is perfectly aligned, you can inverse binary threshold your image, and count (white) pixels along the y-axis to detect the vertical lines:
You'll need to clean the peaks, since you might get plateaus for the thicker lines.
That'd be my idea in Python OpenCV:
import cv2
import numpy as np
from skimage import io # Only needed for web reading images
# Web read image via scikit-image; convert to OpenCV's BGR color ordering
img = cv2.cvtColor(io.imread('https://i.stack.imgur.com/BTqBs.png'), cv2.COLOR_RGB2BGR)
# Inverse binary threshold grayscale version of image
img_thr = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 128, 255, cv2.THRESH_BINARY_INV)[1]
# Count pixels along the y-axis, find peaks
thr_y = 200
y_sum = np.count_nonzero(img_thr, axis=0)
peaks = np.where(y_sum > thr_y)[0]
# Clean peaks
thr_x = 50
temp = np.diff(peaks).squeeze()
idx = np.where(temp > thr_x)[0]
peaks = np.concatenate(([0], peaks[idx+1]), axis=0) + 1
# Save sub-images
for i in np.arange(peaks.shape[0] - 1):
    cv2.imwrite('sub_image_' + str(i) + '.png', img[:, peaks[i]:peaks[i+1]])
I get the following three images:
As you can see, you might want to modify the selection by +/- 1 pixel, if an actual line is only 1 pixel wide.
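For example, one possible tweak of the write call above (assuming the peaks array from the snippet) is to nudge the left edge of each slice by one pixel:
# Start one pixel to the right of the detected line so a 1-pixel-wide line isn't included
cv2.imwrite('sub_image_' + str(i) + '.png', img[:, peaks[i] + 1:peaks[i+1]])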
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
NumPy: 1.18.1
OpenCV: 4.2.0
----------------------------------------
OpenCV has a line detection function: HoughLines.
You can filter the lines that are returned by passing min_theta and max_theta. For vertical lines you could specify, say, 88 and 92 respectively to allow some margin.
This is an edited sample taken from the OpenCV documentation:
import sys
import math
import cv2 as cv
import numpy as np
def main(argv):
    default_file = 'img.png'
    filename = argv[0] if len(argv) > 0 else default_file
    # Loads an image
    src = cv.imread(cv.samples.findFile(filename), cv.IMREAD_GRAYSCALE)
    # some preparation of the photo
    dst = cv.Canny(src, 50, 200, None, 3)
    # Copy edges to the images that will display the results in BGR
    cdst = cv.cvtColor(dst, cv.COLOR_GRAY2BGR)
    cdstP = np.copy(cdst)
    lines = cv.HoughLines(dst, 1, np.pi / 180, 150, None, 0, 0, np.deg2rad(88), np.deg2rad(92))  # min and max theta, in radians
You can get the x, y coordinates of the lines and draw them using the following code.
if lines is not None:
    for i in range(0, len(lines)):
        rho = lines[i][0][0]
        theta = lines[i][0][1]
        a = math.cos(theta)
        b = math.sin(theta)
        x0 = a * rho
        y0 = b * rho
        pt1 = (int(x0 + 1000*(-b)), int(y0 + 1000*(a)))
        pt2 = (int(x0 - 1000*(-b)), int(y0 - 1000*(a)))
        cv.line(cdst, pt1, pt2, (0,0,255), 3, cv.LINE_AA)
Alternatively, you can use HoughLinesP, as this allows you to specify a minimum length, which will help your filtering. The lines are also returned as x, y pairs for each end, making them easier to work with.
linesP = cv.HoughLinesP(dst, 1, np.pi / 180, 50, None, 50, 10)
if linesP is not None:
    for i in range(0, len(linesP)):
        l = linesP[i][0]
        cv.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0,0,255), 3, cv.LINE_AA)
cv.imshow("Source", src)
cv.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst)
cv.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP)
cv.waitKey()
return 0
Documentation
To crop your image you can take the x coordinates of the lines you detected and use numpy slicing.
for i in range(0, len(linesP) - 1):
    l = linesP[i][0]
    xcoords = l[0], linesP[i+1][0][0]
    slice = src[:, xcoords[0]:xcoords[1]]
    cv.imshow('slice', slice)
    cv.waitKey(0)

How to split the image into chunks without breaking character - python

I am trying to read text from an image.
I get better results if I break the image into small chunks, but the problem is that when I try to split the image it cuts/slices my characters.
Code I am using:
from __future__ import division
import math
import os
from PIL import Image
def long_slice(image_path, out_name, outdir, slice_size):
    """slice an image into parts slice_size tall"""
    img = Image.open(image_path)
    width, height = img.size
    upper = 0
    left = 0
    slices = int(math.ceil(height / slice_size))
    count = 1
    for slice in range(slices):
        # if we are at the end, set the lower bound to be the bottom of the image
        if count == slices:
            lower = height
        else:
            lower = int(count * slice_size)
        # set the bounding box! The important bit
        bbox = (left, upper, width, lower)
        working_slice = img.crop(bbox)
        upper += slice_size
        # save the slice
        working_slice.save(os.path.join(outdir, "slice_" + out_name + "_" + str(count) + ".png"))
        count += 1

if __name__ == '__main__':
    # slice_size is the max height of the slices in pixels
    long_slice("/python_project/screenshot.png", "longcat", os.getcwd(), 100)
Sample image: the image I want to process
Expected/What I am trying to do:
I want to split every line into a separate image without cutting any characters.
Line 1:
Line 2:
Current result: characters in the image are cropped.
I don't want to cut the image at fixed pixel heights, since each document will have different spacing and line widths.
Thanks
Jk
Here is a solution that finds the brightest rows in the image (i.e., the rows without text) and then splits the image on those rows. So far I have just marked the sections, and am leaving the actual cropping up to you.
The algorithm is as follows:
Find the sum of the luminance (I am just using the red channel) of every pixel in each row
Find the rows whose sums are at least 0.999 (the threshold I am using) times as bright as the brightest row
Mark those rows
Here is the code that will return a list of these rows:
def find_lightest_rows(img, threshold):
    line_luminances = [0] * img.height
    for y in range(img.height):
        for x in range(img.width):
            line_luminances[y] += img.getpixel((x, y))[0]
    line_luminances = [x for x in enumerate(line_luminances)]
    line_luminances.sort(key=lambda x: -x[1])
    lightest_row_luminance = line_luminances[0][1]
    lightest_rows = []
    for row, lum in line_luminances:
        if lum > lightest_row_luminance * threshold:
            lightest_rows.append(row)
    return lightest_rows
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ... ]
After colouring these rows red, we have this image:
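If you do want to do the cropping, here is a rough sketch (not part of the original answer) that groups consecutive light rows into runs and crops the text bands between them; it assumes img is the PIL image and lightest_rows is the list returned by find_lightest_rows above:
def split_on_light_rows(img, lightest_rows):
    # Collapse consecutive light row indices into (start, end) runs
    runs = []
    for r in sorted(lightest_rows):
        if runs and r == runs[-1][1] + 1:
            runs[-1] = (runs[-1][0], r)
        else:
            runs.append((r, r))
    # Each text band lies between the end of one light run and the start of the next
    pieces = []
    for top_run, bottom_run in zip(runs, runs[1:]):
        upper = top_run[1]
        lower = bottom_run[0]
        if lower - upper > 1:  # skip back-to-back runs with no text in between
            pieces.append(img.crop((0, upper, img.width, lower)))
    return pieces
Each piece can then be saved with piece.save(...) or passed straight to the OCR step.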

Image normalization

From my perspective, image normalization means scaling every pixel to a value between 0 and 1. Am I right?
But what does the following code mean?
image_size = 28     # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
for image in image_files:
    image_file = os.path.join(folder, image)
    try:
        image_data = (ndimage.imread(image_file).astype(float) -
                      pixel_depth / 2) / pixel_depth  # WHY ??
        if image_data.shape != (image_size, image_size):
            raise Exception('Unexpected image shape: %s' % str(image_data.shape))
        dataset[num_images, :, :] = image_data
        num_images = num_images + 1
    except IOError as e:
        print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
Image normalization is merely the process of changing the range of pixel intensity values.
The choice of the new range is up to you.
In the case you've shown, it looks like the range -0.5 .. 0.5 has been chosen.
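A tiny self-contained example (with made-up pixel values) shows what that expression does to the range:
import numpy as np
pixel_depth = 255.0
raw = np.array([0.0, 128.0, 255.0])               # toy pixel values covering the full 8-bit range
normalized = (raw - pixel_depth / 2) / pixel_depth
print(normalized)                                 # [-0.5  0.00196078  0.5]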
