How to add transparent border OpenCV copymakeborder - opencv

I have 26 PNG files, each with an image of a letter of the alphabet. They've all been tightly cropped to the letter shape, with the result that when I insert them into an image, letters with tails all 'sit on the line'.
Each letter is in black, with a transparent background. Each PNG has different dimensions because of the differing letter shapes.
I thought I'd remediate this by adding a transparent border whose size depends on the source file, to create a common datum for all the letters, so that 'a', for example, would have some transparent space at the bottom.
I've coded up the calculation for each letter, but I have two issues:
1) Even before applying the operation, I can't seem to read the file in and write it back out unchanged in OpenCV. The transparency in the image is replaced with black.
2) While I can add a colour border, I can't seem to add a transparent one.
Original Image:
Read in, and written out:
Apparently with a blue border, but maximum transparency:
I have a feeling that if I can sort out the first problem, the second might fall in line. Here is my code:
img = cv2.imread(file)
img_with_border = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[-255,0,0,255])
#img_with_border = img
cv2.imwrite(newfile, img_with_border, [cv2.IMWRITE_JPEG_QUALITY, 100])
I'd appreciate some help on what is going on here with transparency. Is OpenCV the right tool to use?
Thanks,
Jeff.

To load a PNG image with 4 channels in OpenCV, use im = cv2.imread(file, cv2.IMREAD_UNCHANGED). You will obtain a BGRA image.
To change the alpha value, you have to change the fourth channel of the image. This means that to create your transparent border, you have to pass a value (B, G, R, 0) and not [-255, 0, 0, 255]. (What is that -255, by the way?) B, G and R can be 0; they don't matter.
Also, make sure you write to a PNG image to keep the transparency. You seem to be writing your result as JPEG.
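Putting that together, a minimal sketch of the corrected flow might look like this (file names and border sizes are placeholders; the per-letter border calculation stays your own):
import cv2

# Per-letter border sizes, computed elsewhere; placeholder values here
top, bottom, left, right = 0, 20, 0, 0

# Read the PNG with its alpha channel intact (gives a BGRA image)
img = cv2.imread("letter_a.png", cv2.IMREAD_UNCHANGED)

# A fully transparent border: alpha = 0; the B, G, R values don't matter
img_with_border = cv2.copyMakeBorder(img, top, bottom, left, right,
                                     cv2.BORDER_CONSTANT, value=(0, 0, 0, 0))

# Write to PNG, not JPEG, so the alpha channel survives
cv2.imwrite("letter_a_padded.png", img_with_border)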

Related

Problems with select by colour in GIMP

I'm having problems using pdb.gimp_by_color_select in GIMP.
I've already looked at this question
Here's what I have:
# duplicate layer
duplicate_layer(image, "temp")
tempLayer = pdb.gimp_image_get_active_layer(image)
colour = (0,0,0)
operation = 0
pdb.gimp_selection_none(tempLayer)
pdb.gimp_by_color_select(tempLayer, colour, 0, operation, True, False, 0, True)
Only it doesn't select any of the black pixels in the newly duplicated tempLayer, as I would expect it to.
Here's a snippet of the image
The lines are not true black (0,0,0), but I do an auto levels
# Auto levels
pdb.gimp_drawable_levels_stretch(tempLayer)
on the image beforehand.
If you look at the picture histogram, the "black" is actually a fairly wide range, from 40 to 100 with a peak around 75:
And even after a level-stretch, most of your black pixels are still not completely black:
You would get a better result by thresholding the image around 100, if necessary using another copy of the layer (a selection applies to any layer of the image, regardless of the layer used to obtain it).
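A rough Python-fu sketch of that idea, reusing the duplicate_layer helper from the question (the 100 cutoff comes from the histogram above and may need tuning):
# Work on a throwaway copy so the original layer is untouched
duplicate_layer(image, "temp")  # same helper as in the question
tempLayer = pdb.gimp_image_get_active_layer(image)

# Make everything darker than ~100 pure black, the rest pure white
# (gimp_drawable_threshold takes values on a 0..1 scale)
pdb.gimp_drawable_threshold(tempLayer, HISTOGRAM_VALUE, 100 / 255.0, 1.0)

# An exact select-by-colour on black now works; the selection applies to
# every layer of the image, so the temp layer can be deleted afterwards
pdb.gimp_selection_none(image)
pdb.gimp_by_color_select(tempLayer, (0, 0, 0), 0, CHANNEL_OP_REPLACE,
                         False, False, 0, False)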

Eliminating various backgrounds from image and segmenting object?

Let's say I have this input image, with any number of boxes. I want to segment out these boxes so I can eventually extract them.
input image:
The background could be anything that is continuous, like a painted wall, a wooden table, or carpet.
My idea was that the gradient would be roughly constant throughout the background, so I could zero out the parts of the image where the gradient is about the same.
Through edge detection, I would dilate and fill the regions where edges are detected. Essentially, my goal is to make a blob of the areas where the boxes are. Having the blobs, I would know the exact location of the boxes and could crop them out of the input image.
So in this case, I should be able to have four blobs, and then I would be able to crop out four images from the input image.
This is how far I got:
segmented image:
query = imread('AllFour.jpg');
gray = rgb2gray(query);
[~, threshold] = edge(gray, 'sobel');
weightedFactor = 1.5;
BWs = edge(gray,'roberts');
%figure, imshow(BWs), title('binary gradient mask');
se90 = strel('disk', 30);
se0 = strel('square', 3);
BWsdil = imdilate(BWs, [se90]);
%figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
What a very interesting problem! Here's my attempt at a solution, assuming that the background has the same colour distribution throughout. First, transform your image from RGB to the HSV colour space with rgb2hsv. The HSV colour space is an ideal transform for analyzing colours. After this, I would look at the saturation and value planes. Saturation is concerned with how "pure" the colour is, while value is the intensity or brightness of the colour itself. If you take a look at the saturation and value planes for the image, this is what is shown:
im = imread('http://i.stack.imgur.com/1SGVm.jpg');
out = rgb2hsv(im);
figure;
subplot(2,1,1);
imshow(out(:,:,2));
subplot(2,1,2);
imshow(out(:,:,3));
This is what I get:
By taking a look at some locations in the gray background, it looks like the majority of the saturation values are less than 0.2, while the values in the value plane are greater than 0.3. As such, we want the opposite of those pixels to get our objects: those pixels whose saturation is greater than 0.2, or those pixels with a value less than 0.3:
seg = out(:,:,2) > 0.2 | out(:,:,3) < 0.3;
This is what we get:
Almost there! There are some spurious single pixels, so I'm going to perform an opening with imopen, using a line structuring element.
After this, I'll perform a dilation with imdilate to close any gaps, then use imfill with the 'holes' option to fill in the gaps, then use erosion with imerode to shrink the shapes back to their original form. As such:
se = strel('line', 3, 90);
pre = imopen(seg, se);
se = strel('square', 20);
pre2 = imdilate(pre, se);
pre3 = imfill(pre2, 'holes');
final = imerode(pre3, se);
figure;
imshow(final);
final contains the segmented image with the 4 candy boxes. This is what I get:
Try resizing the image. When you make it smaller, it is easier to join edges. I tried what's shown below. You might have to tune it depending on the nature of the background.
close all;
clear all;
im = imread('1SGVm.jpg');
small = imresize(im, .25); % resize
grad = (double(imdilate(small, ones(3))) - double(small)); % extract edges
gradSum = sum(grad, 3);
bw = edge(gradSum, 'Canny');
joined = imdilate(bw, ones(3)); % join edges
filled = imfill(joined, 'holes');
filled = imerode(filled, ones(3));
imshow(label2rgb(bwlabel(filled))) % label the regions and show
If you have a recent version of MATLAB, try the Color Thresholder app in the image processing toolbox. It lets you interactively play with different color spaces, to see which one can give you the best segmentation.
If your candy covers are fixed, or you know all the covers that can appear in the scene, then template matching is a good fit, since it is independent of the background in the image.
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
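A minimal Python sketch of the approach from that tutorial (the template file name and the 0.8 score threshold are assumptions):
import cv2
import numpy as np

scene = cv2.imread('AllFour.jpg')
template = cv2.imread('box_cover.jpg')  # one known candy cover
h, w = template.shape[:2]

# Normalised cross-correlation; high scores mark likely cover locations
res = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
for y, x in zip(*np.where(res >= 0.8)):
    cv2.rectangle(scene, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('matches.jpg', scene)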

Automatic approach for removing colored object shadow on white background?

I am working on some leaf images using OpenCV (Java). The leaves are captured on white paper, and some have shadows like this one:
Of course, this is somewhat an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while preserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on the saturation channel? The reason I ask is that on most of my images, working on the L channel (from HLS) gives much better results (apart from the shadow, of course).
Update: Using the Hue channel makes thresholding better, but makes the shadow situation worse:
Update 2: The assumption that the shadow is darker than the leaf doesn't always hold, so working on intensities won't help. I'm looking more toward a color channels approach.
I don't use OpenCV; instead, I tried the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the corresponding functions. Please see my results below. I did all the operations on channels 3 and 1 of your original image.
First I took your channel 3 and thresholded it at 100 (top left). Then I removed the regions touching the border and the regions smaller than 100 pixels, and filled in the holes in the leaf; the result is shown at top right.
Next I took your channel 1 and did the same thing as with channel 3; the result is shown at bottom left. Then I found the connected regions (there are only two, as you can see in the bottom-left figure) and removed the one with the smaller area (shown at bottom right).
Supposing the top-right image is I1 and the bottom-right image is I, the leaf is extracted by computing ~I & I1. The leaf is:
Hope it helps. Thanks
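For readers on OpenCV/Python rather than MATLAB, a rough sketch of those steps might look like the following (the 100 cutoffs come from the description above; I'm assuming the leaf is the dark side of each threshold, and I lean on scipy for the region cleanup):
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes, label

img = cv2.imread('leaf.jpg')  # OpenCV loads BGR, so MATLAB's channel 3 is index 0

def clean(channel):
    # Threshold at 100, drop regions touching the border or under 100 px,
    # then fill the holes, as described above
    bw = channel < 100
    lbl, n = label(bw)
    border = set(lbl[0]) | set(lbl[-1]) | set(lbl[:, 0]) | set(lbl[:, -1])
    for i in range(1, n + 1):
        if i in border or (lbl == i).sum() < 100:
            bw[lbl == i] = False
    return binary_fill_holes(bw)

I1 = clean(img[:, :, 0])  # MATLAB channel 3 == blue
I = clean(img[:, :, 2])   # MATLAB channel 1 == red

# Keep only the larger of I's connected regions, then combine as ~I & I1
lbl, n = label(I)
sizes = [(lbl == i).sum() for i in range(1, n + 1)]
I = lbl == (1 + int(np.argmax(sizes)))
leaf = ~I & I1
cv2.imwrite('leaf_mask.png', leaf.astype(np.uint8) * 255)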
I tried two different things:
1. a different thresholding on the saturation channel
2. trying to find two contours: shadow and leaf
I use C++, so my code snippets will look a little different from yours.
Trying Otsu thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs,mask,0,255,CV_THRESH_BINARY|CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel; see the OpenCV documentation), thresholding that, and after an opening operator I used findContours, giving something like this, which is not usable yet (gradient contour approach):
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds such as this. Once you've identified a good portion of the background (e.g. via thresholding, or, as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In Python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like:
import cv2
import numpy as np
img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh.jpg", thresh)
h, w = img.shape[:2]
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
grabcut_mask = (thresh / 255 * 3).astype(np.uint8) # background should be 0, probable foreground = 3
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img*grabcut_mask[...,None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow actually has high saturation levels, so is included in the grab cut deletion. (I would post images, but don't have enough reputation)
Usually, however, you can't trust shadows to be included in the background detection. In this case you probably want to compare areas in the image with the colour of the now-known background, using the chromacity distortion measure proposed by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure takes into account the fact that for desaturated colours, hue is not a relevant measure.
Note that the pdf of the preprint you find online has a mistake (no + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al (2012), equations 1 & 2. Or you can use my python code below:
def brightness_distortion(I, mu, sigma):
    return np.sum(I*mu/sigma**2, axis=-1) / np.sum((mu/sigma)**2, axis=-1)

def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[...,None]
    return np.sqrt(np.sum(((I - alpha * mu)/sigma)**2, axis=-1))
You can feed the known background mean & stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter, which should show you that the shadow is basically the same chromacity as the background, and very different from the leaf. In the code below, I've then thresholded on chromacity, and done another grabcut pass. This works to remove the shadow even if the first grabcut pass doesn't (e.g. if you originally thresholded on hue)
mean, stdev = cv2.meanStdDev(img, mask = 255-thresh)
mean = mean.ravel() #bizarrely, meanStdDev returns an array of size [3,1], not [3], so flatten it
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)
thresh2 = cv2.adaptiveThreshold(chrom255, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh2.jpg", thresh2)
grabcut_mask[...] = 3
grabcut_mask[thresh==0] = 0 #where thresh == 0, definitely background, set to 0
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2 #could try setting this to 2 or 0
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[...,None]*img)
I'm afraid with the parameters I tried, this still removes the stalk, though. I think that's because GrabCut thinks that it looks a similar colour to the shadows. Let me know if you find a way to keep it.

How to remove watermark from TIFFs to improve OCR

I have a bunch of uncompressed bitonal TIF document images. All of them have a watermark in the middle. When I run them through OCR, the text that overlaps with the watermark does not get recognized. I am trying to see if I can apply some type of cleanup to remove those watermarks to be able to recognize the missing text.
Again, the images are black and white, but when you look at the watermark it appears grey since it has a pattern of black and white pixels that makes the letters in the watermark less "dense" than regular text. At the same time, the watermark letters are very big, much bigger than the regular text.
An example of a somewhat similar image is this (except this one is color and the watermark characters in my case are a lot thicker and bigger; my watermarks are also a lot shorter: only 3 to 4 letters long)
It seems that there might be some sort of cleanup filter for this, similar to removing large black borders from an image, except that borders are usually "denser" than a watermark, so they appear "more black".
I have 3 tools at my disposal: GIMP, ImageMagick and IrfanView. Can you recommend any specific features of any subset of these tools that might help me?
Playing with contrast etc. did not help, but I found a different way. As stated above, the regular text is a lot "denser" than the watermark text, meaning that a regular black pixel has more surrounding black pixels than a watermark black pixel. So I devised a simple window-based filtering and thresholding algorithm.
Here's how I did it in MATLAB, using a 5x5 window:
im=imread('imageWithWmark.tif');
imInv = ~im;
nr=size(imInv,1);
nc=size(imInv,2);
d = 2; % for 5X5 window
counts = zeros(nr,nc);
for rr = d+1 : nr-d-1
    for cc = d+1 : nc-d-1
        counts(rr,cc) = nnz(imInv(rr-d:rr+d,cc-d:cc+d));
    end
end
thresh=10; % 10 out of 25 -- the larger the thresh the thinner the resulting letters are
imThresh = (counts>=thresh) & imInv;
imwrite(~imThresh,sprintf('Thresh_%d.tif',thresh),'Compression','none','Resolution',300);
Of course, the size of the window, the threshold and the other parameters depend on the characteristics of the regular text on the page (bigger/smaller, thicker/thinner letters, etc.), but even this initial version worked pretty well.
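The double loop is simply a 5x5 box count, so the same filter can be written in a few vectorized lines; a rough OpenCV/Python equivalent (file names assumed) could be:
import cv2
import numpy as np

im = cv2.imread('imageWithWmark.tif', cv2.IMREAD_GRAYSCALE)
im_inv = (im == 0).astype(np.float32)  # ink pixels become 1

# Count the black pixels in each 5x5 neighbourhood in a single pass
counts = cv2.boxFilter(im_inv, -1, (5, 5), normalize=False)

thresh = 10  # same meaning as in the MATLAB version
keep = (counts >= thresh) & (im_inv == 1)
cv2.imwrite('Thresh_10.tif', np.where(keep, 0, 255).astype(np.uint8))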

Overlay smaller image in a larger image in OpenCV

I would like to replace a part of an image with my own image in OpenCV.
I used cvGetPerspectiveTransform() with a warp matrix, along with cvAnd() and cvOr(), but could not get it to work.
This is the code that currently displays the image with a white polygon where the replacement image should go. I would like to replace the white polygon with a picture of arbitrary dimensions, scaled to fit the indicated region.
While the code is in JavaCV, I can convert it to Java even if C code is posted.
grabber.start();
while (isDisp() && (image = grabber.grab()) != null) {
    if (dst_corners != null) { // corners of the image to be replaced
        CvPoint points = new CvPoint((byte) 0, dst_corners, 0, dst_corners.length);
        cvFillConvexPoly(image, points, 4, CvScalar.WHITE, 1, 0); // white polygon covering the replacement image
    }
    correspondFrame.showImage(image);
}
Any pointers to this will be very helpful.
Update:
I used a warp matrix with this code, and I get a black spot for the overlay image:
cvSetImageROI(image, cvRect(x1,y1, overlay.width(), overlay.height()));
CvPoint2D32f p = new CvPoint2D32f(4);
CvPoint2D32f q = new CvPoint2D32f(4);
q.position(0).x(0);
q.position(0).y(0);
q.position(1).x((float) overlay.width());
q.position(1).y(0);
q.position(2).x((float) overlay.width());
q.position(2).y((float) overlay.height());
q.position(3).x(0);
q.position(3).y((float) overlay.height());
p.position(0).x((int)Math.round(dst_corners[0]));
p.position(0).y((int)Math.round(dst_corners[1]));
p.position(1).x((int)Math.round(dst_corners[2]));
p.position(1).y((int)Math.round(dst_corners[3]));
p.position(3).x((int)Math.round(dst_corners[4]));
p.position(3).y((int)Math.round(dst_corners[5]));
p.position(2).x((int)Math.round(dst_corners[6]));
p.position(2).y((int)Math.round(dst_corners[7]));
cvGetPerspectiveTransform(q, p, warp_matrix);
cvWarpPerspective(overlay, image, warp_matrix);
I get a black spot for the overlay image, and even though the original region is a polygon with 4 vertices, the overlay image is set as a rectangle. I believe this is because of the ROI. Could anyone please tell me how to fit the image as is, and also why I am getting a black spot instead of the overlay image?
I think cvWarpPerspective(link) is what you are looking for.
So instead of doing
CvPoint points = new CvPoint((byte) 0,dst_corners,0,dst_corners.length);
cvFillConvexPoly(image,points, 4, CvScalar.WHITE, 1, 0);//white polygon covering the replacement image
Try
cvWarpPerspective(yourimage, image, M, image.size(), INTER_CUBIC, BORDER_TRANSPARENT);
where M is the matrix you get from cvGetPerspectiveTransform.
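For reference, with the newer OpenCV bindings the same trick looks like this in Python (the corner coordinates are example values standing in for dst_corners):
import cv2
import numpy as np

image = cv2.imread('frame.jpg')      # the grabbed frame
overlay = cv2.imread('overlay.jpg')  # the picture to insert
h, w = overlay.shape[:2]

src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[100, 80], [420, 60], [450, 300], [90, 320]])  # target quad

M = cv2.getPerspectiveTransform(src, dst)
# BORDER_TRANSPARENT leaves the pixels of `image` outside the warped
# quad untouched, so no black spot appears
cv2.warpPerspective(overlay, M, (image.shape[1], image.shape[0]), dst=image,
                    borderMode=cv2.BORDER_TRANSPARENT)
cv2.imwrite('result.jpg', image)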
One way to do it is to scale the pic to the white polygon size and then copy it to the grabbed image setting its Region of Interest (here is a link explaining the ROI).
Your code should look like this:
resize(pic, resizedImage, resizedImage.size(), 0, 0, interpolation); //resizedImage should have the points size
cvSetImageROI(image, cvRect(the points coordinates));
cvCopy(resizedImage,image);
cvResetImageROI(image);
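In Python, this ROI approach reduces to a resize plus a numpy slice assignment; a small sketch (coordinates are example values):
import cv2

image = cv2.imread('frame.jpg')
pic = cv2.imread('replacement.jpg')

x, y, w, h = 100, 80, 320, 240  # example bounding box of the white polygon
resized = cv2.resize(pic, (w, h), interpolation=cv2.INTER_CUBIC)
image[y:y + h, x:x + w] = resized  # copy into the region of interest
cv2.imwrite('result.jpg', image)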
I hope that helps.
Best regards,
Daniel
