How to merge two images without losing intensity in OpenCV

I have two images in OpenCV: Image A and Image B.
Image A is an output frame from the camera.
Image B is an alpha-transparent image obtained by masking another image.
Before masking, Image B is warped with cvWarpPerspective().
I tried cvAddWeighted() - it loses intensity when you give alpha and beta values.
I tried aishack - even here you lose the overall intensity of the output image.
I tried silveiraneto.net - not helpful in my case.
Please help me out with something where I don't lose intensity in the output image after blending.
Thanks in advance

When you say that you lose intensity, you leave open the question of how you lose it.
Do you lose intensity in the sense:
That when you add the images you hit the maximum intensity, and the rest is clipped.
(Example for 8-bit pixel addition: Pix1 = 200, Pix2 = 150. Pix1 + Pix2 = 350, but the maximum value is 255, so Pix1 + Pix2 = 255.)
That the original values of image A are diluted by averaging with Image B, which only covers parts of the image.
(Example for an 8-bit image: Pix1 = 200, Pix2 = 150, (Pix1 + Pix2)/2 = 175; but where the second image is empty, Pix2 = 0, so (Pix1 + Pix2)/2 = 100, which is half the value of the original image.)
One of these observations should tell you what you need to do.
I don't know which of these approaches the functions you mentioned use.
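A quick way to see the difference between the two effects (a minimal NumPy/OpenCV sketch, not from the original post):

import cv2
import numpy as np

pix1 = np.array([[200]], np.uint8)
pix2 = np.array([[150]], np.uint8)
empty = np.array([[0]], np.uint8)

print(cv2.add(pix1, pix2))                       # [[255]] - the sum saturates at 255
print(cv2.addWeighted(pix1, 0.5, pix2, 0.5, 0))  # [[175]] - averaging where both images have content
print(cv2.addWeighted(pix1, 0.5, empty, 0.5, 0)) # [[100]] - averaging against an empty overlay halves the frame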

I finally got the answer. It consists of 5 steps:
Step - 1
cvGetPerspectiveTransform(q, pnt, warp_matrix);
// q and pnt hold the four corner points (x and y coordinates); warp_matrix is a 3 x 3 matrix
Step - 2
cvWarpPerspective(dst2, neg_img, warp_matrix, CV_INTER_LINEAR);
// dst2 is the overlay image, neg_img is a blank image
Step - 3
cvSmooth(neg_img, neg_img, CV_MEDIAN); // smoothing the warped image
Step - 4
cvThreshold(neg_img, cpy_img, 0, 255, CV_THRESH_BINARY_INV);
// cpy_img is an image created from image_n
Step - 5
cvAnd(cpy_img, image_n, cpy_img); // image_n is the input image
cvOr(neg_img, cpy_img, image_n);
Output - image_n (without losing intensity of the input image)
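For reference, a rough equivalent of the same five steps with the modern Python API might look like this (a sketch only, assuming a BGR frame and overlay plus the four source/destination corner points; it is not the exact code above):

import cv2
import numpy as np

def composite(frame, overlay, src_pts, dst_pts):
    # Steps 1-2: build the perspective transform and warp the overlay onto a frame-sized canvas
    h, w = frame.shape[:2]
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    warped = cv2.warpPerspective(overlay, M, (w, h))
    # Step 3: smooth the warped overlay
    warped = cv2.medianBlur(warped, 3)
    # Step 4: build an inverse mask that is white wherever the overlay is empty
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    _, inv_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV)
    inv_mask = cv2.cvtColor(inv_mask, cv2.COLOR_GRAY2BGR)
    # Step 5: keep the original frame where the overlay is empty, paste the overlay elsewhere
    background = cv2.bitwise_and(frame, inv_mask)
    return cv2.bitwise_or(warped, background)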

Related

Placing a shape inside another shape using opencv

I have two images and I need to place the second image inside the first image. The second image can be resized, rotated or skewed so that it covers as large an area of the first image as possible. As an example, in the figure shown below, the green circle needs to be placed inside the blue shape:
Here the green circle is transformed so that it covers a larger area. Another example is shown below:
Note that there may be multiple valid results. However, any similar result is acceptable, as shown in the above example.
How do I solve this problem?
Thanks in advance!
I tested the idea I mentioned earlier in the comments and the output is almost good. It could be better, but that takes more time. The final code is long and depends on one of my old personal projects, so I will not share it; instead I will explain step by step how I wrote the algorithm. Note that I have tested the algorithm many times, but it is not yet 100% accurate.
for N times do this:
1. Copy the shape.
2. Transform it randomly.
3. Put the shape on the background.
4. If the shape exceeds the background, it is not acceptable; go back to step 1. Otherwise continue to step 5.
5. Calculate the width, height and number of pixels of the shape.
6. Keep a list of the best candidates and compare these three parameters (W, H, Pixels) with the members of the list. If a better item is found, save it.
I set the value of N to 5,000. The larger the number, the slower the algorithm runs, but the better the result.
You can use anything for Transform. Mirror, Rotate, Shear, Scale, Resize, etc. But I used warpPerspective for this one.
import sys
import random
import cv2
import numpy as np

im1 = cv2.imread(sys.path[0]+'/Back.png')
im2 = cv2.imread(sys.path[0]+'/Shape.png')
bH, bW = im1.shape[:2]
sH, sW = im2.shape[:2]

# TopLeft, TopRight, BottomRight, BottomLeft of the shape
_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
cx = random.randint(5, sW-5)
ch = random.randint(5, sH-5)
o = 0

# Random transformed output corners
_out = np.float32([
    [random.randint(-o, cx-1), random.randint(1-o, ch-1)],
    [random.randint(cx+1, sW+o), random.randint(1-o, ch-1)],
    [random.randint(cx+1, sW+o), random.randint(ch+1, sH+o)],
    [random.randint(-o, cx-1), random.randint(ch+1, sH+o)]
])

# Transformed output (warpPerspective expects dsize as (width, height))
M = cv2.getPerspectiveTransform(_inp, _out)
t = cv2.warpPerspective(im2, M, (bW, bH))
You can use countNonZero to find the number of pixels and findContours and boundingRect to find the shape size.
def getSize(msk):
    cnts, _ = cv2.findContours(msk, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnts.sort(key=lambda p: max(cv2.boundingRect(p)[2], cv2.boundingRect(p)[3]), reverse=True)
    w, h = 0, 0
    if len(cnts) > 0:
        _, _, w, h = cv2.boundingRect(cnts[0])
    pix = cv2.countNonZero(msk)
    return pix, w, h
To find the overlap of the background and the shape you can do something like this:
make masks from the background and the shape and use bitwise methods. Change this section according to the software you wrote; this is just an example :)
mskMix = cv2.bitwise_and(mskBack, mskShape)
mskMix = cv2.bitwise_xor(mskMix, mskShape)
isCandidate = not np.any(mskMix == 255)
For example, this is not a candidate answer: if you look closely at the image on the right, you will notice that the shape has exceeded the background.
I just tested the circle with 4 different backgrounds; these are the results:
After 4879 Iterations:
After 1587 Iterations:
After 4621 Iterations:
After 4574 Iterations:
A few additional points. If you use a method like medianBlur to remove the noise in the background mask and the shape mask, you may find a better solution.
I suggest you read about Evolutionary Computation, Metaheuristic and Soft Computing algorithms for a better understanding of this kind of algorithm :)
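Finally, putting the code pieces above together, the main search loop might look roughly like this (a sketch only; random_transform is a placeholder for the warpPerspective snippet above, mskBack is a binary mask of the background shape, and getSize is the helper defined earlier):

import cv2
import numpy as np

best = None  # (pixels, width, height, transformed image)
for i in range(5000):
    t = random_transform(im2)                    # placeholder for the random warpPerspective step
    gray = cv2.cvtColor(t, cv2.COLOR_BGR2GRAY)
    mskShape = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]

    # reject candidates whose shape leaks outside the background
    mskMix = cv2.bitwise_and(mskBack, mskShape)
    mskMix = cv2.bitwise_xor(mskMix, mskShape)
    if np.any(mskMix == 255):
        continue

    pix, w, h = getSize(mskShape)
    if best is None or (pix, w, h) > best[:3]:   # one possible way to rank candidates
        best = (pix, w, h, t)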

How to add an image over another image using x,y coordinates?

I want to add image 'abc.jpg' onto 'xyz.jpg' using OpenCV and Python. I have the coordinates x, y at which I have to add the image, and I have also resized 'abc.jpg' so that it will fit onto the image. Now how can I add it?
To computers, images are just grids of numbers, and there are a few ways to 'add' two grids. In this answer, I will explain three ways to add image 'abc' onto image 'xyz'. Addition is a very simple task, a + b = c, but that only works if the images are the same shape. To work with images of different shapes, only part of the larger image should be modified, using slicing of the form image[y: y+height, x: x+width].
To start, let's take a look at the sample images I created. Image xyz has vertical bars and a shape of 600,600. The bars are the color 123 (where 0 is black and 255 is white).
Next, I created another image to add on top of image xyz. This image is called image abc. It has a shape of 300,300. The horizontal bars are also the color 123:
You can 'add' the images by replacing the pixels in the xyz image with the pixels in the abc image:
x, y = 123, 123
abc_size = abc.shape[0]  # abc is square (300 x 300)
replace = xyz.copy()
replace[y: y + abc_size, x: x + abc_size] = abc
cv2.imshow('replace', replace)
You can 'add' the images by summing the arrays. This results in an image that is brighter in places than either source image. Note that summing uint8 arrays produces odd results (wrap-around) wherever the values go outside the range (0, 255).
x,y = 123,123
added = xyz.copy()
added[y: y + abc_size, x: x + abc_size] += abc
cv2.imshow('added', added)
If you want to average the pixels in the images you can use the cv2.addWeighted() function.
background = np.zeros_like(xyz)
x,y = 123,123
background[y: y + abc_size, x: x + abc_size] = abc
add_weighted = cv2.addWeighted(background, .5, xyz, .5, 1)
cv2.imshow('add_weighted', add_weighted)

opencv: negative result of matchTemplate with CCOEFF_NORMED

I have several app icons resized to 36*36 and I want to compute the similarity between any two of them. I made them black and white with the OpenCV function threshold. Following instructions from other questions, I apply matchTemplate with method TM_CCOEFF_NORMED on two icons but get a negative result, which confuses me.
Based on the docs there should not be any negative numbers in the result array. Could anyone explain why I get a negative number, and does this negative value make sense?
(I spent an hour failing to edit this post because of a code-indentation error, even after removing all the code from the edit. That's crazy.) I have tried both the grayscale and the black & white versions of the icons. When two icons are quite different, I always get a negative result.
If I use the original icons at size 48*48, things go well. I don't know whether it is related to my resize step.
import cv2

# read in pics
im1 = cv2.imread('./app_icon/pacrdt1.png')
im1g = cv2.resize(cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)
im2 = cv2.imread('./app_icon/pacrdt2.png')
im2g = cv2.resize(cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)
im3 = cv2.imread('./app_icon/mny.png')
im3g = cv2.resize(cv2.cvtColor(im3, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)
# black & white conversion
(thresh1, bw1) = cv2.threshold(im1g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
(thresh2, bw2) = cv2.threshold(im2g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
(thresh3, bw3) = cv2.threshold(im3g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# match template
templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED)[0][0]
templ_diff = 1 - templ_match
Sample:
Edit 2: I consider icons that differ only in background color or font color to be quite similar (a viewer can see that they are essentially the same, like images 1 and 2 in my sample). That is the reason why I input the icon pictures as black & white. I hope this makes sense.
This problem occurs because both the images are of the same size.
I tried out the same approach but using different image sizes. I used the following images:
Image 1: (125 x 108 pixels) Image
Image 2: (48 x 48 pixels) Template
When I ran the given code for these images it returned an array containing values where each value corresponds to how much the region (of Image) around a certain pixel matches with the template (Template).
Now when you execute cv2.minMaxLoc(templ_match) it returns 4 values:
minimum value: the score of the region of the image that matches the template the least
maximum value: the score of the region of the image that matches the template the most
minimum location: the position where the minimum value occurs
maximum location: the position where the maximum value occurs
This is what I got:
Out[32]: (-0.15977318584918976, 1.0, (40, 12), (37, 32))
                 min_val       max_val  min_loc  max_loc
This result is observed when the image and the template are of different sizes. In your case you have resized all the images to the same size, so the result array is 1x1 and you only get a single value, which is the first value of templ_match. Moreover, you should avoid doing templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED)[0][0].
Instead, perform templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED) and then obtain the maximum and minimum values along with their locations using cv2.minMaxLoc(templ_match).
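In code, that might look like this (a minimal sketch reusing the variable names from the question):

import cv2

res = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
similarity = max_val        # best match score, in the range [-1, 1]
templ_diff = 1 - similarity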

Eliminating various backgrounds from image and segmenting object?

Let's say I have this input image, with any number of boxes. I want to segment out these boxes so I can eventually extract them.
input image:
The background could be anything that is continuous, like a painted wall, a wooden table, or a carpet.
My idea was that the gradient would be roughly constant throughout the background, so I could zero out the areas of the image where the gradient is about the same.
Through edge detection, I would dilate and fill the regions where edges are detected. Essentially my goal is to make a blob of the areas where the boxes are. Having the blobs, I would know the exact location of the boxes and thus be able to crop them out of the input image.
So in this case, I should be able to get four blobs, and then crop four images out of the input image.
This is how far I got:
segmented image:
query = imread('AllFour.jpg');
gray = rgb2gray(query);
[~, threshold] = edge(gray, 'sobel');
weightedFactor = 1.5;
BWs = edge(gray,'roberts');
%figure, imshow(BWs), title('binary gradient mask');
se90 = strel('disk', 30);
se0 = strel('square', 3);
BWsdil = imdilate(BWs, [se90]);
%figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
What a very interesting problem! Here's my solution in an attempt to solve this problem for you. This is assuming that the background has the same colour distribution throughout. First, transform your image from RGB to the HSV colour space with rgb2hsv. The HSV colour space is an ideal transform for analyzing colours. After this, I would look at the saturation and value planes. Saturation is concerned with how "pure" the colour is, while value is the intensity or brightness of the colour itself. If you take a look at the saturation and value planes for the image, this is what is shown:
im = imread('http://i.stack.imgur.com/1SGVm.jpg');
out = rgb2hsv(im);
figure;
subplot(2,1,1);
imshow(out(:,:,2));
subplot(2,1,2);
imshow(out(:,:,3));
This is what I get:
By taking a look at some locations in the gray background, it looks like the majority of the saturation are less than 0.2 as well as the elements in the value plane are greater than 0.3. As such, we want to find the opposite of those pixels to get our objects. As such, we find those pixels whose saturation is greater than 0.2 or those pixels with a value that is less than 0.3:
seg = out(:,:,2) > 0.2 | out(:,:,3) < 0.3;
This is what we get:
Almost there! There are some spurious single pixels, so I'm going to perform an opening with imopen with a line structuring element.
After this, I'll perform a dilation with imdilate to close any gaps, then use imfill with the 'holes' option to fill in the gaps, then use erosion with imerode to shrink the shapes back to their original form. As such:
se = strel('line', 3, 90);
pre = imopen(seg, se);
se = strel('square', 20);
pre2 = imdilate(pre, se);
pre3 = imfill(pre2, 'holes');
final = imerode(pre3, se);
figure;
imshow(final);
final contains the segmented image with the 4 candy boxes. This is what I get:
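If you prefer to stay in OpenCV rather than MATLAB, a rough Python translation of the same saturation/value idea might look like this (a sketch only; the 0.2 and 0.3 thresholds from above are scaled to OpenCV's 0-255 S and V range, and the structuring-element sizes are approximate):

import cv2
import numpy as np

img = cv2.imread('AllFour.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
s, v = hsv[:, :, 1], hsv[:, :, 2]

# saturation > 0.2 or value < 0.3 (OpenCV stores S and V as 0-255)
seg = ((s > 0.2 * 255) | (v < 0.3 * 255)).astype(np.uint8) * 255

# open to remove speckles, dilate and fill holes, then erode back
seg = cv2.morphologyEx(seg, cv2.MORPH_OPEN,
                       cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3)))
se = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 20))
seg = cv2.dilate(seg, se)
contours, _ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(seg, contours, -1, 255, -1)   # fill holes by drawing external contours filled
final = cv2.erode(seg, se)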
Try resizing the image. When you make it smaller, it is easier to join edges. I tried what's shown below; you might have to tune it depending on the nature of the background.
close all;
clear all;
im = imread('1SGVm.jpg');
small = imresize(im, .25); % resize
grad = (double(imdilate(small, ones(3))) - double(small)); % extract edges
gradSum = sum(grad, 3);
bw = edge(gradSum, 'Canny');
joined = imdilate(bw, ones(3)); % join edges
filled = imfill(joined, 'holes');
filled = imerode(filled, ones(3));
imshow(label2rgb(bwlabel(filled))) % label the regions and show
If you have a recent version of MATLAB, try the Color Thresholder app in the image processing toolbox. It lets you interactively play with different color spaces, to see which one can give you the best segmentation.
If your candy covers are fixed, or you know all the covers that can appear in the scene, then template matching is a good fit, since it does not depend on the background in the image.
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
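A minimal sketch of that approach, following the linked tutorial (the template file name and the 0.8 score threshold are illustrative):

import cv2
import numpy as np

scene = cv2.imread('AllFour.jpg')
template = cv2.imread('cover.png')       # a cropped image of one known cover (hypothetical file)
th, tw = template.shape[:2]

res = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)            # keep every location scoring above 0.8
for x, y in zip(xs, ys):
    cv2.rectangle(scene, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 255, 0), 2)
cv2.imwrite('matches.jpg', scene)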

Automatic approach for removing colored object shadow on white background?

I am working on some leaf images using OpenCV (Java). The leaves are captured on white paper, and some have shadows like this one:
Of course, it's somewhat of an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while preserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on saturation channel?. The reason I ask is that on most of my images, working on L channel (from HLS) gives way better results (apart from the shadow, of course).
Update: Using the Hue channel makes thresholding better, but makes the shadow situation worse:
Update 2: The assumption that the shadow is darker than the leaf doesn't always hold, so working on intensities won't help. I'm looking more toward a color-channel approach.
I don't use OpenCV; instead I tried the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the corresponding processing functions for you. Please see my result below. I did all the operations on channel 3 and channel 1 of your original image.
First I used your channel 3 and thresholded it at 100 (top left). Then I removed the regions touching the image border and the regions smaller than 100 pixels, and filled in the hole in the leaf; the result is shown at the top right.
Next I used your channel 1 and did the same thing as with channel 3; the result is shown at the bottom left. Then I found the connected regions (there are only two, as you can see in the bottom-left figure) and removed the one with the smaller area (shown at the bottom right).
Suppose the top-right image is I1 and the bottom-right image is I; the leaf is extracted by computing ~I & I1. The leaf is:
Hope it helps. Thanks
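Since the question uses OpenCV, here is a rough OpenCV-Python sketch of the pipeline described above (the channel indices map MATLAB's channel 3 / channel 1 to B and R of OpenCV's BGR order; the threshold polarity and the border/area cleanup are my assumptions):

import cv2
import numpy as np

img = cv2.imread('leaf.jpg')                       # OpenCV loads BGR
blue, red = img[:, :, 0], img[:, :, 2]             # MATLAB's channel 3 / channel 1

def clean(mask, min_area=100):
    # drop components that touch the border or are smaller than min_area, then fill holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(1, n):
        x, y, bw, bh, area = stats[i]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if area >= min_area and not touches_border:
            out[labels == i] = 255
    cnts, _ = cv2.findContours(out, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(out, cnts, -1, 255, -1)        # fill holes
    return out

# polarity is a guess: keep the darker (leaf) pixels of each channel
I1 = clean(cv2.threshold(blue, 100, 255, cv2.THRESH_BINARY_INV)[1])
I = clean(cv2.threshold(red, 100, 255, cv2.THRESH_BINARY_INV)[1])
leaf_mask = cv2.bitwise_and(cv2.bitwise_not(I), I1)   # ~I & I1
cv2.imwrite('leaf_mask.jpg', leaf_mask)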
I tried two different things:
1. other thresholding on the saturation channel
2. trying to find two contours: shadow and leaf
I use C++, so your code snippets will look a little different.
Trying Otsu thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel, see the OpenCV documentation), thresholding that, and after an opening operator using findContours, giving something like this, which is not usable yet (gradient contour approach):
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds such as this. Once you've identified a good portion of the background (e.g. via thresholding, or as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh.jpg", thresh)

h, w = img.shape[:2]
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

# background should be 0, probable foreground = 3 (the mask must be uint8 for grabCut)
grabcut_mask = (thresh / 255 * 3).astype(np.uint8)
cv2.grabCut(img, grabcut_mask, (0, 0, w, h), bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask == 2) | (grabcut_mask == 0), 0, 1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img * grabcut_mask[..., None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow actually has high saturation levels, so is included in the grab cut deletion. (I would post images, but don't have enough reputation)
Usually, however, you can't trust shadows to be included in the background detection. In this case you probably want to compare areas in the image with the colour of the now-known background, using the chromacity distortion measure proposed by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure takes into account the fact that for desaturated colours, hue is not a relevant measure.
Note that the PDF of the preprint you find online has a mistake (missing + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al. (2012), equations 1 & 2. Or you can use my Python code below:
def brightness_distortion(I, mu, sigma):
    return np.sum(I * mu / sigma**2, axis=-1) / np.sum((mu / sigma)**2, axis=-1)

def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[..., None]
    return np.sqrt(np.sum(((I - alpha * mu) / sigma)**2, axis=-1))
You can feed the known background mean & stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter, which should show you that the shadow is basically the same chromacity as the background, and very different from the leaf. In the code below, I've then thresholded on chromacity, and done another grabcut pass. This works to remove the shadow even if the first grabcut pass doesn't (e.g. if you originally thresholded on hue)
mean, stdev = cv2.meanStdDev(img, mask=255 - thresh)
mean = mean.ravel()    # meanStdDev returns arrays of shape [3,1], not [3], so flatten them
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:, :, None]
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)

thresh2 = cv2.adaptiveThreshold(chrom255, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh2.jpg", thresh2)

grabcut_mask[...] = 3                                          # start with everything as probable foreground
grabcut_mask[thresh == 0] = 0                                  # where thresh == 0, definitely background
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2  # probable background (could try 2 or 0)
cv2.grabCut(img, grabcut_mask, (0, 0, w, h), bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask == 2) | (grabcut_mask == 0), 0, 1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[..., None] * img)
I'm afraid with the parameters I tried, this still removes the stalk, though. I think that's because GrabCut thinks that it looks a similar colour to the shadows. Let me know if you find a way to keep it.
