How to optimize this image iteration in numpy? [duplicate] - image-processing

I'm using this code to detect green color in an image.
The problem is that the pixel-by-pixel iteration is really slow.
How can I make it faster? If NumPy can help, how do I do it the NumPy way?
def convertGreen(rawimg):
    width, height, channels = rawimg.shape
    size = (w, h, channels) = (width, height, 1)
    processedimg = np.zeros(size, np.uint8)
    for wimg in range(0, width):
        for himg in range(0, height):
            blue = rawimg.item(wimg, himg, 0)
            green = rawimg.item(wimg, himg, 1)
            red = rawimg.item(wimg, himg, 2)
            exg = 2 * green - red - blue
            if exg > 50:
                processedimg.itemset((wimg, himg, 0), exg)
    return processedimg

I would go for something like this (untested):
def convertGreen(rawimg):
    # channels are in BGR order, as in the original loop
    blue, green, red = rawimg[:, :, 0], rawimg[:, :, 1], rawimg[:, :, 2]
    # cast before the arithmetic: 2*green - red - blue overflows uint8
    exg = 2 * green.astype(np.int16) - red - blue
    processedimg = exg.copy()
    processedimg[processedimg < 50] = 0
    return processedimg
The copy operation can actually be omitted, but I kept it to stay a bit more in line with the original code.
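If you want to reproduce the loop's semantics exactly (keep only values strictly greater than 50, zero elsewhere, returned as uint8), a minimal sketch with np.where — untested, and the function name is just illustrative:
import numpy as np

def convert_green_where(rawimg):
    blue, green, red = rawimg[:, :, 0], rawimg[:, :, 1], rawimg[:, :, 2]
    exg = 2 * green.astype(np.int16) - red - blue
    # keep only values strictly above the threshold, as the loop did;
    # like the original, values above 255 would wrap on the uint8 cast
    return np.where(exg > 50, exg, 0).astype(np.uint8)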
Note that general programming questions are actually off-topic here and are more suitable for Stack Overflow.

Related

How to detect contiguous images

I am trying to detect when two images are adjacent chunks of the same picture, with no overlap between them.
That is, suppose we have the Lenna image:
Someone unknown to me has split it vertically in two and I must know if both pieces are connected or not (assume that they are independent images or that one is a piece of the other).
A:
B:
The good news is that I know the order of the pieces; the bad news is that there may be other images, and I must determine which of them fit together and which do not.
My first idea was to check whether the MAE between the last row of A and the first row of B is low.
def mae(a, b):
    # try small horizontal shifts to tolerate slight misalignment
    min_mae = 256
    for i in range(-5, 6):
        a_s = np.roll(a, i, axis=1)
        # cast to a signed type: uint8 subtraction would wrap around
        value_mae = np.mean(np.abs(a_s.astype(np.int16) - b))
        min_mae = min(min_mae, value_mae)
    return min_mae

if mae(im_a[-1:, ...], im_b[:1, ...]) < threshold:
    # join images a and b
The problem is that it is not a very robust metric.
I have done the same using the horizontal derivative, as well as applying various smoothing filters, but I find myself in the same situation.
Is there a way to solve this problem?
Your method seems like a decent one. Even on visual inspection it looks reasonable:
Top (Bottom row expanded)
Bottom (Top row expanded)
Diff of the images:
It might be even clearer if you also checked neighboring columns (see the sketch after the code below), but this already looks like the images are similar enough.
Code
import cv2
import numpy as np

# load images
top = cv2.imread("top.png")
bottom = cv2.imread("bottom.png")

# convert to grayscale
tgray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
bgray = cv2.cvtColor(bottom, cv2.COLOR_BGR2GRAY)

# expand the facing rows into 100-pixel-tall strips for inspection
trow = np.zeros_like(tgray)
brow = np.zeros_like(bgray)
trow[:] = tgray[-1, :]  # last row of the top image
brow[:] = bgray[0, :]   # first row of the bottom image
trow = trow[:100, :]
brow = brow[:100, :]

# absolute difference (cv2.absdiff avoids uint8 wraparound)
diff = cv2.absdiff(trow, brow)

# show
cv2.imshow("top", trow)
cv2.imshow("bottom", brow)
cv2.imshow("diff", diff)
cv2.waitKey(0)

# save
cv2.imwrite("top_out.png", trow)
cv2.imwrite("bottom_out.png", brow)
cv2.imwrite("diff_out.png", diff)

Using map_coordinates to upscale an image

I am tasked with creating a low-resolution version of an image that keeps the same shape, by shrinking the image by a random factor (to lose data) and expanding it back. However, I cannot use any of the 'resize' methods such as those in scikit-image/OpenCV; I am only allowed to use scipy.ndimage.zoom and map_coordinates.
I've managed to do the following (im is a grayscale image):
factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0])
width_range = np.arange(0, im.shape[1])
col, row = np.meshgrid(width_range, height_range)
zoom_out = map_coordinates(input=zoomed_im, coordinates=[row, col])
However, I get the same zoomed-in image back, with the remaining pixels filled in black. I understand this is due to the default parameters of map_coordinates being:
mode='constant'
cval = 0.0
How can I enlarge the image back using interpolation to the same original shape?
You can use a different step size in np.arange():
factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0]*factor, step=factor)
width_range = np.arange(0, im.shape[1]*factor, step=factor)
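Putting it together, a minimal end-to-end sketch (untested; because of floating-point arange the sample count can be off by one, and the outermost samples can fall just outside the zoomed image, so the ranges are trimmed and mode='nearest' is used):
import numpy as np
from scipy import ndimage
from scipy.ndimage import map_coordinates

def degrade(im):
    # shrink by a random factor, then sample back up to the original shape
    factor = np.random.uniform(0.25, 1)
    zoomed_im = ndimage.zoom(im, factor)
    height_range = np.arange(0, im.shape[0] * factor, step=factor)[:im.shape[0]]
    width_range = np.arange(0, im.shape[1] * factor, step=factor)[:im.shape[1]]
    col, row = np.meshgrid(width_range, height_range)
    return map_coordinates(zoomed_im, [row, col], mode='nearest')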

vips - How to achieve edge feather effect

I'm using the vips library for manipulating some images, specifically its Lua binding, lua-vips, and I'm trying to find a way to do a feather effect on the edge of an image.
It's the first time I've tried a library for this kind of task; I've been looking at the list of available functions, but I still have no idea how to do it. It's not a complex shape, just a basic rectangular image whose top and bottom edges should blend smoothly with the background (another image that I'm currently using vips_composite() on).
Supposing that a "feather_edges" method existed, it would be something like:
local bg = vips.Image.new_from_file("foo.png")
local img = vips.Image.new_from_file("bar.png") --smaller than `bg`
img = img:feather_edges(6) --imagine a 6px feather
bg:composite(img, 'over')
But still it would be nice to specify what parts of the image should be feathered. Any ideas on how to do it?
You need to pull the alpha out of the top image, mask off the edges with a black border, blur the alpha to feather the edges, reattach, then compose.
Something like:
#!/usr/bin/luajit

vips = require 'vips'

function feather_edges(image, sigma)
    -- split to alpha + image data
    local alpha = image:extract_band(image:bands() - 1)
    local image = image:extract_band(0, {n = image:bands() - 1})

    -- we need to place a black border on the alpha we can then feather into,
    -- and scale this border with sigma
    local margin = sigma * 2
    alpha = alpha
        :crop(margin, margin,
              image:width() - 2 * margin, image:height() - 2 * margin)
        :embed(margin, margin, image:width(), image:height())
        :gaussblur(sigma)

    -- and reattach
    return image:bandjoin(alpha)
end

bg = vips.Image.new_from_file(arg[1], {access = "sequential"})
fg = vips.Image.new_from_file(arg[2], {access = "sequential"})
fg = feather_edges(fg, 10)
out = bg:composite(fg, "over", {x = 100, y = 100})
out:write_to_file(arg[3])
As jcupitt said, we need to pull the alpha band from the image, blur it, join it again, and composite the result with the background; but using the function as it was left a thin black border around the foreground image.
To overcome that, we copy the image, shrink the copy according to the sigma parameter, extract the alpha band from the reduced copy, blur it, and replace the alpha band of the original image with it. This way, the border of the original image is completely covered by the transparent parts of the new alpha.
local function featherEdges(img, sigma)
    local copy = img:copy()
        :resize(1, { vscale = (img:height() - sigma * 2) / img:height() })
        :embed(0, sigma, img:width(), img:height())
    local alpha = copy
        :extract_band(copy:bands() - 1)
        :gaussblur(sigma)
    return img
        :extract_band(0, { n = img:bands() - 1 })
        :bandjoin(alpha)
end
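For reference, the same idea translates almost line for line to Python with pyvips (a sketch assuming pyvips is installed; the file names and composite offsets are placeholders):
import pyvips

def feather_edges(image, sigma):
    # split into alpha + image data
    alpha = image.extract_band(image.bands - 1)
    rgb = image.extract_band(0, n=image.bands - 1)
    # black border on the alpha, scaled with sigma, then blurred to feather
    margin = sigma * 2
    alpha = alpha.crop(margin, margin,
                       image.width - 2 * margin, image.height - 2 * margin)
    alpha = alpha.embed(margin, margin, image.width, image.height)
    alpha = alpha.gaussblur(sigma)
    return rgb.bandjoin(alpha)

bg = pyvips.Image.new_from_file("bg.png", access="sequential")
fg = pyvips.Image.new_from_file("fg.png", access="sequential")
fg = feather_edges(fg, 10)
bg.composite(fg, "over", x=100, y=100).write_to_file("out.png")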

Improving detection of the orange colour in MATLAB

One of my tasks is to detect certain colours in ant colonies across 16000 images. I've already done this quite well for blue, pink, and green, but now I need to improve detection of the orange colour. It's a bit tricky for me, since I am new to the field of image processing. Below are some examples of what I have done and where my problem lies.
Raw image:http://img705.imageshack.us/img705/2257/img4263u.jpg
Detection of the orange colour:http://img72.imageshack.us/img72/8197/orangedetection.jpg
Detection of the green colour:http://img585.imageshack.us/img585/1347/greendetection.jpg
I used selectPixelsAndGetHSV.m to get the HSV value, and after that I used colorDetectHSV.m to detect pixels with the same HSV value.
Could you give me any suggestions on how to improve detection of the orange colour without also detecting whole ants and the brood around them?
Thank you in advance!
function [K] = colorDetectHSV(RGB, hsvVal, tol)
    HSV = rgb2hsv(RGB);

    % find the difference between required and real H value:
    diffH = abs(HSV(:,:,1) - hsvVal(1));

    [M,N,t] = size(RGB);
    I1 = zeros(M,N); I2 = zeros(M,N); I3 = zeros(M,N);

    T1 = tol(1);
    I1( find(diffH < T1) ) = 1;

    if (length(tol) > 1)
        % find the difference between required and real S value:
        diffS = abs(HSV(:,:,2) - hsvVal(2));
        T2 = tol(2);
        I2( find(diffS < T2) ) = 1;

        if (length(tol) > 2)
            % find the difference between required and real V value:
            diffV = abs(HSV(:,:,3) - hsvVal(3));
            T3 = tol(3);
            I3( find(diffV < T3) ) = 1;
            I = I1.*I2.*I3;
        else
            I = I1.*I2;
        end
    else
        I = I1;
    end

    K = ~I;

    figure;
    subplot(2,1,1), imshow(RGB); title('Original Image');
    subplot(2,1,2), imshow(~I,[]); title('Detected Areas');
You don't show what you are using as target HSV values. These may be the problem.
In the example you provided, a lot of areas are wrongly selected whose hue ranges from 30 to 40. These areas correspond to ants body parts. The orange parts you want to select actually have a hue ranging from approximately 7 to 15, and it shouldn't be difficult to differentiate them from ants.
Try adjusting your target values (especially hue) and you should get better results. Actually you can also probably disregard brightness and saturation, hue seems to be sufficient in this case.
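To illustrate plain hue thresholding (a sketch only, not the answerer's code; it uses Python/OpenCV, where hue lives in [0, 179] rather than MATLAB's [0, 1], and the bounds and file name are placeholders to be tuned against the actual images):
import cv2
import numpy as np

img = cv2.imread("ants.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# illustrative bounds for an orange hue band; tune against real data
lower = np.array([4, 80, 80])
upper = np.array([12, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

cv2.imshow("detected orange", cv2.bitwise_and(img, img, mask=mask))
cv2.waitKey(0)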

How to embed a watermark on an image using edges in MATLAB?

In a school project, I would like to do the following steps to produce a watermarked image in MATLAB:
extract the edges from an image
insert a mark on these edges
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do this, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing?
If you have an image:
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
You can just resize the watermark to the size of the image
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % set non-black pixels to the watermark
% (you'll have to slightly modify this for color images)
If you want to put it on the edges of the image, you can extract them like this
edges = edge(rgb2gray(img), 'canny');
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment, you can play around with mixing the img and wm_rs pixel values if you want transparency, as in the sketch below.
You'll probably have to adjust some of this for color images, but most of it should be the same.
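A quick illustration of that blending idea (a sketch in Python/OpenCV rather than MATLAB; the file names, Canny thresholds, and 0.5 opacity are placeholders):
import cv2
import numpy as np

img = cv2.imread("myimage.jpg", cv2.IMREAD_GRAYSCALE)
wm = cv2.imread("watermark.jpg", cv2.IMREAD_GRAYSCALE)
wm_rs = cv2.resize(wm, (img.shape[1], img.shape[0]))

edges = cv2.Canny(img, 100, 200)      # edge mask, values 0 or 255
blend = cv2.addWeighted(img, 0.5, wm_rs, 0.5, 0)

img_wm = img.copy()
img_wm[edges > 0] = blend[edges > 0]  # blend only on edge pixels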
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (requires R2017b or a later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB, range [0,1]
position = [700,200];

%% add watermark
mask = zeros(size(img), 'like', img);
outimg = insertText(mask, position, 'china', ...
    'BoxOpacity', 0, ...
    'FontSize', 200, ...
    'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img, bwMask, ...
    'Transparency', Transparency, ...
    'Colormap', fontColor);
imshow(finalImg)
