How to embed a watermark in an image using edges in MATLAB? - image-processing

In a school project I would like to do the following steps to get a watermarked image in MATLAB:
extract the edges from an image
insert a mark on these edges
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do it, or help me do it?
Thank you in advance

You want to add a watermark to an image? Why not just overlay the whole thing?
If you have an image
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
you can just resize the watermark to the size of the image
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % This sets non-black pixels to be the watermark. (You'll have to slightly modify this for color images.)
If you want to put it on the edges of the image, you can extract them like this
edges = edge(rgb2gray(img), 'canny');
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment you can blend the img and wm_rs pixel values if you want transparency, e.g. img_wm(edges ~= 0) = 0.7*img(edges ~= 0) + 0.3*wm_rs(edges ~= 0).
You'll probably have to adjust some of this for color images, but most of it should be the same.

Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/

See the example below (R2017b or later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB,range [0,1]
position = [700,200];
%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
    'BoxOpacity',0, ...
    'FontSize',200, ...
    'TextColor','white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask, ...
    'Transparency',Transparency, ...
    'Colormap',fontColor);
imshow(finalImg)

Related

Using map_coordinates to upscale an image

I am tasked with creating a low-resolution version of an image with the same shape, by reducing the image's shape randomly (to lose data) and expanding it back. However, I cannot use any of the 'resize' methods such as those in scikit-image/OpenCV, and am only allowed to use scipy.ndimage.zoom and map_coordinates.
I've managed to do the following (im is a grayscale image):
import numpy as np
from scipy import ndimage
from scipy.ndimage import map_coordinates

factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0])
width_range = np.arange(0, im.shape[1])
col, row = np.meshgrid(width_range, height_range)
zoom_out = map_coordinates(input=zoomed_im, coordinates=[row, col])
However, I get the same zoomed-in image with the rest of the pixels filled in as black. I understand this is due to the default parameters of map_coordinates being:
mode='constant'
cval = 0.0
How can I enlarge the image back using interpolation to the same original shape?
You can use a different step size in np.arange():
factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0]*factor, step=factor)
width_range = np.arange(0, im.shape[1]*factor, step=factor)
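Putting the two pieces together, here is a minimal sketch of the full round trip (the random test image and the interpolation order are illustrative assumptions; multiplying an integer range by factor avoids the occasional off-by-one that a float step in np.arange can produce):

import numpy as np
from scipy import ndimage
from scipy.ndimage import map_coordinates

im = np.random.rand(128, 96)           # stand-in for your grayscale image

factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)    # shrink to lose data

# sample the small image on a grid spaced by `factor`,
# so the output has the original shape again
height_range = np.arange(0, im.shape[0]) * factor
width_range = np.arange(0, im.shape[1]) * factor
col, row = np.meshgrid(width_range, height_range)

restored = map_coordinates(zoomed_im, [row, col], order=1, mode='nearest')
assert restored.shape == im.shape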

How to rotate a non-square image in the frequency domain

I want to rotate an image in the frequency domain. Inspired by the answers in Image rotation and scaling the frequency domain?, I managed to rotate square images. (See the following Python script using OpenCV.)
import cv2
import numpy as np
from numpy.fft import fftshift, ifftshift

angle = np.deg2rad(30)  # example rotation angle (radians)
M = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)  # single channel, since cv2.dft needs 1 or 2 channels
M = np.float32(M)
hanning = cv2.createHanningWindow((M.shape[1], M.shape[0]), cv2.CV_32F)
M = hanning * M
sM = fftshift(M)
rotation_center = (M.shape[1]/2, M.shape[0]/2)
rot_matrix = cv2.getRotationMatrix2D(rotation_center, np.degrees(angle), 1.0)  # expects degrees
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, rot_matrix, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
This works fine with square images. (Better results could be achieved by padding the image.)
However, when only using a non-square portion of the image, the rotation in the frequency domain shows some kind of shearing effect.
Any idea on how to achieve this? Obviously I could pad the image to make it square, however the final purpose of all this is to rotate FFTs as fast as possible for an iterative image registration algorithm, and this would slightly slow down the algorithm.
Following the suggestion of @CrisLuengo, I found the affine transform needed to avoid padding the image. Obviously it will depend on the image size and the application, but for my case avoiding the padding is very interesting.
The modified script now looks like:
#rot_matrix=cv2.getRotationMatrix2D(rotation_center,angle,1.0)
kx = 1.0
ky = 1.0
if M.shape[0] > M.shape[1]:
    kx = float(M.shape[0]) / M.shape[1]
else:
    ky = float(M.shape[1]) / M.shape[0]
affine_transform = np.zeros([2, 3])
affine_transform[0, 0] = np.cos(angle)
affine_transform[0, 1] = np.sin(angle)*ky/kx
affine_transform[0, 2] = (1-np.cos(angle))*rotation_center[0]-ky/kx*np.sin(angle)*rotation_center[1]
affine_transform[1, 0] = -np.sin(angle)*kx/ky
affine_transform[1, 1] = np.cos(angle)
affine_transform[1, 2] = kx/ky*np.sin(angle)*rotation_center[0]+(1-np.cos(angle))*rotation_center[1]
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, affine_transform, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))

How to mark and plot boundaries using thresholding on an image using skimage

I have an image of these coins.
I have tried the thresholding algorithms in skimage.filters on the grayscale version of it. I want to know how I can find the lines around these coins and plot contours over the coins in the original image.
I have used
from skimage import io, color, filters

img = io.imread('coins.jpg')
img = color.rgb2gray(img)
f, ax = filters.try_all_threshold(img, verbose=False, figsize=(8,15))
_ = ax[0].set_title('Grayscale version of Original')
There are quite a few different ways to find the outer boundaries of the coins, but which method to pick depends on what you want to achieve with the result.
@Belal Homaidan has already pointed out one such solution. You can also use a Canny edge detector and then apply a Hough transform. Here's an example:
Circular and Elliptical Hough Transforms
EDIT:
The most relevant part for your question is perhaps the following:
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius,
                                    shape=image.shape)
    image[circy, circx] = (220, 20, 20)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
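For completeness, here is a minimal sketch of where cy, cx and radii come from in that example (the file name, Canny sigma, radius range and peak count are illustrative assumptions you would tune to your image):

import numpy as np
from skimage import io, color, feature
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.draw import circle_perimeter

image = color.rgb2gray(io.imread('coins.jpg'))
edges = feature.canny(image, sigma=2)

# search over a range of candidate radii (in pixels)
hough_radii = np.arange(20, 60, 2)
hough_res = hough_circle(edges, hough_radii)

# keep the strongest peaks; each peak gives a centre (cx, cy) and a radius
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=10)

The drawing loop above can then be used unchanged to overlay the detected circles on the original image.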

vips - How to achieve edge feather effect

I'm using the vips library for manipulating some images, specifically its Lua binding, lua-vips, and I'm trying to find a way to apply a feather effect to the edges of an image.
It's the first time I've tried a library for this kind of task and I've been looking at this list of available functions, but I still have no idea how to do it. It's not a complex shape, just a basic rectangular image whose top and bottom edges should blend smoothly with the background (another image that I'm currently using vips_composite() on).
Supposing that a "feather_edges" method existed, it would be something like:
local bg = vips.Image.new_from_file("foo.png")
local img = vips.Image.new_from_file("bar.png") --smaller than `bg`
img = img:feather_edges(6) --imagine a 6px feather
bg:composite(img, 'over')
But it would still be nice to be able to specify which parts of the image should be feathered. Any ideas on how to do it?
You need to pull the alpha out of the top image, mask off the edges with a black border, blur the alpha to feather the edges, reattach, then compose.
Something like:
#!/usr/bin/luajit

vips = require 'vips'

function feather_edges(image, sigma)
    -- split to alpha + image data
    local alpha = image:extract_band(image:bands() - 1)
    local image = image:extract_band(0, {n = image:bands() - 1})

    -- we need to place a black border on the alpha we can then feather into,
    -- and scale this border with sigma
    local margin = sigma * 2
    alpha = alpha
        :crop(margin, margin,
              image:width() - 2 * margin, image:height() - 2 * margin)
        :embed(margin, margin, image:width(), image:height())
        :gaussblur(sigma)

    -- and reattach
    return image:bandjoin(alpha)
end

bg = vips.Image.new_from_file(arg[1], {access = "sequential"})
fg = vips.Image.new_from_file(arg[2], {access = "sequential"})
fg = feather_edges(fg, 10)
out = bg:composite(fg, "over", {x = 100, y = 100})
out:write_to_file(arg[3])
As jcupitt said, we need to pull the alpha band from the image, blur it, join it again and composite it with the background; but using the function as it was left a thin black border around the foreground image.
To overcome that, we need to copy the image, resize it according to the sigma parameter, extract the alpha band from the reduced copy, blur it, and replace the alpha band of the original image with it. This way, the border of the original image will be completely covered by the transparent parts of the alpha.
local function featherEdges(img, sigma)
    local copy = img:copy()
        :resize(1, { vscale = (img:height() - sigma * 2) / img:height() })
        :embed(0, sigma, img:width(), img:height())
    local alpha = copy
        :extract_band(copy:bands() - 1)
        :gaussblur(sigma)
    return img
        :extract_band(0, { n = img:bands() - 1 })
        :bandjoin(alpha)
end

Finding largest blob in image

I am having some issues extracting a blob from an image using Emgu CV. Everything I see online uses the Contours object, but I guess that was removed from Emgu CV 3.0? I get an exception every time I try to use it. I haven't found many recent/relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference from the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary.PyrDown().PyrUp(); // Smoothen a little bit
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply inverse suppression
// Now, copy pixels from original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);
VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with a tiny bit of white lines. I've racked my brain back and forth and haven't been able to come up with something. Any pointers would be much appreciated!
I think that you need to find which contour has the largest area, using ContourArea on each one of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob and not all the pixels in it) using FillPoly, and create a mask that has the leaf pixels with value 1 and everything else 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image:
Hope this will be helpful enough.
import cv2
import numpy as np

# Read image (OpenCV loads channels in BGR order)
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)

# Do some denoising on the red channel (the red channel gave a better result than gray because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)

# Threshold image
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the largest contour and extract it
im, contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour

# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)

# Use mask to crop data from original image
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)
cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
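A minimal sketch of that idea in Python (the file name and the grayscale conversion are illustrative assumptions):

import cv2

img = cv2.imread('leaf.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu picks the threshold automatically; the chosen value is returned in `ret`
ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# keep only the foreground pixels
foreground = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('leaf_foreground.png', foreground)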
