Color edge detection + opencv - image-processing

The Canny edge detector requires a grayscale image as input...
Is there a direct color edge detector function in OpenCV? Or is it the same if I convert to grayscale and use Canny?
I ask because I need the edge map of a color image for further processing: I need to find all the horizontal and vertical line segments in a color image, so I was thinking of first computing all the edges of the image.
Can someone suggest how I should proceed?

Matthias Odisio is correct; thank you, you even corrected me and explained the reason very well. The solution, then, would be to perform edge detection on each colour channel:
Image<Bgr, Byte> img = new Image<Bgr, Byte>(open.FileName);
Image<Bgr, Byte> Result = new Image<Bgr, Byte>(img.Size);
Result[0] = img[0].Canny(new Gray(10), new Gray(60));
Result[1] = img[1].Canny(new Gray(10), new Gray(60));
Result[2] = img[2].Canny(new Gray(10), new Gray(60));
Hope this helps,
Chris
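If what you ultimately need is a single edge map for extracting the horizontal and vertical line segments, one option is to OR the per-channel edge images together and then run a probabilistic Hough transform on the result. Here is a rough Python/OpenCV sketch of that idea (the file name, Canny thresholds, Hough parameters and angle tolerances are all placeholders you will need to tune):
import cv2
import numpy as np

img = cv2.imread('input.png')                      # BGR image
b, g, r = cv2.split(img)

# Edge map per channel, then merge them into one map
edges = cv2.bitwise_or(cv2.Canny(b, 10, 60),
                       cv2.bitwise_or(cv2.Canny(g, 10, 60), cv2.Canny(r, 10, 60)))

# Probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=30, maxLineGap=5)

horizontal, vertical = [], []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
        if angle < 10 or angle > 170:      # roughly horizontal
            horizontal.append((x1, y1, x2, y2))
        elif 80 < angle < 100:             # roughly vertical
            vertical.append((x1, y1, x2, y2))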

Related

Finding largest blob in image

I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed in EmguCV 3.0? I get an exception every time I try to use it, and I haven't found many recent/relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference from the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary.PyrDown().PyrUp(); // Smoothen a little bit
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply inverse suppression
// Now, copy pixels from original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);
VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with just a few white lines. I've racked my brain back and forth and haven't been able to come up with anything. Any pointers would be much appreciated!
I think you need to find which contour has the largest area by calling ContourArea on each of the contours.
After you find the largest contour you need to fill it using FillPoly (because the contour is just the outline of the blob, not all the pixels inside it) and create a mask that has the leaf pixels set to 1 and everything else set to 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image:
Hope this will be helpful enough.
import cv2
import numpy as np

# Read image (OpenCV loads it in BGR channel order)
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)

# Do some denoising on the red channel (the red channel gave a better result
# than the gray image because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)

# Threshold image (Otsu picks the threshold automatically)
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the largest contour and extract it
im, contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour

# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)

# Use mask to crop data from original image
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)
cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
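For reference, a minimal Python/OpenCV sketch of that idea (the file name is a placeholder); passing the THRESH_OTSU flag makes threshold compute the split between the two classes itself and return it alongside the mask:
import cv2

gray = cv2.imread('leaf.jpg', cv2.IMREAD_GRAYSCALE)
# Otsu computes the threshold value; the returned mask separates foreground from background
otsu_value, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)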

Converting contours found using EMGU.CV

I am new to EMGU.CV and I am struggling a bit. Let me start by giving some background on the project: I am trying to track a user's fingers, i.e. calculate the user's fingertips. I have written code which filters the depth information to a certain range and generates a Bitmap image, tempBitmap; I then convert this image to a grayscale image using EMGU.CV so it can be used by cvCanny. Once this is done I apply a dilate filter to the Canny image to thicken the outline of the hand and improve the chance of generating a successful contour, and then I try to get the contours of the hand. What I have managed to do is draw a box around the hand, but I am struggling to find a way to convert the contours generated by FindContours into a set of Points I can use to draw the contour with. The variable depthImage2 is a Bitmap I draw on before assigning it to the PictureBox on my C# form-based application. Any information or guidance you can provide will be greatly appreciated; and if my code isn't correct, maybe guide me in a direction where I can calculate the fingertips. I think I am almost there and am just missing something, so any help of any kind will be appreciated.
Image<Bgr, Byte> currentFrame = new Image<Bgr, Byte>(tempBitmap);
Image<Gray, Byte> grayImage = currentFrame.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyImage = new Image<Gray, Byte>(grayImage.Size);
CvInvoke.cvCanny(grayImage, cannyImage, 10, 60, 3);
StructuringElementEx kernel = new StructuringElementEx(
    3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_ELLIPSE);
CvInvoke.cvDilate(cannyImage, cannyImage, kernel, 1);
IntPtr cont = IntPtr.Zero;
Graphics graphicsBitmap = Graphics.FromImage(depthImage2);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
    for (Contour<Point> contours =
             cannyImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                                     Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
         contours != null; contours = contours.HNext)
    {
        IntPtr seq = CvInvoke.cvConvexHull2(contours, storage.Ptr, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE, 0);
        IntPtr defects = CvInvoke.cvConvexityDefects(contours, seq, storage);
        Seq<Point> tr = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        Seq<Emgu.CV.Structure.MCvConvexityDefect> te = contours.GetConvexityDefacts(
            storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        graphicsBitmap.DrawRectangle(
            new Pen(new SolidBrush(Color.Red)), tr.BoundingRectangle);
    }
Contour<Point> contours = cannyImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE); // to return all points
then:
List<Point[]> convertedContours = new List<Point[]>();
while (contours != null)
{
    var contourPoints = contours.ToArray(); // converts the Seq<Point> to Point[]; ToList() is also available
    convertedContours.Add(contourPoints);
    contours = contours.HNext;
}
You can draw a contour with the image's Draw function overload; just find the signature that takes a Seq<> parameter.
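If it helps, here is the same hull-and-defects pipeline sketched in Python/OpenCV (the mask file name and the defect-depth cutoff are assumptions): fingertip candidates come from the convex hull points, and the convexity defects mark the valleys between fingers.
import cv2
import numpy as np

# Binarised depth image of the hand (placeholder file name)
mask = cv2.imread('hand_mask.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.dilate(cv2.Canny(mask, 10, 60), np.ones((3, 3), np.uint8))

# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)              # assume the largest contour is the hand

hull_idx = cv2.convexHull(hand, returnPoints=False)    # hull as indices into the contour
defects = cv2.convexityDefects(hand, hull_idx)

fingertips = [tuple(hand[i][0]) for i in hull_idx[:, 0]]   # hull points = fingertip candidates
valleys = []
if defects is not None:
    for start, end, far, depth in defects[:, 0]:
        if depth > 10000:                              # depth is fixed-point (distance * 256)
            valleys.append(tuple(hand[far][0]))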

Approximating a contour with rotated rectangles

After some color detection and binary thresholding, I use the following code to find the contours and draw them onto the image:
using (MemStorage stor = new MemStorage())
{
    Contour<Point> contours = img.FindContours(
        Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST,
        stor);
    for (; contours != null; contours = contours.HNext)
    {
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * poly, stor);
        img.Draw(currentContour, new Bgr(255, 255, 255), 1);
        Rectangle currentrect = currentContour.BoundingRectangle;
        img.Draw(currentrect, new Bgr(255, 255, 255), 2);
    }
}
My problem is, as I expected, that if the contour is a rectangle but is rotated in the image, the bounding rectangle does not change its orientation to fit the rotation. Is there another way to accomplish this? Any help would be greatly appreciated.
Yes, there is another way to accomplish this. You can use:
contour.GetConvexHull(ORIENTATION.CV_CLOCKWISE);
Then, using Moments, you can easily get the orientation and adjust the rectangle accordingly.
The method you are looking for is:
PointCollection.MinAreaRect(points);
A worked example is here:
http://www.emgu.com/wiki/index.php/Minimum_Area_Rectangle_in_CSharp
Complete documentation (which has little more than the above) is located here:
http://www.emgu.com/wiki/files/2.4.0/document/html/0d5fd148-0afb-fdbf-e995-6dace8c8848d.htm
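In raw OpenCV terms (a Python sketch; the file name and threshold are placeholders), MinAreaRect corresponds to minAreaRect, which returns the rotated rectangle as (center, size, angle); boxPoints converts that into the four corner points you can draw:
import cv2
import numpy as np

img = cv2.imread('blobs.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    rect = cv2.minAreaRect(c)                     # ((cx, cy), (w, h), angle) follows the rotation
    box = np.intp(cv2.boxPoints(rect))            # four corner points of the rotated rectangle
    cv2.drawContours(img, [box], 0, (255, 255, 255), 2)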

How to embed a watermark on an image using edges in MATLAB?

For a school project I would like to do the following steps to get a watermarked image in MATLAB:
extract the edges from an image
insert a mark on this edge
reconstruct the image
extract the mark
Could someone give me a link that gives a good idea of how to do it, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing?
If you have an image:
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
You can just resize the watermark to the size of the image:
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % This sets non-black pixels to be the watermark. (You'll have to slightly modify this for color images.)
If you want to put it on the edges of the image, you can extract them like this:
edges = edge(rgb2gray(img),'canny');
Then you can set the pixels where the edges exist to be watermark pixels:
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges ~= 0);
Instead of direct assignment you can play around with mixing the img and wm_rs pixel values if you want transparency.
You'll probably have to adjust some of what I said for color images, but most of it should be the same.
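The blend itself is just a per-pixel weighted average on the edge locations. If a Python/OpenCV rendering of that idea helps as a reference (the Canny thresholds and the 0.7/0.3 weights are arbitrary choices):
import cv2
import numpy as np

img = cv2.imread('myimage.jpg')
wm = cv2.resize(cv2.imread('watermark.jpg'), (img.shape[1], img.shape[0]))

# Edge mask of the original image (threshold values are a guess)
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200) > 0

# Weighted mix of image and watermark on the edge pixels only
img_wm = img.copy()
img_wm[edges] = (0.7 * img[edges] + 0.3 * wm[edges]).astype(np.uint8)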
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (R2017b or later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB,range [0,1]
position = [700,200];
%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
'BoxOpacity',0,...
'FontSize',200,...
'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask,...
'Transparency',Transparency,...
'Colormap',fontColor);
imshow(finalImg)

Find the Most Prominent Color From an Image Using Emgu CV

So, I have this image of a face:
http://i.stack.imgur.com/gsZnh.jpg
and I need to be able to determine the most dominant/prominent RGB and YCrCb values from it using Emgu CV. Thank you for the help.
You should first get the histogram of every color channel. Then you can use the MinMax function to find the most dominant value.
The code I'm posting is for an HSV image; you can change the channel names for your color space.
Image<Gray, Byte>[] channels = hsv1.Split();
Image<Gray, Byte> ImgHue = channels[0];
Image<Gray, Byte> ImgSat = channels[1];
Image<Gray, Byte> ImgVal = channels[2];
DenseHistogram histo1 = new DenseHistogram(255, new RangeF(0, 255));
histo1.Calculate<byte>(new Image<Gray, byte>[] { ImgHue }, true, null);
float minV, maxV;
int[] minL;
int[] maxL;
histo1.MinMax(out minV, out maxV, out minL, out maxL);
string mystr = Convert.ToString(maxL[0]);
label1.Text = "Hue= " + mystr;
You can do the same thing for Saturation and Value channels too.
You can use a histogram to find the distribution of colors and choose the bin with the highest count as the dominant color. I don't know the related functions in Emgu CV offhand, though. Good luck.
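If a plain OpenCV sketch helps (Python; the file name is a placeholder), the same idea is: build a 256-bin histogram per channel and take the bin with the highest count, once for BGR and once for YCrCb:
import cv2
import numpy as np

img = cv2.imread('face.jpg')                                # loaded in BGR order
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

def dominant_values(image):
    # For each channel, the histogram bin with the largest count is the dominant value
    return [int(np.argmax(cv2.calcHist([image], [c], None, [256], [0, 256])))
            for c in range(image.shape[2])]

print('Dominant B, G, R :', dominant_values(img))
print('Dominant Y, Cr, Cb:', dominant_values(ycrcb))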
