I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed in EmguCV 3.0? I get an exception every time I try to use it, and I haven't found many SO topics on this that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference from the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary = imgBinary.PyrDown().PyrUp(); // Smooth a little (the result must be assigned back)
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply an inverse binary threshold
// Now, copy pixels from original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);
VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with just a few thin white lines. I've racked my brain back and forth and haven't been able to come up with anything. Any pointers would be much appreciated!
I think you need to find which contour has the largest area, using ContourArea on each of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob, not all the pixels inside it) using FillPoly, and create a mask that has the leaf pixels set to 1 and everything else set to 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I've attached Python code using OpenCV to give you some help.
Hope this will be helpful enough.
import cv2
import numpy as np
# Read image
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)  # cv2.imread loads BGR, so unpack in B, G, R order
# Denoise the red channel (it gave a better result than grey because it has more contrast)
Rfilter = cv2.bilateralFilter(R,25,25,10)
# Threshold image
ret, Ithres = cv2.threshold(Rfilter,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Find the largest contour and extract it
im, contours, hierarchy = cv2.findContours(Ithres,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE )
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour
# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask,[maxContourData],1)
# Use mask to crop data from original image
finalImage = np.zeros_like(Irgb)
finalImage[:,:,0] = np.multiply(B,mask)
finalImage[:,:,1] = np.multiply(G,mask)
finalImage[:,:,2] = np.multiply(R,mask)
cv2.imshow('final',finalImage)
cv2.waitKey(0)  # needed for the window to actually render
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
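For illustration, here is a minimal Python/OpenCV sketch of that idea (the filename is a placeholder, and the inverse flag assumes a dark leaf on a light background, as above):

import cv2

# Load the image as greyscale ('leaf.jpg' is a placeholder filename)
img = cv2.imread('leaf.jpg', cv2.IMREAD_GRAYSCALE)

# Otsu picks the threshold automatically; the 0 passed here is ignored
ret, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# mask is now 255 on the foreground and 0 on the background
foreground = cv2.bitwise_and(img, img, mask=mask)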
Related
I was trying to create a trackbar window and get the HSV values of an image by adjusting the trackbars: I created a mask and then adjusted the trackbars to detect an object in the HSV image.
def nothing(x):
    pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH","Tracking",0,255,nothing)
cv.createTrackbar("LS","Tracking",0,255,nothing)
cv.createTrackbar("LV","Tracking",0,255,nothing)
cv.createTrackbar("UH","Tracking",255,255,nothing)
cv.createTrackbar("US","Tracking",255,255,nothing)
cv.createTrackbar("UV","Tracking",255,255,nothing)

while True:
    frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
    hsv = cv.cvtColor(frame,cv.COLOR_BGR2HSV)

    l_h = cv.getTrackbarPos("LH","Tracking")
    l_s = cv.getTrackbarPos("LS","Tracking")
    l_v = cv.getTrackbarPos("LV","Tracking")
    u_h = cv.getTrackbarPos("UH","Tracking")
    u_s = cv.getTrackbarPos("US","Tracking")
    u_v = cv.getTrackbarPos("UV","Tracking")

    l_b = np.array([l_h,l_s,l_v])
    u_b = np.array([u_h,u_s,u_v])

    mask = (hsv,l_b,u_b)
    res = cv.bitwise_and(frame,frame,mask=mask)

    cv.imshow("frame",frame)
    cv.imshow("mask",mask)
    cv.imshow("res",res)

    key = cv.waitKey(1)
    if key == 27:
        break

cv.destroyAllWindows()
There are a few issues with your code:
1) You have no import statements. You need at least:
import cv2 as cv
import numpy as np
2) Your indentation is incorrect. Your function nothing() should not be indented.
3) You omitted to call inRange(), you need:
mask = cv.inRange(hsv,l_b,u_b)
4) You have scaled the Hue into the range 0..255, but OpenCV uses the range 0..180 for Hue with uint8 images, so that 360 degrees comes out as 180 degrees, which fits below the 255 upper limit of uint8.
By the way, it is fairly poor practice to do "loop invariant" stuff inside a loop - I mean the part where you hit the disk every millisecond and re-read the image, re-decode the JPEG and convert it to HSV. All of that can be done once outside the loop; inside it, just reuse the already-converted HSV image.
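For example, here is a sketch of the same loop with the invariant work hoisted out (it assumes the "Tracking" window and trackbars have already been created as above, and keeps the file path from the question):

import cv2 as cv
import numpy as np

# Read and convert once, outside the loop
frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)

while True:
    l_b = np.array([cv.getTrackbarPos("LH", "Tracking"),
                    cv.getTrackbarPos("LS", "Tracking"),
                    cv.getTrackbarPos("LV", "Tracking")])
    u_b = np.array([cv.getTrackbarPos("UH", "Tracking"),
                    cv.getTrackbarPos("US", "Tracking"),
                    cv.getTrackbarPos("UV", "Tracking")])

    mask = cv.inRange(hsv, l_b, u_b)
    res = cv.bitwise_and(frame, frame, mask=mask)

    cv.imshow("frame", frame)
    cv.imshow("mask", mask)
    cv.imshow("res", res)

    if cv.waitKey(1) == 27:
        break

cv.destroyAllWindows()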
I am trying to find an efficient way to see if one image is a subset of another (meaning that each unique pixel in one image is also found in the other). The repetition or ordering of the pixels does not matter.
I am working in Java, so I would like all of my operations to be completed in OpenCV for efficiency's sake.
My first idea was to export a list of unique pixel values and compare it to the list from the second image.
As there is no built-in function to extract unique pixels, I abandoned this approach.
I also understand that I can find the locations of a particular color with the inclusive inRange and findNonZero operations.
Core.inRange(image, color, color, tempMat); // inclusive
Core.findNonZero(tempMat, colorLocations);
Unfortunately, this does not provide an adequate answer, as it would need to be executed per color, and would still require extracting unique pixels.
Essentially, I'm asking if there is a clever way to use the built in OpenCV functions to see if an image is comprised of the pixels found in another image.
I understand that this will not work for slight color differences. I am working on a limited dataset, and care about the exact pixel values.
To put the question more mathematically: is unique(imageA) ⊆ unique(imageB)?
Because the only thing you are interested in is the pixel values, I would suggest the following:
Compute the histogram of image 1 (the sub-image) using hist1 = calcHist()
Compute the histogram of image 2 (the bigger image) using hist2 = calcHist()
Calculate the difference vector diff = hist1 - hist2
Check that each bin of the histogram of the sub-image is less than or equal to the corresponding bin in the histogram of the bigger image, i.e. that diff has no positive entries
Thanks to Miki for the fix.
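For reference, a minimal Python/OpenCV sketch of this histogram check, assuming single-channel 8-bit images (the filenames are placeholders):

import cv2
import numpy as np

sub = cv2.imread('sub.png', cv2.IMREAD_GRAYSCALE)  # the sub-image
big = cv2.imread('big.png', cv2.IMREAD_GRAYSCALE)  # the bigger image

hist_sub = cv2.calcHist([sub], [0], None, [256], [0, 256])
hist_big = cv2.calcHist([big], [0], None, [256], [0, 256])

# Bin-wise check: every value occurring in the sub-image occurs at least
# as often in the bigger image. For a pure "unique values" test, compare
# presence instead: np.all((hist_sub == 0) | (hist_big > 0)).
print(bool(np.all(hist_sub <= hist_big)))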
I will keep Amitay's as the accepted answer, as he absolutely led me down the correct path. I wanted to also share my exact answer for anyone who finds this in the future.
As I stated in my question, I was looking for an efficient way to see if the RGB values of one image were a subset of the RGB values of another image.
I made a function that returns true if every unique pixel value in the (masked) sub-image also appears in the superset image. The Java code is as follows:
private boolean isSubset(Mat subset, Mat subMask, Mat superset) {
    // Get the unique set of pixels from both images
    subset = getUniquePixels(subset, subMask);
    superset = getUniquePixels(superset, null);

    // See if the superset pixels encapsulate the subset pixels:
    // OR the unique pixels together
    Mat subOrSuper = new Mat();
    Core.bitwise_or(subset, superset, subOrSuper);

    // See if the ORed matrix is equal to the superset
    Mat notEqualMat = new Mat();
    Core.compare(superset, subOrSuper, notEqualMat, Core.CMP_NE);
    return Core.countNonZero(notEqualMat) == 0;
}
subset and superset are assumed to be CV_8UC3 matrices, while subMask is assumed to be CV_8UC1.
private Mat getUniquePixels(Mat img, Mat mask) {
    if (mask == null) {
        mask = new Mat();
    }

    // Pack each pixel into a single value: bgrValue = b + (g << 8) + (r << 16)
    img.convertTo(img, CvType.CV_32FC3);
    Vector<Mat> splitImg = new Vector<>();
    Core.split(img, splitImg);

    Mat flatImg = Mat.zeros(img.rows(), img.cols(), CvType.CV_32FC1);
    Mat multiplier;
    for (int i = 0; i < splitImg.size(); i++) {
        multiplier = Mat.ones(img.rows(), img.cols(), CvType.CV_32FC1);
        // Shift channel i into its own byte: multiply by 2^(8*i)
        int powTwo = (1 << (8 * i));
        // Set the multiplier matrix equal to powTwo
        Core.multiply(multiplier, new Scalar(powTwo), multiplier);
        // n << (8*i) == n * 2^(8*i); this moves the B, G and R values into
        // separate byte ranges of the same 32-bit value.
        Core.multiply(multiplier, splitImg.get(i), splitImg.get(i));
        // Add the shifted channels together.
        Core.add(flatImg, splitImg.get(i), flatImg);
    }

    // Create a histogram of the packed pixel values.
    List<Mat> images = new ArrayList<>();
    images.add(flatImg);
    MatOfInt channels = new MatOfInt(0);
    Mat hist = new Mat();
    // 16777216 == 256*256*256, one bin per possible packed value
    MatOfInt histSize = new MatOfInt(16777216);
    MatOfFloat ranges = new MatOfFloat(0f, 16777216f);
    Imgproc.calcHist(images, channels, mask, hist, histSize, ranges);

    // Any bin with a count of at least 1 marks a pixel value present in the image.
    Mat uniquePixels = new Mat();
    Core.inRange(hist, new Scalar(1), new Scalar(Float.MAX_VALUE), uniquePixels);
    return uniquePixels;
}
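As an aside, the same unique-pixel subset test can be sketched in a few lines of Python with NumPy (not part of the original answer; it expects H x W x 3 uint8 arrays and packs channels like the Java code above):

import numpy as np

def is_subset(sub_img, super_img):
    # Pack each BGR triple into one 32-bit integer: b + (g << 8) + (r << 16)
    def pack(img):
        flat = img.reshape(-1, 3).astype(np.uint32)
        return flat[:, 0] | (flat[:, 1] << 8) | (flat[:, 2] << 16)
    # True if every packed pixel value of sub_img also occurs in super_img
    return bool(np.isin(pack(sub_img), pack(super_img)).all())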
Please feel free to ask questions, or point out problems!
I am new to EMGU.CV and I am struggling a bit. Let me start by giving some background on the project: I am trying to track a user's fingers, i.e. calculate the user's finger tips. I have created code which filters the depth information to only a certain range and generates a Bitmap image, tempBitmap. I then convert this image to a greyscale image using EMGU.CV so that it can be used by cvCanny. Once this is done I apply a dilate filter to the Canny image to thicken up the outline of the hand and better improve the chance of generating a successful contour, and then I try to get the contours of the hand. Now, what I have managed to do is draw a box around the hand, but I am struggling to find a way to convert the contours generated by FindContours to a set of Points I can use to draw the contour with. The variable depthImage2 is a Bitmap I draw on before assigning it to the PictureBox on my C# form-based application. Any information or guidance you can provide will be greatly appreciated; also, if my code isn't correct, maybe guide me in a direction where I can calculate the finger tips. I think I am almost there, I am just missing something, so any help of any kind will be appreciated.
Image<Bgr, Byte> currentFrame = new Image<Bgr, Byte>(tempBitmap);
Image<Gray, Byte> grayImage = currentFrame.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyImage = new Image<Gray, Byte>(grayImage.Size);
CvInvoke.cvCanny(grayImage, cannyImage, 10, 60, 3);
StructuringElementEx kernel = new StructuringElementEx(
3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_ELLIPSE);
CvInvoke.cvDilate(cannyImage, cannyImage, kernel, 1);
IntPtr cont = IntPtr.Zero;
Graphics graphicsBitmap = Graphics.FromImage(depthImage2);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
    for (Contour<Point> contours =
             cannyImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                                     Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
         contours != null; contours = contours.HNext)
    {
        IntPtr seq = CvInvoke.cvConvexHull2(contours, storage.Ptr, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE, 0);
        IntPtr defects = CvInvoke.cvConvexityDefects(contours, seq, storage);
        Seq<Point> tr = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        Seq<Emgu.CV.Structure.MCvConvexityDefect> te = contours.GetConvexityDefacts(
            storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        graphicsBitmap.DrawRectangle(
            new Pen(new SolidBrush(Color.Red)), tr.BoundingRectangle);
    }
Contour<Point> contours = cannyImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE); // to return all points
then:
List<Point[]> convertedContours = new List<Point[]>();
while (contours != null)
{
    var contourPoints = contours.ToArray(); // converts the Seq<Point> to Point[]; ToList() is also available
    convertedContours.Add(contourPoints);
    contours = contours.HNext;
}
You can draw a contour with the image's Draw function overloads; just find the signature that takes a Seq<> parameter.
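For reference, the equivalent convert-and-draw flow in Python/OpenCV (a sketch with a placeholder filename; findContours there already returns the contours as point arrays, and the two-value return shown is the OpenCV 4.x signature):

import cv2

# Same Canny thresholds as the question; 'hand.png' is a placeholder
edges = cv2.Canny(cv2.imread('hand.png', cv2.IMREAD_GRAYSCALE), 10, 60)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

out = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, contours, -1, (0, 0, 255), 2)  # -1 draws every contour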
In a school project I would like to do the following steps to produce a watermarked image in MATLAB:
extract the edges from an image
insert a mark on this edge
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do it, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing.
If you have an image:
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
You can just resize the watermark to the size of the image
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % This sets non-black pixels to be the watermark. (You'll have to slightly modify this for colour images.)
If you want to put it on the edges of the image, you can extract them like this:
edges = edge(rgb2gray(img),'canny');
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment, you can play around with using a mix of the img and wm_rs pixel values if you want transparency (see the sketch below).
You'll probably have to adjust some of what I said for colour images, but most of it should be the same.
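For example, the blending idea in a Python/OpenCV sketch (the filenames, the Canny thresholds and the 0.7/0.3 weights are all placeholders):

import cv2

img = cv2.imread('myimage.jpg')
wm_rs = cv2.resize(cv2.imread('watermark.jpg'), (img.shape[1], img.shape[0]))

edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)

# Blend the whole frame once, then copy only the edge pixels across
blended = cv2.addWeighted(img, 0.7, wm_rs, 0.3, 0)
img_wm = img.copy()
img_wm[edges != 0] = blended[edges != 0]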
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (requires R2017b or a later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB,range [0,1]
position = [700,200];
%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
'BoxOpacity',0,...
'FontSize',200,...
'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask,...
'Transparency',Transparency,...
'Colormap',fontColor);
imshow(finalImg)
I am trying to script a greyscale object in a captured image as a matrix of 0s and 1s that represents the block of object pixels (something like object-style scaling). I can imagine the manual processing: looping over the object, scaling, and writing the matrix according to the shade of colour. However, I'm looking for intelligent or open source tools; .NET is preferred.
[Update, to explain in more detail]
The original images are coloured; however, I'm converting them to 256-level greyscale, and then I want to reduce that to black or white only, so at the end of the day it's just a black and white picture that I want to convert to a zero-one matrix.
The following URL contains a discussion of how to convert a black-and-white picture to a zero-one matrix using a piece of software called ImageMagick:
http://studio.imagemagick.org/discourse-server/viewtopic.php?f=1&t=18433
Notice the zero-one matrix there, which depicts a dragon face image! Are there any techniques or open source tools that would help me achieve that?
Something like the following, using Emgu CV (an OpenCV wrapper for .NET), would work.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System;
using System.Drawing;
using System.IO;
using (Image<Bgr, Byte> img = new Image<Bgr, Byte>("MyImage.jpg"))
{
    // Matrix takes (rows, cols), i.e. (height, width)
    Matrix<Int32> matrix = new Matrix<Int32>(img.Height, img.Width);
    for (int i = 0; i < img.Height; i++)
    {
        for (int j = 0; j < img.Width; j++)
        {
            if (img.Data[i, j, 2] == 255 &&
                img.Data[i, j, 1] == 255 &&
                img.Data[i, j, 0] == 255)
            {
                matrix.Data[i, j] = 0;
            }
            else
            {
                matrix.Data[i, j] = 1;
            }
        }
    }

    using (TextWriter tw = new StreamWriter("output.txt"))
    {
        for (int i = 0; i < img.Height; i++)
        {
            for (int j = 0; j < img.Width; j++)
            {
                tw.Write(matrix.Data[i, j]);
            }
            tw.Write(tw.NewLine);
        }
    }
}
Note that the snippet above loads a colour image and creates a matrix with white as 0 and 1 otherwise. In order to load and work with grayscale images, the Image<Bgr, Byte> becomes an Image<Gray, Byte> and the comparison simplifies to just if (img.Data[i,j,0] == 255).
Also, to do the thresholding (the conversion from colour to grayscale to black and white), you can use Otsu's thresholding via the cvThreshold method, using something like:
Image<Bgr, Byte> img1 = new Image<Bgr, Byte>("MyImage.jpg");
Image<Gray, Byte> img2 = img1.Convert<Gray, Byte>();
Image<Gray, Byte> img3 = new Image<Gray, Byte>(img2.Width, img2.Height);
// Otsu needs 8-bit input (hence Byte), and it computes the threshold
// itself, so the 150 passed here is ignored.
int threshold = 150;
CvInvoke.cvThreshold(img2, img3, threshold, 255, THRESH.CV_THRESH_OTSU);
Other possible tools include:
convert from ImageMagick and pnmnoraw from netpbm, as mentioned in the URL you linked, with the example snippet convert lib/dragon_face.xbm pbm: | pnmnoraw
Using PIL (the Python Imaging Library) to iterate through the image data and the Python IO functions to write the output data (see the sketch below)
Using System.Drawing.Bitmap, specifically the GetPixel method, to iterate through the image data, and the C# IO functions to write the output data.
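For instance, a minimal sketch of the PIL approach, assuming a black-and-white input image and using the same white-is-0 convention as the Emgu snippet above (the filenames are placeholders):

from PIL import Image

img = Image.open('bw.png').convert('L')  # force 8-bit greyscale
w, h = img.size
pixels = img.load()

with open('output.txt', 'w') as f:
    for y in range(h):
        # white (255) -> 0, anything else -> 1
        f.write(''.join('0' if pixels[x, y] == 255 else '1' for x in range(w)))
        f.write('\n')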