Converting contours found using EMGU.CV

I am new to EMGU.CV and I am struggling a bit, so let me start with some background on the project: I am trying to track a user's fingers, i.e. calculate the positions of the finger tips. I have code that filters the depth information to a certain range and generates a Bitmap image, tempBitmap. I convert this image to a greyscale image using EMGU.CV so it can be used by cvCanny, then apply a dilate filter to the Canny output to thicken the outline of the hand and improve the chance of finding a useful contour. I have managed to draw a box around the hand, but I am struggling to find a way to convert the contours generated by FindContours into a set of Points I can use to actually draw the contour. The variable depthImage2 is a Bitmap I draw on before assigning it to the PictureBox on my C# form-based application. Any information or guidance you can provide will be greatly appreciated, and if my code isn't correct, perhaps you could point me in a direction where I can calculate the finger tips. I think I am almost there and just missing something, so any help of any kind will be appreciated.
Image<Bgr, Byte> currentFrame = new Image<Bgr, Byte>(tempBitmap);
Image<Gray, Byte> grayImage = currentFrame.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyImage = new Image<Gray, Byte>(grayImage.Size);
CvInvoke.cvCanny(grayImage.Ptr, cannyImage.Ptr, 10, 60, 3);

// Dilate with a 3x3 elliptical kernel to thicken the Canny edges.
StructuringElementEx kernel = new StructuringElementEx(
    3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_ELLIPSE);
CvInvoke.cvDilate(cannyImage.Ptr, cannyImage.Ptr, kernel.Ptr, 1);

Graphics graphicsBitmap = Graphics.FromImage(depthImage2);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
    for (Contour<Point> contours = cannyImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
         contours != null; contours = contours.HNext)
    {
        IntPtr seq = CvInvoke.cvConvexHull2(contours.Ptr, storage.Ptr,
            Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE, 0);
        IntPtr defects = CvInvoke.cvConvexityDefects(contours.Ptr, seq, storage.Ptr);
        Seq<Point> tr = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        // Note: "GetConvexityDefacts" is the method's actual (misspelled) name in Emgu 2.x.
        Seq<Emgu.CV.Structure.MCvConvexityDefect> te = contours.GetConvexityDefacts(
            storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        graphicsBitmap.DrawRectangle(
            new Pen(new SolidBrush(Color.Red)), tr.BoundingRectangle);
    }

Contour<Point> contours = cannyImage.FindContours(
    Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, // return all contour points
    Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
then:
List<Point[]> convertedContours = new List<Point[]>();
while (contours != null)
{
    Point[] contourPoints = contours.ToArray(); // Seq<Point> -> Point[]; ToList() is also available
    convertedContours.Add(contourPoints);
    contours = contours.HNext;
}
You can draw a contour with one of the image's Draw overloads; just find a signature that takes a Seq<> parameter.
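And since your end goal is the finger tips: the convex-hull / convexity-defect calls you already have are the right idea. Below is a minimal Python/OpenCV sketch of that approach (the Emgu GetConvexHull / GetConvexityDefacts methods wrap the same C functions, so it ports over almost line for line). The file name and the depth cutoff of 20 are placeholder assumptions to tune.

import cv2
import numpy as np

# Hypothetical input: an 8-bit binary mask of the hand, e.g. your
# thresholded depth image after Canny + dilate.
mask = cv2.imread('hand_mask.png', cv2.IMREAD_GRAYSCALE)

# OpenCV 4.x return convention; 3.x returns an extra leading image.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)  # assume the hand is the largest blob

hull = cv2.convexHull(cnt, returnPoints=False)  # hull as indices into cnt
defects = cv2.convexityDefects(cnt, hull)       # rows of (start, end, far, depth)

tips = []
if defects is not None:
    for s, e, f, d in defects[:, 0]:
        # depth is fixed-point (value * 256); keep only deep valleys,
        # i.e. the gaps between fingers.
        if d / 256.0 > 20:
            sx, sy = cnt[s][0]
            ex, ey = cnt[e][0]
            # Defect start/end points sit on the fingertips.
            tips += [(int(sx), int(sy)), (int(ex), int(ey))]

out = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, [cnt], -1, (0, 255, 0), 2)
for p in tips:
    cv2.circle(out, p, 5, (0, 0, 255), -1)
cv2.imwrite('fingertips.png', out)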

Efficiently tell if one image is entirely comprised of the pixel values of another in OpenCV

I am trying to find an efficient way to see if one image is a subset of another (meaning that each unique pixel in one image is also found in the other.) The repetition or ordering of the pixels do not matter.
I am working in Java, so I would like all of my operations to be completed in OpenCV for efficiency's sake.
My first idea was to export a list of unique pixel values and compare it to the list from the second image.
As there is no built-in function to extract unique pixels, I abandoned this approach.
I also understand that I can find the locations of a particular color with the inclusive inRange, and findNonZero operations.
Core.inRange(image, color, color, tempMat); // inclusive
Core.findNonZero(tempMat, colorLocations);
Unfortunately, this does not provide an adequate answer, as it would need to be executed per color, and would still require extracting unique pixels.
Essentially, I'm asking if there is a clever way to use the built in OpenCV functions to see if an image is comprised of the pixels found in another image.
I understand that this will not work for slight color differences. I am working on a limited dataset, and care about the exact pixel values.
To put the question more mathematically: is the set of unique pixel values of one image a subset of the set of unique pixel values of the other?
Because the only thing you are interested in is the pixel values, I would suggest the following:
Compute the histogram of image 1 using hist1 = calcHist().
Compute the histogram of image 2 using hist2 = calcHist().
Calculate the difference vector diff = hist1 - hist2.
Check that each bin of the histogram of the sub-image is less than or equal to the corresponding bin in the histogram of the bigger image.
Thanks to Miki for the fix.
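For reference, here is a short Python/NumPy sketch of the same idea: pack the three 8-bit channels into a single 24-bit value per pixel (the accepted answer below does the same packing in Java), then check that every color occurring in the candidate subset also occurs in the superset. The file names are placeholders.

import cv2
import numpy as np

def colour_set(path):
    # Pack B, G, R into one 24-bit integer per pixel, then keep the uniques.
    img = cv2.imread(path).astype(np.uint32)
    packed = (img[..., 0] << 16) | (img[..., 1] << 8) | img[..., 2]
    return np.unique(packed)

sub, sup = colour_set('small.png'), colour_set('big.png')
print(np.isin(sub, sup).all())  # True iff every colour of 'small' appears in 'big'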
I will keep Amitay's as the accepted answer, as he absolutely led me down the correct path. I wanted to also share my exact answer for anyone who finds this in the future.
As I stated in my question, I was looking for an efficient way to see if the RGB values of one image were a subset of the RGB values of another image.
I made a function to check exactly this; the Java code is as follows:
private boolean isSubset(Mat subset, Mat subMask, Mat superset) {
    // Get the unique set of pixels from both images
    subset = getUniquePixels(subset, subMask);
    superset = getUniquePixels(superset, null);

    // See if the superset pixels encapsulate the subset pixels:
    // OR the unique-pixel masks together...
    Mat subOrSuper = new Mat();
    Core.bitwise_or(subset, superset, subOrSuper);

    // ...and check whether the ORed matrix is equal to the superset.
    Mat notEqualMat = new Mat();
    Core.compare(superset, subOrSuper, notEqualMat, Core.CMP_NE);
    return Core.countNonZero(notEqualMat) == 0;
}
subset and superset are assumed to be CV_8UC3 matrices, while subMask is assumed to be CV_8UC1.
private Mat getUniquePixels(Mat img, Mat mask) {
    if (mask == null) {
        mask = new Mat();
    }

    // Pack each pixel's three 8-bit channels into a single value:
    // value = c0 + (c1 << 8) + (c2 << 16), so every distinct colour
    // maps to a distinct value in [0, 256^3).
    img.convertTo(img, CvType.CV_32FC3);
    Vector<Mat> splitImg = new Vector<>();
    Core.split(img, splitImg);

    Mat flatImg = Mat.zeros(img.rows(), img.cols(), CvType.CV_32FC1);
    Mat multiplier;
    for (int i = 0; i < splitImg.size(); i++) {
        multiplier = Mat.ones(img.rows(), img.cols(), CvType.CV_32FC1);
        // Shift channel i into its own byte of the packed value.
        int channelShift = 1 << (8 * i); // 1, 256, 65536
        Core.multiply(multiplier, new Scalar(channelShift), multiplier);
        Core.multiply(multiplier, splitImg.get(i), splitImg.get(i));
        // Add the shifted channel into the packed image.
        Core.add(flatImg, splitImg.get(i), flatImg);
    }

    // Build a histogram of the packed pixel values; a bin is non-zero
    // exactly when that colour occurs in the (masked) image.
    List<Mat> images = new ArrayList<>();
    images.add(flatImg);
    MatOfInt channels = new MatOfInt(0);
    Mat hist = new Mat();
    MatOfInt histSize = new MatOfInt(16777216); // 256 * 256 * 256 bins
    MatOfFloat ranges = new MatOfFloat(0f, 16777216f);
    Imgproc.calcHist(images, channels, mask, hist, histSize, ranges);

    // Threshold the histogram into a binary "this colour is present" mask.
    Mat uniquePixels = new Mat();
    Core.inRange(hist, new Scalar(1), new Scalar(Float.MAX_VALUE), uniquePixels);
    return uniquePixels;
}
Please feel free to ask questions, or point out problems!

Finding largest blob in image

I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed from EmguCV 3.0? I get an exception every time I try to use it, and I haven't found many recent/relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference with the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary = imgBinary.PyrDown().PyrUp(); // smooth a little (PyrDown/PyrUp return new images)
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // apply inverse suppression

// Now, copy pixels from the original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);

VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null,
    Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with a tiny bit of white lines. I've racked my brain back and forth and haven't been able to come up with something. Any pointers would be much appreciated!
I think that you need to find which contour has the largest area, using ContourArea on each of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob, not all the pixels in it) using FillPoly, and create a mask where the leaf pixels have value 1 and everything else 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image:
Hope this will be helpful enough.
import cv2
import numpy as np

# Read image
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)  # OpenCV loads images in B,G,R channel order

# Do some denoising on the red channel (the red channel gave a better result
# than gray because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)

# Threshold image (Otsu picks the threshold automatically)
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the largest contour and extract it (OpenCV 3.x returns three values here)
im, contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour

# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)

# Use the mask to crop data from the original image
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)
cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
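For example, a short Python sketch of that suggestion ('leaf.jpg' is a placeholder file name):

import cv2

gray = cv2.imread('leaf.jpg', cv2.IMREAD_GRAYSCALE)
# With THRESH_OTSU the passed threshold (0) is ignored; Otsu computes
# one from the image histogram instead.
otsu_t, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
leaf_only = cv2.bitwise_and(gray, gray, mask=mask)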

Removing long horizontal/vertical lines from edge image using OpenCV

How can I use standard image processing filters (from OpenCV) to remove long horizontal and vertical lines from an image?
The images are B&W so removing means simply painting black.
Illustration:
I'm currently doing it in Python, iterating over pixel rows and cols and detecting ranges of consecutive pixels, removing those that are longer than N pixels. However, it's really slow in comparison to the OpenCV library, and if there's a way of accomplishing the same with OpenCV functions, that'll likely be orders of magnitude faster.
I imagine this can be done by convolution using a kernel that's a row of pixels (for horizontal lines), but I'm having a hard time figuring the exact operation that would do the trick.
If your lines are truly horizontal/vertical, try this:
import cv2
import numpy as np

img = cv2.imread('c:/data/test.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Kernel that is a single horizontal row of ones: morphological opening
# with it keeps only horizontal runs at least 11 pixels long.
linek = np.zeros((11, 11), dtype=np.uint8)
linek[5, ...] = 1
x = cv2.morphologyEx(gray, cv2.MORPH_OPEN, linek, iterations=1)
gray -= x  # subtract the extracted lines from the image

cv2.imshow('gray', gray)
cv2.waitKey(0)
Result:
You can refer to the OpenCV Morphological Transformations documentation for more details.
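The same opening removes vertical lines if you turn the kernel into a column; continuing the snippet above (and assuming the same 11x11 kernel size):

# Column kernel: extracts (so we can subtract) vertical runs >= 11 px tall.
vlinek = np.zeros((11, 11), dtype=np.uint8)
vlinek[..., 5] = 1
y = cv2.morphologyEx(gray, cv2.MORPH_OPEN, vlinek, iterations=1)
gray -= y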
How long is "long"? Long as in lines that run the length of the entire image, or just longer than n pixels?
If the latter, then you could just use an (n+1) x (n+1) median or mode filter, and set the corner coefficients to zero, and you'd get the desired effect.
If you're referring to lines that run the width of the entire image, just use the memcmp() function against a row of data, and compare it to a pre-allocated array of zeros of the same length as a row. If they are the same, you know you have a blank line that spans the horizontal length of the image, and that row can be deleted.
This will be MUCH faster than the element-wise comparisons you are currently using, and is very well explained here:
Why is memcpy() and memmove() faster than pointer increments?
If you want to repeat the same operation for vertical lines, just transpose the image and repeat the operation.
I know this is more of a system-optimization-level approach than an OpenCV filter like you requested, but it gets the job done fast and safely. You can speed up the calculation even more if you manage to force the image and your empty array to be 32-bit aligned in memory.
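In Python the same row-wise test is a couple of vectorised NumPy lines (a sketch; gray is assumed to be the binary image, with white lines on a black background):

import numpy as np

# Rows where every pixel is set span the whole image width; blank them out.
full_rows = (gray == 255).all(axis=1)
gray[full_rows] = 0
# For vertical lines, do the same over columns: (gray == 255).all(axis=0).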
To remove horizontal lines from an image you can use an edge detection algorithm to detect edges, and then use the Hough transform in OpenCV to detect the lines and color them white:
import cv2
import numpy as np

img = cv2.imread('image.jpg', 0)  # load as grayscale ('image.jpg' is a placeholder)
laplacian = cv2.Laplacian(img, cv2.CV_8UC1)  # Laplacian edge detection
minLineLength = 900
maxLineGap = 100
# Pass these as keyword arguments; positionally they would land in the
# unused 'lines' output parameter instead.
lines = cv2.HoughLinesP(laplacian, 1, np.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
if lines is not None:
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 1)
cv2.imwrite('Written_Back_Results.jpg', img)
This one is for Java (the OpenCV Java bindings).
package com.test11;

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;

public class GetVerticalOrHorizonalLines {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        // Preprocess before HoughLine recognition
        Mat source = Imgcodecs.imread("src//data//bill.jpg");
        Mat gray = new Mat(source.rows(), source.cols(), CvType.CV_8UC1);
        Imgproc.cvtColor(source, gray, Imgproc.COLOR_BGR2GRAY);

        Mat binary = new Mat();
        Imgproc.adaptiveThreshold(gray, binary, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C,
                Imgproc.THRESH_BINARY, 15, -2);
        Imgcodecs.imwrite("src//data//binary.jpg", binary);

        Mat horizontal = binary.clone();
        int horizontalsize = horizontal.cols() / 30;
        int verticalsize = horizontal.rows() / 30;

        Mat horizontal_element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
                new Size(horizontalsize, 1));
        Imgcodecs.imwrite("src//data//horizontal_element.jpg", horizontal_element);

        Mat Linek = Mat.zeros(source.size(), CvType.CV_8UC1);
        Imgproc.morphologyEx(gray, Linek, Imgproc.MORPH_BLACKHAT, horizontal_element);
        Imgcodecs.imwrite("src//data//bill_RECT_Blackhat.jpg", Linek);

        Mat vertical_element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
                new Size(1, verticalsize));
        Imgcodecs.imwrite("src//data//vertical_element.jpg", vertical_element);

        Mat Linek2 = Mat.zeros(source.size(), CvType.CV_8UC1);
        Imgproc.morphologyEx(gray, Linek2, Imgproc.MORPH_CLOSE, vertical_element);
        Imgcodecs.imwrite("src//data//bill_RECT_Blackhat2.jpg", Linek2);
    }
}
Another one.
package com.test12;

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;

public class ImageSubstrate {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        Mat source = Imgcodecs.imread("src//data//bill.jpg");
        Mat image_h = Mat.zeros(source.size(), CvType.CV_8UC1);
        Mat image_v = Mat.zeros(source.size(), CvType.CV_8UC1);

        Mat output = new Mat();
        Core.bitwise_not(source, output);

        // Extract and subtract horizontal lines
        Mat output_result = new Mat();
        Mat kernel_h = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(20, 1));
        Imgproc.morphologyEx(output, image_h, Imgproc.MORPH_OPEN, kernel_h);
        Imgcodecs.imwrite("src//data//output.jpg", output);
        Core.subtract(output, image_h, output_result);
        Imgcodecs.imwrite("src//data//output_result.jpg", output_result);

        // Extract and subtract vertical lines
        Mat kernel_v = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 20));
        Imgproc.morphologyEx(output_result, image_v, Imgproc.MORPH_OPEN, kernel_v);
        Mat output_result2 = new Mat();
        Core.subtract(output_result, image_v, output_result2);
        Imgcodecs.imwrite("src//data//output_result2.jpg", output_result2);
    }
}

JavaCv comparison of three and more images

I have tried the cvMatchTemplate function to compare two images (a template and an image).
IplImage img = cvLoadImage("thumbnail.jpg");
IplImage template = cvLoadImage("temp.jpg");
IplImage result = cvCreateImage(
        cvSize(img.width() - template.width() + 1, img.height() - template.height() + 1),
        IPL_DEPTH_32F, 1);

int method = CV_TM_SQDIFF;
cvMatchTemplate(img, template, result, method);
cvShowImage("res", result);

// Where our max and min correlation points are located
double[] min_val = new double[2];
double[] max_val = new double[2];
CvPoint minLoc = new CvPoint();
CvPoint maxLoc = new CvPoint();
cvMinMaxLoc(result, min_val, max_val, minLoc, maxLoc, null); // the last null is for an optional mask

// For CV_TM_SQDIFF the best match is at the minimum location
CvPoint point = new CvPoint();
point.x(minLoc.x() + template.width());
point.y(minLoc.y() + template.height());
cvRectangle(img, minLoc, point, CvScalar.WHITE, 2, 8, 0); // draw the match rectangle on the original img

cvShowImage("Image", img);
cvWaitKey(0);

// Release
cvReleaseImage(img);
cvReleaseImage(template);
cvReleaseImage(result);
I got the desired result, but could not find a way of comparing two or more images against a template.
I converted the result image to a matrix using asCvMat and got the matrix of match probabilities for every pixel of the original image.
I came across the determinant function in OpenCV for comparing two matrices, to work out which of the images is a closer match to the template, but could not find a corresponding function in JavaCV.
Is there any way I could compare the results and determine which image is a closer match? I did come across ObjectFinder, but could not find proper documentation on how to use it.
Please point out any links or examples which may help me solve my problem.
You can compare image-matching results by comparing the extremal values reported by cvMinMaxLoc.
I would even change the method to CV_TM_SQDIFF_NORMED; then you can set a threshold on the score that is somewhere between 0 and 1 (note that for the SQDIFF methods the best match is the minimum, so compare min_val rather than max_val).
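As a sketch of that suggestion in Python (the file names are placeholders): loop over the candidate images and keep the one with the smallest normalised score.

import cv2

template = cv2.imread('temp.jpg', 0)
candidates = ['thumb1.jpg', 'thumb2.jpg', 'thumb3.jpg']  # hypothetical files

best_name, best_score = None, float('inf')
for name in candidates:
    img = cv2.imread(name, 0)
    res = cv2.matchTemplate(img, template, cv2.TM_SQDIFF_NORMED)
    min_val, _, _, _ = cv2.minMaxLoc(res)
    if min_val < best_score:  # TM_SQDIFF_NORMED: 0 is a perfect match
        best_name, best_score = name, min_val
print(best_name, best_score)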

Scripting objects in greyscale images as zero-one array[]

I am trying to convert a greyscale object in a captured image into a matrix of 0s and 1s that represents the block of object pixels (something like object-style scaling). I can imagine the manual processing: looping over the object, scaling, and writing the matrix according to the shade of color. However, I'm looking for intelligent or open-source tools;
.NET is preferred.
[Update, to explain in more detail]
The original images are colored, but I'm converting them into 256-level greyscale; then I want to scale that down to black or white only, so at the end of the day it's just a black-and-white picture that I want to convert to a zero-one matrix.
The following URL contains a discussion of how to convert a black-and-white picture to a zero-one matrix using a piece of software called ImageMagick:
http://studio.imagemagick.org/discourse-server/viewtopic.php?f=1&t=18433
Notice the zero-one matrix there, which depicts a dragon face image! Are there any techniques or open-source tools that would help me achieve that?
Something like the following using Emgu OpenCV for .NET would work.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System;
using System.Drawing;
using System.IO;

using (Image<Bgr, Byte> img = new Image<Bgr, Byte>("MyImage.jpg"))
{
    // Matrix is indexed (row, col), so allocate Height x Width.
    Matrix<Int32> matrix = new Matrix<Int32>(img.Height, img.Width);
    for (int i = 0; i < img.Height; i++)
    {
        for (int j = 0; j < img.Width; j++)
        {
            if (img.Data[i, j, 2] == 255 &&
                img.Data[i, j, 1] == 255 &&
                img.Data[i, j, 0] == 255)
            {
                matrix.Data[i, j] = 0; // white pixel
            }
            else
            {
                matrix.Data[i, j] = 1; // anything else
            }
        }
    }

    using (TextWriter tw = new StreamWriter("output.txt"))
    {
        for (int i = 0; i < img.Height; i++)
        {
            for (int j = 0; j < img.Width; j++)
            {
                tw.Write(matrix.Data[i, j]);
            }
            tw.Write(tw.NewLine);
        }
    }
}
Note that the snippet above loads colour images and creates a matrix with white as 0 and 1 otherwise.
In order to load and work with grayscale images
the Image<Bgr, Byte> becomes an Image<Gray, Byte> and the comparison simplifies to just
if (img.Data[i,j,0] == 255).
Also, to do the thresholding (conversion from colour to grayscale to black and white), you can use Otsu's thresholding with the cvThreshold method, using something like:
Image<Bgr, Byte> img1 = new Image<Bgr, Byte>("MyImage.jpg");
Image<Gray, Byte> img2 = img1.Convert<Gray, Byte>(); // Otsu needs an 8-bit single-channel image
Image<Gray, Byte> img3 = new Image<Gray, Byte>(img2.Width, img2.Height);
// The fixed threshold (150) is ignored when CV_THRESH_OTSU is set;
// Otsu computes the threshold from the image histogram.
int threshold = 150;
CvInvoke.cvThreshold(img2, img3, threshold, 255,
    THRESH.CV_THRESH_BINARY | THRESH.CV_THRESH_OTSU);
Other possible tools include:
convert from ImageMagick and pnmnoraw from netpbm, as mentioned in the URL you linked, with the example snippet convert lib/dragon_face.xbm pbm: | pnmnoraw
using PIL (the Python Imaging Library) to iterate through the image data and the Python IO functions to write the output data (see the sketch after this list)
using System.Drawing.Bitmap, specifically the GetPixel method, to iterate through the image data, and the C# IO functions to write the output data
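A minimal Pillow sketch of the PIL route mentioned above ('MyImage.jpg' and the threshold of 150 are placeholder assumptions; note that convert('1') on its own dithers, so a hard point() threshold is applied first):

from PIL import Image

img = Image.open('MyImage.jpg').convert('L')          # 256-level greyscale
bw = img.point(lambda p: 255 if p > 150 else 0, '1')  # hard threshold to black/white

with open('output.txt', 'w') as f:
    for y in range(bw.height):
        # White -> 0, anything else -> 1, matching the Emgu snippet above.
        f.write(''.join('0' if bw.getpixel((x, y)) else '1'
                        for x in range(bw.width)) + '\n')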
