HoughLinesBinary doesn't find lines - opencv

I run Canny on an image of a house and then HoughLinesBinary on the result.
Original image:
Image after canny:
Lines found by HoughLinesBinary:
As you can see, it produced many artifacts but didn't mark obvious straight lines, such as the left side of the door.
Source:
public static Image<Bgr, byte> split_to_patterns(Image<Bgr, byte> original)
{
    Image<Bgr, byte> res = original.Copy();
    LineSegment2D[] lines =
        original
        .Convert<Gray, byte>()
        .Canny(16, 16)
        .HoughLinesBinary(1, Math.PI / 16, 1, 10, 1)[0];
    foreach (LineSegment2D line in lines)
    {
        res.Draw(line, new Bgr(Color.Red), 2);
    }
    return res;
}
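For reference, the HoughLinesBinary parameters are (rhoResolution, thetaResolution, threshold, minLineWidth, gapBetweenLines). A threshold of 1 lets almost any accumulator cell count as a line, and Math.PI/16 is a very coarse angle step, which would explain the artifact-heavy output. A minimal sketch with more conventional starting values (illustrative only; they need tuning for your image):

// Illustrative parameter values, not a definitive fix.
LineSegment2D[] lines =
    original
    .Convert<Gray, byte>()
    .Canny(100, 200)          // stronger hysteresis thresholds than (16, 16)
    .HoughLinesBinary(
        1,                    // rho resolution: 1 pixel
        Math.PI / 180,        // theta resolution: 1 degree
        20,                   // accumulator threshold: minimum votes for a line
        30,                   // minimum line length
        10)[0];               // maximum gap between collinear segments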

Related

How to save interest points from SURF features?

I'm using Emgu CV's SURF feature to recognize similar objects in images.
The result image is drawn showing all the key points found in both images, with the matching points indicated.
How can I save those match points in a database?
First of all, create a class SURF.cs then write the following code in it:
public Matrix<float> FindSURF(Image<Gray, Byte> modelImage)
{
    VectorOfKeyPoint modelKeyPoints;
    SURFDetector surfCPU = new SURFDetector(500, false);
    // extract features from the object image
    modelKeyPoints = new VectorOfKeyPoint();
    Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(modelImage, null, modelKeyPoints);
    return modelDescriptors; // return the descriptors so they can be stored
}
Then, in program.cs, write the following code:
SURF FindImageSURF = new SURF();
string[] filePaths = Directory.GetFiles(@"E:\folderimages\");
for (int i = 0; i < filePaths.Length; ++i)
{
    string path = filePaths[i];
    using (Image<Gray, Byte> modelImage = new Image<Gray, byte>(path))
    {
        Matrix<float> descriptors = FindImageSURF.FindSURF(modelImage);
        // store the descriptors for this image (see below)
    }
}
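The question asks how to persist the match points, and the snippet above only computes them. One option is to flatten the descriptor matrix into a byte array and store it as a blob column; a minimal sketch, assuming the descriptors come back as a Matrix<float> (the helper names here are hypothetical):

// Hedged sketch: Matrix<float>.Data is a float[rows, cols], and
// Buffer.BlockCopy can copy a 2-D primitive array directly.
public static byte[] DescriptorsToBytes(Matrix<float> descriptors)
{
    byte[] bytes = new byte[descriptors.Rows * descriptors.Cols * sizeof(float)];
    Buffer.BlockCopy(descriptors.Data, 0, bytes, 0, bytes.Length);
    return bytes; // store this, plus Rows and Cols, in your database
}

public static Matrix<float> BytesToDescriptors(byte[] bytes, int rows, int cols)
{
    Matrix<float> descriptors = new Matrix<float>(rows, cols);
    Buffer.BlockCopy(bytes, 0, descriptors.Data, 0, bytes.Length);
    return descriptors;
}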

Emgu CV 3 findContours and hierarchy parameter of type Vec4i equivalent?

I'm attempting to translate the following OpenCV C++ code into Emgu CV 3:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> v4iHierarchy;
cv::findContours(imgThreshCopy, contours, v4iHierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
I can find some Emgu CV 3 examples that pass null for the 3rd parameter to findContours; for example, here is a Visual Basic translation done that way:
Dim contours As New VectorOfVectorOfPoint()
CvInvoke.FindContours(imgThreshCopy, contours, Nothing, RetrType.Tree, ChainApproxMethod.ChainApproxSimple)
Which works if the hierarchy parameter is not needed, but what if it is? I can't seem to figure out the Emgu CV 3 equivalent of the C++ line
std::vector<cv::Vec4i> v4iHierarchy;
Anybody else gotten this to work? Any help would be appreciated.
Pass a default-constructed Mat to get the hierarchy.
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(
    image,
    contours,
    hierarchy,
    RetrType.Ccomp,
    ChainApproxMethod.ChainApproxSimple
);
Console.WriteLine("contours.Size: " + contours.Size);
Console.WriteLine("hierarchy.Rows: " + hierarchy.Rows);
Console.WriteLine("hierarchy.Cols: " + hierarchy.Cols);
Console.WriteLine("hierarchy.Depth: " + hierarchy.Depth);
Console.WriteLine("hierarchy.NumberOfChannels: " + hierarchy.NumberOfChannels);
// Example Output:
// contours.Size: 4391
// hierarchy.Rows: 1
// hierarchy.Cols: 4391
// hierarchy.Depth: Cv32S
// hierarchy.NumberOfChannels: 4
You can access the hierarchy data using the Mat DataPointer:
/// <summary>
/// Get a neighbor index in the hierarchy tree.
/// </summary>
/// <returns>
/// A neighbor index or -1 if the given neighbor does not exist.
/// </returns>
public int Get(HierarchyIndex component, int index)
{
    if (Hierarchy.Depth != Emgu.CV.CvEnum.DepthType.Cv32S)
    {
        throw new ArgumentOutOfRangeException("ContourData must have Cv32S hierarchy element type.");
    }
    if (Hierarchy.Rows != 1)
    {
        throw new ArgumentOutOfRangeException("ContourData must have one hierarchy row.");
    }
    if (Hierarchy.NumberOfChannels != 4)
    {
        throw new ArgumentOutOfRangeException("ContourData must have four hierarchy channels.");
    }
    if (Hierarchy.Dims != 2)
    {
        throw new ArgumentOutOfRangeException("ContourData must have a two-dimensional hierarchy.");
    }
    long elementStride = Hierarchy.ElementSize / sizeof(Int32);
    var offset = (long)component + index * elementStride;
    if (0 <= offset && offset < Hierarchy.Total.ToInt64() * elementStride)
    {
        unsafe
        {
            return *((Int32*)Hierarchy.DataPointer.ToPointer() + offset);
        }
    }
    else
    {
        return -1;
    }
}
https://gist.github.com/joshuanapoli/8c3f282cece8340a1dd43aa5e80d170b
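The Get method above references a HierarchyIndex enum that the snippet doesn't show. Presumably it maps the four channels to OpenCV's hierarchy layout, where each contour stores [next, previous, first child, parent]; a plausible definition would be:

// Presumed definition: OpenCV stores four Int32 values per contour,
// in the order next, previous, first child, parent.
public enum HierarchyIndex
{
    Next = 0,
    Previous = 1,
    FirstChild = 2,
    Parent = 3
}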
EmguCV started using VectorOfVectorOfPoint for FindContours, but didn't really update their code to work correctly with it. See below for a working example:
/// <summary>
/// Find contours using the specified approximation method and retrieval type
/// </summary>
/// <param name="method">The type of approximation method</param>
/// <param name="type">The retrieval type</param>
/// <returns>
/// The contours if there are any;
/// an empty vector if no contour is found
/// </returns>
public static VectorOfVectorOfPoint FindContours(this Image<Gray, byte> image,
    ChainApproxMethod method = ChainApproxMethod.ChainApproxSimple,
    Emgu.CV.CvEnum.RetrType type = RetrType.List)
{
    // Check that all parameters are valid.
    VectorOfVectorOfPoint result = new VectorOfVectorOfPoint();
    if (method == Emgu.CV.CvEnum.ChainApproxMethod.ChainCode)
    {
        throw new NotImplementedException("Chain Code not implemented, sorry try again later");
    }
    CvInvoke.FindContours(image, result, null, type, method);
    return result;
}
This returns a VectorOfVectorOfPoint, which implements IInputOutputArray, IOutputArray, IInputArrayOfArrays, and IInputArray. I'm not sure what you need to do with the contours, but here is an example of how to get the bounding boxes for each one. We do some other things, so let me know what you need and I can help you.
VectorOfVectorOfPoint contours = canvass2.FindContours(ChainApproxMethod.ChainApproxSimple, RetrType.Tree);
int contCount = contours.Size;
for (int i = 0; i < contCount; i++)
{
    using (VectorOfPoint contour = contours[i])
    {
        segmentRectangles.Add(CvInvoke.BoundingRectangle(contour));
        if (debug)
        {
            finalCopy.Draw(CvInvoke.BoundingRectangle(contour), new Rgb(255, 0, 0), 5);
        }
    }
}
You can simply create a Matrix<int> and copy your Mat object's data into it. See the example below:
Mat hierarchy = new Mat();
CvInvoke.FindContours(imgThreshCopy, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
Matrix<int> matrix = new Matrix<int>(hierarchy.Rows, hierarchy.Cols, hierarchy.NumberOfChannels);
hierarchy.CopyTo(matrix);
The data can then be accessed via matrix.Data.
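A minimal indexing sketch, reusing the contours vector from the call above and assuming the managed array is laid out as [rows, cols * channels] with the channels in OpenCV's order (next, previous, first child, parent):

// Hedged sketch: contour i's four hierarchy entries start at column i * 4.
for (int i = 0; i < contours.Size; i++)
{
    int next       = matrix.Data[0, i * 4 + 0];
    int previous   = matrix.Data[0, i * 4 + 1];
    int firstChild = matrix.Data[0, i * 4 + 2];
    int parent     = matrix.Data[0, i * 4 + 3]; // -1 means no parent
    Console.WriteLine("contour {0}: next={1} prev={2} child={3} parent={4}",
        i, next, previous, firstChild, parent);
}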
Good Luck. H

EmguCV and Unity3D Interoperation

I managed to integrate EmguCV into Unity3D and wrote a little converter, but it has a few problems.
Step 1 Converting Unity3D Texture to OpenCV Image
public static Image<Bgr, byte> UnityTextureToOpenCVImage(Texture2D tex){
    return UnityTextureToOpenCVImage(tex.GetPixels32 (), tex.width, tex.height);
}

public static Image<Bgr, byte> UnityTextureToOpenCVImage(Color32[] data, int width, int height){
    byte[,,] imageData = new byte[width, height, 3];
    int index = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            imageData[x,y,0] = data[index].b;
            imageData[x,y,1] = data[index].g;
            imageData[x,y,2] = data[index].r;
            index++;
        }
    }
    Image<Bgr, byte> image = new Image<Bgr, byte>(imageData);
    return image;
}
Step 2 Converting OpenCV Image back to Unity3D Texture
public static Texture2D OpenCVImageToUnityTexture(Image<Bgr, byte> openCVImage, GameObject check){
    return OpenCVImageToUnityTexture(openCVImage.Data, openCVImage.Width, openCVImage.Height, check);
}

public static Texture2D OpenCVImageToUnityTexture(byte[,,] data, int width, int height, GameObject check){
    Color32[] imageData = new Color32[width * height];
    int index = 0;
    byte alpha = 255;
    for (int y = 0; y < width; y++) {
        for (int x = 0; x < height; x++) {
            imageData[index] = new Color32((data[x,y,2]),
                                           (data[x,y,1]),
                                           (data[x,y,0]),
                                           alpha);
            check.SetActive(true);
            index++;
        }
    }
    Texture2D toReturn = new Texture2D(width, height, TextureFormat.RGBA32, false);
    toReturn.SetPixels32(imageData);
    toReturn.Apply ();
    toReturn.wrapMode = TextureWrapMode.Clamp;
    return toReturn;
}
The compiler throws no errors, but something goes wrong every time. See for yourself: cats.
On the left is the original image, on the right the converted one. As you can see, there are more cats than there should be...
Does anyone have any clues?
Also, it is slow as hell because it iterates through all the pixels twice. Is there a better solution?
EDIT
This is the code where I draw my GUITextures:
public GameObject catGO;
GUITexture guitex;
Texture catTex;

void Start () {
    guitex = GetComponent<GUITexture> ();
    catTex = catGO.GetComponent<GUITexture> ().texture;
    Image<Bgr, byte> cvImage = EmguCVUnityInterop.UnityTextureToOpenCVImage((Texture2D)catTex);
    Texture2D converted = EmguCVUnityInterop.OpenCVImageToUnityTexture(cvImage);
    guitex.texture = converted;
}
First of all, you call check.SetActive(true) inside the two nested for loops, so it runs width*height times. Enabling a GameObject is a costly operation.
Second, iterating through every pixel is another costly operation. For example, if you have a 2560x1600 image, that is over 4 million iterations.
Try changing the TextureWrapMode to Repeat (I know it sounds silly :])
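A minimal sketch of that first fix, reusing the variables from the converter above: enable the GameObject once, before the loops, rather than width*height times inside them. (Note also that the original inner loops run y up to width and x up to height, which looks swapped; the sketch uses bounds consistent with the array dimensions.)

// Hedged sketch: hoist the costly SetActive call out of the pixel loops.
check.SetActive(true);
int index = 0;
for (int y = 0; y < height; y++) {      // rows
    for (int x = 0; x < width; x++) {   // columns
        imageData[index] = new Color32(data[x, y, 2],
                                       data[x, y, 1],
                                       data[x, y, 0],
                                       alpha);
        index++;
    }
}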

CascadeClassifier in Java does not find face with webcam

I am trying to translate the OpenCV CascadeClassifier tutorial from C++ to Java. It works fine in C++, and this Java tutorial also works fine.
But my translation simply does not detect the face. I don't get explicit errors. I can see the processing of the video input from the webcam (grey/histogram...) and the video display. Loading the cascade doesn't give an error, but the CascadeClassifier call just doesn't return any faces... So you can probably skip all the code and go straight to my CascadeClassifier call, down in public Mat detect(Mat inputframe). As I am new to Java and OpenCV, I paste the rest (minus anything I felt was insignificant) just in case, but I don't mean for you to debug all of it...
I have also tried this call (and other portions) in many different ways, and nothing... I'm running out of ideas...
Thank you!!
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.highgui.VideoCapture;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
class My_Panel extends JPanel {
    private static final long serialVersionUID = 1L;
    private BufferedImage image;
    private CascadeClassifier face_cascade;

    // Create a constructor method
    public My_Panel() {
        super();
        String face_cascade_name = "/haarcascade_frontalface_alt.xml";
        //String face_cascade_name = "/lbpcascade_frontalface.xml";
        //-- 1. Load the cascades
        String str;
        str = getClass().getResource(face_cascade_name).getPath();
        str = str.replace("/C:", "C:");
        face_cascade_name = str;
        face_cascade = new CascadeClassifier(face_cascade_name);
        if (!face_cascade.empty()) {
            System.out.println("--(!)Error loading A\n");
            return;
        } else {
            System.out.println("Face classifier loooaaaaaded up");
        }
    }

    private BufferedImage getimage() {
        return image;
    }

    public void setimage(BufferedImage newimage) {
        image = newimage;
        return;
    }

    /**
     * Converts/writes a Mat into a BufferedImage.
     *
     * @param matrix Mat of type CV_8UC3 or CV_8UC1
     * @return BufferedImage of type TYPE_3BYTE_BGR or TYPE_BYTE_GRAY
     */
    public BufferedImage matToBufferedImage(Mat matrix) {
        int cols = matrix.cols();
        int rows = matrix.rows();
        int elemSize = (int) matrix.elemSize();
        byte[] data = new byte[cols * rows * elemSize];
        int type;
        matrix.get(0, 0, data);
        switch (matrix.channels()) {
            case 1:
                type = BufferedImage.TYPE_BYTE_GRAY;
                break;
            case 3:
                type = BufferedImage.TYPE_3BYTE_BGR;
                // bgr to rgb
                byte b;
                for (int i = 0; i < data.length; i = i + 3) {
                    b = data[i];
                    data[i] = data[i + 2];
                    data[i + 2] = b;
                }
                break;
            default:
                return null;
        }
        BufferedImage image2 = new BufferedImage(cols, rows, type);
        image2.getRaster().setDataElements(0, 0, cols, rows, data);
        return image2;
    }

    public void paintComponent(Graphics g) {
        BufferedImage temp = getimage();
        g.drawImage(temp, 10, 10, temp.getWidth(), temp.getHeight(), this);
    }

    public Mat detect(Mat inputframe) {
        Mat mRgba = new Mat();
        Mat mGrey = new Mat();
        MatOfRect faces = new MatOfRect();
        //MatOfRect eyes = new MatOfRect();
        inputframe.copyTo(mRgba);
        inputframe.copyTo(mGrey);
        Imgproc.cvtColor(mRgba, mGrey, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(mGrey, mGrey);
        face_cascade.detectMultiScale(mGrey, faces);
        //face_cascade.detectMultiScale(mGrey, faces, 1.1, 2, 0|Objdetect.CASCADE_SCALE_IMAGE, new Size(30, 30), new Size(200, 200));
        //face_cascade.detectMultiScale(mGrey, faces, 1.1, 2, 2 /*CV_HAAR_SCALE_IMAGE*/, new Size(30, 30), new Size(200, 200));
        System.out.println(String.format("Detected %s faces", faces.toArray().length));
        return mGrey;
    }
}
public class window {
    public static void main(String arg[]) {
        // Load the native library.
        System.loadLibrary("opencv_java245");
        String window_name = "Capture - Face detection";
        JFrame frame = new JFrame(window_name);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(400, 400);
        My_Panel my_panel = new My_Panel();
        frame.setContentPane(my_panel);
        frame.setVisible(true);
        //-- 2. Read the video stream
        BufferedImage temp;
        Mat webcam_image = new Mat();
        VideoCapture capture = new VideoCapture(0);
        if (capture.isOpened()) {
            while (true) {
                capture.read(webcam_image);
                if (!webcam_image.empty()) {
                    frame.setSize(webcam_image.width() + 40, webcam_image.height() + 60);
                    //-- 3. Apply the classifier to the captured image
                    // At this point I was wondering where this should be done.
                    // I put it within the panel class, but maybe one could actually
                    // create a processor object...
                    webcam_image = my_panel.detect(webcam_image);
                    //-- 4. Display the image
                    temp = my_panel.matToBufferedImage(webcam_image);
                    my_panel.setimage(temp);
                    my_panel.repaint();
                } else {
                    System.out.println(" --(!) No captured frame -- Break!");
                    break;
                }
            }
        }
        return;
    }
}
PS: Other info, just in case:
mGrey is: Mat [ 480*640*CV_8UC1, isCont=true, isSubmat=false, nativeObj=0x19d9af48, dataAddr=0x19dc3430 ]
faces is: Mat [ 0*0*CV_8UC1, isCont=false, isSubmat=false, nativeObj=0x194bb048, dataAddr=0x0 ]
I have tried your code and it works fine! You have only one issue: the location of the haarcascade_frontalface_alt.xml file. Try using a full path to the file:
face_cascade= new CascadeClassifier("D:/HelloCV/src/haarcascade_frontalface_alt.xml");

EmguCV: getting the largest blob

I'm working on color tracking...
Specifically, I'm tracking an orange ball (a basketball, perhaps), along with Kinect for the body; I'm making a free-throw shooting guide.
Here's my case:
I have already thresholded my image, eroded it to remove noise and other insignificant (non-ball) objects, and then dilated a few times to emphasize the ball.
So I've arrived at a final binary image where I've successfully isolated the ball, but there are other, smaller blobs that aren't the ball. How do I get the largest blob (the ball) and put a bounding box around it?
I've tried Hough circles, by the way, but that is very slow... Thanks! Some code would be useful.
This is the code I used to get the largest blob in the image:
public static Blob FindLargestObject(Image<Gray, byte> block, Rectangle rectangle)
{
    Image<Gray, byte> mask = block.CopyBlank();
    Contour<Point> largestContour = null;
    double largestarea = 0;
    for (var contours = block.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL); contours != null; contours = contours.HNext)
    {
        if (contours.Area > largestarea)
        {
            largestarea = contours.Area;
            largestContour = contours;
        }
    }
    if (largestContour == null)
    {
        return null; // no blobs found at all
    }
    // fill the largest contour
    mask.Draw(largestContour, new Gray(255), -1);
    return new Blob(mask, largestContour, rectangle);
}
For Blob:
public class Blob
{
    public Image<Gray, byte> Mask { get; set; }
    public Contour<Point> Contour { get; set; }
    public Rectangle Rectangle { get; set; }
    public Blob(Image<Gray, byte> mask, Contour<Point> contour, Rectangle rectangle)
    {
        Mask = mask;
        Contour = contour;
        Rectangle = rectangle;
    }
}
The blob will contain all the information that you want to get.
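A possible usage sketch (Emgu CV 2.x API; binaryImage and displayImage are hypothetical names): find the largest blob and draw its bounding box.

// Hedged usage sketch for the helper above: box the largest blob.
Blob ball = FindLargestObject(binaryImage, binaryImage.ROI);
if (ball != null)
{
    // Contour<Point> exposes a BoundingRectangle in Emgu CV 2.x.
    Rectangle box = ball.Contour.BoundingRectangle;
    displayImage.Draw(box, new Bgr(0, 255, 0), 2);
}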
