Image processing: find contours with EmguCV / OpenCV

I'm trying to find contours in order to convert an image to a DXF file. This is the image I'm working on.
// Convert to grayscale and binarize at intensity 100.
Image<Gray, byte> imgOut = img.Convert<Gray, byte>()
    .ThresholdBinary(new Gray(100), new Gray(255));
When I run FindContours and draw the result in a new image, I get this.
When I use the contours to build my DXF file, I get this.
Because the lines in the drawing are bold and thick, EmguCV treats them as closed polylines rather than single lines.
What should I do?

What is your goal? That would clarify the question.
If you are looking for the geometric objects of the drawing, you can find them directly inside the DXF file without using EmguCV. The walls and helper lines are probably not on the same layer, which will help you analyze features such as the area of the flat. That information is lost once you are looking at the b/w image.
If you are looking for low-level features like connected components, this post about blobs can help you. It's based on FindContours().

Related

How to get the solid black object alone, ignoring white islands, using Python

After applying image-processing techniques I got the input image; now I need the output shown in the attachment.
How can I delete all the white islands and keep only the solid black object?
Please suggest any available techniques.
To get rid of single pixels or small clusters, you can use morphological operations.
The OpenCV opening operation removes small objects:
dst = open(src, element) = dilate(erode(src, element))
where element is the kernel used for the dilation and erosion, and src is the source image.
The OpenCV tutorial is very helpful on this.
To find closed contours, you could use the findContours function after the morphological operations.
Tutorial

Getting single pixel line contours of jigsaw with OpenCV

I'm trying to automatically extract the contour of all the puzzle pieces from a photo of a puzzle. Here's the grayscale input image:
So far, I've been able to get to a more helpful image which varies less with the lighting conditions by taking the local standard deviation (standard deviation within an 11px box) and applying a 5px box blur to the result using OpenCV. That gives me this:
From here, I'm not sure what to do to get down to a single pixel line between the pieces. Having approximate contours for each individual piece isn't quite good enough, because I want to be able to cut up the image into the pieces and be able to move those pieces independently while the pieces still fit together perfectly.
The results of findContours are nowhere near good enough for this.
My ideal output here is to have an image that overlays the original and has a 1px black line between every adjacent pair of pieces.
You can do the following:
Use a local (adaptive) threshold to binarize the image.
Remove connected components with a small area.
Invert the image (255 - thresholdImage).
Find the connected components using findContours.
The result is:
It is far from perfect, but I think that is because I used the image you posted. If you post the original image, I should get better results.

SimpleBlobDetection Code

Hi, I am a complete novice in image processing, especially with OpenCV. I want to write a program for blob detection that takes an image as input and returns the color and centroid of the blob. My image consists purely of regular polygons on a black background; for example, it might contain a green (equilateral) triangle or a red square on a black background. I want to use the SimpleBlobDetector class in OpenCV and its detect function for this purpose. Since I'm a novice, a full program would be a lot of help.
I suggest you use the complementary OpenCV library cvBlob. It has an example that automatically obtains the blobs in an image along with their centroids, contours, etc.
Here is the source code; I tried it on OS X and it works really well.
Link: https://code.google.com/p/cvblob/

Extracting lines from an image to feed to OCR - Tesseract

I was watching this talk from PyCon: http://youtu.be/B1d9dpqBDVA?t=15m34s. Around the 15:33 mark the speaker talks about extracting lines from an image (a receipt) and then feeding them to the OCR engine so that the text can be extracted more accurately.
I have a similar need, since I'm passing images to an OCR engine. However, I don't quite understand what he means by extracting lines from an image. What are some open-source tools I can use to extract lines from an image?
Take a look at the technique used to detect the skew angle of a text image.
Groups of lines are used to isolate the text in the image (this is the interesting part).
From that result you can easily detect the upper and lower limits of each line of text; the text itself is located between them. I've faced a similar problem before, and the code might be useful to you:
All you need to do from here is crop each pair of lines and feed that as an image to Tesseract.
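The upper/lower-limit detection can be sketched with a horizontal projection profile, as below (pure NumPy; the synthetic two-line "page" is an assumption for illustration). Each (top, bottom) pair is one crop to feed to Tesseract.

```python
import numpy as np

# Binary page: 1 = ink. Two synthetic "text lines" as an assumed input.
page = np.zeros((60, 80), dtype=np.uint8)
page[10:18, 5:70] = 1      # first line of text
page[30:38, 5:60] = 1      # second line of text

# Horizontal projection: number of ink pixels in each row.
profile = page.sum(axis=1)
ink = profile > 0

# Rising/falling edges of the profile give each line's top and bottom row.
edges = np.diff(ink.astype(np.int8))
tops = np.where(edges == 1)[0] + 1
bottoms = np.where(edges == -1)[0] + 1
lines = list(zip(tops, bottoms))
print(lines)  # [(10, 18), (30, 38)]
```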
I can suggest a simple technique for feeding images to OCR: perform some operations to get the ROI (region of interest) of your image and localize the text area after binarizing the image. Then find contours and, by tuning the threshold value and setting a minimum contour area, you can feed the resulting image to the OCR.
Direct answer: you extract lines from an image with Hough Transform.
You can find an analytical guide here.
Text lines can be detected as well. Karlphillip's answer is based on Hough Transform too.

Convert raster images to vector graphics using OpenCV?

I'm looking for a way to convert raster images to vector data using OpenCV. I found the function cv::findContours(), which seems a bit primitive (or, more likely, I did not understand it fully):
It seems to work on b/w images only (no greyscale and no colour images) and does not seem to accept any filtering/error-suppression parameters that would be helpful with noisy images, to avoid very short vector lines or uneven polylines where a single straight line would be the better result.
So my question: is there an OpenCV facility to vectorize colour raster images, where the colour information is assigned to the resulting polylines afterwards? And how can I apply noise reduction and error suppression to such an algorithm?
Thanks!
If you want to vectorize the raster image by colour, I recommend clustering the image into a small set of colours (i.e. quantizing it), then extracting the contours of each colour separately and converting them to the format you need. There are no ready-made vectorization methods in OpenCV.
