Extracting a part of an image in LabVIEW - image-processing

Using the LabVIEW Vision Toolkit I want to extract the average color in a region of the image, but I am having some trouble extracting a part of the image. What I have tried just seems to give me the same image back out.

I'm using a different way to extract a portion of an image that does not need the Vision Toolkit; maybe it could be of some use to you.
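For reference, the same idea expressed outside LabVIEW: the region is just an array subset, and the average color is the mean over that subset. A minimal Python/OpenCV sketch (the file name and coordinates are placeholder assumptions):

```python
import cv2

# load the image as a NumPy array (BGR channel order)
img = cv2.imread("input.png")

# region of interest: rows y0..y1, columns x0..x1 (placeholder values)
x0, y0, x1, y1 = 100, 50, 300, 200
roi = img[y0:y1, x0:x1]

# mean over the region, one value per channel (B, G, R)
avg_color = roi.reshape(-1, 3).mean(axis=0)
print(avg_color)
```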

Related

Quick image search using histograms of colors

I want to search images using their color histograms. For extracting these histograms I will use OpenCV; I also found examples which describe how to compare two images using their color histograms. But I have some questions:
Google and other search engines use these histograms for search-by-image, but I do not think they iteratively compare the query image against every image in the database (as is done in the OpenCV examples). So how can I implement a quick image search using histograms?
Can I use a common RDBMS like MySQL for this and other image-search purposes?
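For reference, the pairwise comparison mentioned above can be sketched with OpenCV as follows (file names are placeholders; for a large database the histograms would typically be indexed with a nearest-neighbour structure rather than compared one by one):

```python
import cv2

def color_histogram(path, bins=8):
    """8x8x8 BGR color histogram, normalised so images of
    different sizes are comparable."""
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

h1 = color_histogram("query.jpg")
h2 = color_histogram("candidate.jpg")

# correlation metric: 1.0 means identical histograms
similarity = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
print(similarity)
```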

How to extract graph region from a picture using Image Processing

Given a scanned image containing graphs and text, how can I extract only the graph regions from that picture? Can you mention any image-processing algorithms?
You could do connected component analysis, filtering out everything that does not look like character bounding boxes. An example paper is Robust Text Detection from Binarized Document Images (https://www.researchgate.net/publication/220929376_Robust_Text_Detection_from_Binarized_Document_Images), but there are a lot of approaches. Whether you can get away with something simple depends on your exact needs.
There is a lot of more complex work available, too. One example: Fast and robust text detection in images and video frames (http://ucassdl.cn/publication/ivc.pdf).
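A rough sketch of the connected-component idea with OpenCV (the file name and size thresholds are placeholder assumptions that depend on the scan resolution): binarize, label components, and keep everything that does not look like a character-sized box as graphics.

```python
import cv2
import numpy as np

img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

graphics_mask = np.zeros_like(binary)
for i in range(1, n):                       # label 0 is the background
    x, y, w, h, area = stats[i]
    looks_like_character = 5 < w < 50 and 5 < h < 50
    if not looks_like_character:
        # keep components too large to be characters: graphs, figures
        graphics_mask[labels == i] = 255

cv2.imwrite("graphics_only.png", graphics_mask)
```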

SimpleBlobDetection Code

Hi, I am a pure novice in image processing, especially with OpenCV. I want to write a program for blob detection that takes an image as input and returns the color and centroid of the blob. My image consists purely of regular polygons on a black background. For example, my image might consist of a green (equilateral) triangle or a red square on a black background. I want to use the SimpleBlobDetector class in OpenCV and its 'detect' function for this purpose. Since I'm a novice, a full program would be a lot of help to me.
I suggest you use cvblob, a complementary library for OpenCV. It has an example that automatically obtains the blobs in an image, along with their centroids, contours, etc.
Here is the source code; I tried it on OS X and it works really well.
Link: https://code.google.com/p/cvblob/
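If cvblob is not an option, here is a simple contour-based sketch in Python/OpenCV (not cvblob and not SimpleBlobDetector) that gets the centroid and color of each solid shape on a black background; the file name is a placeholder.

```python
import cv2
import numpy as np

img = cv2.imread("shapes.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid

    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, [c], -1, 255, -1)            # filled blob mask
    color = cv2.mean(img, mask=mask)[:3]                # mean B, G, R
    print((cx, cy), color)
```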

Extracting lines from an image to feed to OCR - Tesseract

I was watching this talk from PyCon (http://youtu.be/B1d9dpqBDVA?t=15m34s); around the 15:33 mark the speaker talks about extracting lines from an image (a receipt) and then feeding them to the OCR engine so that the text can be extracted more accurately.
I have a similar need where I'm passing images to the OCR engine. However, I don't quite understand what he means by extracting lines from an image. What are some open-source tools that I can use to extract lines from an image?
Take a look at the technique used to detect the skew angle of text.
Groups of lines are used to isolate the text in an image (this is the interesting part).
From this result you can easily detect the upper/lower limits of each line of text; the text itself will be located between them. I've faced a similar problem before, and the code might be useful to you.
All you need to do from here is crop each pair of lines and feed that as an image to Tesseract.
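A sketch of that upper/lower-limit step using a horizontal projection profile in Python/OpenCV (an assumed stand-in for the code referenced above; the file name is a placeholder): rows containing ink appear as peaks, gaps between text lines as zeros, and each run of non-empty rows is cropped and handed to Tesseract.

```python
import cv2

img = cv2.imread("receipt.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

row_sums = binary.sum(axis=1)            # amount of ink per row
rows_with_text = row_sums > 0

lines, start = [], None
for y, has_text in enumerate(rows_with_text):
    if has_text and start is None:
        start = y                        # top of a text line
    elif not has_text and start is not None:
        lines.append((start, y))         # bottom of the text line
        start = None

for i, (top, bottom) in enumerate(lines):
    # each cropped strip can be fed to Tesseract as its own image
    cv2.imwrite(f"line_{i}.png", img[top:bottom])
```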
A simpler technique for feeding images to OCR: binarize the image and localize the ROI (region of interest), then find contours; by choosing a suitable threshold value and a minimum contour area, you can feed the resulting region to the OCR.
Direct answer: you extract lines from an image with the Hough Transform.
You can find an analytical guide here.
Text lines can be detected as well. karlphillip's answer is based on the Hough Transform too.
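A minimal Hough-transform sketch with OpenCV (the Canny and Hough parameters are placeholders and usually need per-image tuning):

```python
import cv2
import numpy as np

img = cv2.imread("receipt.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("detected_lines.png", vis)
```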

Image Processing open source program? [duplicate]

My current project involves transcribing texts in PDFs into text files. I first tried putting the image files directly into the OCR program (Tesseract), and it didn't do that well.
The original images are basically old newspapers and have background noise, which I am sure Tesseract has problems with, so I am trying to apply some image preprocessing before feeding them into Tesseract. Are there any suggestions for an open-source image-preprocessing engine that fits this situation well? Instructions on how to use it would be even more appreciated!
I have never heard of an "image preprocessing engine" for that purpose, but you can take a look at OpenCV (the Open Source Computer Vision Library) and implement your own "preprocessing engine". OpenCV is a computer vision library that offers many features for image processing.
One interesting thing you might want to test as a preprocessing step is applying a threshold to the image to remove noise. Anyway, I've talked about this kind of thing in this thread.
As karlphillip mentioned, I highly doubt there's a readily available preprocessing engine for your purposes, as the appropriate preprocessing techniques vary greatly with the desired result.
Some common approaches to clearing up the text in noisy images include:
1. Adaptive thresholding (Sauvola or Niblack binarization)
2. Applying a median filter of a size slightly larger than the text to get a background image, then subtracting that background from the original image (to remove larger-scale noise like creases, stains, handwritten notes, etc.).
OpenCV has implementations of these filters/binarization methods; a rough sketch of both steps is shown below. If you have access to published literature, there's quite a bit of work on the binarization of noisy documents.
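A minimal sketch of both steps in Python/OpenCV, using adaptiveThreshold (a mean/Gaussian local threshold rather than Sauvola or Niblack proper) and medianBlur; the file name and kernel sizes are placeholders tied to the text size in the scan.

```python
import cv2

img = cv2.imread("newspaper_scan.png", cv2.IMREAD_GRAYSCALE)

# 1. adaptive thresholding: the threshold is computed per local window
binarized = cv2.adaptiveThreshold(img, 255,
                                  cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 31, 15)

# 2. background subtraction: a median filter larger than the text strokes
#    estimates the background; subtracting it leaves mostly the text
background = cv2.medianBlur(img, 51)
foreground = cv2.subtract(background, img)   # text strokes remain bright
cleaned = cv2.bitwise_not(foreground)        # back to dark text on white

cv2.imwrite("binarized.png", binarized)
cv2.imwrite("cleaned.png", cleaned)
```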
Check out ScanTailor. It has pretty impressive pre-processing functionality and it is open source.
