Image processing to size bubbles in Octave - image-processing

Hi, I am wondering whether anybody can offer any pointers on a potential approach to sizing the bubbles at the water surface (not those below it) in the following image. I would like to use open source software if possible (my mind is leaning towards Octave, given that an image is a matrix). I have absolutely no background in image processing, so any ideas are welcome. As a starting point, I know the size of each pixel in the raw image (this image is a compressed version), so calculating a radius in pixels would be perfect.
Edit based upon the thoughts of @mmgp
To make the question more direct, I have taken on board the thoughts of @mmgp and used the open source OpenCV library. I have never used it before (nor indeed programmed directly in C or C++), but it looks as though it could fulfil my needs, and although it may have a steep learning curve, my experience tells me that the solutions offering the most power often require time spent learning. Here is what I have done so far (with no background in image processing I am not sure the functions I have used are ideal, but I thought it might promote further thought): I converted the image to grayscale, applied a binary threshold, and then applied a Hough transform for circles. The images I generate at each stage are below, as well as the code I have used. What is clear is that trackbars are very useful for tweaking parameters on the fly. I am, however, not yet adept enough to implement them in my code (any pointers would be great, especially regarding the Hough transform, where there are several parameters to tweak).
So what do you think? What other function(s) might I try? Clearly my attempt is nowhere near as good as @mmgp's, but that may just be a matter of tweaking parameters.
Here are the photos:
Grayscale (for completeness):
Binary threshold:
Circle image:
Here is the code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace cv;

/** @function main */
int main(int argc, char** argv)
{
    Mat src, src_gray, src_bw;

    /// Read the image
    src = imread(argv[1], 1);
    if (!src.data)
    { return -1; }

    /// Convert it to grayscale
    cvtColor(src, src_gray, CV_BGR2GRAY);
    imwrite("Gray_Image.jpg", src_gray);

    /// Threshold the image to make it binary
    threshold(src_gray, src_bw, 140, 255, CV_THRESH_BINARY);
    imwrite("Binary_Image.jpg", src_bw);

    /// Apply the Hough transform to find the circles
    vector<Vec3f> circles;
    HoughCircles(src_bw, circles, CV_HOUGH_GRADIENT, 5, src_bw.rows/2, 5, 10, 0, 0);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // circle center
        circle(src, center, 3, Scalar(0,255,0), -1, 8, 0);
        // circle outline
        circle(src, center, radius, Scalar(0,0,255), 3, 8, 0);
    }

    /// Show the results
    namedWindow("Hough Circle Transform Demo", 0);
    namedWindow("Gray", 0);
    namedWindow("Binary Threshold", 0);
    imshow("Hough Circle Transform Demo", src);
    imshow("Gray", src_gray);
    imshow("Binary Threshold", src_bw);
    imwrite("Circles_Image.jpg", src);

    waitKey(0);
    return 0;
}
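Regarding the trackbars mentioned above, here is a minimal sketch of how OpenCV's createTrackbar could be wired up to re-run HoughCircles on the fly. The trackbar names, ranges and default values below are illustrative guesses rather than tuned settings:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <algorithm>
#include <vector>

using namespace cv;

Mat src, src_bw;
int cannyThresh = 100; // upper Canny threshold used internally by HoughCircles
int accumThresh = 10;  // accumulator threshold: lower finds more (possibly false) circles

// re-detect and redraw circles whenever a slider moves
void onTrackbar(int, void*)
{
    std::vector<Vec3f> circles;
    HoughCircles(src_bw, circles, CV_HOUGH_GRADIENT, 2, src_bw.rows/8,
                 std::max(cannyThresh, 1), std::max(accumThresh, 1), 0, 0);

    Mat display = src.clone();
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        circle(display, center, cvRound(circles[i][2]), Scalar(0,0,255), 2, 8, 0);
    }
    imshow("Tuned Circles", display);
}

int main(int argc, char** argv)
{
    src = imread(argv[1], 1);
    Mat src_gray;
    cvtColor(src, src_gray, CV_BGR2GRAY);
    threshold(src_gray, src_bw, 140, 255, CV_THRESH_BINARY);

    namedWindow("Tuned Circles", 0);
    createTrackbar("Canny", "Tuned Circles", &cannyThresh, 255, onTrackbar);
    createTrackbar("Accum", "Tuned Circles", &accumThresh, 100, onTrackbar);
    onTrackbar(0, 0); // run once with the defaults
    waitKey(0);
    return 0;
}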

Another possible path to consider would be template matching: you just need to create a template image of a typical bubble. This might be useful for rejecting false positives produced by the Hough transform. You will need template images of varying sizes to detect different-sized bubbles.
Also, if you have a picture of the water from before the bubbles appeared, you can subtract it to find the areas of the image that contain bubbles.
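As a rough illustration of the template-matching idea, here is a minimal matchTemplate sketch. The file names and the 0.7 score threshold are placeholders, and you would repeat this with rescaled templates to cover different bubble sizes:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

int main()
{
    // hypothetical file names: the scene and a cropped typical bubble
    Mat img  = imread("bubbles.jpg", 0);
    Mat tmpl = imread("bubble_template.jpg", 0);

    // normalized cross-correlation: scores near 1 indicate a good match
    Mat result;
    matchTemplate(img, tmpl, result, CV_TM_CCOEFF_NORMED);

    // keep locations scoring above a tunable threshold (0.7 is a guess)
    Mat matches = result > 0.7;
    imwrite("match_mask.png", matches);
    return 0;
}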

Related

Recognizing a topological graph in a noisy image

I am not at all experienced with machine learning or image processing, so I'm hoping someone can give some pointers or first thoughts on this problem:
The image below is an example photograph of a tomato plant leaf. We have thousands of these. We need to trace the veins and output a graph. We have already had undergraduates trace the veins by hand for a few hundred, so I presume this can serve as a training set for a machine learning approach.
So my question: what types of filters/classifiers immediately come to mind? Is there anything you recommend I read or take a look at?
Our first thought was to look at directional derivatives. Each pixel can be classified as being in an edge or not at a given angle, and if a pixel is in an edge for a lot of different angles, then it's probably a blotch and not a vein. The gradient threshold and the allowed angle variation could then be tuned by the learning, but this is probably not the best way...
Thank you for any help!
Two methods immediately come to mind:
1. a sliding-window neural network classifier;
2. identifying a threshold that separates dark and light pixels in the image (this could be done using machine learning or perhaps a simple computation) and then doing a flood fill to identify regions in the image.
The second method should be simpler and quicker, so I'd perhaps prototype it first to see if it gives good enough answers; a rough sketch follows below.
In any case, my intuition is that it's going to be easier to solve the dual problem - not trying to find the edges and nodes of the graph, but finding its faces. From that, you get the graph itself easily.
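Here is that rough sketch of the second method, using Otsu's method to pick the dark/light threshold automatically and findContours as a stand-in for the flood fill; the file name is hypothetical and the parameters are untuned:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;

int main()
{
    // hypothetical input file name
    Mat im = imread("leaf.png", 0);

    // Otsu's method picks the dark/light split automatically;
    // the light cells between the dark veins approximate the faces of the graph
    Mat bw;
    threshold(im, bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    // each connected region is a candidate face
    // (findContours modifies its input, hence the clone)
    std::vector<std::vector<Point> > contours;
    findContours(bw.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // paint the regions for inspection
    Mat regions = Mat::zeros(im.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); i++)
        drawContours(regions, contours, (int)i, Scalar(255), CV_FILLED);
    imwrite("regions.png", regions);
    return 0;
}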
I wrote this very simple program to filter the vein regions using OpenCV. I've added comments to explain the operations, and the images for the intermediate steps are saved to disk. Hope it helps.
#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

#define INPUT_FILE "wMTjH3L.png"
#define OUTPUT_FOLDER_PATH string("")
#define CONTOUR_AREA_THRESHOLD 30.0

int _tmain(int argc, _TCHAR* argv[])
{
    // read image as grayscale
    Mat im = imread(INPUT_FILE, CV_LOAD_IMAGE_GRAYSCALE);
    imwrite(OUTPUT_FOLDER_PATH + string("gray.jpg"), im);

    // smooth the image with a Gaussian filter
    Mat blurred;
    GaussianBlur(im, blurred, Size(3, 3), 1.5);
    imwrite(OUTPUT_FOLDER_PATH + string("blurred.jpg"), blurred);

    // flatten lighter regions while retaining the darker vein regions
    // using morphological opening
    Mat morph;
    Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    morphologyEx(blurred, morph, MORPH_OPEN, morphKernel);
    imwrite(OUTPUT_FOLDER_PATH + string("morph.jpg"), morph);

    // apply adaptive thresholding
    Mat adaptTh;
    adaptiveThreshold(morph, adaptTh, 255.0, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 7, 2.0);
    imwrite(OUTPUT_FOLDER_PATH + string("adaptth.jpg"), adaptTh);

    // morphological closing to merge disjoint regions
    Mat morphBin;
    Mat morphKernelBin = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(adaptTh, morphBin, MORPH_CLOSE, morphKernelBin);
    imwrite(OUTPUT_FOLDER_PATH + string("adptmorph.jpg"), morphBin);

    // find contours
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(morphBin, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // filter contours by region area and draw each in a random color;
    // the hierarchy walk visits only the top-level contours
    RNG rng(12345);
    Mat drawing = Mat::zeros(morphBin.size(), CV_8UC3);
    for (int idx = 0; !contours.empty() && idx >= 0; idx = hierarchy[idx][0])
    {
        if (contourArea(contours[idx]) > CONTOUR_AREA_THRESHOLD)
        {
            Scalar color(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
            drawContours(drawing, contours, idx, color, CV_FILLED, 8, hierarchy);
        }
    }
    imwrite(OUTPUT_FOLDER_PATH + string("cont.jpg"), drawing);
    return 0;
}
The output looks like this for the provided sample image:

opencv: Please help me find the grid of a carton

This question has been annoying me for over 2 weeks.
My goal is to analyze a set of products stored in cartons on a shelf.
Right now, I have tried the following methods from the OpenCV Python module: findContours, Canny, HoughLines and cv2.HoughLinesP, but I can't recover the grid.
My goal is to check whether the carton has been filled up with products.
Here is the original image: http://postimg.org/image/hyz1jpd7p/7a4dd87c/
My first step is to use a closing transformation:
closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
This gives me the contours (I don't have enough reputation to post this URL; the image is similar to the last image below, but without the red lines!).
Finally, the question is: how can I find the carton grid (i.e., locate the products in it one by one)? I have added red lines in the image below to show what I mean.
Please give me some hints, thank you very much!
Red lines: http://postimg.org/image/6i0di4gsx/
I've played a little bit with the input and found a way to extract basically the grid with HoughLinesP after thresholding the hue channel.
Edit: I'm using C++, but similar Python methods should be available, I guess.
cv::Mat image = cv::imread("box1.png");
cv::Mat output;
image.copyTo(output);

cv::Mat hsv;
cv::cvtColor(image, hsv, CV_BGR2HSV);

std::vector<cv::Mat> hsv_channels;
cv::split(hsv, hsv_channels);

// thresholding here is a little sloppy, maybe you have to use some smarter way
cv::Mat h_thres = hsv_channels[0] < 50;

// unfortunately, HoughLinesP couldn't detect all the lines if they were too wide.
// To make this part more robust I would suggest a ridge detection on the
// distance-transformed image instead of 'some erodes after a dilate'
cv::dilate(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());

std::vector<cv::Vec4i> lines;
cv::HoughLinesP(h_thres, lines, 1, CV_PI/(4*180.0), 50, image.cols/4, 10);

for (size_t i = 0; i < lines.size(); i++)
{
    cv::line(output, cv::Point(lines[i][0], lines[i][1]),
             cv::Point(lines[i][2], lines[i][3]), cv::Scalar(155,255,155), 1, 8);
}
Here are the images:
Hue channel after HSV conversion:
Thresholded hue channel:
Output:
Maybe someone else has an idea how to improve HoughLinesP without those erode steps...
Hope this method helps you a bit and you can improve it further for your needs.
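As for the ridge-detection remark in the code comments, here is a rough, untested sketch of what that could look like as a drop-in replacement for the dilate/erode chain, reusing h_thres from above. Local maxima of the distance transform approximate the centre lines of the wide regions; the 2.0 cutoff is a guess:

// drop-in replacement for the dilate/erode chain above, reusing h_thres
cv::Mat dist;
cv::distanceTransform(h_thres, dist, CV_DIST_L2, 3);

// a pixel equal to the maximum of its 3x3 neighbourhood lies on a ridge
// (roughly the medial axis of the wide lines); the second condition
// suppresses weak responses near the region borders
cv::Mat dilated;
cv::dilate(dist, dilated, cv::Mat());
cv::Mat ridges = (dist >= dilated) & (dist > 2.0f);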

iOS & OpenCV: Image Registration / Alignment

I am doing a project that combines multiple images, similar to HDR, in iOS. I have managed to capture 3 images of different exposures through the camera, and now I want to align them, because the camera shook slightly during capture and all 3 images have slightly different alignment.
I have imported the OpenCV framework and have been exploring its functions to align/register images, but found nothing. Is there actually a function in OpenCV to achieve this? If not, are there any alternatives?
Thanks!
In OpenCV 3.0 you can use findTransformECC. I have copied this ECC image alignment code from LearnOpenCV.com, where a very similar problem is solved for aligning color channels. The post also contains code in Python. Hope this helps.
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Read the images to be aligned
    Mat im1 = imread("images/image1.jpg");
    Mat im2 = imread("images/image2.jpg");

    // Convert images to grayscale
    Mat im1_gray, im2_gray;
    cvtColor(im1, im1_gray, COLOR_BGR2GRAY);
    cvtColor(im2, im2_gray, COLOR_BGR2GRAY);

    // Define the motion model
    const int warp_mode = MOTION_EUCLIDEAN;

    // Set a 2x3 or 3x3 warp matrix depending on the motion model,
    // initialized to the identity
    Mat warp_matrix;
    if (warp_mode == MOTION_HOMOGRAPHY)
        warp_matrix = Mat::eye(3, 3, CV_32F);
    else
        warp_matrix = Mat::eye(2, 3, CV_32F);

    // Specify the number of iterations
    int number_of_iterations = 5000;

    // Specify the threshold of the increment
    // in the correlation coefficient between two iterations
    double termination_eps = 1e-10;

    // Define termination criteria
    TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, number_of_iterations, termination_eps);

    // Run the ECC algorithm; the results are stored in warp_matrix
    findTransformECC(im1_gray, im2_gray, warp_matrix, warp_mode, criteria);

    // Storage for the warped image
    Mat im2_aligned;
    if (warp_mode != MOTION_HOMOGRAPHY)
        // Use warpAffine for Translation, Euclidean and Affine
        warpAffine(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);
    else
        // Use warpPerspective for Homography
        warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);

    // Show the final result
    imshow("Image 1", im1);
    imshow("Image 2", im2);
    imshow("Image 2 Aligned", im2_aligned);
    waitKey(0);
    return 0;
}
There is no single function called something like align; you need to implement it yourself, or find an existing implementation.
Here is one solution.
You need to extract keypoints from all 3 images and try to match them. Make sure your keypoint extraction technique is invariant to illumination changes, since the images have different intensity values because of the different exposures. Match the keypoints, estimate the displacement between the matched points, and use it to align your images.
Admittedly this answer is superficial; for details you first need to do some research on keypoint/descriptor extraction and keypoint/descriptor matching.
Good luck!
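For what it's worth, here is a hedged sketch of that keypoint route using ORB, whose binary descriptors tolerate brightness changes reasonably well; it assumes OpenCV 3.x, and all file names and thresholds are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main()
{
    // hypothetical file names for two of the three exposures
    Mat im1 = imread("exposure1.jpg", IMREAD_GRAYSCALE);
    Mat im2 = imread("exposure2.jpg", IMREAD_GRAYSCALE);

    // detect keypoints and compute binary descriptors
    Ptr<ORB> orb = ORB::create(2000);
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    orb->detectAndCompute(im1, noArray(), kp1, desc1);
    orb->detectAndCompute(im2, noArray(), kp2, desc2);

    // brute-force Hamming matching with cross-checking
    BFMatcher matcher(NORM_HAMMING, true);
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // collect the matched point pairs
    std::vector<Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); i++)
    {
        pts1.push_back(kp1[matches[i].queryIdx].pt);
        pts2.push_back(kp2[matches[i].trainIdx].pt);
    }
    if (pts1.size() < 4)
        return -1; // a homography needs at least 4 correspondences

    // robust estimate with RANSAC, then warp the second exposure onto the first
    Mat H = findHomography(pts2, pts1, RANSAC, 3.0);
    Mat im2_aligned;
    warpPerspective(im2, im2_aligned, H, im1.size());
    imwrite("exposure2_aligned.jpg", im2_aligned);
    return 0;
}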

OpenCV findContours destroying source image

I am writing code that draws a circle, a line and a rectangle in a single-channel blank image. After that I find the contours in the image, and I get all the contours correctly, but after finding the contours my source image is distorted. Why is this happening? Can anyone help me solve it? My code is below.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat dst = Mat::zeros(480, 480, CV_8UC1);
    Mat draw = Mat::zeros(480, 480, CV_8UC1);

    line(draw, Point(100,100), Point(150,150), Scalar(255,0,0), 1, 8, 0);
    rectangle(draw, Rect(200,300,10,15), Scalar(255,0,0), 1, 8, 0);
    circle(draw, Point(80,80), 20, Scalar(255,0,0), 1, 8, 0);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(draw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        Scalar color(255,255,255);
        drawContours(dst, contours, (int)i, color, 1, 8, hierarchy);
    }

    imshow("Components", dst);
    imshow("draw", draw);
    waitKey(0);
    return 0;
}
Source image
Distorted source after finding contours
The documentation clearly states that the source image is altered when using findContours:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours
See the first note.
If you need the source image, you have to run findContours on a copy.
Try using
findContours( draw.clone(), contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
For me, the second image looks like what I would expect as a result from an edge detection algorithm. My guess is that the findContours function overwrites the original image with the result it finds.
Have a look here.
I think the problem is that you are expecting a perfect plot from findContours, and it gives you an ugly drawing.
findContours does not give an exact plot of your figures; you must use drawContours to generate a proper image.
See the reference here: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours
You can see that the first parameter is an input/output array, so the function opens, modifies and saves the image in the same array. That's why you are getting a distorted image.
In addition, see the parameter explanations. About the first parameter it says: "The function modifies the image while extracting the contours."
I haven't worked a lot with findContours, but I never got a clean image from it directly; I always had to use drawContours to get a nice plot.
Otherwise you can use the Canny function, which will give you the edges instead of the contours.
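For reference, a minimal hedged example of that Canny route on the questioner's draw image; the 50/150 hysteresis thresholds are arbitrary illustrative values, and unlike findContours, Canny leaves its source untouched:

// Canny writes the edge map to a separate image and does not modify `draw`;
// the 50/150 thresholds are illustrative, not tuned
Mat edges;
Canny(draw, edges, 50, 150);
imshow("edges", edges);
waitKey(0);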

OpenCV Mean/SD filter

I'm throwing this out there in the hope that someone has attempted something this ridiculous before. My goal is to take an input image and segment it based on the standard deviation of a small window around each pixel. Basically, this should mathematically resemble a Gaussian or box filter, in that it will be applied with a compile-time (or even run-time) user-specified window size around each pixel, and the destination array will contain the SD information at each pixel, in an image the same size as the original.
The idea is to do this on an image in HSV space, so that I can easily find regions of homogeneous color (i.e. those with small local SDs in the hue and saturation planes) and extract them from the image for more in-depth processing.
So the question is: has anyone ever built a custom filter like this before? I don't know how to compute the SD in a simple box-type filter kernel like the ones used for Gaussian and box blur, so I'm guessing I'll have to use the FilterEngine construct. Also, I forgot to mention I'm doing this in C++.
Your advice and musings are much appreciated.
Wikipedia has a nice explanation of standard deviation, which you can use to build a standard deviation filter.
Basically, it boils down to blurring the image with a box filter, blurring the square of the image with a box filter, and taking the square root of their difference.
UPDATE: This is probably better shown with the equation from Wikipedia: sigma = sqrt(E[X^2] - (E[X])^2).
You can think of the OpenCV blur function as representing the expected value (i.e., E[X], a.k.a. the sample mean) of the neighborhood of interest; the random samples X in this case are represented by the image pixels in the local neighborhood. Therefore, by using the above equivalence, we have something like sqrt(blur(img^2) - blur(img)^2) in OpenCV. Doing it this way allows you to compute the local means and standard deviations.
Also, just in case you are curious about the mathematical proof: this equivalence is known as the computational formula for variance.
Here is how you can do this in OpenCV:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

Mat mat2gray(const Mat& src)
{
    Mat dst;
    normalize(src, dst, 0.0, 1.0, NORM_MINMAX);
    return dst;
}

int main()
{
    Mat image = imread("coke-can.jpg", 0);

    Mat image32f;
    image.convertTo(image32f, CV_32F);

    // local mean: E[X]
    Mat mu;
    blur(image32f, mu, Size(3, 3));

    // local mean of the squared image: E[X^2]
    Mat mu2;
    blur(image32f.mul(image32f), mu2, Size(3, 3));

    // local standard deviation: sqrt(E[X^2] - E[X]^2)
    Mat sigma;
    cv::sqrt(mu2 - mu.mul(mu), sigma);

    imshow("coke", mat2gray(image32f));
    imshow("mu", mat2gray(mu));
    imshow("sigma", mat2gray(sigma));
    waitKey();
    return 0;
}
This produces the following images:
Original
Mean
Standard Deviation
Hope that helps!
If you want to use this in a more general way, note that it can produce NaN values: due to floating-point rounding, values that should be zero can sometimes come out slightly negative, so
Mat sigma;
cv::sqrt(mu2 - mu.mul(mu), sigma);
should instead be
Mat sigma;
cv::sqrt(cv::abs(mu2 - mu.mul(mu)), sigma);
