Wound Segmentation using Wavelet Transform in OpenCV

We tried a local histogram approach for wound segmentation, which didn't work well for all kinds of images, and then we thought of using the wavelet transform for wound segmentation.
Which wavelet transform would be good for wound segmentation, and do you have any tips for implementing it?
Is there a better way than the wavelet transform to segment wounds under all lighting conditions?
We also tried image clustering, which didn't go that well.
Here are some test cases and the clustering program we used.
#include "cv.h"
#include "highgui.h"
#include <iostream>
void show_result(const cv::Mat& labels, const cv::Mat& centers, int height, int width);
int main(int argc, const char * argv[])
{
cv::Mat image = cv::imread("kmean.jpg");
if ( image.empty() ) {
std::cout << "unable to load an input image\n";
return 1;
}
//cv::cvtColor(image,image,CV_BGR2HSV);
std::cout << "image: " << image.rows << ", " << image.cols << std::endl;
assert(image.type() == CV_8UC3);
cv::imshow("image", image);
cv::Mat reshaped_image = image.reshape(1, image.cols * image.rows);
std::cout << "reshaped image: " << reshaped_image.rows << ", " << reshaped_image.cols << std::endl;
assert(reshaped_image.type() == CV_8UC1);
//check0(image, reshaped_image);
cv::Mat reshaped_image32f;
reshaped_image.convertTo(reshaped_image32f, CV_32FC1, 1.0 / 255.0);
std::cout << "reshaped image 32f: " << reshaped_image32f.rows << ", " << reshaped_image32f.cols << std::endl;
assert(reshaped_image32f.type() == CV_32FC1);
cv::Mat labels;
int cluster_number = 4;
cv::TermCriteria criteria(cv::TermCriteria::COUNT, 100, 1);
cv::Mat centers;
cv::kmeans(reshaped_image32f, cluster_number, labels, criteria, 1, cv::KMEANS_PP_CENTERS, centers);
show_result(labels, centers, image.rows,image.cols);
return 0;
}
void show_result(const cv::Mat& labels, const cv::Mat& centers, int height, int width)
{
std::cout << "===\n";
std::cout << "labels: " << labels.rows << " " << labels.cols << std::endl;
std::cout << "centers: " << centers.rows << " " << centers.cols << std::endl;
assert(labels.type() == CV_32SC1);
assert(centers.type() == CV_32FC1);
cv::Mat rgb_image(height, width, CV_8UC3);
cv::MatIterator_<cv::Vec3b> rgb_first = rgb_image.begin<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> rgb_last = rgb_image.end<cv::Vec3b>();
cv::MatConstIterator_<int> label_first = labels.begin<int>();
cv::Mat centers_u8;
centers.convertTo(centers_u8, CV_8UC1, 255.0);
cv::Mat centers_u8c3 = centers_u8.reshape(3);
while ( rgb_first != rgb_last ) {
const cv::Vec3b& rgb = centers_u8c3.ptr<cv::Vec3b>(*label_first)[0];
*rgb_first = rgb;
++rgb_first;
++label_first;
}
cv::imshow("tmp", rgb_image);
cv::waitKey();
}
Wound 1 with background (two clusters):
Wound 1 without background:
Wound 2 with background:
Wound 2 without background (three clusters):
When we remove the background we get somewhat better segmentation, but to remove the background we are using GrabCut, which relies on manual operation. So we need a substitute for the k-means clustering for segmenting the image, or some improvements to the above code, so that it succeeds in all cases.
So, is there a better way to segment the wounds?

Instead of attempting to use the traditional wavelet transform, you may want to try Haar-like wavelets tuned for object detection tasks, similar to the basis of the integral images used in the Viola-Jones face detector. The paper by Lienhart et al. on generic object detection would be a good start.
From the looks of your example images, the variance of intensities within small pixel neighbourhoods in the wound is a lot higher, whereas the unbruised skin appears fairly uniform in small neighbourhoods. The Lienhart approach should be able to detect such variations: you can either feed the features into a machine learning setup, or just make manual observations and define the search windows and related heuristics.
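To make the local-variance observation concrete, here is a minimal sketch (my own illustration, not part of the original answer; the file name, the 9x9 window, and the 0.002 threshold are all assumptions to tune) that computes a per-pixel variance map with box filters and thresholds it into a rough wound mask:

#include <opencv2/opencv.hpp>

int main()
{
    // "wound.jpg" is a placeholder file name
    cv::Mat img = cv::imread("wound.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    cv::Mat f, mean, mean_sq, variance;
    img.convertTo(f, CV_32F, 1.0 / 255.0);

    // Local variance: E[x^2] - (E[x])^2 over a small neighbourhood
    cv::boxFilter(f, mean, CV_32F, cv::Size(9, 9));
    cv::boxFilter(f.mul(f), mean_sq, CV_32F, cv::Size(9, 9));
    variance = mean_sq - mean.mul(mean);

    // High-variance regions are wound candidates; 0.002 is a guess
    cv::Mat mask = variance > 0.002;  // the comparison yields a CV_8U mask

    cv::imshow("variance mask", mask);
    cv::waitKey();
    return 0;
}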
Hope this helps.

Related

How to reduce motion blur in a picture?

I am trying to detect the QR data from a blurry image and have not been successful so far.
I have tried a couple of morphology operations on the image and still did not get the data embedded in it.
How can I improve the situation?
This is what I have tried so far:
#include <iostream>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/barcode.hpp>
int main() {
    cv::Mat imageMat = cv::imread("/Users/apple/Downloads/36.jpg");
    if (imageMat.empty()) {
        std::cout << "Image not present and cannot be opened" << std::endl;
        return 0;
    }

    cv::Mat imageGray, imageBlur, imageCanny, imageDilated, imageEroded, thresholdImage;
    cv::cvtColor(imageMat, imageGray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(imageGray, imageBlur, cv::Size(3, 3), 3, 0);
    std::cout << "Gaussian blur done" << std::endl;

    // cv::Canny(imageBlur, imageCanny, 25, 75);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::dilate(imageBlur, imageDilated, kernel);
    cv::erode(imageDilated, imageEroded, kernel);
    cv::threshold(imageEroded, thresholdImage, 5, 255, cv::THRESH_BINARY + cv::THRESH_OTSU);
    std::cout << "Threshold done" << std::endl;

    // Read the QR code
    cv::QRCodeDetector qrDecoder;
    std::string decodedData = qrDecoder.detectAndDecode(thresholdImage);
    std::cout << "Decoded data = " << decodedData << std::endl;
    return 0;
}
This code does not decode the data from the image with the blurry QR code. What would you suggest to unblur the image and recover the data embedded in it? Increasing the contrast? Increasing the brightness?
Any document would be helpful too.
Thank you in advance.
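One approach that may help before detection (a sketch, not from the original thread; the 2x scale factor, sigma, and weights are assumptions to tune) is to upscale the grayscale image and sharpen it with an unsharp mask instead of dilating/eroding, since morphology can destroy the small QR modules. Reusing imageGray and qrDecoder from the code above:

    // Upscale first: detectAndDecode often struggles with small codes
    cv::Mat big;
    cv::resize(imageGray, big, cv::Size(), 2.0, 2.0, cv::INTER_CUBIC);

    // Unsharp mask: sharpened = 1.5 * original - 0.5 * blurred
    cv::Mat blurred, sharp;
    cv::GaussianBlur(big, blurred, cv::Size(0, 0), 3);
    cv::addWeighted(big, 1.5, blurred, -0.5, 0, sharp);

    std::string data = qrDecoder.detectAndDecode(sharp);
    std::cout << "Decoded data (sharpened) = " << data << std::endl;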

Converting depth image of type CV_16UC1 in OpenCV

The input image is a depth image with CV_16UC1 encoding (depth values are in millimeters). I want to convert the depth values to meters. Later on, I need the depth values of a few pixels, so I use mat.at() to access the individual pixel locations. Finally, the depth value is multiplied by 0.001f to convert it to meters.
However, instead of multiplying the depth value after calling mat.at(), I want to do it the other way around, i.e. multiply the whole image by 0.001f and then use mat.at(). Unfortunately, this gives the wrong value. A sample code is shown below:
#include <iostream>
#include <opencv2/opencv.hpp>
int main(int argc, char* argv[])
{
    cv::Mat img_mm(480, 640, CV_16UC1);

    // just for debugging
    randu(img_mm, cv::Scalar(0), cv::Scalar(1234));

    // assign a fixed value at (0, 0) just for debugging
    int pixel_x = 0;
    int pixel_y = 0;
    img_mm.at<unsigned short>(pixel_y, pixel_x) = 123;

    // the first way
    auto depth_mm = img_mm.at<unsigned short>(pixel_y, pixel_x);
    auto depth_m = depth_mm * 0.001f;

    // the second way
    cv::Mat img_m = img_mm * 0.001f;
    float depth_unsigned_short = img_m.at<unsigned short>(pixel_y, pixel_x);
    float depth_float = img_m.at<float>(pixel_y, pixel_x);

    std::cout << "depth_mm " << depth_mm << ", depth_m " << depth_m << ", depth_unsigned_short " << depth_unsigned_short << ", depth_float " << depth_float << std::endl;
    return 0;
}
Below is the output-
depth_mm 123, depth_m 0.123, depth_unsigned_short 0, depth_float 9.18355e-41
I was expecting to see 0.123 in the second way. But we see that both depth_unsigned_short and depth_float are returning wrong values.
You should use the matrix conversion utility that OpenCV provides.
Check convertTo.
Something like:
cv::Mat f32Mat;
img_mm.convertTo(f32Mat, CV_32FC1, 0.001);
should do the trick.
In your code, img_mm * 0.001f keeps the CV_16UC1 element type, so every scaled value is truncated back to an integer (that is why depth_unsigned_short is 0), and reading that matrix with at<float>() reinterprets raw bytes (that is why depth_float is garbage like 9.18355e-41).
Also, the following statement of your code would still be wrong even if img_m were a float matrix, because at<unsigned short>() would reinterpret the bytes of a float:
float depth_unsigned_short = img_m.at<unsigned short>(pixel_y, pixel_x);
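For completeness, a minimal sketch of the corrected second way (variable names follow the question):

cv::Mat img_m;
img_mm.convertTo(img_m, CV_32FC1, 0.001);           // CV_16UC1 -> CV_32FC1, mm -> m
float depth_m2 = img_m.at<float>(pixel_y, pixel_x); // reads 0.123f for the debug pixel
std::cout << "depth_m2 " << depth_m2 << std::endl;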

Improve Prediction Sensitivity using SVM with OpenCV

I am trying to classify my images by whether or not characters are printed on the surface.
To do this, I first extract SURF features from real images and from real images I have manually defected, build a bag of words saved to an XML file, and then try to predict.
However, unless I use a completely different image or a heavily cropped image, my SVM classifier predicts it as correct.
These are the images I used for training:
https://www.dropbox.com/sh/xked9ywnibzv3tt/AADC0lP4WYAo3ddEDgvHpFhha/negative?dl=0
Here is my code.
#include <stdio.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include "opencv2/core/core.hpp"
#include<dirent.h>
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv2/ml/ml.hpp>
using namespace cv;
using namespace std;
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
Ptr<DescriptorExtractor> descriptors = DescriptorExtractor::create("SURF");
string to_string(const int val) {
int i = val;
std::string s;
std::stringstream out;
out << i;
s = out.str();
return s;
}
Mat compute_features(Mat image) {
vector<KeyPoint> keypoints;
Mat features;
detector->detect(image, keypoints);
KeyPointsFilter::retainBest(keypoints, 1500);
descriptors->compute(image, keypoints, features);
return features;
}
BOWKMeansTrainer addFeaturesToBOWKMeansTrainer(String dir, BOWKMeansTrainer& bowTrainer) {
DIR *dp;
struct dirent *dirp;
struct stat filestat;
dp = opendir(dir.c_str());
Mat features;
Mat img;
string filepath;
#pragma loop(hint_parallel(4))
for (; (dirp = readdir(dp));) {
filepath = dir + dirp->d_name;
cout << "Reading... " << filepath << endl;
if (stat( filepath.c_str(), &filestat )) continue;
if (S_ISDIR( filestat.st_mode )) continue;
img = imread(filepath, 0);
features = compute_features(img);
bowTrainer.add(features);
}
return bowTrainer;
}
void computeFeaturesWithBow(string dir, Mat& trainingData, Mat& labels, BOWImgDescriptorExtractor& bowDE, int label) {
DIR *dp;
struct dirent *dirp;
struct stat filestat;
dp = opendir(dir.c_str());
vector<KeyPoint> keypoints;
Mat features;
Mat img;
string filepath;
#pragma loop(hint_parallel(4))
for (;(dirp = readdir(dp));) {
filepath = dir + dirp->d_name;
cout << "Reading: " << filepath << endl;
if (stat( filepath.c_str(), &filestat )) continue;
if (S_ISDIR( filestat.st_mode )) continue;
img = imread(filepath, 0);
detector->detect(img, keypoints);
bowDE.compute(img, keypoints, features);
trainingData.push_back(features);
labels.push_back((float) label);
}
cout << string( 100, '\n' );
}
int main() {
initModule_nonfree();
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
TermCriteria tc(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 10, 0.001);
int dictionarySize = 1000;
int retries = 1;
int flags = KMEANS_PP_CENTERS;
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);
BOWImgDescriptorExtractor bowDE(descriptors, matcher);
string dir = "/positive/", filepath;
DIR *dp;
struct dirent *dirp;
struct stat filestat;
cout << "Add Features to KMeans" << endl;
addFeaturesToBOWKMeansTrainer("/positive/", bowTrainer);
addFeaturesToBOWKMeansTrainer("/negative/", bowTrainer);
cout << endl << "Clustering..." << endl;
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);
Mat labels(0, 1, CV_32FC1);
Mat trainingData(0, dictionarySize, CV_32FC1);
cout << endl << "Extract bow features" << endl;
computeFeaturesWithBow("/positive/", trainingData, labels, bowDE, 1);
computeFeaturesWithBow("/negative/", trainingData, labels, bowDE, 0);
CvSVMParams params;
params.kernel_type=CvSVM::LINEAR;
params.svm_type=CvSVM::C_SVC;
params.gamma=5;
params.C=100;
params.term_crit=cvTermCriteria(CV_TERMCRIT_NUMBER,100,0.000001);
CvSVM svm;
cout << endl << "Begin training" << endl;
bool res =svm.train(trainingData,labels,cv::Mat(),cv::Mat(),params);
svm.save("classifier.xml");
//CvSVM svm;
svm.load("classifier.xml");
vector<KeyPoint> cameraKeyPoints;
Mat rotated = imread("test.jpg",0);
Mat featuresFromimage;
detector->detect(rotated, cameraKeyPoints);
bowDE.compute(rotated, cameraKeyPoints, featuresFromimage);
cout <<"anar:"<< svm.predict(featuresFromimage) << endl;
imshow("edges", rotated);
cvWaitKey(0);
return 0;
}
Question 1: Since these images are very similar, how can I make the prediction behave like:
if similarity > 80%
"correct"
else
"defected"
Question 2: Since this kind of character defect is very rare in the factory, it will be very hard to collect many defective images for training. Is manually creating defects on these images a correct solution? If not, what can I actually do?
Question 3: What kind of preprocessing methods can I apply to this kind of image to increase the accuracy of the SVM?
Thank you.
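Regarding Question 1, one option worth noting (a sketch under the OpenCV 2.4 API the code already uses; the 0.5 threshold is an assumption to tune on a validation set): CvSVM::predict can return the raw decision-function value, i.e. the signed distance from the separating hyperplane, instead of the hard class label, which is closer in spirit to a similarity threshold:

// returnDFVal = true: get the signed distance instead of the label (needs <cmath> for std::abs)
float margin = svm.predict(featuresFromimage, true);
// Near-zero margins are ambiguous; treat them as "uncertain".
// The sign convention depends on how the two labels were encoded,
// so verify it on known samples first.
if (std::abs(margin) < 0.5f)
    cout << "uncertain" << endl;
else
    cout << (margin > 0 ? "class A" : "class B") << endl;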

Convert MATLAB image SVD method to OpenCV

I want to write a program with OpenCV in C++ in Visual Studio.
My code follows this MATLAB code:
close all
clear all
clc

% reading and converting the image
inImage = imread('pic.jpg');
inImageD = double(inImage);
[U, S, V] = svd(inImageD);

% Using different numbers of singular values (diagonal of S) to compress and
% reconstruct the image
dispEr = [];
numSVals = [];
for N = 5:25:300
    % store the singular values in a temporary var
    C = S;
    % discard the diagonal values not required for compression
    C(N+1:end, :) = 0;
    C(:, N+1:end) = 0;
    % construct an image using the selected singular values
    D = U*C*V';
    % display and compute error
    figure;
    buffer = sprintf('Image output using %d singular values', N)
    imshow(uint8(D));
    title(buffer);
    error = sum(sum((inImageD-D).^2));
    % store vals for display
    dispEr = [dispEr; error];
    numSVals = [numSVals; N];
end
How would you approach this? I want to save the image in a text file and read it from the file into a Mat array. I've written this part as follows:
Mat image;
FileStorage read_file("pic_file.txt", FileStorage::READ);
read_file["pic"] >> image;
read_file.release();
Mat P;
image.convertTo(P, CV_32FC3,1.0/255);
SVD svda(P); //or SVD::compute(P,W,U,V);
But I have a problem with the SVD function; it doesn't work. Is there anything I can do to compute the SVD compression of an image?
Thank you so much.
Vahids.
Here is my code:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    // Image matrix
    Mat img;
    Mat result;
    //---------------------------------------------
    namedWindow("Source Image");
    namedWindow("Result");

    // Load image in grayscale mode
    img = imread("D:\\ImagesForTest\\cat.bmp", 0);
    img.convertTo(img, CV_32FC1, 1.0/255.0);
    cout << "Source size:" << img.rows*img.cols << " elements " << endl;

    // create SVD
    cv::SVD s;
    // svd result
    Mat w, u, vt;
    // computations ...
    s.compute(img, w, u, vt);

    // collect the Sigma matrix (diagonal holds the singular values, the rest is zeros);
    // we got it as a vector, transform it into a diagonal matrix
    Mat W = Mat::zeros(w.rows, w.rows, CV_32FC1);
    for (int i = 0; i < w.rows; i++)
    {
        W.at<float>(i, i) = w.at<float>(i);
    }

    // reduce rank to k
    int k = 25;
    W = W(Range(0, k), Range(0, k));
    u = u(Range::all(), Range(0, k));
    vt = vt(Range(0, k), Range::all());

    // Get compressed image
    result = u*W*vt;
    cout << "Result size:" << u.rows*u.cols + k + vt.rows*vt.cols << " elements " << endl;
    //---------------------------------------------
    imshow("Source Image", img);
    imshow("Result", result);
    waitKey(0);
    return 0;
}
Source and result images.
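If you also want the reconstruction error that the MATLAB loop accumulates (the sum of squared differences), a short sketch using the variables above:

// sum(sum((inImageD - D).^2)) in the MATLAB code
double l2 = cv::norm(img, result, cv::NORM_L2);
double err = l2 * l2;  // squared L2 norm = sum of squared differences
cout << "Reconstruction error: " << err << endl;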

How can I port code that uses numpy.fft.rfft from python to C++?

I have code written in Python. It computes the positive-frequency part of the FFT of a real input using numpy. I need to port this code to C++.
import numpy as np
interp=[131.107, 133.089, 132.199, 129.905, 132.977]
res=np.fft.rfft(interp)
print res
Result of rfft is [ 659.27700000+0.j, 1.27932533-1.4548977j, -3.15032533+2.1158917j]
I tried to use OpenCV for 1D DFT:
std::vector<double> fft;
std::vector<double> interpolated = {131.107, 133.089, 132.199, 129.905, 132.977};
cv::dft( interpolated, fft );
for( auto it = fft.begin(); it != fft.end(); ++it ) {
std::cout << *it << ' ';
}
std::cout << std::endl;
The result of cv::dft is {1.42109e-14, -127.718, -94.705, 6.26856, 23.0231}. It is very different from numpy.fft.rfft. It looks strange that the DC value (the zeroth element) is near zero for all inputs after OpenCV's dft is computed.
Using the FFTW3 library gave me the same results as OpenCV:
std::vector<double> interpolated = {131.107, 133.089, 132.199, 129.905, 132.977};
fftw_complex* out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * 3 );
fftw_plan plan = fftw_plan_dft_r2c_1d( interpolated.size( ), interpolated.data( ), out, FFTW_ESTIMATE );
fftw_execute(plan);
fftw_destroy_plan(plan);
for( size_t i = 0; i < interpolated.size( ); ++i ) {
std::cout << " (" << out[ i ][ 0 ] << ", " << out[ i ][ 1 ] << ")";
}
fftw_free(out);
This code gives me the same results as OpenCV. It prints: (1.42109e-14, 0) (-127.718, -94.705) (6.26856, 23.0231).
Why do I get different results from dft in C++ and in Python? What am I doing wrong?
Thanks!
I'm using gcc 4.6 at the moment, which doesn't have C++11, so I tried this version of your code, using OpenCV 2.4.8:
#include <iostream>
#include "opencv2/core/core.hpp"
int main(int argc, char *argv[])
{
    double data[] = {131.107, 133.089, 132.199, 129.905, 132.977};
    std::vector<double> interpolated(data, data + sizeof(data) / sizeof(double));
    std::vector<double> fft;
    cv::dft(interpolated, fft);
    for (std::vector<double>::const_iterator it = fft.begin(); it != fft.end(); ++it) {
        std::cout << *it << ' ';
    }
    std::cout << std::endl;
}
The output
659.277 1.27933 -1.4549 -3.15033 2.11589
agrees with numpy and with the cv2 python module:
In [55]: np.set_printoptions(precision=3)
In [56]: x
Out[56]: array([ 131.107, 133.089, 132.199, 129.905, 132.977])
In [57]: np.fft.rfft(x)
Out[57]: array([ 659.277+0.j , 1.279-1.455j, -3.150+2.116j])
In [58]: cv2.dft(x)
Out[58]:
array([[ 659.277],
[ 1.279],
[ -1.455],
[ -3.15 ],
[ 2.116]])
I don't know why your code is not working, so I guess this is more of a long comment than an answer.
Please check the documentation. The rfft method (as in numpy.fft.rfft) transforms a vector of real inputs into the complex Fourier coefficients. Using the conjugate symmetry of the Fourier coefficients of a real signal, the output can be packed into an array of the same length as the input.
The generic fft and dft methods transform vectors of complex numbers into vectors of complex coefficients. Older codes use arrays of doubles for input and output, where the real and imaginary parts of the complex numbers alternate, i.e., one array of even length. What happens with odd input lengths may be undocumented behavior.
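For reference, a small sketch (my own, not from either answer) that makes cv::dft produce the same layout as numpy.fft.rfft by requesting explicit complex output and keeping only the first N/2 + 1 bins:

#include <cmath>
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    std::vector<double> x = {131.107, 133.089, 132.199, 129.905, 132.977};
    cv::Mat in(x), out;                        // 5x1 CV_64FC1 column vector
    cv::dft(in, out, cv::DFT_COMPLEX_OUTPUT);  // 5x1 CV_64FC2
    // numpy.fft.rfft keeps only the first N/2 + 1 bins of a real transform
    for (int i = 0; i < (int)x.size() / 2 + 1; ++i) {
        cv::Vec2d c = out.at<cv::Vec2d>(i, 0);
        std::cout << c[0] << (c[1] < 0 ? " - " : " + ") << std::abs(c[1]) << "j\n";
    }
    return 0;
}

This should print 659.277 + 0j, 1.27933 - 1.4549j, -3.15033 + 2.11589j, matching the numpy result above.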
