Using SVMs to classify between SUVs and sedans - opencv

I am trying to implement an SVM with OpenCV that classifies images of sedans and SUVs. I have heavily referenced this post: using OpenCV and SVM with images
I have 29 training images of sedans and SUVs, and I stretch each image out into one really long row, making my training Mat a size of 29 x image_area. The picture below shows that the training_mat comes out all white, which I'm not sure is correct; it may be affecting my result.
This may be due to the training_mat being a float type. If the training_mat is changed to CV_8UC1, for example, I can clearly see each image laid out along its row, but then the svm->train function does not accept the training_mat.
I use labels_mat as the supervised part of the implementation. A 1 means an SUV, and a -1 means a sedan. In the picture below, when I attempt to use the SVM model to predict an SUV, I get a value like -800000000000. No matter what I do (change parameters, use an all-white test image, an all-black test image, change labels to only be 1 or -1), I always get that same value. Any negative result may just mean -1 (sedan), but I can't be sure because it never changes. If anyone has insight on this, it would be appreciated.
Here is my code, result, and all white training_mat.
int num_train_images = 29; //29 images will be used to train the SVM
int image_area = 150 * 200;
Mat training_mat(num_train_images, image_area, CV_32FC1); // Creates a 29-row by 30000-column matrix; each 150x200 image occupies one row
//Converts 29 2D images into a really long row per image
for (int file_count = 1; file_count < (num_train_images + 1); file_count++)
{
    ss << name << file_count << type; //'Vehicle_1.jpg' ... 'Vehicle_2.jpg' ... etc ...
    string filename = ss.str();
    ss.str("");
    Mat training_img = imread(filename, 0); //Reads the training images from the folder
    int ii = 0; //Scans each column
    for (int i = 0; i < training_img.rows; i++)
    {
        for (int j = 0; j < training_img.cols; j++)
        {
            training_mat.at<float>(file_count - 1, ii) = training_img.at<uchar>(i, j); //Fills the training_mat with the read image
            ii++;
        }
    }
}
imshow("Training Mat", training_mat);
waitKey(0);
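// Note: an all-white "Training Mat" window is expected here and is not a bug
// by itself: imshow assumes CV_32F images are scaled to 0..1, so raw 0..255
// values all saturate to white. For display only, you can scale it down:
//   imshow("Training Mat", training_mat / 255.0f);
// (svm->train still requires CV_32F samples, which is why CV_8UC1 is rejected.)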
//Labels are used as the supervised learning portion of the SVM. A 1 means an SUV training image; a -1 means a sedan.
int labels[29] = { 1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1 };
//Place the labels into a 29-row by 1-column matrix.
Mat labels_mat(num_train_images, 1, CV_32S);
cout << "Beginning Training..." << endl;
//Set SVM Parameters (not sure about these values)
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setKernel(SVM::RBF);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->setGamma(1);
svm->setDegree(3);
cout << "Parameters Set..." << endl;
svm->train(training_mat, ROW_SAMPLE, labels_mat);
cout << "End Training" << endl;
waitKey(0);
Mat test_image(1, image_area, CV_32FC1); //Creates a 1 x 30000 matrix to hold the test image.
Mat SUV_image = imread("SUV_1.jpg", 0); //Reads the test image from the folder
int jj = 0;
for (int i = 0; i < SUV_image.rows; i++)
{
    for (int j = 0; j < SUV_image.cols; j++)
    {
        test_image.at<float>(0, jj) = SUV_image.at<uchar>(i, j); //Fills the test_image with the read image
        jj++;
    }
}
//Should return a 1 if it's an SUV, or a -1 if it's a sedan
float result = svm->predict(test_image);
if (result < 0)
    cout << "Sedan" << endl;
else
    cout << "SUV" << endl;
cout << "Result: " << result << endl;
namedWindow("Test Image", CV_WINDOW_NORMAL);
imshow("Test Image", SUV_image);
waitKey(0);

Refer to this post for the solution to the problem I was having: Using SVM with HOG Features to Classify Vehicles
In it, I use HOG features instead of raw pixel values. The training_mat is no longer all white, the classifier works well, and the predicted result is a 1 or a -1.
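For reference, here is a minimal sketch of that HOG-based pipeline (illustrative, not the exact code from the linked post), assuming grayscale inputs resized to HOGDescriptor's default 64x128 window:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

// Build a training matrix with one HOG descriptor per row.
Mat hogTrainingMat(const vector<string>& files)
{
    HOGDescriptor hog; // default 64x128 detection window
    Mat training;
    for (const string& f : files)
    {
        Mat img = imread(f, IMREAD_GRAYSCALE);
        resize(img, img, Size(64, 128)); // must match the HOG window size
        vector<float> desc;
        hog.compute(img, desc);
        training.push_back(Mat(desc).reshape(1, 1)); // append as a 1 x D row
    }
    return training; // CV_32F, one row per image
}
Each descriptor is a few thousand floats instead of 30000 raw pixels, and it captures gradient structure, which is far more informative for a shape distinction like sedan vs. SUV.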

Related

Comparing openCv PnP with openGv PnP

I am trying to build a test project to compare the openCv solvePnP implementation with the openGv one.
The OpenCV one is detailed here:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp
and the openGv here:
https://laurentkneip.github.io/opengv/page_how_to_use.html
Using the OpenCV example code, I find a chessboard in an image and construct the matching 3D points. I run the cv PnP, then set up the Gv solver. The cv PnP runs fine and prints the values:
//rotation (one row of the 3x3 matrix was clipped from the pasted output)
[-0.003040771263293328, 0.9797142824436152, -0.2003763421317906;
0.0623096853748876, 0.2001735322445355, 0.977777101438374]
//translation
[-12.06549797067309;
-9.533070368412945;
37.6825295047483]
I test by reprojecting the 3d points, and it looks good.
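(For reference, that reprojection check is essentially a projectPoints call; a sketch of what I mean:)
std::vector<cv::Point2f> reproj;
cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, reproj);
double rms = cv::norm(reproj, imagePoints, cv::NORM_L2) / std::sqrt((double)reproj.size());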
The Gv PnP, however, prints NaN for all values. I have tried to follow the example code, but I must be making a mistake somewhere. The code is:
int main(int argc, char **argv) {
    cv::Mat matImg = cv::imread("chess.jpg");
    cv::Size boardSize(8, 6);
    //Construct the chessboard model
    double squareSize = 2.80;
    std::vector<cv::Point3f> objectPoints;
    for (int i = 0; i < boardSize.height; i++) {
        for (int j = 0; j < boardSize.width; j++) {
            objectPoints.push_back(
                cv::Point3f(float(j * squareSize), float(i * squareSize), 0));
        }
    }
    cv::Mat rvec, tvec;
    cv::Mat cameraMatrix, distCoeffs;
    cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
    fs["cameraMatrix"] >> cameraMatrix;
    fs["dist_coeffs"] >> distCoeffs;
    //Found chessboard corners
    std::vector<cv::Point2f> imagePoints;
    bool found = cv::findChessboardCorners(matImg, boardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
    if (found) {
        cv::drawChessboardCorners(matImg, boardSize, cv::Mat(imagePoints), found);
        //SolvePnP
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
        drawAxis(matImg, cameraMatrix, distCoeffs, rvec, tvec, squareSize);
    }
    //cv to matrix
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    std::cout << "results from cv:" << R << tvec << std::endl;
    //START OPEN GV
    //vars
    bearingVectors_t bearingVectors;
    points_t points;
    rotation_t rotation;
    //add points to the gv type
    for (size_t i = 0; i < objectPoints.size(); ++i)
    {
        point_t pnt;
        pnt.x() = objectPoints[i].x;
        pnt.y() = objectPoints[i].y;
        pnt.z() = objectPoints[i].z;
        points.push_back(pnt);
    }
    /*
    K is the common 3x3 camera matrix that you can compose with cx, cy, fx, and fy.
    You put the image point into homogeneous form (append a 1),
    multiply it with the inverse of K from the left, which gives you a normalized image point (a spatial direction vector).
    You normalize that to norm 1.
    */
    //to homogeneous
    std::vector<cv::Point3f> imagePointsH;
    convertPointsToHomogeneous(imagePoints, imagePointsH);
    //multiply by K.Inv
    for (size_t i = 0; i < imagePointsH.size(); i++)
    {
        cv::Point3f pt = imagePointsH[i];
        cv::Mat ptMat(3, 1, cameraMatrix.type());
        ptMat.at<double>(0, 0) = pt.x;
        ptMat.at<double>(1, 0) = pt.y;
        ptMat.at<double>(2, 0) = pt.z;
        cv::Mat dstMat = cameraMatrix.inv() * ptMat;
        //store as bearing vector
        bearingVector_t bvec;
        bvec.x() = dstMat.at<double>(0, 0);
        bvec.y() = dstMat.at<double>(1, 0);
        bvec.z() = dstMat.at<double>(2, 0);
        bvec.normalize();
        bearingVectors.push_back(bvec);
    }
    //create a central absolute adapter
    absolute_pose::CentralAbsoluteAdapter adapter(
        bearingVectors,
        points,
        rotation);
    size_t iterations = 50;
    std::cout << "running epnp (all correspondences)" << std::endl;
    transformation_t epnp_transformation;
    for (size_t i = 0; i < iterations; i++)
        epnp_transformation = absolute_pose::epnp(adapter);
    std::cout << "results from epnp algorithm:" << std::endl;
    std::cout << epnp_transformation << std::endl << std::endl;
    return 0;
}
Where am I going wrong in setting up the openGv PnP solver?
Years later, I had this same issue and solved it. To convert OpenCV points to openGV bearing vectors, you can do this:
bearingVectors_t bearingVectors;
std::vector<cv::Point2f> dd2; // fill with your detected 2D image points
const int N1 = static_cast<int>(dd2.size());
cv::Mat points1_mat = cv::Mat(dd2).reshape(1); // N1 x 2, CV_32F
// construct homogeneous points
cv::Mat ones_col1 = cv::Mat::ones(N1, 1, CV_32F);
cv::hconcat(points1_mat, ones_col1, points1_mat); // now N1 x 3
// rectify points: apply K^-1. The points are stacked as rows, so multiply by
// the transposed inverse (cameraMatrix must be CV_32F here to match the points)
cv::Mat points1_rect = points1_mat * cameraMatrix.inv().t();
// compute bearings
points2bearings3(points1_rect, &bearingVectors);
using this function for the final conversion:
// Convert a set of points to bearing
// points Matrix of size Nx3 with the set of points.
// bearings Vector of bearings.
void points2bearings3(const cv::Mat& points,
                      opengv::bearingVectors_t* bearings) {
    double l;
    cv::Vec3f p;
    opengv::bearingVector_t bearing;
    for (int i = 0; i < points.rows; ++i) {
        p = cv::Vec3f(points.row(i));
        l = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        for (int j = 0; j < 3; ++j) bearing[j] = p[j] / l;
        bearings->push_back(bearing);
    }
}
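As a side note, a shorter route (a sketch, not from the original answer) is to let cv::undistortPoints produce the normalized coordinates, since it applies K^-1 and the distortion model in one call:
std::vector<cv::Point2f> normPts;
cv::undistortPoints(imagePoints, normPts, cameraMatrix, distCoeffs);
opengv::bearingVectors_t bearings;
for (const cv::Point2f& p : normPts)
{
    opengv::bearingVector_t b(p.x, p.y, 1.0); // normalized homogeneous point
    bearings.push_back(b.normalized());       // unit-length bearing vector
}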

SVM Predict returns a large value that isn't 1 or -1

So my goal here is to classify vehicles as sedans or SUVs. The training images I'm using are 29 150x200 images of sedans and SUVs, so my training_mat is a 29x30000 Mat. I use a doubly nested for loop to fill it instead of .reshape, because reshape wasn't working properly.
labels_mat is written so that a -1 corresponds to a sedan and a 1 corresponds to an SUV. I finally got svm->train to accept both Mats, and I expected that a new test_image fed into svm->predict would either yield a -1 or a 1. Unfortunately, svm->predict(test_image) returns an extremely high or low value like -8.38e08. Can anyone help me with this?
Here is the majority of my code:
for (int file_count = 1; file_count < (num_train_images + 1); file_count++)
{
    ss << name << file_count << type; //'Vehicle_1.jpg' ... 'Vehicle_2.jpg' ... etc ...
    string filename = ss.str();
    ss.str("");
    Mat training_img = imread(filename, 0); //Reads the training images from the folder
    int ii = 0; //Scans each column
    for (int i = 0; i < training_img.rows; i++)
    {
        for (int j = 0; j < training_img.cols; j++)
        {
            training_mat.at<float>(file_count - 1, ii) = training_img.at<uchar>(i, j); //Fills the training_mat with the read image
            ii++;
        }
    }
}
imshow("Training Mat", training_mat);
waitKey(0);
//Labels are used as the supervised learning portion of the SVM. A 1 means an SUV training image; a -1 means a sedan.
int labels[29] = { 1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1 };
//Place the labels into a 29-row by 1-column matrix.
Mat labels_mat(num_train_images, 1, CV_32S);
cout << "Beginning Training..." << endl;
//Set SVM Parameters (not sure about these values)
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setC(.1);
svm->setKernel(SVM::POLY);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->setGamma(3);
svm->setDegree(3);
cout << "Parameters Set..." << endl;
svm->train(training_mat, ROW_SAMPLE, labels_mat);
cout << "End Training" << endl;
waitKey(0);
Mat test_image(1, image_area, CV_32FC1); //Creates a 1 x 30000 matrix to hold the test image.
Mat SUV_image = imread("SUV_1.jpg", 0); //Reads the test image from the folder
int jj = 0;
for (int i = 0; i < SUV_image.rows; i++)
{
    for (int j = 0; j < SUV_image.cols; j++)
    {
        test_image.at<float>(0, jj) = SUV_image.at<uchar>(i, j);
        jj++;
    }
}
//Should return a 1 if it's an SUV, or a -1 if it's a sedan
float result = svm->predict(test_image);
cout << "Result: " << result << endl;
The output will not be exactly -1 or 1. Machine learning methods such as SVMs predict membership from the sign of a decision value, so a negative value means -1 and a positive value means 1.
Similarly, some other methods, such as logistic regression, use a probability to predict membership, often reported as 0 or 1: if the probability is < 0.5, the membership is 0, otherwise 1.
BTW: your question is not a C++ question.
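For completeness, OpenCV's SVM can report both forms; a small sketch (reusing svm and test_image from the question, cv::ml names assumed):
float label = svm->predict(test_image); // default: the predicted class label
Mat rawOut;
svm->predict(test_image, rawOut, StatModel::RAW_OUTPUT); // signed decision value instead
cout << "label: " << label << ", decision value: " << rawOut.at<float>(0, 0) << endl;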
You forgot to fill your labels into the labels_mat. Simple mistake but it happens to everyone...
Mat labels_mat(num_train_images, 1, CV_32S, labels);
And that should work out fine.
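One more numeric point worth checking, beyond the missing labels: with raw 0..255 pixel features, a degree-3 polynomial kernel with gamma = 3 produces enormous kernel values, which can also blow up the outputs. Scaling the features to [0, 1] before training is a common precaution (a sketch, not part of the original fix):
training_mat /= 255.0f; // scale features to [0, 1] before svm->train(...)
test_image /= 255.0f;   // apply the identical scaling at prediction time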

Element-wise power using OpenCV

I am currently reading this book. The author wrote a code snippet on page 83 in order to (if I understand it correctly) calculate the element-wise power of two matrices. But I think the code doesn't fulfill its purpose, because the matrix dst does not contain the element-wise power after execution.
Here is the original code:
const Mat* arrays[] = { src1, src2, dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
    for( size_t j = 0; j < it.size; j++ )
    {
        ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
    }
}
Since the parameter of the constructor of cv::NAryMatIterator is const cv::Mat **, I think changing values in the matrix dst is not allowed.
I tried to assign ptrs[2][j] back into dst but failed to get the correct indices of dst. My questions are as follows:
Is there a convenient method for the matrix element-wise power, like A .^ B in Matlab?
Is there a way to use cv::NAryMatIterator to achieve this goal? If no, then what is the most efficient way to implement it?
You can get this working by converting the src1, src2 and dst to float (CV_32F) type matrices. This is because the code treats them that way in float* ptrs[3];.
An alternative implementation using opencv functions log, multiply and exp is given at the end.
As an example for your 2nd question,
Mat src1 = (Mat_<int>(3, 3) <<
1, 2, 3,
4, 5, 6,
7, 8, 9);
Mat src2 = (Mat_<uchar>(3, 3) <<
1, 2, 3,
1, 2, 3,
1, 2, 3);
Mat dst = (Mat_<float>(3, 3) <<
1, 2, 3,
4, 5, 6,
7, 8, 9);
src1.convertTo(src1, CV_32F);
src2.convertTo(src2, CV_32F);
cout << "before\n";
cout << dst << endl;
const Mat* arrays[] = { &src1, &src2, &dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
    for( size_t j = 0; j < it.size; j++ )
    {
        ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
    }
}
cout << "after\n";
cout << dst << endl;
outputs
before
[1, 2, 3;
4, 5, 6;
7, 8, 9]
after
[1, 4, 27;
4, 25, 216;
7, 64, 729]
If you remove the src1.convertTo(src1, CV_32F); or src2.convertTo(src2, CV_32F);, you won't get the desired result. Try it.
If this is a separate function, don't place the convertTo calls within the function, as they modify the image representation, which could affect later operations. Instead, use convertTo on temporary Mats, like
Mat src132f, src232f, dst32f;
src1.convertTo(src132f, CV_32F);
src2.convertTo(src232f, CV_32F);
dst.convertTo(dst32f, CV_32F);
pow_mat(&src132f, &src232f, &dst32f); /* or whatever the name */
As for your first question, I'm not aware of such a function. But you can try something like
Mat tmp;
cv::log(src1, tmp);
cv::multiply(src2, tmp, dst);
cv::exp(dst, dst);
using the relation that c = a^b is equivalent to c = e^(b*ln(a)). Here, the matrices should have type 32F or 64F. This produces
[1, 4, 27.000002;
4, 25.000002, 216.00002;
6.9999995, 64, 729.00006]
for the example above.
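As a side note on the first question: for the special case where the exponent is a single scalar rather than a per-element matrix, a convenient function does exist (reusing src1 from the example above):
cv::Mat cubed;
cv::pow(src1, 3.0, cubed); // element-wise src1 raised to the scalar power 3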

opencv multidimensional kmeans

I'm trying to run the kmeans algorithm on n-dimensional data.
I have N points, and each point has n features { x, y, z, ..., n }.
My code is the following:
cv::Mat points(N, n, CV_32F);
// fill the data points
cv::Mat labels; cv::Mat centers;
cv::kmeans(points, k, labels, cv::TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 1000, 0.001), 10, cv::KMEANS_PP_CENTERS, centers);
The problem is that the kmeans algorithm runs into a segmentation fault.
Any help is appreciated.
Update
As Miki and Micka said, the above code was correct!
I had made a mistake in the "fill the data points" step, which corrupted the memory.
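For anyone hitting the same crash, a minimal sketch of a bounds-safe fill (the features source here is hypothetical): one sample per row, one feature per column, indices strictly within rows and cols.
cv::Mat points(N, n, CV_32F);
for (int i = 0; i < points.rows; ++i)            // sample index
    for (int d = 0; d < points.cols; ++d)        // feature index
        points.at<float>(i, d) = features[i][d]; // features: a hypothetical N x n source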
The code looks OK. You have to arrange the data as one dimension per column.
Can you try to run this example?
// k-means
int main(int argc, char* argv[])
{
    cv::Mat projectedPointsImage = cv::Mat(512, 512, CV_8UC3, cv::Scalar::all(255));
    int nReferenceCluster = 10;
    int nSamplesPerCluster = 100;
    int N = nReferenceCluster*nSamplesPerCluster; // number of samples
    int n = 10; // dimensionality of data
    // fill the data points
    // create n artificial clusters and randomly seed 100 points around them
    cv::Mat referenceCenters(nReferenceCluster, n, CV_32FC1);
    //std::cout << referenceCenters << std::endl;
    cv::randu(referenceCenters, cv::Scalar::all(0), cv::Scalar::all(512));
    //std::cout << "FILLED:" << "\n" << referenceCenters << std::endl;
    cv::Mat points = cv::Mat::zeros(N, n, CV_32FC1);
    cv::randu(points, cv::Scalar::all(-20), cv::Scalar::all(20)); // seed points around the center
    for (int j = 0; j < nReferenceCluster; ++j)
    {
        cv::Scalar clusterColor = cv::Scalar(rand() % 255, rand() % 255, rand() % 255);
        //cv::Mat & clusterCenter = referenceCenters.row(j);
        for (int i = 0; i < nSamplesPerCluster; ++i)
        {
            // creating a sample randomly around the artificial cluster:
            int index = j*nSamplesPerCluster + i;
            //samplesRow += clusterCenter;
            for (int k = 0; k < points.cols; ++k)
            {
                points.at<float>(index, k) += referenceCenters.at<float>(j, k);
            }
            // projecting the 10 dimensional clusters to 2 dimensions:
            cv::circle(projectedPointsImage, cv::Point(points.at<float>(index, 0), points.at<float>(index, 1)), 2, clusterColor, -1);
        }
    }
    cv::Mat labels; cv::Mat centers;
    int k = 10; // searched clusters in k-means
    cv::kmeans(points, k, labels, cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 1000, 0.001), 10, cv::KMEANS_PP_CENTERS, centers);
    for (int j = 0; j < centers.rows; ++j)
    {
        std::cout << centers.row(j) << std::endl;
        cv::circle(projectedPointsImage, cv::Point(centers.at<float>(j, 0), centers.at<float>(j, 1)), 30, cv::Scalar::all(0), 2);
    }
    cv::imshow("projected points", projectedPointsImage);
    cv::imwrite("C:/StackOverflow/Output/KMeans.png", projectedPointsImage);
    cv::waitKey(0);
    return 0;
}
I'm creating 10-dimensional data around artificial cluster centers there. For display, I project the points to 2D, getting the pictured result.
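As a small follow-up on reading the output: labels comes back as an N x 1 CV_32S matrix, one cluster index per input row, so each sample's assignment can be read directly:
int i = 0;                              // any sample index
int cluster = labels.at<int>(i, 0);     // cluster index of sample i
cv::Mat center = centers.row(cluster);  // that sample's center (1 x n, CV_32F)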

Image Sharpening Using Laplacian Filter

I was trying sharpening on a standard image from the Gonzalez book. Below is some code that I have tried, but it doesn't get close to the book's sharpened result.
cvSmooth(grayImg, grayImg, CV_GAUSSIAN, 3, 0, 0, 0);
IplImage* laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_16S, 1);
IplImage* abs_laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvLaplace(grayImg, laplaceImg, 3);
cvConvertScaleAbs(laplaceImg, abs_laplaceImg, 1, 0);
IplImage* dstImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvAdd(abs_laplaceImg, grayImg, dstImg, NULL);
Before Sharpening
My Sharpening Result
Desired Result
Absolute Laplace
I think the problem is that you are blurring the image before taking the 2nd derivative.
Here is the working code with the C++ API (I'm using OpenCV 2.4.3). I also tried it in MATLAB, and the result is the same.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int /*argc*/, char** /*argv*/) {
    Mat img, imgLaplacian, imgResult;
    //------------------------------------------------------------------------------------------- test, first of all
    // now do it by hand
    img = (Mat_<uchar>(4,4) << 0,1,2,3,4,5,6,7,8,9,0,11,12,13,14,15);
    // first, the good result
    Laplacian(img, imgLaplacian, CV_8UC1);
    cout << "let opencv do it" << endl;
    cout << imgLaplacian << endl;
    Mat kernel = (Mat_<float>(3,3) <<
        0,  1, 0,
        1, -4, 1,
        0,  1, 0);
    int window_size = 3;
    // now, reaaallly by hand
    // note that, for avoiding padding, the result image will be smaller than the original one.
    Mat frame, frame32;
    Rect roi;
    imgLaplacian = Mat::zeros(img.size(), CV_32F);
    for(int y=0; y<img.rows-window_size/2-1; y++) {
        for(int x=0; x<img.cols-window_size/2-1; x++) {
            roi = Rect(x,y, window_size, window_size);
            frame = img(roi);
            frame.convertTo(frame, CV_32F);
            frame = frame.mul(kernel);
            float v = sum(frame)[0];
            imgLaplacian.at<float>(y,x) = v;
        }
    }
    imgLaplacian.convertTo(imgLaplacian, CV_8U);
    cout << "dudee" << imgLaplacian << endl;
    // a little bit less "by hand"..
    // using cv::filter2D
    filter2D(img, imgLaplacian, -1, kernel);
    cout << imgLaplacian << endl;
    //------------------------------------------------------------------------------------------- real stuffs now
    img = imread("moon.jpg", 0); // load grayscale image
    // ok, now try a different kernel
    kernel = (Mat_<float>(3,3) <<
        1,  1, 1,
        1, -8, 1,
        1,  1, 1); // another approximation of the second derivative, a stronger one
    // do the laplacian filtering as it is
    // well, we need to convert everything to something deeper than CV_8U
    // because the kernel has some negative values,
    // and we can expect in general to have a Laplacian image with negative values
    // BUT an 8-bit unsigned int (the one we are working with) can contain values from 0 to 255
    // so the possible negative numbers will be truncated
    filter2D(img, imgLaplacian, CV_32F, kernel);
    img.convertTo(img, CV_32F);
    imgResult = img - imgLaplacian;
    // convert back to 8-bit grayscale
    imgResult.convertTo(imgResult, CV_8U);
    imgLaplacian.convertTo(imgLaplacian, CV_8U);
    namedWindow("laplacian", CV_WINDOW_AUTOSIZE);
    imshow( "laplacian", imgLaplacian );
    namedWindow("result", CV_WINDOW_AUTOSIZE);
    imshow( "result", imgResult );
    while( true ) {
        char c = (char)waitKey(10);
        if( c == 27 ) { break; }
    }
    return 0;
}
Have fun!
I think the main problem lies in the fact that you do img + laplace, while img - laplace would give better results. I remember that img - 2*laplace was best, but I cannot find where I read that, probably in one of the books I read in university.
You need to do img - laplace instead of img + laplace.
laplace: f(x,y) = f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4*f(x,y)
So, if you subtract laplace from the original image, you can see that the minus sign in front of 4*f(x,y) gets negated and this term becomes positive.
You could also use a kernel with 5 in the center pixel instead of -4, making the sharpening a one-step process instead of computing the laplace and then doing img - laplace. Why? Try deriving that yourself.
This would be the final kernel.
Mat kernel = (Mat_<float>(3,3) <<
     0, -1,  0,
    -1,  5, -1,
     0, -1,  0);
It is indeed a well-known result in image processing that if you subtract its Laplacian from an image, the image edges are amplified giving a sharper image.
Laplacian filter kernel algorithm: sharpened_pixel = 5 * current - left - right - up - down
So the code will look like this:
void sharpen(const Mat& img, Mat& result)
{
    result.create(img.size(), img.type());
    //Process the inner pixels; the pixels on the outer edge of the image need separate handling
    for (int row = 1; row < img.rows-1; row++)
    {
        //Previous row of pixels
        const uchar* previous = img.ptr<const uchar>(row-1);
        //Current row being processed
        const uchar* current = img.ptr<const uchar>(row);
        //Next row
        const uchar* next = img.ptr<const uchar>(row+1);
        uchar *output = result.ptr<uchar>(row);
        int ch = img.channels();
        int starts = ch;
        int ends = (img.cols - 1) * ch;
        for (int col = starts; col < ends; col++)
        {
            //Each channel value of each pixel gets the sharpening update; writing at the
            //same column index keeps the output aligned with the input, and the channel
            //count is accounted for through ch.
            output[col] = saturate_cast<uchar>(5 * current[col] - current[col-ch] - current[col+ch] - previous[col] - next[col]);
        }
    } //end loop
    //Handle the boundary: the peripheral pixels are set to 0
    result.row(0).setTo(Scalar::all(0));
    result.row(result.rows-1).setTo(Scalar::all(0));
    result.col(0).setTo(Scalar::all(0));
    result.col(result.cols-1).setTo(Scalar::all(0));
}
int main()
{
    Mat lena = imread("lena.jpg");
    Mat sharpenedLena;
    sharpen(lena, sharpenedLena);
    imshow("lena", lena);
    imshow("sharpened lena", sharpenedLena);
    waitKey();
    return 0;
}
If you are lazier, have fun with the following.
int main()
{
    Mat lena = imread("lena.jpg");
    Mat sharpenedLena;
    Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0); // one-step sharpening kernel
    cv::filter2D(lena, sharpenedLena, lena.depth(), kernel);
    imshow("lena", lena);
    imshow("sharpened lena", sharpenedLena);
    waitKey();
    return 0;
}
And the result looks like this.
