Plot histogram of Sobel operator magnitude and angle in OpenCV

I want to plot a histogram in OpenCV C++ where the x-axis is the gradient angle and the y-axis is the gradient magnitude. I calculate the magnitude and angle using the Sobel operator. Now, how can I plot a histogram from that magnitude and angle?
Thanks in advance. A simplified version of the code is
// Read image
Mat img = imread("abs.jpg");
img.convertTo(img, CV_32F, 1 / 255.0);
/*GaussianBlur(img, img, Size(3, 3), 0, 0, BORDER_CONSTANT);*/
// Calculate gradients gx, gy
Mat gx, gy;
Sobel(img, gx, CV_32F, 1, 0, 1);
Sobel(img, gy, CV_32F, 0, 1, 1);
// Calculate gradient magnitude and direction (in degrees)
Mat mag, angle;
cartToPolar(gx, gy, mag, angle, 1);
imshow("magnitude of image is", mag);
imshow("angle of image is", angle);

OK, so the first part is to calculate the histogram of each of them. Since the two are already separated (each in its own Mat), we do not have to split anything, and we can use them directly in OpenCV's calcHist function.
From the documentation we have:
void calcHist(const Mat* images, int nimages, const int* channels, InputArray mask, OutputArray hist, int dims, const int* histSize, const float** ranges, bool uniform=true, bool accumulate=false )
So you would have to do:
cv::Mat histMag, histAng;
// number of bins of the histogram, adjust to your liking
int histSize = 10;
// degrees go from 0-360; if radians, change accordingly
float rangeAng[] = { 0, 360} ;
const float* histRangeAng = { rangeAng };
double minval, maxval;
// get the range for the magnitude
cv::minMaxLoc(mag, &minval, &maxval);
float rangeMag[] = { static_cast<float>(minval), static_cast<float>(maxval)} ;
const float* histRangeMag = { rangeMag };
cv::calcHist(&mag, 1, 0, cv::noArray(), histMag, 1, &histSize, &histRangeMag, true, false);
cv::calcHist(&angle, 1, 0, cv::noArray(), histAng, 1, &histSize, &histRangeAng, true, false);
Now you have to plot the two histograms found in histMag and histAng.
In the tutorial I posted in the comments, lines are drawn in the plot; for the angle it would be something like this:
// Draw the histogram for the angle
int hist_w = 512; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
cv::Mat histImage( hist_h, hist_w, CV_8UC3, cv::Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
cv::normalize(histAng, histAng, 0, histImage.rows, cv::NORM_MINMAX, -1, cv::Mat() );
// Draw the lines
for( int i = 1; i < histSize; i++ )
{
cv::line( histImage, cv::Point( bin_w*(i-1), hist_h - cvRound(histAng.at<float>(i-1)) ) ,
cv::Point( bin_w*(i), hist_h - cvRound(histAng.at<float>(i)) ),
cv::Scalar( 255, 0, 0), 2, 8, 0 );
}
With this you can do the same for the magnitude, or better, turn it into a function which draws a histogram whenever one is supplied, as in the sketch below.
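For instance, a small helper along these lines could draw either histogram (an untested sketch; the function name and parameters are mine):
cv::Mat drawHist(const cv::Mat& hist, int histSize, int hist_w = 512, int hist_h = 400,
                 cv::Scalar color = cv::Scalar(255, 0, 0))
{
    int bin_w = cvRound((double)hist_w / histSize);
    cv::Mat histImage(hist_h, hist_w, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat histNorm;
    // normalize into a copy so the caller's histogram stays untouched
    cv::normalize(hist, histNorm, 0, histImage.rows, cv::NORM_MINMAX, -1, cv::Mat());
    for (int i = 1; i < histSize; i++)
    {
        cv::line(histImage,
            cv::Point(bin_w * (i - 1), hist_h - cvRound(histNorm.at<float>(i - 1))),
            cv::Point(bin_w * i,       hist_h - cvRound(histNorm.at<float>(i))),
            color, 2, 8, 0);
    }
    return histImage;
}
Then cv::imshow("angle histogram", drawHist(histAng, histSize)); would display it.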
In the documentation they have another option: drawing rectangles as the bins. Adapting it to our case, we get something like:
// Draw the histogram for the angle
int hist_w = 512; int hist_h = 400;
int bin_w = std::round( static_cast<double>(hist_w)/static_cast<double>(histSize) );
cv::Mat histImage( hist_h, hist_w, CV_8UC3, cv::Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
cv::normalize(histAng, histAng, 0, histImage.rows, cv::NORM_MINMAX, -1, cv::Mat() );
for( int i = 0; i < histSize; i++ )
{
cv::rectangle(histImage,
    cv::Point(bin_w*i, hist_h - static_cast<int>(std::round(histAng.at<float>(i)))),
    cv::Point(bin_w*(i+1), hist_h),
    cv::Scalar(255, 0, 0), -1); // thickness -1 draws a filled bin
}
Again, this can be done for the magnitude in the same way. These are very simple plots; if you need more complex or prettier plots, you may need to call an external library and pass it the data of the calculated histograms. Also, this code has not been tested, so it may have a typo or error, but if something fails, just write a comment and we can find a solution.
I hope this helps you, and sorry for the late answer.

Related

Opencv : project points manually

I'm trying to reproduce the behavior of the method projectPoints() from OpenCV.
In the two images below, the red/green/blue axes are obtained with OpenCV's method, whereas the magenta/yellow/cyan axes are obtained with my own method:
[image 1]
[image 2]
With my method, the axes seem to have the correct orientation, but the translations are incorrect.
Here is my code :
void drawVector(float x, float y, float z, float r, float g, float b, cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
//Origin = (0, 0, 0, 1)
cv::Mat origin(4, 1, CV_64FC1, double(0));
origin.at<double>(3, 0) = 1;
//End = (x, y, z, 1)
cv::Mat end(4, 1, CV_64FC1, double(1));
end.at<double>(0, 0) = x; end.at<double>(1, 0) = y; end.at<double>(2, 0) = z;
//multiplies transformation matrix by camera matrix
cv::Mat mat = cameraMatrix * pose.colRange(0, 4).rowRange(0, 3);
//projects points
origin = mat * origin;
end = mat * end;
//draws corresponding line
cv::line(dst, cv::Point(origin.at<double>(0, 0), origin.at<double>(1, 0)),
cv::Point(end.at<double>(0, 0), end.at<double>(1, 0)),
CV_RGB(255 * r, 255 * g, 255 * b));
}
void drawVector_withProjectPointsMethod(float x, float y, float z, float r, float g, float b, cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
std::vector<cv::Point3f> points;
std::vector<cv::Point2f> projectedPoints;
//fills input array with 2 points
points.push_back(cv::Point3f(0, 0, 0));
points.push_back(cv::Point3f(x, y, z));
//Gets rotation vector thanks to cv::Rodrigues() method.
cv::Mat rvec;
cv::Rodrigues(pose.colRange(0, 3).rowRange(0, 3), rvec);
//projects points using cv::projectPoints method
cv::projectPoints(points, rvec, pose.colRange(3, 4).rowRange(0, 3), cameraMatrix, std::vector<double>(), projectedPoints);
//draws corresponding line
cv::line(dst, projectedPoints[0], projectedPoints[1],
CV_RGB(255 * r, 255 * g, 255 * b));
}
void drawAxis(cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
drawVector(0.1, 0, 0, 1, 1, 0, pose, cameraMatrix, dst);
drawVector(0, 0.1, 0, 0, 1, 1, pose, cameraMatrix, dst);
drawVector(0, 0, 0.1, 1, 0, 1, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0.1, 0, 0, 1, 0, 0, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0, 0.1, 0, 0, 1, 0, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0, 0, 0.1, 0, 0, 1, pose, cameraMatrix, dst);
}
What am I doing wrong?
I just forgot to divide the resulting points by their last component after projection:
Given the camera matrix which served to take the image, the projection of any point (x, y, z, 1) in 3D space onto that image is computed as follows:
//point3D has 4 components (x, y, z, w); point2D has 3 (x, y, z).
point2D = cameraMatrix * point3D;
//then divide the first 2 components of point2D by the third.
point2D /= point2D.z;
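For completeness, a corrected drawVector with the missing division might look like this (an untested sketch based on the code above; the function name is mine):
void drawVectorFixed(float x, float y, float z, float r, float g, float b, cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
    //Origin = (0, 0, 0, 1), End = (x, y, z, 1)
    cv::Mat origin(4, 1, CV_64FC1, double(0));
    origin.at<double>(3, 0) = 1;
    cv::Mat end(4, 1, CV_64FC1, double(1));
    end.at<double>(0, 0) = x; end.at<double>(1, 0) = y; end.at<double>(2, 0) = z;
    //project both points
    cv::Mat mat = cameraMatrix * pose.colRange(0, 4).rowRange(0, 3);
    origin = mat * origin;
    end = mat * end;
    //the fix: divide by the homogeneous (third) component
    origin /= origin.at<double>(2, 0);
    end /= end.at<double>(2, 0);
    cv::line(dst, cv::Point(origin.at<double>(0, 0), origin.at<double>(1, 0)),
        cv::Point(end.at<double>(0, 0), end.at<double>(1, 0)),
        CV_RGB(255 * r, 255 * g, 255 * b));
}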

CvSVM predict not returning correct value

I was trying the example detailed in this article.
Training, and then the loop to identify the hyper-plane, work well, i.e.
// Data for visual representation
int width = 512, height = 512;
Mat image = Mat::zeros(height, width, CV_8UC3);
// Set up training data
float labels[4] = {1.0, -1.0, -1.0, -1.0};
Mat labelsMat(4, 1, CV_32FC1, labels);
float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
// Set up SVM's parameters
CvSVMParams svmparam;
svmparam.svm_type = CvSVM::C_SVC;
svmparam.kernel_type = CvSVM::LINEAR;
svmparam.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
// Train the SVM
CvSVM svm;
svm.train(trainingDataMat, labelsMat, Mat(), Mat(), svmparam);
svm.save("Training.xml");
// (with the OpenCV 3 API, the equivalent would be: svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);)
Vec3b blue(255, 0 ,0);
Vec3b green(0, 255, 0);
for(int x = 0; x < image.rows; ++x)
{
for(int y = 0; y < image.cols; ++y)
{
Mat sampleMat = (Mat_<float>(1,2) << y,x);
float response = svm.predict(sampleMat);
if(response == 1)
image.at<Vec3b>(x,y) = green;
else if(response == -1)
image.at<Vec3b>(x,y) = blue;
}
}
But when I try to get a support vector using the API (svm.get_support_vector(i);), it returns very small numbers (such as 0.000876529e-28). Hence, after type-casting to "int", the X,Y coordinates become 0,0 respectively. So, even after getting the hyperplane, I am unable to get the respective support vectors, i.e.
int c = svm.get_support_vector_count();
for (int i = 0; i < c; ++i)
{
const float* v = svm.get_support_vector(i);
cv::Point resCenter((int) v[0], (int) v[1]);
std::cout << v[0] << ":" << v[1] << "= " << resCenter << std::endl;
circle( image, resCenter, 6, Scalar(128, 128, 128), -1, 8 ); // filled circle; the tutorial's thickness/lineType variables are not defined here
}
I tried normalizing the coordinate positions as
X' = (x - MinR) / (MaxR - MinR) // here MinR and MaxR span the columns (0, 512)
Y' = (y - MinR) / (MaxR - MinR) // here MinR and MaxR span the rows (0, 512)
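In code, that min-max normalization (and the inverse mapping back to pixel coordinates, which you would need to draw the support vectors) could look like this sketch; the helper names are mine:
// map a coordinate into [0,1] and back (illustrative helpers, not from the original post)
float normalizeCoord(float v, float minR, float maxR)
{
    return (v - minR) / (maxR - minR); // note the parentheses around (v - minR)
}
float denormalizeCoord(float vNorm, float minR, float maxR)
{
    return vNorm * (maxR - minR) + minR;
}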
As I am new to machine learning, I would be thankful if you could suggest something to read on the following questions:
What does train internally do with the feature vectors we pass in? (I understand it builds a model with respect to the labels provided, but how does that happen?)
The internal functioning of predict.
Any pointers to understanding these would help me. Thanks for your precious time in advance.

Regarding a specific Object Detection in OpenCV using WebCam and comparing it with an input Image

I am new to OpenCV and want to develop a program which takes the camera input and compares it with a known image of an object, fed in as a .jpg image. If the webcam input matches the fed-in image up to a certain level of accuracy, a message should be displayed that the required object has been found.
E.g.: if I hold a computer cable before the webcam, it needs to be detected and compared to the image of the computer cable I have fed into the program.
I've tried many techniques and find template matching to be effective, as mentioned in the following link:
Real-time template matching - OpenCV, C++
However, after drawing the rectangle and getting the roiImg, I want to compare its likeness with a known image on my disk (in the OpenCV working directory). For this I am trying to convert the roiImg and my other images to HSV format and get 4 values according to the algorithms.
I have tried to combine the two codes, but it doesn't seem to work: roiImg is created at runtime and cannot be compared with the other 2 images loaded via imread.
#include <iostream>
#include "opencv2/opencv.hpp"
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <sstream>
using namespace cv;
using namespace std;
Point point1, point2; /* vertical points of the bounding box */
int drag = 0;
Rect rect; /* bounding box */
Mat img, roiImg; /* roiImg - the part of the image in the bounding box */
int select_flag = 0;
bool go_fast = false;
Mat mytemplate;
Mat src_base, hsv_base;
Mat src_test1, hsv_test1;
Mat src_test2, hsv_test2;
Mat hsv_half_down;
///------- template matching -----------------------------------------------------------------------------------------------
Mat TplMatch( Mat &img, Mat &mytemplate )
{
Mat result;
matchTemplate( img, mytemplate, result, CV_TM_SQDIFF_NORMED );
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
return result;
}
///------- Localizing the best match with minMaxLoc ------------------------------------------------------------------------
Point minmax( Mat &result )
{
double minVal, maxVal;
Point minLoc, maxLoc, matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
matchLoc = minLoc;
return matchLoc;
}
///------- tracking --------------------------------------------------------------------------------------------------------
void track()
{
if (select_flag)
{
//roiImg.copyTo(mytemplate);
// select_flag = false;
go_fast = true;
}
// imshow( "mytemplate", mytemplate ); waitKey(0);
Mat result = TplMatch( img, mytemplate );
Point match = minmax( result );
rectangle( img, match, Point( match.x + mytemplate.cols , match.y + mytemplate.rows ), CV_RGB(255, 255, 255), 2 ); // thickness must be a positive int; 0.5 would truncate to 0
std::cout << "match: " << match << endl;
/// latest match is the new template
Rect ROI = cv::Rect( match.x, match.y, mytemplate.cols, mytemplate.rows );
roiImg = img( ROI );
roiImg.copyTo(mytemplate);
imshow( "roiImg", roiImg ); //waitKey(0);
//Compare the roiImg with a know image to calculate resemblence
/* Method          Base-Base   Base-Half   Base-Test 1   Base-Test 2
   Correlation     1.000000    0.930766    0.182073      0.120447
   Chi-square      0.000000    4.940466    21.184536     49.273437
   Intersection    24.391548   14.959809   3.889029      5.775088
   Bhattacharyya   0.000000    0.222609    0.646576      0.801869
   For the Correlation and Intersection methods, the higher the metric, the more accurate the match.
   As we can see, the base-base match is the highest of all, as expected, and the base-half match is
   the second best (as we predicted). For the other two metrics, the lower the result, the better the
   match; the matches of test 1 and test 2 against the base are worse, which again was expected. */
src_base = roiImg.clone(); // use the in-memory ROI directly; imread("roiImg") would look for a file named "roiImg" on disk and return an empty Mat
src_test1 = imread("Samarth.jpg");
src_test2 = imread("Samarth2.jpg");
//double l2_norm = cvNorm( src_base, src_test1 );
/// Convert to HSV
cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );
hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );
/// Using 50 bins for hue and 60 for saturation
int h_bins = 50; int s_bins = 60;
int histSize[] = { h_bins, s_bins };
// hue varies from 0 to 179, saturation from 0 to 255
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
const float* ranges[] = { h_ranges, s_ranges };
// Use the 0-th and 1st channels
int channels[] = { 0, 1 };
/// Histograms
MatND hist_base;
MatND hist_half_down;
MatND hist_test1;
MatND hist_test2;
/// Calculate the histograms for the HSV images
calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_half_down, 1, channels, Mat(), hist_half_down, 2, histSize, ranges, true, false );
normalize( hist_half_down, hist_half_down, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false );
normalize( hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );
/// Apply the histogram comparison methods
for( int i = 0; i < 4; i++ )
{
int compare_method = i;
double base_base = compareHist( hist_base, hist_base, compare_method );
double base_half = compareHist( hist_base, hist_half_down, compare_method );
double base_test1 = compareHist( hist_base, hist_test1, compare_method );
double base_test2 = compareHist( hist_base, hist_test2, compare_method );
printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_half , base_test1, base_test2 );
}
printf( "Done \n" );
}
///------- MouseCallback function ------------------------------------------------------------------------------------------
void mouseHandler(int event, int x, int y, int flags, void *param)
{
if (event == CV_EVENT_LBUTTONDOWN && !drag)
{
/// left button clicked. ROI selection begins
point1 = Point(x, y);
drag = 1;
}
if (event == CV_EVENT_MOUSEMOVE && drag)
{
/// mouse dragged. ROI being selected
Mat img1 = img.clone();
point2 = Point(x, y);
rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 3, 8, 0);
imshow("image", img1);
}
if (event == CV_EVENT_LBUTTONUP && drag)
{
point2 = Point(x, y);
rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
drag = 0;
roiImg = img(rect);
roiImg.copyTo(mytemplate);
// imshow("MOUSE roiImg", roiImg); waitKey(0);
}
if (event == CV_EVENT_LBUTTONUP)
{
/// ROI selected
select_flag = 1;
drag = 0;
}
}
///------- Main() ----------------------------------------------------------------------------------------------------------
int main()
{
int k;
///open webcam
VideoCapture cap(0);
if (!cap.isOpened())
return 1;
/* ///open video file
VideoCapture cap;
cap.open( "Wildlife.wmv" );
if ( !cap.isOpened() )
{ cout << "Unable to open video file" << endl; return -1; }*/
/*
/// Set video to 320x240
cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);*/
cap >> img;
GaussianBlur( img, img, Size(7,7), 3.0 );
imshow( "image", img );
while (1)
{
cap >> img;
if ( img.empty() )
break;
// Flip the frame horizontally and add blur
cv::flip( img, img, 1 );
GaussianBlur( img, img, Size(7,7), 3.0 );
if ( rect.width == 0 && rect.height == 0 )
cvSetMouseCallback( "image", mouseHandler, NULL );
else
track();
imshow("image", img);
// waitKey(100); k = waitKey(75);
k = waitKey(go_fast ? 30 : 10000);
if (k == 27)
break;
}
return 0;
}
If you want to detect an object in a live feed, detecting the object in each frame is not efficient. You detect it the first time, and after that you track it, so the process involves both detection and tracking.
For detection you have to segment the object from the rest. OpenCV provides many algorithms for segmenting an object from the background based on color (color based detection); other than color, you can use the object's shape to segment it from the background (shape based segmentation).
You can use the LK optical flow algorithm as a starting point for tracking; a minimal sketch follows below.
Additionally, you can use template matching, CamShift, the Median Flow tracker, etc. to obtain quick results. Which of the above algorithms is useful depends on the scale changes of the object and the lighting changes in the feed. OpenCV has sample programs for the above algorithms.
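As a rough illustration of the detect-then-track idea, here is a minimal Lucas-Kanade sketch (untested; the frame and point variables are assumptions, not from the original answer):
#include <opencv2/opencv.hpp>
#include <vector>
// Track feature points from the previous grayscale frame to the current one
// with pyramidal Lucas-Kanade; prevPts would come from the detection step.
std::vector<cv::Point2f> trackLK(const cv::Mat& prevGray, const cv::Mat& gray,
                                 const std::vector<cv::Point2f>& prevPts)
{
    std::vector<cv::Point2f> nextPts, tracked;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);
    for (size_t i = 0; i < status.size(); i++)
        if (status[i]) // keep only the points that were tracked successfully
            tracked.push_back(nextPts[i]);
    return tracked;
}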

Image Sharpening Using Laplacian Filter

I was trying sharpening on some standard images from Gonzalez's book. Below is some code that I have tried, but it does not get close to the results of the sharpened image.
cvSmooth(grayImg, grayImg, CV_GAUSSIAN, 3, 0, 0, 0);
IplImage* laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_16S, 1);
IplImage* abs_laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvLaplace(grayImg, laplaceImg, 3);
cvConvertScaleAbs(laplaceImg, abs_laplaceImg, 1, 0);
IplImage* dstImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvAdd(abs_laplaceImg, grayImg, dstImg, NULL);
(Images referenced: Before Sharpening, My Sharpening Result, Desired Result, Absolute Laplace.)
I think the problem is that you are blurring the image before taking the 2nd derivative.
Here is the working code with the C++ API (I'm using OpenCV 2.4.3). I also tried it with MATLAB and the result is the same.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int /*argc*/, char** /*argv*/) {
Mat img, imgLaplacian, imgResult;
//------------------------------------------------------------------------------------------- test, first of all
// now do it by hand
img = (Mat_<uchar>(4,4) << 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
// first, the good result
Laplacian(img, imgLaplacian, CV_8UC1);
cout << "let opencv do it" << endl;
cout << imgLaplacian << endl;
Mat kernel = (Mat_<float>(3,3) <<
0, 1, 0,
1, -4, 1,
0, 1, 0);
int window_size = 3;
// now, reaaallly by hand
// note that, for avoiding padding, the result image will be smaller than the original one.
Mat frame, frame32;
Rect roi;
imgLaplacian = Mat::zeros(img.size(), CV_32F);
for(int y=0; y<img.rows-window_size/2-1; y++) {
for(int x=0; x<img.cols-window_size/2-1; x++) {
roi = Rect(x,y, window_size, window_size);
frame = img(roi);
frame.convertTo(frame, CV_32F);
frame = frame.mul(kernel);
float v = sum(frame)[0];
imgLaplacian.at<float>(y,x) = v;
}
}
imgLaplacian.convertTo(imgLaplacian, CV_8U);
cout << "dudee" << imgLaplacian << endl;
// a little bit less "by hand"..
// using cv::filter2D
filter2D(img, imgLaplacian, -1, kernel);
cout << imgLaplacian << endl;
//------------------------------------------------------------------------------------------- real stuffs now
img = imread("moon.jpg", 0); // load grayscale image
// ok, now try different kernel
kernel = (Mat_<float>(3,3) <<
1, 1, 1,
1, -8, 1,
1, 1, 1); // another approximation of the second derivative, a stronger one
// do the laplacian filtering as it is
// well, we need to convert everything to something deeper than CV_8U
// because the kernel has some negative values,
// and we can expect in general to have a Laplacian image with negative values
// BUT an 8-bit unsigned int (the one we are working with) can hold values from 0 to 255
// so the possible negative numbers will be truncated
filter2D(img, imgLaplacian, CV_32F, kernel);
img.convertTo(img, CV_32F);
imgResult = img - imgLaplacian;
// convert back to 8bits gray scale
imgResult.convertTo(imgResult, CV_8U);
imgLaplacian.convertTo(imgLaplacian, CV_8U);
namedWindow("laplacian", CV_WINDOW_AUTOSIZE);
imshow( "laplacian", imgLaplacian );
namedWindow("result", CV_WINDOW_AUTOSIZE);
imshow( "result", imgResult );
while( true ) {
char c = (char)waitKey(10);
if( c == 27 ) { break; }
}
return 0;
}
Have fun!
I think the main problem lies in the fact that you do img + laplace, while img - laplace would give better results. I remember that img - 2*laplace was best, but I cannot find where I read that; probably in one of the books I read at university.
You need to do img - laplace instead of img + laplace.
laplace: f(x,y) = f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4*f(x,y)
So, if you subtract the Laplacian from the original image, you can see that the minus sign in front of 4*f(x,y) gets negated and this term becomes positive.
You could also use a kernel with 5 in the center pixel to make the sharpening a one-step process, instead of computing the Laplacian and then doing img - laplace. Why? Try deriving that yourself.
This would be the final kernel.
Mat kernel = (Mat_<float>(3,3) <<
0, -1, 0,
-1, 5, -1,
0, -1, 0);
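Spelling the derivation out: since convolution is linear, img - laplace can be folded into a single convolution with (identity kernel) - (Laplacian kernel):
[0 0 0]   [0  1  0]   [ 0 -1  0]
[0 1 0] - [1 -4  1] = [-1  5 -1]
[0 0 0]   [0  1  0]   [ 0 -1  0]
which is exactly the kernel above.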
It is indeed a well-known result in image processing that if you subtract its Laplacian from an image, the image edges are amplified, giving a sharper image.
Laplacian filter kernel algorithm: sharpened_pixel = 5 * current - left - right - up - down
So the code will look like this:
void sharpen(const Mat& img, Mat& result)
{
result.create(img.size(), img.type());
//Processing the inner edge of the pixel point, the image of the outer edge of the pixel should be additional processing
for (int row = 1; row < img.rows-1; row++)
{
//previous row
const uchar* previous = img.ptr<uchar>(row-1);
//current row being processed
const uchar* current = img.ptr<uchar>(row);
//next row
const uchar* next = img.ptr<uchar>(row+1);
uchar *output = result.ptr<uchar>(row);
int ch = img.channels();
int starts = ch;
int ends = (img.cols - 1) * ch;
for (int col = starts; col < ends; col++)
{
//The output pointer advances in sync with the current row; every channel value of every pixel gets the sharpening increment, so the number of channels has to be taken into account.
*output++ = saturate_cast<uchar>(5 * current[col] - current[col-ch] - current[col+ch] - previous[col] - next[col]);
}
} //end loop
//Processing boundary, the peripheral pixel is set to 0
result.row(0).setTo(Scalar::all(0));
result.row(result.rows-1).setTo(Scalar::all(0));
result.col(0).setTo(Scalar::all(0));
result.col(result.cols-1).setTo(Scalar::all(0));
}
int main()
{
Mat lena = imread("lena.jpg");
Mat sharpenedLena;
sharpen(lena, sharpenedLena);
imshow("lena", lena);
imshow("sharpened lena", sharpenedLena);
waitKey();
return 0;
}
If you are lazier, have fun with the following.
int main()
{
Mat lena = imread("lena.jpg");
Mat sharpenedLena;
Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);
cv::filter2D(lena, sharpenedLena, lena.depth(), kernel);
imshow("lena", lena);
imshow("sharpened lena", sharpenedLena);
waitKey();
return 0;
}
And the result looks like this.

Read HSV value of pixel in opencv

How would you go about reading a pixel value in HSV format rather than RGB? The code below reads the pixel value of the circles' centers in RGB format. Is there much difference when it comes to reading the value in HSV?
int main(int argc, char** argv)
{
//load image from directory
IplImage* img = cvLoadImage("C:\\Users\\Nathan\\Desktop\\SnookerPic.png");
IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
CvMemStorage* storage = cvCreateMemStorage(0);
//covert to grayscale
cvCvtColor(img, gray, CV_BGR2GRAY);
// This is done so as to prevent a lot of false circles from being detected
cvSmooth(gray, gray, CV_GAUSSIAN, 7, 7);
IplImage* canny = cvCreateImage(cvGetSize(img),IPL_DEPTH_8U,1);
IplImage* rgbcanny = cvCreateImage(cvGetSize(img),IPL_DEPTH_8U,3);
cvCanny(gray, canny, 50, 100, 3);
//detect circles
CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 1, 35.0, 75, 60,0,0);
cvCvtColor(canny, rgbcanny, CV_GRAY2BGR);
//draw all detected circles
for (int i = 0; i < circles->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
//uchar* ptr;
//ptr = cvPtr2D(img, center.y, center.x, NULL);
//printf("B: %d G: %d R: %d\n", ptr[0],ptr[1],ptr[2]);
CvScalar s;
s = cvGet2D(img,center.y, center.x);//colour of circle
printf("B: %f G: %f R: %f\n",s.val[0],s.val[1],s.val[2]);
// draw the circle center
cvCircle(img, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(img, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
//display coordinates
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
//create window
//cvNamedWindow("circles", 1);
cvNamedWindow("SnookerImage", 1);
//show image in window
//cvShowImage("circles", rgbcanny);
cvShowImage("SnookerImage", img);
cvSaveImage("out.png", img);
//cvDestroyWindow("SnookerImage");
//cvDestroyWindow("circles");
//cvReleaseMemStorage(&storage);
cvWaitKey(0);
return 0;
}
If you use the C++ interface, you can use
cv::cvtColor(img, img, CV_BGR2HSV);
See the documentation for cvtColor for more information.
Update:
Reading and writing pixels the slow way (assuming that the HSV values are stored as a cv::Vec3b (doc))
cv::Vec3b pixel = image.at<cv::Vec3b>(0,0); // read pixel (0,0) (make copy)
pixel[0] = 0; // H
pixel[1] = 0; // S
pixel[2] = 0; // V
image.at<cv::Vec3b>(0,0) = pixel; // write pixel (0,0) (copy pixel back to image)
Using the image.at<...>(y, x) notation (doc, scroll down a lot) is quite slow if you want to manipulate every pixel; note that at<>() takes (row, col). There is an article in the documentation on how to access the pixels faster. You can also apply the iterator method, like this:
cv::MatIterator_<cv::Vec3b> it = image.begin<cv::Vec3b>(),
it_end = image.end<cv::Vec3b>();
for(; it != it_end; ++it)
{
// work with pixel in here, e.g.:
cv::Vec3b& pixel = *it; // reference to pixel in image
pixel[0] = 0; // changes pixel in image
}
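Tying this back to the question: after converting once with cvtColor, reading the colour at a circle's centre in HSV could look like this sketch (the file name and the centre point are assumptions):
#include <opencv2/opencv.hpp>
#include <cstdio>
int main()
{
    cv::Mat img = cv::imread("SnookerPic.png"); // assumed path
    cv::Mat hsv;
    cv::cvtColor(img, hsv, CV_BGR2HSV);
    cv::Point center(100, 100); // e.g. a detected circle centre
    // note: at<>() takes (row, col), i.e. (y, x)
    cv::Vec3b p = hsv.at<cv::Vec3b>(center.y, center.x);
    std::printf("H: %d S: %d V: %d\n", p[0], p[1], p[2]);
    return 0;
}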
