I have an image that I would like to zoom into and view at high detail. It is of unknown size and mostly black and white, with some text on it. When I zoom in, the text becomes unreadable, and I thought this was due to not having enough pixels/texels to display, so I upscaled the image by a factor of 2. Now that I have scaled it, it is still unreadable.
Then I started to use OpenCV with:
void resizeFirst(){
    Mat src = imread( "../floor.png", 1 );

    // upscale 3x with bicubic interpolation first
    Mat tmp;
    resize( src, tmp, Size(), 3, 3, INTER_CUBIC );

    Mat dst = tmp.clone();
    Size s(3,3);
    //blur( tmp, dst, s, Point(-1,-1) );//homogeneous
    GaussianBlur( tmp, dst, s, 3 );//gaussian
    //medianBlur ( tmp, dst, 5 );//median
    //bilateralFilter ( tmp, dst, 5, 5*2, 5/2 );

    // unsharp mask: 1.5*upscaled - 0.5*blurred
    addWeighted( tmp, 1.5, dst, -0.5, 0, dst );
    imwrite( "sharpenedImage.png", dst );
}
void blurFirst(){
    Mat src = imread( "../floor.png", 1 );

    // sharpen first (unsharp mask: 2*original - 1*blurred) ...
    Size s(3,3);
    Mat dst;
    GaussianBlur( src, dst, s, 3 );//gaussian
    addWeighted( src, 2, dst, -1, 0, dst );

    // ... then upscale 3x
    Mat tmp;
    resize( dst, tmp, Size(), 3, 3, INTER_CUBIC );
    imwrite( "sharpenedImage0.png", tmp );
}
and the output is better, but the image still isn't sharp. Does anyone have any ideas on how to keep text sharp when zooming into an image?
EDIT: below are sample images.
The first is the lower-resolution original; the second is the one I resized and applied the Gaussian sharpening above to.
The resize function offers different interpolation methods:
INTER_NEAREST nearest-neighbor interpolation
INTER_LINEAR bilinear interpolation (used by default)
INTER_AREA resampling using pixel area relation. It may be the preferred method for image decimation, as it gives moiré-free results; when the image is zoomed, however, it behaves like INTER_NEAREST
INTER_CUBIC bicubic interpolation over 4x4 pixel neighborhood
INTER_LANCZOS4 Lanczos interpolation over 8x8 pixel neighborhood
Try all the interpolation methods and use the one that suits you best. Note that resize preserves the aspect ratio when you pass equal fx and fy factors, as in your code; it only changes the aspect ratio if the factors differ or you pass an explicit Size with a different ratio.
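A quick way to compare them side by side, as a minimal sketch reusing the ../floor.png path from the question:
cv::Mat src = cv::imread( "../floor.png", 1 );
cv::Mat nearest, linear, cubic, lanczos;
// the same 3x upscale with four interpolation methods
cv::resize( src, nearest, cv::Size(), 3, 3, cv::INTER_NEAREST );
cv::resize( src, linear,  cv::Size(), 3, 3, cv::INTER_LINEAR );
cv::resize( src, cubic,   cv::Size(), 3, 3, cv::INTER_CUBIC );
cv::resize( src, lanczos, cv::Size(), 3, 3, cv::INTER_LANCZOS4 );
cv::imwrite( "nearest.png", nearest );
cv::imwrite( "linear.png",  linear );
cv::imwrite( "cubic.png",   cubic );
cv::imwrite( "lanczos.png", lanczos );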
Related
I want to detect the very minimal movement of a conveyor belt using image evaluation (resolution: 31x512, frame rate: 1000 images per second). The moment the belt starts moving is important for me.
If I do cv::absdiff between two subsequent images, I obtain a very noisy result:
According to the mechanical rotation sensor of the motor, the movement starts here:
I tried to threshold the abs-diff image with a cascade of erosion and dilation, but the earliest change I could detect was more than a second too late, in this image:
Is it possible to find the change earlier?
Here is the sequence of images without changes (according to the motor sensor):
In this sequence the movement begins in the middle image:
Looks like I've found a solution which works in MY case.
Instead of comparing the image changes in the space domain, cross-correlation should be applied:
I convert both images with DFT, multiply the DFT mats, and convert back. The maximum pixel marks the center of the correlation. As long as the images are the same, the max pixel remains in the same position; it moves otherwise.
The actual working code uses 3 images and computes 2 DFT multiplication results: between images 1 and 2, and between images 2 and 3:
Mat img1_( 512, 32, CV_16UC1 );
Mat img2_( 512, 32, CV_16UC1 );
Mat img3_( 512, 32, CV_16UC1 );
//Read the data into the images however you want. I read from an MHD file.
//Set ROI (if required)
Mat img1 = img1_(cv::Rect(0,200,32,100));
Mat img2 = img2_(cv::Rect(0,200,32,100));
Mat img3 = img3_(cv::Rect(0,200,32,100));
//Float mats for DFT
Mat img1f;
Mat img2f;
Mat img3f;
//DFT and product mats
Mat dft1,dft2,dft3,dftproduct,dftproduct2;
//Calculate the DFT of each image
img1.convertTo(img1f, CV_32FC1);
cv::dft(img1f, dft1);
img2.convertTo(img2f, CV_32FC1);
cv::dft(img2f, dft2);
img3.convertTo(img3f, CV_32FC1);
cv::dft(img3f, dft3);
//Multiply the DFT mats; conjB=true conjugates the second spectrum,
//which gives cross-correlation rather than convolution
cv::mulSpectrums(dft1,dft2,dftproduct,0,true);
cv::mulSpectrums(dft2,dft3,dftproduct2,0,true);
//Convert back to space domain
cv::Mat result,result2;
cv::idft(dftproduct,result);
cv::idft(dftproduct2,result2);
//Not sure if required, I needed it for visualizing
cv::normalize( result, result, 0, 255, NORM_MINMAX, CV_8UC1);
cv::normalize( result2, result2, 0, 255, NORM_MINMAX, CV_8UC1);
//Find maxima positions
double dummy;
Point locdummy; Point maxLoc1; Point maxLoc2;
cv::minMaxLoc(result, &dummy, &dummy, &locdummy, &maxLoc1);
cv::minMaxLoc(result2, &dummy, &dummy, &locdummy, &maxLoc2);
//Multiply the peak coordinates simply to have one value to compare
int maxlocProd1 = maxLoc1.x*maxLoc1.y;
int maxlocProd2 = maxLoc2.x*maxLoc2.y;
//Calculate absolute difference of the products. Non-zero means movement.
//This sits inside the per-frame loop (hence the break); id is the frame index.
int absPosDiff = std::abs(maxlocProd2-maxlocProd1);
if ( absPosDiff>0 )
{
    std::cout << id << std::endl;
    break;
}
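As a side note, OpenCV's cv::phaseCorrelate wraps this dft/mulSpectrums/idft round trip and returns the translation between two frames directly. A minimal sketch, assuming the CV_32FC1 mats img1f and img2f from above (the half-pixel tolerance is an assumption to tune):
// the returned shift stays (0,0) while the belt stands still
// and becomes non-zero once it moves
cv::Point2d shift = cv::phaseCorrelate( img1f, img2f );
bool moved = std::abs(shift.x) > 0.5 || std::abs(shift.y) > 0.5;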
I have a contour that I would like to "snap" to edges in an image. That is, something like Intelligent Scissors, but for the whole contour at the same time. A user has provided a rough sketch of the outline of an object, and I'd like to clean it up by "pushing" each point on the contour to the nearest point in an edge image.
Does something like this exist in OpenCV?
You can mimic active contours using cv::grabCut, as suggested. You choose the radius of attraction (how far from the original position the curve can evolve), and by using dilated and eroded masks, you define the unknown region around the contour.
// cv::Mat img, mask; // contour drawn on mask as a filled polygon
if ( mask.size()!=img.size() )
    CV_Error(CV_StsError,"ERROR");

int R = 32; // radius of attraction
cv::Mat strel = cv::getStructuringElement( cv::MORPH_ELLIPSE, cv::Size(2*R+1,2*R+1) );

// label image: definite background everywhere ...
cv::Mat gc( mask.size(), CV_8UC1, cv::Scalar(cv::GC_BGD) );
cv::Mat t;
// ... probable background within R outside the contour ...
cv::dilate( mask, t, strel );
gc.setTo( cv::GC_PR_BGD, t );
// ... probable foreground on the contour itself ...
gc.setTo( cv::GC_PR_FGD, mask ); // 3
// ... and definite foreground deeper than R inside it
cv::erode( mask, t, strel );
gc.setTo( cv::GC_FGD, t ); // 1

// initialize from the mask (GC_INIT_WITH_MASK), not from a rect
cv::Mat bgdModel, fgdModel;
cv::grabCut( img, gc, cv::Rect(), bgdModel, fgdModel, 2, cv::GC_INIT_WITH_MASK );

gc &= 0x1;  // keep GC_FGD (1) and GC_PR_FGD (3): both have the low bit set
gc *= 255;  // so that you see it
What you may lose is the topology of the contour; some post-processing is required there. Also, you cannot control the curvature or smoothness of the contour, and it's not really contour evolution in the strict sense.
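If you need the snapped contour back as a point list, a minimal sketch (assuming a single dominant region; if grabCut split the mask you would have to handle several components here):
std::vector<std::vector<cv::Point> > contours;
cv::findContours( gc, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE );

// take the largest component as the snapped contour
std::vector<cv::Point> snapped;
double best = -1;
for ( size_t i = 0; i < contours.size(); i++ )
{
    double a = cv::contourArea( contours[i] );
    if ( a > best ) { best = a; snapped = contours[i]; }
}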
If you are interested, ITK's geodesic active contour might be what you are looking for: http://www.itk.org/Doxygen/html/classitk_1_1GeodesicActiveContourLevelSetImageFilter.html
I am using this method to rotate a cv::Mat. Whenever I run it I get back a rotated image, but there is a lot of dead space below it.
void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
    // rotate about the center of a len x len square, len being the
    // larger image dimension, so nothing is clipped during rotation
    int len = std::max(src.cols, src.rows);
    cv::Point2f pt(len/2., len/2.);
    cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);
    cv::warpAffine(src, dst, r, cv::Size(len, len));
}
When given this image:
I get this image:
The image has been rotated, but as you can see some extra pixels have been added. How can I rotate only the original image without adding any extra pixels?
Method call:
rotate(src, skew, res);
res being dst.
As mayank-baddi said, you have to make the output image size the same as the input to resolve this, and my answer is based on your comment above, "How can I avoid adding the black area?" after warpAffine.
So you have to do the following:
Create a white image a little bigger than your source; how much bigger depends on your skew angle. Here I used 50 pixels.
int extend=50;
Mat tmp(src.rows+2*extend,src.cols+2*extend,src.type(),Scalar::all(255));
Copy the source into it using an ROI:
Rect ROI(extend,extend,src.cols,src.rows);
src.copyTo(tmp(ROI));
Now rotate tmp instead of src
rotate(tmp, skew, res); // res being dst
Crop the final image back out of the rotated result using the same ROI:
Mat crop=res(ROI);
imshow("crop",crop);
You have to define the output image size when using the warpAffine transform.
Here you are defining the size as cv::Size(len, len) where len is max of height and width.
cv::warpAffine(src, dst, r, cv::Size(len, len));
Define/calculate the size of the final image accordingly.
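One common way to calculate it, as a sketch rather than the answerer's code (boundingRect2f assumes OpenCV 3 or later): rotate around the true image center, take the bounding box of the rotated rectangle, and shift the transform so the result is centered:
cv::Mat rotateNoCrop(const cv::Mat& src, double angle)
{
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    cv::Mat r = cv::getRotationMatrix2D(center, angle, 1.0);

    // bounding box of the source rectangle rotated by the same angle
    cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();

    // shift the transform so the rotated image lands centered in the box
    r.at<double>(0, 2) += bbox.width  / 2.0 - center.x;
    r.at<double>(1, 2) += bbox.height / 2.0 - center.y;

    cv::Mat dst;
    cv::warpAffine(src, dst, r, bbox.size());
    return dst;
}
This gives a tightly fitting canvas with no clipping and no more dead space than the rotation itself requires.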
I have an image like the following (a white figure on a red background; the figure has two thin red lines inside it)
and I want to receive the following image (the red background removed, but not the two red lines inside the figure)
I was trying convexHull from OpenCV, but obviously that approach works only on convex figures. My feeling is that convolution may help here, but I have no real idea yet.
Dilate and Erode should work for your example:
Mat image = imread("image1.jpg");

int erosion_size = 5;
int dilation_size = 6;
int threshold_value = 200;

// binary mask of the bright (white) figure
Mat mask;
cvtColor( image, mask, CV_BGR2GRAY );
//BINARY THRESHOLDING
threshold( mask, mask, threshold_value, 255, 0);

Mat erosion_element = getStructuringElement(MORPH_RECT, Size( 2*erosion_size + 1, 2*erosion_size+1 ), Point( erosion_size, erosion_size ) );
Mat dilation_element = getStructuringElement(MORPH_RECT, Size( 2*dilation_size + 1, 2*dilation_size+1 ), Point( dilation_size, dilation_size ) );

// dilate then erode (a morphological closing): the dilation fills the
// thin red lines inside the figure, and the slightly larger erosion
// restores the outline, shrinking it by one pixel overall
dilate(mask, mask, erosion_element);
erode(mask, mask, dilation_element);

// copy only the masked region onto a black target
Mat target;
image.copyTo(target, mask);
imshow("hello",target);
waitKey();
Output:
Suggestions :)
just floodFill()
convexHull does what the name says, but it has a companion, convexityDefects (a minimal sketch follows below)
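For the convexityDefects route, a minimal sketch, assuming contour is a std::vector<cv::Point> you already extracted (e.g. with findContours):
// returnPoints=false makes convexHull return indices into contour,
// which is the form convexityDefects expects
std::vector<int> hullIdx;
cv::convexHull( contour, hullIdx, false, false );

std::vector<cv::Vec4i> defects;
cv::convexityDefects( contour, hullIdx, defects );
// each cv::Vec4i holds: start index, end index, index of the farthest
// point from the hull, and the fixed-point depth of that defect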
He-he, it looks like convolution with a circle whose diameter is slightly bigger than the line thickness (8 pixels, for example) works!
So the algorithm will look as follows:
convolve with a circle whose diameter is slightly bigger than the line thickness
normalize the convolution; you are interested in values greater than 0.95-0.97
for each point of the convolution result with a value greater than 0.95-0.97, zero all neighbors within range R = diameter/2
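A minimal sketch of the convolve-and-threshold steps, assuming redMask is an 8-bit 0/255 mask of the red pixels and an 8-pixel disc diameter (both are assumptions; adjust to your line thickness):
int d = 8; // assumed: slightly bigger than the line thickness
cv::Mat disc = cv::getStructuringElement( cv::MORPH_ELLIPSE, cv::Size(d, d) );

// normalize the circular kernel so the response is 1.0 wherever
// the disc fits entirely inside the red region
cv::Mat kernel;
disc.convertTo( kernel, CV_32F, 1.0 / cv::countNonZero(disc) );

cv::Mat response;
cv::filter2D( redMask, response, CV_32F, kernel );
response /= 255.0; // bring the response into [0,1]

// keep only near-complete hits (the 0.95-0.97 range from above);
// what survives is the background, not the thin lines
cv::Mat background = response > 0.95f;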
I've been reading about feature detection and wanted to try the Harris corner detector. I realize that it is achieved by calling
void cornerHarris(InputArray src, OutputArray dst, int blockSize, int ksize, double k, int borderType=BORDER_DEFAULT )
where the dst is an image of floats containing corner strengths at each pixel.
I wanted to see it work so I wanted to apply it to the following picture:
The result produced was this:
As you can tell, the results are not good. It looks to me like it just picked up noise; the main corners were not even detected.
Here is the code I used to draw corners on the image. I used threshold and set an arbitrary threshold value.
int _tmain(int argc, _TCHAR* argv[])
{
    Mat img, dst, threshed;
    img = imread("c:\\laptop.jpg",0); // load as grayscale

    dst = Mat::zeros(img.size(), CV_32FC1);
    cornerHarris(img, dst, 2, 3, 0.04, BORDER_DEFAULT);

    // inverted binary threshold: corners come out black on white
    threshold(dst, threshed, 0.00001, 255, THRESH_BINARY_INV);

    namedWindow("meh", CV_WINDOW_AUTOSIZE);
    imshow("meh", threshed);
    //imwrite("harris.jpg", threshed);

    waitKey(0);
    return 0;
}
If I reduce the threshold, the result is white with just a few black dots (detections). Increasing the threshold just produces a noisier image.
Am I missing something? How can I improve the quality of this function?
Thank you
You can try the goodFeaturesToTrack function. It is built on top of the Harris corner detector but filters out the noise and returns only strong corners.
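A minimal sketch, with parameter values that are just starting points, not tuned to your image:
std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack( img, corners,
                         100,    // maxCorners
                         0.01,   // qualityLevel, relative to the best corner
                         10.0,   // minDistance between corners, in pixels
                         cv::noArray(),
                         3,      // blockSize
                         true,   // useHarrisDetector
                         0.04 ); // Harris k, as in your cornerHarris call

// draw the detections on a color copy for inspection
cv::Mat vis;
cv::cvtColor( img, vis, CV_GRAY2BGR );
for ( size_t i = 0; i < corners.size(); i++ )
    cv::circle( vis, corners[i], 4, cv::Scalar(0,0,255), -1 );
cv::imshow( "corners", vis );
cv::waitKey(0);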