Error in implementation of local thresholding in Octave

I'm trying to implement the Sauvola & Pietikäinen method to binarize an image via local thresholding.
The method defines the threshold of each pixel (x,y) as T(x,y) = mean(x,y) * [1 + k*(std(x,y)/R - 1)], as in the article "Adaptive Document Image Binarization". The mean and the standard deviation are computed over a neighbourhood of (x,y). k and R are suggested to be 0.5 and 128, respectively.
This is what my code looks like:
filtered = colfilt(image, [n n], "sliding", @(x) (mean(x).*(1+0.5*(std(x)/128 - 1))));
image(image < filtered) = 0;
image(image >= filtered) = 255;
However, for all images I tested, the result is an entirely blank image, which is obviously incorrect. I think I must be misusing some element of the colfilt function, but I'm too new to Octave and haven't been able to find it.
Could someone please give me a hand?
Thanks in advance.

I can't see a problem. You really should include your source, and perhaps also your input image and your value of n. Btw, you shouldn't overwrite function names (like image in your case).
Input image:
pkg load image
img = imread ("lenna256.jpg");
k = 0.5;
R = 128;
n = 5;
filtered = colfilt(img, [n n], "sliding", @(x) (mean(x).*(1 + k*(std(x)/R - 1))));
img(img < filtered) = 0;
img(img >= filtered) = 255;
image (img)
imwrite (img, "lenna_out.png")
which creates the binarized output image.
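For larger images, colfilt can be slow. A minimal sketch of a faster alternative, assuming the same img, k, R and n as above: compute the local mean and standard deviation with an ordinary averaging kernel (this uses the population standard deviation where colfilt with std uses the sample one, a negligible difference for binarization):
a = double(img);
w = ones(n) / n^2;                                        % n-by-n averaging kernel
m = imfilter(a, w, "symmetric");                          % local mean
s = sqrt(max(imfilter(a.^2, w, "symmetric") - m.^2, 0));  % local std; clamp tiny negatives
T = m .* (1 + k*(s/R - 1));                               % Sauvola threshold per pixel
out = uint8(255 * (a >= T));                              % binarize
imshow(out)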


how to remove a stamp from an image with opencv

I am working on an OCR project, and in the preprocessing some RED stamps need to be removed, so that the text near the stamps can be detected. I have tried a lot of methods (like changing pixel values, or thresholding in the red channel) but failed.
Any suggestions are highly appreciated.
Python, C++, Java or what? Since you didn't state the OpenCV implementation you are using, I'm giving my answer in C++.
One option is to use the HSV color space to filter out the range of red values that defines the seal. My approach instead is to use the CMYK color space to filter everything except the black (or dark) text. It should do a pretty good job on printed media, which is your case.
//read input image:
std::string imageName = "C://opencvImages//seal.png";
cv::Mat imageInput = cv::imread( imageName );
Now, perform the CMYK conversion. OpenCV does not support this operation out of the box, so bear with me: I provide the helper function at the end of this post.
//CMYK conversion:
std::vector<cv::Mat> cmyk;
cmyk = rgb2cmyk( imageInput );
//This is the Black channel:
cv::Mat blackChannel = cmyk[3].clone();
This is the image of the black channel; it is nice how everything that is not black (or dark) practically disappears!
Now, optionally, enhance the result by applying a brightness and contrast adjustment, just to separate the text from the background a little better; we want well-defined pixel distributions to get a nice binary image.
//Brightness and contrast adjustment:
float alpha = 2.0;
float beta = -50.0;
contrastBrightnessAdjustment( blackChannel, alpha, beta );
This linear transformation is available in OpenCV via cv::Mat::convertTo, but a manual implementation is also very easy. Hold on a little bit, and let me show you the result of this operation:
Nice. Let's Otsu-threshold this bad boy to get a nice binary image containing the clean text:
cv::Mat binaryImage;
cv::threshold( blackChannel, binaryImage, 0, 255, cv::THRESH_OTSU );
This is what you get:
Now, the RGB to CMYK conversion function. I'm using the following implementation. The function receives an RGB image and returns a vector containing each of the CMYK channels:
std::vector<cv::Mat> rgb2cmyk( cv::Mat& inputImage ){
    std::vector<cv::Mat> cmyk;
    for (int i = 0; i < 4; i++) {
        cmyk.push_back( cv::Mat( inputImage.size(), CV_8UC1 ) );
    }
    std::vector<cv::Mat> inputRGB;
    cv::split( inputImage, inputRGB );
    for (int i = 0; i < inputImage.rows; i++) {
        for (int j = 0; j < inputImage.cols; j++) {
            // normalize each channel to [0,1]; OpenCV stores pixels as BGR
            float r = (int)inputRGB[2].at<uchar>(i, j) / 255.;
            float g = (int)inputRGB[1].at<uchar>(i, j) / 255.;
            float b = (int)inputRGB[0].at<uchar>(i, j) / 255.;
            float k = std::min(std::min(1 - r, 1 - g), 1 - b);
            // guard against division by zero on pure black pixels (k == 1)
            float d = (k < 1.f) ? (1.f - k) : 1.f;
            cmyk[0].at<uchar>(i, j) = cv::saturate_cast<uchar>( (1 - r - k) / d * 255. );
            cmyk[1].at<uchar>(i, j) = cv::saturate_cast<uchar>( (1 - g - k) / d * 255. );
            cmyk[2].at<uchar>(i, j) = cv::saturate_cast<uchar>( (1 - b - k) / d * 255. );
            cmyk[3].at<uchar>(i, j) = cv::saturate_cast<uchar>( k * 255. );
        }
    }
    return cmyk;
}
And here is the contrastBrightnessAdjustment function, implemented with an OpenCV matrix iterator. The function receives a grayscale image and applies the linear transformation via the alpha and beta parameters:
void contrastBrightnessAdjustment( cv::Mat inputImage, float alpha, float beta ){
    // the input is grayscale (CV_8UC1), so iterate over single uchar pixels
    cv::MatIterator_<uchar> it, end;
    for (it = inputImage.begin<uchar>(), end = inputImage.end<uchar>(); it != end; ++it) {
        *it = cv::saturate_cast<uchar>( alpha * (*it) + beta );
    }
}

Obtain sigma of gaussian blur between two images

Suppose I have an image A and I apply a Gaussian blur on it with Sigma=3, so I get another image B. Is there a way to recover the applied sigma if A and B are given?
Further clarification:
Image A:
Image B:
I want to write a function that take A,B and return Sigma:
double get_sigma(cv::Mat const& A,cv::Mat const& B);
Any suggestions?
EDIT1: The suggested approach doesn't work in practice in its original form (i.e. using only 9 equations for a 3 x 3 kernel), which I only realized later. See EDIT1 below for an explanation and EDIT2 for a method that works.
EDIT2: As suggested by Humam, I used the Least Squares Estimate (LSE) to find the coefficients.
I think you can estimate the filter kernel by solving a linear system of equations in this case. A linear filter weighs the pixels in a window by its coefficients, takes their sum, and assigns this value to the center pixel of the window in the result image. So, for a 3 x 3 filter with coefficients h11 ... h33,
the resulting pixel value in the filtered image is
result_pix_value = h11 * a(y, x)   + h12 * a(y, x+1)   + h13 * a(y, x+2) +
                   h21 * a(y+1, x) + h22 * a(y+1, x+1) + h23 * a(y+1, x+2) +
                   h31 * a(y+2, x) + h32 * a(y+2, x+1) + h33 * a(y+2, x+2)
where a's are the pixel values within the window in the original image. Here, for the 3 x 3 filter you have 9 unknowns, so you need 9 equations. You can obtain those 9 equations using 9 pixels in the resulting image. Then you can form an Ax = b system and solve for x to obtain the filter coefficients. With the coefficients available, I think you can find the sigma.
In the following example I'm using non-overlapping windows to obtain the equations.
You don't have to know the size of the filter. If you use a larger size, the coefficients that are not relevant will be close to zero.
Your result image size is different from the input image's, so I didn't use that image for the following calculation. Instead I used your input image and applied my own filter.
I tested this in Octave. You can quickly run it if you have Octave/Matlab. For Octave, you need to load the image package.
I'm using the following kernel to blur the image:
h =
0.10963 0.11184 0.10963
0.11184 0.11410 0.11184
0.10963 0.11184 0.10963
When I estimate it using a window size of 5, I get the following. As I said, the coefficients that are not relevant are close to zero.
g =
9.5787e-015 -3.1508e-014 1.2974e-015 -3.4897e-015 1.2739e-014
-3.7248e-014 1.0963e-001 1.1184e-001 1.0963e-001 1.8418e-015
4.1825e-014 1.1184e-001 1.1410e-001 1.1184e-001 -7.3554e-014
-2.4861e-014 1.0963e-001 1.1184e-001 1.0963e-001 9.7664e-014
1.3692e-014 4.6182e-016 -2.9215e-014 3.1305e-014 -4.4875e-014
EDIT1:
First of all, my apologies.
This approach doesn't really work in practice. I used filt = conv2(a, h, 'same'); in the code. The resulting image data type in this case is double, whereas an actual image's data type is usually uint8, so there's a loss of information, which we can think of as noise. I simulated this with the minor modification filt = floor(conv2(a, h, 'same'));, and then I don't get the expected results.
The sampling approach is also not ideal, because it can produce a degenerate system. A better approach is to use random sampling, avoiding the borders and making sure the entries in the b vector are unique. That way, in the ideal case as in my code, we make sure the system Ax = b has a unique solution.
One approach would be to reformulate this as an Mv = 0 system and try to minimize the squared norm of Mv under the constraint that the squared norm of v is 1, which we can solve using the SVD. I could be wrong here, and I haven't tried this.
Another approach is to use the symmetry of the Gaussian kernel. Then a 3 x 3 kernel has only 3 unknowns instead of 9. I think this amounts to imposing additional constraints on the v of the previous paragraph.
I'll try these out and post the results, even if I don't get the expected results.
EDIT2:
Using the LSE, we can find the filter coefficients as pinv(A'*A)*A'*b. For completeness, I'm adding a simple (and slow) LSE code.
Initial Octave Code:
clear all
im = double(imread('I2vxD.png'));
k = 5;
r = floor(k/2);
a = im(:, :, 1); % take the red channel
h = fspecial('gaussian', [3 3], 5); % filter with a 3x3 gaussian
filt = conv2(a, h, 'same');
% use non-overlapping windows to form the Ax = b system
% NOTE: boundary error checking isn't performed in the code below
s = floor(size(a)/2);
y = s(1);
x = s(2);
w = k*k;
y1 = s(1)-floor(w/2) + r;
y2 = s(1)+floor(w/2);
x1 = s(2)-floor(w/2) + r;
x2 = s(2)+floor(w/2);
b = [];
A = [];
for y = y1:k:y2
for x = x1:k:x2
b = [b; filt(y, x)];
f = a(y-r:y+r, x-r:x+r);
A = [A; f(:)'];
end
end
% estimated filter kernel
g = reshape(A\b, k, k)
LSE method:
clear all
im = double(imread('I2vxD.png'));
k = 5;
r = floor(k/2);
a = im(:, :, 1); % take the red channel
h = fspecial('gaussian', [3 3], 5); % filter with a 3x3 gaussian
filt = floor(conv2(a, h, 'same'));
s = size(a);
y1 = r+2; y2 = s(1)-r-2;
x1 = r+2; x2 = s(2)-r-2;
b = [];
A = [];
for y = y1:2:y2
for x = x1:2:x2
b = [b; filt(y, x)];
f = a(y-r:y+r, x-r:x+r);
f = f(:)';
A = [A; f];
end
end
g = reshape(A\b, k, k) % A\b returns the least squares solution
%g = reshape(pinv(A'*A)*A'*b, k, k)
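With the coefficients available, sigma can be recovered, which is what get_sigma is ultimately supposed to return. A minimal sketch, assuming the estimated g came out close to a normalized Gaussian kernel: for a Gaussian, the ratio of the center coefficient to its horizontal neighbour equals exp(1/(2*sigma^2)), so
c = ceil(size(g,1)/2);              % center index of the estimated kernel
ratio = g(c,c) / g(c,c+1);          % center coefficient over horizontal neighbour
sigma = sqrt(1 / (2*log(ratio)))    % invert ratio = exp(1/(2*sigma^2))
For the g printed above this gives a sigma of roughly 5, matching the fspecial('gaussian', [3 3], 5) call. A more robust variant would fit log(g) against the Gaussian exponent over all significant coefficients instead of using a single ratio.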

how to superimpose two images?

I have a visualization output of a Gabor filter with 12 different orientations. I want to superimpose the visualization image on my image of a retina for vessel extraction. How do I do it? I have tried the method below. Is there any other method to perform superimposition of images in MATLAB?
here is my code
I = getimage();
I=I(:,:,2);
lambda = 8;
theta = 0;
psi = [0 pi/2];
gamma = 0.5;
bw = 1;
N = 2;
img_in = im2double(I);
%img_in(:,:,2:3) = []; % discard redundant channels, it's gray anyway
img_out = zeros(size(img_in,1), size(img_in,2), N);
for n=1:N
gb = gabor_fn(bw,gamma,psi(1),lambda,theta)...
+ 1i * gabor_fn(bw,gamma,psi(2),lambda,theta);
% gb is the n-th gabor filter
img_out(:,:,n) = imfilter(img_in, gb, 'symmetric');
% filter output to the n-th channel
%theta = theta + 2*pi/N
%figure;
%imshow(img_out(:,:,n));
imshow(img_in); hold on;
h = imagesc(img_out(:,:,n)); % here i am getting error saying CDATA must be size[M*N]
set( h, 'AlphaData', .5 ); % .5 transparency
figure;
imshow(h);
theta = 15 * n; % next orientation
end
this is my original image
this is the visualized image I got from the Gabor filter using the orientations
this is the kind of image I have to get with respect to visualisation, i.e. I have to superimpose the visualized image on my original image and get this type of image
With the information you have provided, my understanding is you want the third/final image to be an overlay on top of the first/initial image. I do things like this when using segmentation to detect hemorrhaging in MRI images of the brain.
First, let's set up some definitions:
I_src = source/original image
I_out = output/final image
Now, make a copy of I_src and make it a color image rather than grayscale.
I_hybrid = gray2rgb(I_src) % color copy of the source (see the gray2rgb reference below)
Let's assume both I_src and I_out are the same visual dimensions (ie: width, height), and that I_out is strictly black-and-white (ie: monochrome). Now, we can use I_out as a mask template for alpha channel adjustments in the resulting image. This is where it gets fun.
BLACK = 0;
WHITE = 1;
[height width] = size(I_out);
for i = 1:1:height
    for j = 1:1:width
        if (I_out(i,j) == WHITE)
            % add a red tint to the masked pixel
            I_hybrid(i,j,1) = I_hybrid(i,j,1) + 0.25;
        end
    end
end
This will give you your original image with the blood vessels in the eye slightly brighter and tinted red. You now have a composite of your original image with the desired features highlighted, but not overwritten (ie: you can undo the highlighting by subtracting the color vector again).
I will include an example of what the output would look like, but it's noisy because I had to create it in GIMP as I don't have Matlab installed right now. The results will be similar, but yours would be much cleaner and prettier.
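Since you also asked for other ways to superimpose images in MATLAB, here is a minimal alpha-blending sketch under the same assumptions (I_src grayscale, I_out a monochrome mask of the same size):
rgb = repmat(mat2gray(I_src), [1 1 3]);          % grayscale -> RGB in [0,1]
red = cat(3, ones(size(I_out)), zeros(size(I_out)), zeros(size(I_out)));
a   = repmat(0.5 * double(I_out == 1), [1 1 3]); % 50% opacity where the mask is white
imshow(rgb .* (1 - a) + red .* a)                % blend the red layer over the source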
Please let me know how this goes.
References
"Converting Images from Grayscale to Color" http://blogs.mathworks.com/pick/2012/11/25/converting-images-from-grayscale-to-color/

which filter is being used by the get_convolve() function in the CImg library

Which kind of filter is used by the CImg library's get_convolve() function (written in C++)? Median, Gaussian, bilateral, or some other?
I tried to understand the function so that I can use similar functionality in PIL or OpenCV. In the header file CImg.h of the library, it says:
/**
Compute the convolution of the image by a mask.
The result \p res of the convolution of an image \p img by a mask \p mask is defined to be :
res(x,y,z) = sum_{i,j,k} img(x-i,y-j,z-k)*mask(i,j,k)
param mask = the correlation kernel.
param cond = the border condition type (0=zero, 1=dirichlet)
param weighted_convol = enable local normalization.
**/
Declaration is like this:
template<typename t> CImg<typename cimg::superset2<T,t,float>::type>
get_convolve(const CImg<t>& mask, const unsigned int cond=1, const bool weighted_convol=false) const {}
Here is a code snippet:
for (int z = mz1; z<mze; ++z)
for (int y = my1; y<mye; ++y)
for (int x = mx1; x<mxe; ++x) {// For each pixel
Ttfloat val = 0;
for (int zm = -mz1; zm<=mz2; ++zm)
for (int ym = -my1; ym<=my2; ++ym)
for (int xm = -mx1; xm<=mx2; ++xm)
val+=(*this)(x+xm,y+ym,z+zm,v)*mask(mx1+xm,my1+ym,mz1+zm);
dest(x,y,z,v) = (Ttfloat)val;
}
if (cond)
cimg_forYZV(*this,y,z,v)
for (int x = 0; x<dimx(); (y<my1 || y>=mye || z<mz1 || z>=mze)?++x:((x<mx1-1 || x>=mxe)?++x:(x=mxe))) {
Ttfloat val = 0;
for (int zm = -mz1; zm<=mz2; ++zm) for (int ym = -my1; ym<=my2; ++ym) for (int xm = -mx1; xm<=mx2; ++xm)
val+=at3(x+xm,y+ym,z+zm,v)*mask(mx1+xm,my1+ym,mz1+zm);
dest(x,y,z,v) = (Ttfloat)val;
}else
cimg_forYZV(*this,y,z,v)
I am using a mask of 7 x 7 in which every value is '1'.
What I got from the function is that, for each pixel, it takes a 7 x 7 window centred on that pixel and multiplies it by the mask, summing the results. It feels like some kind of smoothing filter, but which one is it? Which equivalent filter can I use in OpenCV?
I can post the whole function, but it's too long and I don't see the point. I would be really thankful for your help.
So, I found the answer in the thesis of the person who implemented pHash. It said:
During the process of calculating pHash, a mean filter is applied to the image. A kernel with dimension 7x7 is used. To apply this kernel, the get_convolve() function of the CImg library is used.
It is then highlighted as:
For an image I and a mask M it is:
R(x,y,z) = sum_{i,j,k} I(x-i, y-j, z-k) * M(i,j,k)
Then, when I looked at the filtering functions offered by OpenCV here, it matched the box filter function.
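For intuition, a small Octave/MATLAB sketch (not the OpenCV call itself; the input file name is hypothetical): convolving with a 7 x 7 all-ones mask computes a box sum, and dividing by 49 turns it into exactly the mean (box) filter the thesis describes:
img = double(imread('input.png'));          % hypothetical input file
img = img(:,:,1);                           % conv2 expects a single channel
box_sum  = conv2(img, ones(7), 'same');     % what get_convolve computes with an all-ones mask
box_mean = conv2(img, ones(7)/49, 'same');  % normalized: the 7x7 mean (box) filter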

Implementing Otsu binarization for faded images of documents

I'm trying to implement Otsu binarization technique on document images such as the one shown:
Could someone please tell me how to implement the code in MATLAB?
Taken from Otsu's method on Wikipedia
I = imread('cameraman.tif');
Step 1. Compute histogram and probabilities of each intensity level.
nbins = 256; % Number of bins
counts = imhist(I,nbins); % histogram over the intensity levels 0 to 255
p = counts / sum(counts); % Probabilities
Step 2. Set up initial omega_i(0) and mu_i(0)
omega1 = 0;
omega2 = 1;
mu1 = 0;
mu2 = mean(I(:));
Step 3. Step through all possible thresholds from 0 to maximum intensity (255)
Step 3.1 Update omega_i and mu_i
Step 3.2 Compute sigma_b_squared
for t = 1:nbins
omega1(t) = sum(p(1:t));
omega2(t) = sum(p(t+1:end));
mu1(t) = sum(p(1:t).*(1:t)');
mu2(t) = sum(p(t+1:end).*(t+1:nbins)');
end
sigma_b_squared_wiki = omega1 .* omega2 .* (mu2./omega2 - mu1./omega1).^2; % Eq. (14), using the class means
sigma_b_squared_otsu = (mu1(end).*omega1 - mu1).^2 ./ (omega1 .* (1-omega1)); % Eq. (18), using cumulative moments
Step 4 Desired threshold corresponds to the location of maximum of sigma_b_squared
[~,thres_level_wiki] = max(sigma_b_squared_wiki);
[~,thres_level_otsu] = max(sigma_b_squared_otsu);
Note that eq. (14) is written in terms of the class means, hence the divisions by omega1 and omega2 above, while Otsu's eq. (18) works directly on the cumulative moments; the two are algebraically equivalent and peak at the same threshold. The thres_level_otsu corresponds to MATLAB's implementation graythresh(I).
Since the function graythresh in MATLAB implements the Otsu method, what you have to do is convert your image to grayscale and then use the im2bw function to binarize it using the threshold level returned by graythresh.
To convert your image I to grayscale you can use the following code:
I = im2uint8(I);
if size(I,3) ~= 1
I = rgb2gray(I);
end;
To get the binary image Ib using the Otsu's method, use the following code:
Ib = im2bw(I, graythresh(I));
You should get the following result:
Starting with what your initial question was, implementing Otsu thresholding: it's true that MATLAB's graythresh function is based on that method.
Otsu's method treats the threshold value as the valley between two histogram peaks, one belonging to the foreground pixels and the other to the background pixels.
Pertaining to your image, which seems to be a historical manuscript, I found this paper that compares the methods that can be used for thresholding document images.
You can also download and read up on Sauvola thresholding from here.
Good luck with the implementation =)
Corrected MATLAB Implementation (for 2d matrix)
function [T] = myotsu(I,N);
% create histogram
nbins = N;
[x,h] = hist(I(:),nbins);
% calculate probabilities
p = x./sum(x);
% initialisation
om1 = 0;
om2 = 1;
mu1 = 0;
mu2 = mode(I(:));
for t = 1:nbins,
om1(t) = sum(p(1:t));
om2(t) = sum(p(t+1:nbins));
mu1(t) = sum(p(1:t).*[1:t]);
mu2(t) = sum(p(t+1:nbins).*[t+1:nbins]);
end
sigma = (mu1(nbins).*om1-mu1).^2./(om1.*(1-om1));
idx = find(sigma == max(sigma));
T = h(idx(1));
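A quick usage sketch for myotsu (the file name is hypothetical; any 2-D grayscale matrix works):
I = double(imread('manuscript.png'));   % hypothetical document image
if ndims(I) == 3, I = I(:,:,1); end     % keep a single channel
T = myotsu(I, 256);                     % threshold from a 256-bin histogram
Ib = I >= T;                            % binarize: text vs background
imshow(Ib)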
