Adaptive Fourier Filter - image-processing

Main question
Has someone already created a free adaptive Fourier filter for Digital Micrograph (or alternatively ImageJ)?
About the adaptive Fourier filter
I want to use some effective filtering processes for my TEM image processing. I came across the adaptive Fourier filtering technique introduced by Möbus et al. in 1993 [1]. In short, this is a reciprocal-space filtering technique with the following workflow:
FFT( Image ) --> Mask * FFT( Image ) --> iFFT( Mask * FFT( Image ) )
The new feature of this filter is that its shape is adapted to the spectrum of the image, and the windows of the mask are placed automatically at all positions that allow an optimal separation of signal from noise [2].
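For readers less familiar with the workflow, here is a minimal NumPy sketch of the generic FFT -> mask -> inverse-FFT scheme. The mask construction (thresholding a smoothed power spectrum), the function name and the parameters are only illustrative stand-ins, not the actual adaptive mask of Möbus et al.
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_filter(image, rel_threshold=0.1, smooth_sigma=2.0):
    # Generic reciprocal-space filtering: FFT -> mask -> inverse FFT.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Crude stand-in for the adaptive mask: keep frequencies whose smoothed
    # magnitude exceeds a fraction of the spectrum maximum.
    magnitude = gaussian_filter(np.abs(spectrum), smooth_sigma)
    mask = magnitude > rel_threshold * magnitude.max()
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
A real adaptive filter would replace the thresholding step with the window placement described in [1].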
What have I already tried?
The filter is available in the HREM Filters Pro package from HREM Research https://www.hremresearch.com/Eng/plugin/FiltersEng.html , but my institute does not have a license for this. I have found DM scripts for other filters such as Wiener filters and average background subtracted filters on the DM script database https://www.felmi-zfe.at/dm_script, but there is no adaptive filter.
So what was the question again?
Since I have no experience with DM scripting myself, I would prefer to find or adjust an already existing DM script on adaptive Fourier filtering. Alternatively, I also do some of my image processing in ImageJ, so a script for this program would work as well. Do any of you know whether such scripts already exist?
Sources
[1] Möbus, G., G. Necker, and M. Rühle. "Adaptive Fourier-filtering technique for quantitative evaluation of high-resolution electron micrographs of interfaces." Ultramicroscopy 49.1-4 (1993): 46-65.
[2] Kret, S., et al. "Extracting quantitative information from high resolution electron microscopy." physica status solidi (b) 227.1 (2001): 247-295.

The Adaptive Threshold ImageJ plugin, which can be downloaded from
https://sites.google.com/site/qingzongtseng/adaptivethreshold
is indeed an adaptive filter.

I'm not aware of an (open source) script for this, but a base template for a Fourier-space filtering script in DigitalMicrograph would be:
// Create and show test image
realimage img := RealImage( "Test Image 2D", 4, 512, 512 )
img = abs( itheta*2*icol/(iwidth+1)* sin(iTheta*10) ) + 15*(irow<iheight/2 ? irow : iheight-irow )/iheight
img = PoissonRandom(100*img)
img.ShowImage()
// Transform to Fourier Space
compleximage img_FFT := FFT(img)
// Create "Mask" or Filter in Fourier Space
// This is where all the "adaptive" things have to happen to create
// the correct mask. The below is just a dummy
image mask := RealImage("Mask",4, 512,512 )
mask = (iradius<iheight/3 && iradius>5 ) ? 1 : 0
mask = SQRT((icol-iwidth/2-100)**2+(irow-iheight/2-50)**2) < 25 ? 0 : mask
mask = SQRT((icol-iwidth/2+100)**2+(irow-iheight/2+50)**2) < 25 ? 0 : mask
mask.ShowImage()
// Apply mask
img_FFT *= mask
img_FFT.SetName( "Masked FFT" )
img_FFT.ShowImage()
// Transform back
image img_filter := modulus(iFFT(img_FFT))
img_filter.SetName( img.GetName() + " Filtered" )
img_filter.ShowImage()
// Just arrange
EGUPerformActionWithAllShownImages("arrange")

Related

Texture transformation

I am working on eigen transformation - texture to detect objects in an image. This work was published in ACCV 2006, page 71. The full text is available in chapter 3 of this PDF: https://www.diva-portal.org/smash/get/diva2:275069/FULLTEXT01.pdf. I am not able to follow what comes after getting the texture descriptors.
I am working on the attached image. The image size is 954x1440.
I took image patches of 32x32 and for every patch calculated the eigenvalues to get the texture descriptor. What to do with these texture descriptors after that is what I am not able to follow.
Any help to unblock me will be really appreciated. The code for calculating the descriptors looks like below:
import numpy as np

w = 32
descriptors = np.zeros((gray.shape[0]//w, gray.shape[1]//w))
for i in range(gray.shape[0]//w):
    for j in range(gray.shape[1]//w):
        # eigenvalues of the 32x32 patch, sorted in descending order
        sorted_eigen = -np.sort(-np.linalg.eigvals(gray[i*w:(i+1)*w, j*w:(j+1)*w]))
        # average the magnitudes of the trailing eigenvalues
        l = 13
        k = w
        theta_svd = (1/(k-l+1)) * np.sum([np.abs(val) for val in sorted_eigen[l:k]])
        descriptors[i, j] = theta_svd

Gabor filter parameters for fingerprint image enhancement?

I am a beginner in image processing and in Gabor filters, and I want to use this filter to enhance a fingerprint image.
I have read many articles about fingerprint image enhancement, and I know that the steps are:
read image -> normalize -> get orientation map -> Gabor filter -> binarize -> skeleton
Now I am at step 4. My question is how to get the right values for (lambda and gamma) for the Gabor filter.
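For reference, the parameter order of cv2.getGaborKernel is (ksize, sigma, theta, lambd, gamma, psi); lambd is the wavelength of the sinusoid in pixels and gamma is the spatial aspect ratio of the Gaussian envelope. A rough, assumption-laden starting point (the numeric values below are guesses to tune, not a definitive recipe) could be:
import numpy as np
import cv2

ksize = 11                 # kernel size in pixels
sigma = 3.0                # standard deviation of the Gaussian envelope
theta = np.deg2rad(45)     # stripe orientation, taken from the orientation map
lambd = 8.0                # sinusoid wavelength ~ ridge period of the fingerprint in pixels
gamma = 0.5                # spatial aspect ratio (1.0 = circular envelope)
psi = 0.0                  # phase offset
kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi)
If lambd is far from the actual ridge spacing of the image, the filter response degrades quickly, which may be part of the problem described below.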
My image:
My code:
1- Read the image and get the orientation map using HOG features:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog

imgc = imread(r'C:\Users\iP\Desktop\printe.jpg', as_gray=True)
imgc = resize(imgc, (64*3, 128*3))
rows, cols = imgc.shape
offset = 24
ori = 9  # to get angles (0, 45, 90, 135) only
fd, hog_image = hog(imgc, orientations=ori, pixels_per_cell=(offset, offset),
                    cells_per_block=(1, 1), visualize=True, multichannel=None,
                    feature_vector=False)
Orientation map:
2- Reshape the orientation map from (8, 16, 1, 1, 9) to (8, 16, 9), where 8 is the number of rows, 16 the number of columns, and 9 the number of orientations:
fd = np.array(fd)
fd = np.reshape(fd, (fd.shape[0], fd.shape[1], ori))
# from (8, 16, 9) to (8, 16, 1)
# choose the angle that has the most potential (biggest magnitude)
angels = np.zeros((fd.shape[0], fd.shape[1], 1))
for r in range(fd.shape[0]):
    for c in range(fd.shape[1]):
        bloc_prop = fd[r, c]
        angelss = bloc_prop.reshape((1, ori))
        angel = np.argmax(angelss)
        angels[r, c] = angel
angels = angels.astype(np.int32)
3- The convolution function:
def conv_gabor(img, orient_map, gabor_kernel_shape):
    # loop over all pixels in the image and convolve each one with the
    # Gabor kernel for its angle in the orientation map
    roo, coo = img.shape
    # padding value for the image before convolving it with the kernels
    pad = gabor_kernel_shape - 1
    padded = np.zeros((img.shape[0]+pad, img.shape[1]+pad))  # add the extra rows and cols
    padded[int(pad/2):-int(pad/2), int(pad/2):-int(pad/2)] = img  # copy the image inside the padded one
    # result image
    dst = padded.copy()
    # start from the image inside the padding
    for r in range(int(pad/2), int(pad/2)+roo):
        for c in range(int(pad/2), int(pad/2)+coo):
            # get the angle from the orientation map
            ro = (r-int(pad/2)) // offset
            co = (c-int(pad/2)) // offset
            ang = orient_map[ro, co, 0]
            real_angel = (180/ori) * ang
            # block around the pixel to convolve
            block = padded[r-int(pad/2):r+int(pad/2)+1, c-int(pad/2):c+int(pad/2)+1]
            # get the Gabor kernel
            # here is my question ->> how to get the parameter values for (lambda, gamma and psi)?
            ker = cv2.getGaborKernel((gabor_kernel_shape, gabor_kernel_shape), 3,
                                     np.deg2rad(real_angel), np.pi/4, 0.001, 0)
            dst[r, c] = np.sum(ker * block)
    return dst

dst = conv_gabor(imgc, angels, 11)
dst:
You can see the image is too bad, and I don't know why. I think it is because of the lambda and gamma values, or what?
But when I filter with only one angle, 45:
ker= cv2.getGaborKernel( (11,11), 2, np.deg2rad(45),np.pi/4,0.5,0 )
filt = cv2.filter2D(imgc,cv2.CV_64F,ker)
plt.imshow(filt,'gray')
Result:
You can see that the edges at 45 degrees on the left are of good quality.
Can anyone help me please, and tell me what I should do about this problem?
Thanks all :)
EDIT:
I searched for another way and found that I can use a Gabor filter bank with many orientations and pick the best score in the filtered images. So how can I find the best score for pixels from the filtered images?
This is the output when I use a Gabor filter bank with the angles 45, 60, 65, 90 and 135, divide the filtered images into 16*16 blocks, find the highest standard deviation (best score - I use the standard deviation as the score) for each block, and take that block from the best filtered image.
As you can see, there are good and bad parts in the image. I think using the standard deviation alone is ineffective in some parts of the image, so my new question is: what is the best score function that gives me good output in all parts of the image?
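For concreteness, here is a sketch of the block-wise selection described above (the kernel parameters are copied from the code earlier in this question; the block size and function name are assumptions for illustration):
import numpy as np
import cv2

def best_response_per_block(img, angles=(45, 60, 65, 90, 135), block=16):
    # Filter with a small Gabor bank, then for every block keep the response
    # of the filter whose block has the highest standard deviation (the score
    # used above).
    responses = []
    for angle in angles:
        ker = cv2.getGaborKernel((11, 11), 2, np.deg2rad(angle), np.pi/4, 0.5, 0)
        responses.append(cv2.filter2D(img, cv2.CV_64F, ker))
    result = np.zeros_like(responses[0])
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            blocks = [resp[r:r+block, c:c+block] for resp in responses]
            scores = [b.std() for b in blocks]
            result[r:r+block, c:c+block] = blocks[int(np.argmax(scores))]
    return result
Any other score (local contrast, ridge coherence, etc.) can be swapped in for b.std() in the same loop.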
Original image:
In my opinion, weighting the filtered images might be enough for your task. Considering your filter orientations, the filters with angles 45 and 135 respond quite well in different regions of the image. So you can calculate a weighted sum to get the best filter result.
img = cv2.imread('fingerprint.jpg',0)
w_45 = 0.5
w_135 = 0.5
img_45 = cv2.filter2D(img,cv2.CV_64F,cv2.getGaborKernel( (11,11), 2, np.deg2rad(45),np.pi/4,0.5,0 ))
img_135 = cv2.filter2D(img,cv2.CV_64F,cv2.getGaborKernel( (11,11), 2, np.deg2rad(135),np.pi/4,0.5,0 ))
result = img_45*w_45+img_135*w_135
result = result/np.amax(result)*255
plt.imshow(result,cmap='gray')
plt.show()
Feel free to play with the weights. The result totally depends on what your next step is.

Homography matrix in Opencv?

In LATCH_match.cpp in opencv_3.1.0 the homography matrix is defined and used as:
Mat homography;
FileStorage fs("../data/H1to3p.xml", FileStorage::READ);
...
fs.getFirstTopLevelNode() >> homography;
...
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
...
Why is H1to3p.xml the following?
<opencv_storage><H13 type_id="opencv-matrix"><rows>3</rows><cols>3</cols><dt>d</dt><data>
7.6285898e-01 -2.9922929e-01 2.2567123e+02
3.3443473e-01 1.0143901e+00 -7.6999973e+01
3.4663091e-04 -1.4364524e-05 1.0000000e+00 </data></H13></opencv_storage>
By which criteria were these numbers chosen? Can they be used for any other homography test for filtering keypoints (as in LATCH_match.cpp)?
I assume that your "LATCH_match.cpp in opencv_3.1.0" is
https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp
In that file, you find:
// If you find this code useful, please add a reference to the following paper in your work:
// Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015
And so, looking at http://arxiv.org/pdf/1501.03719v1.pdf you will find
For each set, we compare the first image against each of the remaining five and check for correspondences. Performance is measured using the code from [16, 17], which computes recall and 1-precision using known ground truth homographies between the images.
I think that the image ../data/graf1.png is https://github.com/Itseez/opencv/blob/3.1.0/samples/data/graf1.png that I show here:
According to the comment on "Homography matrix in Opencv?" by Catree, the original dataset is at http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/graf.tar.gz, where it is said that
"Homographies between image pairs included."
So I think that the homography stored in file ../data/H1to3p.xml is the homography between image 1 and image 3.
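To make the role of the matrix concrete, here is a small NumPy sketch (with an arbitrary, assumed keypoint location) of how such a ground-truth homography maps a pixel from image 1 into image 3, mirroring the col = homography * col step in LATCH_match.cpp:
import numpy as np

# Ground-truth homography from H1to3p.xml (image 1 -> image 3)
H = np.array([[7.6285898e-01, -2.9922929e-01,  2.2567123e+02],
              [3.3443473e-01,  1.0143901e+00, -7.6999973e+01],
              [3.4663091e-04, -1.4364524e-05,  1.0000000e+00]])

x, y = 100.0, 200.0                    # assumed keypoint location in image 1
p = H @ np.array([x, y, 1.0])          # lift to homogeneous coordinates and project
x3, y3 = p[0] / p[2], p[1] / p[2]      # divide by the third component
print(x3, y3)
A keypoint match can then be accepted when the detected point in image 3 lies within a small pixel distance of (x3, y3), which is the kind of filtering done in LATCH_match.cpp.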

How can I remove sinusoidal noise with frequency-domain filtering in OpenCV?

I'm trying to remove the '5 lines' (the staff lines) in each section of sheet music. My original image is this: http://en.wikipedia.org/wiki/Requiem_(Mozart)#/media/File:K626_Requiem_Mozart.jpg
First, I apply a Gaussian filter and binarize with a threshold (min: 100, max: 255).
Then I apply the DFT to this image, erase some appropriate lines, and reconstruct the image by the inverse DFT.
I use the sample code from the OpenCV documentation; actually, I doubt that I understand this code. :(
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
In this sample code there are two Mats: one is 'complexI' for the spectrum, the other is 'magI' for the actual visualization. The result of cv::dft is complexI, and magI is the normalized complexI. My question is: how can I add a black line (to cancel frequencies in the frequency domain) and reconstruct the image?
OpenCV (now) provides a detailed tutorial on how to deal with periodic noise by spectral filtering: https://docs.opencv.org/trunk/d2/d0b/tutorial_periodic_noise_removing_filter.html
It hinges on using cv::dft(), cv::idft(), cv::mulSpectrums(), and cv::magnitude().
The core function (from the tutorial) to perform the filtering goes like so:
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H)
{
    Mat planes[2] = { Mat_<float>(inputImg.clone()), Mat::zeros(inputImg.size(), CV_32F) };
    Mat complexI;
    merge(planes, 2, complexI);
    // find FT of image
    dft(complexI, complexI, DFT_SCALE);

    Mat planesH[2] = { Mat_<float>(H.clone()), Mat::zeros(H.size(), CV_32F) };
    Mat complexH;
    merge(planesH, 2, complexH);
    Mat complexIH;
    // apply spectral filter
    mulSpectrums(complexI, complexH, complexIH, 0);

    // reconstruct the filtered image
    idft(complexIH, complexIH);
    split(complexIH, planes);
    outputImg = planes[0];
}
Refer to the tutorial for more information.
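For the specific case of horizontal staff lines, the line energy concentrates along the vertical axis of the centred spectrum. As an illustration only (a NumPy sketch of the same notch idea, with the band half-width and the protected low-frequency radius as assumptions to tune):
import numpy as np

def suppress_horizontal_lines(img, band=2, keep_radius=10):
    # Zero the vertical strip of the centred spectrum that horizontal lines
    # project onto, but keep the very low frequencies near DC.
    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2
    spec = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    mask = np.ones((rows, cols))
    mask[:, cx - band:cx + band + 1] = 0.0
    y, x = np.ogrid[:rows, :cols]
    mask[(y - cy)**2 + (x - cx)**2 <= keep_radius**2] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
To use a mask like this with filter2DFreq above, it has to be laid out in the same frequency order as complexI (shifted vs. un-shifted), so check how the tutorial constructs its H.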

Converting RGB to grayscale/intensity

When converting from RGB to grayscale, it is said that specific weights for the R, G, and B channels ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.
It is said that the reason for this is the different human perception of / sensitivity to these three colors. Sometimes it is also said these are the values used to compute the NTSC signal.
However, I didn't find a good reference for this on the web. What is the source of these values?
See also these previous questions: here and here.
The specific numbers in the question are from CCIR 601 (see the Wikipedia article).
If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
Here are some more links on color in general:
Wikipedia: Luma
Bruce Lindbloom's outstanding web site
Chapter 4 on Color in the book by Colin Ware, "Information Visualization", ISBN 1-55860-819-2; this long link to Ware on books.google.com may or may not work
cambridgeincolor: excellent, well-written "tutorials on how to acquire, interpret and process digital photographs using a visually-oriented approach that emphasizes concept over procedure"
Should you run into "linear" vs "nonlinear" RGB, here's part of an old note to myself on this. Repeat, in practice you won't see much difference.
RGB -> ^gamma -> Y -> L*
In color science, the common RGB values, as in html rgb( 10%, 20%, 30% ), are called "nonlinear" or gamma corrected. "Linear" values are defined as
Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma
where gamma is 2.2 for many PCs. The usual R G B are sometimes written as R' G' B' (R' = Rlin^(1/gamma)) (purists tongue-click), but here I'll drop the '.
Brightness on a CRT display is proportional to RGBlin = RGB^gamma, so 50% gray on a CRT is quite dark: .5^2.2 = 22% of maximum brightness. (LCD displays are more complex; furthermore, some graphics cards compensate for gamma.)
To get the measure of lightness called L* from RGB, first divide R, G, B by 255, and compute
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
This is Y in XYZ color space; it is a measure of color "luminance". (The real formulas are not exactly x^gamma, but close; stick with x^gamma for a first pass.)
Finally,
L* = 116 * Y^(1/3) - 16
which "... aspires to perceptual uniformity [and] closely matches human perception of lightness." -- Wikipedia: Lab color space
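Put as code, the recipe above (divide by 255, apply the x^gamma approximation with gamma = 2.2, then the Y and L* formulas) is simply:
def luminance_and_lightness(r, g, b, gamma=2.2):
    # r, g, b are the usual nonlinear (gamma-corrected) values in 0..255
    rlin, glin, blin = (r / 255.0)**gamma, (g / 255.0)**gamma, (b / 255.0)**gamma
    y = 0.2126 * rlin + 0.7152 * glin + 0.0722 * blin     # luminance Y
    lstar = 116.0 * y**(1.0 / 3.0) - 16.0                 # lightness L*
    return y, lstar

print(luminance_and_lightness(128, 128, 128))   # 50% gray -> Y ~ 0.22, L* ~ 54
Remember this uses the x^gamma approximation; the exact sRGB and CIE formulas differ slightly, especially near black.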
I found this publication referenced in an answer to a previous similar question. It is very helpful, and the page has several sample images:
Perceptual Evaluation of Color-to-Grayscale Image Conversions by Martin Čadík, Computer Graphics Forum, Vol 27, 2008
The publication explores several other methods to generate grayscale images with different outcomes:
CIE Y
Color2Gray
Decolorize
Smith08
Rasche05
Bala04
Neumann07
Interestingly, it concludes that there is no universally best conversion method, as each performed better or worse than others depending on input.
Here's some code in C to convert RGB to grayscale.
The real weighting used for RGB-to-grayscale conversion is 0.3R + 0.6G + 0.11B.
These weights aren't absolutely critical, so you can play with them.
I have made them 0.25R + 0.5G + 0.25B. It produces a slightly darker image.
NOTE: The following code assumes xRGB 32bit pixel format
unsigned int *pntrBWImage = (unsigned int*)..data pointer..; // assumes 4*width*height bytes with 32 bits i.e. 4 bytes per pixel
unsigned int fourBytes;
unsigned char r, g, b;
for (int index = 0; index < width*height; index++)
{
    fourBytes = pntrBWImage[index];   // caches 4 bytes at a time
    r = (fourBytes >> 16);
    g = (fourBytes >> 8);
    b = fourBytes;
    I_Out[index] = (r >> 2) + (g >> 1) + (b >> 2);   // This runs in 0.00065s on my pc and produces slightly darker results
    //I_Out[index] = ((unsigned int)(r+g+b))/3;      // This runs in 0.0011s on my pc and produces a pure average
}
Check out the Color FAQ for information on this. These values come from the standardization of RGB values that we use in our displays. Actually, according to the Color FAQ, the values you are using are outdated, as they are the values used for the original NTSC standard and not modern monitors.
What is the source of these values?
The "source" of the coefficients posted are the NTSC specifications which can be seen in Rec601 and Characteristics of Television.
The "ultimate source" are the CIE circa 1931 experiments on human color perception. The spectral response of human vision is not uniform. Experiments led to weighting of tristimulus values based on perception. Our L, M, and S cones1 are sensitive to the light wavelengths we identify as "Red", "Green", and "Blue" (respectively), which is where the tristimulus primary colors are derived.2
The linear light3 spectral weightings for sRGB (and Rec709) are:
Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y
These are specific to the sRGB and Rec709 colorspaces, which are intended to represent computer monitors (sRGB) or HDTV monitors (Rec709), and are detailed in the ITU documents for Rec709 and also BT.2380-2 (10/2018)
FOOTNOTES
(1) Cones are the color detecting cells of the eye's retina.
(2) However, the chosen tristimulus wavelengths are NOT at the "peak" of each cone type - instead, the tristimulus values are chosen such that they stimulate one particular cone type substantially more than another, i.e. separation of stimulus.
(3) You need to linearize your sRGB values before applying the coefficients. I discuss this in another answer here.
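As a small illustration of footnote (3), here is a sketch that linearizes 8-bit sRGB with the standard piecewise sRGB transfer function and then applies the coefficients above (the function name is mine, for illustration):
import numpy as np

def srgb_to_luminance(rgb8):
    # rgb8: array-like of 8-bit sRGB values with shape (..., 3); returns linear-light Y
    c = np.asarray(rgb8, dtype=float) / 255.0
    # sRGB decoding: linear segment near black, power curve elsewhere
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return 0.2126 * lin[..., 0] + 0.7152 * lin[..., 1] + 0.0722 * lin[..., 2]

print(srgb_to_luminance([128, 128, 128]))   # ~0.216: 50% sRGB gray is about 22% linear light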
Starting a list to enumerate how different software packages do it. Here is a good CVPR paper to read as well.
FreeImage
#define LUMA_REC709(r, g, b) (0.2126F * r + 0.7152F * g + 0.0722F * b)
#define GREY(r, g, b) (BYTE)(LUMA_REC709(r, g, b) + 0.5F)
OpenCV
nVidia Performance Primitives
Intel Performance Primitives
Matlab
nGray = 0.299F * R + 0.587F * G + 0.114F * B;
These values vary from person to person, especially for people who are colorblind.
Is all this really necessary? Human perception and CRT vs. LCD will vary, but the R, G, B intensities do not. Why not L = (R + G + B)/3 and set the new RGB to L, L, L?
