How to do a live FFT of a sub-area (ROI) of an image using DigitalMicrograph script

I mean that both a change of the sub-area and a change of the underlying data should trigger the FFT, so that I can see a live FFT of a sub-area of a live image.

You can simply use the command NewLiveFFT for this.
Below is an example script. Note that if a ROI selection is present on the image, it will be used. Otherwise the whole image is used. You can, of course, create a specific ROI via script and add it as well.
number kRasterDisplay = 1
image img
// Abort if there is no front-most image
if (!GetFrontImage(img)) exit(0)
imageDisplay disp = img.ImageGetImageDisplay(0)
// Only act on raster image displays
if (kRasterDisplay != disp.ImageDisplayGetDisplayType()) exit(0)
// Use the first ROI on the display; if there is none, the whole image is used
ROI sel = disp.ImageDisplayGetRoi(0)
NewLiveFFT(disp, sel, 0)
Note that Live FFTs stop automatically if the data type of the source image changes or if the source image is closed. Also, the ROI of a Live FFT cannot be resized. It would be possible to create a script that allows ROI resizing, but you would then have to use ROI listeners and code the corresponding links and FFTs yourself.

Related

Change the background from transparent to some color using gimp

I have a large set of images. I want to change their background to a specific color, let's say green. All of the images have a transparent background. Is there a way to perform this action using Python-Fu scripting in GIMP, or some other tool that can do this specific task in an automated fashion?
Yes there is -
Although this question is not an exact duplicate, it is not practical to retype the basics of creating a Python plug-in every time someone asks to automate some task in GIMP.
I will have to ask you to look at GIMP: Create image stack from all image files in folder and maybe some other python-fu related answers for the basics.
After you get a simple "hello world" script going, simply register a script that asks for a string with the desired folder, use Python's os.listdir or glob.glob to fetch the paths to the image files, and loop through them, repeating these calls:
image = pdb.gimp_file_load(...)                    # load one input file
image.new_layer(pos=1, fill_mode=FOREGROUND_FILL)  # filled layer just below the top layer
pdb.gimp_file_save(...)                            # write the result back out
pdb.gimp_image_delete(image)                       # drop the image from GIMP's memory
The parameters to the PDB calls are easy to check in GIMP's procedure browser. The image's new_layer method is not really documented, but it can replace 3-4 PDB calls. Its possible parameters are "name", "width", "height", "offset_x", "offset_y", "alpha", "pos", "opacity", "mode", and "fill_mode", all of which are optional. "pos" is the layer position: 0 is the top of the image, and 1 is just below the topmost layer.
It should be clear that the gimp_image_delete call just removes the image from memory, not the file from disk; simply de-referencing it on the Python side won't make GIMP forget it. Likewise, if you want to interact with an image opened in this way, you have to call pdb.gimp_display_new for that image.
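Putting these pieces together, a minimal sketch of such a plug-in body might look like this (the function name is illustrative and the register() call is omitted for brevity; the PDB calls themselves are standard Python-Fu):

import os
import glob
from gimpfu import *

def green_backgrounds(folder):
    # Hypothetical helper: give every PNG in 'folder' a green background
    pdb.gimp_context_set_foreground((0, 255, 0))   # green
    for path in glob.glob(os.path.join(folder, '*.png')):
        image = pdb.gimp_file_load(path, path)
        # New layer just below the top layer, filled with the foreground color
        image.new_layer(pos=1, fill_mode=FOREGROUND_FILL)
        flat = pdb.gimp_image_flatten(image)
        # Overwrites the original file; point this elsewhere to keep the originals
        pdb.gimp_file_save(image, flat, path, path)
        pdb.gimp_image_delete(image)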
You should be able to accomplish this by setting the background color to your desired color and then flattening the image, which will fill all transparent areas with the current background color.
pdb.gimp_palette_set_background('green')   # gimp_context_set_background in GIMP 2.8+
image = pdb.gimp_file_load('myImage.png', 'myImage.png')
flatLayer = pdb.gimp_image_flatten(image)  # returns the single remaining layer
pdb.gimp_file_save(image, flatLayer, 'myFlatFile.png', 'myFlatFile.png')

Find the maximum connected component

I have a binary image (image 1). I want to detect where the figure (which may include big text) is in the original image. I used a Haar wavelet transform to obtain an image B marking positions that may belong to the figure of A (image 2). If I compute image A - image B = image C (image 3), the result may not be good because some boundary remains. How can I remove the boundary, or detect the figure in image A exactly?
I tried to use connected components, but it ran over the time limit.
Here are my images (I can't upload images directly here):
image A (original)
image B (position of the figure)
image A - image B = image C (that is, if A(i,j)==1 and B(i,j)==1 then C(i,j)=0)
Standard connected component algorithms will work fine and execute in linear time.
I would recommend doing it by BFS (Breadth-First-Search) rather than a recursive DFS (Depth-First-Search), to avoid possible stack overflows.
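For illustration, here is a minimal BFS flood fill in Python that returns the largest 4-connected component of a binary image (a sketch; in practice, library routines such as scipy.ndimage.label or OpenCV's connectedComponents do this for you):

from collections import deque

def largest_component(binary):
    # 'binary' is a 2-D list of 0/1 values; returns the pixel set of the
    # largest 4-connected component of 1s. Each pixel is enqueued at most
    # once, so the whole scan runs in linear time.
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    best = set()
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 1 or seen[sy][sx]:
                continue
            # BFS flood fill from (sy, sx)
            comp, queue = set(), deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                comp.add((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    return best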

How to find the difference between one frame to another for a specified part of an image only

I'm using OpenCV for my project on video feature tracking. I need to create a mask which is the difference between one frame and the next. I know we can use the cv::absdiff function for this. The problem is that I want the mask to contain the difference only for a specified small part of the frame, i.e. not the entire image. I'm not sure how to go about doing this.
// define regions of interest in images
Rect roiRect1(x1, y1, roiWidth, roiHeight);
Rect roiRect2(x2, y2, roiWidth, roiHeight);
// create images of regions of interest. no copy is performed, this is just new pointers to existing data
Mat roiImage1(frame1, roiRect1);
Mat roiImage2(frame2, roiRect2);
// do whatever you want with those images
......
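For comparison, here is the same idea in Python with cv2, where NumPy slicing gives the ROI views (the file name and coordinates are illustrative):

import cv2

cap = cv2.VideoCapture('video.mp4')        # illustrative input file
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()

# Region of interest (illustrative coordinates)
x, y, roi_w, roi_h = 100, 50, 64, 64
roi1 = frame1[y:y + roi_h, x:x + roi_w]    # NumPy slices are views, no copy
roi2 = frame2[y:y + roi_h, x:x + roi_w]

mask = cv2.absdiff(roi1, roi2)             # difference for the ROI only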

Manipulating a subsection of an image in MATLAB

I have a task where I need to track a series of objects across several frames, and compose the background from the image. The issue arises because one of the objects does not move until near the end, so I'm forced to take a shoddy average of the image. However, if I can blur out the objects, I think I'll be able to improve the background average.
I can identify a subsection of the image where the object is, an m-by-m array. I just need the ability to blur out this section with a filter. However, imfilter takes a full-sized array (image) as its input, so I cannot simply move along this array pixel by pixel in a for loop. And if I extract the subsection into a separate image, I cannot put it back without using another for loop, which would be computationally expensive.
Is there a method of mapping a blur to a subsection of an image using MATLAB? Can this be done without using two for loops?
Try this...
sub_image = original_image(ii:jj, mm:nn);
blurred_sub_image = imfilter(sub_image, h);    % h: your filter kernel, e.g. from fspecial
original_image(ii:jj, mm:nn) = blurred_sub_image;
In short, you don't need to use a for loop to address a subsection of an image. You can do it directly, both for reading and writing.
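For comparison, the same read-modify-write pattern in Python with NumPy/SciPy (the array, bounds, and filter choice are illustrative):

import numpy as np
from scipy import ndimage

img = np.random.rand(480, 640)            # stand-in for the full image
ii, jj, mm, nn = 100, 164, 200, 264       # illustrative subsection bounds

# Slice, filter, and write back; no explicit loops needed
img[ii:jj, mm:nn] = ndimage.gaussian_filter(img[ii:jj, mm:nn], sigma=3)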

What processing steps should I use to clean photos of line drawings?

My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, as with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folks' images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter
im = Image.open(r'c:\temp\temp.png')
# Blur to suppress noise, then take a local maximum as the white estimate
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width,height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        greypix[x,y] = min(255, max(255 * impix[x,y][0] / whitepix[x,y][0],
                                    255 * impix[x,y][1] / whitepix[x,y][1],
                                    255 * impix[x,y][2] / whitepix[x,y][2]))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
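For example, that final step could use PIL's point method (the cutoff of 128 is an illustrative choice):

# Everything above the cutoff becomes white, the rest black
bw = grey.point(lambda v: 255 if v > 128 else 0)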
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting and nikie's method does not; which method you prefer will depend on whether there is information in the poorly lit areas that you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        greypix[x,y] = min(255, max(255 + impix[x,y][0] - whitepix[x,y][0],
                                    255 + impix[x,y][1] - whitepix[x,y][1],
                                    255 + impix[x,y][2] - whitepix[x,y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove varying background illumination is to compute a "white image" from the image with a grayscale morphological closing (a dilation followed by an erosion), which removes the dark lines and leaves an estimate of the background.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (edited: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);                              % grid lines are least visible in blue
mask = fspecial("disk", 10) > 0;                % circular structuring element
closed = imerode(imdilate(blue, mask), mask);   % grayscale closing
Result:
Then subtract the source image from this background estimate:
background_subtracted = closed - blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
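For those following along in Python, here is a rough equivalent of this pipeline with NumPy/SciPy (the file name, radius, and threshold mirror the Octave code above; building the disk by hand is an assumption standing in for fspecial):

import numpy as np
from PIL import Image
from scipy import ndimage

src = np.asarray(Image.open('lines.png').convert('RGB'))
blue = src[:, :, 2].astype(np.int16)

# Circular structuring element of radius 10
yy, xx = np.mgrid[-10:11, -10:11]
disk = (xx ** 2 + yy ** 2) <= 10 ** 2

# Grayscale closing: dilation followed by erosion
white = ndimage.grey_erosion(ndimage.grey_dilation(blue, footprint=disk),
                             footprint=disk)

# Fixed threshold on the background-subtracted image
binary = (white - blue) < 35
Image.fromarray((binary * 255).astype(np.uint8)).save('binary.png')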
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the result using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you photograph it (or just use a scanner), because then you don't have to worry as much about global vs. local thresholding.
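A sketch of this edge-detection pipeline with OpenCV in Python (the file name and the 3x3 kernel are illustrative choices):

import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag = cv2.magnitude(gx, gy)
mag = np.uint8(np.clip(mag * 255.0 / mag.max(), 0, 255))

# Otsu threshold, then close small gaps in the detected lines
_, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)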
