Setting threshold and batch processing in an ImageJ (Fiji) macro

I know this has been posted elsewhere and that this is by no means a difficult problem, but I'm very new to writing macros in Fiji and am having a hard time even understanding the solutions described in various online resources.
I have a series of images, all in the same folder, and want to apply the same operations to them all and save the resulting Excel files and images in an output folder. Specifically, I'd like to open each image, smooth it, do a max intensity Z projection, then threshold the images to the same relative value.
This thresholding is the one step causing a problem. By relative value I mean that I would like to set the threshold so that the same percentage of the intensity histogram is included. Currently, in Fiji, if you go to Image ▶ Adjust ▶ Threshold you can move the sliders so that a certain percentage of the image is thresholded, and the window displays that value for you. In my case 98% is what I am trying to achieve, e.g. thresholding all but the top 2% of the data.
Once the threshold is applied to the MIP, I convert it to binary, do particle analysis and save the results (summary table, results, image overlay).
My approach has been to try to automate all the steps and do batch processing, but I have been having a hard time adapting what I have written to work based on instructions found online. Instead I've just been opening every image in the directory one by one, applying the macro that I wrote, then saving the results manually. Obviously this is a tedious approach, so any help would be much appreciated!
What I have been using for my simple macro:
run("Smooth", "stack");
run("Z Project...", "projection=[Max Intensity]");
setAutoThreshold("Default");
//run("Threshold...");
run("Convert to Mask");
run("Make Binary");
run("Analyze Particles...", " show=[Overlay Masks] display exclude clear include summarize in_situ");

You can use the Process ▶ Batch ▶ Macro... command for this.
For further details, see the Batch Processing page of the ImageJ wiki.
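For completeness, here is a rough sketch of how the folder loop and a percentile-based threshold could be combined in one macro. It is an illustration rather than a tested solution: the folder prompts, the 8-bit conversion of the projection, the 0.98 fraction and the output file names are all assumptions to adjust for your data.
input = getDirectory("Choose input folder");
output = getDirectory("Choose output folder");
list = getFileList(input);
setBatchMode(true);
for (i = 0; i < list.length; i++) {
    open(input + list[i]);
    title = File.nameWithoutExtension;
    run("Smooth", "stack");
    run("Z Project...", "projection=[Max Intensity]");
    run("8-bit"); // keeps the histogram handling below simple
    // find the intensity value below which 98% of the pixels fall
    getHistogram(values, counts, 256);
    total = 0;
    for (j = 0; j < counts.length; j++) total += counts[j];
    cum = 0;
    t = 255;
    for (j = 0; j < counts.length; j++) {
        cum += counts[j];
        if (cum >= 0.98 * total) { t = values[j]; break; }
    }
    // select the brightest ~2%; use setThreshold(0, t) instead if you
    // really do want the lower 98% included
    setThreshold(t, 255);
    setOption("BlackBackground", false);
    run("Convert to Mask");
    run("Analyze Particles...", "show=[Overlay Masks] display exclude clear include summarize in_situ");
    saveAs("Tiff", output + title + "_MIP_mask.tif");
    saveAs("Results", output + title + "_Results.csv");
    close("*");
}
selectWindow("Summary");
saveAs("Text", output + "Summary.txt");
setBatchMode(false);
Whether you keep the top 2% or the lower 98% depends on which way you actually drag the sliders in the Threshold dialog, so check the commented alternative before trusting the particle counts.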

Related

Image preprocessing methods for identifying an industrial part number (on an attached label or engraved) on the surface?

I am working on a project where my task is to identify a machine part by its part number, which is either written on a label attached to it or engraved on its surface. One example of a label and one of an engraved part are shown in the figures below.
My task is to recognise the 9- or 10-character alphanumeric number (03C 997 032 D in the 1st image and 357 955 531 in the 2nd image). This seems like an easy task; however, I am having trouble distinguishing the useful information in the image from the rest of the part, i.e. there are many other numbers and characters in both images and I want to focus only on the numbers mentioned. I have tried many things but with no success so far. Does anyone know of image preprocessing methods or an ML/DL model that I should apply to get the desired result?
Thanks in advance!
JD
You can use OCR to get all the characters from the image and then use regular expressions to extract the desired patterns.
You can use an OCR engine, like Tesseract.
You may want to clean the images before running the text-recognition system, by performing some filtering to remove noise and extra information, such as:
Convert to grayscale (colours are not relevant, are they?)
Crop to the region of interest
Canny filter
A good starting point could be one of these tutorials:
OpenCV OCR with Tesseract (Python API)
Recognizing text/number with OpenCV (C++ API)

Maxima plots auto save

I am using Maxima and I have a lot of resulting plots that I want to save to disk for other uses (making GIFs, etc.).
Is there any code that can autosave the plots instead of having to save it manually one by one?
Thank you in advance.
Well, one approach is to specify a file name in the arguments of plot2d. Then the plot is output directly to the file and it doesn't show up in the GUI. E.g.,
plot2d (sin(x), [x, 0, 10], [png_file, "mysinplot.png"]);
plot2d recognizes png_file, pdf_file, ps_file and svg_file. In each case, ? png_file, etc., will show some info about that option.
Note that there isn't any file output flag for GIF output. The closest thing is PNG which is similar to GIF.
I think draw also recognizes different file formats but I don't know about that without searching the documentation.
If you are generating a lot of plots, it might be convenient to automatically generate file names via sconcat, e.g. sconcat("myplot", i, ".png") produces "myplot10.png" when i is equal to 10.
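To illustrate, a small loop along those lines (the file names and the range of i here are made up) could be:
for i : 1 thru 5 do
    plot2d (sin(i*x), [x, 0, 10],
            [png_file, sconcat("myplot", i, ".png")]);
Each iteration writes its plot straight to myplot1.png, myplot2.png, and so on, without opening a GUI window.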

"Separate image files" and "Image stack" in MicroManager plugin - easy way to convert between the two?

Apologies for tagging this just ImageJ - it's a problem regarding MicroManager, a microscopy plugin for it, and I thought this would be best.
I'd recently taken images for an important experiment using MicroManager (a recent version, though I cannot recall the exact number). The IT services at my institution have recently been having some networking problems, and my saved preferences for the software had been erased. I'd got halfway through my experiment when I realised that I'd saved my images as separate image files (three greyscale TIFFs plus metadata text files) instead of OME-TIFF image stacks.
All of my ImageJ macros for image processing rely on having a multiple channel image stack, so this is a bit of a problem. Is there any easy way in MicroManager (or ImageJ) to bulk convert these single channel greyscale images into the OME-TIFF image stack after the images have already been taken?
Cheers.
You can start with a macro like this one:
// Convert your images to a stack
run("Images to Stack", "name=Stack title=[] use");
// The stack will default the images to time points. Convert to channels
run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
// Export as OME-TIFF
run("Bio-Formats Exporter");
This is designed to reconstruct one dataset at a time (open 3 images, run the macro and export the OME-TIFF).
If you don't want any dialogs to show you can pass an output directory to the Bio-Formats exporter:
run("Bio-Formats Exporter", "save=/path/to/image.ome.tif export compression=Uncompressed");
For the output file name you can get the original image name in the macro with getTitle().
There is also a template example on iterating over all the files in a directory, if you want to completely automate the macro. However, this may take some tweaking since you want to operate on your images 3 at a time.
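As a very rough starting point (my own sketch, not the template itself), something like this could work if the three channel files for each position sort next to each other alphabetically and the folder contains only the TIFFs (move the metadata text files elsewhere or filter them out first); the grouping by threes and the output naming are assumptions you will need to adapt to your file names:
input = getDirectory("Choose the folder with the single-channel TIFFs");
output = getDirectory("Choose an output folder");
list = getFileList(input);
for (i = 0; i < list.length; i += 3) {
    // open one triplet of channel images
    open(input + list[i]);
    open(input + list[i + 1]);
    open(input + list[i + 2]);
    name = File.nameWithoutExtension; // name of the last channel opened
    run("Images to Stack", "name=Stack title=[] use");
    run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
    run("Bio-Formats Exporter", "save=[" + output + name + ".ome.tif] export compression=Uncompressed");
    close("*");
}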
Hope that helps!

Difficulty counting cells due to clustering and pixel value cut off

EDIT:
I have continued working on my problem, among other things, and have made significant progress. Using one of Dr. Ashby's macros provided on the ImageJ wiki, plus some of my own makeshift code, I can now do batch processing of images taken of Hoechst, Calcein AM, and Ethidium Homodimer stains and get decent recognition of objects. Reducing the exposure time and the levels of stain used (specifically Calcein AM) has helped with the pixel-value cut-offs I was dealing with earlier. The macro still has problems with distinguishing clumped cells from one another, though. To address this issue I want to implement a command in my macro that divides clusters of cells that it identifies as one cell, based on the average size of our cells. The only problem is that in all my reading I haven't seen anything that mentions this. Does anyone have any thoughts on how I could implement this? I have copied the macro below.
//get appropriate directories from user
dir1 = getDirectory("Choose Source Directory ");
dir2 = getDirectory("Choose Destination directory");
list = getFileList(dir1);
//give user an opportunity to adjust default parameters to better fit their application
Dialog.create("Adjust for objective magnification");
Dialog.addNumber("Objective Magnification (use 10 if unknown)", 10);
Dialog.addMessage("\tIf needed particle size limits can be adjusted below \nLeave mag. at 10 if customizing particle size limits\n");
Dialog.addNumber("Minimum particle size (pixels^2)",420);
Dialog.addNumber("Maximum particle size (pixels^2)",1600);
Dialog.addMessage("\tIn the following dialogs select \n first the Source Directory, \nthen a Destinaion directory for Results");
Dialog.show();
//Assigning the entered values to variables
magnification=Dialog.getNumber();
userMin=Dialog.getNumber();
userMax=Dialog.getNumber();
sMin=magnification*magnification/100*userMin;
sMax=magnification*magnification/100*userMax;
setBatchMode(true);
for (i=0; i<list.length; i++){
//print (list[i]);
open(dir1+list[i]);
name=File.nameWithoutExtension;
//Prepare the image by removing any scale and making 8-bit
run("Set Scale...", "distance=0 known=0 pixel=1 unit=pixel");
run("8-bit");
saveAs("Tiff", dir2+i+" Original "+name);//Saving with this naming scheme is required for the MeLast macro to function
//run("Brightness/Contrast...");
setMinAndMax(50, 255);
setOption("BlackBackground", false);
run("Make Binary", "method=Yen background=Light calculate black");
run("Watershed", "stack");
//Analyze particles
run("Analyze Particles...", "size="+sMin+"-"+sMax+" circularity=0.50-1.00 show=[Count Masks] display exclude include summarize");
//Save the masks file
saveAs("Tiff", dir2+i+" CountMask "+name);//Saving with this naming scheme is required for the MeLast macro to function
close();
//Save the thresholded image
saveAs("Tiff", dir2+i+" Thresholded "+name);//Saving with this naming scheme is required for the MeLast macro to function
}
//Save the results
selectWindow("Results");
saveAs("Results", dir2+"ZZ Results.xls");
//Save the summary
selectWindow("Summary");
saveAs("Text", dir2+"Z Summary.txt");
You need to find those clusters and analyze each one to guess how many cells might belong to it, using the spatial information of the cells and other information specific to your problem domain. I believe that's a fairly common image analysis task.
As for the cut-off pixel values, I guess you can treat the cut-off pixels as censored data. But I am not sure how meaningful that would be for 8-bit images.
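One very crude way to encode the "divide by average cell size" idea described in the question (this is my own sketch, not part of Dr. Ashby's macro): after Analyze Particles has filled the Results table, re-count each particle as its area divided by a typical single-cell area. The typicalCellArea value below is a placeholder you would measure for your own cells, and Area must be among the enabled measurements.
typicalCellArea = 800; // pixels^2; hypothetical value, measure your own cells
totalCells = 0;
for (r = 0; r < nResults; r++) {
    a = getResult("Area", r);
    n = round(a / typicalCellArea);
    if (n < 1) n = 1;            // every detected particle counts as at least one cell
    setResult("EstCells", r, n); // write the estimate back into the Results table
    totalCells = totalCells + n;
}
updateResults();
print("Estimated cell count: " + totalCells);
For this to see the clumps at all, the maximum particle size passed to Analyze Particles would also have to be raised, since the current sMax filter excludes large clusters.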
There is another free, open-source program called CellProfiler (http://www.cellprofiler.org) that has some more specialized methods for separating cells -- more advanced than the standard watershed. See, for example, part of the manual here: http://www.cellprofiler.org/CPmanual/IdentifyPrimaryObjects.html.
Perhaps CellProfiler can do the job, or point you to the right algorithms to bring into the ImageJ macro.

pipeline image compression

I have a custom made web server running that I use for scanning documents. To activate the scanner and load the image on screen, I have a scan button that links to a page with the following image tag:
<img src="http://myserver/archive/location/name.jpg?scan" />
When the server receives the request for a ?scan file it streams the output of the following command, and writes it to disk on the requested location.
scanimage --resolution 150 --mode Color | convert - jpg:-
This works well and I am happy with this simple setup. The problem is that convert (ImageMagick) buffers the output of scanimage and spits out the JPEG image only when the scan is complete. The result is that the webpage loads for a long time, with the risk of timeouts. It also keeps me from seeing the image as it is being scanned, which should otherwise be possible because that is exactly how baseline-encoded JPEG images show up on slow connections.
My question is: is it possible to do jpeg encoding without buffering the image, or is the operation inherently global? If it is possible, what tools could I use? One thought I had is separately encoding strips of eight lines, but I do not know how to put these chunks together. If it is not possible, is there another compression format that does allow this sort of pipeline encoding? My only restriction is that the format should be supported by the mainstream browsers.
Thanks!
You could subdivide the image with a space-filling curve. An SFC recursively subdivides the surface into smaller tiles and, because of its fractal dimension, reduces the 2D complexity to a 1D complexity. Once you have subdivided the image, you can use this curve to scan it continuously. Alternatively, you could use a BFS and some sort of low-frequency-detail filter to continuously scan higher resolutions of your image. You might want to look for Nick's spatial index Hilbert curve quadtree blog, but I don't think you can simply put the tiles together in the JPEG format (cat?). Or you could continuously reduce the resolution?
scanimage --resolution [1-150] --mode Color | convert - jpg:-
