I'm planning to process quite a large number of images and would like to average every 5 consecutive images. My images are saved in the .dm4 file format.
Essentially, I want to produce a single averaged image output for each 5 images that I can save. So for instance, if I had 400 images, I would like to get 80 averaged images that would represent the 400 images.
I'm aware that there's the Running Z Projector plugin but it does a running average and doesn't give me the reduced number of images I'm looking for. Is this something that has already been done before?
Thanks for the help!
It looks like the Image>Stacks>Tools>Grouped Z_Projector does exactly what you want.
I found it by opening the command finder ('L') and filtering on "project".
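Conceptually, this grouped averaging is just a reshape followed by a mean over each group. A minimal numpy sketch of the same idea outside ImageJ (the image count and dimensions here are made up for illustration):

```python
import numpy as np

# 400 hypothetical images of size 64x64, stacked along the first axis
stack = np.random.rand(400, 64, 64)

group = 5
# reshape to (80, 5, 64, 64), then average each group of 5 consecutive images
averaged = stack.reshape(-1, group, *stack.shape[1:]).mean(axis=1)

print(averaged.shape)  # → (80, 64, 64)
```

This assumes the image count is an exact multiple of the group size, as in the 400 → 80 example from the question.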
Suppose I have created an image, unknown.tiff, from a page of a PDF named doc.pdf. The size of this image is ~1 MB.
The exact conversion command isn't known, but it is known that the settings differ mainly in depth and density. (A subset of these two would do, too.)
Now, the normal command pattern is:
convert -density 300 PDF.pdf[page-number] -depth 8 image.tiff
But this gives me a file of ~17 MB, which obviously isn't the one I am looking for. If I remove -depth, I get a file of ~34 MB, and when I remove both, I get a blurred image of 2 MB. Removing only -density doesn't match either (~37 MB).
Since the output size of unknown.tiff is so low, I've hypothesized that it might take less time to produce.
Since the conversion time is of great concern to me, I want to know how I can work out the exact command that produced unknown.tiff.
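Since only density and depth are suspected to vary, one brute-force approach is to render the page with each combination and compare the output size to the ~1 MB target. A sketch of that search (the candidate values, file names, and page number are guesses; the actual conversion calls are left commented out since they require ImageMagick to be installed):

```python
import itertools

densities = [72, 96, 100, 150, 200, 300]
depths = [None, 1, 4, 8]            # None means "no -depth option"
target_size = 1_000_000             # unknown.tiff is ~1 MB

def build_cmd(density, depth, page=0):
    """Build a candidate ImageMagick command line for one parameter combo."""
    cmd = ["convert", "-density", str(density), f"doc.pdf[{page}]"]
    if depth is not None:
        cmd += ["-depth", str(depth)]
    cmd.append("candidate.tiff")
    return cmd

candidates = list(itertools.product(densities, depths))

# To actually search, run each command and keep the closest size:
# import os, subprocess
# for density, depth in candidates:
#     subprocess.run(build_cmd(density, depth), check=True)
#     size = os.path.getsize("candidate.tiff")
#     print(density, depth, size, abs(size - target_size))
```

Matching the file size this way narrows the candidates quickly, and the lowest density that reproduces the size will usually also be the fastest to convert.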
I want to use NiftyNet to implement deep learning for medical image processing. However, there is one thing I haven't figured out regarding the data input: how does it join the multi-modality images? I saw the BRATS2017 demo; they seem to use 4 different modalities, and in the configuration file they just include the directory of the images and claim it will "concatenate" the images. But I want to know more: as those images are 3D, how are they concatenated? [slice1-30]:[slice1-30].. or [slice1, slice1, slice1 ...]:[slice2, slice2, slice2...]?
And can we control the data organization part? If so, which file should I modify?
Any suggestion would be greatly appreciated!
In this case, the 3D images are concatenated in an additional dimension. You control the order they're concatenated in by specifying the order of files to load in the *.ini files.
However, as long as you're consistent, it shouldn't matter what order the modalities go in.
The images are concatenated in the channel dimension. For 2D images, the dimensions are NSSC: batch size, 2 spatial dimensions, then channel. For 3D images, the dimensions are NSSSC: batch size, 3 spatial dimensions, then channel.
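As an illustration of that NSSSC layout, this is what the concatenation looks like in numpy (the volume size here is the usual BRATS dimensions, used only as an example):

```python
import numpy as np

# four hypothetical co-registered 3D modality volumes (e.g. T1, T1c, T2, FLAIR)
t1, t1c, t2, flair = (np.zeros((240, 240, 155)) for _ in range(4))

# whole volumes are concatenated along a new trailing channel axis --
# they are not interleaved slice by slice
x = np.stack([t1, t1c, t2, flair], axis=-1)   # SSSC: (240, 240, 155, 4)
batch = x[np.newaxis]                          # NSSSC: (1, 240, 240, 155, 4)
```

So each voxel keeps one value per modality in the channel dimension, which is why the order only needs to be consistent across subjects.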
I know this has been posted elsewhere and that this is by no means a difficult problem, but I'm very new to writing macros in FIJI and am having a hard time even understanding the solutions described in various online resources.
I have a series of images, all in the same folder, and want to apply the same operations to them all and save the resulting Excel files and images in an output folder. Specifically, I'd like to open each image, smooth it, do a max-intensity Z projection, then threshold the images to the same relative value.
This thresholding is the one step causing a problem. By relative value I mean that I would like to set the threshold so that the same percentage of the intensity histogram is included. Currently, in FIJI, if you go to Image>Adjust>Threshold you can move the sliders so that a certain percentage of the image is thresholded, and it will display that value for you in the open window. In my case 98% is what I am trying to achieve, i.e. thresholding all but the top 2% of the data.
Once the threshold is applied to the MIP, I convert it to binary, do particle analysis, and save the results (summary table, results, image overlay).
My approach has been to try to automate all the steps and do batch processing, but I have been having a hard time adapting what I have written based on instructions found online. Instead, I've been opening every image in the directory one by one, applying the macro I wrote, then saving the results manually. Obviously this is a tedious approach, so any help would be much appreciated!
What I have been using for my simple macro:
// Smooth every slice of the stack
run("Smooth", "stack");
// Maximum-intensity projection
run("Z Project...", "projection=[Max Intensity]");
// Default auto-threshold (not yet the 98% relative threshold I want)
setAutoThreshold("Default");
//run("Threshold...");
// Binarize the thresholded projection
run("Convert to Mask");
run("Make Binary");
// Count particles and collect the summary and results tables
run("Analyze Particles...", " show=[Overlay Masks] display exclude clear include summarize in_situ");
You can use the Process ▶ Batch ▶ Macro... command for this.
For further details, see the Batch Processing page of the ImageJ wiki.
Apologies for tagging this just ImageJ - it's a problem regarding MicroManager, a microscopy plugin for it, and I thought this would be the best place.
I'd recently taken images for an important experiment using MicroManager (a recent version, though I cannot recall the exact number). The IT services at my institution have recently been having some networking problems, and my saved preferences for the software had been erased. I'd got halfway through my experiment when I realised that I'd saved my images as separate image files (three greyscale TIFFs plus metadata text files) instead of OME-TIFF image stacks.
All of my ImageJ macros for image processing rely on having a multiple channel image stack, so this is a bit of a problem. Is there any easy way in MicroManager (or ImageJ) to bulk convert these single channel greyscale images into the OME-TIFF image stack after the images have already been taken?
Cheers.
You can start with a macro like this one:
// Convert your images to a stack
run("Images to Stack", "name=Stack title=[] use");
// The stack will default the images to time points. Convert to channels
run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
// Export as OME-TIFF
run("Bio-Formats Exporter");
This is designed to reconstruct one dataset at a time (open 3 images, run the macro and export the OME-TIFF).
If you don't want any dialogs to show you can pass an output directory to the Bio-Formats exporter:
run("Bio-Formats Exporter", "save=/path/to/image.ome.tif export compression=Uncompressed");
For the output file name, you can get the original image name in the macro with getTitle().
There is also a template example on iterating over all the files in a directory, if you want to completely automate the macro. However this may take some tweaking since you want to operate on your images 3 at a time.
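The "3 at a time" part is just chunking the sorted file list into groups; a sketch of that grouping logic (the file names are made up):

```python
def chunks(files, n=3):
    """Yield successive groups of n files; assumes len(files) % n == 0."""
    for i in range(0, len(files), n):
        yield files[i:i + n]

files = sorted(["a_ch1.tif", "a_ch2.tif", "a_ch3.tif",
                "b_ch1.tif", "b_ch2.tif", "b_ch3.tif"])
groups = list(chunks(files, 3))
# each group holds the three channels of one dataset, ready to be
# opened together and exported as a single OME-TIFF
```

Sorting first matters: it keeps the three channels of each dataset adjacent, provided the file names share a common prefix per dataset.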
Hope that helps!
I have a custom made web server running that I use for scanning documents. To activate the scanner and load the image on screen, I have a scan button that links to a page with the following image tag:
<img src="http://myserver/archive/location/name.jpg?scan" />
When the server receives the request for a ?scan file, it streams the output of the following command and writes it to disk at the requested location.
scanimage --resolution 150 --mode Color | convert - jpg:-
This works well, and I am happy with this simple setup. The problem is that convert (ImageMagick) buffers the output of scanimage and spits out the JPEG image only when the scan is complete. As a result, the webpage loads for a long time with the risk of timeouts. It also keeps me from seeing the image as it is scanned, which should otherwise be possible, because this is exactly how baseline-encoded JPEG images show up on slow connections.
My question is: is it possible to do jpeg encoding without buffering the image, or is the operation inherently global? If it is possible, what tools could I use? One thought I had is separately encoding strips of eight lines, but I do not know how to put these chunks together. If it is not possible, is there another compression format that does allow this sort of pipeline encoding? My only restriction is that the format should be supported by the mainstream browsers.
Thanks!
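For what it's worth, the "strips of eight lines" idea from the question can be sketched as follows. Here `encode_strip` is a hypothetical stand-in for a real JPEG encoder (8 is the baseline-JPEG MCU height with 4:4:4 subsampling):

```python
def strips(rows, mcu_height=8):
    """Group scan lines into MCU-aligned strips as they arrive."""
    return [rows[i:i + mcu_height] for i in range(0, len(rows), mcu_height)]

def encode_strip(strip):
    # hypothetical: hand each strip to a real encoder here;
    # this placeholder just packs the raw pixel bytes
    return b"".join(bytes(row) for row in strip)

rows = [[0] * 4 for _ in range(20)]    # 20 fake scan lines, 4 pixels wide
print([len(s) for s in strips(rows)])  # → [8, 8, 4]
```

The open problem remains joining the independently encoded strips into one valid JPEG stream, which is what the question is really asking.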
One option is to subdivide the image with a space-filling curve. An SFC recursively subdivides the surface into smaller tiles and, because of its fractal dimension, reduces the 2D problem to a 1D one. Once you have subdivided the image, you can use this curve to scan it continuously. Alternatively, you could use a BFS and some sort of low-frequency-detail filter to continuously scan higher resolutions of your image. You may want to look for Nick's spatial index hilbert curve quadtree blog, but I don't think you can put the tiles together in JPEG format (cat?). Or you could continuously reduce the resolution:
scanimage --resolution [1-150] --mode Color | convert - jpg:-
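If you do go the space-filling-curve route, the standard iterative mapping from a 1D Hilbert index d to 2D coordinates (the usual d2xy construction) looks like this; how to tile and re-encode on top of it is left open, as noted above:

```python
def d2xy(n, d):
    """Map index d (0 <= d < n*n) on an n-by-n Hilbert curve to (x, y).

    n must be a power of two.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

print([d2xy(2, d) for d in range(4)])  # → [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Walking d from 0 to n*n - 1 visits every pixel tile exactly once, with consecutive indices always landing on adjacent tiles, which is what makes the 1D ordering locality-preserving.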