NiftyNet data organization - niftynet

I want to use NiftyNet to apply deep learning to medical image processing. However, there is one thing I haven't figured out regarding the data input: how does it join multi-modality images? In the BRATS2017 demo they seem to use 4 different modalities, and in the configuration file they just include the directory of the images and claim it will "concatenate" the images. But I want to know more: as those images are 3D, how are they concatenated? [slice1-30]:[slice1-30].. or [slice1, slice1, slice1 ...]:[slice2, slice2, slice2...]?
And can we control the data organization part? If so, which file should I modify?
Any suggestion would be greatly appreciated!

In this case, the 3D images are concatenated in an additional dimension. You control the order they're concatenated in by specifying the order of files to load in the *.ini files.
However, as long as you're consistent, it shouldn't matter what order the modalities go in.

The images are concatenated in the channel dimension. For 2D images, the dimensions are NSSC: batch size, 2 spatial dimensions, then channel. For 3D images, the dimensions are NSSSC: batch size, 3 spatial dimensions, then channel.
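To make that layout concrete, here is a minimal numpy sketch (not NiftyNet code) of how four co-registered BRATS-style modality volumes end up in the NSSSC arrangement; the volume shape is an illustrative assumption:

import numpy as np

# Four co-registered 3D modality volumes (e.g. T1, T1c, T2, FLAIR),
# all with the same spatial shape; random data as a stand-in.
t1, t1c, t2, flair = (np.random.rand(240, 240, 155) for _ in range(4))

# Modalities are stacked along a new trailing channel axis: nothing is
# sliced or interleaved, every voxel simply gets one value per modality.
volume = np.stack([t1, t1c, t2, flair], axis=-1)   # (240, 240, 155, 4)

# Adding a batch dimension gives the NSSSC layout described above.
batch = volume[np.newaxis, ...]                    # (1, 240, 240, 155, 4)
print(batch.shape)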

Related

Store GIS Quadtree in Raw File (w/ Geohash) or PNG?

I am collecting GIS data consisting of four normalized values for the whole world. I am curious about the best way to store this data and wanted to ask your advice. Would it be more efficient (in terms of size) to store the four values of the quadtree along with a Geohash index via a Z-order (Morton) or Hilbert curve? Or would it be more efficient to store it in a PNG file, using alpha = 0 for empty spaces and lossless compression? The enclosed image 1 only visualizes one of the four values over Google Maps, and I need to store this global data each day. Please note that I will only store leaf nodes, as visualized in image 1, rather than the whole quadtree. I will also store this over time, so I would also like to hear your ideas about how much video compression would help.
Thank you all in advance for your time and consideration!
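For reference, the Z-order (Morton) index mentioned in the question just interleaves the bits of the two cell coordinates, so spatially close quadtree leaves sort close together in a flat file. A minimal Python sketch (16-bit quantized column/row coordinates are an assumption):

def morton_encode(x, y, bits=16):
    # Interleave the bits of x and y (Z-order / Morton code).
    # x and y are quantized cell coordinates, e.g. a quadtree leaf's
    # column and row at a fixed depth.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # even bit positions <- x
        code |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions  <- y
    return code

print(morton_encode(3, 5))   # 39, i.e. 0b100111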

Content-based image retrieval features

I'm trying to implement content-based image retrieval in my application. I found the LIRE library, which looks pretty good.
I need to analyze my image collection for similar (from a human point of view) images. My catalog contains a large number of completely different, uncategorized/unstructured images.
To analyze images, LIRE provides the following list of algorithms:
CEDD,
AutoColorCorrelogram,
BinaryPatternsPyramid,
ColorLayout,
EdgeHistogram,
FCTH,
FuzzyColorHistogram,
Gabor,
JCD,
JointHistogram,
JpegCoefficientHistogram,
LocalBinaryPatterns,
LuminanceLayout,
OpponentHistogram,
PHOG,
RankAndOpponent,
RotationInvariantLocalBinaryPatterns,
ScalableColor,
SimpleCentrist,
SimpleColorHistogram,
SPACC,
SpatialPyramidCentrist,
SPCEDD,
SPFCTH,
SPJCD,
SPLBP,
Tamura
Based on your experience, could you please recommend which of them is most suitable (from a human point of view) for this kind of image set (a mix of uncategorized images) for finding similar images?
I think JCD is the best one because it combines two approaches at the same time, and each approach combines two features (color & texture).
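LIRE takes care of indexing and search, but the workflow behind any of these descriptors is the same: extract one global feature vector per image, then rank candidates by their distance to the query's feature. A rough Python/OpenCV sketch of that workflow, using a plain HSV color histogram as a stand-in descriptor (this is not LIRE's JCD, and the file names are hypothetical):

import cv2
import numpy as np

def color_feature(path):
    # Stand-in global descriptor: a normalized 2D hue/saturation histogram.
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten().astype(np.float32)

def rank_similar(query_path, candidate_paths):
    # Return (distance, path) pairs sorted from most to least similar.
    q = color_feature(query_path)
    scored = [(cv2.compareHist(q, color_feature(p), cv2.HISTCMP_CHISQR), p)
              for p in candidate_paths]
    return sorted(scored)

# for dist, path in rank_similar("query.jpg", ["a.jpg", "b.jpg"]):
#     print(dist, path)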

Caffe mean file creation without database

I run Caffe using an image_data_layer and don't want to create an LMDB or LevelDB for the data, but the compute_image_mean tool only works with LMDB/LevelDB databases.
Is there a simple solution for creating a mean file from a list of files (in the same format that image_data_layer uses)?
You may notice that recent models (e.g., googlenet) do not use a mean file the same size as the input image, but rather a 3-vector representing a mean value per image channel. These values are quite "immune" to the specific dataset used (as long as it is large enough and contains "natural images").
So, as long as you are working with natural images, you may use the same values as, e.g., GoogLeNet uses: B=104, G=117, R=123.
The simplest solution is to create an LMDB or LevelDB database of the image set.
The complicated solution is to write a tool similar to compute_image_mean, one that takes image inputs, does the transformations, and finds the mean.
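A rough Python sketch of that second route (this is not an official Caffe tool): it reads the same "path label" list file that image_data_layer consumes and accumulates a per-channel mean; the list file name and the resize dimensions are assumptions:

import numpy as np
from PIL import Image

def per_channel_mean(list_file, size=(256, 256)):
    # Each line of list_file is expected to look like: path/to/image.jpg 0
    # Images are resized so every file contributes the same number of pixels.
    total = np.zeros(3, dtype=np.float64)
    n = 0
    with open(list_file) as f:
        for line in f:
            path = line.split()[0]
            img = Image.open(path).convert("RGB").resize(size)
            total += np.asarray(img, dtype=np.float64).reshape(-1, 3).mean(axis=0)
            n += 1
    r, g, b = total / n
    # Caffe loads images as BGR, so report the means in that order.
    return b, g, r

# print(per_channel_mean("train.txt"))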

"Separate image files" and "Image stack" in MicroManager plugin - easy way to convert between the two?

Apologies for tagging this just ImageJ - it's a problem regarding MicroManager, a microscopy plugin for it, and I thought this would be the best fit.
I'd recently taken images for an important experiment using MicroManager (a recent version, though I cannot recall the exact number). The IT services at my institution have recently been having some networking problems, and my saved preferences for the software had been erased. I'd got halfway through my experiment when I realised that I'd saved my images as separate image files (three greyscale TIFFs plus metadata text files) instead of OME-TIFF image stacks.
All of my ImageJ macros for image processing rely on having a multiple channel image stack, so this is a bit of a problem. Is there any easy way in MicroManager (or ImageJ) to bulk convert these single channel greyscale images into the OME-TIFF image stack after the images have already been taken?
Cheers.
You can start with a macro like this one:
// Convert your images to a stack
run("Images to Stack", "name=Stack title=[] use");
// The stack will default the images to time points. Convert to channels
run("Stack to Hyperstack...", "order=xyczt(default) channels=3 slices=1 frames=1 display=Color");
// Export as OME-TIFF
run("Bio-Formats Exporter");
This is designed to reconstruct one dataset at a time (open 3 images, run the macro and export the OME-TIFF).
If you don't want any dialogs to show you can pass an output directory to the Bio-Formats exporter:
run("Bio-Formats Exporter", "save=/path/to/image.ome.tif export compression=Uncompressed");
For the output file name, you can get the original image name in the macro with getTitle().
There is also a template example on iterating over all the files in a directory, if you want to completely automate the macro. However, this may take some tweaking since you want to operate on your images three at a time.
Hope that helps!
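If a scripted route outside ImageJ is also acceptable, here is a rough Python sketch of the same "three files at a time" idea using the tifffile package. The folder names, the assumption that the three channels of each position sort next to each other by file name, and the use of a recent tifffile release that supports ome=True are all assumptions about your setup:

import numpy as np
import tifffile
from pathlib import Path

in_dir = Path("acquired")   # folder holding the single-channel TIFFs (assumption)
out_dir = Path("stacks")
out_dir.mkdir(exist_ok=True)

# Take the sorted TIFFs three at a time and write each group as one
# multi-channel OME-TIFF; a trailing incomplete group is skipped.
files = sorted(in_dir.glob("*.tif"))
for i in range(0, len(files) - 2, 3):
    group = files[i:i + 3]
    stack = np.stack([tifffile.imread(p) for p in group])    # (3, Y, X)
    out = out_dir / (group[0].stem + "_stack.ome.tif")
    tifffile.imwrite(out, stack, ome=True, metadata={"axes": "CYX"})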

Image Averaging and Saving output

I'm planning to process quite a large number of images and would like to average every 5 consecutive images. My images are saved as .dm4 file format.
Essentially, I want to produce a single averaged image output for each 5 images that I can save. So for instance, if I had 400 images, I would like to get 80 averaged images that would represent the 400 images.
I'm aware that there's the Running Z Projector plugin but it does a running average and doesn't give me the reduced number of images I'm looking for. Is this something that has already been done before?
Thanks for the help!
It looks like Image>Stacks>Tools>Grouped Z_Projector does exactly what you want.
I found it by opening the command finder ('L') and filtering on "project".
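The plugin does the work for you, but the underlying operation is just a block average along the stack axis. A small numpy sketch of the idea with dummy data (actually reading .dm4 files would need a separate reader such as hyperspy, which is an assumption here):

import numpy as np

def grouped_average(stack, group_size=5):
    # Average every group_size consecutive frames of an (N, Y, X) stack;
    # trailing frames that don't fill a complete group are dropped.
    n = (stack.shape[0] // group_size) * group_size
    grouped = stack[:n].reshape(-1, group_size, *stack.shape[1:])
    return grouped.mean(axis=1)

frames = np.random.rand(400, 64, 64)          # 400 dummy frames
print(grouped_average(frames).shape)          # (80, 64, 64)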
