Mathematical Operations on an Image Stack in ImageJ (Fiji)

I am writing an ImageJ/Fiji plugin in Jython, using the PyDev plugin in Eclipse. The plugin will be the ImageJ version of an existing denoising software called CANDLE, written as a MATLAB program. Changing the value of every pixel (voxel) of an image in MATLAB is trivial:
InputImage = 2 * sqrt(InputImage + (3/8));
Median3DFilteredImage = 2 * sqrt(Median3DFilteredImage + (3/8));
Here "InputImage" and "Median3DFilteredImage" are 3D Matrices, with the last dimension being time (slices). To reproduced the following operation on an ImageJ image, I had to employ two for loops, one to iterate through the image slices (3rd dimension) and the other loop to iterate over all the pixels in a particular slice:
from ij import ImagePlus, ImageStack
from java.lang import Math as javaMath

medFiltStack = medianFilteredImage.getStack()
newMedFiltStack = ImageStack(medianFilteredImage.width, medianFilteredImage.height)
InputStack = InputImage.getStack()
newInputStack = ImageStack(InputImage.width, InputImage.height)
for i in xrange(1, medianFilteredImage.getNSlices() + 1):
    ip = medFiltStack.getProcessor(i).convertToFloat()
    ip2 = InputStack.getProcessor(i).convertToFloat()
    pixels = ip.getPixels()
    pixels2 = ip2.getPixels()
    for j in xrange(len(pixels)):
        pixels[j] = 2 * javaMath.sqrt(pixels[j] + (3.0 / 8.0))
        pixels2[j] = 2 * javaMath.sqrt(pixels2[j] + (3.0 / 8.0))
    newMedFiltStack.addSlice(ip)
    newInputStack.addSlice(ip2)
medianFilteredImage = ImagePlus("MedianFiltered-Image", newMedFiltStack)
InputImage = ImagePlus("Input-Image", newInputStack)
My question is as follows: is there a way to perform mathematical operations on an image stack, i.e. on every pixel (voxel) in the stack, without having to write code that explicitly visits every pixel in every slice, i.e. for loops? It seems a very primitive way of going about it, and I wonder whether there isn't a better approach. I also had to work with copies and then give the new images the same names as before, rather than working with the original images and editing them directly. So, is there a way to edit the pixel values of the original images rather than copies? Any help would be appreciated, as there are plenty more math operations I have to perform. It would be super useful to find a way to do mathematical operations on images that is optimal both in the amount of code and, if possible, in speed.

In pure ImageJ 1.x, the answer is: no, there's no other way than to visit every slice and get its ImageProcessor. That's how ImageJ1 deals with its limited number of dimensions (z, time, channel): you always have a (hyper-)stack of 2D planes.
There is, however, a more powerful way of dealing with n-dimensional images called ImgLib, which is included in Fiji together with ImageJ2.
To avoid re-inventing the wheel, you should have a look at Jean-Yves Tinevez's great plugin Image Expression Parser. Use it headlessly with Fiji, or just have a look at its source code (it uses a previous version, ImgLib1, but the idea is the same: you avoid hard-coding the dimensions by using Java generics); see e.g. the sqrt function:
public final <R extends RealType<R>> float evaluate(final R alpha) {
    return (float) Math.sqrt(alpha.getRealDouble());
}
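That said, even in plain ImageJ1 you can at least drop the inner pixel loop: ImageProcessor has in-place math methods such as add(double), sqrt() and multiply(double), so the transform above reduces to a few calls per slice. A minimal Jython sketch (it converts to 32-bit first, so the in-place sqrt is not clipped):

from ij import IJ

imp = IJ.getImage()                # or any ImagePlus you already hold
IJ.run(imp, "32-bit", "")          # ensure float pixels before sqrt
stack = imp.getStack()
for i in xrange(1, imp.getNSlices() + 1):
    ip = stack.getProcessor(i)     # processor of the original slice, no copy
    ip.add(3.0 / 8.0)              # x + 3/8, in place
    ip.sqrt()                      # sqrt(x + 3/8), in place
    ip.multiply(2.0)               # 2 * sqrt(x + 3/8)
imp.updateAndDraw()

Because the processors come straight from the original stack, this also edits the image in place instead of building renamed copies.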


Difficulty counting cells due to clustering and pixel value cut off

EDIT:
I have continued working on my problem, among other things, and have made significant progress. Using a macro by Dr. Ashby provided on the ImageJ wiki, plus some makeshift code of my own, I can now batch-process images of Hoechst, calcein AM, and ethidium homodimer stains and get decent recognition of objects. Reducing exposure time and the levels of stain used (specifically calcein AM) has helped with the pixel-value cutoffs I was dealing with earlier. The macro still has problems distinguishing clumped cells from one another, though. To address this, I want to add a step to my macro that divides clusters of cells it identifies as one cell, based on the average size of our cells. The only problem is that in all my reading I haven't seen anything that mentions this. Does anyone have any thoughts on how I could implement this? I have copied the macro below.
//get appropriate directories from user
dir1 = getDirectory("Choose Source Directory ");
dir2 = getDirectory("Choose Destination directory");
list = getFileList(dir1);
//give user an opportunity to adjust default parameters to better fit their application
Dialog.create("Adjust for objective magnification");
Dialog.addNumber("Objective Magnification (use 10 if unknown)", 10);
Dialog.addMessage("\tIf needed particle size limits can be adjusted below \nLeave mag. at 10 if customizing particle size limits\n");
Dialog.addNumber("Minimum particle size (pixels^2)", 420);
Dialog.addNumber("Maximum particle size (pixels^2)", 1600);
Dialog.addMessage("\tIn the following dialogs select \nfirst the Source Directory, \nthen a Destination directory for Results");
Dialog.show();
//Assign the entered values to variables
magnification = Dialog.getNumber();
userMin = Dialog.getNumber();
userMax = Dialog.getNumber();
sMin = magnification*magnification/100*userMin;
sMax = magnification*magnification/100*userMax;
setBatchMode(true);
for (i = 0; i < list.length; i++) {
    //print(list[i]);
    open(dir1 + list[i]);
    name = File.nameWithoutExtension;
    //Prepare the image by removing any scale and making it 8-bit
    run("Set Scale...", "distance=0 known=0 pixel=1 unit=pixel");
    run("8-bit");
    saveAs("Tiff", dir2 + i + " Original " + name); //Saving with this naming scheme is required for the MeLast macro to function
    //run("Brightness/Contrast...");
    setMinAndMax(50, 255);
    setOption("BlackBackground", false);
    run("Make Binary", "method=Yen background=Light calculate black");
    run("Watershed", "stack");
    //Analyze particles
    run("Analyze Particles...", "size=" + sMin + "-" + sMax + " circularity=0.50-1.00 show=[Count Masks] display exclude include summarize");
    //Save the masks file
    saveAs("Tiff", dir2 + i + " CountMask " + name); //Saving with this naming scheme is required for the MeLast macro to function
    close();
    //Save the thresholded image
    saveAs("Tiff", dir2 + i + " Thresholded " + name); //Saving with this naming scheme is required for the MeLast macro to function
}
//Save the results
selectWindow("Results");
saveAs("Results", dir2 + "ZZ Results.xls");
//Save the summary
selectWindow("Summary");
saveAs("Text", dir2 + "Z Summary.txt");
You need to find those clusters and analyze each one to guess how many cells might belong to it, using the spatial information of the cells and other domain-specific knowledge. I believe that's a common image-analysis task.
As for cut-off pixel values, you could treat the cut-off pixels as censored data, although I am not sure how meaningful that would be for 8-bit images.
There is another free, open-source program called CellProfiler (http://www.cellprofiler.org) that has some more specialized methods for separating cells -- more advanced than the standard watershed. See, for example, part of the manual here: http://www.cellprofiler.org/CPmanual/IdentifyPrimaryObjects.html.
Perhaps CellProfiler can do the job, or point you to the right algorithms to bring into the ImageJ macro.
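One simple way to implement the size-based splitting you describe is not to split the clump geometry at all, but to estimate how many cells each detected particle contains by dividing its measured area by your average single-cell area. A rough Python sketch of the idea (the area values and the mean cell size below are hypothetical; in the macro you would read the areas from the Results table instead):

def estimate_cell_counts(particle_areas, mean_cell_area):
    """Estimate how many cells each detected particle contains,
    assuming a clump's area is roughly a multiple of one cell's area."""
    return [max(1, int(round(area / mean_cell_area)))
            for area in particle_areas]

# hypothetical areas from Analyze Particles, in pixels^2
areas = [450, 980, 1520, 430]
print(estimate_cell_counts(areas, mean_cell_area=470))  # -> [1, 2, 3, 1]

Summing these corrected counts instead of the raw particle count compensates for clumps without needing a smarter segmentation.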

DirectX 11, Combining pixel shaders to prevent bottlenecks

I'm trying to implement one complex algorithm on the GPU. The only problem is hardware limitations: the maximum available feature level is 9_3.
The algorithm is basically a "stereo matching"-like algorithm for two images. Because of the mentioned limitations, all calculations have to be performed in vertex/pixel shaders only (there is no compute API available). Vertex shaders are rather useless here, so I treat them as pass-through.
Let me shortly describe the algorithm:
Take two images and calculate cost volume maps (basically converting RGB to grayscale, translating the right image by D and subtracting it from the left image). This step is repeated around 20 times for different D, which generates a Texture3D. (A CPU reference sketch of this math follows the list below.)
Problem here: I cannot simply create one pixel shader which calculates those 20 repetitions in one go, because of the pixel shader size limit (max. 512 arithmetic instructions), so I'm forced to call Draw() in a loop in C++, which unnecessarily involves the CPU while all operations are done on the same two images. It seems to me like I have one bottleneck here. I know there are multiple render targets, but: there are max. 8 targets (I need 20+), and if I try to generate 8 results in one pixel shader I exceed its size limit (512 arithmetic instructions on my hardware).
Then I need to compute, for each of the calculated textures, a box filter with a window radius r > 9.
Another problem here: because the window is so big, I need to split the box filtering into two pixel shaders (vertical and horizontal passes separately), because the loop-unrolling stage produces very long code. Implementing those loops manually won't help, since it would still create too big a pixel shader. So another bottleneck here: the CPU needs to be involved to pass results from a temporary texture (the result of the vertical pass) to the second (horizontal) pass.
Then, in the next step, some arithmetic operations are applied to each pair of results from the 1st and 2nd steps.
I haven't reached this point in my development yet, so I have no idea what kind of bottlenecks are waiting for me here.
Then the minimal D (the value of the parameter from the 1st step) is taken for each pixel, based on the pixel value from step 3.
... same as in step 3.
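To make steps 1 and 2 concrete, the same math on the CPU looks like this in Python/NumPy (illustration only, not the shader implementation; the grayscale weights, disparity count and filter radius are assumptions):

import numpy as np
from scipy.ndimage import uniform_filter1d

def cost_volume(left_rgb, right_rgb, num_disp=20):
    """Step 1: grayscale both images, shift the right one by each
    disparity D and take the absolute difference."""
    weights = np.array([0.299, 0.587, 0.114], np.float32)
    left = left_rgb.astype(np.float32) @ weights
    right = right_rgb.astype(np.float32) @ weights
    volume = np.empty((num_disp,) + left.shape, np.float32)
    for d in range(num_disp):
        volume[d] = np.abs(left - np.roll(right, d, axis=1))
    return volume

def box_filter_separable(img, r=10):
    """Step 2: a large box filter as two 1-D passes (horizontal,
    then vertical), mirroring the two-shader-pass split above."""
    tmp = uniform_filter1d(img, size=2 * r + 1, axis=1)
    return uniform_filter1d(tmp, size=2 * r + 1, axis=0)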
Here, basically, is a very simple graph showing my current implementation (excluding steps 3 and 4). [Figure not reproduced: a pipeline graph in which red dots mark temporary buffers (textures) holding partial results; at every red dot the CPU gets involved.]
Question 1: Isn't it possible somehow to let the GPU know how to perform each branch from top to bottom without involving the CPU and creating a bottleneck? I.e., to program a sequence of graphics pipelines in one go and then let the GPU do its job.
One additional question about the render-to-texture approach: do all textures reside in GPU memory the whole time, even between Draw() calls and pixel/vertex shader switches? Or is there some transfer from GPU to CPU happening? This may be another issue leading to a bottleneck.
Any help would be appreciated!
Thank you in advance.
Best regards,
Lukasz
Writing computational algorithms in pixel shaders can be very difficult. Writing such algorithms for the 9_3 target can be impossible; too many restrictions. But, well, I think I know how to work around your problems.
1. Shader repetition
First of all, it is unclear what you call a "bottleneck" here. Yes, theoretically, draw calls in a for loop are a performance loss. But is it a bottleneck? Does your application really lose performance there? How much? Only profilers (CPU and GPU) can answer that. But to run them, you must first complete your algorithm (stages 3 and 4). So I'd stick with the current solution, implement the whole algorithm, then profile and only then fix performance issues.
But if you feel ready for tweaks... the common "repetition" technique is instancing. You can create one more vertex buffer (called an instance buffer), which contains parameters not for each vertex but for each draw instance. Then you do all the work with one DrawInstanced() call.
For your first stage, the instance buffer can contain your D value and the index of the target Texture3D layer. You can pass them through from the vertex shader.
As always, you have a trade-off here: simplicity of code versus (probably) performance.
2. Multi-pass rendering
CPU needs to be involved to pass results from temp texture (result of V pass) to the second pass (H pass)
Typically, you do chaining like this, so no CPU is involved:
// Pass 1: from texture 0 to texture 1
// ...set up pipeline state for pass 1 here...
pContext->PSSetShaderResources(slot, 1, &pSRV0);   // texture 0 is the source
pContext->OMSetRenderTargets(1, &pRTV1, nullptr);  // texture 1 is the target
pContext->Draw(...);
// Pass 2: from texture 1 to texture 2
// ...set up pipeline state for pass 2 here...
pContext->PSSetShaderResources(slot, 1, &pSRV1);   // previous target is now the source
pContext->OMSetRenderTargets(1, &pRTV2, nullptr);
pContext->Draw(...);
// Pass 3: ...
Here pSRVn and pRTVn are the shader-resource view and render-target view of texture n. Note that the intermediate texture (texture 1) must be created with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET flags. You can have multiple input textures and multiple render targets; just make sure that every pass knows what the previous pass outputs.
And if a previous pass uses more resources than the current one, don't forget to unbind the unneeded ones to prevent hard-to-find errors:
ID3D11ShaderResourceView* nullSRVs[3] = { nullptr, nullptr, nullptr };
pContext->PSSetShaderResources(2, 3, nullSRVs); // clear slots 2, 3 and 4
// Only texture slots 0 and 1 remain bound
3. Resource data location
Do all textures reside in GPU memory the whole time, even between Draw() calls and pixel/vertex shader switches?
We can never know for sure: the driver chooses the appropriate location for resources. But if you create your resources with DEFAULT usage and a CPU access flag of 0, you can be almost sure they will always stay in video memory.
Hope it helps. Happy coding!

DSL for Clojure image synthesis

I'm experimenting with creating a small library/DSL for image synthesis in Clojure. Basically the idea is to allow users of the library to compose sets of mathematical functions to procedurally create interesting images.
The functions need to operate on double values and take the form of converting a location vector into a colour value, e.g. (x,y,z) -> (r,g,b,a).
However I'm facing a few interesting design decisions:
Inputs could have 1, 2, 3 or maybe even 4 dimensions (x, y, z, plus time)
It would be good to provide vector maths operations (dot products, addition, multiplication etc.)
It would be valuable to compose functions with operations such as rotate, scale etc.
For performance reasons, it is important to use primitive double maths throughout (i.e. avoid creating boxed doubles in particular). So a function which needs to return red, green and blue components perhaps needs to become three separate functions which return the primitive red, green and blue values respectively.
Any ideas on how this kind of DSL can reasonably be achieved in Clojure (1.4 beta)?
A look at the awesome ImageMagick tools http://www.imagemagick.org can give you an idea of what kind of operations would be expected from such a library.
Maybe you'll see that you won't need to drop down to vector math if you replicate the default IM toolset.
OK, so I eventually figured out a nice way of doing this.
The trick was to represent functions as a vector of code (in the "code is data" sense), e.g.:
[(Math/sin (* 10 x))
 (Math/cos (* 12 y))
 (Math/cos (+ (* 5 x) (* 8 y)))]
This can then be "compiled" to create 3 objects that implement a Java interface with the following method:
public double calc(double x, double y, double z, double t) {
    .....
}
And these function objects can be called with primitive values to get the red, green and blue colour values for each pixel.
Finally, it's possible to compose the functions using a simple DSL, e.g. to scale up a texture you can do:
(vscale 10 some-function-vector)
I've published all the code on GitHub for anyone interested:
https://github.com/mikera/clisk

Interpolation and Morphing of an image in labview and/or openCV

I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map
proj_x, proj_y <--> cam_x, cam_y for scattered point pairs
My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:
[pgridx, pgridy] = meshgrid(allprojxpts, allprojypts)
fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);
and the reverse for the camera to projector mapping
Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which is apparently too much for LabVIEW to handle).
I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there, or that someone has an easy trick.
So my question is one of the following:
1) How can I interpolate from scattered points to a grid in OpenCV? (I.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?)
2) How can I perform an image transform that maps image 1 to image 2, given a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.
Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation"
So this ends up being a set of specific fixes for bugs in LabVIEW 8.5. Nevertheless, since they're poorly documented and I've spent a day of pain on them, I figure I'll post them so someone else googling this problem will come across them.
1) Meshgrid bombs. I don't know when this was fixed, but it's definitely a bug in 8.5. Solution: use the meshgrid-like function on the Interpolation & Extrapolation palette instead, or upgrade to LV2009, which apparently works (thanks Underflow).
2) Griddata is defective in 8.5. This is badly documented. The 8.6 upgrade notes mention a problem with griddata and the "cubic" setting, but it is in fact also a problem with the DEFAULT LINEAR setting. Solutions, in descending order of kludginess: 1) pass the 'v4' flag, which does some kind of spline interpolation but does not have the bugs; 2) upgrade to at least version 8.6; 3) beat the NI engineers with reeds until they document bugs properly.
3) I was able to use the OpenCV remap function to do the actual transformation from one image to another. I tried just using the built-in MathScript interp2 function, but it choked on large arrays and gave me out-of-memory errors. On the other hand, it is fairly straightforward to map an IMAQ image to an IPL image, so this isn't that bad, apart from adding an outside library.
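For anyone not tied to OpenCV 1.0, the whole scattered-to-grid step is compact in modern Python. A sketch using scipy.interpolate.griddata plus cv2.remap (the point arrays and image size here are made up for illustration):

import numpy as np
import cv2
from scipy.interpolate import griddata

h, w = 480, 640                                # hypothetical projector resolution
proj_pts = np.random.rand(500, 2) * [w, h]     # scattered projector points (fake data)
cam_pts = proj_pts + np.random.randn(500, 2)   # matched camera points (fake data)

# Build a dense projector grid, then interpolate the scattered camera
# coordinates onto it, one map per coordinate
grid_y, grid_x = np.mgrid[0:h, 0:w]
map_x = griddata(proj_pts, cam_pts[:, 0], (grid_x, grid_y), method='linear')
map_y = griddata(proj_pts, cam_pts[:, 1], (grid_x, grid_y), method='linear')

# Warp: for each projector pixel, sample the camera image at the mapped spot
cam_img = np.zeros((h, w), np.uint8)           # placeholder camera image
warped = cv2.remap(cam_img, map_x.astype(np.float32),
                   map_y.astype(np.float32), cv2.INTER_LINEAR)

Note that griddata leaves NaNs outside the convex hull of the scattered points, so on real data those should be masked or filled before calling remap.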

How to read a bitmap in OCaml?

I want to read a bitmap file (from the file system) using OCaml and store the pixels (the colors) inside an array that has the dimensions of the bitmap; each pixel will take one cell in the array.
I found the function Graphics.dump_image : image -> color array array, but it doesn't read from a file.
CamlImages should do it. There is also a Debian package (libcamlimages-ocaml-dev), as well as an installation through GODI, if you use that to manage your OCaml packages.
As a useful example of reading and manipulating images in ocaml, I suggest looking over the code for a seam removal algorithm over at eigenclass.
You can also, as jonathan stated (though not in much detail), call C functions from OCaml, for example from ImageMagick. Although you'd have to do a lot of manipulation of the image data to bring the image into OCaml, you could always write C for all the functions that manipulate the image as an abstract data type. That seems to be the opposite of what you want, though: writing most of the program in C, not OCaml.
I recently wanted to play around with camlimages and had some trouble installing it (I had to modify two of the ml files because of compilation errors, very simple ones though). So here is a quick program, black_and_white.ml, and how to compile it. This should get someone painlessly started with the package (especially dynamic image generation):
let () =
  let width = int_of_string Sys.argv.(1)
  and length = int_of_string Sys.argv.(2)
  and name = Sys.argv.(3)
  and black = { Color.Rgb.r = 0; g = 0; b = 0 }
  and white = { Color.Rgb.r = 255; g = 255; b = 255 } in
  let image = Rgb24.make width length black in
  for i = 0 to width - 1 do
    for j = 0 to (length / 2) - 1 do
      Rgb24.set image i j white;
    done;
  done;
  Png.save name [] (Images.Rgb24 image)
And to compile,
ocamlopt.opt -I /usr/local/lib/ocaml/camlimages/ ci_core.cmxa graphics.cmxa ci_graphics.cmxa ci_png.cmxa black_and_white.ml -o black_and_white
And to run,
./black_and_white 20 20 test1.png
I don't know of an out-of-the-box way to do it. You could open the file with open_in, read it a byte at a time with input_char, parse the header and the data, and build up the color array array that way for simple formats (e.g. BMPs), but for anything like JPGs or PNGs a roll-your-own solution would probably be more work than you want to get into.
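To give a feel for how much parsing the do-it-yourself route involves even for BMP, here is the byte-level logic sketched in Python (illustrative only, for an uncompressed 24-bit BMP; an OCaml version would read the same offsets with input_char/input_binary_int style code):

import struct

def read_bmp_24(path):
    """Parse an uncompressed 24-bit BMP into rows of (r, g, b) tuples."""
    with open(path, 'rb') as f:
        data = f.read()
    if data[:2] != b'BM':
        raise ValueError('not a BMP file')
    pixel_offset = struct.unpack_from('<I', data, 10)[0]  # start of pixel data
    width, height = struct.unpack_from('<ii', data, 18)   # from the DIB header
    if struct.unpack_from('<H', data, 28)[0] != 24:
        raise ValueError('only 24-bit BMPs handled here')
    row_size = (width * 3 + 3) & ~3                       # rows padded to 4 bytes
    rows = []
    for y in range(height):
        off = pixel_offset + (height - 1 - y) * row_size  # stored bottom-up
        rows.append([tuple(data[off + 3*x : off + 3*x + 3][::-1])  # BGR -> RGB
                     for x in range(width)])
    return rows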
You could also use one of the numerous SDL bindings for OCaml, specifically the SDL_image ones, which let you load all kinds of images easily and provide functions to access individual pixels and raw data as an array.
OCamlSDL is a popular one.
If you don't want to use CamlImages, raw RGB or PNM/PPM images (which have an easy-to-create header followed by RGB values) are commonly used. ImageMagick then lets you view these formats or convert them into more usable ones.
