Image Stack Color Analysis - imagej

I have a few images of the same rock with each of them revealing the presence of a certain element in a different color. So one of them will show bright green when Silicon is present, another may show dark red where Calcium is present, etc.
Is there any way to overlay these and be able to tell what is the concentration of all the minerals at a given point on the rock?

Is there any way to overlay these
If you have seven or fewer images, you can open the images and make them into a single image stack using Image > Stacks > Images to Stack. Or open them using File > Import > Image Sequence if they are numbered. Or if they are all RGB, you can blend them using the Image Calculator (Process > Image Calculator). If you do end up with a single stack, you can use ImageJ's Composite image feature (Image > Color > Make Composite) to overlay the colors.
and be able to tell what is the concentration of all the minerals at a given point on the rock?
That part is harder. You may face some of the same challenges of colocalization analysis. Qualitative analysis is too subjective, and human eyes are too easily fooled. See Why scatter plots instead of colour merge images on the Fiji wiki for more information.

You want the RGB composer plugin. It will take grayscale images and turn them into the R, G, and B channels of a color image. (Note you must downconvert to 8 bit.) It has great settings for adjusting the color intensities to make features pop.
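If you would rather script the channel merge than use the plugin, a minimal numpy/Pillow sketch (the file names are placeholders for your element maps) could look like this:

# Merge three 8-bit grayscale element maps into the R, G and B channels
# of one color image (file names below are placeholders).
import numpy as np
from PIL import Image

si = np.array(Image.open("silicon_map.png").convert("L"))   # -> red channel
ca = np.array(Image.open("calcium_map.png").convert("L"))   # -> green channel
fe = np.array(Image.open("iron_map.png").convert("L"))      # -> blue channel

composite = np.dstack([si, ca, fe]).astype(np.uint8)
Image.fromarray(composite, mode="RGB").save("composite.png")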
If you want to do quantitative ratios, you have to make sure that your element maps are quantitative atomic weight maps rather than simple count or intensity maps. If you have made sure that your maps are quantitative, then this becomes a powerful way to rapidly find the phases you are interested in. For example, when looking for olivine, I will often produce an Mg, Fe, and Si map. Then I use the Image Expression Parser command to make an image from (Mg+Fe)/Si. Pixels that contain olivine will have values pretty close to 2.
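If you prefer to compute the ratio image outside ImageJ, a minimal numpy sketch of the same idea (the array names and the tolerance are assumptions) would be:

# Build an (Mg+Fe)/Si ratio image from quantitative element maps and
# flag pixels whose ratio is close to 2 (candidate olivine).
import numpy as np

def olivine_mask(mg, fe, si, tol=0.2):
    mg, fe, si = (a.astype(np.float64) for a in (mg, fe, si))
    ratio = np.divide(mg + fe, si, out=np.zeros_like(mg), where=si > 0)
    return np.abs(ratio - 2.0) < tol

# usage: mask = olivine_mask(mg_map, fe_map, si_map)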

Related

Role of color profile in graphics program

I think I understand what color profiles are. I do not understand what the difference is in manipulating a photo, for example in Photoshop, in 16bpp sRGB versus 16bpp Adobe RGB. My monitor can only show me sRGB.
Is there any difference in algorithms?
Maybe there is some preprocessing performed before the program displays the results of my work (for example, AdobeRGB(0.3, 0.25, 0.82) is displayed as sRGB(0.301, 0.253, 0.819) on my monitor)?
Is there any sense in using different color profiles when I am not using the ICC profile of my monitor/printer?
In general, what should I do if I wanted to develop my own graphics-manipulating application that supports profiles other than sRGB (for example in Qt)?
The color space your image uses determines how your 16 bits per pixel should relate to the output produced by your monitor, i.e., it determines what colors the numbers actually represent.
This can make a difference in how some algorithms behave if they are supposed to produce realistic, natural-looking, or consistent results.
Let's say you composite a semi-transparent yellow on top of a dark red background. What kind of brown do you get? If the algorithm always mixes the pixel data the same way, then even when the yellow and red look the same on your monitor, the brown you get might differ depending on your color space.
A more 'correct' way to do mixing would be to transform your pixel data into a consistent color space, mix, and then transform back. If the original colors look the same on two monitors with different calibrated profiles, then they will transform into the same numbers in a consistent color space, and the mix result will transform back into results that look the same on both monitors even though the pixel values might be different.
Natural-looking compositing with semi-transparency is a good example of an algorithm that has to take your color space into account in order to produce realistic results. Other effects that have to look 'natural', like specular highlights, shadows, etc., similarly need to do physically accurate math in a consistent color space.
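As a concrete illustration, here is a small Python sketch that blends two sRGB colors once naively (on the encoded values) and once in linear light; the gamma constants are the standard sRGB ones, and the colors are just example data:

# Alpha-blend two sRGB colors naively vs. in linear light.
def srgb_to_linear(c):          # c in 0..1
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend(a, b, alpha, linear=True):
    if not linear:                                   # naive: mix the encoded values
        return tuple(alpha * x + (1 - alpha) * y for x, y in zip(a, b))
    la = [srgb_to_linear(x) for x in a]
    lb = [srgb_to_linear(x) for x in b]
    mixed = [alpha * x + (1 - alpha) * y for x, y in zip(la, lb)]
    return tuple(linear_to_srgb(x) for x in mixed)

yellow, dark_red = (1.0, 1.0, 0.0), (0.5, 0.0, 0.0)
print(blend(yellow, dark_red, 0.5, linear=False))    # naive mix
print(blend(yellow, dark_red, 0.5, linear=True))     # linear-light mix gives a different brown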
To answer your specific questions:
Yes, as explained, many algorithms should perform different calculations with different color spaces.
Yes, there is. The image's color space defines what the data means in terms of physical light. If you display it with an ICC calibrated profile, it is transformed into the numbers that your monitor needs to accurately display your image.
It should make very little difference which color space you use for your image, except that some display software won't take it into account. Making sRGB images is better for cross-system compatibility, but I think Adobe RGB has a bigger gamut and can actually represent some green colors that sRGB can't. You should use printer and monitor calibration so that you can SEE what your image really looks like.
I think I answered that above.
There are no differences in the algorithms themselves, because you operate in an RGB color space and not in the XYZ color space. As you said, monitors show colors differently; the red primary on one monitor may not exactly match the red primary on another. In order to define different RGB color spaces in a common manner, the CIE 1931 XYZ color space is used as the reference. Every monitor or system converts RGB colors to XYZ according to the profiles in use, for example: RGB (1,0,0) = XYZ (0.4358, 0.2224, 0.0139) in sRGB and XYZ (0.7977, 0.2880, 0.0000) in ProPhoto RGB.
For further information see:
http://ninedegreesbelow.com/photography/xyz-rgb.html
http://www.ryanjuckett.com/programming/rgb-color-space-conversion/
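To make the relationship concrete, here is a small numpy sketch using the commonly quoted D65 sRGB-to-XYZ matrix (the primary values quoted above appear to be the D50-adapted ICC variants, so they differ slightly from the D65 numbers used here):

# Convert a linear-light sRGB color to CIE XYZ (D65 reference white).
import numpy as np

SRGB_TO_XYZ_D65 = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_linear_to_xyz(rgb):
    return SRGB_TO_XYZ_D65 @ np.asarray(rgb, dtype=float)

print(srgb_linear_to_xyz([1.0, 0.0, 0.0]))  # XYZ of the sRGB red primary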
Gamut mapping explained by analogy
If you change color spaces, you may lose some information because the mapping from one to the other may not be injective (one-to-one). You may choose among different rendering intents to pick the mapping that throws away only the information you find least useful.
This analogy might illustrate the consequences of converting an image to a smaller color space when the original space is larger than the one of your device: You can very well represent a 3D object in the computer, but you will never actually see it, because your screen is flat and thus able to display only 2D images. You can view projections of the object, you can view cuts through the object, but you need a 3D printer to get something really 3D out of it.
Even if you have no 3D printer, it is worth representing the object in 3D and not as a fixed 2D projection. Otherwise, you would not be able to make all those 2D cuts and projections, and even if you bought a 3D printer in the future, you could not print the object anymore.
The 3D object is a picture in the larger space, a fixed 2D projection is a picture in a smaller space, the screen is a device with the smaller color space, and the 3D printer is a device with the larger color space. End of analogy.
ICC workflow
If you take a photo, your camera should assign a profile to it, describing the device color space of the camera. The profile defines the mapping of the numbers inside the picture (coordinates in the device color space) to real-world colors (coordinates in an absolute color space). Therefore, without a profile, the numbers really have no meaning, and anyone is free to make up any mapping they like.
If you shoot RAW, you do the color space conversion when developing the photo; if you shoot JPEG, the camera performs this task for you.
In the opposite direction, when displaying or printing: if the display device is not calibrated and has no profile, the real-world colors stored in the image might not match what actually comes out of the device. The mapping between the image color space and the output device space is then somewhat arbitrary and cannot guarantee that the colors will be preserved.
Actual answers
The difference in manipulating the photo in sRGB and Adobe RGB is that Adobe RGB is larger and thus preserves more information for further processing.
The difference in algorithms has already been explained by Matt Timmermans in another answer. Regarding color blending, you might want to know more about perceptually uniform color spaces (see e.g. a closed Q & A on SO).
Yes, conversion from Adobe RGB to sRGB is not the identity and thus requires some processing. Where exactly this processing is done (device driver, OS kernel, image processing software) depends on the source and target, the OS, and their settings. If you convert the spaces in Photoshop, it does the computation itself. Windows has a built-in color management module that takes care of converting an image with a profile to the device color space of the output device.
The image you want to display/print might be stored in some rather exotic color space. If the OS guesses it is in sRGB (Windows would), it might give odd results. It is better to provide as much information as possible to the color management system. Even uncalibrated devices might be assigned generic profiles, and some guesswork might take place. And maybe you'll calibrate and characterize your device someday, or you'll send the image to someone with such a device.
Qt itself does not support color management. However, KDE, which is built atop Qt, supports some color management via Oyranos.
When should we expect complete color management for KDE?
If we are talking about color management in Qt, not anytime soon. If we are talking about decent color management implemented in the compositor (KWin), sooner than not anytime soon. It also depends on how quickly the graphics applications adapt to these new color management things.
You could use Oyranos or another color management system directly in your application. Google told me about a thesis about getting color management to Qt, too.
Related reading
Generalities about colors # color-management-guide.com
ICC FAQ
Windows 7: Change color management settings
Windows Vista: Color management settings FAQ
Introduction to Color Management in Microsoft Windows Operating Systems
Windows Color System # MSDN

Image Segmentation for Color Analysis in OpenCV

I am working on a project that requires me to:
Look at images that contain relatively well-defined objects (example image not reproduced here), and pick out the color of the n most prominent objects (n is generic; it could be 1, 2, 3, etc.) in some color space (RGB, HSV, whatever) and return it.
I am looking into ways to segment images like this into the independent objects. Once that's done, I'm under the impression that it won't be particularly difficult to find the contours of the segments and analyze them for average or centroid color, etc...
I looked briefly into the Watershed algorithm, which seems like it could work, but I was unsure of how to generate the marker image for an indeterminate number of blobs.
What's the best way to segment such an image, and if it's using Watershed, what's the best way to generate the corresponding marker image of integers?
Check out this possible approach:
Efficient Graph-Based Image Segmentation
Pedro F. Felzenszwalb and Daniel P. Huttenlocher
Here's what it looks like on your image (result image not reproduced here).
I'm not an expert but I really don't see how the Watershed algorithm can be very useful to your segmentation problem.
From my limited experience/exposure to this kind of problem, I would think that the way to go would be to try a sliding-window approach to segmentation. Basically, this entails walking the image using a window of a set size and attempting to determine whether the window encompasses background or an object. You will want to try different window sizes and steps.
Doing this should allow you to detect the objects in the image, presuming that the images contain relatively well-defined objects. You might also attempt to perform segmentation after converting the image to black and white with a certain threshold that gives good separation of background vs. objects.
Once you've identified the object(s) via the sliding window you can attempt to determine the most prominent color using one of the methods you mentioned.
UPDATE
Based on your comment, here's another potential approach that might work for you:
If you believe the objects will have mostly uniform color you might attempt to process the image to:
remove noise;
map the original image to a reduced color space (e.g. 256 or even 16 colors)
detect connected components based on pixel color and determine which ones are large enough
You might also benefit from re-sampling the image to a lower resolution (e.g. if the image is 1024 x 768 you might reduce it to 256 x 192) to help speed up the algorithm.
The only thing left to do would be to determine which component is the background. This is where it might make sense to also attempt to do the background removal by converting to black/white with a certain threshold.
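A rough OpenCV sketch of the steps above (downscale, denoise, quantize the colors, then keep only the large connected components); all parameter values are guesses to tune:

# Downscale, denoise, quantize colors, then keep the large connected components.
import cv2
import numpy as np

def large_color_blobs(bgr, levels=16, min_area=500):
    small = cv2.resize(bgr, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)
    small = cv2.medianBlur(small, 5)                       # remove noise
    step = 256 // levels
    quant = (small // step) * step + step // 2             # crude color-space reduction
    blobs = []                                             # (color, area) of large components
    for color in np.unique(quant.reshape(-1, 3), axis=0):
        color = tuple(int(c) for c in color)
        mask = cv2.inRange(quant, color, color)            # pixels of exactly this color
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                              # label 0 is the mask's background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                blobs.append((color, int(stats[i, cv2.CC_STAT_AREA])))
    return blobs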

Balancing contrast and brightness between stitched images

I'm working on an image stitching project, and I understand there are different approaches to dealing with the contrast and brightness of an image. I could of course deal with this issue before I even stitched the images, yet the result is not as consistent as I would hope. So my question is whether it's possible by any chance to "balance" or rather "equalize" the contrast and brightness of color pictures after the stitching has taken place?
You want to determine the histogram equalization function not from the entire images, but on the zone where they will touch or overlap. You obviously want to have identical histograms in the overlap area, so this is where you calculate the functions. You then apply the equalization functions that accomplish this on the entire images. If you have more than two stitches, you still want to have global equalization beforehand, and then use a weighted application of the overlap-equalizing functions that decreases the impact as you move away from the stitched edge.
Apologies if this is all obvious to you already, but your general question leads me to a general answer.
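To make that concrete, here is a rough numpy sketch for two 8-bit grayscale images whose overlap region is known (the overlap arrays and variable names are assumptions): it builds a CDF-matching lookup table from the overlap histograms and applies it to the whole second image.

# Match image B's histogram to image A's, using only the overlap region,
# then apply the resulting lookup table to the whole of B.
import numpy as np

def match_overlap(img_a, img_b, overlap_a, overlap_b):
    # overlap_a / overlap_b: the pixels of A and B that cover the same scene area
    hist_a, _ = np.histogram(overlap_a, bins=256, range=(0, 256))
    hist_b, _ = np.histogram(overlap_b, bins=256, range=(0, 256))
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()
    # for each grey level in B, find the level in A with the closest CDF value
    lut = np.searchsorted(cdf_a, cdf_b).clip(0, 255).astype(np.uint8)
    return lut[img_b]            # B remapped so its overlap matches A's overlap

For color images you would apply this per channel (or to a luminance channel), and with more than two tiles you would blend the correction towards the identity as you move away from the seam, as described above.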
You may want to have a look at the Exposure Compensator class provided by OpenCV.
Exposure compensation is done in 3 steps:
Create your exposure compensator
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
You input all of your images along with the top left corners of each of them. You can leave the masks completely white by default unless you want to specify certain parts of the image to work on.
compensator->feed(corners, images, masks);
Now that it has all the information about how the images overlap, you can compensate each image individually:
compensator->apply(image_index, corners[image_index], image, mask);
The compensated image will be stored in image

Unevenly illuminated images

How do I get rid of uneven illumination in images that contain text data, usually printed but possibly handwritten? The images can have some bright spots because light was reflected when the picture was taken.
I've seen the Halcon program's segment_characters function, which does this job perfectly, but it is not open source.
I want to convert the image to one that has constant background illumination and darker text regions, so that binarization will be easy and noise-free.
The text is assumed to be darker than its background.
Any ideas?
Strictly speaking, assuming you have access to the image's pixels (you can search online for how to accomplish this in your programming language, as the topic is abundantly covered), the exercise involves going over the pixels once to determine a "darkness threshold". To do this, you convert each pixel from RGB to HSL to get the lightness component of each pixel. During this pass you calculate the average lightness for the whole image, which you can use as your "darkness threshold".
Once you have the image's average lightness level, you can go over the image pixels once more: if a pixel's lightness is below the darkness threshold, set its color to full black RGB(0,0,0); otherwise, set it to full white RGB(255,255,255). This will give you a binary image in which the text should be black and the rest white.
Of course, the key is in finding the appropriate darkness threshold, so if the average method doesn't give you good results you may have to come up with a different method for that step. Such a method could involve separating the image into the primary channels Red, Green, and Blue, computing a darkness threshold for each channel separately, and then using the most aggressive of the three.
And lastly, a better approach may be to compute the distribution of lightness levels, as opposed to simply the average, and then keep the range around the maximum. Again, go over each pixel, and if its lightness falls within that band (the background) make it white; otherwise, make it black.
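Here is a minimal numpy sketch of that average-lightness approach (which side of the threshold becomes black is the part you will most likely need to tune):

# Binarize an RGB image using the average HSL lightness as the threshold.
import numpy as np

def binarize_by_lightness(rgb):               # rgb: HxWx3 uint8 array
    rgb = rgb.astype(np.float32)
    lightness = (rgb.max(axis=2) + rgb.min(axis=2)) / 2.0   # HSL "L" component
    threshold = lightness.mean()                            # the "darkness threshold"
    out = np.where(lightness < threshold, 0, 255)           # darker than average -> black text
    return np.repeat(out[:, :, None], 3, axis=2).astype(np.uint8)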
EDIT
For further reading about HSL, I recommend starting with the Wikipedia entry on HSL and HSV color spaces.
Have you tried using morphological techniques? Closing-by-reconstruction (as presented in Gonzalez, Woods, and Eddins) can be used to create a grayscale representation of background illumination levels. You can more or less standardize the effective illumination by:
1) Calculating the mean intensity of all the pixels in the image
2) Using closing-by-reconstruction to estimate background illumination levels
3) Subtracting the output of (2) from the original image
4) Adding the mean intensity from (1) to every pixel in the output of (3).
Basically, what closing-by-reconstruction does is remove all dark image features that are smaller than a certain size, erasing the "foreground" (the text you want to capture) and leaving only the "background" (illumination levels) behind. Subtracting the result from the original image leaves behind only the small-scale deviations (the text). Adding the original average intensity to those deviations simply makes the text readable, so that the resulting picture looks like a light-normalized version of the original image.
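A possible scikit-image sketch of those four steps, assuming scikit-image is available (the structuring-element radius is a guess that depends on your text size):

# Estimate background illumination with closing-by-reconstruction,
# then flatten the image around its mean intensity.
import numpy as np
from skimage.morphology import dilation, disk, reconstruction

def flatten_illumination(gray, radius=15):     # gray: 2-D uint8 array
    gray = gray.astype(np.float64)
    mean_intensity = gray.mean()                          # step 1
    seed = dilation(gray, disk(radius))                   # step 2: closing-by-reconstruction
    background = reconstruction(seed, gray, method='erosion')
    corrected = gray - background + mean_intensity        # steps 3 and 4
    return np.clip(corrected, 0, 255).astype(np.uint8)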
Use local thresholding instead of a global thresholding algorithm.
Divide your (grayscale) image into a grid of smaller images (say 50 x 50 px) and apply the thresholding algorithm to each individual image.
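For instance, a rough OpenCV sketch that runs Otsu's threshold independently on each tile of such a grid (using the 50 px tile size suggested above):

# Apply Otsu's threshold independently to each tile of a grid.
import cv2
import numpy as np

def tiled_otsu(gray, tile=50):                 # gray: 2-D uint8 array
    out = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = gray[y:y + tile, x:x + tile]
            _, out[y:y + tile, x:x + tile] = cv2.threshold(
                block, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # note: tiles that contain no text may come out noisy; blending or
    # interpolating the per-tile thresholds helps with that.
    return out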
If the background features are generally larger than the letters, you can try to estimate and subsequently remove the background.
There are many ways to do that, a very simple one would be to run a median filter on your image. You want the filter window to be large enough that text inside the window rarely makes up more than a third of the pixels, but small enough that there are several windows that fit into the bright spots. This filter should result in an image without text, but with background only. Subtract that from the original, and you should have an image that can be segmented with a global threshold.
Note that if the bright spots are much smaller than the text, you do the inverse: choose the filter window such that it removes only the bright spots.
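A minimal OpenCV version of that median-filter idea (the window size is a guess that you would tune to your text size):

# Estimate the background with a large median filter, subtract it,
# then apply a single global (Otsu) threshold.
import cv2

def remove_background_median(gray, ksize=51):      # ksize must be odd
    background = cv2.medianBlur(gray, ksize)       # text disappears, background remains
    diff = cv2.absdiff(background, gray)           # text = large deviation from background
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary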
The first thing you should try is to change the lighting: use a dome light or some other light that will give you more diffuse and even illumination.
If that's not possible, you can try some of the ideas in this question or this one. You want to implement some type of "adaptive threshold"; this will apply a local threshold to individual parts of the image so that the change in contrast won't be as noticeable.
There is also a simple but effective method explained here. The outline of the algorithm is the following (a code sketch follows the list):
Split the image up into NxN regions or neighbourhoods
Calculate the mean or median pixel value for the neighbourhood
Threshold the region based on the value calculated in 2) or the value from 2) minus C (where C is a chosen constant)
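OpenCV's adaptiveThreshold implements exactly this mean-minus-C scheme, so a sketch can be very short (the block size and C values are just starting points to tune):

# Local mean-minus-C thresholding: each pixel is compared against the mean
# of its NxN neighbourhood minus a constant C.
import cv2

def local_threshold(gray, block_size=51, C=10):    # block_size must be odd
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block_size, C)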
It seems like what you're trying to do is improve local contrast while attenuating larger scale lighting variations. I'll agree with other posters that optimizing the image through better lighting should always be the first move.
After that, here are two tricks.
1) Use the smooth_image() operator to convolve a Gaussian with your original image. Use a relatively large kernel, like 20-50 px. Then subtract this blurred image from your original image. Apply scale and offset within the sub_image() operator, or use equ_histo() to equalize the histogram.
This basically subtracts the low spatial frequency information from the original, leaving the higher frequency information intact.
2) You could try the highpass_image() operator, or one of the Laplacian operators, to extract a gradient image.
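Those are HALCON operator names; a rough OpenCV/numpy equivalent of trick 1 (blur with a large Gaussian, subtract, then rescale) might look like this:

# Trick 1: subtract a heavily blurred copy to suppress low-frequency lighting,
# then stretch the result back to the full 0..255 range.
import cv2
import numpy as np

def highpass(gray, sigma=20):
    blurred = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    detail = gray.astype(np.float32) - blurred             # high-frequency content only
    return cv2.normalize(detail, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)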

How to compensate for uneven illumination in a photograph of a printed page?

I am trying to teach my camera to be a scanner: I take pictures of printed text and then convert them to bitmaps (and then to DjVu and OCR them). I need to compute a threshold deciding which pixels should be white and which black, but I'm stymied by uneven illumination. For example, if the pixels in the center are dark enough, I'm likely to wind up with a bunch of black pixels in the corners.
What I would like to do, under relatively simple assumptions, is compensate for uneven illumination before thresholding. More precisely:
Assume one or two light sources, maybe one with a gradual change in light intensity across the surface (ambient light) and another falling off with the inverse square of distance (direct light).
Assume that the white parts of the paper all have the same reflectivity/albedo/whatever.
Find some algorithm to estimate degree of illumination at each pixel, and from that recover the reflectivity of each pixel.
From a pixel's reflectivity, classify it as white or black.
I have no idea how to write an algorithm to do this. I don't want to fall back on least-squares fitting since I'd somehow like to ignore the dark pixels when estimating illumination. I also don't know if the algorithm will work.
All helpful advice will be upvoted!
EDIT: I've definitely considered chopping the image into pieces that are large enough so they still look like "text on a white background" but small enough so that illumination of a single piece is more or less even. I think if I then interpolate the thresholds so that there's no discontinuity across sub-image boundaries, I will probably get something halfway decent. This is a good suggestion, and I will have to give it a try, but it still leaves me with the problem of where to draw the line between white and black. More thoughts?
EDIT: Here are some screen dumps from GIMP showing different histograms and the "best" threshold value (chosen by hand) for each histogram. In two of the three, a single threshold for the whole image is good enough. In the third, however, the upper left corner really needs a different threshold. (The screenshots are not reproduced here.)
I'm not sure if you still need a solution after all this time, but if you do, here is what worked for us. A few years ago my team and I photographed about 250,000 pages with a camera and converted them to (almost black and white) greyscale images, which we then converted to DjVu (and also made PDFs of).
(See The catalogue and complete collection of photographic facsimiles of the 1144 paper transcripts of the French Institute of Pondicherry.)
We also ran into the problem of uneven illumination. We came up with a simple unsophisticated solution which worked very well in practice. This solution should also work to create black and white images rather than grey scale (as I'll describe).
The camera and lighting setup
a) We taped an empty picture frame to the top of a table to keep our pages in the exact same position.
b) We put a camera on a tripod, also on top of the table, pointing down at the taped picture frame. On a bar about a foot wide, attached to the external flash holder on top of the camera, we mounted two "modelling lights". These can be purchased at any good camera shop; they are designed to provide even illumination. The camera was shaded from the lights by putting a small cardboard box around each modelling light. We photographed in greyscale, which we then processed further. (Our pages were old browned paper with blue ink writing, so your case should be simpler.)
Processing of the images
We used the free software package IrfanView.
This software has a batch mode which can simultaneously do color correction, change the bit depth and crop the images. We would take the photograph of a page and then in interactive mode adjust the brightness, contrast and gamma settings till it was close to black and white. (We used greyscale but by setting the bit depth to 2 you will get black and white when you batch process all the pages.)
After determining the best color correction we then interactively cropped a single image and noted the cropping settings. We then set all these settings in the batch mode window and processed the pages for one book.
Creating DjVu images.
We used the free DjVu Solo 3.1 to create the DjVu images. This has several modes to create the DjVu images. The mode which creates black and white images didn't work well for us for photographs, but the "photo" mode did.
We didn't OCR (since the images were handwritten Sanskrit), but as long as the letters are evenly illuminated, I think your OCR software should ignore big black areas like the gap between a two-page spread. You can always get rid of the black between a two-page spread or at the edges by cropping the pages twice, once for the left-hand pages and once for the right-hand pages; the IrfanView software will allow you to number your pages cleverly so you can then re-merge them in the correct order, i.e. rename your pages something like page-xxxA for left-hand pages and page-xxxB for right-hand pages, and the pages will then sort correctly by name.
If you still need a solution I hope some of the above is useful to you.
I would recommend calibrating the camera, considering that your lighting setup is fixed (that is, the lights do not move between pictures) and your camera is grayscale (not color).
Take a picture of a white sheet of paper which covers the whole workable area of your "scanner". Store this picture; it tells you what white paper looks like for each pixel. Now, when you take a picture of a document to scan, you can reload your "white reference picture" and even out the illumination before performing a threshold.
Let's call the white reference REF, the picture DOC, the even-illumination picture EVEN, and the maximum value of a pixel MAX (for 8-bit imaging, it is 255). For each pixel:
EVEN = DOC * (MAX/REF)
Notes:
Beware of the parentheses: most image processing libraries use the image pixel type for computations on pixel values, and a simple multiplication will overflow your pixels. If necessary, write the loop yourself and use a 32-bit integer for intermediate computations.
The white reference image can be smoothed before being used in the process. Any smoothing or blurring filter will do, and don't hesitate to apply it aggressively.
The MAX value in the formula above represents the target pixel value in the resulting image. Using the maximum pixel value targets a bright white, but you can adjust this value to target a lighter gray.
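Here is a numpy sketch of that flat-field correction, done in floating point so nothing overflows (the array names follow the formula above; loading, saving and the optional smoothing of REF are left out):

# EVEN = DOC * (MAX / REF), computed in floating point to avoid overflow.
import numpy as np

def flat_field(doc, ref, max_value=255):      # doc, ref: 2-D uint8 arrays, same size
    ref = ref.astype(np.float32)
    ref[ref == 0] = 1.0                                    # avoid division by zero
    even = doc.astype(np.float32) * (max_value / ref)
    return np.clip(even, 0, max_value).astype(np.uint8)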
Well, usually the image processing I do is highly time-sensitive, so a complex algorithm like the one you're seeking wouldn't work. But... have you considered chopping the image up into smaller pieces and re-scaling each sub-image? That should make the 'dark' pixels stand out fairly well even in an image with variable lighting conditions. (I am assuming here that you are talking about a standard mostly-white page with dark text.)
It's a cheat, but a lot easier than the 'right' way you're suggesting.
This might be horrendously slow, but what I'd recommend is to break the scanned surface into quarters/sixteenths and re-color them so that the average grayscale level is similar across the page. (This might break if you have pages with large margins, though.)
I assume that you are taking images of (relatively) small black letters on a white background.
One approach could be to "remove" the small black objects while keeping the illumination variations of the background. This gives an estimate of how the image is illuminated, which can be used for normalizing the original image. It is often enough to subtract the original image from the illumination estimate (so the dark text ends up bright) and then do a threshold-based segmentation.
This approach is based on grayscale morphological filters and could be implemented in MATLAB like below:
img = imread('filename.png');                   % grayscale page image
illumination = imclose(img, strel('disk', 10)); % background estimate (the closing removes the dark text)
imgCorrected = illumination - img;              % text becomes bright on a dark background
thresholdValue = graythresh(imgCorrected);      % Otsu threshold level in [0, 1]
bw = imbinarize(imgCorrected, thresholdValue);  % text pixels come out true
For an example with real images take a look at this guide from mathworks. For further reading about the use of morphological image analysis this book by Pierre Soille can be recommended.
Two algorithms come to my mind:
High-pass to alleviate the low-frequency illumination gradient
Local threshold with an appropriate radius
Adaptive thresholding is the keyword. To quote a 2003 article by R. Fisher, S. Perkins, A. Walker, and E. Wolfart: “This more sophisticated version of thresholding can accommodate changing lighting conditions in the image, e.g. those occurring as a result of a strong illumination gradient or shadows.”
ImageMagick's -lat option can do it, for example:
convert -lat 50x50-2000 input.jpg output.jpg
(Before and after example images, input.jpg and output.jpg, are not reproduced here.)
You could try using an edge detection filter, then a floodfill algorithm, to distinguish the background from the foreground. Interpolate the floodfilled region to determine the local illumination; you may also be able to modify the floodfill algorithm to use the local background value to jump across lines and fill boxes and so forth.
You could also try a Threshold Hysteresis with a rate of change control. Here is the link to the normal Threshold Hysteresis. Set the first threshold to a typical white value. Set the second threshold to less than the lowest white value in the corners.
The difference is that you want to check the difference between pixels for all values in between the first and second threshold. Ideally if the difference is positive, then act normally. But if it is negative, you only want to threshold if the difference is small.
This will be able to compensate for lighting variations, but will ignore the large changes between the background and the text.
Why don't you use simple opening and closing operations? Try the following and just look at the results:
src = source image
src - open(src)
close(src) - src
Look at the close(src) - src result in particular. Using different window sizes, you will get the background of the image. I think this helps.
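In OpenCV terms, src - open(src) is the top-hat transform and close(src) - src is the black-hat transform, so a rough sketch is simply this (the kernel size is something to experiment with, as suggested):

# src - open(src) is the top-hat transform (bright details),
# close(src) - src is the black-hat transform (dark details, e.g. text).
import cv2

def hat_transforms(gray, ksize=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    return tophat, blackhat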
