ImageJ - how to ensure two images are directly comparable (what scripting command?)

I need to present some microscopy images side by side and ensure they are directly comparable by eye in a presentation. The pictures were all taken with the same exposure, gain, et cetera, so the underlying pixel values should be comparable.
However, the microscopy software has a nasty habit of saving the files with one of the colour channels saturated (for some reason), so I have to process the images for presentations.
Previously I'd been using a macro which iterates through a folder and calls the scripting command
run("Enhance Contrast", "saturated=0.35");
But on reflection I don't think this is the proper command to call. I don't think it would produce images that are directly comparable, by eye, to each other.
I had thought that the commands
run("Color Balance...");
resetMinAndMax();
would be best, as they should show the full display range. But the display values shown on the histogram vary depending on the image.
Is this appropriate for making directly comparable images, or would a command like
setMinAndMax();
be more appropriate, with 0 as the minimum and an arbitrary figure as the maximum? This is driving me mad, as I keep being asked whether my images are directly comparable, but I simply don't know!

Usually, resetMinAndMax(); is the best way to ensure that your images are displayed consistently.
Note however that it also depends on the bit depth of your images.
8-bit grayscale and RGB color images are displayed with a range of 0-255 after resetMinAndMax().
For 16-bit (short) and 32-bit (float) images, the display range is calculated from the actual minimum and maximum pixel values of the image (see the documentation of the Brightness/Contrast dialog and the resetMinAndMax() macro function).
So for 16-bit and 32-bit images, you can use the Set Display Range dialog (Set button in the B&C window) with one of the default Unsigned 16-bit range options to ensure consistent display, or the macro call:
setMinAndMax(0, 65535);
If you want to use the images in a presentation, copy them using Edit > Copy to System, or convert them to either 8-bit or RGB before saving them and inserting them in a presentation.
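For example, the folder macro from the question could apply the fixed display range instead of Enhance Contrast. The sketch below wraps such a macro in pyimagej, which is an assumption on my part (the macro body can just as well be pasted into ImageJ's macro editor and run there); the folder path is a hypothetical placeholder.

import imagej

# Assumes pyimagej with the legacy ImageJ layer, which macros require.
ij = imagej.init()

macro = """
input = "/path/to/images/";        // hypothetical folder
files = getFileList(input);
for (i = 0; i < files.length; i++) {
    open(input + files[i]);
    setMinAndMax(0, 65535);        // same fixed display range for every image
    run("RGB Color");              // bake the display range in for presentations
    saveAs("PNG", input + "rgb_" + files[i]);
    close();
}
"""
ij.py.run_macro(macro)

Because every image is displayed with the same 0-65535 range before the RGB conversion, equal pixel values come out equally bright in the saved files, which is what "directly comparable by eye" requires.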

Related

Display exr/hdr luminance in a viewer

I want to know whether it's possible to display an exr/hdr image, or just RGB colors with different luminance levels, from a programming language.
I tried SDL, Python, C++, OpenGL and also Qt, but none of them have much documentation about this.
I also looked at io and oiio; all of them apply tone mapping when displaying the image.
What I would like is something like the DisplayHDR Test from VESA, which shows test patterns at different luminance levels: a monitor limited to 100 nits would show no difference between a 100-nit and a 200-nit image, but a monitor capable of more than 200 nits would.
I am not sure, but I think they use video rather than still images.

How to overlay the RTDOSE and Image data in the correct position?

I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two, the dose will not be in the correct position relative to what the Elekta Gamma Plan software exported.
I've played around with dicompyler and 3DSlicer, and they are obviously able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data from these programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or equivalent; my experience is with sitk) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying the 3D position of all pixel data in patient coordinates. This uses a bunch of attributes in the DICOM files in the Image Plane Module set of tags. See here for a good overview.
The SimpleITK library fully understands and uses the full 3D Image Plane tags to identify and locate any images in patient coordinates by default, irrespective of things like the specific pixel spacing, slice thickness, and so on.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note that I use both pydicom and SITK. This isn't something bad about pydicom, but more a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.
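As a concrete illustration of that "out of the box" behaviour, a minimal SimpleITK sketch might look like the following. The paths are hypothetical placeholders, and treat this as a starting point under those assumptions rather than a finished tool.

import SimpleITK as sitk

# Read the CT series; SimpleITK parses the Image Plane Module tags and
# positions the volume in patient coordinates automatically.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_series_dir"))
ct = reader.Execute()

# Read the RTDOSE file; it carries its own origin, spacing and direction.
dose = sitk.ReadImage("rtdose.dcm")

# Stored dose values are usually integers; DoseGridScaling (3004,000E)
# converts them to Gy ("group|element" is SimpleITK's metadata key format).
scaling = float(dose.GetMetaData("3004|000e"))
dose = sitk.Cast(dose, sitk.sitkFloat32) * scaling

# Resample the dose onto the CT grid with an identity transform: both
# volumes already live in the same patient coordinate system.
dose_on_ct = sitk.Resample(dose, ct, sitk.Transform(),
                           sitk.sitkLinear, 0.0, sitk.sitkFloat32)

# sitk.GetArrayFromImage(ct) and sitk.GetArrayFromImage(dose_on_ct) now have
# identical shapes and can be overlaid voxel-for-voxel.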

What software is recommended to automate image annotation?

We make images like the following in Excel. The raw image is imported and positioned in the generally correct area within the annotations, which are themselves images linked to ranges, the contents of which differ depending on selections made by the user.
The absolute position and dimensions of each annotation must be adjusted manually for every image. The number of sample names can vary (up to 12 lanes of samples). The size ladder on the left can also vary depending on the size of the protein being analyzed.
After everything is correctly sized and aligned, the range containing the raw image + annotations is copied and saved as a jpg (which is then imported into an Access database).
Though we've automated some parts of this with VBA, the process of tweaking every image (widths of columns, text size, position of size ladder, etc.) can get very tedious. Surely there is some software out there that will make this process more efficient. It takes one of our staff members hours to adjust and finalize about 10-20 of these images.
Any recommendations are welcomed.
This procedure is called electrophoresis. Samples (in this case proteins) are loaded into a polyacrylamide gel (each sample in its own "lane") and pulled through the gel with electricity. This process separates all of the proteins in each lane by size and charge.
The "ladder" is a solution of various proteins of known size. It's used to determine the sizes of the proteins in the other lanes.
The image on the left contains the range of sizes in the ladder (in this case 10, 15,...150, 200). Each "step" in the ladder image is aligned with the black bands that appear in the ladder lane in the experiment (the actual ladder lane that contains the black bands is not present in this case...it's cropped post-alignment to improve the overall look of the image).
The images on the right are protein names and point to the location on the gel where that particular protein should appear. The protein Actin, for example, is supposed to come out at around 42 kilodaltons. The fact that there is a prominent black band in that location is good supporting evidence that this sample contains Actin protein.
Many gels will also describe the sample source at the top or the bottom. So, for example, if the sample in lane 1 was derived from mouse liver cells, lane 1 would be annotated as "mouse liver."
The raw image is captured in the lab and is saved as a jpg. This jpg is then manually copied directly into an Excel sheet, where extraneous parts of the image are cropped. The cropped image is then moved to within the area of the worksheet that contains the annotations (ladder, protein names, sample names). These annotations are themselves images (linked to other ranges in the workbook that change with every experiment...protein names, sample names, and ladder type can be different for every experiment). These annotation images require fine positioning in each case (as described previously) to align with the lanes and with the protein sizes. Once everything is aligned, it is saved as a jpg and moved into Access.
My question is...Is there software already out there designed specifically for tasks like these? Just as Excel is not a database program, it is also not an image annotation program. I want to know if there is an application out there, ready to go, that is specifically designed to annotate images with elements that can vary from image to image.
Of course, there will still be a need for manually moving elements around the image to get everything aligned (I'm not looking for a miracle here). I'm thinking that there has to be something better than Excel for this.

iOS: Is there a way to alter the color of every pixel on the screen?

How does Apple alter the color of every single pixel on the screen (i.e. grayscale or inversion of colors), regardless of what object the color belongs to? It obviously isn't reading background color properties, since it affects images as well.
How would one approach this?
To clarify my question, how can I change the intensity / hue of every pixel on the screen, similar to how f.lux does it?
How does Apple alter the color of every single pixel on the screen?
Apple probably uses an API called CGSetDisplayTransferByTable which is not publicly available on iOS.
The display transfer table controls how each possible value in each of the three RGB channels is displayed on screen and can convert it to a different value. It works similarly to Photoshop's "Curves" tool. By using the right transfer table it's possible to invert the screen, adjust the hue or enhance contrast.
Since the transfer table is part of the graphics hardware and is always active, there's zero performance overhead involved. On Mac OS there are actually two transfer tables: one for the application and one for the OS.
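As an illustration of the concept (not of the private API itself), here is a small numpy sketch of what such a per-channel table does: every possible 8-bit value is remapped through a 256-entry lookup before it reaches the screen.

import numpy as np

# Identity table: value 0 -> 0, ..., 255 -> 255.
table = np.arange(256, dtype=np.uint8)

# Inversion table, as used for "invert colors": value v -> 255 - v.
invert = 255 - table

# Apply the table to an image (H x W x 3, uint8). Fancy indexing performs
# the per-pixel remap, just as the hardware table does for the whole screen.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
inverted = invert[image]

assert (inverted == 255 - image).all()

A grayscale or hue-shifting effect is just a different choice of table per channel.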
how can I change the intensity / hue of every pixel on the screen
Without jailbreak, you can't.

Multilingual Unicode rendering in opengl

I have to extend an OpenGL rendering system to support international characters (especially Hebrew, Arabic and Cyrillic).
The development platform is Windows (XP|Vista|7), alas using Embarcadero Delphi 2010.
I currently use wglOutLineFont(...) to build my font's display list and glCallLists(length(m_Text), UNSIGNED_SHORT, PWchar(m_Text) ) to render my strings.
While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range to U+0020-U+077F (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would only be a solution for my current needs, and would become insufficient once other encodings are needed.
On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already.
I would expect this to be a well-known problem, so I would like to ask if there is any reference material on this on the web, or if you could share some insight on this?
Edit
A clarification: I use a polygonal font representation. Each font is constructed at unit size (1.0) in advance and scaled appropriately using glScalef(...) before rendering. I decided against pre-rasterizing, since users might zoom in quite closely (the application is used for CAD), so rasterization artifacts would become visible.
Additionally, since a scene seldom exceeds a few hundred characters (mainly labels and measurements), the speed gain from pre-rasterization is negligible.
Don't pre-build the display lists: create an intermediate sprite that builds the lists on demand and caches them, as in the sketch below. Trying to pre-compute lists, or pre-generate rasterized textures at every font size and font face and for all characters, is impractical, especially when you scale to Far Eastern character sets.
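As a sketch of that "build on demand, then cache" strategy (language-agnostic; Python is used here for brevity, and build_glyph_list is a hypothetical callable standing in for the expensive display-list construction):

class GlyphCache:
    def __init__(self, build_glyph_list):
        self._build = build_glyph_list
        self._lists = {}  # code point -> display list id

    def get(self, code_point):
        # Pay the construction cost only the first time a glyph is needed.
        if code_point not in self._lists:
            self._lists[code_point] = self._build(code_point)
        return self._lists[code_point]

    def render(self, text):
        # Resolve each character to its cached list; a real renderer would
        # then issue glCallLists over the resulting ids.
        return [self.get(ord(ch)) for ch in text]

# Stub builder that records which glyphs were actually built.
built = []
cache = GlyphCache(lambda cp: built.append(cp) or cp)
cache.render("Label 1")
cache.render("Label 2")   # shared glyphs ("L", "a", ...) are not rebuilt

This way only the handful of glyphs a scene actually uses is ever constructed, no matter how large the character repertoire is.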
You need to replace the wglOutLineFont.
To do that, generate/render the required glyphs to a texture using the wglOutLineFont, and then save the texture into a raster image file. When the application loads, it needs to load the texture image and the glyph texture coordinates (4 coordinates for each glyph), and generate the display lists (one list for each glyph, where each display list draws a single glyph as a textured quad).
Each short representing a glyph shall have a corresponding display list (their values must match, and glListBase can aid in this).
I suppose loading a texture is faster than generating font display lists at runtime; practically, you move the glyph rasterization offline. But the display list generation can still be heavy (many glyphs). You can run the display list generation in a separate thread, or generate only the display lists your content actually requires. A sketch of the per-glyph lists follows below.
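For the display list generation itself, a per-glyph textured-quad list might look roughly like this (PyOpenGL syntax for brevity; it assumes a current GL context, an already-loaded atlas texture id, and per-glyph texture coordinates, all of which are placeholders here):

from OpenGL.GL import *

def build_glyph_lists(atlas_texture, glyph_coords, first_code_point):
    # glyph_coords: one (u0, v0, u1, v1) tuple per glyph, in code point order.
    base = glGenLists(len(glyph_coords))
    for i, (u0, v0, u1, v1) in enumerate(glyph_coords):
        glNewList(base + i, GL_COMPILE)
        glBindTexture(GL_TEXTURE_2D, atlas_texture)
        glBegin(GL_QUADS)
        glTexCoord2f(u0, v0); glVertex2f(0.0, 0.0)
        glTexCoord2f(u1, v0); glVertex2f(1.0, 0.0)
        glTexCoord2f(u1, v1); glVertex2f(1.0, 1.0)
        glTexCoord2f(u0, v1); glVertex2f(0.0, 1.0)
        glEnd()
        glTranslatef(1.0, 0.0, 0.0)  # advance one unit-size character cell
        glEndList()
    # Let glCallLists take raw code points; assumes base > first_code_point,
    # which is commonly true but should be checked.
    glListBase(base - first_code_point)
    return base

Rendering then stays the same glCallLists(length(m_Text), UNSIGNED_SHORT, PWchar(m_Text)) call as before, only now each list draws a textured quad instead of an outline glyph.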
I've had good luck transliterating this tutorial into C++, though I'm not sure how well it will transfer to Delphi.
