What is Multiframe image in DICOM? - image-processing

What is a multi-frame image in DICOM? How is a multi-frame image different from having multiple images in a single series?

A multi-frame image is typically a more compact representation of a multi-image (single-frame) series. In a single-frame image series, you would need to repeat the same header data (patient information, image properties etc.) in every image; in a multi-frame image the header data is given once.
Multi-frame images inevitably have some limitations in relation to single-frame image series; in particular, all frames in the multi-frame image would need to have the same size, orientation, etc.
Multi-frame images have historically also not been as widely supported by DICOM viewers, PACS systems etc. as single-frame images, although I believe that this situation is improved nowadays.

See link. There are at least three kinds of multiframe, indicated by
FIP "Frame Increment Pointer" (0028,0009) and SPP
"Stereo Pairs Present" (0022,0028).
time: each frame is at the same location but a different time; you can play the multiframe like a movie. Indicated by FIP = "Frame Time" (0018,1063).
location: each frame is at the same time but a different location. Indicated by FIP = "Image Position Patient" (0020,0032).
stereo: there are only two frames, one for the left eye and one for the right. Indicated by SPP = "YES".
I suppose a modality could combine these, for example a stereo movie or a volume over time. In that case the FIP would be an array, perhaps?
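The classification above can be sketched as a small helper. This is a hypothetical illustration, not part of any library: the `ds` argument is a plain dict standing in for a pydicom-style dataset, keyed by DICOM attribute keywords, and the function name is mine.

```python
# Hypothetical sketch: classify a multi-frame dataset by the tags
# described above. `ds` is a plain dict standing in for a DICOM
# dataset, keyed by attribute keyword.

FRAME_TIME = (0x0018, 0x1063)               # "Frame Time"
IMAGE_POSITION_PATIENT = (0x0020, 0x0032)   # "Image Position (Patient)"

def classify_multiframe(ds):
    """Return 'stereo', 'time', 'location', or 'unknown'."""
    if ds.get("StereoPairsPresent") == "YES":
        return "stereo"
    fip = ds.get("FrameIncrementPointer")    # the tag that varies per frame
    if fip == FRAME_TIME:
        return "time"
    if fip == IMAGE_POSITION_PATIENT:
        return "location"
    return "unknown"
```

A combined case (e.g. a volume over time) would presumably show up as a multi-valued Frame Increment Pointer, which this sketch does not attempt to handle.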

Related

How to overlay the RTDOSE and Image data in the correct position?

I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two together, the dose will not be in the correct position based on what the Elekta Gamma Plan software exported it as.
I've played around with dicompyler and 3DSlicer, and they are obviously able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data when using these programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying the 3D position of all pixel data in patient coordinates. This uses a set of attributes in the DICOM files known as the Image Plane Module tags. See here for a good overview.
SimpleITK fully understands and uses the 3D Image Plane tags to identify and locate any image in patient coordinates by default, irrespective of specifics such as pixel spacing, slice thickness, etc.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note: I use both pydicom and SITK. This isn't a knock against pydicom, but more a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.
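The patient-coordinate mapping that SITK performs can be sketched directly from the Image Plane Module attributes (DICOM PS3.3 C.7.6.2.1.1). This is a minimal illustration of the geometry, not what SITK actually executes internally; the function name is mine.

```python
import numpy as np

def index_to_patient(ipp, iop, pixel_spacing, row, col):
    """Map a (row, col) pixel index to patient (x, y, z) coordinates
    using the DICOM Image Plane Module attributes.

    ipp: Image Position (Patient) - x,y,z of the first pixel, in mm
    iop: Image Orientation (Patient) - 6 direction cosines,
         first 3 along a row (increasing column), last 3 down a column
    pixel_spacing: [row spacing, column spacing] in mm
    """
    ipp = np.asarray(ipp, dtype=float)
    row_dir = np.asarray(iop[:3], dtype=float)   # direction of increasing column
    col_dir = np.asarray(iop[3:], dtype=float)   # direction of increasing row
    dr, dc = pixel_spacing
    return ipp + col * dc * row_dir + row * dr * col_dir

# Axial slice at the origin, identity orientation, 0.5 mm pixels:
p = index_to_patient([0, 0, 0], [1, 0, 0, 0, 1, 0], [0.5, 0.5], row=4, col=2)
# p == [1.0, 2.0, 0.0]
```

Once both the CT and the RTDOSE grids are expressed in patient coordinates this way, resampling one onto the other (which SITK does for you) is what makes the overlay line up.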

What software is recommended to automate image annotation?

We make images like the following in Excel. The raw image is imported and positioned in the generally correct area within the annotations, which are themselves images linked to ranges, the contents of which differ depending on selections made by the user.
The absolute position and dimensions of each annotation must be adjusted manually for every image. The number of sample names can vary (up to 12 lanes of samples). The size ladder on the left can also vary depending on the size of the protein being analyzed.
After everything is correctly sized and aligned, the range containing the raw image + annotations is copied and saved as a jpg (which is then imported into an Access database).
Though we've automated some parts of this with VBA, the process of tweaking every image (column widths, text size, position of the size ladder, etc.) can get very tedious. Surely there is software out there that will make this process more efficient; it takes one of our staff members hours to adjust and finalize 10-20 of these images.
Any recommendations are welcomed.
This procedure is called electrophoresis. Samples (in this case proteins) are loaded into a polyacrylamide gel (each sample in its own "lane") and pulled through the gel with electricity. This process separates all of the proteins in each lane by size and charge.
The "ladder" is a solution of various proteins of known size. It's used to determine the sizes of the proteins in the other lanes.
The image on the left contains the range of sizes in the ladder (in this case 10, 15,...150, 200). Each "step" in the ladder image is aligned with the black bands that appear in the ladder lane in the experiment (the actual ladder lane containing the black bands is not present in this case; it's cropped after alignment to improve the overall look of the image).
The images on the right are protein names and point to the location on the gel where that particular protein should appear. The protein Actin, for example, is supposed to come out at around 42 kilodaltons. The fact that there is a prominent black band in that location is good supporting evidence that this sample contains Actin protein.
Many gels will also describe the sample source at the top or the bottom. So, for example, if the sample in lane 1 was derived from mouse liver cells, lane 1 would be annotated as "mouse liver."
The raw image is captured in the lab and is saved as a jpg. This jpg is then manually copied directly into an Excel sheet, where extraneous parts of the image are cropped. The cropped image is then moved to within the area of the worksheet that contains the annotations (ladder, protein names, sample names). These annotations are themselves images (linked to other ranges in the workbook that change with every experiment...protein names, samples names, ladder type can be different for every experiment). These annotation images require fine positioning in each case (as described previously) to align with the lanes and with the protein sizes. Once everything is aligned, it is saved as a jpg and moved into Access.
My question is: is there software already out there designed specifically for tasks like these? Just as Excel is not a database program, it is also not an image annotation program. I want to know if there is an application out there, ready to go, that is specifically designed to annotate images with elements that can vary from image to image.
Of course, there will still be a need for manually moving elements around the image to get everything aligned (I'm not looking for a miracle here). I'm thinking that there has to be something better than Excel for this.

ImageJ - how to ensure two images are directly comparable (what scripting command?)

I need to present some microscopy images side by side and ensure they are directly comparable by eye in a presentation. The pictures were all taken with the same exposures, gain et cetera so the underlying pixel values should be comparable.
However, the microscopy software has a nasty habit of saving the files with one of the colour channels saturated (for some reason), so I have to process the images for presentations.
Previously I'd been using a macro which processes through a folder and calls the scripting command
run("Enhance Contrast", "saturated=0.35");
But on reflection I don't think this is the proper command to call. I don't think it would produce images that are directly comparable, by eye, to each other.
I had thought that the command
run("Color Balance...");
resetMinAndMax();
would be best as it should show the full display range. But the display values shown on the histogram do vary depending on the image.
Is this appropriate for making directly comparable images, or would a command like
setMinAndMax();
be more appropriate, with 0 as the minimum and an arbitrary figure as the maximum? This is driving me mad, as I keep getting asked whether my images are directly comparable, but I simply don't know!
Usually, resetMinAndMax(); is the best way to ensure that your images are displayed consistently.
Note however that it also depends on the bit depth of your images.
8-bit grayscale images and RGB color images are displayed with a range of 0-255 upon resetMinAndMax();
For 16-bit (short) and 32-bit (float) images, the display range is calculated from the actual minimum and maximum values of the image (see the documentation of the Brightness/Contrast dialog and the resetMinAndMax() macro function)
So for 16-bit and 32-bit images, you can use the Set Display Range dialog (Set button in the B&C window) with one of the default Unsigned 16-bit range options to ensure consistent display, or the macro call:
setMinAndMax(0, 65535);
If you want to use the images in a presentation, copy them using Edit > Copy to System, or convert them to either 8-bit or RGB before saving them and inserting them in a presentation.
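Outside ImageJ, the effect of a fixed display range can be sketched in Python with NumPy. This is an illustration of what `setMinAndMax(0, 65535)` does when a 16-bit image is converted to 8-bit for display (the function name is mine, and the exact rounding may differ from ImageJ's):

```python
import numpy as np

def to_display_8bit(img16, lo=0, hi=65535):
    """Map a 16-bit image to 8-bit using a FIXED display range, so the
    same pixel value always maps to the same gray level across images
    (the equivalent of setMinAndMax(lo, hi) before an 8-bit conversion)."""
    img = np.clip(img16.astype(np.float64), lo, hi)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

a = np.array([[0, 32768, 65535]], dtype=np.uint16)
to_display_8bit(a)  # → [[0, 128, 255]]
```

The point is that `lo` and `hi` are constants shared by all images, whereas `resetMinAndMax()` on a 16-bit image derives them from each image's own minimum and maximum, which is what makes the displayed histograms vary.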

best way to upload a large image

I have an image captured every second from my web cam of size 720x576.
I ultimately display this in a canvas control via my server.
I convert this jpeg to bytes (31553) and upload it using WCF.
I have been debating whether to split this image into 4 smaller images and uploading them 1 after the other. When each image is uploaded it is drawn on a hidden canvas. Then once all 4 images are uploaded I update the visible canvas with the 'cached' canvas.
Will this be a better/faster way to upload the image by splitting into 4 images or will it make no difference at all?
I will write and conduct tests for this code now but thought I would set myself up to be educated as to what the done/accepted wisdom is.
Thanks
If you look at it from a compression point of view, the total size of the four images should be more than that of one image: you can think of it as compressing the redundant information four times. If you keep dividing and compressing, you may end up sending every pixel.
Another way to look at this is from the network's point of view. Internet bandwidth is often the limiting factor, so sending one file would probably be best (as it would be smaller). In another scenario, there might be congestion in the network, so multiple data streams (if you upload them in parallel and the server is multi-threaded) are more likely to get a larger share of the bandwidth.
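The compression argument can be demonstrated with a quick sketch, using zlib as a stand-in for JPEG (the exact numbers depend entirely on the codec and the image content, but the direction of the effect is the same):

```python
import zlib

# A synthetic, highly redundant "image": a repeating 256-byte pattern.
data = bytes(range(256)) * 256          # 65,536 bytes

whole = zlib.compress(data)
quarter = len(data) // 4
parts = [zlib.compress(data[i * quarter:(i + 1) * quarter])
         for i in range(4)]

# Compressing in one piece exploits redundancy across the whole buffer;
# each split part pays its own header and restarts the compressor's
# history, so the four parts together come out larger.
print(len(whole), sum(len(p) for p in parts))
```

For a real JPEG the per-file overhead (headers, quantization tables) is even larger, so the four-way split costs bytes before any network effect is considered.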

Programmatically get non-overlapping images from MP4

My ultimate goal is to get meaningful snapshots from MP4 videos that are either 30 min or 1 hour long. "Meaningful" is a bit ambitious, so I have simplified my requirements.
The image should be crisp - non-overlapping, and ideally not blurry. Initially, I thought getting a keyframe would work, but I had no idea that keyframes could have overlapping images embedded in them like this:
Of course, some keyframe images look like this and those are much better:
I was wondering if someone might have source code to:
Take a sequence of say 10-15 continuous keyframes (jpg or png) and identify the best keyframe from all of them.
This must happen entirely programmatically. I found this paper: http://research.microsoft.com/pubs/68802/blur_determination_compressed.pdf
and felt that I could "rank" a few images based on the above paper, but then I was dissuaded by this link: Extracting DCT coefficients from encoded images and video, given that my source video is an MP4. Of course, this confuses me, because the input into the system is just a sequence of jpg images.
Another link that is interesting is:
Detection of Blur in Images/Video sequences
However, I am not sure if this will work for "overlapping" images.
The first pic is from an interlaced video at a scene change: the two fields belong to different scenes. De-interlacing the video will help; try the ffmpeg filter -filter:v yadif. I am not sure exactly how yadif works, but if it extracts the two fields and scales them to the original size, it would work. Another approach is to detect whether the two fields (extract alternate lines and form two images with half the height, then diff them) are very different from each other, and ignore those frames.
