I suppose the simplest understanding of what a (bitmap) image is would be an array of pixels. After that, it gets pretty technical.
I've been trying to understand the sort of information that an image may provide and have come across a large collection of technical terms like "mipmap", "pitch", "stride", "linear", "depth", as well as other format-specific things.
These seem to pop up across a lot of different formats, so it'd probably be useful to understand what purpose they serve in an image. Looking at the DDS, BMP, PNG, TGA, and JPG documentation has only made it clear that an image is pretty confusing.
Despite searching around for some hours, I haven't found any nice tutorial-like breakdown of just what an image is and what all of its different properties mean.
The eventual goal would be to take proprietary image formats and convert them to more common formats like DDS or BMP, or perhaps to make up my own image format.
Any good readings?
Even your simplified explanation of an image doesn't encompass all the possibilities. For example, an image can be divided into planes, where the red pixel values are all stored together, followed by the green pixel values, followed by the blue pixel values. Such layouts are uncommon but still possible.
Assuming a simple layout of pixels, you must still determine the pixel format. You might have a paletted image, where some number of bits (1, 4, or 8) per pixel is an index into a palette or color table that defines the RGB color of the pixel along with its transparency (one index is typically reserved as the transparent pixel). Otherwise each pixel will be 3 or 4 bytes, depending on whether a transparency or alpha value is included. The order of the values, (R,G,B) or (B,G,R), depends on the format: Windows bitmaps are B,G,R, while nearly everything else will be R,G,B.
The stride is the number of bytes between rows of the image. Windows bitmaps, for example, take the width of the image times the number of bytes per pixel and round it up to the next multiple of 4 bytes.
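A minimal sketch of that stride rule in Python (the width and bytes-per-pixel values are just examples):

    def bmp_stride(width_px: int, bytes_per_pixel: int) -> int:
        # Row stride for a Windows bitmap: row length rounded up to a multiple of 4 bytes.
        row_bytes = width_px * bytes_per_pixel
        return (row_bytes + 3) & ~3  # round up to the next multiple of 4

    # A 101-pixel-wide, 24-bit (3 bytes/pixel) row holds 303 bytes of pixel data
    # but occupies 304 bytes per row in the file.
    assert bmp_stride(101, 3) == 304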
I've never heard of DDS, and BMP is only common in the Windows world (and there's a lot more computing in the non-Windows world than you might think). Rather than worry about all of the technical details, why not just use an existing toolkit such as ImageMagick, which can already batch-convert dozens of formats to your one common format?
Unless you're doing specialized work where you would need something fancy like HDR (which most image formats don't even support, so most of your sources wouldn't have it in the first place), you're probably best off picking something standard like PNG or JPG. They both have pluses and minuses, and you might want to support both depending on the image.
A DICOM file may have either uncompressed or compressed pixel data. Its Photometric Interpretation (0028,0004) can be MONOCHROME1, MONOCHROME2, RGB, PALETTE COLOR, YBR, etc. There is also a pydicom page about color space.
But from these pages, or any other DICOM websites, it is not clear to me how to get the color depth.
Does either the BitsAllocated (0028,0100) or the BitsStored (0028,0101) tag refer to the color depth? Can the color depth be different from these two tag values?
How can I get the color depth of DICOM pixel data in a reliable way?
Bits Stored is the number of bits that is used for the actual color or grayscale data, so it is at least related to the color depth. Bits Allocated is always a multiple of 8, as the data is always organized in bytes, where some of the upper bits may not be used for data (with the exception of single-bit data, where Bits Allocated is 1).
Getting the bit depth is not as straightforward as it may seem. While the number of bits used for the data can usually be determined, the resolution of the data (i.e. the distance between adjacent values) may also depend on the Photometric Interpretation, and of course on the resolution provided by the modality itself.
The easiest case is monochrome data (Photometric Interpretation is MONOCHROME1 or MONOCHROME2), where the color depth is directly defined by Bits Stored, typical values being 12, 14 or 16. The same is mostly true for RGB data (i.e. data originally recorded as RGB), and while it is true that Bits Stored can have different values for JPEG2000-encoded images, as correctly mentioned by @kritzel_sw, I have yet to see any RGB data with Bits Stored different from 8. Update: I still haven't seen this, but found that RTDOSE images can have 32 Bits Stored.
For color data in the YBR color space (Photometric Interpretation is YBR_xxx) this is less clear, and it somewhat depends on your definition of color depth. Given that the color space is YBR instead of RGB, and that the number of bits used for each component may be different (for example in YBR_FULL_422, which is used for some JPEG-compressed images, two of the channels are downsampled), the image converted into RGB (which is what mostly happens) uses 8 bits for each color component, but the actual number of possible values is less than 256 for that reason. So if your definition of color depth is the number of bits used per RGB channel, the answer would probably be 8 in this case; but if you define the color depth per YBR channel, the answer could be different and depends on both the Photometric Interpretation and Bits Stored.
A special case is the Photometric Interpretation of PALETTE COLOR, where the possible colors are defined in the color table. In this case, the number of colors per color component is defined in the first value of the Palette Color Lookup Table Descriptor (0028,1101-1104), which is equal for all 3 tables (i.e. for the Red, Green and Blue components). The actual color depth has to be derived from that value.
Given all that, the answer is probably: it depends. I'll also add the note by @kritzel_sw that many of the IODs significantly limit the degrees of freedom in how pixel data is encoded, which narrows down the possibilities for the color depth for any concrete type of image.
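If it helps, here is a rough pydicom sketch for inspecting the relevant attributes (the file path is hypothetical, and the palette handling just follows the descriptor convention described above):

    import pydicom

    ds = pydicom.dcmread("image.dcm")  # hypothetical path

    print("Photometric Interpretation:", ds.PhotometricInterpretation)
    print("Bits Allocated:", ds.BitsAllocated)  # (0028,0100): container size in bits
    print("Bits Stored:", ds.BitsStored)        # (0028,0101): bits actually used for data

    # For PALETTE COLOR, the first value of the descriptor is the number of LUT
    # entries per channel (0 is a special value meaning 65536).
    if ds.PhotometricInterpretation == "PALETTE COLOR":
        entries = ds.RedPaletteColorLookupTableDescriptor[0]  # (0028,1101)
        print("Palette entries per channel:", entries or 65536)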
I'm interested if anybody has a more straightforward answer.
I have noticed that in some tile-based game engines, tiles are saved as grayscale or sometimes even black and white, and the colour is then added by storing a 'palette' along with them which gets applied to certain pixels. However, I've never seen how it knows which pixels.
Just to name a few engines I've seen use this: Notch's Minicraft and the old Pokemon games for the Game Boy. This is what informed me of how a colour palette is used in old games: deconstructulator
From the little I've seen of people using this technique in tutorials, it uses a form of bit-shifting. I'd like to know how this was so efficient that it was next to mandatory on old 8-bit consoles: how is it possible to apply red, green and blue to specific pixels of an image every frame instead of saving the whole coloured image? (Some pseudo-code would be nice.)
The efficient thing about it is that it saves memory. Storing the RGB values usually requires 24 bits (8 bits per channel). With a palette of 256 colors (requiring 256 * 24 bits = 768 bytes), each pixel requires just 8 bits (2^8 = 256 colors). So three times as many pixels can be stored in the same amount of memory (if you don't count the palette itself), but with a limited set of colors, obviously. This used to be supported in hardware so the graphics memory could be used more efficiently too (well, this is actually still supported in modern PC hardware, but it's almost never used since graphics memory isn't that limited anymore).
Some hardware (including the Super Game Boy supported by the first Pokemon games) used more than one hardware palette at a time. Different sets of tiles are mapped to different palettes. But the way the tiles are mapped to different palettes is very hardware dependent and often not very straightforward.
The way the bits of the image are stored isn't always so straightforward either. If the pixels are 8 bits, it can be an array of bytes where every byte is simply one pixel (as in the classic VGA mode used in many old DOS games). But the Game Boy, for example, uses 2 bits per pixel, and others used 4, i.e. 2^4 = 16 colors per palette. A common way to arrange the bits in memory is by using bitplanes, that is (in the case of 16-color graphics) storing 4 separate 1-bit images. The bitplanes can in some cases also be interleaved in different ways. So there is no simple and generic answer to how to decode/encode graphics this way, and I guess you have to be more specific about what you want pseudo-code for.
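Since pseudo-code was requested, here is a rough sketch of decoding one row of a Game-Boy-style 2-bits-per-pixel tile, with a made-up palette (the exact layout varies by hardware, as noted above):

    # Each 8-pixel row of a Game Boy tile is stored as two bytes (two bitplanes);
    # combining bit i of both planes gives a 2-bit palette index (0..3) for pixel i.
    def decode_2bpp_row(plane0: int, plane1: int) -> list[int]:
        indices = []
        for bit in range(7, -1, -1):          # the leftmost pixel is the highest bit
            lo = (plane0 >> bit) & 1
            hi = (plane1 >> bit) & 1
            indices.append((hi << 1) | lo)
        return indices

    # A made-up 4-entry palette mapping indices to RGB triples:
    PALETTE = [(255, 255, 255), (170, 170, 170), (85, 85, 85), (0, 0, 0)]

    row_indices = decode_2bpp_row(0b00111100, 0b01111110)   # arbitrary example bytes
    row_rgb = [PALETTE[i] for i in row_indices]

Swapping PALETTE for another table recolors the same tile data without touching the pixels, which is where the memory saving and the palette-swap tricks come from.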
And I'm not sure how any of this applies to Minicraft. Maybe just to how the graphics are stored on disk. But it has no significance once the graphics are loaded into graphics memory. (Maybe you have some other feature of Minicraft in mind?)
I think this may be a stupid question, but after reading and searching a lot about image processing, every example I see works on grayscale images.
I understand that grayscale images use just one color channel, which normally needs only 8 bits to be represented, etc. But why use grayscale when we have a color image? What are the advantages of grayscale? I can imagine it's because there are fewer bits to process, but is that still necessary even today, with faster computers?
I am not sure if I was clear about my question; I hope someone can answer me.
Thank you very much.
As explained by John Zhang:
luminance is by far more important in distinguishing visual features
John also gives an excellent suggestion to illustrate this property: take a given image and separate the luminance plane from the chrominance planes.
To do so you can use ImageMagick's -separate operator, which extracts the current contents of each channel as a grayscale image:
convert myimage.gif -colorspace YCbCr -separate sep_YCbCr_%d.gif
Here's what it gives on a sample image (top-left: original color image, top-right: luminance plane, bottom row: chrominance planes).
To elaborate a bit on deltheil's answer:
Signal to noise. For many applications of image processing, color information doesn't help us identify important edges or other features. There are exceptions. If there is an edge (a step change in pixel value) in hue that is hard to detect in a grayscale image, or if we need to identify objects of known hue (orange fruit in front of green leaves), then color information could be useful. If we don't need color, then we can consider it noise. At first it's a bit counterintuitive to "think" in grayscale, but you get used to it.
Complexity of the code. If you want to find edges based on luminance AND chrominance, you've got more work ahead of you. That additional work (and additional debugging, additional pain in supporting the software, etc.) is hard to justify if the additional color information isn't helpful for applications of interest.
For learning image processing, it's better to understand grayscale processing first and understand how it applies to multichannel processing rather than starting with full color imaging and missing all the important insights that can (and should) be learned from single channel processing.
Difficulty of visualization. In grayscale images, the watershed algorithm is fairly easy to conceptualize because we can think of the two spatial dimensions and one brightness dimension as a 3D image with hills, valleys, catchment basins, ridges, etc. "Peak brightness" is just a mountain peak in our 3D visualization of the grayscale image. There are a number of algorithms for which an intuitive "physical" interpretation helps us think through a problem. In RGB, HSI, Lab, and other color spaces this sort of visualization is much harder since there are additional dimensions that the standard human brain can't visualize easily. Sure, we can think of "peak redness," but what does that mountain peak look like in an (x,y,h,s,i) space? Ouch. One workaround is to think of each color variable as an intensity image, but that leads us right back to grayscale image processing.
Color is complex. Humans perceive color and identify color with deceptive ease. If you get into the business of attempting to distinguish colors from one another, then you'll either want to (a) follow tradition and control the lighting, camera color calibration, and other factors to ensure the best results, or (b) settle down for a career-long journey into a topic that gets deeper the more you look at it, or (c) wish you could be back working with grayscale because at least then the problems seem solvable.
Speed. With modern computers, and with parallel programming, it's possible to perform simple pixel-by-pixel processing of a megapixel image in milliseconds. Facial recognition, OCR, content-aware resizing, mean shift segmentation, and other tasks can take much longer than that. Whatever processing time is required to manipulate the image or squeeze some useful data from it, most customers/users want it to go faster. If we make the hand-wavy assumption that processing a three-channel color image takes three times as long as processing a grayscale image--or maybe four times as long, since we may create a separate luminance channel--then that's not a big deal if we're processing video images on the fly and each frame can be processed in less than 1/30th or 1/25th of a second. But if we're analyzing thousands of images from a database, it's great if we can save ourselves processing time by resizing images, analyzing only portions of images, and/or eliminating color channels we don't need. Cutting processing time by a factor of three to four can mean the difference between running an 8-hour overnight test that ends before you get back to work, and having your computer's processors pegged for 24 hours straight.
Of all these, I'll emphasize the first two: make the image simpler, and reduce the amount of code you have to write.
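As a small illustration of the "make the image simpler" point, reducing an RGB image to a single luminance channel is essentially one line with NumPy (Rec. 601 weights; the filenames are just examples):

    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float32)

    # Rec. 601 luma weights: most of the perceived brightness comes from green.
    gray = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

    # One third the data, and subsequent operators work on a single channel.
    Image.fromarray(gray.astype(np.uint8)).save("photo_gray.png")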
I disagree with the implication that grayscale images are always better than color images; it depends on the technique and the overall goal of the processing. For example, if you wanted to count the bananas in an image of a fruit bowl, then it's much easier to segment when you have a color image!
Many images have to be in grayscale because of the measuring device used to obtain them. Think of an electron microscope: it measures the strength of an electron beam at various points in space. An AFM measures the amount of resonant vibration at various points across the topology of a sample. In both cases, these tools return a single value, an intensity, so they implicitly create a grayscale image.
Image processing techniques based on brightness can often be applied well enough to the overall brightness (grayscale); however, there are many instances where having a color image is an advantage.
Binary might be too simple, and it cannot represent the character of the picture.
Color might be too much and affect the processing speed.
Thus grayscale is chosen, sitting between the two extremes.
Before starting image processing, whether on grayscale or color images, it is better to focus on the application we are targeting. If we choose one of them randomly, it will create accuracy problems in our results. For example, if I want to process an image of a waste bin, I prefer grayscale rather than color, because in the bin image I only want to detect the shape of the bin using optimized edge detection. I don't care about the color of the image; I just want to see the rectangular shape of the bin correctly.
I have millions of 16-bit, losslessly compressed TIFFs (about 2 MB each), and after exhausting terabytes of disk space I think it's time to archive the older TIFFs as 8-bit JPEGs. Each individual image is a grayscale image, though there may be as many as 5 such images representing the same imaging area at different wavelengths. I want to preserve as much information as possible in this process, including the ability to restore the images to their approximate original values. I know there are ways to get further savings through spatial correlations across multiple channels, but the number of channels can vary, and it would be nice to be able to load channels independently.
The images themselves suggest some possible strategies to use since close to ~60% of the area in each image is dark 'background'. So one way to preserve more of the useful image range is just to threshold away anything below this 'background' before scaling and reducing the bit depth. This strategy is, of course, pretty subjective, and I'm looking for any other suggestions for strategies that are demonstrably superior and/or more general. Maybe something like trying to preserve the most image entropy?
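For concreteness, here is roughly what I mean by thresholding and rescaling (a NumPy sketch; the background level and filenames are made up):

    import numpy as np
    from PIL import Image

    img16 = np.asarray(Image.open("frame_16bit.tif"), dtype=np.uint16)  # hypothetical file

    BACKGROUND = 1200                          # assumed background level for this dataset
    clipped = np.clip(img16, BACKGROUND, None) - BACKGROUND

    # Spend the 8-bit range on the remaining signal; keep the scale factor so the
    # mapping can be approximately inverted later.
    scale = clipped.max() / 255.0 if clipped.max() > 0 else 1.0
    img8 = np.round(clipped / scale).astype(np.uint8)

    Image.fromarray(img8).save("frame_8bit.jpg", quality=90)
    # To approximately restore: original ~ img8 * scale + BACKGROUND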
Thanks.
Your 2MB TIFFs are already losslessly compressed, so you would be hard-pressed to find a method that allows you to "restore the images" to their original value ranges without some loss of intensity detail.
So here are some questions to narrow down your problem a bit:
What are the image dimensions and number of channels? It's a bit difficult to guess from the filesize and bit depth alone, because as you've mentioned you're using lossless compression. A sample image would be good.
What sort of images are they? E.g. are they B/W blueprints, X-ray/MRI images, or color photographs? You mention that around 60% of each image is "background"; could you tell us more about the image content?
What are they used for? Is it just for a human viewer, or are they training images for some computer algorithm?
What kind of coding efficiency are you expecting? E.g. for the current 2MB filesize, how small do you want your compressed files to be?
Based on that information, people may be able to suggest something. For example, if your images are just color photographs that people will look at, 4:2:0 chroma subsampling will give you a 50% reduction in space without any visually detectable quality loss. You may even be able to keep your 16-bit image depth, if the reduction is sufficient.
Finally, note that you've compared two fundamentally different things in your question:
"top ~40% of the pixels" -- here it sounds like you're talking about contiguous parts of the intensity spectrum (e.g. intensities from 0.6 to 1.0) -- essentially the probability density function of the image.
"close to ~60% of the area in each image" -- here you're talking about the distribution of pixels in the spatial domain.
In general, these two things are unrelated and comparing them is meaningless. There may be an exception for specific image content -- please put up a representative image to make it obvious what you're dealing with.
If you edit your question, I'll have a look and reply if I think of something.
What is an efficient way to compare two images in Visual C?
Also, in which format should the images be stored (BMP, GIF, JPEG, ...)?
Please provide some suggestions.
If the images you are trying to compare have distinctive characteristics that you are trying to differentiate, then PCA is an excellent way to go. The question of which file format you need is really irrelevant; you need to load the image into the program as an array of numbers and do the analysis.
Your question opens a can of worms in terms of complexity.
If you want to compare two images to check whether they are the same, then you can compute an MD5 hash of the file (after removing any metadata that could distort your result).
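For example (a rough Python sketch that hashes the decoded pixel bytes, so metadata can't distort the result; the filenames are made up):

    import hashlib
    from PIL import Image

    def pixel_md5(path: str) -> str:
        # MD5 of the decoded pixel bytes, so metadata (EXIF, comments) is ignored.
        img = Image.open(path).convert("RGB")
        return hashlib.md5(img.tobytes()).hexdigest()

    same = pixel_md5("a.png") == pixel_md5("b.png")  # hypothetical files

Note that this only detects bit-identical pixel data, so it works for lossless formats but not across lossy re-encodes.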
If you want to check whether they look the same, then it's a completely different story altogether. "Look the same" is intended in a very loose sense (e.g. they are exactly the same image but stored in two different file formats). For this, you need advanced algorithms, which will give you a probability that the two images are the same. Not being an expert in the field, I would perform the following "invented out of my head" algorithm:
take an arbitrary set of pixel points from the image.
for each pixel "grow" a polygon out of the surrounding pixels which are near in color (according to HSV colorspace)
do the same for the other image
for each polygon of one image, check the geometrical similitude with all the other polygons in the other image, and pick the highest value. Divide this value by the area of the polygon (to normalize).
create a vector out of the highest values obtained
the higher is the norm of this vector, the higher is the chance that the two images are the same.
This algorithm should be insensitive to color drift and image rotation. Maybe also scaling (you normalize against the area). But I restate: not an expert, there's probably much better, and it could make kittens cry.
I did something similar to detect movement from a MJPEG stream and record images only when movement occurs.
For each decoded image, I compared to the previous using the following method.
Resize the image to effectively thumbnail size (I resized fairly hi-res images down by a factor of ten).
Compare the brightness of each pixel to the previous image and flag if it is much lighter or darker (threshold value 1)
Once you've done that for each pixel, you can use the count of different pixels to determine whether the image is the same or different (threshold value 2)
Then it was just a matter of tuning the two threshold values.
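In rough Python/NumPy terms (I actually used System.Drawing.Bitmap, so this is just a sketch with made-up threshold values and thumbnail size):

    import numpy as np
    from PIL import Image

    PIXEL_THRESHOLD = 16     # threshold value 1: per-pixel brightness difference
    COUNT_THRESHOLD = 50     # threshold value 2: how many pixels may differ
    THUMB_SIZE = (64, 48)    # roughly a factor-of-ten reduction for my frames

    def frames_differ(path_a: str, path_b: str) -> bool:
        a = np.asarray(Image.open(path_a).convert("L").resize(THUMB_SIZE), dtype=np.int16)
        b = np.asarray(Image.open(path_b).convert("L").resize(THUMB_SIZE), dtype=np.int16)
        changed = np.abs(a - b) > PIXEL_THRESHOLD
        return int(changed.sum()) > COUNT_THRESHOLD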
I did the comparisons using System.Drawing.Bitmap, but as my source images were JPGs, there was some artifacting.
It's a nice simple way to compare images for differences if you're going to roll it yourself.
If you want to determine if 2 images are the same perceptually, I believe the best way to do it is using an Image Hashing algorithm. You'd compute the hash of both images and you'd be able to use the hashes to get a confidence rating of how much they match.
One that I've had some success with is pHash, though I don't know how easy it would be to use with Visual C. Searching for "Geometric Hashing" or "Image Hashing" might be helpful.
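As an illustration of the idea (not pHash itself): the Python ImageHash package exposes a similar perceptual hash, where the difference between two hashes is a Hamming distance:

    import imagehash
    from PIL import Image

    h1 = imagehash.phash(Image.open("a.jpg"))   # hypothetical files
    h2 = imagehash.phash(Image.open("b.jpg"))

    # Small Hamming distances mean the images are perceptually close;
    # the cut-off below is an arbitrary example.
    print("similar" if (h1 - h2) <= 10 else "different")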
Testing for strict identity is simple: Just compare every pixel in source image A to the corresponding pixel value in image B. If all pixels are identical, the images are identical.
But I guess you don't want this kind of strict identity. You probably want images to be considered "identical" even if certain transformations have been applied to image B. Examples of these transformations might be:
changing image brightness globally (for every pixel)
changing image brightness locally (for every pixel in a certain area)
changing image saturation globally or locally
gamma correction
applying some kind of filter to the image (e.g. blurring, sharpening)
changing the size of the image
rotation
E.g. printing an image and scanning it again would probably involve all of the above.
In a nutshell, you have to decide which transformations you want to treat as "identical" and then find image measures that are invariant to those transformations. (Alternatively, you could try to revert the transformations, but that's not possible if the transformation removes information from the image, e.g. blurring or clipping.)
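As one concrete example, a measure that is invariant to a global brightness or contrast change is the normalized correlation of the two images (a NumPy sketch, assuming both images have the same dimensions):

    import numpy as np
    from PIL import Image

    def normalized_correlation(path_a: str, path_b: str) -> float:
        # Remove mean and scale from each image, so a global brightness or
        # contrast change leaves the score (close to 1.0 for a match) unchanged.
        a = np.asarray(Image.open(path_a).convert("L"), dtype=np.float64).ravel()
        b = np.asarray(Image.open(path_b).convert("L"), dtype=np.float64).ravel()
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.dot(a, b) / a.size)

A score near 1.0 suggests the images match up to brightness/contrast; handling rotation, scaling, or local changes needs more elaborate, transformation-specific measures.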