I have some millions of 16-bit losslessly compressed TIFFs (about 2 MB each), and after exhausting terabytes of disk space I think it's time to archive the older TIFFs as 8-bit JPEGs. Each individual image is a grayscale image, though there may be as many as 5 such images representing the same imaging area at different wavelengths. I want to preserve as much information as possible in this process, including the ability to restore the images to their approximate original values. I know there are ways to get further savings through spatial correlations across multiple channels, but the number of channels can vary, and it would be nice to be able to load channels independently.
The images themselves suggest some possible strategies, since close to ~60% of the area in each image is dark 'background'. So one way to preserve more of the useful image range is simply to threshold away anything below this 'background' before scaling and reducing the bit depth. This strategy is, of course, pretty subjective, and I'm looking for suggestions for strategies that are demonstrably superior and/or more general. Maybe something like trying to preserve the most image entropy?
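To make the idea concrete, here is a rough sketch of the kind of conversion I have in mind (the 60th-percentile threshold, the JPEG quality, and the sidecar JSON file are just illustrative placeholders, not a recommendation):

import json

import numpy as np
from PIL import Image


def archive_to_8bit(path_in, path_out, background=None):
    # path_out should end in .jpg so Pillow picks the JPEG format.
    img = np.asarray(Image.open(path_in)).astype(np.uint16)
    # Estimate the 'background' level if none is given; the 60th percentile
    # is only a placeholder for whatever threshold turns out to work.
    if background is None:
        background = int(np.percentile(img, 60))
    hi = int(img.max())
    # Threshold away the background, then rescale the remaining range to 0..255.
    clipped = np.clip(img, background, hi).astype(np.float64)
    scaled = (clipped - background) / max(hi - background, 1) * 255.0
    Image.fromarray(scaled.astype(np.uint8)).save(path_out, quality=90)
    # Keep the scaling parameters so the original range can be roughly restored.
    with open(path_out + '.json', 'w') as f:
        json.dump({'background': background, 'hi': hi}, f)


def restore_approx(path_jpeg, path_json):
    img8 = np.asarray(Image.open(path_jpeg)).astype(np.float64)
    with open(path_json) as f:
        p = json.load(f)
    return (img8 / 255.0 * (p['hi'] - p['background']) + p['background']).astype(np.uint16)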
Thanks.
Your 2MB TIFFs are already losslessly compressed, so you would be hard-pressed to find a method that allows you to "restore the images" to their original value ranges without some loss of intensity detail.
So here are some questions to narrow down your problem a bit:
What are the image dimensions and number of channels? It's a bit difficult to guess from the filesize and bit depth alone, because as you've mentioned you're using lossless compression. A sample image would be good.
What sort of images are they? E.g. are they B/W blueprints, X-ray/MRI images, or color photographs? You mention that around 60% of each image is "background" -- could you tell us more about the image content?
What are they used for? Is it just for a human viewer, or are they training images for some computer algorithm?
What kind of coding efficiency are you expecting? E.g. for the current 2MB filesize, how small do you want your compressed files to be?
Based on that information, people may be able to suggest something. For example, if your images are just color photographs that people will look at, 4:2:0 chroma subsampling will give you a 50% reduction in space without any visually detectable quality loss. You may even be able to keep your 16-bit image depth, if the reduction is sufficient.
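For instance, with Pillow the subsampling mode can be chosen explicitly when saving; a rough illustration for a color photograph (the file names are placeholders, and this only applies to color images):

from PIL import Image

# Save the same color image with 4:2:0 and 4:4:4 chroma subsampling to compare file sizes.
img = Image.open('photo.png').convert('RGB')
img.save('photo_420.jpg', quality=85, subsampling=2)  # 2 = 4:2:0
img.save('photo_444.jpg', quality=85, subsampling=0)  # 0 = 4:4:4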
Finally, note that you've compared two fundamentally different things in your question:
"top ~40% of the pixels" -- here it sounds like you're talking about contiguous parts of the intensity spectrum (e.g. intensities from 0.6 to 1.0) -- essentially the probability density function of the image.
"close to ~60% of the area in each image" -- here you're talking about the distribution of pixels in the spatial domain.
In general, these two things are unrelated and comparing them is meaningless. There may be an exception for specific image content -- please put up a representative image to make it obvious what you're dealing with.
If you edit your question, I'll have a look and reply if I think of something.
Related
I used ImageMagick in my project. I implemented a sub-image detection system using ImageMagick's compare command, and it works well, giving fine results. From reading articles I learned that ImageMagick compares the pixels of the small image at every possible position within the pixels of the larger image, and that it detects rotated and scaled images using a fuzz factor. Although I have a rough idea of how the algorithm behaves, I couldn't find any article on the algorithms ImageMagick uses. Any idea how the algorithm behind the compare command actually works?
At my job I'm currently working on a tool that performs some sophisticated optimizations on PDF pages when they're too heavy (too many vectors, etc.), but it sometimes whites out rectangular parts of the image because of the drawing order of the PDF objects.
I decided to use the compare tool of ImageMagick v6 (retrocompatibility issues forbid v7 for the moment) to check the page rendering after treatment against the original rendering and detect when there have been accidents.
I tested the available -metric parameters on pages where the rendering was nearly identical to the original, and on one page where whited-out parts were happening, making the renderings very different.
I used -fuzz 10% to accept minor color variations, and used a JPEG quality of 99 in my tool so that JPEG compression doesn't generate too many differences. Beware of this one: a low JPEG compression quality forces you to increase the fuzz factor, at the risk of missing major visual differences. Unfortunately, you won't find this information in the JPEG headers.
I made my images mid-res (150 dpi), because low-res proved similar to low JPEG quality and generated too many differences. The renderings have the same resolution (549*819 = 449,631 pixels); the compare tool is not really strong at finding part of an image inside another image. (You'd be better off with OpenCV for that.)
Here is a table with some significant results on three different pages in my tool, and then my interpretation of each metric.
AE stands for absolute error count. This metric roughly gives the number of pixels considered different, within the 10% fuzz acceptance that I used. On nearly identical renderings this value is typically very low, only 18 or 1,400 pixels out of a total of 450K, while the very different images show nearly 40K different pixels. I think this metric can be checked against a low percentage of total pixels, but in my case it is not distinctive enough: say I allow 1% = 4,500 pixels, that may not be negligible if they all happen in a single rectangle. It could be useful with a geographical dispersion factor, but that is more of an OpenCV job.
MEPP is the mean error per pixel. As with all mean statistics it is not obvious to interpret, but the results are quite distinctive, jumping from about 6K for identical images to a huge 1.7M for different ones. The problem is deciding on a limit value. You can see in my table that I had a page at 470K on this metric which was nevertheless a visually acceptable rendering artifact. So what is an acceptable value? As often with mean values, the significance can prove very arbitrary and not always appropriate. The only way to find a reasonable limit is to take a lot of measurements on significant cases, perhaps using machine learning.
MSE is the mean squared error, the average of the squared channel errors. Squared deltas are often more significant in statistics, because they damp the minor differences and accentuate the major ones. (The linear regression correlation coefficient benefits from this mathematical behaviour.) This metric proves very interesting in my case because the values are very consistent for similar cases: even on the page with a visual artifact and 450K different pixels, the value stays well below 1, while the page with whited-out parts jumps to 9. The MSE metric is clearly very usable in my case.
NCC means normalized cross correlation. Normalized values are not appropriate in my case, as they bring the colors closer together and diminish the differences between distinct cases. You can see this in the values: although identical renderings are remarkably close to 1, the very different renderings still score 0.86, which is not so far from 1.
PAE gives you the absolute peak difference over the color channels of all pixels, so it doesn't tell you how many pixels are different. It also doesn't work well for RGB or CMYK, as it doesn't tell you which channels are concerned. It is of no use in my case.
PHASH is the perceptual hash for the sRGB and HCLp colorspaces. I'm not really sure about this one, but the results are very interesting: my images are RGB, and the values for similar renderings stay well below 1, even for the page with the acceptable visual artifact, while the very different renderings give a value of 26. This is very appropriate for my case, just like MSE. I favour this metric because it seems to accentuate even more the difference between minor and major gaps in color values.
PSNR is the peak signal-to-noise ratio: as with PAE, this is a peak value that won't tell you how many pixels are concerned, so it is unusable in my case.
RMSE is the root mean squared error, i.e. the square root of MSE. It shows the same squared-error behaviour as MSE, so this value can be interesting as it reveals major differences more than minor ones. Here my values are 0.7 and 3.2 for identical images and a hefty 48.7 for different ones. The difficulty is deciding on an acceptable limit: as with AE or MEPP, the only way is to run a lot of tests, measure values for many cases, and decide on an appropriate threshold. Machine learning could help with this.
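For reference, here is a rough numpy sketch of what MSE, RMSE, and AE boil down to; ImageMagick's exact normalization and fuzz handling may differ in detail, so treat it only as an illustration (the file names are placeholders):

import numpy as np
from PIL import Image

# Two renderings of the same page, as in the table above (placeholder names).
a = np.asarray(Image.open('page_before.png').convert('RGB'), dtype=np.float64) / 255.0
b = np.asarray(Image.open('page_after.png').convert('RGB'), dtype=np.float64) / 255.0

diff = a - b
mse = np.mean(diff ** 2)          # mean of the squared channel errors
rmse = np.sqrt(mse)               # square root of the above
# AE: count pixels whose color distance exceeds a 10% fuzz-like threshold.
dist = np.sqrt(np.sum(diff ** 2, axis=-1)) / np.sqrt(3.0)
ae = int(np.count_nonzero(dist > 0.10))
print(f'MSE={mse:.6f}  RMSE={rmse:.6f}  AE={ae}')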
As a conclusion, I decided to use the PHASH metric. But the real conclusion of this study is that you must run tests and take measurements before deciding which metric to use, as comparison contexts can be very different and the metrics can show very different behaviours.
OpenCV is way more appropriate if you have to compare images from different sources, or parts of a global image, or images with big lighting variations. Also, it's quite easy to use from Python and C++. ImageMagick is quite good in my case because I am the one who generates both images.
The fuzz factor in ImageMagick allows two pixels to be compared and considered as the same although their colours may differ slightly.
The trick to understanding it is to consider an RGB colour cube with red, green, blue, cyan, magenta, yellow, black and white as its vertices. A fuzz factor of 100% represents the greatest possible distance in that cube, i.e. the length of the diagonal from black to white, and everything is scaled relative to that.
In general, I would recommend using a percentage value rather than an absolute value, because an absolute fuzz factor of 255 means all colours are the same (black=white) on an 8-bit image, whereas on a 16-bit image, it would be hard to even perceive two colours that differ by 255.
As an example, let's see if a single black pixel is the same as a single mid-grey pixel with 49% fuzz:
compare -metric ae -fuzz 49% xc:black xc:gray null:
1
No, it is different, there is one pixel difference. Now let's try again allowing the pixels to be 51% different yet still match:
compare -metric ae -fuzz 51% xc:black xc:gray null:
0
Now they are considered the same.
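If you want to compute that distance yourself, here is a small sketch of the geometric idea above (ImageMagick's internal computation may differ in detail):

import math

def fuzz_percent(c1, c2, max_value=255):
    """Distance between two RGB colors as a percentage of the black-to-white diagonal."""
    diagonal = math.sqrt(3) * max_value
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    return 100.0 * dist / diagonal

# Black vs. mid-grey is about 50%, which is why -fuzz 49% still reports a
# difference while -fuzz 51% does not.
print(fuzz_percent((0, 0, 0), (128, 128, 128)))  # ~50.2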
I would like to know if it is possible to take a low-resolution image from a street camera, enlarge it, and see image details (for example a face, or a car plate number). Is there any software that can do this?
Thank you.
example of image: http://imgur.com/9Jv7Wid
Possible? Yes. In existence? Not to my knowledge.
What you are referring to is called super-resolution. The way it works, in theory, is that you take multiple low-resolution images and then combine them to create a high-resolution image.
The way this works is that you essentially map each image onto all the others to form a stack, where the target portion of the image is the same across all of them. This gets extremely complicated extremely fast, as any distortion (e.g. movement of the target) will cause the images to differ dramatically at the pixel level.
But let's say you have the images stacked and have removed the non-relevant pixels from the stack. You are hopefully left with a movie/stack of images that all show the exact same scene, but with sub-pixel distortions. A sub-pixel distortion simply means that the target has moved somewhere inside the pixel, or has moved partially into the neighboring pixel.
You can't measure whether the target has moved within a pixel, but you can detect whether it has moved partially into a neighboring pixel. You can do this by knowing that the target is going to give off X amount of photons: if you see 1/4 of the photons in one pixel and 3/4 of the photons in the neighboring pixel, you know its approximate location, which is 3/4 into one pixel and 1/4 into the other. You then construct an image that has the resolution of these sub-pixels and place the sub-pixels in their proper places.
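To make the idea concrete, here is a minimal shift-and-add sketch assuming the sub-pixel shift of each frame is already known; estimating those shifts by registration is the hard part and is not shown:

import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of 2-D arrays; shifts: known (dy, dx) offsets in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample at its nearest position on the high-res grid.
        yi = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[yi[:, None], xi[None, :]] += frame
        cnt[yi[:, None], xi[None, :]] += 1
    cnt[cnt == 0] = 1   # grid positions never hit stay zero; interpolate them in practice
    return acc / cnt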
All of this gets very computationally intensive, and sometimes the images are just too low-resolution and have too much distortion from image to image to create a meaningful stack at all. I did read a paper about a university lab that was able to create high-resolution images from low-resolution images, but it was a very tightly controlled experiment, where they moved the target precisely X amount from image to image and used a very precise camera (probably scientific grade, which is far more sensitive than any commercial-grade security camera).
In essence, to do this reliably in the real world, you need to set up the cameras in a very precise way, and they need to be accurate in a very particular way, which is going to be expensive. So you are better off just putting in a better camera than relying on this very imprecise technique.
Actually, it is possible to do super-resolution (SR) from even a single low-resolution (LR) image! So you don't have to go through the hassle of taking many LR images with sub-pixel shifts to achieve it. The intuition behind such techniques is that natural scenes are full of repetitive patterns that can be used to enhance the frequency content of similar patches (e.g. you can implement dictionary learning in your SR reconstruction technique to generate the high-resolution version). Sure, the enhancement may not be as good as using many LR images, but such a technique is simpler and more practical.
Photoshop would be your best bet, but know that you cannot reliably increase the size of an image without making the quality even worse.
This may be a stupid question, but after reading and searching a lot about image processing, every example I see works on grayscale images.
I understand that grayscale images use just one color channel, which normally needs only 8 bits to be represented, etc. But why use grayscale when we have a color image? What are the advantages of grayscale? I can imagine it's because there are fewer bits to process, but even today, with faster computers, is that still necessary?
I am not sure if I was clear about my doubt; I hope someone can answer me.
Thank you very much.
As explained by John Zhang:
luminance is by far more important in distinguishing visual features
John also gives an excellent suggestion to illustrate this property: take a given image and separate the luminance plane from the chrominance planes.
To do so you can use ImageMagick's -separate operator, which extracts the current contents of each channel as a grayscale image:
convert myimage.gif -colorspace YCbCr -separate sep_YCbCr_%d.gif
Here's what it gives on a sample image (top-left: original color image, top-right: luminance plane, bottom row: chrominance planes):
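If you prefer to do the same thing in code, here is a rough Pillow equivalent of the command above (the file names are just placeholders):

from PIL import Image

img = Image.open('myimage.gif').convert('RGB').convert('YCbCr')
for name, plane in zip(('Y', 'Cb', 'Cr'), img.split()):
    plane.save(f'sep_YCbCr_{name}.png')   # each plane saved as a grayscale image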
To elaborate a bit on deltheil's answer:
Signal to noise. For many applications of image processing, color information doesn't help us identify important edges or other features. There are exceptions. If there is an edge (a step change in pixel value) in hue that is hard to detect in a grayscale image, or if we need to identify objects of known hue (orange fruit in front of green leaves), then color information could be useful. If we don't need color, then we can consider it noise. At first it's a bit counterintuitive to "think" in grayscale, but you get used to it.
Complexity of the code. If you want to find edges based on luminance AND chrominance, you've got more work ahead of you. That additional work (and additional debugging, additional pain in supporting the software, etc.) is hard to justify if the additional color information isn't helpful for applications of interest.
For learning image processing, it's better to understand grayscale processing first and understand how it applies to multichannel processing rather than starting with full color imaging and missing all the important insights that can (and should) be learned from single channel processing.
Difficulty of visualization. In grayscale images, the watershed algorithm is fairly easy to conceptualize because we can think of the two spatial dimensions and one brightness dimension as a 3D image with hills, valleys, catchment basins, ridges, etc. "Peak brightness" is just a mountain peak in our 3D visualization of the grayscale image. There are a number of algorithms for which an intuitive "physical" interpretation helps us think through a problem. In RGB, HSI, Lab, and other color spaces this sort of visualization is much harder since there are additional dimensions that the standard human brain can't visualize easily. Sure, we can think of "peak redness," but what does that mountain peak look like in an (x,y,h,s,i) space? Ouch. One workaround is to think of each color variable as an intensity image, but that leads us right back to grayscale image processing.
Color is complex. Humans perceive color and identify color with deceptive ease. If you get into the business of attempting to distinguish colors from one another, then you'll either want to (a) follow tradition and control the lighting, camera color calibration, and other factors to ensure the best results, or (b) settle down for a career-long journey into a topic that gets deeper the more you look at it, or (c) wish you could be back working with grayscale because at least then the problems seem solvable.
Speed. With modern computers, and with parallel programming, it's possible to perform simple pixel-by-pixel processing of a megapixel image in milliseconds. Facial recognition, OCR, content-aware resizing, mean shift segmentation, and other tasks can take much longer than that. Whatever processing time is required to manipulate the image or squeeze some useful data from it, most customers/users want it to go faster. If we make the hand-wavy assumption that processing a three-channel color image takes three times as long as processing a grayscale image--or maybe four times as long, since we may create a separate luminance channel--then that's not a big deal if we're processing video images on the fly and each frame can be processed in less than 1/30th or 1/25th of a second. But if we're analyzing thousands of images from a database, it's great if we can save ourselves processing time by resizing images, analyzing only portions of images, and/or eliminating color channels we don't need. Cutting processing time by a factor of three to four can mean the difference between running an 8-hour overnight test that ends before you get back to work, and having your computer's processors pegged for 24 hours straight.
Of all these, I'll emphasize the first two: make the image simpler, and reduce the amount of code you have to write.
I disagree with the implication that grayscale images are always better than color images; it depends on the technique and the overall goal of the processing. For example, if you wanted to count the bananas in an image of a fruit bowl, it's much easier to segment when you have a color image!
Many images have to be grayscale because of the measuring device used to obtain them. Think of an electron microscope: it measures the strength of an electron beam at various points in space. An AFM measures the amount of resonant vibration at various points across the topography of a sample. In both cases, these tools return a single value, an intensity, so they implicitly create a grayscale image.
For image processing techniques based on brightness, they often can be applied sufficiently to the overall brightness (grayscale); however, there are many many instances where having a colored image is an advantage.
Binary might be too simple; it cannot represent the character of the picture.
Color might be too much and affect the processing speed.
Thus grayscale is chosen, which sits between the two extremes.
Before starting image processing, whether on grayscale or color images, it is better to focus on the application we are targeting. If we choose one of them arbitrarily, it can create accuracy problems in our results. For example, if I want to process an image of a waste bin, I prefer grayscale over color, because all I want from the bin image is its shape, detected with an optimized edge detector. I don't care about the color of the image; I just want to see the rectangular shape of the bin correctly.
I am a beginner in image mining. I would like to know the minimum dimensions required for effective classification of textured images. As I see it, if an image is too small, the feature extraction step will not extract enough features; and if the image size goes beyond a certain dimension, the processing time will increase exponentially with image size.
This is a complex question that requires a bit of thinking.
Short answer: It depends.
Long answer: It depends on the type of texture you want to classify and the type of feature your classification is based on. If the feature extracted is, say, color only, you can use a "texture" as small as 1x1 pixel (in that case, using the word "texture" is a bit of an abuse). If you want to classify, say, characters, you can usually extract a lot of local information from edges (Hough transform, Gabor filters, etc.). The image plane just has to be big enough to hold the characters (say 16x16 pixels for the Latin alphabet).
If you want to be able to classify any kind of image, in any quantity, you can also base your classification on global information, like entropy, correlogram, energy, inertia, cluster shade, cluster prominence, color and correlation. Those features are used for content-based image retrieval.
Off the top of my head, I would try using textures as small as 32x32 pixels if the kind of texture is a priori unknown. If, on the contrary, the kind of texture is a priori known, I would choose one or more features that I know would classify the images according to my needs (1x1 pixel for color only, 16x16 pixels for characters, etc.). Again, it really depends on what you are trying to achieve. There isn't a unique answer to your question.
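As an illustration of the "global information" route, here is a minimal sketch computing histogram entropy and energy on a 32x32 patch (the patch size and bin count are just the illustrative values discussed above, and the file name is a placeholder):

import numpy as np
from PIL import Image

def patch_features(patch, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    nonzero = p[p > 0]
    entropy = -np.sum(nonzero * np.log2(nonzero))   # spread of gray levels
    energy = np.sum(p ** 2)                         # uniformity of the histogram
    return entropy, energy

img = np.asarray(Image.open('texture.png').convert('L'))
print(patch_features(img[:32, :32]))                # a 32x32 patch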
This is really a two part question, since I don't fully understand how these things work just yet:
My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (eg: 640x480-ish) and saves the file for use later.
My questions:
Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
Does this even matter?? Should I be saving all the images at a decent level (eg: 75-80), regardless of the original quality?
I'm not so sure about this because, as I figure it (let's take an extreme example): if someone had a 5-megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and barely noticeable... until I saved it with quality 0 again...
On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.
Am I even close?
I'm using the GD2 library with PHP, if that is of any use.
You can view compress level using the identify tool in ImageMagick. Download and installation instructions can be found at the official website.
After you install it, run the following command from the command line:
identify -format '%Q' yourimage.jpg
This will return a value from 0 (low quality, small filesize) to 100 (high quality, large filesize).
JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of the quality level, you will reduce the actual image quality. Therefore, even if you did obtain a quality level from the file, you could not maintain that same quality when saving the JPEG again (even at quality=100).
You should save your JPEGs at as high a quality as you can afford in terms of file size, or use a lossless format such as PNG.
Low-quality JPEG files do not simply become more blocky. Instead, colour depth is reduced and the detail in sections of the image is removed. You can't rely on lower-quality images being blocky and looking OK at smaller sizes.
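If you want to see the generation loss for yourself, here is a small experiment with Pillow (the input file name and the quality setting are placeholders):

import io

import numpy as np
from PIL import Image

img = Image.open('original.png').convert('RGB')
reference = np.asarray(img, dtype=np.float64)

for generation in range(1, 11):
    # Re-save the current image at a fixed quality and decode it again.
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=80)
    buf.seek(0)
    img = Image.open(buf).convert('RGB')
    drift = np.abs(np.asarray(img, dtype=np.float64) - reference).mean()
    print(f'generation {generation}: mean abs drift = {drift:.3f}')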
According to the JFIF spec, the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is.
For future visitors checking the quality of a given JPEG, you can just use the ImageMagick tooling:
$> identify -format '%Q' filename.jpg
92%
The JPEG compression algorithm has several parameters that influence the quality of the resulting image.
One of these parameters is the quantization tables, which define how many bits will be used for each coefficient. Different programs use different quantization tables.
Some programs allow the user to set a quality level of 0-100, but there is no common definition of that number. An image made with Photoshop at 60% quality takes 46 KB, while the same image made with GIMP takes only 26 KB.
The quantization tables are also different.
There are other parameters as well, such as subsampling, the DCT method, etc.
So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But you can create such a number, as Photoshop or GIMP do, to describe the compromise between size and quality.
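You can inspect the quantization tables directly; for example, Pillow exposes them for JPEG files (a rough sketch; the attribute is specific to Pillow's JPEG plugin and the file name is a placeholder):

from PIL import Image

img = Image.open('photo.jpg')                    # placeholder file name
tables = getattr(img, 'quantization', None)      # only present for JPEG files
if tables:
    for table_id, values in tables.items():
        print(f'table {table_id}: {list(values)[:8]} ...')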
More information:
http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html
Common practice is to resize the image to the appropriate size and apply JPEG compression after that. In this case, huge and medium-sized originals will end up with the same size and quality.
Here is a formula I've found to work well:
jpg100size (the size it should not exceed in bytes for 98-100% quality) = width*height/1.7
jpgxsize = jpg100size*x (x = percent, e.g. 0.65)
So you could use these to find out, statistically, what quality your JPEG was last saved at. If you want to get it down to, say, 65% quality and you want to avoid resampling, you should first compare the size to make sure it's not already too low, and only then reduce the quality.
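Written out as code, the heuristic looks like this (it is purely statistical, as described above, so treat the result as a rough estimate):

def estimated_quality_fraction(width, height, file_size_bytes):
    jpg100_size = width * height / 1.7       # approximate size at ~98-100% quality
    return file_size_bytes / jpg100_size     # e.g. 0.65 would suggest ~65% quality

print(estimated_quality_fraction(640, 480, 90_000))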
As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):
If you wish to have a simple output of filename: quality for use on multiple images, you can use
identify -format '%f: %Q' *
to show the filename + compression of all files within the current directory.
So, there are basically two cases you care about:
If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.
If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that has had a large amount of data discarded won't magically take up more space when the quality is raised: blocky images compress very nicely even at high quality settings. So, in my opinion, it's perfectly OK to raise incoming q=1 to q=85.
From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.
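Here is a rough sketch of that policy, using ImageMagick's identify to read the stored quality and Pillow to re-save; the cap of 85 is the example value from above, and since the original question uses PHP/GD2 this Python version only illustrates the logic:

import subprocess

from PIL import Image

def normalize_quality(path_in, path_out, cap=85):
    out = subprocess.check_output(['identify', '-format', '%Q', path_in])
    stored_q = int(out.decode().strip().rstrip('%') or 0)
    # Re-save at the cap when the stored quality is higher (or unknown, i.e. 0);
    # otherwise keep the stored quality so we don't inflate the file.
    quality = cap if stored_q == 0 or stored_q > cap else stored_q
    Image.open(path_in).convert('RGB').save(path_out, quality=quality)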
Every new save of the file will further decrease the overall quality; by using higher quality values you will preserve more of the image, regardless of what the original image quality was.
If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.
This may be a silly question, but why would you be concerned about micromanaging the quality of the document? I believe if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for best effect. http://www.php.net/manual/en/intro.imagick.php
Here are some ways to achieve your (1) and get it right.
There are ways to do this by fitting to the quantization tables. Sherloq, for example, does this:
https://github.com/GuidoBartoli/sherloq
The relevant (python) code is at https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py
There is another algorithm written up in https://arxiv.org/abs/1802.00992 - you might consider contacting the author for any code etc.
You can also simulate file_size(image_dimensions,quality_level) and then invert that function/lookup table to get quality_level(image_dimensions,file_size). Hey presto!
Finally, you can adopt a brute-force error level analysis approach (https://en.wikipedia.org/wiki/Error_level_analysis): calculate the difference between the original image and recompressed versions, each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized. This seems to work reasonably well (but is linear in the for-loop..).
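Here is a rough Python version of that brute-force loop (Pillow's quality scale won't exactly match every encoder's, so the estimate is approximate, and the file name is a placeholder):

import io

import numpy as np
from PIL import Image

def estimate_quality(path, qualities=range(60, 101)):
    img = Image.open(path).convert('RGB')
    ref = np.asarray(img, dtype=np.float64)
    best_q, best_err = None, float('inf')
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)   # recompress at the candidate quality
        buf.seek(0)
        recompressed = np.asarray(Image.open(buf).convert('RGB'), dtype=np.float64)
        err = np.mean((recompressed - ref) ** 2)
        if err < best_err:
            best_q, best_err = q, err
    return best_q

print(estimate_quality('unknown.jpg'))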
Most often the quality factor used seems to be 75 or 95 which might help you to get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.
I can add other links for this as they become available - please put them in the comments.
If you trust Irfanview estimation of JPEG compression level you can extract that information from the info text file created by the following Windows line command (your path to i_view32.exe might be different):
"C:\Program Files (x86)\IrfanView\i_view32.exe" <image-file> /info=txtfile
Jpg compression level is recorded in the IPTC data of an image.
Use exiftool (it's free) to get the metadata of an image, then search the returned string for "Photoshop Quality". Or at least put the returned data into a text document and check what's recorded. It may vary depending on the software used to save the image.
"Writer Name : Adobe Photoshop
Reader Name : Adobe Photoshop CS6
Photoshop Quality : 7"