Is it possible to tell the quality level of a JPEG?

This is really a two part question, since I don't fully understand how these things work just yet:
My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (e.g. 640x480-ish) and saves the file for use later.
My questions:
Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
Does this even matter? Should I be saving all the images at a decent level (e.g. 75-80), regardless of the original quality?
I'm not so sure about this because, as I figure it (let's take an extreme example): if someone had a 5-megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and barely noticeable... until I saved it with quality 0 again...
On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.
Am I even close?
I'm using the GD2 library on PHP, if that is of any use.

You can check the compression level using the identify tool in ImageMagick. Download and installation instructions can be found on the official website.
After you install it, run the following command from the command line:
identify -format '%Q' yourimage.jpg
This will return a value from 0 (low quality, small filesize) to 100 (high quality, large filesize).
Information source
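If you need that value inside PHP (the asker's GD2 setup), a minimal sketch is to shell out to identify. This assumes ImageMagick is installed and on the PATH, and the helper name is just illustrative:

<?php
// Minimal sketch: read the estimated JPEG quality with ImageMagick's identify.
// Assumes the identify binary is installed and reachable via the PATH.
function detectJpegQuality(string $path): ?int
{
    $out = shell_exec('identify -format "%Q" ' . escapeshellarg($path) . ' 2>/dev/null');
    if ($out === null || trim($out) === '') {
        return null; // identify missing or file unreadable
    }
    return (int) trim($out);
}

// Usage: $q = detectJpegQuality('upload.jpg');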

JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of the quality level, you will reduce the actual image quality. Therefore, even if you did obtain a quality level from the file, you could not maintain that same quality when you save the JPEG again (even at quality=100).
You should save your JPEG at as high a quality as you can afford in terms of file size. Or use a loss-less format such as PNG.
Low-quality JPEG files do not simply become more blocky. Instead, colour depth is reduced and detail in sections of the image is removed. You can't rely on lower-quality images being blocky and looking OK at smaller sizes.
According to the JFIF spec, the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is.

For future visitors checking the quality of a given JPEG, you could just use the ImageMagick tooling:
$> identify -format '%Q' filename.jpg
92%

The JPEG compression algorithm has several parameters which influence the quality of the resulting image.
One such parameter is the set of quantization tables, which define how many bits will be used for each coefficient. Different programs use different quantization tables.
Some programs allow the user to set a quality level of 0-100, but there is no common definition of this number. An image made with Photoshop at 60% quality takes 46 KB, while the same image made with GIMP takes only 26 KB.
Their quantization tables are also different.
There are other parameters as well, such as subsampling and the DCT method.
So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But you can define such a number, as Photoshop or GIMP do, to describe the compromise between size and quality.
More information:
http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html
Common practice is to resize the image to the appropriate size first and apply JPEG compression after that. In this case, huge and medium-sized originals will end up with the same size and quality.
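A minimal GD2 sketch of that resize-then-recompress flow, in the spirit of the asker's PHP setup (the 640x480 box, the quality of 80 and the function name are illustrative assumptions):

<?php
// Sketch: resize first, then apply JPEG compression exactly once, at a chosen quality.
function resizeAndSaveJpeg(string $src, string $dst, int $maxW = 640, int $maxH = 480, int $quality = 80): void
{
    $img = imagecreatefromjpeg($src);
    $w = imagesx($img);
    $h = imagesy($img);

    // Fit inside the bounding box while preserving the aspect ratio; never upscale.
    $scale = min($maxW / $w, $maxH / $h, 1.0);
    $newW = (int) round($w * $scale);
    $newH = (int) round($h * $scale);

    $resized = imagecreatetruecolor($newW, $newH);
    imagecopyresampled($resized, $img, 0, 0, 0, 0, $newW, $newH, $w, $h);

    imagejpeg($resized, $dst, $quality); // the lossy step happens only here
    imagedestroy($img);
    imagedestroy($resized);
}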

Here is a formula I've found to work well:
jpg100size (the size it should not exceed in bytes for 98-100% quality) = width*height/1.7
jpgxsize = jpg100size*x (x = percent, e.g. 0.65)
So you could use these to estimate, statistically, what quality your JPEG was last saved at. If you want to bring it down to, say, 65% quality and want to avoid resampling, you should compare the size first to make sure it's not already too low, and only then reduce the quality.
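A rough PHP sketch of that heuristic (the 1.7 divisor comes straight from the formula above; the estimate is statistical, not exact, and the function name is illustrative):

<?php
// Sketch: estimate the quality a JPEG was last saved at from its file size,
// using the width*height/1.7 rule of thumb above.
function estimateJpegQualityFromSize(string $path): float
{
    [$width, $height] = getimagesize($path);
    $jpg100size = $width * $height / 1.7;    // approx. size in bytes at 98-100% quality
    $ratio = filesize($path) / $jpg100size;  // fraction of the "quality 100" size
    return min(1.0, $ratio) * 100;           // e.g. ~65 suggests roughly 65% quality
}

// Only lower the quality if the estimate isn't already below your target (e.g. 65).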

As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):
If you wish to have a simple output of filename: quality for use on multiple images, you can use
identify -format '%f: %Q' *
to show the filename + compression of all files within the current directory.

So, there are basically two cases you care about:
If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.
If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that's had a large amount of data discarded won't magically take up more space when the quality is raised -- blocky images compress very nicely even at high quality settings. So, in my opinion it's perfectly OK to raise incoming q=1 to q=85.
From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.

Every new save of the file will further decrease the overall quality. By using higher quality values you will preserve more of the image, regardless of what the original image quality was.

If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.

This may be a silly question, but why would you be concerned about micromanaging the quality of the image? I believe that if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for the best effect. http://www.php.net/manual/en/intro.imagick.php

Here are some ways to achieve your (1) and get it right.
There are ways to do this by fitting to the quantization tables. Sherloq - for example - does this:
https://github.com/GuidoBartoli/sherloq
The relevant (python) code is at https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py
There is another algorithm written up in https://arxiv.org/abs/1802.00992 - you might consider contacting the author for any code etc.
You can also simulate file_size(image_dimensions,quality_level) and then invert that function/lookup table to get quality_level(image_dimensions,file_size). Hey presto!
Finally, you can adopt a brute-force https://en.wikipedia.org/wiki/Error_level_analysis approach by calculating the difference between the original image and recompressed versions, each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized. This seems to work reasonably well (but the cost is linear in the number of quality levels tried).
Most often the quality factor used seems to be 75 or 95, which might help you get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.
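Putting the last two points together, here is a brute-force PHP/GD sketch of the recompression approach (slow, sampled for speed; the 60-100 search range follows the observation above, and the function name is illustrative):

<?php
// Sketch: estimate the original quality by re-saving the decoded image at each
// candidate quality and keeping the setting whose output differs least from it.
function estimateQualityByRecompression(string $path): int
{
    $orig = imagecreatefromjpeg($path);
    $w = imagesx($orig);
    $h = imagesy($orig);

    $bestQ = 0;
    $bestDiff = PHP_FLOAT_MAX;

    for ($q = 60; $q <= 100; $q++) {
        $tmp = tempnam(sys_get_temp_dir(), 'q');
        imagejpeg($orig, $tmp, $q);
        $re = imagecreatefromjpeg($tmp);
        unlink($tmp);

        // Sum of absolute channel differences, sampling every 4th pixel for speed.
        $diff = 0;
        for ($y = 0; $y < $h; $y += 4) {
            for ($x = 0; $x < $w; $x += 4) {
                $a = imagecolorat($orig, $x, $y);
                $b = imagecolorat($re, $x, $y);
                $diff += abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF))
                       + abs((($a >> 8) & 0xFF) - (($b >> 8) & 0xFF))
                       + abs(($a & 0xFF) - ($b & 0xFF));
            }
        }
        imagedestroy($re);

        if ($diff < $bestDiff) {
            $bestDiff = $diff;
            $bestQ = $q;
        }
    }
    imagedestroy($orig);
    return $bestQ;
}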
I can add other links for this as they become available - please put them in the comments.

If you trust IrfanView's estimation of the JPEG compression level, you can extract that information from the info text file created by the following Windows command line (your path to i_view32.exe might be different):
"C:\Program Files (x86)\IrfanView\i_view32.exe" <image-file> /info=txtfile

The JPEG compression level is sometimes recorded in the Photoshop/IPTC metadata of an image.
Use exiftool (it's free) to get the metadata of an image, then search the returned text for "Photoshop Quality". Or at least put the returned data into a text document and check what's recorded. It may vary depending on the software used to save the image.
"Writer Name : Adobe Photoshop
Reader Name : Adobe Photoshop CS6
Photoshop Quality : 7"
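A hedged PHP sketch of that lookup, assuming exiftool is installed on the PATH and simply scanning its text output for a quality line (the exact tag name can vary with the software that wrote the file; the helper name is illustrative):

<?php
// Sketch: dump the image's metadata with exiftool and look for a quality entry
// such as "Photoshop Quality".
function readPhotoshopQuality(string $path): ?string
{
    $out = shell_exec('exiftool ' . escapeshellarg($path));
    if ($out === null) {
        return null;
    }
    foreach (explode("\n", $out) as $line) {
        if (stripos($line, 'quality') !== false) {
            return trim($line); // e.g. "Photoshop Quality : 7"
        }
    }
    return null; // nothing recorded; depends on the software that saved the image
}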

Related

libjpeg gives poor quality image

I'm using libjpeg to produce JPEGs from raw RGB data at work. It works correctly, though the quality of the JPEG output is not satisfactory, and the differences from similarly saved PNGs (libpng) are significant. I thought maybe the default quality setting was set too low, but I checked that the default is set to 100, which is the maximum you can get for that library. Even setting it explicitly to 100 using jpeg_set_quality() didn't help. I then looked in the library description and changed the default J_DCT_METHOD from JDCT_ISLOW to JDCT_FLOAT, just because it says the latter is a little more accurate than the former. The final output however is no different, and the image is still 'blurry' in places. I also checked that 'smoothing' is set to zero, which could have made a difference if non-zero. If I don't care about speed/memory, are there any other settings I can change to increase the fidelity of my image? I'm referencing this page for library methods: https://www4.cs.fau.de/Services/Doc/graphics/doc/jpeg/libjpeg.html
Thanks!
Poor JPEG quality comes from a combination of subsampling and quantization tables. In LibJpeg, the "quality" setting affects the latter. If you have set that to 100 and are still getting bad output, check the subsampling.
Also, JPEG is sensitive to the type of image. JPEG does not do a good job with images that consist of areas of identical color (e.g. drawings and cartoons, as opposed to photographs).
Changing the DCT method, as you have suggested, will do little for the quality.

Best way to store motion changes to reduce memory

I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are 3 channels to each image (RGB). I had heard that it is common practice to store only the pixels that have changed between frames as a way of conserving memory space. But if, for instance, I say EVERY pixel has changed (please note I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resultant bytes saved are 3 times larger than the original JPEG.
How can I store such motion changes efficiently?
thanks
While taking consecutive images, the camera may or may not move. If the camera is fixed, only the objects in the view move and some portion of the image changes every time. If the camera also moves, then even if the objects stand still, the image changes significantly. There are algorithms to discard the effect of camera motion. The main idea is that, compared with the sampling frequency of the camera (e.g. 25 frames per second), most of the objects are nearly standing still.
Because most of the image is unchanged between frames, it becomes feasible to store just the difference of the images, which provides a useful compression ratio. However, after some time the newly received image differs too much from the reference image, so it becomes better to pick a new reference image, known as a "reference frame".
In fact, modern video compression algorithms use advanced techniques to detect objects and follow them, which results in better compression ratios.
Wikipedia - Different compression techniques
Check This - OpenCV should handle the storing of consecutive images in different video formats.

How does streaming stream 30 480x640 images over a 2mbit/s line

I'm having a strange realization while working on a project.
I created a streaming solution where I stream an image with a resolution of 480x640, totaling 307,200 pixels, and every pixel contains 32 bits of data. By my calculations this means that every frame totals about 1.2 MB of data, so 30 fps would require a 36 MB/s line.
So, to my question: how does a streaming solution stream 30 fps over, for example, a 2 Mbit/s line?
I'm guessing the same question can probably be used to explain how a JPG image with a 480x640 resolution takes up <100 KB.
Compression is your friend.
I don't know the specifics of your solution, but a few assumptions can be made.
First off, even if you send each frame as a full frame, they should be compressed. Even lossless compression should get you some pretty good compression rates, but if you go with something lossy (like jpg) then you can get even more.
But that's not all you get. Any good video codec should provide significant compression as well. Parts of the image that don't change between frames don't need to be sent at all, and other parts can be compressed nicely too (I don't know much specifics about the compression used, but there's a lot of stuff that's done to compress it).
This all adds up to a lot of savings over sending a full 32bit bitmap for every frame.
Compression is a very broad topic. Just to get an idea, try reading the wikipedia page about image compression
As a very basic solution to your problem, I would personally JPEG-encode the first frame, then JPEG-encode the differences between consecutive frames.
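Purely as an illustration of that difference-frame idea (sticking with PHP/GD for consistency with the main question, even though other languages would work just as well), here is a sketch that JPEG-encodes the absolute per-pixel difference between two frames; a real codec would use signed residuals and motion compensation instead, and the function name is illustrative:

<?php
// Sketch: build an absolute-difference image between two consecutive frames and
// JPEG-encode only the difference. Mostly-black difference frames compress very well.
function saveFrameDifference(string $prevPath, string $currPath, string $dst, int $quality = 75): void
{
    $prev = imagecreatefromjpeg($prevPath);
    $curr = imagecreatefromjpeg($currPath);
    $w = imagesx($curr);
    $h = imagesy($curr);

    $diff = imagecreatetruecolor($w, $h);
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $a = imagecolorat($prev, $x, $y);
            $b = imagecolorat($curr, $x, $y);
            $r  = abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF));
            $g  = abs((($a >> 8) & 0xFF) - (($b >> 8) & 0xFF));
            $bl = abs(($a & 0xFF) - ($b & 0xFF));
            imagesetpixel($diff, $x, $y, ($r << 16) | ($g << 8) | $bl);
        }
    }
    imagejpeg($diff, $dst, $quality);
    imagedestroy($prev);
    imagedestroy($curr);
    imagedestroy($diff);
}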
For jpeg compression there are many libraries providing the functionality, without the need to implement it yourself.
If you are not so interested in the quality, you can also subsample the video, for example obtaining frames with a resolution of 240x320.

good ways to preserve image information when reducing bit depth

I have some (millions of) 16-bit losslessly compressed TIFFs (about 2 MB each), and after exhausting terabytes of disk space I think it's time I archive the older TIFFs as 8-bit JPEGs. Each individual image is a grayscale image, though there may be as many as 5 such images representing the same imaging area at different wavelengths. Now I want to preserve as much information as possible in this process, including the ability to restore the images to their approximate original values. I know there are ways to get further savings through spatial correlations across multiple channels, but the number of channels can vary, and it would be nice to be able to load channels independently.
The images themselves suggest some possible strategies to use since close to ~60% of the area in each image is dark 'background'. So one way to preserve more of the useful image range is just to threshold away anything below this 'background' before scaling and reducing the bit depth. This strategy is, of course, pretty subjective, and I'm looking for any other suggestions for strategies that are demonstrably superior and/or more general. Maybe something like trying to preserve the most image entropy?
Thanks.
Your 2MB TIFFs are already losslessly compressed, so you would be hard-pressed to find a method that allows you to "restore the images" to their original value ranges without some loss of intensity detail.
So here are some questions to narrow down your problem a bit:
What are the image dimensions and number of channels? It's a bit difficult to guess from the filesize and bit depth alone, because as you've mentioned you're using lossless compression. A sample image would be good.
What sort of images are they? E.g. are they B/W blueprints, X-ray/MRI images, color photographs. You mention that around 60% of the images is "background" -- could you tell us more about the image content?
What are they used for? Is it just for a human viewer, or are they training images for some computer algorithm?
What kind of coding efficiency are you expecting? E.g. for the current 2MB filesize, how small do you want your compressed files to be?
Based on that information, people may be able to suggest something. For example, if your images are just color photographs that people will look at, 4:2:0 chroma subsampling will give you a 50% reduction in space without any visually detectable quality loss. You may even be able to keep your 16-bit image depth, if the reduction is sufficient.
Finally, note that you've compared two fundamentally different things in your question:
"top ~40% of the pixels" -- here it sounds like you're talking about contiguous parts of the intensity spectrum (e.g. intensities from 0.6 to 1.0) -- essentially the probability density function of the image.
"close to ~60% of the area in each image" -- here you're talking about the distribution of pixels in the spatial domain.
In general, these two things are unrelated and comparing them is meaningless. There may be an exception for specific image content -- please put up a representative image to make it obvious what you're dealing with.
If you edit your question, I'll have a look and reply if I think of something.

Image processing with lossy compression

If we compare image processing of losslessly compressed images with image processing of lossy-compressed images, does the latter provide results comparable to the former?
I am asking this question because the images produced by lossy compression look OK to the human eye, but they differ in minute details which may affect the processing of the images by a computer. I just can't tell by how much.
I don't see much of a question here, but you are right. It is especially visible when processing a JPEG image with a medium compression ratio -- the 8x8 blocks of which JPEGs are built tend to get more visible after filtering.
This is comparable to the accumulation of computational error when operating on computer-based floating-point numbers.
Your best bet is to use lossless formats for image processing. PNGs are a good choice: they provide lossless compression, decent support for bit depths and transparency, and are browser-compatible.
Another format, more often used in the professional world, is TIFF.
However, note that if your source image is already in a lossy format, converting it to a lossless one will only prevent additional artifacts from being added; it will not remove the old ones. You can, however, reduce the visible extent of the error by running the image through a light Gaussian blur and then converting it to a lossless format.
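A minimal GD sketch of that clean-up step (GD's built-in Gaussian blur uses a fixed small kernel, so treat this as an illustration rather than a tuned filter; the function name is illustrative):

<?php
// Sketch: soften JPEG blocking with a light blur, then store the result losslessly
// as PNG so no further compression artifacts are introduced.
function cleanupJpegToPng(string $src, string $dst): void
{
    $img = imagecreatefromjpeg($src);
    imagefilter($img, IMG_FILTER_GAUSSIAN_BLUR); // fixed 3x3 Gaussian kernel in GD
    imagepng($img, $dst);                        // lossless from here on
    imagedestroy($img);
}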
Perhaps you are looking for the Perceptual Image Diff utility?
