Resize image without losing quality with GIMP

I have a bunch of images that are far too big. I need to decrease their size from 30 KB to 10 or 5 KB without losing quality. I tried changing the DPI and pixel dimensions with no success. The images got blurred, and since they contain text I can't read anything after the changes. Is there any way I can accomplish this without losing quality? I have almost a dozen images in my application.
Thanks in advance and have a nice day.

For batch resizing I use IrfanView (despite its "lite-ness" it's very powerful).
It has a nice batch dialog, with a lot of options.
If you're working with PNG files, try using better compression and/or different color depth settings (if you're not using transparency you could try converting them to JPEG, although you might lose some quality).
Changing color depth/range/compression might not affect image quality (not visibly anyway, if used in moderation) and it will decrease the size of the picture, in most cases anyway.
If you want to stick with GIMP (I've never personally used it), it should have export features where you can select settings for the image, like format and options.

You cannot leave out data without reducing quality. Data has meaning.
You may try to use improved compression; pngcrush is a tool that automatically tries several approaches for you and picks the best.
Reducing colour depth will reduce the file size (while reducing colour quality). You can also turn on dithering in some image editors, but that's another loss in quality.
If your image has photographic content rather than graphical, convert to JPEG and use the JPEG quality settings, experiment with them a bit.
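As a rough illustration of those ideas (reduced color depth, stronger compression, or JPEG as a fallback), here is a minimal Pillow sketch; the library choice, the 256-color palette and the quality value are my assumptions, not from the answers above:

from PIL import Image

img = Image.open("screenshot.png")
# Reduce color depth to a 256-color palette and let Pillow optimize the PNG.
img.convert("P", palette=Image.ADAPTIVE, colors=256).save("smaller.png", optimize=True)
# If transparency is not needed, JPEG with a moderate quality setting is another option.
img.convert("RGB").save("smaller.jpg", "JPEG", quality=75)

Running pngcrush on the PNG result afterwards can shave off a bit more without any further quality loss.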

It seems that if I have a large PNG 2500px wide and I want to resize it down to 100px wide, scaling the image all at once to the desired size makes it far too distorted to use.
However, if I scale the image in small increments of about 200 pixels and repeat until I reach the desired width, the image does not get as distorted. So if I'm at 2500px, I would scale the image to 2300px, then to 2100, and so on. The smaller each step, the less distortion.
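A rough sketch of that incremental approach with Pillow (the library and the helper function name are my own, not from the answer; in practice a single high-quality resample such as Lanczos often works just as well):

from PIL import Image

def stepwise_downscale(path, target_width, step=200):
    # Open the source image and keep its aspect ratio.
    img = Image.open(path)
    ratio = img.height / float(img.width)
    width = img.width
    # Shrink the width a couple of hundred pixels at a time instead of in one jump.
    while width - step > target_width:
        width -= step
        img = img.resize((width, int(width * ratio)), Image.LANCZOS)
    # Final resize to the exact target width.
    return img.resize((target_width, int(target_width * ratio)), Image.LANCZOS)

stepwise_downscale("big.png", 100).save("small.png")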

Any resize method will have some loss, no matter how small. The following steps will lose the least quality.
Steps for a single layer:
Layer -> Scale Layer
Image -> Scale Image
Image -> Fit Canvas to Layers
File -> Export As
Steps for multiple layers:
Layer -> New Layer Group
Move all layers into the layer group
Select the layer group
Layer -> Scale Layer
Image -> Scale Image
Image -> Fit Canvas to Layers
File -> Export As
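If you have to repeat this for many files, the same scale-and-export sequence can be driven from GIMP's Python-Fu console; a minimal sketch, assuming GIMP 2.10 with Python-Fu available (the paths and the target size are placeholders):

from gimpfu import pdb

# Load the image, scale it (all layers are scaled with it), flatten and export.
image = pdb.gimp_file_load("/path/to/input.png", "input.png")
pdb.gimp_image_scale(image, 640, 480)
flat = pdb.gimp_image_flatten(image)
pdb.gimp_file_save(image, flat, "/path/to/output.png", "output.png")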

Related

Calculating Difference In Rotation Between Two Images of Different Quality

The goal is to replace all low-resolution images by referring to a repository of high-resolution images.
I was able to replace the images, but I noticed that some of them were rotated, and I also need to reflect those changes in the images I am adding. There is no pattern to how the rotation changes. The rotation was corrected manually and no record was kept for almost 50% of the images.
I was unable to find a way to calculate the rotation, since the images are of different quality (same WIDTHxHEIGHT, but different file size).
The following is one of the cases that need to be resolved:
Original Low Quality Image
Added High Quality Image
Like phoenixstudio said, first downsample the images to the same size. With OpenCV, you can do this with the resize function.
Then compare rotations of the images. Beware that even if the images came from the same high-resolution source, it is unlikely that they will be bit-identical for the correct rotation. A different downsampling method or distortion from lossy compression could create minor differences in pixel values. So compare with a tolerance, like mean((A - B)^2) < tol.
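A rough sketch of that comparison with OpenCV and NumPy (OpenCV's resize is mentioned above, but the 90-degree rotation set and file names are my assumptions):

import cv2
import numpy as np

low = cv2.imread("low_quality.jpg").astype(np.float32)
high = cv2.imread("high_quality.jpg")

# Downsample the high-resolution image to the size of the low-resolution one.
high = cv2.resize(high, (low.shape[1], low.shape[0]), interpolation=cv2.INTER_AREA).astype(np.float32)

# Try each 90-degree rotation and keep the one with the smallest mean squared error.
candidates = {
    0: high,
    90: cv2.rotate(high, cv2.ROTATE_90_CLOCKWISE),
    180: cv2.rotate(high, cv2.ROTATE_180),
    270: cv2.rotate(high, cv2.ROTATE_90_COUNTERCLOCKWISE),
}
errors = {angle: np.mean((low - img) ** 2)
          for angle, img in candidates.items()
          if img.shape == low.shape}
best_angle = min(errors, key=errors.get)
print(best_angle, errors[best_angle])

Arbitrary (non-90-degree) rotations would need cv2.getRotationMatrix2D and cv2.warpAffine instead of the fixed rotation codes.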
Another thought: If these are JPEG images, there might be a rotation field in the EXIF metadata, which might help: see https://jdhao.github.io/2019/07/31/image_rotation_exif_info/

Does scaling images up or down affect image information?

I'm working on a graduation project on image forgery detection using a CNN. In most of the papers I've read, the dataset images are downscaled before being fed to the network. I want to know how this process affects the image information.
Images are resized/rescaled to a specific size for a few reasons:
(1) It allows the user to set the input size to their network. When designing a CNN you need to know the shape (dimensions) of your data at each step; so, having a static input size is an easy way to make sure your network gets data of the shape it was designed to take.
(2) Using a full resolution image as the input to the network is very inefficient (super slow to compute).
(3) For most cases, the features you want to extract/learn from an image are still present when you downsample it. So, in a way, resizing an image to a smaller size denoises it, filtering out many of the unimportant features for you.
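A minimal sketch of that preprocessing step with OpenCV (the 224x224 input size is an assumed example, not something from the question):

import cv2

# Downscale each training image to the fixed input size the network expects.
img = cv2.imread("sample.jpg")
img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
# Normalise to [0, 1] before feeding it to the CNN.
x = img.astype("float32") / 255.0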
Well, you change the image's size. Of course that changes its information.
You cannot reduce image size without omitting information. Simple case: Throw away every second pixel to scale image to 50%.
Scaling up adds new pixels. In its simplest form you duplicate pixels, creating redundant information.
More complex solutions create new pixels (less or more) by averaging neighbouring pixels or interpolating between them.
Scaling up is reversible: it neither creates nor destroys information.
Scaling down divides the amount of information by the square of the downscaling factor*. Upscaling after downscaling results in a blurred image.
(*This is true in a first approximation. If the image doesn't have high frequencies, they are not lost, hence no loss of information.)

Changing image DPI for use with Tesseract

I am working on a project to recognize text on business cards and map it to the appropriate fields. I am using OpenCV for image processing. I need to feed the preprocessed image to the Tesseract OCR engine for text recognition. This link
states that images should have a DPI of at least 300. My image is 2560x1536 pixels at 72 DPI.
How do I increase the DPI to 300?
It is also said that it is beneficial to resize the image. How do I resize my image optimally for good OCR results?
"Tesseract works best on images which have a DPI of at least 300 dpi, so it may be beneficial to resize images." What does "so" imply here? What is the relation between resizing an image and DPI?
For OCR, what really matters is the resolution in pixels. Because the physical characters can range from tiny to huge, independently of the DPI of the acquisition device.
As a rule of thumb, a stroke width of around 3 pixels is a good start. If it's lower, resizing might not be helpful because the information is missing. If it's much higher, the running time might be excessive (or the OCR function may not be tailored to deal with it).
Also check that the package will not attempt to resize internally, based on its own assumption of stroke width and the DPI info stored in the header, if there is a mismatch.
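A rough sketch of resizing and tagging the DPI before OCR, using Pillow and pytesseract (the libraries and the 2x scale factor are my assumptions, not from the answer):

from PIL import Image
import pytesseract

img = Image.open("card.png")
# Upscale so character strokes end up around 3 pixels wide (the factor is a guess).
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
# Record 300 DPI in the file header so Tesseract does not assume a low density.
img.save("card_300dpi.png", dpi=(300, 300))
print(pytesseract.image_to_string(Image.open("card_300dpi.png")))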

Increase image size, without messing up clarity

Are there libraries, scripts, or other techniques to increase an image's height and width...
or do you need to start with a really high-resolution image?
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest quality resampling algorithm that Photoshop (and other graphic software) offers. Generally, it's recommended that you use bicubic smoothing when you're increasing image size, and bicubic sharpening when you're reducing image size. Sharpening can produce an over-sharpened image when you are enlarging an image, so you need to be careful.
As far as libraries or scripts, it's difficult to recommend anything without knowing what language you're intending to do this in. But I can guarantee that there's an image processing library including this algorithm already around for any of the popular languages—I wouldn't advise reimplementing it yourself.
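For example, a minimal bicubic upscale with OpenCV looks like this (a sketch; the 2x factor and file names are placeholders):

import cv2

img = cv2.imread("input.png")
# Enlarge to twice the original size using bicubic interpolation.
big = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("output.png", big)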
Increasing height & width of an image means one of two things:
i) You are increasing the physical size of the image (i.e. cm or inches), without touching its content.
ii) You are trying to increase the image pixel content (ie its resolution)
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). Good if you want to look at the image from far away (say on a really large screen). If you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. This means you start with 10,000 pixels and end up with 40,000... what are you going to put in the 30,000 new pixels? Whatever your answer, you are going to end up with 30,000 invented pixels and the image you get is going to be either fuzzier, or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighbouring pixel values, which amounts to "fuzzier".
Cheers.

Is it possible to tell the quality level of a JPEG?

This is really a two part question, since I don't fully understand how these things work just yet:
My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (e.g. 640x480-ish) and saves the file for use later.
My questions:
Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
Does this even matter?? Should I be saving all the images at a decent level (eg: 75-80), regardless of the original quality?
I'm not so sure about this because, as I figure it (let's take an extreme example): if someone had a 5 megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and barely noticeable... until I saved it with quality 0 again...
On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.
Am I even close?
I'm using the GD2 library on PHP, if that is of any use.
You can view the compression level using the identify tool in ImageMagick. Download and installation instructions can be found at the official website.
After you install it, run the following command from the command line:
identify -format '%Q' yourimage.jpg
This will return a value from 0 (low quality, small filesize) to 100 (high quality, large filesize).
Information source
JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of quality level, you will reduce the actual image quality. Therefore even if you did obtain a quality level from the file, you could not maintain that same quality when you save a JPEG again (even at quality=100).
You should save your JPEG at as high a quality as you can afford in terms of file size. Or use a loss-less format such as PNG.
Low quality JPEG files do not simply become more blocky. Instead, colour depth is reduced and the detail in sections of the image is removed. You can't rely on lower quality images being blocky and looking OK at smaller sizes.
According to the JFIF spec, the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is stored.
For future visitors checking the quality of a given JPEG, you could just use ImageMagick tooling:
$> identify -format '%Q' filename.jpg
92%
The JPEG compression algorithm has several parameters which influence the quality of the resulting image.
One such parameter is the quantization tables, which define how many bits will be used for each coefficient. Different programs use different quantization tables.
Some programs allow the user to set a quality level of 0-100. But there is no common definition of this number. An image made with Photoshop at 60% quality takes 46 KB, while the same image made with GIMP takes only 26 KB.
The quantization tables are also different.
There are other parameters too, such as chroma subsampling, the DCT method, and so on.
So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But you can create such a number, as Photoshop or GIMP do, to describe the compromise between size and quality.
More information:
http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html
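One practical way to see those parameters is to dump a file's quantization tables; Pillow exposes them for JPEGs (a sketch, with the file name as a placeholder):

from PIL import Image

img = Image.open("photo.jpg")
# A JPEG stores one quantization table per channel group (typically luma and chroma).
for table_id, table in img.quantization.items():
    print("table", table_id, "first 8 coefficients:", list(table)[:8])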
Common practice is to resize the image to the appropriate size first and apply JPEG compression after that. In this case, huge and medium-sized originals will end up with the same size and quality.
Here is a formula I've found to work well:
jpg100size (the size it should not exceed in bytes for 98-100% quality) = width*height/1.7
jpgxsize = jpg100size*x (x = percent, e.g. 0.65)
So you could use these to estimate statistically what quality your JPEG was last saved at. If you want to get it down to, say, 65% quality and you want to avoid resampling, you should first compare the size to make sure it's not already too low, and only then reduce the quality.
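A quick sketch of that heuristic in Python (the formula is the rule of thumb above, so treat the result as a rough estimate only; the file name is a placeholder):

import os
from PIL import Image

path = "photo.jpg"
width, height = Image.open(path).size
# Estimated file size at ~100% quality, per the rule of thumb above.
jpg100_size = width * height / 1.7
# Rough quality estimate for the file as it is now.
estimated_quality = os.path.getsize(path) / jpg100_size
print("estimated quality factor: {:.0%}".format(estimated_quality))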
As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):
If you wish to have a simple output of filename: quality for use on multiple images, you can use
identify -format '%f: %Q' *
to show the filename + compression of all files within the current directory.
So, there are basically two cases you care about:
If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.
If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that's had a large amount of data discarded won't magically take up more space when the quality is raised -- blocky images will compress very nicely even at high quality settings. So, in my opinion, it's perfectly OK to raise incoming q=1 to q=85.
From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.
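In code, that "resize, then force a decent quality" approach might look like the following Pillow sketch (the asker uses PHP/GD2, so this is only an illustration of the idea, with quality 85 as an assumed setting):

from PIL import Image

img = Image.open("upload.jpg")
# Fit the upload inside 640x480 while keeping its aspect ratio.
img.thumbnail((640, 480), Image.LANCZOS)
# Save at a fixed, decent quality regardless of the original setting.
img.save("display.jpg", "JPEG", quality=85)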
Every new save of the file will further decrease the overall quality; by using higher quality values you will preserve more of the image, regardless of what the original image quality was.
If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.
This may be a silly question, but why would you be concerned about micromanaging the quality of the document? I believe if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for best effect. http://www.php.net/manual/en/intro.imagick.php
Here are some ways to achieve your (1) and get it right.
There are ways to do this by fitting to the quantization tables. Sherloq - for example - does this:
https://github.com/GuidoBartoli/sherloq
The relevant (python) code is at https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py
There is another algorithm written up in https://arxiv.org/abs/1802.00992 - you might consider contacting the author for any code etc.
You can also simulate file_size(image_dimensions,quality_level) and then invert that function/lookup table to get quality_level(image_dimensions,file_size). Hey presto!
Finally, you can adopt a brute-force https://en.wikipedia.org/wiki/Error_level_analysis approach by calculating the difference between the original image and recompressed versions each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized. Seems to work reasonably well (but is linear in the for-loop..).
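A rough sketch of that brute-force approach with Pillow and NumPy (the quality range searched and the file name are my assumptions):

import io
import numpy as np
from PIL import Image

original = Image.open("photo.jpg").convert("RGB")
pixels = np.asarray(original, dtype=np.float32)

best_quality, best_error = None, float("inf")
# Recompress at each candidate quality and measure the difference to the original.
for quality in range(60, 101):
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = np.asarray(Image.open(buffer).convert("RGB"), dtype=np.float32)
    error = np.mean((pixels - recompressed) ** 2)
    if error < best_error:
        best_quality, best_error = quality, error

print("estimated original quality:", best_quality)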
Most often the quality factor used seems to be 75 or 95 which might help you to get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.
I can add other links for this as they become available - please put them in the comments.
If you trust IrfanView's estimation of the JPEG compression level, you can extract that information from the info text file created by the following Windows command line (your path to i_view32.exe might be different):
"C:\Program Files (x86)\IrfanView\i_view32.exe" <image-file> /info=txtfile
The JPEG compression level is recorded in the IPTC data of an image.
Use exiftool (it's free) to get the EXIF data of an image, then search the returned string for "Photoshop Quality". Or at least put the returned data into a text document and check what's recorded. It may vary depending on the software used to save the image.
"Writer Name : Adobe Photoshop
Reader Name : Adobe Photoshop CS6
Photoshop Quality : 7"
