I need to resize a 200x200 image to 60x60 in RMagick without losing image quality. Currently I am doing the following for a PNG image:
img = Magick::Image.from_blob(params[:file].read)[0]
img = img.resize(60, 60) # downscale to the 60x60 target
img.write(RootPath + params[:dir_str] + "/#{filename}") do
  self.quality = 100
  # self.compression = Magick::ZipCompression
end
I am losing sharpness in the resulting image. I want to be able to resize by losing the least amount of image quality.
I tried setting its quality and different compressions, but none of them seems to work well.
All the resulting images still look as though a layer of colors has been stripped away, and the text characters lose their sharpness.
Could anyone give me some instructions for resizing PNG images?
You're resizing a picture from 200x200 = 40,000 pixels down to 60x60 = 3,600 pixels, less than a tenth of the resolution, and you're surprised that you lose image quality? Think of it this way: could you take a 16x16 image and resize it to 5x5 with no loss of quality? That is about the same ratio as what you are trying to do here.
If what you say you want to do were actually possible, then every picture could be reduced to one pixel with no loss of quality.
As for the art designer's 60x60 image being better quality than yours: that depends on the original size of the image the art designer is working from. For example, if the art designer started from an 800x800 image, produced your 200x200 image from it, and then also reduced the original 800x800 image to 60x60 in Photoshop, that 60x60 image will be better quality than the one you have. This is because your 60x60 image has gone through two losses of quality: one to get to 200x200 and a second to go from 200x200 to 60x60. Necessarily this will be worse than an image resized directly from the original.
You could convert the PNG to a vector image, resize the vector to 60x60, then convert the vector back to PNG. That is almost lossless.
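If the image is flat line art or text (the question mentions characters losing sharpness), that trace-and-rerasterize route can be scripted. A rough sketch in Python, assuming ImageMagick's convert and potrace are installed and on the PATH; the file names are placeholders:
import subprocess
# potrace reads bitmap formats such as PGM, so convert the PNG first
subprocess.run(["convert", "input.png", "input.pgm"], check=True)
# trace the bitmap into an SVG vector image
subprocess.run(["potrace", "-s", "input.pgm", "-o", "traced.svg"], check=True)
# rasterize the vector back to a 60x60 PNG
subprocess.run(["convert", "-background", "none", "traced.svg", "-resize", "60x60", "output.png"], check=True)
Tracing only preserves sharp edges, so this helps with text and line art, not with photographic content.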
I'm a newbie in image processing. I'm confused by these methods when merging two images with Pillow:
PIL.Image.Image
.paste()
.composite()
.alpha_composite()
.blend()
Could anyone provide a quick explanation? Or point me to where I could pick up the related background knowledge?
I see it like this:
blend is the simplest. It takes a fixed and constant proportion of each image at each pixel location, e.g. 30% of image A and 70% of image B at every location all over the image. The ratio is a single number. This operation is not really interested in transparency; it is more of a weighted average where a part of both input images will be visible at every pixel location in the output image.
paste and composite are synonyms. They use a mask, with the same size as the images, and take a proportion of image A and image B according to the value of the mask which may be different at each location. So you might have a 0-100 proportion of image A and image B at the top and 100-0 proportion at the bottom, and this would look like a smoothly blended transition from one image at the top to the other image at the bottom. Or, it may be like a largely opaque foreground where you only see one input image, but a transparent window through which you see the other input image. The mask, of the same size as the two input images, is key here and it can assume different values at different locations.
alpha compositing is the most complicated and is best described by the Wikipedia article on alpha compositing.
---
Put another way: blend uses no alpha/transparency channel, just a fixed proportion of each input image throughout the output image.
paste uses a single alpha channel (the mask) that can vary across the image.
alpha_composite uses two alpha channels, one per input image, that can both vary across the image.
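To make the distinction concrete, here is a minimal Pillow sketch; the file names are placeholders, and both images are assumed to end up the same size with an alpha channel:
from PIL import Image
# two RGBA images of the same size (file names are placeholders)
a = Image.open("a.png").convert("RGBA")
b = Image.open("b.png").convert("RGBA").resize(a.size)
# blend: one fixed ratio for every pixel, here 30% of a and 70% of b
blended = Image.blend(a, b, 0.7)
# composite: per-pixel mix controlled by a mask; a gradient mask gives
# a smooth transition from one image to the other
mask = Image.linear_gradient("L").rotate(90).resize(a.size)
composited = Image.composite(a, b, mask)
# paste: the same idea, but it modifies the target image in place
pasted = b.copy()
pasted.paste(a, (0, 0), mask)
# alpha_composite: uses the alpha channels already stored in both images
over = Image.alpha_composite(a, b)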
I am using fsleyes to resample an image by providing a reference image, but the generated resampled image is cropped.
The blue one is the resampled overlay.
I tried increasing the resolution of the reference image, but that doesn't help either.
I translated the image over the reference image before applying the resampling, and that seemed to work.
Given several large original images and a small image that is cropped and isotropically scaled from one of the large images, the task is to find where the small image comes from.
Cropping usually occurs at the center of the large image, but the exact crop boundary is unknown.
The size of the small image is about 200x200, but again, its exact size is unknown.
If the size of the cropped area is (width, height), the size of the small image must be (width * k, height * k), where k < 1.0.
I've read some related topics on SO and tried methods like ORB and color histograms, but the accuracy is not acceptable. Could you please give me some advice? Is there an efficient algorithm for this problem? Thank you very much.
The term you are looking for is template matching: you want to scan the original image and look for the location of the cropped and scaled patch.
The OpenCV tutorial has an extensive explanation of it.
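Since the scale factor k is unknown, one workable approach is to run the template match at several candidate scales and keep the best score. A rough sketch with OpenCV in Python; the file names and the scale range are assumptions:
import cv2
import numpy as np
# file names are placeholders
big = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)
small = cv2.imread("crop.png", cv2.IMREAD_GRAYSCALE)
best_score, best_loc, best_scale = -1.0, None, None
# the crop was shrunk by an unknown factor k < 1, so enlarge it back
# over a range of candidate scales and keep the best match
for scale in np.linspace(1.0, 4.0, 31):
    templ = cv2.resize(small, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    if templ.shape[0] > big.shape[0] or templ.shape[1] > big.shape[1]:
        break
    result = cv2.matchTemplate(big, templ, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_loc, best_scale = max_val, max_loc, scale
print(best_score, best_loc, best_scale)
Running this against each large image and picking the highest score gives both the source image and the approximate crop location; a low score everywhere suggests the patch came from none of them.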
I am trying to understand how images are rendered on devices when the scale of the device and the scale of the image differ.
Say we have a 100x100px image. If we set the image scale to 2x, 1 user point will be 2px, so the image size will be 50x50 points, both on a device screen with 2x scale (iPhone 7) and on one with 3x scale (iPhone X). Why?
How does this work? I would be very thankful for a detailed explanation.
What we have is an image, a buffer and a coordinate system. The image has its size, the buffer has its size, and the coordinate system may have one as well.
Scales were introduced in the context of coordinate systems and buffers when retina displays became a thing. Basically, we used to develop for a fixed 320x480 coordinate system, which was also the size of the buffer, so a 320x480 image was drawn exactly full screen.
Then retina displays suddenly made devices 2x, resulting in 640x960 buffers. If there were no scale, all the hardcoded values we used to have would produce a messed-up layout. So Apple kept the coordinate system at 320x480 and introduced scale, which basically means that the UIKit frame-related logic stayed the same. From a developer's perspective, all that changed is that a @2x image initializes a half-size image view when loaded from assets. So a 512x512 image produces a 256x256 image view when using UIImageView(image: myImage).
Now we have quite a lot more than 320x480 and its multiples, and we use Auto Layout. We have 1x, 2x and 3x, and we may get 10x for all we care in the future (probably not going to happen, due to the limitations of our eyes). These scales exist so that all devices have a similar number of points per inch, where the points are those of the coordinate system. What that means is that if you place a button with a height of 50 points, it will have a similar physical height on all devices no matter the scale.
So what does all of this have to do with rendering an image? Well, nothing, actually. The scale is just a converter between your coordinate system and your buffer. At a scale of 2x, if you create a 50x50 image view in code (or in IB), you can expect its buffer to be 100x100. Ergo, if you want the image to look nice you should use a 100x100 image. But since you want this for all the relevant scales, you should have three images, 50x50, 100x100 and 150x150, named the same but with the @2x and @3x suffixes; that way UIImage(named:) will use the correct image depending on the current device scale.
So to your question directly:
"if we set image scale to 2x, 1 user point will be 2px, so image size will be 50x50": You usually don't set the scale of an image. UIImage is a wrapper around CGImage, which always has a direct pixel size; UIImage uses its orientation and scale to transform that size. But if you mean that you provide a 100x100 image as @2x, then on 2x devices an image view initialized with this image will have a size of 50x50 in the coordinate system, yet it will be drawn to a 100x100 buffer as-is.
It may be hard to explain, but the key is that the coordinate system is scaled relative to the display resolution (or rather, to the buffer being used). If it were not, we would need to enlarge everything on devices with higher resolution. If we put two devices with the same physical screen size side by side (let's say 320x480), where the first has 1x and the second 2x resolution, then all components would be half the size on the second: a 32x32 icon would take 10% of the width at 1x and 5% at 2x. So to simulate the same physical size we would actually need a 64x64 icon, but then we would also need to set its frame to 64x64. We would also need to use larger font sizes, or all the text would suddenly be very small...
"I try to understand how images are rendering on devices with the different scale of device and image": There is no "scale" between a device and an image. The device (in this case a display screen) receives a buffer of pixels that it needs to draw. This buffer will be of the appropriate size for that screen, so we are only talking about rendering the image into this buffer. The image may be rendered through UIKit into this buffer any way you want; if you like, you can draw a 400x230 image into a 100x300 part of the buffer. But optimally a 400x230 image will be drawn into a 400x230 part of the buffer, meaning it is not magnified, shrunk or otherwise transformed. So assuming:
image size: 64x64 (actual image pixels)
UIView size: 320x320 (.frame.size)
UIView scale: 2x
Icon size: 32x32 (UIImageView.frame.size)
Buffer size: 640x640 (size of the actual buffer on which the image will be drawn)
UIImage size: 32x32 (.size you get from loaded image)
UIImage scale: 2x (.scale you get from loaded image)
Now, from the buffer's perspective you are drawing a 64x64 image into a 640x640 buffer, which takes 10% of the buffer per dimension. And from the coordinate system's perspective you are drawing a 32x32 image onto a 320x320 canvas, which takes 10% of the canvas per dimension.
I'm working on a project where we need to match original hi-resolution photos to their scaled down counterparts. For example the original may be 2000px x 2000px, and the scaled down version might be 500px x 500px.
In researching how to do this I've found mention that ImageMagick's compare operation can be used to compare larger and smaller images, but that it behaves as though the smaller image has been cropped from the larger--and as a result it performs a very intensive scan (http://www.imagemagick.org/discourse-server/viewtopic.php?f=2&t=16781#p61937).
Is there an option or flag that I can use to indicate that I only want a match if the smaller image has been scaled (not cropped) from the larger image?
You can temporarily scale the larger image down to the size of the smaller image and then compare the resized version to the thumbnails, as described by Marc Maurice on his blog.
convert bigimage.png -resize 500x500 MIFF:- | \
compare - -metric AE -fuzz '10%' smallimage.png null:
Because the resize algorithm is probably different from the original resize algorithm, this will introduce differences, but if the smaller images are only scaled and not changed otherwise, the similarities should be sufficient to do the matching. You'll have to find a suitable metric and threshold though.
If you don't know the thumbnail sizes, or if they differ, you may want to downsize both images to a safe size below the minimum of all thumbnail sizes, or grab each thumbnail's size with
identify -format "%w,%h" smallimage.png
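If you need to run this check over many pairs, the same idea can be scripted without shelling out to ImageMagick. A minimal sketch in Python with Pillow; the file names and the RMS threshold are assumptions you would need to tune against known-good pairs:
from PIL import Image, ImageChops
import math

def matches_scaled(original_path, thumb_path, threshold=15.0):
    # True if the thumbnail looks like a pure downscale of the original
    thumb = Image.open(thumb_path).convert("RGB")
    original = Image.open(original_path).convert("RGB")
    # downscale the original to the thumbnail's exact size
    shrunk = original.resize(thumb.size, Image.LANCZOS)
    # root-mean-square difference over all pixels and channels
    diff = ImageChops.difference(shrunk, thumb)
    hist = diff.histogram()  # 256 bins per channel (R, G, B)
    squares = sum(count * (i % 256) ** 2 for i, count in enumerate(hist))
    rms = math.sqrt(squares / (thumb.size[0] * thumb.size[1] * 3))
    return rms < threshold

print(matches_scaled("bigimage.png", "smallimage.png"))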