How does image scale work with device scale - iOS

I'm trying to understand how images are rendered on devices when the device scale and the image scale differ.
Say we have a 100x100px image. If we set the image scale to 2x, 1 user point will be 2px, so the image will be 50x50 points on a screen with 2x scale (iPhone 7) and also on a 3x screen (iPhone X) - why?
How does this work? I'd be very thankful for a detailed explanation.

What we have is an image, a buffer, and a coordinate system. The image has a size, the buffer has a size, and the coordinate system may have one as well.
Scales were introduced for coordinate systems and buffers when Retina displays became a thing. We used to develop for a fixed 320x480 coordinate system, which was also the size of the buffer, so a 320x480 image was drawn perfectly as a full-screen image.
Then Retina displays suddenly made devices 2x, resulting in 640x960 buffers. If there were no scale, all the hardcoded values we used to have would produce a messed-up layout. So Apple kept the coordinate system at 320x480 and introduced scale, which basically means that UIKit's frame-related logic stayed the same. From the developer's perspective, all that changed is that a @2x image from the asset catalog initializes a half-size image view. So a 512x512 @2x image produces a 256x256 image view when you use UIImageView(image: myImage).
Now we have quite a lot more than 320x480 and its multiples, and we use Auto Layout. We have 1x, 2x, and 3x, and we might get 10x for all we care in the future (probably not going to happen due to the limitations of our eyes). These scales exist so that all devices have a similar density of coordinate-system points per inch. That means if you place a button with a height of 50 points, it will come out at a similar physical height on all devices, no matter the scale.
So what does all of this have to do with rendering an image? Well, nothing, actually. The scale is just a converter between your coordinate system and your buffer. At a scale of 2x, if you create a 50x50 image view in code (or in IB), you can expect its buffer to be 100x100. Ergo, if you want the image to look nice, you should use a 100x100 image. But since you want this for all the relevant scales, you should have three images, 50x50, 100x100, and 150x150, named the same with @2x and @3x suffixes; that way UIImage(named:) will pick the correct image for the current device scale.
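As a minimal sketch of that last point (the asset name "icon" and the pixel sizes are assumptions), UIKit picks the variant matching the screen scale and reports sizes in points:

import UIKit

// Assume the bundle contains icon.png (50x50 px), icon@2x.png (100x100 px)
// and icon@3x.png (150x150 px); the names and sizes are hypothetical.
if let icon = UIImage(named: "icon") {
    // On a 2x device UIKit loads icon@2x.png and reports:
    print(icon.scale)   // 2.0
    print(icon.size)    // (50.0, 50.0) -- points, i.e. pixels / scale

    // An image view initialized with it sizes itself in points,
    // so it comes out 50x50 in the coordinate system at every scale.
    let imageView = UIImageView(image: icon)
    print(imageView.frame.size)   // (50.0, 50.0)
}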
So to your question directly:
If we set the image scale to 2x, 1 user point will be 2px, so the image will be 50x50 points: You usually don't set the scale of an image yourself. UIImage is a wrapper around CGImage, which always has an absolute pixel size; UIImage uses its orientation and scale to transform that size. But if you mean that you provide a 100x100 image as @2x, then on a 2x device an image view initialized with this image will have a size of 50x50 in the coordinate system, while being drawn as-is into a 100x100 area of the buffer.
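A short sketch of that size/scale relationship (the 100x100-pixel bitmap is an assumption): wrapping the same CGImage at different scales only changes the size UIImage reports in points, never the pixels.

import UIKit

// Takes a hypothetical 100x100-pixel CGImage and wraps it at two scales.
func describe(_ cgImage: CGImage) {
    let at1x = UIImage(cgImage: cgImage, scale: 1, orientation: .up)
    let at2x = UIImage(cgImage: cgImage, scale: 2, orientation: .up)

    print(cgImage.width, cgImage.height)   // 100 100 -- raw pixels, always
    print(at1x.size)                       // (100.0, 100.0) points
    print(at2x.size)                       // (50.0, 50.0) points -- same pixels,
                                           // drawn into a 100x100 buffer area on a 2x device
}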
It really might be hard to explain, but the key point is that the coordinate system is scaled relative to the display resolution (or rather, to the buffer in use). If it were not, we would need to enlarge everything on devices with higher resolution. If we put two devices with the same physical screen size side by side, where the first is 1x (say 320x480) and the second 2x, then all components would appear half the size on the second: a 32x32 icon would take 10% of the width on the 1x device and only 5% on the 2x device. To get the same physical size we would actually need a 64x64 icon, but then we would also need to set its frame to 64x64. We would also need to use larger font sizes, or all the text would suddenly be very small...
I'm trying to understand how images are rendered on devices when the device scale and the image scale differ: There is no concept of "scale" between a device and an image. The device (in this case a display) receives a buffer of pixels that it needs to draw. That buffer is of the appropriate size for the display, so we are really only talking about rendering an image into this buffer. An image may be rendered into this buffer through UIKit any way you want; if you like, you can draw a 400x230 image into a 100x300 part of the buffer. But optimally, a 400x230 image is drawn into a 400x230 part of the buffer, meaning it is not magnified, shrunk, or otherwise transformed. So assuming:
image size: 64x64 (actual image pixels)
UIView size: 320x320 (.frame.size)
UIView scale: 2x
Icon size: 32x32 (UIImageView.frame.size)
Buffer size: 640x640 (size of the actual buffer on which the image will be drawn)
UIImage size: 32x32 (.size you get from loaded image)
UIImage scale: 2x (.scale you get from loaded image)
Now from the buffer's perspective you are drawing a 64x64 image into a 640x640 buffer, which takes up 10% of the buffer per dimension. And from the coordinate system's perspective you are drawing a 32x32 image onto a 320x320 canvas, which takes up 10% of the canvas per dimension.
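The same arithmetic, spelled out in code (all values are the assumed ones from the list above):

import UIKit

let screenScale: CGFloat = 2.0                           // 2x device
let canvas = CGSize(width: 320, height: 320)             // UIView size in points
let buffer = CGSize(width: canvas.width * screenScale,
                    height: canvas.height * screenScale) // 640x640 pixels

let imagePixels = CGSize(width: 64, height: 64)          // actual image pixels
let imagePoints = CGSize(width: imagePixels.width / screenScale,
                         height: imagePixels.height / screenScale) // 32x32 points

print(imagePixels.width / buffer.width)   // 0.1 -- buffer perspective
print(imagePoints.width / canvas.width)   // 0.1 -- coordinate-system perspective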

Related

Displaying the texture the right way

So, as you can see, the texture for the photoFrame is a square image, but when I set it as the diffuse contents the result looks terrible. How can I display the square image in the rectangular frame without stretching it?
A lot of what you see depends on what geometry the texture is mapped onto. Assuming those picture frames are SCNPlane or SCNBox geometries, the face of the frame has texture coordinates ranging from (0,0) in the upper left to (1,1) in the lower right, regardless of the geometry's dimensions or aspect ratio.
SceneKit texture maps images such that the top left of the image is at texture coordinate (0,0) and the lower right is at (1,1) regardless of the pixel dimensions of the image. So, unless you have a geometry whose aspect ratio matches that of the texture image, you're going to see cases like this where the image gets stretched.
There are a couple of things you can do to "fix" your texture:
Know (or calculate) the aspect ratios of your image and the geometry (face) you want to put it on, then use the material's contentsTransform to correct the image.
For example, if you have an SCNPlane whose width is 2 and height is 1 and you assign a square image to it, the image will get stretched horizontally. If you set the contentsTransform to a matrix created with SCNMatrix4MakeScale(2,1,1), it doubles the texture coordinates in the horizontal direction, effectively scaling the image to half its width and "fixing" the aspect ratio for your 2:1 plane (there's a short code sketch after these two options). Note that you might also need a translation, depending on where you want your half-width image to appear on the face of the geometry.
If you're doing this in the scene editor in Xcode, contentsTransform is the "offset", "scale", and "rotation" controls in the material editor, down below where you assigned an image in your screenshot.
Know (or calculate) the aspect ratio of your geometry, and at least some information about the size of your image, and create a modified texture image to fit.
For example, if you have a 2:1 plane as above, and you want to put 320x480 image on it, create a new texture image with dimensions of 960x480 — that is, matching the aspect ratio of the plane. You can use this image to create whatever style of background you want, with your 320x480 image composited on top of that background at whatever position you want.
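A minimal sketch of the first option, assuming a 2:1 SCNPlane and a square texture image named "photo" (both the geometry and the asset name are assumptions, not from the question):

import SceneKit
import UIKit

// 2:1 plane that would otherwise stretch a square texture horizontally.
let plane = SCNPlane(width: 2, height: 1)

let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "photo")

// Doubling the horizontal texture coordinate makes the square image occupy
// half of the plane's width, restoring its native aspect ratio.
material.diffuse.contentsTransform = SCNMatrix4MakeScale(2, 1, 1)

// Clamp so the uncovered half isn't filled with repeated copies of the image;
// add a translation to the transform above if you want the image centered.
material.diffuse.wrapS = .clamp
material.diffuse.wrapT = .clamp

plane.materials = [material]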
I changed the scale, offset, and wrapT properties in the material editor and the effect is good, but when I run it I can't reproduce the same effect. So I tried to do it in code by changing the contentsTransform property, but the scale and offset both affect the contentsTransform. If the offset is (0, -4.03) and the scale is (1, 1.714), what is the contentsTransform?

(iOS) If an image is downscaled, does it use less RAM?

If I have a 200x200 UIImageView and I provide a 1000x1000 image, the image gets downscaled. Does it use 1000x1000x4 bytes of RAM, or the downscaled dimensions x 4?
If you are using the original image data loaded from the PNG, it consumes W x H x 4 bytes. Note that under iOS, a 2x image actually takes (W x 2) x (H x 2) x 4, since on a 2x device points are smaller than actual pixels. Scaling an image down to a smaller size does save a lot of memory, but you have to explicitly allocate a graphics context, render the smaller image into it, and then use that smaller context to create a new CGImageRef and UIImage.
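A sketch of that last step (the helper name and the 200x200 target are placeholders), using UIGraphicsImageRenderer to produce a smaller bitmap that can replace the large original in memory:

import UIKit

// Renders the image into a smaller context and returns the smaller bitmap;
// keeping only the result frees the memory of the full-size decode.
func downscaled(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// Usage: let small = downscaled(bigImage, to: CGSize(width: 200, height: 200))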

How does Pixels Per Centimeter relate to zoom and pixelation of an image

I'm working on something where an admin sets a PPI threshold for an image, for example 35. If the uploaded image has a PPI greater than 35, return true; otherwise return false.
So I'm finding the PPI of an image using ImageMagick:
identify -format "%x x %y" myimg.png
This gives me a number, for example 5.51 PixelsPerCentimeter, and I convert it to PixelsPerInch by multiplying by 2.54.
This all works fine. However, I am curious how PPI relates to the zoom factor of an image.
Questions
Does a low-resolution image (say, 10 PPI) mean it can't be zoomed in as much as a high-resolution image (say, 72 PPI)?
I'm sure a low-resolution image can be zoomed in at a high percentage, but the image quality won't be as good, i.e. it will be pixelated?
Is there a better metric than PPI that I should be looking at to determine whether an image is high resolution or low resolution?

How to determine the maximum scale according to Image resolution

Actually I want to scale an image used on iPhone up for iPad.
I have one image with a resolution of 300 dpi.
Its size is 320 x 127.
By how much can I scale this image at most so that it will not blur?
I am stuck on the relation between an image's resolution and its maximum dimensions.
I don't think you understand the idea of resolution.
Your requirement is simply "so that it will not blur." If you scale an image larger or smaller than its native resolution (which I'm assuming is your "320 x 127"), the display device has to either reduce or increase the number of pixels. It does this by interpolating, or "blurring", the pixel colors.
Now, if you're asking how much can you alter an image's scale so that a human eye can't tell the difference, that's a different question.

resize png image using rmagick without losing quality

I need to resize a 200x200 image to 60x60 in RMagick without losing image quality. Currently I am doing the following for a PNG image:
img = Magick::Image.from_blob(params[:file].read)[0]
img.write(RootPath + params[:dir_str] + "/#{filename}") do
  self.quality = 100
  # self.compression = Magick::ZipCompression
end
I am losing sharpness in the resulting image. I want to resize while losing the least possible amount of image quality.
I tried setting its quality and different compressions, but none of them seems to work well.
All the resulting images still look as if a layer of colors has been stripped away, and the characters lose sharpness.
Could anyone give me some instructions for resizing PNG images?
You're resizing a picture from 200x200 = 40,000 pixels down to 60x60 = 3,600 pixels - that is, less than a tenth of the resolution - and you're surprised that you lose image quality? Think of it this way: could you take a 16x16 image and resize it to 5x5 with no loss of quality? That is about the same ratio as what you are trying to do here.
If what you say you want to do were actually possible, then every picture could be reduced to one pixel with no loss of quality.
As for the art designer's 60x60 image being better quality than yours: it depends on the original image the designer was working from. For example, if the designer started from an 800x800 image, produced your 200x200 image from it, and also reduced the original 800x800 image directly to 60x60 in Photoshop, then that 60x60 image will be better quality than the one you produce. That is because your 60x60 image has gone through two lossy resizes: one down to 200x200 and a second from 200x200 to 60x60. Necessarily this will be worse than an image resized from the original.
You could convert the PNG to a vector image, resize the vector to 60x60, then convert the vector back to PNG. Almost lossless.
