(iOS) If an image is downscaled, does it use less RAM?

If I have a 200x200 UIImageView and I provide a 1000x1000 image, and the image gets downscaled, does it use 1000x1000x4 bytes of RAM, or does it use the downscaled dimensions x 4?

If you are using the original image data loaded from the PNG, then it consumes W x H x 4 bytes, where W and H are the pixel dimensions. Note that under iOS, a @2x image is actually (W x 2) x (H x 2) x 4, since on a 2x-scale device a point is smaller than an actual pixel. Scaling an image down to a smaller size does save a lot of memory, but you have to explicitly allocate a graphics context, render the image at the smaller size, and then use that smaller context to create a new CGImageRef and UIImage.
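For example, here is a minimal sketch of that downscale-and-redraw step using UIGraphicsImageRenderer (iOS 10+); the 200x200 target size and the helper name are just illustrative:

```swift
import UIKit

// Redraw a large UIImage into a smaller bitmap so the big backing store can be released.
// targetSize is in points; the renderer multiplies by `scale` to get the actual pixel size.
func downscaled(_ image: UIImage, to targetSize: CGSize,
                scale: CGFloat = UIScreen.main.scale) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// Usage: a 1000x1000 source redrawn for a 200x200-point image view.
// On a 2x device the new bitmap is 400x400 pixels, i.e. 400 * 400 * 4 bytes
// instead of 1000 * 1000 * 4, once the original image is released.
// imageView.image = downscaled(bigImage, to: CGSize(width: 200, height: 200))
```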

Related

Approximately optimize overlap (masking) of 2d-images on a GPU

Let's say I have two grayscale images. x is 100x100 and y is 125x125.
y is a similar image to x. It is also on a different scale.
What I would like is to stretch y between 80% to 120% on the height and width dimensions (independently). And then pick a shift for the height and width dimensions, i.e. a start point for cropping to 100x100. Let's call y' = transform(y, stretch_h, stretch_w, shift_h, shift_w). I would like to minimize |y' - x|.
i.e. I want to stretch and shift y so that it overlaps / masks x as well as possible.
I'd like to do this on a GPU, ideally in 100ms or less. I prefer this be fast even if it is less exact.
How?
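One way to structure that search (a minimal CPU sketch, assuming both images are row-major [Float] buffers in [0, 1]; the candidate loops over stretch and shift are exactly what a Metal compute kernel would parallelize to reach the 100 ms budget):

```swift
import Foundation

// A brute-force search over the four parameters described above.
struct Transform {
    var stretchH: Float
    var stretchW: Float
    var shiftH: Int
    var shiftW: Int
}

// Nearest-neighbour sample with clamping at the borders.
func sample(_ img: [Float], width: Int, height: Int, col: Int, row: Int) -> Float {
    let c = min(max(col, 0), width - 1)
    let r = min(max(row, 0), height - 1)
    return img[r * width + c]
}

func bestOverlap(x: [Float], y: [Float], xSize: Int = 100, ySize: Int = 125) -> Transform {
    var best = Transform(stretchH: 1, stretchW: 1, shiftH: 0, shiftW: 0)
    var bestCost = Float.greatestFiniteMagnitude
    let stretches = stride(from: 0.8, through: 1.2, by: 0.05).map { Float($0) }

    for sh in stretches {
        for sw in stretches {
            let newH = Int(Float(ySize) * sh)
            let newW = Int(Float(ySize) * sw)
            guard newH >= xSize, newW >= xSize else { continue }

            // Resample y to newH x newW (nearest neighbour keeps the sketch simple).
            var scaled = [Float](repeating: 0, count: newH * newW)
            for r in 0..<newH {
                for c in 0..<newW {
                    scaled[r * newW + c] = sample(y, width: ySize, height: ySize,
                                                  col: Int(Float(c) / sw), row: Int(Float(r) / sh))
                }
            }

            // Slide a 100x100 crop window and score it with the L1 difference |y' - x|.
            for dh in 0...(newH - xSize) {
                for dw in 0...(newW - xSize) {
                    var cost: Float = 0
                    for r in 0..<xSize {
                        for c in 0..<xSize {
                            cost += abs(scaled[(r + dh) * newW + (c + dw)] - x[r * xSize + c])
                        }
                    }
                    if cost < bestCost {
                        bestCost = cost
                        best = Transform(stretchH: sh, stretchW: sw, shiftH: dh, shiftW: dw)
                    }
                }
            }
        }
    }
    return best
}
```

Nearest-neighbour resampling and an exhaustive 5% stretch grid keep the sketch short; a real implementation would likely use bilinear sampling and a coarse-to-fine search.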

Image Processing: how to find the x and y resizing factors knowing that the object has an angle up to 90°

I built a classification model that takes as input an image of a rectangular object with fixed dimensions (w0 x h0), where w0 = 742 pixels and h0 = 572 pixels. The output of the model is the class of this object.
Now I have the same problem but with a bigger rectangular object that has new fixed dimensions (w x h), where w = 1077 pixels and h = 681 pixels.
I would like to resize the new image so that the object has exactly the same size as the old object, to fit the model I already built. How can I find the x and y resizing factors, knowing that the object is not straight and can have an angle alpha from 0° to 90°? alpha is known to me.
Currently I have a bad solution:
rotate the image by -alpha
resize the image using factors x = w0/w and y = h0/h
rotate the image back (+alpha)
Is it possible to calculate the resizing factors with respect to the angle alpha, so that I only resize the image without rotating it back and forth? Or maybe you know a function in OpenCV that does this calculation?
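For what it's worth, you can compose the three steps into one matrix to see what a single-pass transform would have to look like (a sketch; sx = w0/w and sy = h0/h are the assumed axis-aligned factors):

```swift
import Foundation

// Compose "rotate by -alpha, scale by (sx, sy), rotate back by +alpha" into a single
// 2x2 matrix M = R(alpha) * diag(sx, sy) * R(-alpha). Here sx, sy are the axis-aligned
// resize factors you would apply while the object is straight (e.g. sx = w0/w, sy = h0/h).
func composedResize(sx: Double, sy: Double, alpha: Double) -> (a: Double, b: Double, c: Double, d: Double) {
    let cosA = cos(alpha), sinA = sin(alpha)
    let a = sx * cosA * cosA + sy * sinA * sinA
    let b = (sx - sy) * sinA * cosA      // off-diagonal (shear) term
    let c = (sx - sy) * sinA * cosA
    let d = sx * sinA * sinA + sy * cosA * cosA
    return (a, b, c, d)
}
```

The off-diagonal terms vanish only when sx == sy or alpha is a multiple of 90°, so for a general alpha the combined transform includes shear and cannot be expressed as a pure per-axis resize; you would instead apply this matrix (plus a translation) in a single warp, e.g. with OpenCV's warpAffine.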

How does image scale work with device scale?

I'm trying to understand how images are rendered on devices when the device scale and the image scale differ.
We have a 100x100 px image. If we set the image scale to 2x, 1 user point will be 2 px, so the image will be 50x50 points on a device screen with 2x scale (iPhone 7) and also on a 3x scale (iPhone X). Why?
How does this work? I would be very thankful for a detailed explanation.
What we have is an image, a buffer and a coordinate system. The image has a size, the buffer has a size, and the coordinate system may have one as well.
Scales were introduced in the context of coordinate systems and buffers when Retina displays became a thing. Basically, we used to develop for a solid 320x480 coordinate system, which was also the size of the buffer, so a 320x480 image was drawn perfectly as full screen.
Then Retina displays suddenly made devices 2x, resulting in 640x960 buffers. If there were no scale, then all the hardcoded values (which we used to have) would produce a messed-up layout. So Apple kept the coordinate system at 320x480 and introduced scale, which basically means that the UIKit frame-related logic stayed the same. From a developer's perspective, all that changed is that a @2x image from the assets initializes an image view half its pixel size. So a 512x512 @2x image produces a 256x256 image view using UIImageView(image: myImage).
Now we have quite a lot more than 320x480 and its multiples, and we use Auto Layout. We have 1x, 2x and 3x, and we may get 10x for all we care in the future (probably not going to happen, due to the limitations of our eyes). These scales exist so that all devices have a similar density of points per inch, where the points are what the coordinate system measures. That means that if you place a button with a height of 50 points, it will have roughly the same physical height on all devices, no matter the scale.
So what does all of this have to do with rendering an image? Well, nothing, actually. The scale is just a converter between your coordinate system and your buffer. So at a scale of 2x, if you create a 50x50 image view in code (or in IB), you can expect its buffer to be 100x100. Ergo, if you want the image to look nice, you should use a 100x100 image. But since you want to do that for all the relevant scales, you should have three images, 50x50, 100x100 and 150x150, named the same with @2x and @3x suffixes; that ensures UIImage(named:) will use the correct image for the current device scale.
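As a small illustration (assuming three hypothetical files icon.png, icon@2x.png and icon@3x.png at 50x50, 100x100 and 150x150 pixels respectively):

```swift
import UIKit

// UIImage(named:) picks the variant matching the current screen scale,
// but reports its size in points either way.
let icon = UIImage(named: "icon")!   // hypothetical asset name
print(icon.size)                     // 50x50 points on 1x, 2x and 3x devices
print(icon.scale)                    // 1.0, 2.0 or 3.0 depending on which file was chosen

let imageView = UIImageView(image: icon)
print(imageView.frame.size)          // 50x50 points, drawn into a 100x100 or 150x150 pixel buffer
```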
So to your question directly:
"if we set image scale to x2, 1 user point will be 2px, so image size will be 50x50": You usually don't set the scale of an image. UIImage is a wrapper around CGImage, which always has a direct pixel size; UIImage uses its orientation and scale to transform that size. But if you mean you are providing a 100x100 image marked @2x, then on 2x devices an image view initialized with this image will have a size of 50x50 in the coordinate system, but it will be drawn to a 100x100 buffer as-is.
It really might be hard to explain this, but the key you are looking for is that the coordinate system is scaled relative to the display resolution (or rather, to the buffer being used). If it were not, we would need to enlarge everything on devices with higher resolution. If we put two devices with the same physical screen size side by side, where the first has a 1x (320x480) and the second a 2x (640x960) resolution, then all components would be half the size on the second: a 32x32 icon would take 10% of the width on 1x and 5% on 2x. So to get the same physical size we would actually need a 64x64 icon, but then we would also need to set its frame to 64x64. We would also need to use larger font sizes, or all the text would suddenly be very small...
"I try to understand how images are rendering on devices with the different scale of device and image": There is no "scale" concept between the device and the image. The device (in this case a display screen) receives a buffer of pixels that it needs to draw. This buffer is already the appropriate size for the device, so we are only talking about rendering the image into that buffer. The image may be rendered through UIKit onto the buffer any way you want; if you want, you can draw a 400x230 image into a 100x300 part of the buffer. But optimally, a 400x230 image is drawn into a 400x230 part of the buffer, meaning it is not magnified, shrunk or otherwise transformed. So assuming:
image size: 64x64 (actual image pixels)
UIView size: 320x320 (.frame.size)
UIView scale: 2x
Icon size: 32x32 (UIImageView.frame.size)
Buffer size: 640x640 (size of the actual buffer on which the image will be drawn)
UIImage size: 32x32 (.size you get from loaded image)
UIImage scale: 2x (.scale you get from loaded image)
Now from the buffer's perspective you are drawing a 64x64 image into a 640x640 buffer, which takes up 10% of the buffer per dimension. And from the coordinate system's perspective you are drawing a 32x32 image onto a 320x320 canvas, which takes up 10% of the canvas per dimension.
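Expressed as code, with the illustrative values from the list above (and assuming a 2x device):

```swift
import UIKit

// The pixel buffer backing a view is its point size multiplied by the screen scale.
let scale = UIScreen.main.scale                                   // 2.0 on the assumed 2x device
let iconView = UIImageView(frame: CGRect(x: 0, y: 0, width: 32, height: 32))
let bufferSize = CGSize(width: iconView.bounds.width * scale,
                        height: iconView.bounds.height * scale)   // 64x64 pixels
// 64 / 640 == 32 / 320: the icon covers 10% of each dimension in both spaces.
```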

Do i need to include additional image size for iPhone X, for example 4x images

I mostly use PDFs for vectors, so for most of my images this is not an issue. For non-vector PNGs, do I need to include 4x images or any additional images for the new iPhone X Super Retina screen?
The iPhone X has a high-resolution display with a scale factor of 3x, so the usual @3x assets cover it; no additional 4x images are needed.

Get original but cropped Co-ordinates of image in iOS

I have two images. One is the original (let's say it's 830 x 1222 pixels). The other is a cropped image (let's say 300 x 300 pixels) taken from the original. The cropped image matches the ratio of the green rectangle shown in the sample image below.
What I'm looking for is a way, using Objective-C (iOS), to look at both images and find the coordinates of the cropped image within the original.
Hope that makes sense? Any help appreciated.
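If the crop was taken 1:1 from the original (no rescaling), a brute-force sum-of-absolute-differences search is enough; sketched in Swift for brevity, it translates directly to Objective-C. If the crop was rescaled, a multi-scale template match (e.g. OpenCV's matchTemplate) is the usual approach instead.

```swift
import Foundation

// Brute-force search: slide the crop over the original and score each position with
// the sum of absolute differences. Both images are assumed to be 8-bit grayscale
// buffers in row-major order (extracting those buffers from a CGImage is omitted here).
func findCropOrigin(original: [UInt8], originalW: Int, originalH: Int,
                    crop: [UInt8], cropW: Int, cropH: Int) -> (x: Int, y: Int)? {
    guard cropW <= originalW, cropH <= originalH else { return nil }
    var best: (x: Int, y: Int)? = nil
    var bestCost = Int.max
    for oy in 0...(originalH - cropH) {
        for ox in 0...(originalW - cropW) {
            var cost = 0
            rows: for r in 0..<cropH {
                for c in 0..<cropW {
                    cost += abs(Int(original[(oy + r) * originalW + (ox + c)]) - Int(crop[r * cropW + c]))
                    if cost >= bestCost { break rows }   // already worse than the best match
                }
            }
            if cost < bestCost {
                bestCost = cost
                best = (ox, oy)   // top-left corner of the crop in original pixel coordinates
            }
        }
    }
    return best
}
```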
