iOS: Keep the real photo resolution when capturing the screen with UIGraphicsGetImageFromCurrentImageContext

I want to do some basic photo editing in my application, and now I need to be able to add text over a photo. The original photo is something like 2000+ pixels in width and height, so it is scaled down to fit the screen without changing its aspect ratio.
So I put the image in a UIImageView, dragged a UILabel over it, and then saved what is on screen with UIGraphicsGetImageFromCurrentImageContext. The problem is that I get a small image back (320 x some height).
What is the best approach to accomplish this without shrinking the resolution?
Thanks a lot!

I had this exact same problem in an app.
The thing I realised is that you can't do this with a screen capture. In turn, this means that dragging labels and text onto the image can't really be done with UILabels etc. (it can, but bear with me).
What you need to do is keep track of everything that's going on data-wise.
At the moment you have the frame of your UIImageView. This, in reality, is irrelevant. It is purely there to show the user a representation of what is going on.
The same goes for the UILabel. It has a frame too. Again, this is irrelevant to the final image.
What you need is to store the data behind it in terms that are not absolute and then convert those values into frames for displaying on the device.
So, say you have an image that is 3200x4800 pixels (just making it easy for me), and it is displayed on the device "shrunk" down to 320x480. Now, the user places a label with a frame of 10, 10, 100, 21 containing the text "Hello, world" at a particular font size.
Storing the frame 10, 10, 100, 21 is useless, because what you need when the image is output is... 100, 100, 1000, 210 (i.e. ten times the size).
So, really you should be storing information in the background like...
frame = 0.031, 0.021, 0.312, 0.044
// these are all fractions of the displayed size (x/320, y/480, w/320, h/480)
Now you have fractional values describing where the label should be and how big it should be, relative to the size of the image.
So, for the shrunk image it comes back as 10, 10, 100, 21, and for the full image it comes back as 100, 100, 1000, 210, so it will look the same size when printed out.
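A minimal sketch of that idea in Swift (NormalizedFrame and absoluteFrame(in:) are just illustrative names, not anything from UIKit):

import UIKit

// Position/size of an overlay stored as fractions of the displayed size,
// so it works whether the image is shown at 320x480 or output at 3200x4800.
struct NormalizedFrame {
    var x, y, width, height: CGFloat

    // Build from an on-screen frame and the size it was laid out against.
    init(frame: CGRect, in containerSize: CGSize) {
        x = frame.origin.x / containerSize.width
        y = frame.origin.y / containerSize.height
        width = frame.size.width / containerSize.width
        height = frame.size.height / containerSize.height
    }

    // Convert back to an absolute frame for any target size
    // (the on-screen preview or the full-resolution image).
    func absoluteFrame(in targetSize: CGSize) -> CGRect {
        return CGRect(x: x * targetSize.width,
                      y: y * targetSize.height,
                      width: width * targetSize.width,
                      height: height * targetSize.height)
    }
}

// A label placed at (10, 10, 100, 21) on the 320x480 preview...
let stored = NormalizedFrame(frame: CGRect(x: 10, y: 10, width: 100, height: 21),
                             in: CGSize(width: 320, height: 480))
// ...comes back as (100, 100, 1000, 210) for the 3200x4800 original.
let fullSizeFrame = stored.absoluteFrame(in: CGSize(width: 3200, height: 4800))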
You could create a compound UIView by having a UIView with a UIImageView and a UILabel then you just have to resize it to the full image size before rendering it. That would be the easy but naive way of approaching it initially.
Or you could create a UIView with CALayers backing it that display the image and text.
Or you could render out the image and text with some sort of draw method.
Either way, you can't just use a screen capture.
And yes, this is a lot more complex than it first appears.
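For the "draw method" route, here is a rough sketch (assuming iOS 10+ for UIGraphicsImageRenderer; the function and parameter names are just placeholders, and targetRect is the label's frame already converted to full-image coordinates, for example via the fractional values above):

import UIKit

// Render the text onto the full-resolution photo instead of capturing the screen.
// The font is scaled by the same factor as the image so the text keeps its apparent size.
func renderText(_ text: String,
                onto image: UIImage,
                in targetRect: CGRect,
                baseFontSize: CGFloat,
                previewWidth: CGFloat) -> UIImage {
    let scaleFactor = image.size.width / previewWidth      // e.g. 3200 / 320 = 10
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1                                       // keep a 1:1 mapping with the photo's pixels
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        // Draw the full-resolution photo first, then the text on top of it.
        image.draw(in: CGRect(origin: .zero, size: image.size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: baseFontSize * scaleFactor),
            .foregroundColor: UIColor.white
        ]
        (text as NSString).draw(in: targetRect, withAttributes: attributes)
    }
}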

Related

What is the correct approach to show the image without distortion in a collection view

I have a very fundamental question, hence I am not posting any code.
Overview
I have a collection view that uses a custom layout I have written. All the custom layout does is apply some maths to find the optimal number of cells to place on screen based on the screen width, and distribute the leftover pixels as padding. As a result, most of my cell sizes change when the collection view changes orientation, especially on a device like the iPad Pro. What I mean is, if the cell size is 300 x 300 in portrait it might become something like 320 x 300 in landscape. (These are not exact values, just to give an idea.)
Every cell has an image view in it, and the image needs to be downloaded from the server. I am using SDWebImage. The downloaded images are much bigger than my cells and I don't want to load such big images into memory, so I have decided to resize each image once it is downloaded, before putting it into SDWebImage's cache.
Issue
Now, as my cell sizes change with device orientation, making the images look pixel perfect means resizing the image for every cell each time the device rotates. But that will lead to bad performance.
Solutions I have figured out
Cache two images, one per orientation, in the SDWebImage cache, but that again increases the memory footprint of the app.
Find the biggest size a cell can get across orientations, resize the image to that size, and then use the same image for all the smaller cell sizes with the image view's content mode set to aspect fit.
Please suggest which is the correct approach. I don't need the code, just the idea.
Let me know if this is a bad idea, but here are some potential options.
You could:
1) Get the original image size and save it.
2) Determine whether the picture's width > height.
3) If width > height, you know it is going to be a wider picture. Decide how wide you would like it; let's say you want the wider pictures at about 1/10 of the cell's width.
4) Determine the height of the image view accordingly, keeping the image's aspect ratio: (cellWidth / 10) * (imageHeight / imageWidth).
5) If width > height, set imageView.frame = CGRect(x: 0, y: 0, width: cellWidth / 10, height: computedHeight).
6) Set imageView.contentMode = .scaleAspectFit and imageView.image = UIImage(named: "awesome").
7) Possibly save the imageView's size to reuse in case of another rotation.
This allows you to change just the imageView instead of saving resized image data, hopefully removing the white space. Comment and let me know what you think.
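A rough Swift sketch of those steps; the 1/10 factor is just the example value from the list above, and cellSize would come from your custom layout:

import UIKit

// Size the image view (not the image) for the current cell while keeping the
// image's aspect ratio, so the same cached image can be reused after rotation.
func layoutImageView(_ imageView: UIImageView, image: UIImage, cellSize: CGSize) {
    let imageSize = image.size
    if imageSize.width > imageSize.height {
        // Wider-than-tall picture: pick the target width, derive the height
        // from the image's aspect ratio.
        let targetWidth = cellSize.width / 10
        let targetHeight = targetWidth * (imageSize.height / imageSize.width)
        imageView.frame = CGRect(x: 0, y: 0, width: targetWidth, height: targetHeight)
    } else {
        // Taller-than-wide picture: same idea, starting from a target height.
        let targetHeight = cellSize.height / 10
        let targetWidth = targetHeight * (imageSize.width / imageSize.height)
        imageView.frame = CGRect(x: 0, y: 0, width: targetWidth, height: targetHeight)
    }
    imageView.contentMode = .scaleAspectFit
    imageView.image = image
}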

Position a UIImageView over another with scaling

Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e. by a factor of 267/600=0.445, and (I think) that means the displayed height became 243*0.445≈108. And I think it's true that the hole was scaled by that factor, too.
But when I want to add a UIImageView subview to show the wheel, this is where I get confused. I know the image size, I know the imageView size, and I know the hole frame in terms of the original image size. How do I get the hole frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width;    // 267/600 ≈ 0.445
CGFloat scaledHeight = carImage.size.height * ratio;                    // displayed height of the car image
CGFloat yPos = (carImageView.frame.size.height - scaledHeight) / 2;     // there should be a method for this?
determine the scaled frame of the hole:
CGFloat holeX = ratio*118;
CGFloat holeY = yPos + ratio*144;
CGFloat holeEdge = ratio*74;
CGRect holeRect = CGRectMake(holeX,holeY,holeEdge,holeEdge);
But there must be a better way. These calculations (if they are right) are only right for a car image view that is taller than the car. The code needs to be different if the image view is wider.
I think I can work out the logic for a wider view, but it still might be wrong. For example, that yPos calculation: do the docs say that, for content mode aspect fit, the image is centered along the larger dimension? I don't see that anywhere.
Please tell me there's a better way, or, if not, will my approach here work for arbitrarily sized images, image views, and holes?
Thanks.
The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options a transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
You may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as you're scaling the car image.
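If you do still need the general calculation, AVFoundation's AVMakeRect(aspectRatio:insideRect:) returns the rect that an aspect-fit image actually occupies inside its view, which covers both the taller and the wider case. A rough Swift sketch (holeFrame(inImageView:holeInImage:) is just an illustrative helper):

import UIKit
import AVFoundation

// Returns the wheel hole's frame in the image view's coordinate space,
// given the hole's rect in the original image's pixel coordinates.
func holeFrame(inImageView imageView: UIImageView, holeInImage: CGRect) -> CGRect {
    guard let image = imageView.image else { return .zero }
    // Rect actually occupied by the aspect-fit image inside the view's bounds.
    let displayedRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    let scale = displayedRect.width / image.size.width     // e.g. 267 / 600 ≈ 0.445
    return CGRect(x: displayedRect.origin.x + holeInImage.origin.x * scale,
                  y: displayedRect.origin.y + holeInImage.origin.y * scale,
                  width: holeInImage.width * scale,
                  height: holeInImage.height * scale)
}

// Usage, with the hole from the question:
// let wheelFrame = holeFrame(inImageView: carImageView,
//                            holeInImage: CGRect(x: 118, y: 144, width: 74, height: 74))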

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't totally understand how they work. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say to thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
The result of this can be pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may end up with a non-integral value, such as x = 10.5, whereas it will display correctly at x = 10.
Rendering views positioned a fraction of a pixel off will result in jagged lines; I think it's related to aliasing.
Therefore wrap the frame's CGRect with CGRectIntegral() (or use CGRect's integral property in Swift) to convert the frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral // Swift equivalent of CGRectIntegral(CGRectMake(10, 10, 100, 100))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
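For the scenario described above (moving the view by changing its centre), a small sketch of the same fix; imageView and containerView are placeholders:

// Moving a view via its centre can leave the frame on half-point boundaries;
// snapping it back to whole values avoids the jagged rendering described above.
imageView.center = CGPoint(x: containerView.bounds.midX, y: containerView.bounds.midY)
imageView.frame = imageView.frame.integral   // same effect as CGRectIntegral()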

Again edit of edited UIImage in iOS app

In my iOS app, I am placing several UIImages on top of one background UIImage and then saving the overall result, with all the sub-images added, by taking a screenshot programmatically.
Now I want to change the positions of those sub-images in the saved image. So I want to know how to work out the sub-images' positions, given that I captured the whole thing as a screenshot.
Record their frames converted to window coordinates. The pixels in the image should correspond to the frame origin as-is (for non-Retina) or doubled for Retina. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect-fit content, you have to do the math yourself. You know the frame of the image view, and you can ask the image for its size. Knowing the aspect ratio of each lets you determine in which dimension the image completely fits, and then you can compute the other dimension (which will be a value smaller than the image view's frame). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
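A sketch of the window-coordinate conversion mentioned above, in Swift; overlayView stands for one of your sub-image views, and the screen-scale multiplication is the "doubled for Retina" step:

import UIKit

// Record where a subview sits inside the captured screenshot.
// Converting to window coordinates (view: nil) gives the frame in screen points;
// multiplying by the screen scale gives the pixel rect inside the screenshot.
func pixelFrameInScreenshot(of overlayView: UIView) -> CGRect {
    let frameInWindow = overlayView.convert(overlayView.bounds, to: nil)
    let scale = UIScreen.main.scale                  // 2.0 or 3.0 on Retina devices
    return CGRect(x: frameInWindow.origin.x * scale,
                  y: frameInWindow.origin.y * scale,
                  width: frameInWindow.width * scale,
                  height: frameInWindow.height * scale)
}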

Slicing the image by coding

I saw some apps that are something like a puzzle. They first ask you to select an image, then slice it into a 4x4 or 5x5 grid and shuffle the pieces randomly. One square is left empty and the user can rearrange the pieces by sliding them into the empty slot.
I know how to slide the pieces. But the main task is: how do I slice the image into smaller images?
Is it possible?
Matt Long discusses this in his article Subduing CATiledLayer.
Not sure if this is the most efficient way but I think it should work.
Load the image into a UIImageView of the size that you want the slice to be. Let's use a 480x320 image as an example and slice it into 9 parts, a 3x3 grid. That would make each piece 160x106 (roughly; 320 does not divide evenly by 3).
So create your UIImageView with the size 160x106 and set its image property to your full-sized image. Set the image view's contentMode to UIViewContentModeScaleAspectFill so that the regions not in the view are clipped. I haven't tried it, but I assume this would give you the top-left corner of your grid, position 1,1.
Now here is where you make a choice depending on memory efficiency/performance, you will have to test which is best.
Option 1
Continue to do this for each slice, but set the image's frame so that the correct portion of the image is displayed and the rest is clipped. So your frame for position 1,2 would look like CGRectMake(160, 0, 160, 106), for position 2,2 it would be CGRectMake(160, 106, 160, 106), and so forth.
This will result in the full image being loaded into memory 9 times, which may or may not be a problem. You will have to do some analysis of memory usage.
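One way to realize Option 1 is a small container view that clips a full-size image view, with the image view's origin offset so the wanted region shows through. A rough Swift sketch, assuming the image's point size matches the grid you computed:

import UIKit

// Build the tile at (row, column) by clipping a full-size image view.
// The whole image stays in memory for every tile, as noted above.
func makeTile(from image: UIImage, row: Int, column: Int, sliceSize: CGSize) -> UIView {
    let tile = UIView(frame: CGRect(origin: .zero, size: sliceSize))
    tile.clipsToBounds = true                        // hide everything outside the slice

    let imageView = UIImageView(image: image)
    // Shift the full image so the wanted region lines up with the tile's bounds.
    imageView.frame = CGRect(x: -CGFloat(column) * sliceSize.width,
                             y: -CGFloat(row) * sliceSize.height,
                             width: image.size.width,
                             height: image.size.height)
    tile.addSubview(imageView)
    return tile
}

// e.g. the middle tile of a 3x3 grid cut from a 480x320 image:
// let tile = makeTile(from: puzzleImage, row: 1, column: 1,
//                     sliceSize: CGSize(width: 160, height: 106))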
Option 2
Once you create your "slice" in the UIImageView, save that context off as a file that you will later load into appropriate UIImageViews.
The following code demonstrates one way to save a UIView's context to a UIImage.
// Render the view's layer into an image context and capture the result as a UIImage.
UIGraphicsBeginImageContext(yourView.frame.size);
[[yourView layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageOfView = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You would then save off this UIImage to disk with an appropriate name to be loaded into the appropriate slice later.
