By default, UIGraphicsImageRenderer sets the scale to the device's screen scale: on an iPhone 6s that's 2x, and on an iPhone 6s Plus it's 3x. So even though you've given it a size of 300, it creates the image at 600 or 900 pixels depending on which device is being used. When you want to ensure it's always 300, how do you set the scale?
let outputBounds = CGRect(x: 0, y: 0, width: 300, height: 300)
let renderer = UIGraphicsImageRenderer(bounds: outputBounds)
let image = renderer.image { context in
    // ...
}
Previously you would set the scale via the last parameter here:
UIGraphicsBeginImageContextWithOptions(bounds.size, false, 1)
You should use the UIGraphicsImageRendererFormat class when creating your UIGraphicsImageRenderer. If you want to write exact pixels rather than scaled points, use something like this:
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let renderer = UIGraphicsImageRenderer(size: yourImageSize, format: format)
let image = renderer.image { ctx in
    // etc
}
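As a quick sanity check (a sketch, reusing the outputBounds from the question), you can inspect the backing CGImage: with format.scale = 1 it should be exactly 300 x 300 pixels on every device:
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let renderer = UIGraphicsImageRenderer(bounds: outputBounds, format: format)
let image = renderer.image { context in
    // ...
}
print(image.scale)                                  // 1.0
print(image.cgImage!.width, image.cgImage!.height)  // 300 300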
I'm keeping a list of iOS 10 code examples and will add one based on this.
I'm working on an iOS app that should let users create Instagram Stories photos and export them to Instagram, basically an app like Unfold, Stellar, Chroma Stories... I've prepared a UI where the user can pick from prepared templates and add their own photos with filters, labels, etc.
My question is: what is the best way to export the created UIView to a bigger image?
I mean, how do I get the best quality and sharp pixels for labels and so on?
The template view with its subviews (added photos, labels...) takes up roughly half of the device's screen, but I need a bigger size for the exported image.
Currently I use:
func makeImageFromView() -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        templateView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
The resulting image has a size of 1080 x 1920, but the labels aren't sharp.
Do I need to scale the photo and font sizes somehow before rendering the view to an image?
Thanks!
So actually yes, before capturing the image I need to scale the whole view and its subviews. Here are my findings (maybe obvious things, but it took me a while to realize them – I'll be glad for any improvements).
Rendering an image of the same size
When you want to capture a UIView as an image, you can simply use this function. The resulting image will have the same size as the view (scaled 2x / 3x depending on the actual device):
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let size = CGSize(width: desiredView.bounds.width, height: desiredView.bounds.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { (ctx) in
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
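A quick usage sketch (templateView here is hypothetical and assumed to be a MyView instance): on a 3x device the snapshot keeps the view's point size but carries the device scale, so the pixel dimensions are three times bigger:
let snapshot = makeImageFrom(templateView)
print(snapshot.size)   // e.g. (275.0, 487.0) points
print(snapshot.scale)  // e.g. 3.0, i.e. 825 x 1461 pixels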
Rendering an image of a different size
But what do you do when you want a specific size for your exported image?
In my use case I wanted to render an image at a final size of 1080 x 1920, but the view I wanted to capture had a smaller size (275 x 487 in my case). If you render it like that without doing anything else, there has to be a loss in quality.
If you want to avoid that and keep labels and other subviews sharp, you need to scale the view itself up to the desired size, in my case from 275 x 487 to 1080 x 1920.
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    // We need to divide the desired size by the renderer's scale, otherwise the output comes out 2x or 3x bigger
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        // remake constraints or change the size of desiredView to 1080 x 1920
        // handle its subviews (update font sizes etc.)
        // ...
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
        // undo the size changes
        // ...
    }
    return image
}
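For concreteness, here is a minimal sketch of the "resize, draw, restore" idea from the comments above (makeScaledImageFrom is a hypothetical name; font sizes and other manually set values still have to be updated by hand, as described in the important note below):
func makeScaledImageFrom(_ desiredView: MyView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)

    // temporarily resize the view to the full export size, as the comments above suggest
    let originalFrame = desiredView.frame
    desiredView.frame = CGRect(x: 0, y: 0, width: 1080, height: 1920)
    desiredView.layoutIfNeeded()

    let image = renderer.image { _ in
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }

    // undo the size change
    desiredView.frame = originalFrame
    desiredView.layoutIfNeeded()
    return image
}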
My approach
But because I didn't want to mess with the size of the view displayed to the user, I took a different route and used a second view that isn't shown to the user. That means that just before I capture the image, I prepare a "duplicated" view with the same content but a bigger size. I don't add it to the view controller's view hierarchy, so it's never visible.
Important note!
You really need to take care of the subviews. That means you have to increase font sizes, update the positions of moved subviews (for example their centers), and so on!
Here are just a few lines to illustrate that:
// 1. Create bigger view
let hdView = MyView()
hdView.frame = CGRect(x: 0, y: 0, width: 1080, height: 1920)
// 2. Load content according to the original view (desiredView)
// set text, images...
// 3. Scale subviews
// Find out what scale we need
let scaleMultiplier: CGFloat = 1080 / desiredView.bounds.width // 1080 / 275 = 3.927 ...
// Scale everything, for examples label's font size
[label1, label2].forEach { $0.font = UIFont.systemFont(ofSize: $0.font.pointSize * scaleMultiplier, weight: .bold) }
// or subview's center
subview.center = subview.center.applying(.init(scaleX: scaleMultiplier, y: scaleMultiplier))
// 4. Render image from hdView
let hdImage = makeImageFrom(hdView)
Difference in quality from real usage, zoomed in on a label:
I'm using AlamofireImage to crop a user profile picture before sending it to the server. Our server has some restrictions and we can't send images larger than 640x640.
I'm using the af_imageAspectScaled UIImage extension function like so:
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320,
        height: 320
    )
)
I was expecting this to crop the image to 320px by 320px. However, I found out that the output image is being saved as a 640x640px image with scale 2.0. The following XCTest shows this:
class UIImageTests: XCTestCase {
    func testAfImageAspectScaled() {
        if let image = UIImage(
            named: "ipad_mini2_photo_1.JPG",
            in: Bundle(for: type(of: self)),
            compatibleWith: nil
        ) {
            print(image.scale) // prints 1.0
            print(image.size)  // prints (1280.0, 960.0)
            let croppedImage = image.af_imageAspectScaled(
                toFill: CGSize(
                    width: 320,
                    height: 320
                )
            )
            print(croppedImage.scale) // prints 2.0
            print(croppedImage.size)  // prints (320.0, 320.0)
        }
    }
}
I'm running this on the iPhone Xr simulator on Xcode 10.2.
The original image is 1280 by 960 points, with scale 1, which would be equivalent to 1280 by 960 pixels. The cropped image is 320 by 320 points, with scale 2, which would be equivalent to 640 by 640 pixels.
Why is the scale set to 2? Can I change that? How can I generate a 320 by 320 pixels image independent on the scale and device?
Well, checking the source code for the af_imageAspectScaled method I found the following code for generating the actual scaled image:
UIGraphicsBeginImageContextWithOptions(size, af_isOpaque, 0.0)
draw(in: CGRect(origin: origin, size: scaledSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext() ?? self
UIGraphicsEndImageContext()
The parameter with value 0.0 on UIGraphicsBeginImageContextWithOptions tells the method to use the main screen scale factor for defining the image size.
I tried setting this to 1.0 and, when running my testcase, af_imageAspectScaled generated an image with the correct dimensions I wanted.
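For illustration, this is roughly what that line inside AlamofireImage looks like with the scale hard-coded to 1.0 (a change to the library itself, not something you can pass in from the outside):
// scale of 1.0 instead of 0.0, so the context uses exact pixels rather than the screen scale
UIGraphicsBeginImageContextWithOptions(size, af_isOpaque, 1.0)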
Here is a table showing the resolutions of all iOS devices. My app was sending appropriately sized images for every device with a scale factor of 2.0, but several devices have a scale factor of 3.0, and for those the app wasn't working.
Well, unfortunately it seems that if I want to use af_imageAspectScaled I have to divide the final size I want by the device's scale when setting the scaled size (so that, for example, 320 / 2 points rendered at 2x and 320 / 3 points rendered at 3x both come out at 320 pixels), like so:
let scale = UIScreen.main.scale
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320/scale,
        height: 320/scale
    )
)
I've sent a pull request to AlamofireImage proposing the addition of a parameter scale to the functions af_imageAspectScaled(toFill:), af_imageAspectScaled(toFit:) and af_imageScaled(to:). If they accept it, the above code should become:
// this is not valid with Alamofire 4.0.0 yet! waiting for my pull request to
// be accepted
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320,
        height: 320
    ),
    scale: 1.0
)
// croppedImage would be a 320px by 320px image, regardless of the device type.
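In the meantime, another possible workaround (a sketch, not part of AlamofireImage; the extension and its name resized(toExactPixelSize:) are hypothetical) is to re-render the scaled image at scale 1 with UIGraphicsImageRenderer on iOS 10+, so the pixel size no longer depends on the device:
import UIKit

extension UIImage {
    // Re-draws the image into a context whose scale is 1, so points == pixels.
    func resized(toExactPixelSize pixelSize: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        let renderer = UIGraphicsImageRenderer(size: pixelSize, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: pixelSize))
        }
    }
}

// Usage: aspect-fill with AlamofireImage first, then force exactly 320 x 320 px.
let exactImage = image.af_imageAspectScaled(toFill: CGSize(width: 320, height: 320))
    .resized(toExactPixelSize: CGSize(width: 320, height: 320))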
I have a UIScrollView which contains a UIImageView. On top of that is a box; the user can move the image around so that the portion inside the box is what gets cropped.
This screenshot explains it better:
So they can scroll the image around until the portion they want is inside that box.
I then want to be able to crop the scroll view / image view to exactly that size and store the cropped image.
It shouldn't be very hard, but I've spent ages trying screenshots, UIGraphicsContext, etc. and can't seem to get anything to work.
Thanks for the help.
I finally figured out how to get it to work. Here is the code:
func croppedImage() -> UIImage {
    let cropSize = CGSize(width: 280, height: 280)
    let scale = (imageView.image?.size.height)! / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale, height: cropSize.height * scale)
    if #available(iOS 10.0, *) {
        let r = UIGraphicsImageRenderer(size: cropSizeScaled)
        let x = -scrollView.contentOffset.x * scale
        let y = -scrollView.contentOffset.y * scale
        return r.image { _ in
            imageView.image!.draw(at: CGPoint(x: x, y: y))
        }
    } else {
        return UIImage()
    }
}
So it first calculates the scale factor between the actual image and the image view (the image's height divided by the image view's height).
Then it creates a CGSize for the crop box shown in the photo; the width and height must be multiplied by that scale factor (e.g. 280 * 6.5).
You must check that the device is running iOS 10.0 or later, because UIGraphicsImageRenderer isn't available before that.
Initialise the renderer with the scaled crop box size.
The image must then be offset, and this is calculated by getting the scrollView's content offset, negating it, and multiplying by the scale factor.
Then return the image drawn at that point!
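If you also need to support iOS 9, a rough fallback for the else branch could use the older UIGraphicsBeginImageContextWithOptions API seen in the earlier answers (a sketch only; croppedImageLegacy is a hypothetical name and it assumes the same imageView and scrollView outlets):
func croppedImageLegacy() -> UIImage {
    let cropSize = CGSize(width: 280, height: 280)
    let scale = (imageView.image?.size.height)! / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale, height: cropSize.height * scale)
    let x = -scrollView.contentOffset.x * scale
    let y = -scrollView.contentOffset.y * scale

    // 0.0 means "use the main screen's scale", matching UIGraphicsImageRenderer's default
    UIGraphicsBeginImageContextWithOptions(cropSizeScaled, false, 0.0)
    imageView.image!.draw(at: CGPoint(x: x, y: y))
    let image = UIGraphicsGetImageFromCurrentImageContext() ?? UIImage()
    UIGraphicsEndImageContext()
    return image
}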
Given a CGRect, I want to use GPUImage to crop a video. For example, if the rect is (0, 0, 50, 50), the video would be cropped at (0,0) with a length of 50 on each side.
What's throwing me is that GPUImageCropFilter doesn't take a rectangle, but rather a normalized crop region with values ranging from 0 to 1. My intuition was to do this:
let assetSize = CGSizeApplyAffineTransform(videoTrack.naturalSize, videoTrack.preferredTransform)
let cropRect = CGRect(x: frame.minX/assetSize.width,
                      y: frame.minY/assetSize.height,
                      width: frame.width/assetSize.width,
                      height: frame.height/assetSize.height)
to calculate the crop region based on the size of the incoming asset. Then:
// Filter
let cropFilter = GPUImageCropFilter(cropRegion: cropRect)
let url = NSURL(fileURLWithPath: "\(NSTemporaryDirectory())\(String.random()).mp4")
let movieWriter = GPUImageMovieWriter(movieURL: url, size: assetSize)
movieWriter.encodingLiveVideo = false
movieWriter.shouldPassthroughAudio = false
// add targets
movieFile.addTarget(cropFilter)
cropFilter.addTarget(movieWriter)
cropFilter.forceProcessingAtSize(frame.size)
cropFilter.setInputRotation(kGPUImageRotateRight, atIndex: 0)
What should the movie writer size be? Shouldn't it be the size of the frame I want to crop with? And should I be using forceProcessingAtSize with the size value of my crop frame?
A complete code example would be great; I've been trying for hours and I can't seem to get the section of the video that I want.
FINAL:
if let videoTrack = self.asset.tracks.first {
    let movieFile = GPUImageMovie(asset: self.asset)
    let transformedRegion = CGRectApplyAffineTransform(region, videoTrack.preferredTransform)

    // Filters
    let cropFilter = GPUImageCropFilter(cropRegion: transformedRegion)
    let url = NSURL(fileURLWithPath: "\(NSTemporaryDirectory())\(String.random()).mp4")
    let renderSize = CGSizeApplyAffineTransform(videoTrack.naturalSize, CGAffineTransformMakeScale(transformedRegion.width, transformedRegion.height))

    let movieWriter = GPUImageMovieWriter(movieURL: url, size: renderSize)
    movieWriter.transform = videoTrack.preferredTransform
    movieWriter.encodingLiveVideo = false
    movieWriter.shouldPassthroughAudio = false

    // add targets
    // http://stackoverflow.com/questions/37041231/gpuimage-crop-to-cgrect-and-rotate
    movieFile.addTarget(cropFilter)
    cropFilter.addTarget(movieWriter)

    movieWriter.completionBlock = {
        observer.sendNext(url)
        observer.sendCompleted()
    }
    movieWriter.failureBlock = { _ in
        observer.sendFailed(.VideoCropFailed)
    }

    disposable.addDisposable {
        cropFilter.removeTarget(movieWriter)
        movieWriter.finishRecording()
    }

    movieWriter.startRecording()
    movieFile.startProcessing()
}
As you note, the GPUImageCropFilter takes in a rectangle in normalized coordinates. You're on the right track, in that you just need to convert your CGRect in pixels to normalized coordinates by dividing the X components (origin.x and size.width) by the width of the image and the Y components by the height.
You don't need to use forceProcessingAtSize(), because the crop will automatically output an image of the appropriate cropped size. The movie writer's size should be matched to this cropped size, which you should know from your original CGRect.
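A minimal sketch of that conversion, assuming a crop rectangle expressed in pixels (cropRectInPixels is a hypothetical name) and reusing the videoTrack and url from the question's code:
let videoSize = videoTrack.naturalSize
let cropRegion = CGRect(x: cropRectInPixels.origin.x / videoSize.width,
                        y: cropRectInPixels.origin.y / videoSize.height,
                        width: cropRectInPixels.width / videoSize.width,
                        height: cropRectInPixels.height / videoSize.height)
let cropFilter = GPUImageCropFilter(cropRegion: cropRegion)

// The writer's output size matches the cropped size in pixels, taken from the original CGRect.
let movieWriter = GPUImageMovieWriter(movieURL: url, size: cropRectInPixels.size)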
The one complication you introduce is the rotation. If you need to apply a rotation in addition to your crop, you might want to check and make sure that you don't need to swap your X and Y for your crop region. This should be apparent in the output if the two need to be swapped.
There were some bugs with applying rotation at the same time as a crop a while ago, and I can't remember if I fixed all those. If I didn't, you could insert a dummy filter (gamma or brightness set to default values) before or after the crop and apply the rotation at that stage.
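If that workaround turns out to be needed, a sketch of it could look like this (a GPUImageGammaFilter with default values acts as a pass-through; the object names come from the code earlier in this thread):
let passthroughFilter = GPUImageGammaFilter()   // default gamma, so no visual change
passthroughFilter.setInputRotation(kGPUImageRotateRight, atIndex: 0)

movieFile.addTarget(cropFilter)
cropFilter.addTarget(passthroughFilter)
passthroughFilter.addTarget(movieWriter)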
I'm trying to draw on top of an image in a CALayer and am having trouble with where the drawing shows up on different size displays.
func drawLayer() {
    let circleLayer = CAShapeLayer()
    let radius: CGFloat = 30
    let x = Thermo.frame.origin.x
    let y = Thermo.frame.origin.y
    let XX = Thermo.frame.width
    let YY = Thermo.frame.height
    print("X: \(x) Y: \(y) Width: \(XX) Height: \(YY)")
    circleLayer.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: 2.0 * radius, height: 2.0 * radius), cornerRadius: radius).CGPath
    circleLayer.fillColor = UIColor.redColor().CGColor
    circleLayer.shadowOffset = CGSizeMake(0, 3)
    circleLayer.shadowRadius = 5.0
    circleLayer.shadowColor = UIColor.blackColor().CGColor
    circleLayer.shadowOpacity = 0.8
    circleLayer.frame = CGRectMake(0, 410, 0, 192)
    self.Thermo.layer.addSublayer(circleLayer)
    circleLayer.setNeedsDisplay()
}
That draws a circle, in the correct place... for an iPhone 6s. But when the enclosing UIImageView is scaled for a smaller device, well, it clearly doesn't. I added the print() to see what the image size and position were, and... well, it's exactly the same on every device I run it on (X: 192.0 Y: 8.0 Width: 216.0 Height: 584.0), but clearly the view is being scaled by the constraints in the Auto Layout engine.
So, my question is: how can I figure out the proper ratio and position for different screen sizes if I can't use the enclosing view's size and position, since those never seem to change?
Here is the image I am starting with, in a UIImageView, and trying to draw over.
I'm of course trying to color it in based on data from an external device. Any suggestions/sample code most appreciated!
CALayer and its subclasses incl. CAShapeLayer have a property
var contentsScale: CGFloat
From the class reference:
For layers you create and manage yourself, you must set the value of this property yourself based on the resolution of the screen and the content you are providing. Core Animation uses the value you specify as a cue to determine how to render your content.
So what you need to do is set contentsScale on the layer; you get the screen's scale from the UIScreen class:
circleLayer.contentsScale = UIScreen.mainScreen().scale
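For context, here is roughly where that line would go in the question's drawLayer(), keeping the same pre-Swift-3 API style (a sketch, not tested against the original layout issue):
let circleLayer = CAShapeLayer()
circleLayer.contentsScale = UIScreen.mainScreen().scale
// ... configure the path, fill color, shadow and frame exactly as in the question ...
self.Thermo.layer.addSublayer(circleLayer)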