I am cropping an image with:
UIGraphicsBeginImageContext(croppingRect.size)
let context = UIGraphicsGetCurrentContext()
context?.clip(to: CGRect(x: 0, y: 0, width: croppingRect.width, height: croppingRect.height))
image.draw(at: CGPoint(x: -croppingRect.origin.x, y: -croppingRect.origin.y))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
The cropped image sometimes has a 1px white edge on the right and bottom borders. Zooming into the bottom-right corner to see the individual pixels shows that the border is not plain white but shades of white, which may come from later compression.
Where does this white edge artifact come from?
The issue was that the values of the croppingRect were not full pixels.
Because x, y, width and height were calculated CGFloat values, the results would sometimes be fractional (e.g. 1.3 instead of 1.0). The solution was to round these values:
let cropRect = CGRect(
    x: x.rounded(.towardZero),
    y: y.rounded(.towardZero),
    width: width.rounded(.towardZero),
    height: height.rounded(.towardZero))
The rounding has to be .towardZero (see FloatingPointRoundingRule for what the different rules mean) because the cropping rectangle should (usually) not be larger than the image rect.
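Putting the two snippets together, a minimal sketch of the pixel-aligned crop, using the rounded cropRect with the cropping code from the question (image is the source UIImage):
// Sketch: same cropping code as above, but with the pixel-aligned cropRect.
UIGraphicsBeginImageContext(cropRect.size)
let context = UIGraphicsGetCurrentContext()
// Clip to the crop rect and shift the image so the desired region
// lands at the origin of the new context.
context?.clip(to: CGRect(origin: .zero, size: cropRect.size))
image.draw(at: CGPoint(x: -cropRect.origin.x, y: -cropRect.origin.y))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()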
Related
I want to add a white border to a photo while preserving its quality, so I use UIGraphicsImageRenderer to draw a white background and then draw the photo on top. The result is a dramatic increase in memory usage. Is there a better way?
The resolution of the original picture is 4032 × 3024.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: canvasSideLength, height: canvasSideLength))
let newImage = renderer.image { context in
    UIColor.white.setFill()
    context.fill(CGRect(x: 0, y: 0, width: canvasSideLength, height: canvasSideLength))
    image.draw(in: CGRect(x: photoCanvasX, y: photoCanvasY, width: photoCanvasWidth, height: photoCanvasHeight))
}
When considering the memory used, don't be misled by the size of the JPG or PNG file, because that is generally compressed. You will require four bytes per pixel when performing image operations in memory (i.e. width × height × 4 bytes, with width and height in pixels).
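To make the arithmetic concrete, a rough hypothetical helper (the function name and defaults are made up for illustration):
// Rough in-memory footprint of an uncompressed RGBA bitmap: 4 bytes per pixel.
func estimatedBitmapBytes(width: Int, height: Int, scale: Int = 1) -> Int {
    return width * scale * height * scale * 4
}
// estimatedBitmapBytes(width: 4_032, height: 3_024)            // ≈ 49 MB
// estimatedBitmapBytes(width: 4_032, height: 3_024, scale: 3)  // ≈ 439 MB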
Worse, by default, UIGraphicsImageRenderer will generate images at the device's screen scale (potentially 2× or 3×, depending upon your device). For example, on a 3× device, consider:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let image = UIGraphicsImageRenderer(bounds: rect).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print:
25557 × 25557
When you consider that it then takes 4 bytes per pixel, that adds up to 2.6 GB. Even if the image is only 4,032 × 3,024, as suggested by your revised question, that's still 439 MB per image at 3×.
You may want to make sure to specify an explicit scale of 1:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let image = UIGraphicsImageRenderer(bounds: rect, format: format).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print, as you expected:
8519 × 8519
Then you are only going to require 290 MB for the image. That's still a lot of memory, but a lot less than if you use the default scale (on Retina devices). Or, considering your revised 4,032 × 3,024 image, this 1× image would take only 49 MB, 1/9th the size of the corresponding 3× image where you didn't set the scale.
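Applied to the border-drawing code from the question (reusing its canvasSideLength and photoCanvas… values), a sketch might look like this:
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // render at 1×, i.e. canvasSideLength pixels, not points

let renderer = UIGraphicsImageRenderer(
    size: CGSize(width: canvasSideLength, height: canvasSideLength),
    format: format)

let newImage = renderer.image { context in
    UIColor.white.setFill()
    context.fill(CGRect(x: 0, y: 0, width: canvasSideLength, height: canvasSideLength))
    image.draw(in: CGRect(x: photoCanvasX, y: photoCanvasY,
                          width: photoCanvasWidth, height: photoCanvasHeight))
}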
Here is the image I wish to crop (to get rid of the options at the bottom). The Back, Draw and Delete buttons are actual menu items; the ones above them are part of the image.
This is the result of changing y to 100 and height to 1948.
I want to remove the bottom 100 coordinates of an image. My application is on the iPad and all of the images are saved horizontally.
This code is one I took from Stack Overflow on a similar question; however, it does not work for any values of x, y, width and height. The image is never cropped from the bottom.
Changing the values tends to only crop the image from the left and right (the 1536-pixel dimension of the iPad, not the 2048).
func cropImage(image: UIImage) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: 1536, height: 2048) // 1536 x 2048 pixels
    let cgImage = image.cgImage!
    let croppedCGImage = cgImage.cropping(to: rect)
    return UIImage(cgImage: croppedCGImage!)
}
Does anyone know what is missing? All I need is to crop out the bottom part, as the images are saves of a previous view (however, the menu options appear in a stack view at the bottom, which is still there when I save the image, hence the crop). Thanks
The "image I wish to crop" image you posted is 2048 x 1536 pixels...
If you want to crop the "bottom 100 pixels" your crop rect should be
CGRect(x: 0, y: 0, width: 2048, height: 1536 - 100)
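For example, a sketch of the question's cropImage adjusted accordingly (assuming the source image really is 2048 × 1536 pixels as posted; note that cropping(to:) works in the CGImage's pixel coordinates, and the force-unwraps are kept from the original for brevity):
func cropImage(image: UIImage) -> UIImage {
    // Keep the full 2048-pixel width; drop the bottom 100 pixels of height.
    let rect = CGRect(x: 0, y: 0, width: 2048, height: 1536 - 100)
    let cgImage = image.cgImage!
    let croppedCGImage = cgImage.cropping(to: rect)!
    return UIImage(cgImage: croppedCGImage)
}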
I would like to draw in specific channels using Core Graphics.
Using the code below, each shape is drawn with a single-channel color, but the second, green-filled rectangle overwrites the previous red ellipse. I would like the ellipse to affect only the red channel and the square only the green channel. I tried using transparency layers, but they did not help.
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()!
let circlePath = UIBezierPath(ovalIn: CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50))
let squarePath = UIBezierPath(rect: CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50))
UIColor.red.setFill()
circlePath.fill()
UIColor.green.setFill()
squarePath.fill()
Is it possible to draw in individual channels? Or will I have to draw in individual bitmaps and combine them at the pixel level?
Rather than using UIBezierPath.fill() (which draws the bezier path fully opaque with the normal blend mode), you need to use fill(with:alpha:), which lets you specify a CGBlendMode and a custom opacity as a CGFloat between 0 (fully transparent) and 1 (fully opaque).
There are multiple blend modes available (see CGBlendMode), which specify how partially transparent surfaces interact with each other. Multiply is a "sensible default", but you can play around with them and see which best matches your design. The blend modes are the same as those in Photoshop and such, so there's plenty of online tutorials, explanations and samples, such as https://photoshoptrainingchannel.com/blending-modes-explained/
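As a minimal sketch, here is the question's snippet adapted to fill(with:alpha:). The .screen mode is used here because an additive-style mode keeps both the red and green channels where the shapes overlap; any other CGBlendMode can be substituted:
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let circlePath = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 50, height: 50))
let squarePath = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 50, height: 50))

// The ellipse contributes to the red channel.
UIColor.red.setFill()
circlePath.fill(with: .screen, alpha: 1.0)

// The square contributes to the green channel; with .screen the overlap
// keeps both channels (red + green = yellow) instead of being overwritten.
UIColor.green.setFill()
squarePath.fill(with: .screen, alpha: 1.0)

let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()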
I am trying to annotate a PDF in my application using an annotation of type .ink built from a UIBezierPath. I have included a snippet of the pertinent code below (I can add the whole sample, but the issue is only with rotating the path). The issue is that when I apply this path, it is rotated 180 degrees around the x-axis, so it is basically flipped upside down. I would like to rotate the path 180 degrees around the x-axis so it appears as initially intended. I have seen examples of rotating around the z-axis, but none around the x-axis. Any help would be greatly appreciated!
let rect = CGRect(x: 110, y: 100, width: 400, height: 300)
let annotation = PDFAnnotation(bounds: rect, forType: .ink, withProperties: nil)
annotation.backgroundColor = .blue
path.apply(CGAffineTransform(scaleX: 0.2, y: 0.2))
annotation.add(path)
// Add annotation to the first page
page.addAnnotation(annotation)
pdfView?.document?.page(at: 0)?.addAnnotation(annotation)
I was actually able to solve this issue using the following scale and translation transforms:
let rect = CGRect(x: 110, y: 100, width: 400, height: 300)
let annotation = PDFAnnotation(bounds: rect, forType: .ink, withProperties: nil)
annotation.backgroundColor = .blue
// OVER HERE 🙂
path.apply(CGAffineTransform(scaleX: 1.0, y: -1.0))
path.apply(CGAffineTransform(translationX: 0, y: rect.size.height))
annotation.add(path)
// Add annotation to the first page
page.addAnnotation(annotation)
pdfView?.document?.page(at: 0)?.addAnnotation(annotation)
This solution is inspired from the Apple Developer Documentation example.
It was a bit tricky because the PDFKit coordinate system uses the bottom-left as the origin, with the x-axis going left-to-right and the y-axis going bottom-to-top. That's contrary to the top-left origin, x: left-to-right, y: top-to-bottom pattern usually encountered on iOS. But this is what we're doing:
CGAffineTransform(scaleX: 1.0, y: -1.0) - scaling the y coordinate by -1.0 flips your path 180 degrees around the x-axis (said axis visually being the bottom line of the rect). This means the path is now below the rect, which might give you the impression that it has disappeared (you won't even find it by capturing the view hierarchy, since that shows the entire PDFView as one UIView component, which may or may not drive you insane).
CGAffineTransform(translationX: 0, y: rect.size.height) - now that the path is at the bottom of the rect (but actually at the "top" according to the PDFKit coordinate system), we need to bring it back into the visible area, which is why we apply a translation transform to move the path up (or down - thanks again, PDFKit) into the rect.
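As an aside, under the same assumptions the two apply calls can be collapsed into a single concatenated transform; a sketch:
// Flip vertically, then shift back into the rect, in one transform.
path.apply(
    CGAffineTransform(scaleX: 1.0, y: -1.0)
        .concatenating(CGAffineTransform(translationX: 0, y: rect.size.height)))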
Hope this helps! Cheers
Is it possible, using UIGraphicsBeginImageContext, to draw a CGImage into a region of the image context?
Example: I have a 100×100 image context and I want to draw a CGImage at coordinates (x: 4, y: 0) with a width of 50.
I'm trying to do it with:
CGContextDrawImage(fullContext, CGRect(x: 4, y: 0, width: 50, height: size.height), tiledBackground.CGImage)
But the device always draws tiledBackground.CGImage from (4, 0) to (100, 0), repeating the CGImage and ignoring my width argument.