Is it possible, using UIGraphicsBeginImageContext, to draw a CGImage into a region of the image context?
Example: I have a 100×100 image context and I want to draw a CGImage at coordinates (x: 4, y: 0) with a width of 50.
I'm trying to do it with:
CGContextDrawImage(fullContext,
                   CGRect(x: 4, y: 0, width: 50, height: size.height),
                   tiledBackground.CGImage)
But the device always draws tiledBackground.CGImage from (4, 0) to (100, 0), repeating the CGImage and ignoring my width argument.
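For reference, a minimal sketch (Swift 3+ naming; tiledBackground is assumed to be a UIImage, as in the question) of drawing a CGImage into a sub-rect of a 100×100 context. CGContext.draw(_:in:byTiling:) scales the image to fill the rect, and passing byTiling: false explicitly rules out tiling:

// Minimal sketch: draw a CGImage into a 50-wide region of a 100×100 context.
// Note that Core Graphics uses a flipped coordinate system relative to
// UIKit, so the image may need a CTM flip to appear upright.
UIGraphicsBeginImageContextWithOptions(CGSize(width: 100, height: 100), false, 1)
if let context = UIGraphicsGetCurrentContext(),
   let cgImage = tiledBackground.cgImage {
    context.draw(cgImage, in: CGRect(x: 4, y: 0, width: 50, height: 50), byTiling: false)
}
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()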
I want to add a white border to a photo while preserving its quality, so I use UIGraphicsImageRenderer to draw a white background and then draw the photo on top. The result is a dramatic increase in memory usage. Is there a better way?
The resolution of the original picture is 4032 × 3024.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: canvasSideLength, height: canvasSideLength))
let newImage = renderer.image { context in
    UIColor.white.setFill()
    context.fill(CGRect(x: 0, y: 0, width: canvasSideLength, height: canvasSideLength))
    image.draw(in: CGRect(x: photoCanvasX, y: photoCanvasY, width: photoCanvasWidth, height: photoCanvasHeight))
}
When considering the memory used, don't be misled by the size of the JPG or PNG file, because that is generally compressed. You will require four bytes per pixel when performing image operations in memory (i.e. width × height, in pixels, times 4 bytes).
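For example, a quick back-of-the-envelope calculation for the 4032 × 3024 image above:

// 4 bytes per pixel at 1× scale:
let bytesAt1x = 4_032 * 3_024 * 4   // 48,771,072 bytes ≈ 49 MB
// at 3× screen scale there are 9× as many pixels:
let bytesAt3x = bytesAt1x * 3 * 3   // 438,939,648 bytes ≈ 439 MB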
Worse, by default, UIGraphicsImageRenderer will generate images at screen resolution (potentially 2× or 3×, depending upon your device). For example, on a 3× device, consider:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let image = UIGraphicsImageRenderer(bounds: rect).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print:
25557 × 25557
When you consider that it then takes 4 bytes per pixel, that adds up to 2.6 GB. Even if the image is only 4,032 × 3,024, as suggested by your revised question, that's still 439 MB per image.
You may want to make sure to specify an explicit scale of 1:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let image = UIGraphicsImageRenderer(bounds: rect, format: format).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print, as you expected:
8519 × 8519
Then you are only going to require 290 MB for the image. That's still a lot of memory, but a lot less than if you use the default scale (on Retina devices). Or, considering your revised 4,032 × 3,024 image, this 1× image would take only 49 MB, 1/9th the size of the corresponding 3× image where you didn't set the scale.
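Applying that to the border-drawing snippet from the question (a sketch using the question's own canvasSideLength, photoCanvasX/Y/Width/Height and image variables):

let format = UIGraphicsImageRendererFormat()
format.scale = 1   // 1 point == 1 pixel, regardless of the device's screen scale

let renderer = UIGraphicsImageRenderer(
    size: CGSize(width: canvasSideLength, height: canvasSideLength),
    format: format)

let newImage = renderer.image { context in
    UIColor.white.setFill()
    context.fill(CGRect(x: 0, y: 0, width: canvasSideLength, height: canvasSideLength))
    image.draw(in: CGRect(x: photoCanvasX, y: photoCanvasY, width: photoCanvasWidth, height: photoCanvasHeight))
}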
Here is the image I wish to crop (to get rid of the options at the bottom). The Back, Draw and Delete buttons are actual menu items; the ones above them are part of the image.
This is the result of changing y to 100 and height to 1948.
I want to remove the bottom 100 pixels of an image. My application is on the iPad and all of the images are saved horizontally.
This code is one I took from Stack Overflow on a similar question; however, it does not work for any values of x, y, width and height. The image is never cropped from the bottom.
Changing the values tends to only crop the image from the left and right (the 1536-pixel dimension of the iPad, not the 2048).
func cropImage(image: UIImage) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: 1536, height: 2048) // 1536 × 2048 pixels
    let cgImage = image.cgImage!
    let croppedCGImage = cgImage.cropping(to: rect)
    return UIImage(cgImage: croppedCGImage!)
}
Does anyone know what is missing? All I need is to crop out the bottom part, as the images are saves of a previous view (the menu options appear in a stack view at the bottom, which is still there when I save the image, hence the crop). Thanks
The "image I wish to crop" image you posted is 2048 x 1536 pixels...
If you want to crop off the "bottom 100 pixels", your crop rect should be:
CGRect(x: 0, y: 0, width: 2048, height: 1536 - 100)
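As a sketch, here is the question's cropImage function with that corrected rect; deriving the size from the image itself, rather than hard-coding 2048 × 1536, also makes it more robust:

// Crop off the bottom 100 pixels. Returning an optional avoids the
// force unwraps in the original.
func cropImage(image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let rect = CGRect(x: 0,
                      y: 0,
                      width: cgImage.width,
                      height: cgImage.height - 100)
    guard let croppedCGImage = cgImage.cropping(to: rect) else { return nil }
    return UIImage(cgImage: croppedCGImage)
}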
I would like to draw in specific channels using Core Graphics.
Using the code below, each shape is drawn with a single-channel color, but the second, green-filled square overwrites the previous red ellipse. I would like the ellipse to affect only the red channel and the square only the green channel. I tried using transparency layers, but they did not help.
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()!
let circlePath = UIBezierPath(ovalIn: CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50))
let squarePath = UIBezierPath(rect: CGRect(x: 0.0, y: 0.0, width: 50.0, height: 50))
UIColor.red.setFill()
circlePath.fill()
UIColor.green.setFill()
squarePath.fill()
Is it possible to draw in individual channels? Or will I have to draw in individual bitmaps and combine them at the pixel level?
Rather than using UIBezierPath.fill() (which draws the bezier path fully opaque), you can use fill(with:alpha:), which lets you specify a blend mode and a custom opacity as a CGFloat between 0 (fully transparent) and 1 (fully opaque).
There are multiple blend modes available (see CGBlendMode), which specify how partially transparent surfaces interact with each other. Multiply is a "sensible default", but you can play around with them and see which best matches your design. The blend modes are the same as those in Photoshop and similar tools, so there are plenty of online tutorials, explanations and samples, such as https://photoshoptrainingchannel.com/blending-modes-explained/
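As a minimal sketch with the question's shapes and size variable (the .screen blend mode is one possible choice here: it adds the green over the red instead of replacing it, so overlapping pixels keep both channels):

UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let circlePath = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 50, height: 50))
let squarePath = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 50, height: 50))

UIColor.red.setFill()
circlePath.fill()

UIColor.green.setFill()
// screen(red, green): (1,0,0) screened with (0,1,0) yields (1,1,0),
// so both the red and green channels survive in the overlap.
squarePath.fill(with: .screen, alpha: 1)

let combined = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()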
I am cropping an image with:
UIGraphicsBeginImageContext(croppingRect.size)
let context = UIGraphicsGetCurrentContext()
context?.clip(to: CGRect(x: 0, y: 0, width: croppingRect.width, height: croppingRect.height))
image.draw(at: CGPoint(x: -croppingRect.origin.x, y: -croppingRect.origin.y))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
The cropped image sometimes has a 1 px white edge on the right and bottom borders. Zooming in on the bottom-right corner to see the individual pixels shows that the border is not plain white but shades of white, which may come from later compression.
Where does this white-edge artifact come from?
The issue was that the values of the croppingRect were not full pixels.
As the values for x, y, width and height were calculated CGFloat numbers, the results would sometimes be fractional (e.g. 1.3 instead of 1.0). The solution was to round these values:
let cropRect = CGRect(
    x: x.rounded(.towardZero),
    y: y.rounded(.towardZero),
    width: width.rounded(.towardZero),
    height: height.rounded(.towardZero))
The rounding has to be .towardZero (i.e. truncating toward zero) because the cropping rectangle should (usually) not be larger than the image rect.
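Putting it together with the cropping code from the question (x, y, width and height being the computed CGFloat values):

let cropRect = CGRect(
    x: x.rounded(.towardZero),
    y: y.rounded(.towardZero),
    width: width.rounded(.towardZero),
    height: height.rounded(.towardZero))

// Pixel-aligned crop: the rect now falls on whole pixels, so no
// partially covered edge pixels are left to blend with white.
UIGraphicsBeginImageContext(cropRect.size)
image.draw(at: CGPoint(x: -cropRect.origin.x, y: -cropRect.origin.y))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()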
I've been trying to shrink an image down to a smaller size for a while and cannot figure out why it loses quality, even though I've come across tutorials saying it should not. First, I crop my image into a square and then use this code:
let newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height))
let imageRef = image.CGImage

UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()

// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
let flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height)
CGContextConcatCTM(context, flipVertical)

// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef)

// Get the resized image from the context as a UIImage
let newImageRef = CGBitmapContextCreateImage(context)!
let newImage = UIImage(CGImage: newImageRef)
UIGraphicsEndImageContext()
I've also tried this code with the same results:
let newSize: CGSize = CGSize(width: 30, height: 30)
let rect = CGRectMake(0, 0, newSize.width, newSize.height)

UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
UIBezierPath(
    roundedRect: rect,
    cornerRadius: 2
).addClip()
image.drawInRect(rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

let imageData = UIImageJPEGRepresentation(newImage, 1.0)
sharedInstance.my.append(UIImage(data: imageData!)!)
I still get a blurry image after resizing. When I compare it to an image view set to aspect fill/fit, the image there is much clearer and still smaller. That is the quality I'm trying to get, and I can't figure out what I'm missing. I put two pictures here: the first is the clearer image using an image view, and the second is a picture resized with my code. How can I manipulate an image to look as clear as in the image view?
You should use
let newSize:CGSize = CGSize(width: 30 * UIScreen.mainScreen().scale, height: 30 * UIScreen.mainScreen().scale)
This is because different iPhones have different screen scales.
Select the image view, open the Size inspector, and change the X, Y, Width and Height attributes:
X = 14
Y = 10
Width = 60
Height = 60
For the corner radius you can use this code:
cell.ImageView.layer.cornerRadius = 30.0
cell.ImageView.clipsToBounds = true
or
Go to the Identity inspector and click the Add button (+) in the lower left of the user defined runtime attributes editor. Double-click the Key Path field of the new attribute and set the key path to layer.cornerRadius. Set the type to Number and the value to 30. To make a circular image from a square image, the radius is set to half the width of the image view.
Duncan gave you a good explanation: 30 by 30 is too small, which is why the quality of the image is lost. I recommend using 60 by 60.
In the sample images you show, you're drawing the "after" image larger than the starting image. If you reduce an image from some larger size to 30 pixels by 30 pixels, you are throwing away a lot of information. If you then draw the 30x30 image at a larger size, it's going to look bad. 30 by 30 is a tiny image, without much detail.
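To illustrate, a minimal sketch (modern Swift naming; the 60 × 60 target size is an assumption matching the suggestion above): render at the screen's scale so the bitmap has enough pixels for the view, instead of enlarging a tiny 30 × 30 image afterwards:

let targetSize = CGSize(width: 60, height: 60)   // assumed image view size, in points
let format = UIGraphicsImageRendererFormat()
format.scale = UIScreen.main.scale               // e.g. 2 or 3 on Retina devices

// Produces a 120×120 or 180×180 pixel bitmap for a 60×60-point view.
let resized = UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
    image.draw(in: CGRect(origin: .zero, size: targetSize))
}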