I am trying to crop an image captured with a camera session to a specific rect of interest. To find the proportional crop rect I am using the previewLayer.metadataOutputRectConverted method, but after cropping I get the wrong aspect ratio.
Debug example:
(lldb) po rectOfInterest.width / rectOfInterest.height
0.7941176470588235
(lldb) po image.size.width / image.size.height
0.75
(lldb) po outputRect.width / outputRect.height
0.9444444444444444
(lldb) po Double(cropped.width) / Double(cropped.height)
0.7080152671755725
As you can see, I expect the cropped image's ratio to be ~0.79, matching the rectOfInterest I am using for cropping.
Method:
private func makeImageCroppedToRectOfInterest(from image: UIImage) -> UIImage {
    let previewLayer = cameraController.previewLayer
    let rectOfInterest = layoutLayer.layoutRect
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rectOfInterest)

    guard let cgImage = image.cgImage else {
        return image
    }

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)

    guard let cropped = cgImage.cropping(to: cropRect) else {
        return image
    }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
Target rect:
I'm not an expert, but I think you misinterpreted the usage of the metadataOutputRectConverted method.
There is no way your code could return a cropped image with the same aspect ratio, since you are multiplying (what I think are) virtually unrelated numbers with each other.
You can try this method:
previewLayer.layerRectConverted(fromMetadataOutputRect: CGRect(x: 0, y: 0, width: 1, height: 1))
to get an idea of what the actual calculations made by metadataOutputRectConverted are.
I think you could explain better what you want to achieve, and maybe provide a sample project (or at the very least some more context on the actual images/rects you are using) to help us debug, if you want more help on this.
Thanks to @Enricoza I understood how to fix my problem. Here is the code:
private func makeImageCroppedToRectOfInterest(from image: UIImage) -> UIImage {
    let previewLayer = cameraController.previewLayer
    let rectOfInterest = layoutLayer.layoutRect
    let metadataOutputRect = CGRect(x: 0, y: 0, width: 1, height: 1)
    let outputRect = previewLayer.layerRectConverted(fromMetadataOutputRect: metadataOutputRect)

    guard let cgImage = image.cgImage else {
        return image
    }

    let width = image.size.width
    let height = image.size.height
    let factorX = width / outputRect.width
    let factorY = height / outputRect.height
    let factor = max(factorX, factorY)

    let cropRect = CGRect(x: (rectOfInterest.origin.x - outputRect.origin.x) * factor,
                          y: (rectOfInterest.origin.y - outputRect.origin.y) * factor,
                          width: rectOfInterest.size.width * factor,
                          height: rectOfInterest.size.height * factor)

    guard let cropped = cgImage.cropping(to: cropRect) else {
        return image
    }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
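The factor/crop arithmetic in this fix can be sanity-checked with plain rect math, no UIKit required. All numbers below are hypothetical (a 3000x4000 px photo, a full-frame layer rect of 375x500 pt as layerRectConverted(fromMetadataOutputRect:) would return for the unit rect, and an on-screen rect of interest of 300x250 pt):

```swift
import Foundation

// Hypothetical numbers for the crop math above.
let imageSize = CGSize(width: 3000, height: 4000)                      // captured photo, pixels
let outputRect = CGRect(x: 0, y: -83.5, width: 375, height: 500)       // full frame in layer points (aspect-fill)
let rectOfInterest = CGRect(x: 37.5, y: 100, width: 300, height: 250)  // on-screen ROI, layer points

// Points-to-pixels factor; with aspect-fill both factors are equal,
// so max() mostly guards against rounding differences.
let factor = max(imageSize.width / outputRect.width,
                 imageSize.height / outputRect.height)

// Shift the ROI into the full-frame coordinate space, then scale to pixels.
let cropRect = CGRect(x: (rectOfInterest.minX - outputRect.minX) * factor,
                      y: (rectOfInterest.minY - outputRect.minY) * factor,
                      width: rectOfInterest.width * factor,
                      height: rectOfInterest.height * factor)

// The crop preserves the on-screen aspect ratio (300 / 250 = 1.2).
print(cropRect.width / cropRect.height) // 1.2
```

This is why the original attempt produced a wrong ratio: it scaled a normalized metadata rect by the image dimensions directly, without accounting for the part of the frame cropped away by the aspect-fill preview.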
I'm working on an app where I'm cropping an image.
Currently, this is how I crop it:
mainPicture.layer.cornerRadius = mainPicture.frame.size.width / 2
mainPicture.clipsToBounds = true
The request is not to crop it from the middle, but rather to crop it at a specific radius, 12 px from the top.
I start with a normal image:
When I currently crop it, the image just gets cropped from the middle, so the result is like this:
The request is to crop it so that the top part of the circle will be 12 px from the top:
So that the final image would look like this:
How can this be done using Swift 4.0?
What you need to do here is first crop the original image into a square image from the top with the margin you want (like 20), and then set the image on your image view.
Here's an extension you can write on the UIImage class for cropping:
extension UIImage {
    func getCroppedImage(with topMargin: CGFloat) -> UIImage? {
        let heightWidth = size.height < size.width ? size.height : size.width
        let x = (size.width - heightWidth) / 2
        let rect = CGRect(x: x, y: topMargin, width: heightWidth, height: heightWidth)
        if let imageRef = cgImage?.cropping(to: rect) {
            return UIImage(cgImage: imageRef)
        }
        return nil
    }
}
Then, before setting the image on the UIImageView, call this method on your image like:
let image = UIImage(named: "test")
imageView.image = image?.getCroppedImage(with: 20)
Output:
This is the input image:
This is the Output:
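The rect the extension computes can be checked with plain geometry, no UIKit needed. The 400x600 size and 12 pt margin below are made-up numbers matching the question:

```swift
import Foundation

// Geometry behind getCroppedImage(with:): a square of the shorter side,
// centered horizontally and pushed down by the top margin.
let size = CGSize(width: 400, height: 600)   // hypothetical portrait image
let topMargin: CGFloat = 12
let side = min(size.width, size.height)      // 400: square of the shorter side
let x = (size.width - side) / 2              // 0: centered horizontally
let cropRect = CGRect(x: x, y: topMargin, width: side, height: side)
print(cropRect.origin.y, cropRect.width)     // 12.0 400.0
```

Applying cornerRadius + clipsToBounds to the resulting square then yields the circle whose top edge sits 12 pt below the top of the original image.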
I fixed it by cropping the image prior to posting it, using this function:
func cropToBounds(image: UIImage, width: CGFloat, height: CGFloat) -> UIImage {
    let cgimage = image.cgImage!
    let contextImage: UIImage = UIImage(cgImage: cgimage)
    let contextSize: CGSize = contextImage.size

    var posX: CGFloat = 0.0
    var posY: CGFloat = 0.0
    var cgwidth: CGFloat = width
    var cgheight: CGFloat = height

    // See which side is longer and center the square crop on it
    if contextSize.width > contextSize.height {
        posX = (contextSize.width - contextSize.height) / 2
        posY = 0
        cgwidth = contextSize.height
        cgheight = contextSize.height
    } else {
        posX = 0
        posY = (contextSize.height - contextSize.width) / 2
        cgwidth = contextSize.width
        cgheight = contextSize.width
    }

    let rect: CGRect = CGRect(x: posX, y: posY, width: cgwidth, height: cgheight)

    // Create a bitmap image from the context using the rect
    let imageRef: CGImage = cgimage.cropping(to: rect)!

    // Create a new image from imageRef, preserving the original scale and orientation
    let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return image
}
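The branching in cropToBounds boils down to "a centered square of the shorter side". A Foundation-only sketch of the same math, with hypothetical sizes:

```swift
import Foundation

// Equivalent of the branch logic above: a centered square crop rect
// built from the shorter side of the image. Sizes are hypothetical.
func centeredSquareRect(in size: CGSize) -> CGRect {
    let side = min(size.width, size.height)
    return CGRect(x: (size.width - side) / 2,
                  y: (size.height - side) / 2,
                  width: side,
                  height: side)
}

let landscape = centeredSquareRect(in: CGSize(width: 1200, height: 800))
let portrait = centeredSquareRect(in: CGSize(width: 800, height: 1200))
print(landscape.origin.x, landscape.width)  // 200.0 800.0
print(portrait.origin.y, portrait.height)   // 200.0 800.0
```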
I need to build a custom camera view similar to this: example image
I've used AVFoundation and placed a UIImageView over the AVCaptureVideoPreviewLayer, and it looks almost the same (although I'm not sure if this is the right way; that's why I wrote the title like this). I'm capturing the image and saving it to the gallery, but I need only the image in the rectangle in the middle.
Any suggestions how to do it?
Thanks in advance!
Actually the answer from Nishant Bhindi needs some corrections. The code below will do the work:
func cropToBounds(image: UIImage) -> UIImage {
    let contextImage: UIImage = UIImage(cgImage: image.cgImage!)
    let contextSize: CGSize = contextImage.size

    let widthRatio = contextSize.height / UIScreen.main.bounds.size.height
    let heightRatio = contextSize.width / UIScreen.main.bounds.size.width
    let width = (self.imgOverlay?.frame.size.width)! * widthRatio
    let height = (self.imgOverlay?.frame.size.height)! * heightRatio
    let x = (contextSize.width / 2) - width / 2
    let y = (contextSize.height / 2) - height / 2

    let rect = CGRect(x: x, y: y, width: width, height: height)
    let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!
    let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return image
}
You need to crop the image in the context of the overlay image view. Pass the captured image to the function below; that may help you.
func cropToBounds(image: UIImage) -> UIImage {
    let contextImage: UIImage = UIImage(cgImage: image.cgImage!)
    let contextSize: CGSize = contextImage.size

    let widthRatio = contextSize.height / UIScreen.main.bounds.size.width
    let heightRatio = contextSize.width / UIScreen.main.bounds.size.height
    let width = (self.imgOverlay?.frame.size.width)! * widthRatio
    let height = (self.imgOverlay?.frame.size.height)! * heightRatio
    let x = ((self.imgOverlay?.frame.origin.x)!) * widthRatio
    let y = (self.imgOverlay?.frame.origin.y)! * heightRatio

    let rect = CGRect(x: x, y: y, width: height, height: width)
    let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!
    let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return image
}
I am following this discussion and got an array of images which are the divided parts of the original image. If I print it, it looks like this:
[<UIImage: 0x61000008ea60>, {309, 212}, <UIImage: 0x61000008ec90>, {309, 212}, <UIImage: 0x61000008ebf0>, {309, 213}, <UIImage: 0x61000008ec40>, {309, 213}]
How could I use the elements of this array? In the regular case I would have the name of the image and could use, for example, UIImage(named: ""). This array doesn't show me any names. Do you have any idea?
Perhaps my mistake is here:
func setImage() {
    testImage = UIImageView(frame: CGRect(x: 0, y: 0, width: size / 1.5, height: size / 1.5))
    testImage.center = CGPoint(x: view.frame.width / 2, y: view.frame.height / 2)
    slice(image: UIImage(named: "leopard_PNG14834")!, into: 8)
    testImage.image = images[2]
    view.addSubview(testImage)
}
Here is the function code:
func slice(image: UIImage, into howMany: Int) -> [UIImage] {
    let width: CGFloat
    let height: CGFloat
    switch image.imageOrientation {
    case .left, .leftMirrored, .right, .rightMirrored:
        width = image.size.height
        height = image.size.width
    default:
        width = image.size.width
        height = image.size.height
    }

    let tileWidth = Int(width / CGFloat(howMany))
    let tileHeight = Int(height / CGFloat(howMany))
    let scale = Int(image.scale)
    let cgImage = image.cgImage!

    var adjustedHeight = tileHeight
    var y = 0
    for row in 0 ..< howMany {
        if row == (howMany - 1) {
            adjustedHeight = Int(height) - y
        }
        var adjustedWidth = tileWidth
        var x = 0
        for column in 0 ..< howMany {
            if column == (howMany - 1) {
                adjustedWidth = Int(width) - x
            }
            let origin = CGPoint(x: x * scale, y: y * scale)
            let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
            let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
            images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
            x += tileWidth
        }
        y += tileHeight
    }
    return images
}
If you're having a hard time adding your images to your ViewController, this is the common approach you would use to do so, assuming an input of an array of images:
let image = images[0]
let imageView = UIImageView(image: image)
self.view.addSubview(imageView)
EDIT
If you're messing with the frame of the image view, I would suggest doing it after you've initialized the UIImageView with an image like above. To test what's going on, perhaps just add an ImageView with your desired image without messing with the frame. Then check your variables that you are using to set the frame.
You already get an array with objects of type UIImage.
let images = slice(image: someOriginalImage, into: 4)
let firstImage = images[0] // firstImage is a UIImage
{309, 212} is the size of the image.
In my app, I'm displaying an image of a rectangle from the assets library. The image is 100x100 pixels. I'm only using the 1x slot for this asset.
I want to display this image at 300x300 pixels. Doing this using points is quite simple but I can't figure out how to get UIImageView to set the size in pixels.
Alternatively, if I can't set the size in pixels to display, I'd like to get the size in pixels that the image is being displayed.
I have tried using .scale on the UIImageView and UIImage instances, but it's always 1, even though I have set constraints to 150 and 300.
To get size in pixels of UIImageView:
let widthInPixels = imageView.frame.width * UIScreen.main.scale
let heightInPixels = imageView.frame.height * UIScreen.main.scale
To get size in pixels of UIImage:
let widthInPixels = image.size.width * image.scale
let heightInPixels = image.size.height * image.scale
Swift 5
Take a look here:
// This extension renders a view (e.g. an image view) as a UIImage
extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
// This extension resizes an image, preserving its aspect ratio
extension UIImage {
    func resizeImage(targetSize: CGSize) -> UIImage {
        let size = self.size
        let widthRatio = targetSize.width / size.width
        let heightRatio = targetSize.height / size.height
        let newSize = widthRatio > heightRatio
            ? CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
            : CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)

        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
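The newSize computation in resizeImage is plain aspect-fit math, which can be checked without UIKit. The 1000x500 source and 300x300 target below are made-up numbers:

```swift
import Foundation

// Aspect-fit target-size math, as used by resizeImage(targetSize:).
let size = CGSize(width: 1000, height: 500)    // hypothetical source image
let target = CGSize(width: 300, height: 300)   // hypothetical target box
let widthRatio = target.width / size.width     // 0.3
let heightRatio = target.height / size.height  // 0.6
// Picking the smaller ratio keeps the whole image inside the target box;
// this is equivalent to the ternary in the extension above.
let ratio = min(widthRatio, heightRatio)
let newSize = CGSize(width: size.width * ratio, height: size.height * ratio)
print(newSize.width, newSize.height) // 300.0 150.0
```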
// This extension renders the image at 300x300 pixels
extension UIImage {
    func Size300X300() -> UIImage? {
        let imageView = UIImageView()
        imageView.contentMode = .scaleAspectFit
        imageView.frame = CGRect(x: 0, y: 0, width: 300, height: 300)
        imageView.image = self
        let image = imageView.asImage()
        let newImage = image.resizeImage(targetSize: CGSize(width: 300, height: 300))
        return newImage
    }
}
let image = YOURIMAGE.Size300X300()
imageView.image = image!
I'm drawing an image with Core Graphics, then I want to use it as a mask for another image, but I'm just getting a clear rectangular image.
Can you take a look at what's wrong with my cropping function?
UIGraphicsBeginImageContext(CGSizeMake(self.bounds.size.width / 0.5, self.bounds.size.height / 0.5))
let imageCtx = UIGraphicsGetCurrentContext()

CGContextAddArc(imageCtx, CGFloat(self.frame.size.width), CGFloat(self.frame.size.height), 158, 0, CGFloat(DegreesToRadians(Double(angle))), 0)
CGContextSetLineWidth(imageCtx, 80)
CGContextDrawPath(imageCtx, .Stroke)

let myMask: CGImageRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext())!
UIGraphicsEndImageContext()

let maskImage = UIImage(CGImage: myMask)
let testVideoPreview = UIImage(named: "alex")
guard let makedImage = testVideoPreview else { return }

let imageMaskOne: CGImageRef = CGImageMaskCreate(CGImageGetWidth(myMask),
                                                 CGImageGetHeight(myMask),
                                                 CGImageGetBitsPerComponent(myMask),
                                                 CGImageGetBitsPerPixel(myMask),
                                                 CGImageGetBytesPerRow(myMask),
                                                 CGImageGetDataProvider(myMask),
                                                 nil, // decode is nil
                                                 true)!

let masked: CGImageRef = CGImageCreateWithMask(makedImage.CGImage, imageMaskOne)!

// Finished
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 350, height: 350))
imageView.image = UIImage(CGImage: masked)
self.addSubview(imageView)
If you just need to create a crop at specific coordinates, there is an easier way: CGImageCreateWithImageInRect
func CGImageCreateWithImageInRect(_ image: CGImage?, _ rect: CGRect) -> CGImage?
Here's the documentation.
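Note that since Swift 3 the same call is surfaced as the cropping(to:) instance method on CGImage. A minimal sketch (the "photo" asset name is hypothetical):

```swift
import UIKit

// Modern spelling of CGImageCreateWithImageInRect: CGImage.cropping(to:).
// cropping(to:) returns nil if the rect does not intersect the image.
if let cgImage = UIImage(named: "photo")?.cgImage,
   let croppedCG = cgImage.cropping(to: CGRect(x: 0, y: 0, width: 100, height: 100)) {
    let cropped = UIImage(cgImage: croppedCG)
}
```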