Apply a mask to AVCaptureStillImageOutput - iOS

I'm working on a project where I'd like to mask a photo that the user has just taken with their camera. The mask is created at a specific aspect ratio to add letterboxes to a photo.
I can successfully create the image, create the mask, and save both to the camera roll, but I can't apply the mask to the image. Here's the code I have now:
func takePhoto() {
    dispatch_async(self.sessionQueue) { () -> Void in
        if let photoOutput = self.output as? AVCaptureStillImageOutput {
            photoOutput.captureStillImageAsynchronouslyFromConnection(self.outputConnection) { (imageDataSampleBuffer, err) -> Void in
                if err == nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                    let image = UIImage(data: imageData)
                    if let _ = image {
                        let maskedImage = self.maskImage(image!)
                        print("masked image: \(maskedImage)")
                        self.savePhotoToLibrary(maskedImage)
                    }
                } else {
                    print("Error while capturing the image: \(err)")
                }
            }
        }
    }
}

func maskImage(image: UIImage) -> UIImage {
    let mask = createImageMask(image)
    let maskedImage = CGImageCreateWithMask(image.CGImage, mask!)
    return UIImage(CGImage: maskedImage!)
}

func createImageMask(image: UIImage) -> CGImage? {
    let width = image.size.width
    let height = width / CGFloat(store.state.aspect.rawValue)
    let x = CGFloat(0.0)
    let y = (image.size.height - height) / 2
    let maskRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height)
    let maskContents = CGRectMake(x, y, width, height)

    var color = UIColor(white: 1.0, alpha: 0.0)
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(maskRect.size.width, maskRect.size.height), false, 0.0)
    color.setFill()
    UIRectFill(maskRect)
    color = UIColor(white: 0.0, alpha: 1.0)
    color.setFill()
    UIRectFill(maskContents)
    let maskImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    print("mask: \(maskImage)")
    savePhotoToLibrary(image)
    savePhotoToLibrary(maskImage)

    let mask = CGImageMaskCreate(
        CGImageGetWidth(maskImage.CGImage),
        CGImageGetHeight(maskImage.CGImage),
        CGImageGetBitsPerComponent(maskImage.CGImage),
        CGImageGetBitsPerPixel(maskImage.CGImage),
        CGImageGetBytesPerRow(maskImage.CGImage),
        CGImageGetDataProvider(maskImage.CGImage),
        nil,
        false)
    return mask
}
From what I understand, CGImageCreateWithMask requires that the image to be masked has an alpha channel. I've tried everything I've seen here to add an alpha channel to the jpeg representation, but I'm not having any luck. Any help would be super.

This may be a bug, or maybe it's just a bit misleading. CGImageCreateWithMask() doesn't actually modify the image - it just associates the mask data with the image data, and uses the mask when you draw the image to a context (such as in a UIImageView), but not when you save the image to disk.
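In other words, to make the mask "stick" you have to draw the masked image into a context once and capture the result. A minimal sketch in the same Swift 2-era style as the question (the function name is illustrative, and the optionality of the CGImage calls may need adjusting for your SDK):

```swift
func renderMasked(image: UIImage, mask: CGImage) -> UIImage? {
    // associate the mask with the image data
    guard let masked = CGImageCreateWithMask(image.CGImage, mask) else { return nil }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    // drawing is what actually applies the mask; the captured
    // result has it baked into the pixel data
    UIImage(CGImage: masked).drawInRect(CGRect(origin: CGPointZero, size: image.size))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
```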
There are a couple approaches to generating a "rendered" version of the masked image, but if I understand your intent, you don't really want a "mask" ... you want a letter-boxed version of the image.
Here is one option that will effectively draw black bars on the top and bottom of your image (the bars / frame color is an optional parameter, if you don't want black). You can then save the modified image.
In your code above, replace
let maskedImage = self.maskImage(image!)
with
let height = image.size.width / CGFloat(store.state.aspect.rawValue)
let maskedImage = self.doLetterBox(image!, visibleHeight: height)
and add this function:
func doLetterBox(sourceImage: UIImage, visibleHeight: CGFloat, frameColor: UIColor? = UIColor.blackColor()) -> UIImage! {
    // local rect based on sourceImage size
    let imageRect: CGRect = CGRectMake(0.0, 0.0, sourceImage.size.width, sourceImage.size.height)
    // rect for "visible" part of letter-boxed image
    let clipRect: CGRect = CGRectMake(0.0, (imageRect.size.height - visibleHeight) / 2.0, imageRect.size.width, visibleHeight)

    // setup the image context, using sourceImage size
    UIGraphicsBeginImageContextWithOptions(imageRect.size, true, UIScreen.mainScreen().scale)
    let ctx: CGContextRef = UIGraphicsGetCurrentContext()!
    CGContextSaveGState(ctx)

    // fill new empty image with frameColor (defaults to black)
    CGContextSetFillColorWithColor(ctx, frameColor?.CGColor)
    CGContextFillRect(ctx, imageRect)

    // set clipping rectangle to allow drawing only in desired area
    UIRectClip(clipRect)

    // draw the sourceImage at full-image-size (the letter-boxed portion will be clipped)
    sourceImage.drawInRect(imageRect)

    // get new letter-boxed image
    let resultImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()

    // clean up
    CGContextRestoreGState(ctx)
    UIGraphicsEndImageContext()

    return resultImage
}
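On iOS 10 and later the same letter-boxing can be sketched with UIGraphicsImageRenderer instead; this is a rough modern-Swift equivalent of doLetterBox above, not code from the original project:

```swift
import UIKit

// Sketch: letter-box an image by filling with a frame color, clipping
// to the visible band, and drawing the source at full size (iOS 10+).
func letterBoxed(_ sourceImage: UIImage, visibleHeight: CGFloat,
                 frameColor: UIColor = .black) -> UIImage {
    let imageRect = CGRect(origin: .zero, size: sourceImage.size)
    let clipRect = CGRect(x: 0,
                          y: (imageRect.height - visibleHeight) / 2,
                          width: imageRect.width,
                          height: visibleHeight)
    let renderer = UIGraphicsImageRenderer(size: imageRect.size)
    return renderer.image { context in
        frameColor.setFill()
        context.fill(imageRect)
        context.cgContext.clip(to: clipRect)
        sourceImage.draw(in: imageRect)
    }
}
```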

Related

Cropping AVCapturePhoto to overlay rectangle displayed on screen

I am trying to take a picture of a thin piece of metal, cropped to the outline displayed on the screen. I have seen almost every other post on here, but nothing has got it for me yet. This image will then be used for analysis by a library. I can get some cropping to happen, but never to the rectangle displayed. I have tried rotating the image before cropping, and calculating the rect based on the rectangle on screen.
Here is my capture code. PreviewView is the container, videoLayer is for the AVCapture video.
// Photo capture delegate
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard let imgData = photo.fileDataRepresentation(), let uiImg = UIImage(data: imgData), let cgImg = uiImg.cgImage else {
        return
    }
    print("Original image size: ", uiImg.size, "\nCGHeight: ", cgImg.height, " width: ", cgImg.width)
    print("Orientation: ", uiImg.imageOrientation.rawValue)
    guard let img = cropImage(image: uiImg) else {
        return
    }
    showImage(image: img)
}

func cropImage(image: UIImage) -> UIImage? {
    print("Image size before crop: ", image.size)
    // Get the croppedRect from the function below
    let croppedRect = calculateRect(image: image)
    guard let imgRet = image.cgImage?.cropping(to: croppedRect) else {
        return nil
    }
    return UIImage(cgImage: imgRet)
}

func calculateRect(image: UIImage) -> CGRect {
    let originalSize: CGSize
    let visibleLayerFrame = self.rectangleView.bounds
    // Calculate the rect from the rectangleView to translate to the image
    let metaRect = self.videoLayer.metadataOutputRectConverted(fromLayerRect: visibleLayerFrame)
    print("MetaRect: ", metaRect)
    // check orientation
    if image.imageOrientation == UIImage.Orientation.left || image.imageOrientation == UIImage.Orientation.right {
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }
    let cropRect: CGRect = CGRect(x: metaRect.origin.x * originalSize.width, y: metaRect.origin.y * originalSize.height, width: metaRect.size.width * originalSize.width, height: metaRect.size.height * originalSize.height).integral
    print("Calculated Rect: ", cropRect)
    return cropRect
}

func showImage(image: UIImage) {
    if takenImage != nil {
        takenImage = nil
    }
    takenImage = UIImageView(image: image)
    takenImage.frame = CGRect(x: 10, y: 50, width: 400, height: 1080)
    takenImage.contentMode = .scaleAspectFit
    print("Cropped Image Size: ", image.size)
    self.previewView.addSubview(takenImage)
}
And here is something along the lines of what I keep getting.
What am I screwing up?
I managed to solve the issue for my use case.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    // This previewLayer is the AVCaptureVideoPreviewLayer on which resizeAspectFill and portrait videoOrientation have been set.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of this code for my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
The world of UIKit has the TOP LEFT corner as 0,0.
The 0,0 point in the AVFoundation world is the BOTTOM LEFT corner.
So you have to translate by rotating 90 degrees.
That's why your image is bonkers.
Also remember that because of the origin translation the following rules apply:
* X is actually up and down
* Y is actually left and right
* width and height are swapped
Also be aware that the UIImageView content mode setting WILL impact how your image scales. You might want to use .scaleAspectFill and NOT AspectFit if you really want to see how your image looks in the UIView.
I used this code snippet to see what was behind the curtain:
// figure out how to cut/crop this
let realImageRect = AVMakeRect(aspectRatio: image.size, insideRect: (self.cameraPreview?.frame)!)
NSLog("real image rectangle = \(realImageRect.debugDescription)")
The 'cameraPreview' reference above is the control you're using for your AV Capture Session.
Good luck!
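To illustrate the swap described above, here is a hypothetical helper (not from the answer) that transposes a crop rect when the image was captured 90° rotated:

```swift
import UIKit

// Hypothetical sketch: swap x/y and width/height of a crop rect when
// the image's orientation is rotated (.left / .right variants).
func transposedIfNeeded(_ rect: CGRect, for orientation: UIImage.Orientation) -> CGRect {
    switch orientation {
    case .left, .right, .leftMirrored, .rightMirrored:
        return CGRect(x: rect.origin.y, y: rect.origin.x,
                      width: rect.height, height: rect.width)
    default:
        return rect
    }
}
```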

UIImageJPEGRepresentation doubles image resolution

I am trying to save an image coming from the iPhone camera to a file. I use the following code:
try UIImageJPEGRepresentation(toWrite, 0.8)?.write(to: tempURL, options: NSData.WritingOptions.atomicWrite)
This results in a file double the resolution of the toWrite UIImage. I confirmed in the watch expressions that creating a new UIImage from UIImageJPEGRepresentation doubles its resolution:
-> toWrite.size CGSize (width = 3264, height = 2448)
-> UIImage(data: UIImageJPEGRepresentation(toWrite, 0.8)).size CGSize? (width = 6528, height = 4896)
Any idea why this would happen, and how to avoid it?
Thanks
Your initial image has scale factor = 2, but when you init your image from data you will get an image with scale factor = 1. The way to solve it is to control the scale and init the image with the scale property:
#available(iOS 6.0, *)
public init?(data: Data, scale: CGFloat)
Playground code that demonstrates how you can set the scale:
extension UIImage {
    class func with(color: UIColor, size: CGSize) -> UIImage? {
        let rect = CGRect(origin: .zero, size: size)
        UIGraphicsBeginImageContextWithOptions(size, true, 2.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.setFillColor(color.cgColor)
        context.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
let image = UIImage.with(color: UIColor.orange, size: CGSize(width: 100, height: 100))
if let image = image {
    let scale = image.scale
    if let data = UIImageJPEGRepresentation(image, 0.8) {
        if let newImage = UIImage(data: data, scale: scale) {
            debugPrint(newImage.size)
        }
    }
}
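Applied to the original question's code, the fix is just to pass the source image's scale back in when re-inflating the data (toWrite is the name from the question):

```swift
// toWrite has scale 2, so the data holds 2x pixels per point
if let data = UIImageJPEGRepresentation(toWrite, 0.8),
   let restored = UIImage(data: data, scale: toWrite.scale) {
    // restored.size now matches toWrite.size instead of being doubled
    print(restored.size)
}
```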

Image masking fails on iOS10 beta3

For some time now we use the following code to mask a grayscale image without transparency to a coloured image.
This always worked fine until Apple released iOS 10 beta 3. Suddenly the mask is not applied anymore, resulting in just a square box being returned in the given color.
The logic behind this can be found at
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
under the header Masking an Image with an Image Mask
The logic of this code:
* Take a grayscale image without alpha
* Create a solid image with the given color
* Create a mask from the given image
* Mask the solid image with the created mask
* The output is a masked image that also respects the colors in between (gray might become light red, for example)
Has anyone an idea how to fix this function?
If you have Xcode 8 beta 3 you can run this code: on a simulator lower than iOS 10 it will work correctly, and on iOS 10 it will just create a square box.
Example image:
public static func image(maskedWith color: UIColor, imageNamed imageName: String) -> UIImage? {
    guard let image = UIImage(named: imageName)?.withRenderingMode(.alwaysOriginal) else {
        return nil
    }
    guard image.size != CGSize.zero else {
        return nil
    }
    guard
        let maskRef = image.cgImage,
        let colorImage = self.image(with: color, size: image.size),
        let cgColorImage = colorImage.cgImage,
        let dataProvider = maskRef.dataProvider
    else {
        return nil
    }
    guard
        let mask = CGImage(maskWidth: maskRef.width, height: maskRef.height, bitsPerComponent: maskRef.bitsPerComponent, bitsPerPixel: maskRef.bitsPerPixel, bytesPerRow: maskRef.bytesPerRow, provider: dataProvider, decode: nil, shouldInterpolate: true),
        let masked = cgColorImage.masking(mask)
    else {
        return nil
    }
    let result = UIImage(cgImage: masked, scale: UIScreen.main().scale, orientation: image.imageOrientation)
    return result
}
public static func image(with color: UIColor, size: CGSize) -> UIImage? {
    guard size != CGSize.zero else {
        return nil
    }
    let rect = CGRect(origin: CGPoint.zero, size: size)
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main().scale)
    defer {
        UIGraphicsEndImageContext()
    }
    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }
    context.setFillColor(color.cgColor)
    context.fill(rect)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    return image
}
This issue has been solved in iOS10 beta 4.

How to fix the Gradient on text when using an image in swift, as the gradient restarts

I'm trying to create a gradient on text, I have used UIGraphics to use a gradient image to create this. The problem I'm having is that the gradient is restarting. Does anyone know how I can scale the gradient to stretch to the text?
The text is on a wireframe and will be altered a couple of times. Sometimes it will be perfect but other times it is not.
The gradient should go yellow to blue but it restarts see photo below:
import UIKit
func colourTextWithGrad(label: UILabel) {
    UIGraphicsBeginImageContext(label.frame.size)
    UIImage(named: "testt.png")?.drawInRect(label.bounds)
    let myGradient: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    label.textColor = UIColor(patternImage: myGradient)
}
You'll have to redraw the image each time the label size changes.
This is because a pattered UIColor is only ever tiled. From the documentation:
During drawing, the image in the pattern color is tiled as necessary to cover the given area.
Therefore, you'll need to change the image size yourself when the bounds of the label changes – as pattern images don't support stretching. To do this, you can subclass UILabel, and override the layoutSubviews method. Something like this should achieve the desired result:
class GradientLabel: UILabel {
    let gradientImage = UIImage(named: "gradient.png")

    override func layoutSubviews() {
        guard let grad = gradientImage else { // skip re-drawing gradient if it doesn't exist
            return
        }
        // redraw your gradient image
        UIGraphicsBeginImageContext(frame.size)
        grad.drawInRect(bounds)
        let myGradient = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // update text color
        textColor = UIColor(patternImage: myGradient)
    }
}
Although it's worth noting that I'd always prefer to draw a gradient myself – as you can have much more flexibility (say you want to add another color later). Also the quality of your image might be degraded when you redraw it at different sizes (although due to the nature of gradients, this should be fairly minimal).
You can draw your own gradient fairly simply by overriding the drawRect of your UILabel subclass. For example:
override func drawRect(rect: CGRect) {
    // begin new image context to let the superclass draw the text in (so we can use it as a mask)
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    do {
        // get your image context
        let ctx = UIGraphicsGetCurrentContext()
        // flip context
        CGContextScaleCTM(ctx, 1, -1)
        CGContextTranslateCTM(ctx, 0, -bounds.size.height)
        // get the superclass to draw text
        super.drawRect(rect)
    }
    // get image and end context
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // get drawRect context
    let ctx = UIGraphicsGetCurrentContext()
    // clip context to image
    CGContextClipToMask(ctx, bounds, img.CGImage)
    // define your colors and locations
    let colors = [UIColor.orangeColor().CGColor, UIColor.redColor().CGColor, UIColor.purpleColor().CGColor, UIColor.blueColor().CGColor]
    let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]
    // create your gradient
    let grad = CGGradientCreateWithColors(CGColorSpaceCreateDeviceRGB(), colors, locs)
    // draw gradient
    CGContextDrawLinearGradient(ctx, grad, CGPoint(x: 0, y: bounds.size.height*0.5), CGPoint(x: bounds.size.width, y: bounds.size.height*0.5), CGGradientDrawingOptions(rawValue: 0))
}
Output:
Swift 4, as a subclass:
class GradientLabel: UILabel {
    // MARK: - Colors to create gradient from
    @IBInspectable open var gradientFrom: UIColor?
    @IBInspectable open var gradientTo: UIColor?

    override func draw(_ rect: CGRect) {
        // begin new image context to let the superclass draw the text in (so we can use it as a mask)
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        do {
            // get your image context
            guard let ctx = UIGraphicsGetCurrentContext() else { super.draw(rect); return }
            // flip context
            ctx.scaleBy(x: 1, y: -1)
            ctx.translateBy(x: 0, y: -bounds.size.height)
            // get the superclass to draw text
            super.draw(rect)
        }
        // get image and end context
        guard let img = UIGraphicsGetImageFromCurrentImageContext(), img.cgImage != nil else { return }
        UIGraphicsEndImageContext()
        // get drawRect context
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        // clip context to image
        ctx.clip(to: bounds, mask: img.cgImage!)
        // define your colors and locations
        let colors: [CGColor] = [UIColor.orange.cgColor, UIColor.red.cgColor, UIColor.purple.cgColor, UIColor.blue.cgColor]
        let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]
        // create your gradient
        guard let grad = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: colors as CFArray, locations: locs) else { return }
        // draw gradient
        ctx.drawLinearGradient(grad, start: CGPoint(x: 0, y: bounds.size.height*0.5), end: CGPoint(x: bounds.size.width, y: bounds.size.height*0.5), options: CGGradientDrawingOptions(rawValue: 0))
    }
}
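Usage might look like this (a sketch; frame, font, and text are arbitrary):

```swift
let label = GradientLabel(frame: CGRect(x: 0, y: 0, width: 220, height: 44))
label.text = "Gradient text"
label.font = UIFont.boldSystemFont(ofSize: 28)
// draw(_:) is called automatically; request a redraw after changing text
label.setNeedsDisplay()
```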

DrawView save combined image (multiply)

I have 2 UIImageViews - one is at the bottom and shows a default image (like a photo); on the second UIImageView you can draw.
I would like to create a UIImage from both images and save the result as a new image.
How can I do that? I tried with:
func saveImage() {
    print("save combined image")
    let topImage = self.canvasView.image
    let bottomImage = self.backgroundImageView.image
    let size = CGSizeMake(topImage!.size.width, topImage!.size.height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    topImage!.drawInRect(CGRectMake(0, 0, size.width, topImage!.size.height))
    bottomImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(newImage, self, #selector(CanvasViewController.image(_:didFinishSavingWithError:contextInfo:)), nil)
}
But the result is not correct (stretched, and no overlay).
Any ideas?
OK, I found a solution for that: just add "multiply" as the blend mode.
let topImage = self.canvasView.image
let bottomImage = self.backgroundImageView.image
let size = CGSizeMake(bottomImage!.size.width, bottomImage!.size.height)
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
bottomImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height))
topImage!.drawInRect(CGRectMake(0, 0, size.width, bottomImage!.size.height), blendMode: CGBlendMode.Multiply, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
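For reference, the same multiply composite can be sketched with UIGraphicsImageRenderer on iOS 10+ (a rough modern equivalent, not tested against the original views):

```swift
import UIKit

// Sketch: composite a drawing over a photo with the multiply blend mode.
func combined(top: UIImage, bottom: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: bottom.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: bottom.size)
        bottom.draw(in: rect)
        top.draw(in: rect, blendMode: .multiply, alpha: 1.0)
    }
}
```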