OSX UIGraphicsBeginImageContext - ios

I have an iOS app that I am now creating for Mac OSX. I have the code below that converts the image to a size of 1024 and works out the width based on the aspect ratio of the image. This works on iOS but obviously does not on OSX. I am not sure how to create a PNG representation of the NSImage or what I should be using instead of UIGraphicsBeginImageContext. Any suggestions?
Thanks.
var image = myImageView.image
let imageWidth = image?.size.width
let calculationNumber: CGFloat = imageWidth! / 1024.0
let imageHeight = image?.size.height
let newImageHeight = imageHeight! / calculationNumber
UIGraphicsBeginImageContext(CGSizeMake(1024.0, newImageHeight))
image?.drawInRect(CGRectMake(0, 0, 1024.0, newImageHeight))
let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let theImageData: NSData = UIImagePNGRepresentation(resizedImage)
imageFile = PFFile(data: theImageData)

You can use:
let image = NSImage(size: newSize)
image.lockFocus()
//draw your stuff here
image.unlockFocus()
if let data = image.TIFFRepresentation {
    let imageRep = NSBitmapImageRep(data: data)
    let imageData = imageRep?.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:])
    //do something with your PNG data here
}
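For the specific 1024-point resize from the question, a minimal sketch using the newer API names (tiffRepresentation / representation(using:properties:)) might look like this; treat it as a starting point rather than a drop-in solution:

import AppKit

// Sketch: scale an NSImage to a 1024 pt width, keep the aspect ratio,
// and produce PNG data (e.g. for PFFile(data:)).
func pngDataScaledTo1024(from image: NSImage) -> Data? {
    guard image.size.width > 0 else { return nil }
    let factor = image.size.width / 1024.0
    let newSize = NSSize(width: 1024.0, height: image.size.height / factor)

    let resized = NSImage(size: newSize)
    resized.lockFocus()
    image.draw(in: NSRect(origin: .zero, size: newSize))
    resized.unlockFocus()

    guard let tiffData = resized.tiffRepresentation,
          let imageRep = NSBitmapImageRep(data: tiffData) else { return nil }
    return imageRep.representation(using: .png, properties: [:])
}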

My two cents...
A quick extension to draw an image with another, partially overlapping image:
extension NSImage {
    func mergeWith(anotherImage: NSImage) -> NSImage {
        self.lockFocus()
        // draw the base image, then the overlay in the corner
        self.draw(in: CGRect(origin: .zero, size: size))
        let frame2 = CGRect(x: 4, y: 4, width: size.width / 3, height: size.height / 3)
        anotherImage.draw(in: frame2)
        self.unlockFocus()
        return self
    }
}
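Usage might look like this (the image names and myImageView are placeholders):

// Both image names are hypothetical.
if let background = NSImage(named: "background"),
   let badge = NSImage(named: "badge") {
    // mergeWith draws into the receiver and returns it,
    // so `merged` and `background` are the same object.
    let merged = background.mergeWith(anotherImage: badge)
    myImageView.image = merged
}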

Related

How to get thumbnail and original image from UIImagePickerController?

After capturing a photo from the camera, I was compressing the image (to 400 KB and 1 MB); it took almost 3 seconds on an iPhone 6 and less than a second on an iPhone 6s.
Is there any way to get thumbnail and original image without doing manual compression?
Code used for image compression
Extension for UIImage
extension UIImage {
    // MARK: - UIImage+Resize
    func compressTo(_ expectedSizeInMb: Int) -> Data? {
        let sizeInBytes = expectedSizeInMb * 1024 * 1024
        var needCompress: Bool = true
        var imgData: Data?
        var compressingValue: CGFloat = 1.0
        while (needCompress && compressingValue > 0.0) {
            if let data: Data = jpegData(compressionQuality: compressingValue) {
                if data.count < sizeInBytes {
                    needCompress = false
                    imgData = data
                } else {
                    compressingValue -= 0.1
                }
            }
        }
        if let data = imgData {
            if (data.count < sizeInBytes) {
                return data
            }
        }
        return nil
    }
}
usage:
if let imageData = image.compressTo(1) {
    print(imageData)
}
For images saved in the Photos Library, try:
let phAsset = info[UIImagePickerController.InfoKey.phAsset] as! PHAsset
let options = PHImageRequestOptions()
options.deliveryMode = .fastFormat
options.isSynchronous = false
// you can change the target size to any CGSize(width:height:) you want
PHImageManager.default().requestImage(for: phAsset, targetSize: PHImageManagerMaximumSize, contentMode: .default, options: options, resultHandler: { image, _ in
    let thumbnail = image
    // use your thumbnail
})
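If you only need a small thumbnail, you can pass an explicit target size instead of PHImageManagerMaximumSize; a short sketch (the 150 x 150 size is just an example):

let thumbnailSize = CGSize(width: 150, height: 150)
PHImageManager.default().requestImage(for: phAsset,
                                      targetSize: thumbnailSize,
                                      contentMode: .aspectFit,
                                      options: options) { image, _ in
    // `image` is roughly 150 x 150 at most; use it as the thumbnail
}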
For images captured from the camera, you can get the pixel dimensions without recalculating the data count:
let image = info[UIImagePickerController.InfoKey.originalImage] as! UIImage
// pixels are the same on each device's camera
let widthPixels = image.size.width * image.scale
let heightPixels = image.size.height * image.scale
let sizeInBytes = 1024 * 1024
var thumbnail: UIImage! = nil
if Int(widthPixels * heightPixels) > sizeInBytes {
    // assign the custom width and height you need
    let rect = CGRect(x: 0.0, y: 0.0, width: 100, height: 100)
    UIGraphicsBeginImageContextWithOptions(rect.size, false, 1)
    let context = UIGraphicsGetCurrentContext()
    context?.interpolationQuality = .low
    image.draw(in: rect)
    let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    thumbnail = resizedImage
} else {
    thumbnail = image
}

Convert PDF data to UIImage and download into album in Swift iOS

I have a requirement where a REST API returns PDF data as bytes. In the UI, I have to convert that PDF data to an image (.jpg) and save it into the photo album. Is this possible in Swift 4? If so, can you share an example code snippet?
Thanks in advance
You can save your PDF to the documents directory and create a thumbnail of the first page:
import PDFKit

func pdfThumbnail(url: URL, width: CGFloat = 240) -> UIImage? {
    guard let data = try? Data(contentsOf: url),
          let page = PDFDocument(data: data)?.page(at: 0) else {
        return nil
    }
    let pageSize = page.bounds(for: .mediaBox)
    let pdfScale = width / pageSize.width
    // Apply if you're displaying the thumbnail on screen
    let scale = UIScreen.main.scale * pdfScale
    let screenSize = CGSize(width: pageSize.width * scale,
                            height: pageSize.height * scale)
    return page.thumbnail(of: screenSize, for: .mediaBox)
}
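Usage might look like this (the file name and imageView are placeholders):

// Assumes the PDF was previously written to the documents directory.
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let pdfURL = documentsURL.appendingPathComponent("downloaded.pdf")
if let thumbnail = pdfThumbnail(url: pdfURL) {
    imageView.image = thumbnail
}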
Check out PDFKit in the documentation.
You can initialise a PDF document with a Data representation of a PDF (or a PDF file) and then display it in a PDFView. Given PDFView inherits from UIView, all the standard UIView functionality should be there, including methods such as
func UIImageWriteToSavedPhotosAlbum(UIImage, Any?, Selector?, UnsafeMutableRawPointer?)
which should do what it says in its signature!
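Putting that together, a rough sketch (assuming pdfData is the Data returned by your REST call and the user has granted photo library access):

import PDFKit
import UIKit

func savePDFFirstPageToAlbum(pdfData: Data) {
    guard let document = PDFDocument(data: pdfData),
          let page = document.page(at: 0) else { return }

    // Render the first page at its mediaBox size.
    let pageSize = page.bounds(for: .mediaBox).size
    let image = page.thumbnail(of: pageSize, for: .mediaBox)

    // Passing nil for the callback parameters; add a completion selector if you
    // need to know when (or whether) the save finished.
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}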
This code works fine for my question:
GHServiceManager.shared.getPDF(fileName: self.pdfName, success: { (ssdata) in
    let pdfData = ssdata as CFData
    let provider: CGDataProvider = CGDataProvider(data: pdfData)!
    let pdfDoc: CGPDFDocument = CGPDFDocument(provider)!
    let pdfPage: CGPDFPage = pdfDoc.page(at: 1)!
    var pageRect: CGRect = pdfPage.getBoxRect(.mediaBox)
    pageRect.size = CGSize(width: pageRect.size.width, height: pageRect.size.height)
    print("\(pageRect.width) by \(pageRect.height)")
    UIGraphicsBeginImageContext(pageRect.size)
    let context: CGContext = UIGraphicsGetCurrentContext()!
    context.saveGState()
    context.translateBy(x: 0.0, y: pageRect.size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    context.concatenate(pdfPage.getDrawingTransform(.mediaBox, rect: pageRect, rotate: 0, preserveAspectRatio: true))
    context.drawPDFPage(pdfPage)
    context.restoreGState()
    let pdfImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    dukePhotoLibrary?.saveImage(image: pdfImage)
})

Convert Image to CVPixelBuffer for Machine Learning Swift

I am trying to get Apple's sample Core ML Models that were demoed at the 2017 WWDC to function correctly. I am using the GoogLeNet to try and classify images (see the Apple Machine Learning Page). The model takes a CVPixelBuffer as an input. I have an image called imageSample.jpg that I'm using for this demo. My code is below:
var sample = UIImage(named: "imageSample")?.cgImage
let bufferThree = getCVPixelBuffer(sample!)
let model = GoogLeNetPlaces()
guard let output = try? model.prediction(input: GoogLeNetPlacesInput.init(sceneImage: bufferThree!)) else {
    fatalError("Unexpected runtime error.")
}
print(output.sceneLabel)
I am always getting the unexpected runtime error in the output rather than an image classification. My code to convert the image is below:
func getCVPixelBuffer(_ image: CGImage) -> CVPixelBuffer? {
    let imageWidth = Int(image.width)
    let imageHeight = Int(image.height)
    let attributes: [NSObject: AnyObject] = [
        kCVPixelBufferCGImageCompatibilityKey: true as AnyObject,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true as AnyObject
    ]
    var pxbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        imageWidth,
                        imageHeight,
                        kCVPixelFormatType_32ARGB,
                        attributes as CFDictionary?,
                        &pxbuffer)
    if let _pxbuffer = pxbuffer {
        let flags = CVPixelBufferLockFlags(rawValue: 0)
        CVPixelBufferLockBaseAddress(_pxbuffer, flags)
        let pxdata = CVPixelBufferGetBaseAddress(_pxbuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: imageWidth,
                                height: imageHeight,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(_pxbuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
        if let _context = context {
            _context.draw(image, in: CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight))
        } else {
            CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
            return nil
        }
        CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
        return _pxbuffer
    }
    return nil
}
I got this code from a previous StackOverflow post (last answer here). I recognize that the code may not be correct, but I have no idea of how to do this myself. I believe that this is the section that contains the error. The model calls for the following type of input: Image<RGB,224,224>
You don't need to do a bunch of image mangling yourself to use a Core ML model with an image — the new Vision framework can do that for you.
import Vision
import CoreML
let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try handler.perform([request])

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
The WWDC17 session on Vision should have a bit more info — it's tomorrow afternoon.
You can use pure Core ML, but you should resize the image to (224, 224):
DispatchQueue.global(qos: .userInitiated).async {
    // Resnet50 expects an image 224 x 224, so we should resize and crop the source image
    let inputImageSize: CGFloat = 224.0
    let minLen = min(image.size.width, image.size.height)
    let resizedImage = image.resize(to: CGSize(width: inputImageSize * image.size.width / minLen, height: inputImageSize * image.size.height / minLen))
    let croppedToSquareImage = resizedImage.cropToSquare()
    guard let pixelBuffer = croppedToSquareImage?.pixelBuffer() else {
        fatalError()
    }
    guard let classifierOutput = try? self.classifier.prediction(image: pixelBuffer) else {
        fatalError()
    }
    DispatchQueue.main.async {
        self.title = classifierOutput.classLabel
    }
}
// ...
extension UIImage {
    func resize(to newSize: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newSize.width, height: newSize.height), true, 1.0)
        self.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func cropToSquare() -> UIImage? {
        guard let cgImage = self.cgImage else {
            return nil
        }
        var imageHeight = self.size.height
        var imageWidth = self.size.width
        if imageHeight > imageWidth {
            imageHeight = imageWidth
        } else {
            imageWidth = imageHeight
        }
        let size = CGSize(width: imageWidth, height: imageHeight)
        let x = ((CGFloat(cgImage.width) - size.width) / 2).rounded()
        let y = ((CGFloat(cgImage.height) - size.height) / 2).rounded()
        let cropRect = CGRect(x: x, y: y, width: size.width, height: size.height)
        if let croppedCgImage = cgImage.cropping(to: cropRect) {
            return UIImage(cgImage: croppedCgImage, scale: 0, orientation: self.imageOrientation)
        }
        return nil
    }

    func pixelBuffer() -> CVPixelBuffer? {
        let width = self.size.width
        let height = self.size.height
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(width),
                                         Int(height),
                                         kCVPixelFormatType_32ARGB,
                                         attrs,
                                         &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }
        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(resultPixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pixelData,
                                      width: Int(width),
                                      height: Int(height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        context.translateBy(x: 0, y: height)
        context.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return resultPixelBuffer
    }
}
The expected image size for the inputs can be found in the mlmodel file's description in Xcode.
A demo project that uses both pure CoreML and Vision variants you can find here: https://github.com/handsomecode/iOS11-Demos/tree/coreml_vision/CoreML/CoreMLDemo
If the input is a UIImage rather than a URL, and you want to use VNImageRequestHandler, you can use CIImage.
func updateClassifications(for image: UIImage) {
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else { return }
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
}
From Classifying Images with Vision and Core ML
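The handler still has to be run, usually off the main thread; a minimal continuation, reusing the request from the earlier snippet, could be:

DispatchQueue.global(qos: .userInitiated).async {
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}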

Resize a CVPixelBuffer

I'm trying to resize a CVPixelBuffer to a size of 128x128. I'm working with one that is 750x750. I'm currently using the CVPixelBuffer to create a new CGImage, which I resize then convert back into a CVPixelBuffer. Here is my code:
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        let imageRect = CGRect(x: 0, y: 0, width: 128, height: 128)
        if let image = context.createCGImage(ciImage, from: imageRect) {
            let t = CIImage(cgImage: image)
            let new = t.applying(transformation)
            context.render(new, to: pixelBuffer)
            return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: .right)
        }
    }
    return nil
}
I've also tried scaling the CIImage then converting it:
let t = CIImage(cgImage: image)
let transformation = CGAffineTransform(scaleX: 1, y: 2)
let new = t.applying(transformation)
context.render(new, to: pixelBuffer)
But that didn't work either.
Any help is appreciated. Thanks!
There's no need for pixel buffer rendering and the like. Just transform the original CIImage and crop it to size. Cropping is necessary if your source and destination dimensions aren't proportional.
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let srcWidth = CGFloat(ciImage.extent.width)
        let srcHeight = CGFloat(ciImage.extent.height)
        let dstWidth: CGFloat = 128
        let dstHeight: CGFloat = 128
        let scaleX = dstWidth / srcWidth
        let scaleY = dstHeight / srcHeight
        let scale = min(scaleX, scaleY)
        let transform = CGAffineTransform(scaleX: scale, y: scale)
        let output = ciImage.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))
        return UIImage(ciImage: output)
    }
    return nil
}
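If you need an actual 128x128 CVPixelBuffer back (for example to feed a Core ML model) rather than a UIImage, one option is to render the scaled CIImage into a freshly created buffer. A sketch, assuming a BGRA destination format is acceptable:

import CoreImage
import CoreVideo

func resizePixelBuffer(_ pixelBuffer: CVPixelBuffer, width: Int, height: Int) -> CVPixelBuffer? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let scaleX = CGFloat(width) / ciImage.extent.width
    let scaleY = CGFloat(height) / ciImage.extent.height
    let scaled = ciImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Create the destination buffer and render the scaled image into it.
    var output: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA, nil, &output)
    guard status == kCVReturnSuccess, let outputBuffer = output else { return nil }
    CIContext().render(scaled, to: outputBuffer)
    return outputBuffer
}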
Try this
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let resizedCIImage = ciImage.applying(CGAffineTransform(scaleX: 128.0 / 750.0, y: 128.0 / 750.0))
        let context = CIContext()
        if let image = context.createCGImage(resizedCIImage, from: resizedCIImage.extent) {
            return UIImage(cgImage: image)
        }
    }
    return nil
}
Here I assume that the pixel buffer is square with a size of 750x750; you can change this to work with other aspect ratios and sizes.

SWIFT 3 - CGImage to PNG

I am trying to use a color mask to make a color in a JPG image transparent because, as I read, color masking only works with JPGs.
This code works when I apply the color mask and save the image as a JPG, but a JPG has no transparency, so I want to convert the JPG image to a PNG image to keep the transparency. However, when I try to do that, the color mask doesn't work.
Am I doing something wrong, or is this not the right approach?
Here is the code for the two functions:
func callChangeColorByTransparent(_ sender: UIButton) {
    var colorMasking: [CGFloat] = []
    if let textLabel = sender.titleLabel?.text {
        switch textLabel {
        case "Remove Black":
            colorMasking = [0, 30, 0, 30, 0, 30]
        case "Remove Red":
            colorMasking = [180, 255, 0, 50, 0, 60]
        default:
            colorMasking = [222, 255, 222, 255, 222, 255]
        }
    }
    print(colorMasking)
    let newImage = changeColorByTransparent(selectedImage, colorMasking: colorMasking)
    symbolImageView.image = newImage
}

func changeColorByTransparent(_ image: UIImage, colorMasking: [CGFloat]) -> UIImage {
    let rawImage: CGImage = (image.cgImage)!
    //let colorMasking: [CGFloat] = [222,255,222,255,222,255]
    UIGraphicsBeginImageContext(image.size)
    let maskedImageRef: CGImage = rawImage.copy(maskingColorComponents: colorMasking)!
    if let context = UIGraphicsGetCurrentContext() {
        context.draw(maskedImageRef, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        let newImage = UIImage(cgImage: maskedImageRef, scale: image.scale, orientation: image.imageOrientation)
        UIGraphicsEndImageContext()
        let pngImage = UIImage(data: UIImagePNGRepresentation(newImage)!, scale: 1.0)
        return pngImage!
    }
    print("fail")
    return image
}
Thanks for your help.
Thanks to DonMag's answer to my other question, SWIFT 3 - CGImage copy always nil, here is the code that solves this:
func saveImageWithAlpha(theImage: UIImage, destFile: URL) -> Void {
    // odd but works... solution to image not saving with proper alpha channel
    UIGraphicsBeginImageContext(theImage.size)
    theImage.draw(at: CGPoint.zero)
    let saveImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let img = saveImage, let data = UIImagePNGRepresentation(img) {
        try? data.write(to: destFile)
    }
}
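Usage, tying it back to the masked image from the question (the file name is just an example):

let masked = changeColorByTransparent(selectedImage, colorMasking: [222, 255, 222, 255, 222, 255])
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
saveImageWithAlpha(theImage: masked, destFile: documentsURL.appendingPathComponent("masked.png"))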
