I am trying to use a color mask to make a color in a JPG image transparent because, as I read, color masking only works with JPGs.
This code works when I apply the color mask and save the image as a JPG, but a JPG has no transparency, so I want to convert the result to a PNG to keep the transparency. However, when I try to do that, the color mask no longer works.
Am I doing something wrong, or is this simply not the right approach?
Here is the code of the two functions:
func callChangeColorByTransparent(_ sender: UIButton) {
    var colorMasking: [CGFloat] = []
    if let textLabel = sender.titleLabel?.text {
        switch textLabel {
        case "Remove Black":
            colorMasking = [0, 30, 0, 30, 0, 30]
        case "Remove Red":
            colorMasking = [180, 255, 0, 50, 0, 60]
        default:
            colorMasking = [222, 255, 222, 255, 222, 255]
        }
    }
    print(colorMasking)
    let newImage = changeColorByTransparent(selectedImage, colorMasking: colorMasking)
    symbolImageView.image = newImage
}
func changeColorByTransparent(_ image: UIImage, colorMasking: [CGFloat]) -> UIImage {
    let rawImage: CGImage = (image.cgImage)!
    //let colorMasking: [CGFloat] = [222,255,222,255,222,255]
    UIGraphicsBeginImageContext(image.size)
    let maskedImageRef: CGImage = rawImage.copy(maskingColorComponents: colorMasking)!
    if let context = UIGraphicsGetCurrentContext() {
        context.draw(maskedImageRef, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        var newImage = UIImage(cgImage: maskedImageRef, scale: image.scale, orientation: image.imageOrientation)
        UIGraphicsEndImageContext()
        var pngImage = UIImage(data: UIImagePNGRepresentation(newImage)!, scale: 1.0)
        return pngImage!
    }
    print("fail")
    return image
}
Thanks for your help.
Thanks to DonMag's answer on my other question, SWIFT 3 - CGImage copy always nil, here is the code that solves this:
func saveImageWithAlpha(theImage: UIImage, destFile: URL) -> Void {
    // odd but works... solution to image not saving with proper alpha channel
    UIGraphicsBeginImageContext(theImage.size)
    theImage.draw(at: CGPoint.zero)
    let saveImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let img = saveImage, let data = UIImagePNGRepresentation(img) {
        try? data.write(to: destFile)
    }
}
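For completeness, tying the two pieces together might look something like this (a rough sketch; the destination file name is just an example, and selectedImage comes from the question's code):
// Mask out near-white, then persist the result as a PNG so the alpha channel survives.
let masked = changeColorByTransparent(selectedImage, colorMasking: [222, 255, 222, 255, 222, 255])
let destination = FileManager.default.temporaryDirectory.appendingPathComponent("masked.png")
saveImageWithAlpha(theImage: masked, destFile: destination)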
I'm trying to apply filters on images.
Applying the filter works great, but it mirrors the image vertically.
The bottom row of images calls the filter function after init.
The main image at the top gets the filter applied after pressing one of the images at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer.
You can directly get the image from CIContext.
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    let filteredImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return filteredImage
}
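If you do want to keep UIGraphicsImageRenderer, the mirroring is usually explained by Core Image rendering with a bottom-left origin while the UIKit context uses a top-left origin, so flipping the context before drawing should also work. An untested sketch along those lines (the function name is mine):
func applyFilterFlipped(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: .zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    return renderer.image { context in
        // Flip vertically so Core Image's coordinate system lines up with UIKit's.
        context.cgContext.translateBy(x: 0, y: rect.height)
        context.cgContext.scaleBy(x: 1, y: -1)
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
}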
I have a UIImage coming from the server that I need to present in the UI as a monochromatic image in a single given color, which can be arbitrary as well. What's the best way to achieve this?
Currently I am using the following method, which returns a monochromatic image for a given image and color:
fileprivate func monochromaticImage(from image: UIImage, in color: UIColor) -> UIImage {
    guard let img = CIImage(image: image) else {
        return image
    }
    let color = CIColor(color: color)
    guard let outputImage = CIFilter(name: "CIColorMonochrome",
                                     withInputParameters: ["inputImage": img,
                                                           "inputColor": color])?.outputImage else {
        return image
    }
    let context = CIContext()
    if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
        let newImage = UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
        return newImage
    }
    return image
}
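If useful, here is the same approach written against the non-deprecated setValue(_:forKey:) API, with the CIContext reused across calls since creating one is relatively expensive (a sketch, not a drop-in tested replacement):
import CoreImage
import UIKit

private let sharedCIContext = CIContext()  // reused; creating a CIContext per call is costly

fileprivate func monochromaticImage(from image: UIImage, in color: UIColor) -> UIImage {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorMonochrome") else { return image }

    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIColor(color: color), forKey: kCIInputColorKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage,
          let cgImage = sharedCIContext.createCGImage(output, from: output.extent) else {
        return image
    }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}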
I'm trying to crop an image that has been selected and edited through the UIImagePicker. For some reason the picker doesn't return a UIImagePickerControllerEditedImage value. So what I'm trying to do is get the original image and crop it according to the rectangle in UIImagePickerControllerCropRect, but that value is wrapped in an NSValue and I need to convert it to a CGRect to do the cropping. This is my image picker controller function:
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
    //Cut photo or show subview
    let chosenImage: UIImage
    print(editingInfo)
    if let possibleImage = editingInfo!["UIImagePickerControllerEditedImage"] as? UIImage {
        print("Edited image")
        chosenImage = possibleImage
    } else if let possibleImage = editingInfo!["UIImagePickerControllerOriginalImage"] as? UIImage {
        print("Non edited image")
        if let rectangle = editingInfo!["UIImagePickerControllerCropRect"] as? [[Int]] {
            print(rectangle[0])
        }
        let rect: CGRect = CGRectMake(0, 230, 750, 750)
        let imageRef: CGImageRef = CGImageCreateWithImageInRect(possibleImage.CGImage, rect)!
        let image: UIImage = UIImage(CGImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
        chosenImage = image
    } else {
        return
    }
    self.profilePicture.image = chosenImage.rounded?.circle
    dismissViewControllerAnimated(true) { () -> Void in
        //update the server
    }
}
Everything else works fine, except that it crops the image according to the fixed rectangle CGRectMake(0, 230, 750, 750). Instead of those fixed values, I'd like to use the values from UIImagePickerControllerCropRect. Please let me know if you have any suggestions on how to get that.
Thank you!
var rect: CGRect = CGRectMake(0, 230, 750, 750)
if let rectangle = editingInfo!["UIImagePickerControllerCropRect"] as? NSValue {
    rect = rectangle.CGRectValue()
}
let imageRef: CGImageRef = CGImageCreateWithImageInRect(possibleImage.CGImage, rect)!
You used the wrong variable, I suppose.
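For anyone hitting this with a current SDK, the same idea with the updated delegate signature and InfoKey constants might look like this (a sketch; it assumes the image orientation is .up, since the crop rect is reported in the original image's coordinate space):
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    defer { picker.dismiss(animated: true) }

    // Prefer the edited image if the picker provides one.
    if let edited = info[.editedImage] as? UIImage {
        profilePicture.image = edited
        return
    }

    guard let original = info[.originalImage] as? UIImage else { return }
    var chosen = original

    // The crop rect arrives as an NSValue wrapping a CGRect.
    if let rectValue = info[.cropRect] as? NSValue,
       let cgImage = original.cgImage,
       let cropped = cgImage.cropping(to: rectValue.cgRectValue) {
        chosen = UIImage(cgImage: cropped, scale: original.scale, orientation: original.imageOrientation)
    }

    profilePicture.image = chosen
}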
I am trying to generate a QR code using the iOS Core Image API:
func createQRForString(#data: NSData) -> CIImage! {
    var qrFilter = CIFilter(name: "CIQRCodeGenerator")
    qrFilter.setValue(data, forKey: "inputMessage")
    qrFilter.setValue("H", forKey: "inputCorrectionLevel")
    return qrFilter.outputImage
}

func createNonInterpolatedImageFromCIImage(image: CIImage, withScale scale: CGFloat) -> UIImage {
    let cgImage = CIContext(options: nil).createCGImage(image, fromRect: image.extent())
    UIGraphicsBeginImageContext(CGSizeMake(image.extent().size.width * scale, image.extent().size.height * scale))
    let context = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(context, kCGInterpolationNone)
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return scaledImage
}
And the following code is in the viewDidLoad method:
let data = "Hello World".dataUsingEncoding(NSUTF8StringEncoding)
if let image=createQRForString(data: data!){
let uiimage = createNonInterpolatedImageFromCIImage(image, withScale: 1.0)
imageView.image = uiimage
}
else{
println("Error loading image")
}
}
But it neither prints "Error loading image" nor shows the QR code in the imageView.
Here is the solution:
override func viewDidLoad() {
    super.viewDidLoad()
    self.imgView.image = generateCode()
}

func generateCode() -> UIImage {
    let filter = CIFilter(name: "CIQRCodeGenerator")
    let data = "Hello World".dataUsingEncoding(NSUTF8StringEncoding)
    filter.setValue("H", forKey: "inputCorrectionLevel")
    filter.setValue(data, forKey: "inputMessage")
    let outputImage = filter.outputImage
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
    let image = UIImage(CGImage: cgImage, scale: 1.0, orientation: UIImageOrientation.Up)
    let resized = resizeImage(image!, withQuality: kCGInterpolationNone, rate: 5.0)
    return resized
}

func resizeImage(image: UIImage, withQuality quality: CGInterpolationQuality, rate: CGFloat) -> UIImage {
    let width = image.size.width * rate
    let height = image.size.height * rate
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, 0)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(context, quality)
    image.drawInRect(CGRectMake(0, 0, width, height))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized
}
I have an iOS app that I am now creating for Mac OS X. The code below resizes the image to a width of 1024 and works out the height based on the aspect ratio of the image. This works on iOS but obviously does not on OS X. I am not sure how to create a PNG representation of the NSImage, or what I should be using instead of UIGraphicsBeginImageContext. Any suggestions?
Thanks.
var image = myImageView.image
let imageData = UIImagePNGRepresentation(image)
let imageWidth = image?.size.width
let calculationNumber: CGFloat = imageWidth! / 1024.0
let imageHeight = image?.size.height
let newImageHeight = imageHeight! / calculationNumber
UIGraphicsBeginImageContext(CGSizeMake(1024.0, newImageHeight))
image?.drawInRect(CGRectMake(0, 0, 1024.0, newImageHeight))
var resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let theImageData: NSData = UIImagePNGRepresentation(resizedImage)
imageFile = PFFile(data: theImageData)
You can use:
let image = NSImage(size: newSize)
image.lockFocus()
//draw your stuff here
image.unlockFocus()

if let data = image.TIFFRepresentation {
    let imageRep = NSBitmapImageRep(data: data)
    let imageData = imageRep?.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:])
    //do something with your PNG data here
}
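Applied to the resizing logic from the question, that might look roughly like this in current Swift syntax (a sketch; the helper name is mine, and the 1024-point target width comes from the question):
import AppKit

// Resizes an NSImage to a target width (keeping the aspect ratio) and returns
// PNG data, mirroring UIGraphicsBeginImageContext + UIImagePNGRepresentation on iOS.
func pngData(resizing image: NSImage, toWidth newWidth: CGFloat) -> Data? {
    let newHeight = image.size.height * (newWidth / image.size.width)
    let newSize = NSSize(width: newWidth, height: newHeight)

    let resized = NSImage(size: newSize)
    resized.lockFocus()
    image.draw(in: NSRect(origin: .zero, size: newSize))
    resized.unlockFocus()

    guard let tiff = resized.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff) else { return nil }
    return rep.representation(using: .png, properties: [:])
}

// Usage, e.g.: if let data = pngData(resizing: someImage, toWidth: 1024) { imageFile = PFFile(data: data) }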
my two cents...
a quick extension to draw an image with another image partially overlapping it:
extension NSImage {
    func mergeWith(anotherImage: NSImage) -> NSImage {
        self.lockFocus()

        // draw the base image first...
        self.draw(in: CGRect(origin: .zero, size: size))

        // ...then overlap the second image in the lower-left corner
        let frame2 = CGRect(x: 4, y: 4, width: size.width / 3, height: size.height / 3)
        anotherImage.draw(in: frame2)

        self.unlockFocus()
        return self
    }
}
Final effect: the second image appears as a small thumbnail overlapping the lower-left corner of the first.
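Calling the extension is then a one-liner (the image names here are placeholders):
if let base = NSImage(named: "background"), let badge = NSImage(named: "badge") {
    // `merged` is `base` with `badge` drawn as a small thumbnail in its lower-left corner.
    let merged = base.mergeWith(anotherImage: badge)
}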