How to change UIImage color? - ios

I'd like to change the color of every pixel of any UIImage to a specific color (all pixels should get the same color):
... so of course I could just loop through every pixel of the UIImage and set its red, green and blue properties to 0 to achieve a black-color look.
But obviously this is not an effective way to recolor an image, and I'm pretty sure there are several more efficient methods than looping through EVERY single pixel of the image.
func recolorImage(image: UIImage, color: String) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil,
                            width: img.width,
                            height: img.height,
                            bitsPerComponent: 8,
                            bytesPerRow: 4 * img.width,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let data = context.data!.assumingMemoryBound(to: UInt8.self)
    for i in 0..<img.height {
        for j in 0..<img.width {
            // set data[pixel] ==> [0,0,0,255]
        }
    }
    let output = context.makeImage()!
    return UIImage(cgImage: output)
}
Any help would be greatly appreciated!

Since every pixel of the result will be the same color, the output image doesn't depend on the pixels of the original image at all. Your method really only needs the size of the image: it can simply create a new image of that size, filled with a single color.
func recolorImage(image: UIImage, color: UIColor) -> UIImage {
    let size = image.size
    UIGraphicsBeginImageContext(size)
    color.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))
    let result = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return result
}
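On iOS 10+, the same one-color image can also be produced with UIGraphicsImageRenderer, which manages the context (and cleanup) for you. A minimal sketch of that variant, not part of the original answer:

```swift
import UIKit

// Sketch: same idea with UIGraphicsImageRenderer (iOS 10+).
// The output is independent of the source pixels, so only the
// source image's size is needed.
func recolorImage(image: UIImage, color: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        color.setFill()
        // Fill the whole canvas with the single target color.
        context.fill(CGRect(origin: .zero, size: image.size))
    }
}
```

The renderer also respects the screen scale by default, which the older UIGraphicsBeginImageContext call does not.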


Force UIView to draw with specific scale (1 point = 1 pixel)

I'm building a barcode printing feature: it generates a barcode view and then sends the pixel data to a thermal printer. The process is as follows:
Snapshot UIView with size 250x90 (points) to UIImage:
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { rendererContext in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
Get pixels data of output image:
extension UIImage {
    func pixelData() -> [UInt8]? {
        let height = self.size.height
        let width = self.size.width
        let dataSize = width * height
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: UInt32 = 0
        let context = CGContext(data: &pixelData,
                                width: Int(width),
                                height: Int(height),
                                bitsPerComponent: 8,
                                bytesPerRow: Int(width),
                                space: colorSpace,
                                bitmapInfo: bitmapInfo)
        guard let cgImage = self.cgImage else { return nil }
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }
}
Send pixelData to the printer (after some processing to convert it to printer data, like deciding which pixel is black/white/gray...)
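The conversion step mentioned above is typically a threshold pass that turns each 8-bit gray value into a 1-bit black/white decision before the rows are packed for the printer. A minimal sketch of that idea (the 128 cutoff is my assumption, not from the original post, and usually needs tuning per printer):

```swift
// Sketch: threshold 8-bit grayscale pixels into black (true) / white (false).
// Thermal printers generally expect 1 bit per pixel; the cutoff is an
// assumed default, not something from the original question.
func threshold(_ gray: [UInt8], cutoff: UInt8 = 128) -> [Bool] {
    // Values darker than the cutoff print as black.
    return gray.map { $0 < cutoff }
}
```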
The problem is that the size of the output bitmap must be fixed at 250x90 pixels so it fits on the label stamp. But on high-resolution iPhones with a screen scale of 3x, after calling pixelData() with 250x90 as width/height, the output CGImage is downscaled from the original cgImage (because the original has 750x270 pixels). Because of that downscaling, some black areas become gray, and the barcode becomes unrecognizable.
I could pass image.scale into the pixelData() method, but then the pixel data would have the physical size of 750x270 pixels, which is too large to fit on the label stamp.
I also tried this way to create the UIImage, but it still downscales and pixelates the output image:
// force 1.0 scale
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 1.0)
drawHierarchy(in: bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
So the question is:
Can I force the UIView to be drawn at 1x scale, so that 1 point = 1 pixel, and everything after that will work as expected?
Or can I adjust the pixel data generation so that context.draw merges 3 pixels into 1?
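For the first option, UIGraphicsImageRendererFormat does allow pinning the rendering scale, which would make 1 point = 1 pixel regardless of the device's screen scale. A sketch under that assumption (untested against the poster's printer pipeline):

```swift
import UIKit

// Sketch: render the view at exactly 1x, so a 250x90-point view
// produces a 250x90-pixel image even on 2x/3x devices.
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // 1 point = 1 pixel
let renderer = UIGraphicsImageRenderer(bounds: view.bounds, format: format)
let image = renderer.image { _ in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
```

Note that rendering at 1x means the barcode is rasterized at lower resolution, so the view content itself should be drawn crisply at that size (e.g. hairline-exact bar widths) for the result to stay scannable.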

How to decrease png size saved with swift on iPhone?

In my app user can get a photo from the gallery, edit it and then save to documents directory.
Image requirements:
Dimension: 512x512
Size: less than 100 KB
Images are basically WhatsApp stickers.
My files are 250 KB+.
I already tried a lot, and even tried to save the files as PNG-8, like this:
func normalize() -> UIImage {
    let size = CGSize(width: 512, height: 512)
    let genericColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: nil,
                            width: 512,
                            height: 512,
                            bitsPerComponent: 8,
                            bytesPerRow: 4 * 512,
                            space: genericColorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    context?.interpolationQuality = .default
    let destRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    context?.draw(self.cgImage!, in: destRect)
    let tmpThumbImage = context?.makeImage()
    let result = UIImage(cgImage: tmpThumbImage!) //, scale: 1, orientation: .up)
    return result
}
This method makes the images look bad, and the size even increased to 300 KB.
If anyone knows how to deal with it, please help me.
Please note, I don't need JPEG, I need PNG with transparent background
Also, this may be helpful: in my app, I have a few edit tools, like lasso and eraser. All of them are made with UIGraphicsImageContext, like this:
func erase(fromPoint: CGPoint, toPoint: CGPoint) {
    UIGraphicsBeginImageContextWithOptions(lassoImageView.bounds.size, false, 1)
    let context = UIGraphicsGetCurrentContext()
    lassoImageView.layer.render(in: context!)
    context?.move(to: fromPoint)
    context?.addLine(to: toPoint)
    context?.setLineCap(.round)
    context?.setLineWidth(CGFloat(eraserBrushWidth))
    context?.setBlendMode(.clear)
    context?.strokePath()
    lassoImageView.image = UIGraphicsGetImageFromCurrentImageContext()
    croppedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
Maybe something wrong with my UIGraphicsBeginImageContextWithOptions implementation?
Thanks for any help
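One thing worth checking in a setup like the one above: PNG size scales mostly with pixel count and color complexity, and on a 3x device a default-scale context silently produces a 1536x1536 bitmap for a 512x512-point canvas. A hedged sketch that pins the output to exactly 512x512 pixels before encoding (the function name and the use of pngData() here are my illustration, not the poster's code):

```swift
import UIKit

// Sketch: redraw at exactly 512x512 pixels (scale fixed to 1) and
// encode as PNG (keeps transparency, unlike JPEG). On 3x devices the
// default scale would produce a 1536x1536 bitmap, inflating file size.
func stickerPNGData(from image: UIImage) -> Data? {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let size = CGSize(width: 512, height: 512)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
    return resized.pngData()
}
```

If that alone isn't enough, the remaining lever is reducing the number of distinct colors, since iOS's built-in PNG encoder doesn't expose a PNG-8/palette option.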

Convert UIImage to 4 bits color space

I want to make my image as small as possible (in terms of data size, not by changing its height or width), and I can do it at the expense of colors. So I want to have a 4-bit color scheme. How can I convert an image to this color scheme? How can I create it? I started by trying to convert to grayscale with this code:
func convert(image: UIImage) -> UIImage? {
    let imageSize = image.size
    let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let context = CGContext(data: nil,
                            width: Int(imageSize.width),
                            height: Int(imageSize.height),
                            bitsPerComponent: 8,
                            bytesPerRow: 0,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo.rawValue)!
    context.draw(image.cgImage!, in: CGRect(origin: CGPoint.zero, size: imageSize))
    let imgRef = context.makeImage()
    return UIImage(cgImage: imgRef!)
}
and it's working. Now I want to modify it to use a different color space. I found that what I'm looking for is probably an indexed color scheme, right? But I can't find any useful tutorial on how to do this. I started like this:
let table: [UInt8] = [0,0,0,  255,0,0,  0,0,255,  0,255,0,
                      255,255,0,  255,0,255,  0,255,255,  255,255,255]
let colorSpace = CGColorSpace(indexedBaseSpace: CGColorSpaceCreateDeviceRGB(),
                              last: table.count / 3 - 1, // highest color index, not byte index
                              colorTable: table)!
But I'm not sure if I started correctly and whether I should continue. Which values should I then use in the CGContext init method, and so on? Can someone help me? Thanks.
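Bitmap contexts generally can't draw into an indexed color space, so one alternative is to quantize raw RGB bytes against the 8-entry table manually. A minimal nearest-color sketch (the squared-distance metric and function name are my assumptions, not from the post):

```swift
// Sketch: map each RGB pixel (3 bytes) to the index of the nearest
// entry in an RGB color table, using squared Euclidean distance.
// Assumes pixels.count is a multiple of 3.
func quantize(pixels: [UInt8], table: [UInt8]) -> [UInt8] {
    let colorCount = table.count / 3
    var indices: [UInt8] = []
    for p in stride(from: 0, to: pixels.count, by: 3) {
        var best = 0
        var bestDist = Int.max
        for c in 0..<colorCount {
            var dist = 0
            for k in 0..<3 {
                let d = Int(pixels[p + k]) - Int(table[c * 3 + k])
                dist += d * d
            }
            if dist < bestDist {
                bestDist = dist
                best = c
            }
        }
        indices.append(UInt8(best))
    }
    return indices
}
```

With an 8-entry table the indices fit in 3 bits, so two pixels can then be packed per byte to approach the 4-bit goal.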

Reduce UIImage size by making it grayscale

I'm generating a PDF with 10 to 15 images the user has taken. The images are photos of documents and don't need to be colored.
If I simply use the UIImages the user has taken, with a very high compression rate
UIImageJPEGRepresentation(image, 0.02)
the PDF is about 3 MB (with colored images) on an iPhone 6.
To further reduce the file size, I would now like to convert all images to true grayscale (I do want to throw the color information away). I also found this note on GitHub:
Note that iOS/macOS do not support gray scale with alpha (you have to use a RGB image with all 3 set to the same value + alpha to get this affect).
I'm converting the images to grayscale like so:
guard let cgImage = self.cgImage else {
    return self
}
let height = self.size.height
let width = self.size.width
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
let context = CGContext(data: nil,
                        width: Int(width),
                        height: Int(height),
                        bitsPerComponent: 8,
                        bytesPerRow: 0,
                        space: colorSpace,
                        bitmapInfo: bitmapInfo.rawValue)!
let rect = CGRect(x: 0, y: 0, width: width, height: height)
context.draw(cgImage, in: rect)
guard let grayscaleImage = context.makeImage() else {
    return self
}
return UIImage(cgImage: grayscaleImage)
However, when I try to compress the resulting images again with
UIImageJPEGRepresentation(image, 0.02)
I get the following log:
JPEGDecompressSurface : Picture decode failed: e00002c2
and the images are displayed distorted. Any ideas on how I can get a small, true-grayscale image?

I have a png file with no background, how can I create a clear color background for this image in iOS?

So I understand that a UIImage inherently doesn't have a background. As many of us know, a lot of PNG files don't have a background color, which makes it clear. I'm attempting to upload a PNG file that doesn't have a background color, i.e. a clear one. Yes, I know I can set the background color myself in Adobe or Sketch, but I'm assuming that other users don't know how to do this.
Here is a screenshot of the png that I have created:
As you can see, it's just two lines that are unioned together so there's no background set.
Now below is a screenshot of the aftermath of using the imagePicker to choose this png image from my photo roll.
Notice that the area that is supposed to be transparent is actually black. I want to color in the black part and make it actually clearColor instead and keep the green cross as it is. Now, I'm not sure if the black color is actually even a black color because perhaps it's just empty space. Can I fill in the empty black space and turn it into a clear color?
Here's my code right now that isn't working very well:
func overlayImage(image: UIImage, color: UIColor) -> UIImage? {
    let rect = CGRectMake(0, 0, image.size.width, image.size.height)
    let backgroundView = UIView(frame: rect)
    backgroundView.backgroundColor = color
    UIGraphicsBeginImageContext(rect.size)
    let gcSize: CGSize = backgroundView.frame.size
    UIGraphicsBeginImageContext(gcSize)
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    backgroundView.layer.renderInContext(context)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Any help in either obj-C or Swift would be greatly appreciated.
Update: I got rid of the overlay method above and am using the code below, which still doesn't work:
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage!, editingInfo: [NSObject : AnyObject]!) {
    scaleImage(overlayImage(image, color: UIColor.clearColor()))
}

func scaleImageAndAddAugmented(image: UIImage?) {
    let rect = CGRectMake(0, 0, image!.size.width, image!.size.height)
    let backgroundView = UIView(frame: rect)
    backgroundView.backgroundColor = UIColor.clearColor()
    let size = CGSizeApplyAffineTransform(image!.size, CGAffineTransformMakeScale(0.25, 0.25))
    let hasAlpha = false
    let scale: CGFloat = 0.0
    UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
    image!.drawInRect(CGRect(origin: CGPointZero, size: size))
    let context = CGBitmapContextCreate(nil, Int(image!.size.width), Int(image!.size.height), 8, 0, CGColorSpaceCreateDeviceRGB(), CGImageAlphaInfo.PremultipliedLast.rawValue)
    let myGeneratedImage = CGBitmapContextCreateImage(context)
    CGContextDrawImage(context, rect, myGeneratedImage)
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    self.dismissViewControllerAnimated(true, completion: nil)
    // set image below
}
It seems a bit of excess work to use UIGraphicsGetImageFromCurrentImageContext to generate an image from a context when you can just create an image from a file.
As first mentioned, it is required to pass opaque = NO:
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
My bet is it's the image itself, because the whole thing is faded.
Make 100% certain that the passed color is a clear color for:
backgroundView.backgroundColor = color
You could create a bitmap context and use that instead:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             width,
                                             height,
                                             8,
                                             0,
                                             rgbColorSpace,
                                             kCGImageAlphaPremultipliedLast);
Rather than using the current context, you can then use:
myGeneratedImage = CGBitmapContextCreateImage(context)
Drawing is easy as pie:
CGContextDrawImage(realContext, bounds, myGeneratedImage)
Swift code below:
func scaleImage(image: UIImage?) -> UIImage? {
    if let image = image {
        let rect = CGRectMake(0, 0, image.size.width, image.size.height)
        let size: CGSize = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.25, 0.25))
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        image.drawInRect(CGRect(origin: CGPointZero, size: size))
        let context = CGBitmapContextCreate(nil, Int(image.size.width), Int(image.size.height), 8, 0, CGColorSpaceCreateDeviceRGB(), CGImageAlphaInfo.PremultipliedLast.rawValue)
        let myGeneratedImage = CGBitmapContextCreateImage(context)
        CGContextDrawImage(context, rect, myGeneratedImage)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
    return nil
}
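For what it's worth, a modern-Swift equivalent of the same idea can be sketched with UIGraphicsImageRenderer, where opaque = false is what keeps the background transparent (the 0.25 factor is from the question; the rest is my illustration):

```swift
import UIKit

// Sketch: scale an image to 25% while keeping its alpha channel.
// format.opaque = false preserves the PNG's clear background;
// an opaque context would composite it onto black.
func scaleImage(_ image: UIImage) -> UIImage {
    let size = CGSize(width: image.size.width * 0.25,
                      height: image.size.height * 0.25)
    let format = UIGraphicsImageRendererFormat()
    format.opaque = false
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```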
