Converting PNG to JPG with UIImage giving odd result - ios

My UIImage holds a .png image loaded from disk and is used to set the backgroundColor of a background view (as a pattern image). That works.
I now want to convert to .JPG to remove transparency / alpha channel. I have tried a few variations, but here's an example:
var orgImage: UIImage?
var newImage: UIImage?

orgImage = UIImage(contentsOfFile: "...")
if let tmpData = UIImageJPEGRepresentation(orgImage!, 1.0) {
    newImage = UIImage(data: tmpData)
}
However, when using newImage for the background, the image appears zoomed in. I don't want that. I want the same image, simply converted to JPG.
The code I use for this is:
// works
myView.backgroundColor = UIColor(patternImage: orgImage!)
// appears zoomed/scaled
myView.backgroundColor = UIColor(patternImage: newImage!)
I have tried (for testing purposes) doing a PNG-to-PNG conversion (via disk or NSData) and saving/loading to/from disk instead. All tests yielded the same result.
Any idea what I am doing wrong? I am testing in iOS simulator.
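One possible explanation, assuming the original PNG is a @2x/@3x asset: UIImage(contentsOfFile:) honors the scale suffix, but UIImage(data:) decodes at scale 1.0, and UIColor(patternImage:) tiles the image at its point size, so the re-decoded image tiles larger. A sketch that preserves the scale:
// Sketch: re-decode the JPEG data at the original image's scale so the
// pattern tiles at the same point size (assumes the zoom comes from a
// scale mismatch; "..." stands for the file path).
if let orgImage = UIImage(contentsOfFile: "..."),
   let jpegData = UIImageJPEGRepresentation(orgImage, 1.0),
   let newImage = UIImage(data: jpegData, scale: orgImage.scale) {
    myView.backgroundColor = UIColor(patternImage: newImage)
}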

Related

Diagonal transparent lines are added while converting UIView to UIImage

I am trying to convert a UIView containing 2 UIImageViews to a UIImage. Almost everything is working fine, but in the final conversion some transparent diagonal lines appear in the resulting UIImage. I can't understand why this is happening. If someone can help, thanks.
extension UIView {
    /**
     Convert UIView to UIImage
     */
    func toImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        self.drawHierarchy(in: self.bounds, afterScreenUpdates: false)
        let snapshotImageFromMyView = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return snapshotImageFromMyView!
    }
}
Following are the before-saving and after-saving screenshots (images not included here).
I found the problem: my code was converting the resulting UIImage to WebP at 0.70 quality, and that was adding the lines to the image. Saving as JPEG first and then converting to WebP fixed the problem.
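A sketch of that workaround, assuming an encodeToWebP helper as a stand-in for whichever WebP library the project uses (the helper is hypothetical, not a UIKit API):
import UIKit

// Hypothetical stand-in for the project's WebP encoder; not a UIKit API.
func encodeToWebP(_ image: UIImage, quality: CGFloat) -> Data? {
    // ... call into the WebP library here ...
    return nil
}

// Flatten the snapshot through JPEG first, then hand it to the WebP encoder.
// Encoding the raw snapshot straight to WebP at 0.70 is what produced the lines.
func webpData(from snapshot: UIImage, quality: CGFloat = 0.7) -> Data? {
    guard let jpegData = snapshot.jpegData(compressionQuality: 1.0),
          let flattened = UIImage(data: jpegData) else { return nil }
    return encodeToWebP(flattened, quality: quality)
}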

SceneKit UIImage material is black

I'm loading images from the photo library via UIImagePickerController, and with that image I'm setting the material of an SCNPlane:
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFit
plane.firstMaterial?.diffuse.contents = imageView
With some images, this works fine, no problem at all, the image shows up properly. However, with other images, the texture of the plane shows up as entirely black and Xcode prints "Unsupported IOSurface format: 0x00000000". It seems to be only images that were screenshots causing this, although some screenshot images work just fine.
SceneKit can display both PNGs and JPGs, some PNGs just seem to cause issues. Not sure why or whether it's a bug or not. What you can do that's probably better than converting everything to JPGs (so you can retain transparency) is set your material contents to the image's CGImage:
if let cgImage = image.cgImage {
    geometry.firstMaterial?.diffuse.contents = cgImage
}
You can also convert your image to a JPEG via something like this, as SceneKit doesn't seem to have issues with JPGs:
if let jpegData = image.jpegData(compressionQuality: 1.0) {
    image = UIImage(data: jpegData)!
}
You will lose transparency doing it this way however.

How to apply CIFilter to UIView?

According to the Apple docs, the filters property of CALayer is not supported on iOS. However, I have used apps that apply a CIFilter to a UIView, e.g. Splice, Video Editor Videoshow FX for Funimate, and Artisto. That means it is possible to apply a CIFilter to a UIView.
I have used the SCRecorder library and tried to get this done with SCPlayer and SCFilterImageView, but I am facing a black-screen issue when the video plays after applying a CIFilter. Kindly help me complete this task so that I can apply a CIFilter to a UIView and also change the filter by tapping a UIButton.
The technically accurate answer is that a CIFilter requires a CIImage. You can turn a UIView into a UIImage and then convert that into a CIImage, but all Core Image filters that take an image as input (there are some that generate a new image) use a CIImage for both input and output.
Please note that the origin for a CIImage is bottom left, not top left. Basically the Y axis is flipped.
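If the flipped origin bites you (for example when building crop rects from UIKit coordinates), a common fix is a vertical flip with an affine transform; a minimal sketch, assuming the image's extent starts at the origin:
import CoreImage

// Flip a CIImage vertically so its coordinates line up with UIKit's
// top-left origin (assumes extent.origin is .zero).
func flippedVertically(_ image: CIImage) -> CIImage {
    let flip = CGAffineTransform(scaleX: 1, y: -1)
        .translatedBy(x: 0, y: -image.extent.height)
    return image.transformed(by: flip)
}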
If you use Core Image filters dynamically, learn to render into a GLKView - it uses the GPU, whereas a UIImageView uses the CPU.
If you want to test out a filter, it's best to use an actual device. The simulator will give you very poor performance. I've seen a simple blur take nearly a minute where on a device it will be a fraction of a second!
Let's say you have a UIView that you wish to apply a CIPhotoEffectMono to. The steps to do this would be:
1. Convert the UIView into a CIImage.
2. Apply the filter, getting a CIImage as output.
3. Use a CIContext to create a CGImage and then convert that to a UIImage.
Here's a UIView extension that will convert the view and all its subviews into a UIImage:
extension UIView {
    public func createImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
Converting a UIImage into a CIImage is one line of code:
let ciInput = CIImage(image: myView.createImage())
Here's a function that will apply the filter and return a UIImage:
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")

    // convert UIImage to CIImage and set as input
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")

    // get output CIImage, render as CGImage first to retain proper UIImage scale
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)

    return UIImage(cgImage: cgImage!)
}
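Putting the pieces together, usage could look like this (myView and resultImageView are placeholders for your own views):
let snapshot = myView.createImage()                 // 1. UIView -> UIImage
let monoImage = convertImageToBW(image: snapshot)   // 2-3. filter, then render back to UIImage
resultImageView.image = monoImage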

How to compress image size using UIImagePNGRepresentation - iOS?

I'm using UIImagePNGRepresentation to save an image. The resulting image is 30+ KB, which is too big in my case.
I tried using UIImageJPEGRepresentation, which allows compressing the image so it saves at under 5 KB, which is great, but saving it as JPEG gives it a white background, which I don't want (my image is circular, so I need to save it with a transparent background).
How can I compress image size, using UIImagePNGRepresentation?
PNG uses lossless compression, which is why UIImagePNGRepresentation does not accept a compressionQuality parameter like UIImageJPEGRepresentation does. You might get a slightly smaller PNG file with different tools, but nothing like with JPEG.
Maybe this will help you out:
// Recursively halve the image until its PNG representation is under 5 KB.
- (UIImage *)resizeImage:(UIImage *)image {
    NSData *unscaledData = UIImagePNGRepresentation(image);
    if (unscaledData.length > 5000) {
        // If the PNG is larger than 5 KB, halve width and height, keeping proportions.
        UIImage *scaledImage = [self imageWithImage:image
                                           andWidth:image.size.width / 2
                                          andHeight:image.size.height / 2];
        NSData *finalData = UIImagePNGRepresentation(scaledImage);
        if (finalData.length > 5000) {
            return [self resizeImage:scaledImage];
        }
        // scaledImage is your final image
        return scaledImage;
    }
    return image;
}
Resizing the image:
- (UIImage *)imageWithImage:(UIImage *)image andWidth:(CGFloat)width andHeight:(CGFloat)height
{
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    [image drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
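For reference, a rough Swift sketch of the same idea (the 5 KB threshold comes from the question; the function name and loop form are my own):
// Shrink the image until its PNG data fits under maxBytes, keeping transparency.
func pngDataUnder(_ maxBytes: Int, from image: UIImage) -> Data? {
    var current = image
    var data = UIImagePNGRepresentation(current)
    // Halve width and height (keeping proportions) until the PNG fits.
    while let d = data, d.count > maxBytes,
          current.size.width > 1, current.size.height > 1 {
        let half = CGSize(width: current.size.width / 2,
                          height: current.size.height / 2)
        UIGraphicsBeginImageContextWithOptions(half, false, 1)   // opaque = false keeps alpha
        current.draw(in: CGRect(origin: .zero, size: half))
        current = UIGraphicsGetImageFromCurrentImageContext() ?? current
        UIGraphicsEndImageContext()
        data = UIImagePNGRepresentation(current)
    }
    return data
}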

Best performance solution for background image of 100 UIButtons

I have a UICollectionView with approximately 100 cells with a rounded button inside each cell. 5 cells per row, so I have to scroll down and up to select the buttons.
When the buttons are selected, the background image changes. I've done this in several ways, which I describe below. Maybe it is not a very demanding view, but I was wondering which is the least expensive approach in terms of performance.
One solution I found is using an extension of UIImage and setting the button.layer.cornerRadius like this:
extension UIImage {
    class func imageWithColor(color: UIColor?) -> UIImage! {
        let rect = CGRectMake(0.0, 0.0, 1.0, 1.0)
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0)
        let context = UIGraphicsGetCurrentContext()
        if let color = color {
            color.setFill()
        } else {
            UIColor.whiteColor().setFill()
        }
        CGContextFillRect(context, rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
and then, setting the button image background with:
button.layer.cornerRadius = (cell.bounds.width - 8) / 2
button.clipsToBounds = true
button.setBackgroundImage(UIImage.imageWithColor(UIColor.greenColor()), forState: UIControlState.Selected)
I've heard that setting the layer.cornerRadius is pretty expensive.
Another approach would be designing a square image, in Photoshop or similar, with a circle in the middle, leaving the rest transparent, and setting it as the button background.
Another option, which I still haven't tried, could be making a 1 x 1 pixel image of a solid color and tiling it as the background fill (I haven't checked the code for this one). But I think this is pretty similar to the first approach.
Would you settle this question by measuring the performance with profiling tools, or just from deeper knowledge of Swift?
Try applying a mask to the image. It's better because you do it only once per image, so it should not affect scroll performance. All you need is a mask image: a square image with a white background and a black circle in the middle. Here you can find an example (Obj-C). Swift:
extension UIImage {
    func maskedImage(mask: UIImage) -> UIImage {
        let maskImgRef = mask.CGImage
        let maskRef = CGImageMaskCreate(CGImageGetWidth(maskImgRef), CGImageGetHeight(maskImgRef), CGImageGetBitsPerComponent(maskImgRef), CGImageGetBitsPerPixel(maskImgRef), CGImageGetBytesPerRow(maskImgRef), CGImageGetDataProvider(maskImgRef), nil, false)
        if let maskedRef = CGImageCreateWithMask(self.CGImage, maskRef) {
            let maskedIm = UIImage(CGImage: maskedRef)
            UIGraphicsBeginImageContext(maskedIm.size)
            maskedIm.drawInRect(CGRect(origin: CGPointZero, size: maskedIm.size))
            let img = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return img
        }
        return self
    }
}
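Usage could look like this, in the same Swift 2 style ("circle_mask" and "green_square" are placeholder asset names):
let mask = UIImage(named: "circle_mask")!       // square, white background, black circle in the middle
let square = UIImage(named: "green_square")!
button.setBackgroundImage(square.maskedImage(mask), forState: .Selected)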
UPD: the code above is helpful if your images aren't monochrome. If they are monochrome, you can use UIButtons of system type instead of image views: just disable userInteraction, set the circled image on those buttons, and manipulate tintColor.

Resources