As far as I know, CGImage contains bitmap data, but how is that different from UIImage? What does UIImage store that CGImage doesn't?
Do UIImage -> CGImage and CGImage -> UIImage conversions affect performance?
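For concreteness, a minimal sketch of the extra state a UIImage carries on top of the raw bitmap (the "photo" asset name is an assumption):

import UIKit

// "photo" is a hypothetical asset. UIImage wraps the CGImage bitmap and adds
// drawing metadata such as scale and orientation on top of it.
if let ui = UIImage(named: "photo"), let cg = ui.cgImage {
    print(ui.scale, ui.imageOrientation.rawValue)  // stored by UIImage, not CGImage
    // Dropping to CGImage loses that metadata unless you pass it back explicitly:
    let rebuilt = UIImage(cgImage: cg, scale: ui.scale, orientation: ui.imageOrientation)
    print(rebuilt.size == ui.size)                 // true when metadata is preserved
}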
I am trying to convert a UIImage to a CIImage, but the conversion increases memory usage by 80-100 MB. Is this a memory leak? If so, is there any way to reduce it when converting UIImage to CIImage?
Here's my code:
extension UIImage {
    func toCIImage() -> CIImage {
        // Note: cgImage is nil when the UIImage is CIImage-backed,
        // so this force unwrap can crash for such images.
        return CIImage(cgImage: self.cgImage!)
    }
}
Also, I am looking for a better approach that wouldn't cause a memory spike when converting UIImage to CIImage.
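Not from the original question, but if the conversion runs repeatedly (e.g. once per frame), one common mitigation for peak-memory spikes is wrapping each pass in an autoreleasepool so intermediates are released promptly; a hedged sketch:

// Hypothetical batch helper: the autoreleasepool keeps autoreleased
// bitmap intermediates from accumulating across iterations.
func convertAll(_ images: [UIImage]) -> [CIImage] {
    var results: [CIImage] = []
    for image in images {
        autoreleasepool {
            if let cg = image.cgImage {
                results.append(CIImage(cgImage: cg))
            }
        }
    }
    return results
}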
I was reading an article about Core Image where I saw the following lines:
if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let cgimgresult = context.createCGImage(output, from: output.extent)
    let result = UIImage(cgImage: cgimgresult!)
    imageView?.image = result
}
As you can see, the CIImage instance is first converted into a CGImage instance and only then into a UIImage one. After doing some research I found out that it had something to do with the scale of the image within the image view's bounds.
I wonder: is that (getting the right scale for display purposes) the only reason we need all those conversions, given that UIImage already has an initializer that takes a CIImage as an argument?
The UIImage reference documentation for that initializer says:
An initialized UIImage object. In Objective-C, this method returns nil if the ciImage parameter is nil.
and as @matt wrote here:
UIImage's CIImage is not nil only if the UIImage is backed by a CIImage already (e.g. because it was generated by imageWithCIImage:).
So the direct initializer
UIImage(ciImage: ciImage)
produces a UIImage whose cgImage property is nil, i.e. an image with no bitmap backing.
That's why we should initialize the UIImage via a CGImage, not directly from the CIImage.
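A quick sketch of the difference (the solid-color CIImage is just a stand-in for any filter output):

let ci = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 8, height: 8))

let direct = UIImage(ciImage: ci)
print(direct.cgImage == nil)        // true: backed only by the CIImage

// Rendering through a CIContext produces a bitmap-backed UIImage instead:
let context = CIContext()
if let cg = context.createCGImage(ci, from: ci.extent) {
    let rendered = UIImage(cgImage: cg)
    print(rendered.cgImage != nil)  // true: safe for display, PNG export, etc.
}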
According to the Apple docs, the filters property of CALayer is not supported on iOS. Yet I have used apps that apply a CIFilter to a UIView, e.g. Splice, Video Editor Videoshow FX for Funimate, and Artisto. That means it must be possible to apply a CIFilter to a UIView.
I have used the SCRecorder library and tried to get this done with SCPlayer and SCFilterImageView, but I am facing a black-screen issue when the video plays after the CIFilter is applied. Kindly help me complete this task so that I can apply a CIFilter to a UIView and also change the filter by tapping a UIButton.
The technically accurate answer is that a CIFilter requires a CIImage. You can turn a UIView into a UIImage and then convert that into a CIImage, but all Core Image filters that take an image as input (some only generate a new image) use a CIImage for input and output.
Please note that the origin of a CIImage is the bottom left, not the top left; in other words, the Y axis is flipped.
If you use Core Image filters dynamically, learn to render into a GLKView - it renders on the GPU, whereas a UIImageView renders on the CPU.
If you want to test out a filter, it's best to use an actual device. The simulator will give you very poor performance. I've seen a simple blur take nearly a minute where on a device it will be a fraction of a second!
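A minimal sketch of the GLKView route (standard GLKit/Core Image calls; GLKit is deprecated on recent systems, but this matches the advice above):

import GLKit

// filteredImage stands in for your CIFilter output; a solid color here.
let filteredImage = CIImage(color: CIColor(red: 0, green: 0, blue: 1))
    .cropped(to: CGRect(x: 0, y: 0, width: 320, height: 320))

let eaglContext = EAGLContext(api: .openGLES2)!
let glkView = GLKView(frame: CGRect(x: 0, y: 0, width: 320, height: 320),
                      context: eaglContext)
// A CIContext built on the EAGLContext renders on the GPU.
let ciContext = CIContext(eaglContext: eaglContext)

glkView.bindDrawable()
let destRect = CGRect(x: 0, y: 0,
                      width: glkView.drawableWidth, height: glkView.drawableHeight)
ciContext.draw(filteredImage, in: destRect, from: filteredImage.extent)
glkView.display()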
Let's say you have a UIView that you wish to apply a CIPhotoEffectMono to. The steps to do this would be:
Convert the UIView into a CIImage.
Apply the filter, getting a CIImage as output.
Use a CIContext to create a CGImage and then convert that to a UIImage.
Here's a UIView extension that will convert the view and all its subviews into a UIImage:
extension UIView {
    public func createImage() -> UIImage {
        // Opaque context at scale 1, sized to the view's frame.
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        // Render the view's layer (and all sublayers) into the context.
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
Converting a UIImage into a CIImage is one line of code:
let ciInput = CIImage(image: myView.createImage())
Here's a function that will apply the filter and return a UIImage:
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    // Convert the UIImage to a CIImage and set it as the filter input.
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: kCIInputImageKey)
    // Get the output CIImage; render it as a CGImage first to retain a proper
    // bitmap backing (and scale) in the resulting UIImage.
    let ciOutput = filter!.outputImage!
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput, from: ciOutput.extent)!
    return UIImage(cgImage: cgImage)
}
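Putting the pieces together (myView and imageView are assumed to exist in the view controller):

let viewImage = myView.createImage()               // UIView -> UIImage
let monoImage = convertImageToBW(image: viewImage) // filter applied
imageView.image = monoImage                        // display the result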
I want to crop a UIImage (not a UIImageView) before doing pixel operations on it. Is there a reliable way to do this in the iOS frameworks?
Here are the related methods in Android that I am using:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap, int, int, int, int)
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(int, int, android.graphics.Bitmap.Config)
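For comparison with those Android createBitmap calls, here is a hedged Swift sketch of the crop itself; note that CGImage.cropping(to:) works in pixels, so the rect has to be scaled by the image's scale factor:

func cropped(_ image: UIImage, to rect: CGRect) -> UIImage? {
    // Convert the rect from points to pixels; CGImage has no notion of scale.
    let pixelRect = CGRect(x: rect.origin.x * image.scale,
                           y: rect.origin.y * image.scale,
                           width: rect.size.width * image.scale,
                           height: rect.size.height * image.scale)
    guard let cg = image.cgImage?.cropping(to: pixelRect) else { return nil }
    // Reattach the scale and orientation that the raw CGImage lost.
    return UIImage(cgImage: cg, scale: image.scale, orientation: image.imageOrientation)
}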
-(UIImage*)scaleToSize:(CGSize)size image:(UIImage*)image
{
    UIGraphicsBeginImageContext(size);
    // Draw the scaled image in the current context
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Create a new image from current context
    UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack
    UIGraphicsEndImageContext();
    // Return our new scaled image
    return scaledImage;
}
I would advise you to look at:
The Core Animation Apple framework to create an image context, draw into it, and keep the result as a UIImage in memory or write it to disk (see the sketch after this list).
The Core Image Apple framework if you want a powerful and fully customizable image-manipulation solution.
A third-party library to crop and resize images: CocoaPods is a great way to quickly integrate that kind of library. Here is a list of some interesting image-manipulation pods.
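For the first option, a minimal sketch of the draw-into-a-context approach using UIGraphicsImageRenderer (sourceImage and the 100x100 target size are assumptions):

let targetSize = CGSize(width: 100, height: 100)
let renderer = UIGraphicsImageRenderer(size: targetSize)
let resized = renderer.image { _ in
    // Drawing into the renderer's context; the closure's result is a UIImage.
    sourceImage.draw(in: CGRect(origin: .zero, size: targetSize))
}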
This worked for me:
-(UIImage *)cropCanvas:(UImage *)input x1:(int)x1 y1:(int)y1 x2:(int)x2 y2:(int)y2{
CGRect rect = CGRectMake(x1, y1, input.size.width-x2-x1, input.size.height-y2-y1);
CGImageRef imageref = CGImageCreateWithImageInRect([input CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageref];
return img;
}
Running into a problem getting a PNG representation for a UIImage after having rotated it with CIAffineTransform. First, I have a category on UIImage that rotates an image 90 degrees clockwise. It seems to work correctly when I display the rotated image in a UIImageView.
-(UIImage *)cwRotatedRepresentation
{
    // Not very precise, stop yelling at me.
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-(6.28 / 4.0));
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)]
              forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    return [[UIImage alloc] initWithCIImage:result];
}
However, when I try to actually get a PNG for the newly rotated image, UIImagePNGRepresentation returns nil.
-(NSData *)getPNG
{
    UIImage *myImg = [UIImage imageNamed:@"canada"];
    myImg = [myImg cwRotatedRepresentation];
    NSData *d = UIImagePNGRepresentation(myImg);
    // d == nil :(
    return d;
}
Is Core Image overwriting the PNG headers or something? Is there a way around this behavior, or a better means of achieving the desired result of a PNG representation of a UIImage rotated 90 degrees clockwise?
Not yelling, but -M_PI_2 will give you the constant you want (an exact quarter turn) with maximum precision :)
The only other thing I see is that you probably want to use [result extent] instead of [inputImage extent], unless your image is known to be square.
Not sure how that would cause UIImagePNGRepresentation to fail, though. One other thought: you create a CGImage but then initialize the UIImage from the CIImage, which leaves it without bitmap backing; using initWithCGImage: with the CGImage you already created should give better results.
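Putting those fixes together, a hedged Swift sketch of the corrected rotation (exact -.pi / 2, the output's extent, and a CGImage-backed UIImage so PNG encoding works):

extension UIImage {
    func cwRotated() -> UIImage? {
        guard let cg = cgImage else { return nil }
        let input = CIImage(cgImage: cg)
        // Exact quarter-turn clockwise instead of the 6.28 / 4.0 approximation.
        let output = input.transformed(by: CGAffineTransform(rotationAngle: -.pi / 2))
        let context = CIContext()
        // Render from the *output* extent, and back the UIImage with a CGImage
        // so UIImagePNGRepresentation has bitmap data to encode.
        guard let rotated = context.createCGImage(output, from: output.extent) else {
            return nil
        }
        return UIImage(cgImage: rotated)
    }
}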