How to prevent memory leakage while converting from UIImage to CIImage - iOS

I am trying to convert a UIImage to a CIImage, but doing so increases memory usage by 80-100 MB. Is this a memory leak? If so, is there any way to reduce it when converting a UIImage to a CIImage?
Here's my code:
extension UIImage {
    func toCIImage() -> CIImage {
        return CIImage(cgImage: self.cgImage!)
    }
}
Also, I am looking for a better solution that wouldn't cause a memory spike when converting a UIImage to a CIImage.
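A commonly suggested mitigation, sketched below with placeholder names (`imageURL`, `someUIImage`): if the image originates from a file, build the CIImage straight from the URL so no separate UIImage/CGImage bitmap has to be decoded first; if you must start from a UIImage, wrap the conversion in an autoreleasepool so temporary buffers are released promptly.
// Sketch only - `imageURL` and `someUIImage` are placeholders.
// Option 1: skip the UIImage/CGImage copy entirely when the source is a file.
let directCIImage = CIImage(contentsOf: imageURL)

// Option 2: convert from a UIImage, draining temporaries right away.
var ciImage: CIImage?
autoreleasepool {
    if let cgImage = someUIImage.cgImage {
        ciImage = CIImage(cgImage: cgImage)
    } else {
        ciImage = someUIImage.ciImage   // the UIImage may already be CIImage-backed
    }
}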

Related

The reason for converting an instance of CIImage into an instance of CGImage and only then into UIImage

I was reading an article about Core Image where I saw the following lines:
if let output = filter?.valueForKey(kCIOutputImageKey) as? CIImage {
    let cgimgresult = context.createCGImage(output, fromRect: output.extent)
    let result = UIImage(CGImage: cgimgresult)
    imageView?.image = result
}
As you can see, the CIImage instance is first converted into a CGImage instance and only then into a UIImage one. After doing some research I found out that it had something to do with the scale of the image within the image view's bounds.
I wonder: is that the only reason (having the right scale for display purposes) why we need to do all those conversions, given that there is already an initializer for UIImage that takes an instance of CIImage as an argument?
The UIImage reference says:
An initialized UIImage object. In Objective-C, this method returns nil if the ciImage parameter is nil.
and as @matt wrote here:
UIImage's CIImage is not nil only if the UIImage is backed by a CIImage already (e.g. because it was generated by imageWithCIImage:).
So a UIImage created with the direct init
UIImage(ciImage: ciImage)
is backed only by the CIImage, and its cgImage property is nil.
That's why we should init the UIImage via a CGImage, not a CIImage.
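For completeness, a minimal sketch of that recommended path in current Swift (`ciImage` and `imageView` are placeholders): render through a CIContext so the resulting UIImage is backed by a CGImage and carries an explicit scale.
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    // A CGImage-backed UIImage with the screen scale displays at the expected size.
    imageView.image = UIImage(cgImage: cgImage, scale: UIScreen.main.scale, orientation: .up)
}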

Difference between CGImage and UIImage. How costly is the conversion?

As far as I know, CGImage contains bitmap data, but how is that different from UIImage? What does UIImage store that CGImage doesn't?
Do UIImage -> CGImage and CGImage -> UIImage conversions affect performance?
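For a rough illustration (a sketch; `uiImage` is a placeholder): a CGImage-backed UIImage is essentially the CGImage bitmap plus metadata such as scale and orientation, so the round trip below does not copy pixel data.
// UIImage -> CGImage: exposes the backing bitmap (nil if the image is CIImage-backed)
let cgImage = uiImage.cgImage

// CGImage -> UIImage: wraps the same bitmap and attaches scale/orientation metadata
if let cgImage = cgImage {
    let wrapped = UIImage(cgImage: cgImage, scale: uiImage.scale, orientation: uiImage.imageOrientation)
}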

How to apply CIFilter to UIView?

According to the Apple docs, the filters property of CALayer is not supported on iOS. Yet I have used apps that apply a CIFilter to a UIView, i.e. Splice, Video Editor Videoshow FX for Funimate and artisto. That means it is possible to apply a CIFilter to a UIView.
I have used the SCRecorder library and tried to get this task done with SCPlayer and SCFilterImageView, but I am facing a black-screen issue when the video plays after applying the CIFilter. So kindly help me complete this task so that I can apply a CIFilter to a UIView and also change the filter by tapping a UIButton.
The technically accurate answer is that a CIFilter requires a CIImage. You can turn a UIView into a UIImage and then convert that into a CIImage, but all Core Image filters that use an image for input (there are some that only generate a new image) use a CIImage for input and output.
Please note that the origin for a CIImage is bottom left, not top left. Basically the Y axis is flipped.
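If that flip matters for your case, one way to compensate is a vertical-flip transform before filtering (a sketch; `ciImage` is a placeholder):
// Flip the Y axis: scale by -1 vertically, then shift the result back into the original extent.
let flipped = ciImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1)
    .translatedBy(x: 0, y: -ciImage.extent.height))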
If you use Core Image filters dynamically, learn to render into a GLKView - it uses the GPU, where a UIImageView uses the CPU.
If you want to test out a filter, it's best to use an actual device. The simulator will give you very poor performance. I've seen a simple blur take nearly a minute where on a device it will be a fraction of a second!
Let's say you have a UIView that you wish to apply a CIPhotoEffectMono to. The steps to do this would be:
Convert the UIView into a CIImage.
Apply the filter, getting a CIImage as output.
Use a CIContext to create a CGImage and then convert that to a UIImage.
Here's a UIView extension that will convert the view and all its subviews into a UIImage:
extension UIView {
    public func createImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
Converting a UIImage into a CIImage is one line of code:
let ciInput = CIImage(image: myView.createImage())
Here's a function that will apply the filter and return a UIImage:
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    // convert UIImage to CIImage and set as input
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")
    // get output CIImage, render as CGImage first to retain proper UIImage scale
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
    return UIImage(cgImage: cgImage!)
}
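Putting the pieces together might look like this (`myView` and `imageView` are placeholders):
let snapshot = myView.createImage()               // UIView -> UIImage
let filtered = convertImageToBW(image: snapshot)  // UIImage -> CIImage -> CIPhotoEffectMono -> UIImage
imageView.image = filtered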

Memory leak with renderInContext and UIImagePNGRepresentation

I'm seeing a large memory leak when creating images by rendering a non-visible view into a context. I've reduced it down to the most basic implementation and have determined that two lines of code are contributing to the leak: renderInContext and UIImagePNGRepresentation. If I comment both out, no leak occurs; if one of them is uncommented, a leak occurs; if both are uncommented, two leaks occur. Each time the method below is invoked, memory usage increases significantly (as expected), then after a moment it decreases but settles ~0.8 MB higher than before the invocation.
How can I resolve this to ensure there are no memory leaks?
public class func imageDataForSymbol(symbol: String) -> NSData? {
    var imageData: NSData!
    let dimension = 180
    let label = UILabel(frame: CGRectMake(0, 0, CGFloat(dimension), CGFloat(dimension)))
    label.text = symbol
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
    let bitmapContext = CGBitmapContextCreate(nil, dimension, dimension, 8, 0, colorSpace, bitmapInfo)!
    label.layer.renderInContext(bitmapContext) //FIXME: causing leak!!
    let cgImage = CGBitmapContextCreateImage(bitmapContext)!
    let image = UIImage(CGImage: cgImage)
    imageData = UIImagePNGRepresentation(image)! //FIXME: causing leak!!
    return imageData
}
To test it, in viewDidAppear:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
        NSData *d = [ImageGenerator imageDataForSymbol:@"W"];
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"triggered");
        });
    });
});
If there is a better way to create NSData for an image of a UILabel's layer, I'm all for it. I could not think of a different way to obtain it, though, other than going from CGImage to CIImage, then from CIImage to UIImage, then from UIImage to NSData. Note that it doesn't need to be fast, but it does need to create the image on a background thread so the UI remains responsive to additional input.
pair CGColorSpaceCreateDeviceRGB with CGColorSpaceRelease
pair CGBitmapContextCreate with CGContextRelease
pair CGBitmapContextCreateImage with CGImageRelease
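Those Release functions exist only in C/Objective-C; in Swift these Core Graphics objects are released automatically when they go out of scope. If the code stays in Swift, a commonly suggested additional mitigation (a sketch, not a guaranteed fix) is to wrap each pass in an explicit autoreleasepool so the temporary UIImage and NSData produced by renderInContext and UIImagePNGRepresentation are drained right away:
public class func imageDataForSymbol(symbol: String) -> NSData? {
    var imageData: NSData?
    // Draining the pool after each pass releases the temporary objects created
    // internally by renderInContext and UIImagePNGRepresentation.
    autoreleasepool {
        let dimension = 180
        let label = UILabel(frame: CGRectMake(0, 0, CGFloat(dimension), CGFloat(dimension)))
        label.text = symbol
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
        guard let bitmapContext = CGBitmapContextCreate(nil, dimension, dimension, 8, 0, colorSpace, bitmapInfo) else { return }
        label.layer.renderInContext(bitmapContext)
        guard let cgImage = CGBitmapContextCreateImage(bitmapContext) else { return }
        let image = UIImage(CGImage: cgImage)
        imageData = UIImagePNGRepresentation(image)
    }
    return imageData
}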

How to render an image with effect faster with UIKit

I'm making an iOS app in which a lot of pictures are switched across several UIImageViews (a loop sets the image property of each UIImageView from a bunch of images). Sometimes some of the images need a graphic effect, say multiplication.
The easiest way is to use a CIFilter, but the problem is that CALayer on iOS doesn't support the filters property, so you need to apply the effect to the images before you set the image property. This is really too slow when you refresh the screen very frequently.
So next I tried to use Core Graphics directly, doing the multiplication with a UIGraphics context and kCGBlendModeMultiply. This is much faster than using a CIFilter, but since you have to apply the multiplication before rendering the image, you can still feel that the program renders images with the multiplication effect more slowly than normal images.
My guess is that the fundamental problem with both approaches is that the effect is processed on the GPU, the result image is read back by the CPU, and then the result is rendered with the GPU again, so the data transfer between CPU and GPU wastes a lot of time. I therefore tried changing the superclass from UIImageView to UIView, moving the Core Graphics code into drawRect, and calling setNeedsDisplay in didSet whenever I set the image property. But this doesn't work well either: every time setNeedsDisplay is called the program becomes much slower, even slower than using a CIFilter, probably because several views are displaying at once.
I guess I could probably fix this with OpenGL, but I'm wondering whether I can solve it with UIKit only?
As far as I understand, you have to apply the same changes to different images, so the initial setup time is not critical for you, but each image should be processed as quickly as possible. First of all, it is critical to generate the new images on a background queue/thread.
There are two good ways to quickly process/generate images:
Use CIFilter from CoreImage
Use GPUImage library
If you use Core Image, check that you use CIFilter and CIContext properly. Creating a CIContext takes quite a lot of time, but it can be SHARED between different CIFilters and images - so you should create the CIContext only once! A CIFilter can also be SHARED between different images, but since it is not thread-safe you should have a separate CIFilter for each thread.
In my code I have the following:
+ (UIImage*)roundShadowImageForImage:(UIImage*)image {
    static CIContext *_context;   // shared CIContext - created only once (see note above)
    static CIFilter *_filter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSLog(@"CIContext and CIFilter generating...");
        _context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer: @NO,
                                                    kCIContextWorkingColorSpace : [NSNull null] }];
        CIImage *roundShadowImage = [CIImage imageWithCGImage:[[self class] roundShadowImage].CGImage];
        CIImage *maskImage = [CIImage imageWithCGImage:[[self class] roundWhiteImage].CGImage];
        _filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"
                             keysAndValues:
                   kCIInputBackgroundImageKey, roundShadowImage,
                   kCIInputMaskImageKey, maskImage, nil];
        NSLog(@"CIContext and CIFilter are generated");
    });
    if (image == nil) {
        return nil;
    }
    NSAssert(_filter, @"Error: CIFilter for cover images is not generated");
    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);
    // CIContext and CIImage objects are immutable, which means each can be shared safely among threads
    CIFilter *filterForThread = [_filter copy]; // CIFilter could not be shared between different threads.
    CGAffineTransform imageTransform = CGAffineTransformIdentity;
    if (!CGSizeEqualToSize(imageSize, coverSize)) {
        NSLog(@"Cover image. Resizing image %@ to required size %@", NSStringFromCGSize(imageSize), NSStringFromCGSize(coverSize));
        CGFloat scaleFactor = MAX(coverSide / imageSize.width, coverSide / imageSize.height);
        imageTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    }
    imageTransform = CGAffineTransformTranslate(imageTransform, extraBorder, extraBorder);
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    ciImage = [ciImage imageByApplyingTransform:imageTransform];
    if (image.hasAlpha) {
        CIImage *ciWhiteImage = [CIImage imageWithCGImage:[self whiteImage].CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"
                                      keysAndValues:
                            kCIInputBackgroundImageKey, ciWhiteImage,
                            kCIInputImageKey, ciImage, nil];
        [filterForThread setValue:filter.outputImage forKey:kCIInputImageKey];
    }
    else {
        [filterForThread setValue:ciImage forKey:kCIInputImageKey];
    }
    CIImage *outputCIImage = [filterForThread outputImage];
    CGImageRef cgimg = [_context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImage;
}
If you are still not satisfied with the speed, try GPUImage. It is a very good library, and it is also very fast because it uses OpenGL for image generation.
