Real-Time Performance With Core Image Blend Modes - iOS

I'm trying to build a really basic demo app that lets the user pick an image and a blend mode and then drag and manipulate the blended image on top of the background image. While the user is dragging the image over the background I want real-time performance (20+ fps on iPhone 4). The images are the same resolution as the screen.
Is this possible to do with Core Image? I have tried a couple of different approaches, but I can't seem to get the performance I want.
Right now I am doing something like this:
CIFilter *overlayBlendMode = [CIFilter filterWithName:@"CIOverlayBlendMode"];
[overlayBlendMode setValue:self.foregroundImage forKey:@"inputImage"];
[overlayBlendMode setValue:self.backgroundImage forKey:@"inputBackgroundImage"];
CIImage *test = [overlayBlendMode outputImage];
// render background image
[self.ciContext drawImage:test inRect:test.extent fromRect:test.extent];
This code is being executed each time display gets called from my GLKViewController.
And my setup code is:
self.glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:self.glContext];
...
UIImage *foregroundImage = [ViewController imageScaledFromImage:[UIImage imageNamed:@"Smiley"] inRect:CGRectMake(0, 0, 100, 100)];
GLKTextureInfo *foregroundTexture = [GLKTextureLoader textureWithCGImage:foregroundImage.CGImage options:@{GLKTextureLoaderOriginBottomLeft: @(YES)} error:nil];
self.foregroundImage = [CIImage imageWithTexture:foregroundTexture.name size:foregroundImage.size flipped:NO colorSpace:nil];

UIImage *backgroundImage = [ViewController imageCenterScaledFromImage:[UIImage imageNamed:@"Kate.jpg"] inRect:(CGRect){0, 0, self.renderBufferSize}];
GLKTextureInfo *backgroundTexture = [GLKTextureLoader textureWithCGImage:backgroundImage.CGImage options:@{GLKTextureLoaderOriginBottomLeft: @(YES)} error:nil];
self.backgroundImage = [CIImage imageWithTexture:backgroundTexture.name size:backgroundImage.size flipped:NO colorSpace:nil];
The performance I am getting is not what I expected. I was expecting 60 fps since it is such a simple scene, but on my iPad 4 I'm getting ~35 or so, and I'm sure it would be worse on the iPhone 4, which is my lowest common denominator.

Did you set GLKViewController -> preferredFramesPerSecond to something other than its default 30?
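If not, a minimal sketch of that change, in your own GLKViewController subclass, would be:

// GLKViewController caps its display loop at 30 fps by default;
// ask for 60 explicitly if you want the loop to run faster.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.preferredFramesPerSecond = 60;
}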

Related

iOS crash "could not execute support code to read Objective-C" on iOS 12.4.2 but not on 12.0.1

This method returns a QR code image of a string. It works correctly on iOS 12.0.1 (iPhone SE) but it crashes on 12.4.2 (iPhone 6). The crash happens when I try to assign the resulting UIImage to a UIImageView; the resulting UIImage is not nil.
- (UIImage *)get_QR_image:(NSString *)qrString :(UIColor *)ForeGroundCol :(UIColor *)BackGroundCol {
    NSData *stringData = [qrString dataUsingEncoding:NSUTF8StringEncoding];

    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [qrFilter setValue:stringData forKey:@"inputMessage"];
    [qrFilter setValue:@"H" forKey:@"inputCorrectionLevel"];
    CIImage *qrImage = qrFilter.outputImage;

    float scaleX = 320;
    float scaleY = 320;

    CIColor *iForegroundColor = [CIColor colorWithCGColor:[ForeGroundCol CGColor]];
    CIColor *iBackgroundColor = [CIColor colorWithCGColor:[BackGroundCol CGColor]];

    CIFilter *filterColor = [CIFilter filterWithName:@"CIFalseColor" keysAndValues:@"inputImage", qrImage, @"inputColor0", iForegroundColor, @"inputColor1", iBackgroundColor, nil];
    CIImage *filtered_image = [filterColor valueForKey:@"outputImage"];
    filtered_image = [filtered_image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    UIImage *result_image = [UIImage imageWithCIImage:filtered_image
                                                scale:[UIScreen mainScreen].scale
                                          orientation:UIImageOrientationUp];
    return result_image;
}
The line involved in the crash is:
filtered_image = [filtered_image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
It generates this log:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
Is there something in my method that only works on 12.0.1? Or is something else wrong? How can I investigate further to solve this crash?
EDIT
The line highlighted in red is:
MyQrCodeImageViewBig.image = qrimage;
with the message:
Thread 1: EXC_BREAKPOINT (code=1, subcode=0x1a83e146c)
I see a lot of problems resulting from the [UIImage imageWithCIImage:] initializer. The main problem is that a CIImage does not actually contain any bitmap data; it needs to be rendered by a CIContext first. So the target you assign the UIImage to needs to know that it is backed by a CIImage that still needs rendering. Usually UIImageView handles this well, but I wouldn't trust it too much.
What you can do instead is render the image yourself into a bitmap (CGImage) and initialize the UIImage with that. You need a CIContext for that, which I recommend you create once, somewhere outside this method, and re-use every time you need to render an image (it's an expensive object):
self.context = [CIContext context];
Then in your method, you render the image like this:
CGImageRef cgImage = [self.context createCGImage:filtered_image fromRect:[filtered_image extent]];
UIImage *result_image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage returns a +1 reference that ARC does not manage
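That result_image is backed by real bitmap data, so assigning it to MyQrCodeImageViewBig.image should no longer hit the breakpoint.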

White pixels around iOS Image using CIFilter

I add a picture frame (an image with a transparent background) around an existing UIImage and save it all as one image. On the simulator, everything looks like it runs great. However, on the device, it adds some white pixels around some areas of the frame's image. Here is my code:
- (void)applyFilter {
    NSLog(@"Running");
    UIImage *borderImage = [UIImage imageNamed:@"IMG_8055.PNG"];
    NSData *dataFromImage = UIImageJPEGRepresentation(self.imgView.image, 1);
    CIImage *beginImage = [CIImage imageWithData:dataFromImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *border = [CIImage imageWithData:UIImagePNGRepresentation(borderImage)];
    border = [border imageByApplyingTransform:CGAffineTransformMakeScale(beginImage.extent.size.width / border.extent.size.width, beginImage.extent.size.height / border.extent.size.height)];

    CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"]; // @"CISoftLightBlendMode"
    [filter setDefaults];
    [filter setValue:border forKey:@"inputImage"];
    [filter setValue:beginImage forKey:@"inputBackgroundImage"];
    CIImage *outputImage = [filter valueForKey:@"outputImage"];

    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg); // createCGImage returns a +1 reference
    self.imgView.image = newImg;
}
Here is the resulting image:
The frame image used in the picture looks like this:
Here is a screenshot of the frame image in Photoshop, showing that those pixels are not present in the PNG.
The issue is that, if you look at your image, the pixels immediately adjacent to the musical notes are apparently not transparent. And if you notice, those white pixels that appear in the final image aren't just the occasional pixel; they appear in square blocks.
This sort of squared-off pixel noise is a telltale sign of JPEG artifacts. It's hard to say what's causing this, because the image you added to this question was a JPEG (which doesn't support transparency). I assume you must have a PNG version of this backdrop? You might have to share that with us to confirm this diagnosis.
But the bottom line is that you need to carefully examine the original image and the transparency of those pixels that appear to be white noise. Make sure that, as you create/manipulate these images, you avoid JPEG file formats, because JPEG loses transparency information and introduces artifacts. PNG files are often safer.
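If the images are already in memory as UIImages, one way to rule out re-encoding artifacts entirely is to build the CIImages straight from their CGImage backing instead of round-tripping through JPEG/PNG data. A sketch of that idea, using the same borderImage and imgView from your method:

// Sketch: create the CIImages without any lossy re-encoding.
// The border keeps its original alpha channel this way.
CIImage *beginImage = [CIImage imageWithCGImage:self.imgView.image.CGImage];
CIImage *border = [CIImage imageWithCGImage:borderImage.CGImage];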

Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it to use as an overlay. Unfortunately this is taking a massive amount of CPU, which is causing other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU.
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
    [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @NO };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];
    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);
    return outputImage;
}
Suggestions, in order:
Do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after this list).
If not, then the first port of call should be Core Image — it wraps up GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter, so all you need to do is make the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost.
Failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live there permanently. Start from the GLCameraRipple sample for that stuff, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.
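For the first suggestion, a rough sketch (captureSession, containerView, ghostImage, and currentGhostTransform are placeholders; the capture session is assumed to be configured elsewhere with AVFoundation):

// Live video at the bottom, ghost image view on top; the system
// compositor blends them on the GPU with no per-frame CPU work.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = containerView.bounds;
[containerView.layer addSublayer:previewLayer];

UIImageView *ghostView = [[UIImageView alloc] initWithImage:ghostImage];
ghostView.frame = containerView.bounds;
ghostView.alpha = 0.5; // tune to taste
[containerView addSubview:ghostView];

// Per frame or per gesture, only the transform changes:
ghostView.transform = currentGhostTransform;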
The biggest performance issue is that you create the EAGLContext and CIContext each frame. They need to be created only once, outside of your processUsingCoreImage method.
Also, if you want to avoid the CPU-GPU round trip, instead of creating a Core Graphics image (createCGImage), and thus doing CPU processing, you can render directly into the EAGL layer like this:
[context drawImage:blendOutput inRect:... fromRect:...];
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
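Put together, a rough sketch of that restructuring (the property names and destinationRectInPixels are placeholders; the blend filter setup stays as in your method):

// One-time setup, e.g. in viewDidLoad, not once per frame:
self.myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:self.myEAGLContext
                                           options:@{ kCIContextWorkingColorSpace : [NSNull null] }];

// Per frame, with the GL framebuffer for your layer already bound:
CIImage *blendOutput = [blendFilter outputImage];
[self.ciContext drawImage:blendOutput
                   inRect:destinationRectInPixels   // where to draw in the render target
                 fromRect:[blendOutput extent]];    // which part of the CIImage to draw
[self.myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];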

How to render an image with effect faster with UIKit

I'm making an iOS app in which there's a process that switches a lot of pictures across several UIImageViews (a loop that sets the image property of a UIImageView from a bunch of images). Sometimes some of the images need a graphic effect, say multiplication.
The easiest way is to use a CIFilter, but the problem is that CALayer on iOS doesn't support the "filters" property, so you need to apply the effect to the images before you set the "image" property. This is really too slow when you refresh the screen very frequently.
So next I tried to use Core Graphics directly to do the multiplication, with a UIGraphics context and kCGBlendModeMultiply. This is really much faster than using a CIFilter, but since you have to apply the multiplication before rendering the image, you can still feel that the program runs slower when rendering images with the multiplication effect than when rendering normal images.
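For reference, the Core Graphics multiply described above looks roughly like this (a sketch; base and tint are placeholder UIImages of the same size):

// Sketch: multiply-blend `tint` over `base` on the CPU with Core Graphics.
UIGraphicsBeginImageContextWithOptions(base.size, YES, base.scale);
[base drawInRect:(CGRect){CGPointZero, base.size}];
[tint drawInRect:(CGRect){CGPointZero, base.size}
       blendMode:kCGBlendModeMultiply
           alpha:1.0];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();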
My guess is that the fundamental problem with these two approaches is that you have to process the effect on the GPU, read the result image back with the CPU, and then finally render the result image with the GPU again, so the data transfer between CPU and GPU wastes a lot of time. I then tried to change the superclass from UIImageView to UIView, move the Core Graphics context code into the drawRect: method, and call setNeedsDisplay in the didSet of the "image" property. But this doesn't work so well... actually, every time setNeedsDisplay is called the program becomes much slower, even slower than using a CIFilter, probably because there are several views displaying at once.
I guess I could probably fix this problem with OpenGL, but I'm wondering if I can solve it with UIKit only?
As far as I understand, you have to make the same changes to different images, so the time of the initial setup is not critical for you, but each image should be processed as soon as possible. First of all, it is critical to generate the new images on a background queue/thread.
There are two good ways to quickly process/generate images:
Use CIFilter from Core Image
Use the GPUImage library
If you use Core Image, check that you use CIFilter and CIContext properly. CIContext creation takes quite a lot of time, but it can be SHARED between different CIFilters and images - so you should create the CIContext only once! A CIFilter can also be SHARED between different images, but since it is not thread-safe you should have a separate CIFilter for each thread.
In my code I have the following:
+ (UIImage *)roundShadowImageForImage:(UIImage *)image {
    static CIFilter *_filter;
    static CIContext *_context; // shared; expensive to create, so build it once
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSLog(@"CIContext and CIFilter generating...");
        _context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @NO,
                                                    kCIContextWorkingColorSpace : [NSNull null] }];
        CIImage *roundShadowImage = [CIImage imageWithCGImage:[[self class] roundShadowImage].CGImage];
        CIImage *maskImage = [CIImage imageWithCGImage:[[self class] roundWhiteImage].CGImage];
        _filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"
                             keysAndValues:
                   kCIInputBackgroundImageKey, roundShadowImage,
                   kCIInputMaskImageKey, maskImage, nil];
        NSLog(@"CIContext and CIFilter are generated");
    });

    if (image == nil) {
        return nil;
    }

    NSAssert(_filter, @"Error: CIFilter for cover images is not generated");

    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);

    // CIContext and CIImage objects are immutable, which means each can be shared safely among threads.
    CIFilter *filterForThread = [_filter copy]; // CIFilter cannot be shared between threads.

    CGAffineTransform imageTransform = CGAffineTransformIdentity;
    if (!CGSizeEqualToSize(imageSize, coverSize)) {
        NSLog(@"Cover image. Resizing image %@ to required size %@", NSStringFromCGSize(imageSize), NSStringFromCGSize(coverSize));
        CGFloat scaleFactor = MAX(coverSide / imageSize.width, coverSide / imageSize.height);
        imageTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    }
    imageTransform = CGAffineTransformTranslate(imageTransform, extraBorder, extraBorder);

    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    ciImage = [ciImage imageByApplyingTransform:imageTransform];

    if (image.hasAlpha) {
        CIImage *ciWhiteImage = [CIImage imageWithCGImage:[self whiteImage].CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"
                                      keysAndValues:
                            kCIInputBackgroundImageKey, ciWhiteImage,
                            kCIInputImageKey, ciImage, nil];
        [filterForThread setValue:filter.outputImage forKey:kCIInputImageKey];
    }
    else {
        [filterForThread setValue:ciImage forKey:kCIInputImageKey];
    }

    CIImage *outputCIImage = [filterForThread outputImage];
    CGImageRef cgimg = [_context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);

    return newImage;
}
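You would then call this helper off the main thread and hop back to the main queue for the UI update; a rough sketch (imageView, sourceImage, and the CoverImageFactory class name are placeholders):

// Sketch: keep the filtering work off the main thread, touch UIKit on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    UIImage *processed = [CoverImageFactory roundShadowImageForImage:sourceImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = processed;
    });
});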
If you are still not satisfied with the speed, try GPUImage. It is a very good library, and it is also very fast because it uses OpenGL for image generation.
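If you go that route, a rough GPUImage sketch for a multiply blend of two still images (baseImage and overlayImage are placeholders) might look like:

// Sketch: GPU multiply blend of two UIImages with GPUImage.
GPUImagePicture *basePicture = [[GPUImagePicture alloc] initWithImage:baseImage];
GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageMultiplyBlendFilter *multiplyFilter = [[GPUImageMultiplyBlendFilter alloc] init];

[basePicture addTarget:multiplyFilter];
[overlayPicture addTarget:multiplyFilter];

[multiplyFilter useNextFrameForImageCapture]; // keep the framebuffer for capture
[basePicture processImage];
[overlayPicture processImage];
UIImage *blended = [multiplyFilter imageFromCurrentFramebuffer];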

GPUImage output image is missing in screen capture

I am trying to capture a screen portion to post the image on social media.
I am using the following code to capture the screen.
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code is perfect for capturing the screen.
Problem:
My UIView contains a GPUImageView with the filtered image. When I try to capture the screen using the above code, that particular portion of the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it's not giving an image.
Note: The following is working code, which gives perfect output for the swirl effect, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel/10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly and I have the filtered image in my imgSwirl. But I want the
        // filtered image in a UIImage to use elsewhere, like posting on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) And why is the screen capture code not including the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to call -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way framebuffers are handled in memory (although this change did not seem to get communicated very well).
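In other words, the capture portion of your snippet should look roughly like this (same objects as in your code):

[swirlSourcePicture addTarget:swirlFilter];
[swirlFilter addTarget:imgSwirl];

[swirlFilter useNextFrameForImageCapture]; // keep the framebuffer around for capture
[swirlSourcePicture processImage];
sharingImage = [swirlFilter imageFromCurrentFramebuffer];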
