Lanczos scale not working when scaleKey is greater than some value - iOS

I have this code:
CIImage *input_ciimage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:0.72], // [NSNumber numberWithFloat:800.0 / self.size.width],
        nil] outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage
fromRect:[output_ciimage extent]];
UIImage *output_uiimage;
output_uiimage = [UIImage imageWithCGImage:output_cgimage
scale:1.0 orientation:self.imageOrientation];
CGImageRelease(output_cgimage);
return output_uiimage;
So, when the scale key is greater than some value, output_uiimage is a black image.
In my case, if the value of kCIInputScaleKey is greater than 0.52, the result is a black image. When I rotate the image by 90 degrees I get the same result, but the threshold value is 0.72 (not 0.52).
Is this a bug in the library, or a mistake in my code?
I have an iPhone 4, iOS 7.1.2, and Xcode 6.0, if that matters.

That's what Apple said:
This scenario exposes a bug in Core Image. The bug occurs when rendering requires an intermediate buffer that has a dimension greater than the GPU texture limits (4096) AND the input image fits into these limits. This happens with any filter that is performing a convolution (blur, lanczos) on an input image that has width or height close to the GL texture limit.
Note: the render is successful if one of the dimensions of the input image is increased to 4097.
Replacing CILanczosScaleTransform with CIAffineTransform (lower quality) or resizing the image with CG are possible workarounds for the provided sample code.
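A minimal sketch of the "resize with CG" workaround, written as a UIImage category method like the one in the question (the method name and the use of a scale factor are my own additions):
// Hypothetical helper: downscale with Core Graphics instead of Core Image.
- (UIImage *)cgScaledImageWithFactor:(CGFloat)factor
{
    CGSize targetSize = CGSizeMake(self.size.width * factor, self.size.height * factor);

    // Scale of 1.0 so targetSize is interpreted as exact pixel dimensions.
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
    [self drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return scaled;
}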

I updated the bug report after a request from Apple's engineers. Their answer:
We believe that the issue is with the Core Image Lanczos filter and occurs at certain downsample scale factors. We hope to fix this issue in the future.
The filter should work well with downsample factors that are a power of 2 (i.e. 1/2, 1/4, 1/8). So, we would recommend limiting your downsample to these values and then using AffineTransform to scale up or down further if required.
We are now closing this bug report.
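A rough sketch of that recommendation, assuming it replaces the scaling portion of the method above (rounding the Lanczos factor down to the nearest power of two is my choice; the engineers only say to limit the Lanczos step to such values):
CGFloat desiredScale  = 0.72f;
CGFloat lanczosScale  = 0.5f;                         // nearest power of two at or below the target
CGFloat residualScale = desiredScale / lanczosScale;  // 1.44 here, handled by the affine step

CIImage *input_ciimage = [CIImage imageWithCGImage:self.CGImage];
CIImage *lanczos_output =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:lanczosScale],
        nil] outputImage];

// Lower-quality affine scale covers the remaining factor; the rest of the
// original method (createCGImage:fromRect: etc.) can consume output_ciimage unchanged.
CIImage *output_ciimage =
    [lanczos_output imageByApplyingTransform:
        CGAffineTransformMakeScale(residualScale, residualScale)];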

Related

Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it to use as an overlay. Unfortunately this takes a massive amount of CPU, which causes other parts of the program to slow down and not work as intended. Essentially I want the following code to use the GPU instead of the CPU.
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
    [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @NO };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];

    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);

    return outputImage;
}
Suggestions in order:
do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after this list).
if not then first port of call should be CoreImage — it wraps up GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter so all you need to do is make the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost.
failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live there permanently. Start from the GLCameraRipple sample for that side of things, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.
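For reference, a rough sketch of the first suggestion. It assumes an already-configured AVCaptureSession in self.captureSession; the view and property names are illustrative:
// Live video behind, static ghost in front; Core Animation composites the two on the GPU.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

UIImageView *ghostView =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myImage"]];
ghostView.frame = self.view.bounds;
ghostView.contentMode = UIViewContentModeScaleAspectFit;
[self.view addSubview:ghostView];

// Reposition or scale the ghost by transforming the view; no per-frame redraw needed.
ghostView.transform = CGAffineTransformMakeScale(0.5f, 0.5f);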
The biggest performance issue is that you create the EAGLContext and CIContext on every frame. These need to be created only once, outside your processUsingCoreImage: method.
Also, if you want to avoid the CPU-GPU round trip, instead of creating a Core Graphics image with createCGImage: (which forces CPU processing), you can render directly into the EAGL layer like this:
[context drawImage:blendOutput inRect:destinationRect fromRect:[blendOutput extent]]; // destinationRect: where to draw in the drawable, in pixels
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
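Putting both points together, a rough sketch; the property names, the setup method, and drawableBounds are mine, and it assumes a GL renderbuffer or GLKView is already bound elsewhere:
// In the class extension (names are illustrative):
@property (nonatomic, strong) EAGLContext *eaglContext;
@property (nonatomic, strong) CIContext *ciContext;

// Called once (e.g. from viewDidLoad), not per frame.
- (void)setUpRenderingContexts
{
    self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null] }];
}

// Per-frame path: draw straight into the GL drawable, no CGImage/UIImage round trip.
- (void)renderBlendedImage:(CIImage *)blendOutput
{
    [self.ciContext drawImage:blendOutput
                       inRect:self.drawableBounds      // destination rect of your renderbuffer, in pixels
                     fromRect:[blendOutput extent]];
    [self.eaglContext presentRenderbuffer:GL_RENDERBUFFER];
}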

UIImage Distortion from UIGraphicsBeginImageContext with larger files (pixel formats, codecs?)

I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order"; and I'll admit a bit of ignorance when it comes to pixel formats, but felt this MAY be relevant because I'm not sure if that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
capturing using captureStillImageAsynchronouslyFromConnection:…, then turning it into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer]; then downsizing it into a thumbnail by creating a CGDataProviderRef from that data, converting to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display it, I call my own method detailed on top, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it in a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil] and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding or all these graphical, low-level objects.
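For context, the thumbnail step described above typically looks something like this; the variable names and the 128-pixel limit are illustrative, not the asker's exact code:
#import <ImageIO/ImageIO.h>

CGImageSourceRef source =
    CGImageSourceCreateWithData((__bridge CFDataRef)jpegData, NULL);
NSDictionary *thumbnailOptions = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,   // honor EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize          : @128
};
CGImageRef thumbnailRef =
    CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)thumbnailOptions);
UIImage *thumbnail = [UIImage imageWithCGImage:thumbnailRef];
CGImageRelease(thumbnailRef);
CFRelease(source);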
In your masking code, this line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is the problem. You are drawing originalImage, but you give it thumbnailSize's width and originalImage's height. That distorts the image's aspect ratio.
The width and height you draw with need to be derived from the same source size; pick one and compute the other to maintain the proper aspect ratio.
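A sketch of one way to apply that advice to the cropping code above: derive the drawn height from the thumbnail width and the original aspect ratio (the vertical centering is my choice, replacing the original /-3 offset):
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f);

// Keep originalImage's aspect ratio: the drawn height follows from the drawn width.
CGFloat aspectRatio = originalImage.size.height / originalImage.size.width;
CGFloat drawnHeight = thumbnailSize.width * aspectRatio;

UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0,
                                     (thumbnailSize.height - drawnHeight) / 2.0f, // center vertically
                                     thumbnailSize.width,
                                     drawnHeight)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();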

scale property of ALAssetRepresentation always return 1

I'm trying to get the original image from an ALAsset and find that the scale property of ALAssetRepresentation always returns 1.0. So I wonder: is there a situation where this property will return another value, like 2.0?
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation] ;
CGImageRef imgRef = assetRepresentation.fullResolutionImage ;
UIImage *image = [UIImage imageWithCGImage:imgRef] ;
After Retina displays were introduced, the physical resolution doubled, but for API calls the coordinate values remained the same. So an additional 'scale' argument was added to some methods and functions (see UIGraphicsBeginImageContextWithOptions, for example). I don't know why the [ALAssetRepresentation scale] documentation is so terse:
Returns the representation's scale.
but you can look at the UIScreen.scale description:
This value reflects the scale factor needed to convert from the default logical coordinate space into the device coordinate space of this screen. The default logical coordinate space is measured using points. For standard-resolution displays, the scale factor is 1.0 and one point equals one pixel. For Retina displays, the scale factor is 2.0 and one point is represented by four pixels.
I think [ALAssetRepresentation scale] should be 2.0 if you run this code on a device with a Retina display.
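Whatever value it reports, a common pattern is to carry the representation's scale and orientation into the UIImage rather than assuming 1.0 and "up" (a sketch based on the question's code):
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = [assetRepresentation fullResolutionImage];

// ALAssetOrientation values map directly onto UIImageOrientation.
UIImage *image = [UIImage imageWithCGImage:imgRef
                                     scale:[assetRepresentation scale]
                               orientation:(UIImageOrientation)[assetRepresentation orientation]];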

I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

The code is as below:
CIImage *imageToBlur = [CIImage imageWithCGImage:self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, imageToBlur, @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage:outputImage];
For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).
Any advice is appreciated.
This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So when you apply the blur filter, Core Image calculates that showing every last pixel of the blur would need a larger canvas. The estimated size needed to draw the entire image is called the "extent". In essence, every pixel gets "fatter", which means the final extent is bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine.
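One common way to keep the original size is to render only the input image's extent through a CIContext instead of wrapping the enlarged output with imageWithCIImage: (a sketch reusing the question's variable names):
CIContext *context = [CIContext contextWithOptions:nil];

// Crop at render time: ask for the input's extent, not the blur's enlarged extent.
CGImageRef blurredRef = [context createCGImage:outputImage
                                      fromRect:[imageToBlur extent]];
UIImage *resultImage = [UIImage imageWithCGImage:blurredRef];
CGImageRelease(blurredRef);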

"Performing a costly unpadding operation!" -- what is it, and how to fix it?

The debug console for my Core Filters test application is showing this message:
CGImageRef 0x7a0e890 has row byte padding. Performing a costly unpadding operation!
I couldn't find a hit for that exact message (minus the pointer info) in the headers or in a Google search.
My questions are: (1) what does that mean, and (2) how can I rectify the situation?
The following is an example of how I am generating a filtered UIImage using a CIFilter.
- (UIImage *)sepia
{
    CIImage *beginImage = [CIImage imageWithCGImage:[self CGImage]];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, beginImage,
                                                @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
    CIImage *outputImage = [filter outputImage];

    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);

    return newImg; // assigning to self outside of init is not valid; just return the new image
}
Byte padding is extra bytes added to the end of each image row to ensure that each row starts on a 2^n byte multiple in memory. This increases memory access performance at the expense of image size. You can check this if you compare the result of CGImageGetBytesPerRow and the expected bytes per row calculated from the image dimensions and bytes per pixel.
As for how to avoid the unpadding: you need to find exactly which operation triggers it and take it from there. Unpadding is expensive because essentially the whole image memory has to be shuffled up to remove the end-of-row gaps.
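A small sketch of the check mentioned above, comparing the actual bytes per row with the minimum implied by the width and bytes per pixel ('image' stands in for whichever UIImage you are inspecting):
CGImageRef cgImage = image.CGImage;

size_t width          = CGImageGetWidth(cgImage);
size_t bitsPerPixel   = CGImageGetBitsPerPixel(cgImage);
size_t bytesPerRow    = CGImageGetBytesPerRow(cgImage);
size_t minBytesPerRow = width * (bitsPerPixel / 8);

if (bytesPerRow > minBytesPerRow) {
    // Extra bytes at the end of each row: this image is padded.
    NSLog(@"Row padding: %zu actual vs %zu minimal bytes per row",
          bytesPerRow, minBytesPerRow);
}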
I would write a comment, but since I can't due to my low reputation, I'll post it as an answer here:
I just had the same error and tried to fix it using Thuggish Nuggets' suggestion. It turned out to be the right approach; however, the image size has to be a multiple of 8. I only aligned the width to a multiple of 8; I don't know whether the height also has to be a multiple of 8, since the image I tested this approach with was square anyway.
Here's the (probably not very efficient) algorithm to give you a basic idea of how to calculate the needed size:
UIImage *image = ...;
CGSize targetSize = image.size; // e.g. 51 x 51
double aspectRatio = targetSize.width / targetSize.height;
targetSize.width = targetSize.width + 8 - ((int)targetSize.width % 8); // round the width up to a multiple of 8
targetSize.height = targetSize.width / aspectRatio;
I suspect the "costly unpadding operation" warning is bogus. Reason: I get it in the Simulator but not on the device. Hence I believe it is a misleading artifact of the Simulator environment.
