How to render a CIImage to an offscreen MTLTexture? - metal

I don't want to draw the CIImage to the currentDrawable.texture of the MTKView; I want to render it into a texture of my own and then pass that texture to the fragment shader to do other things. But why is the texture I obtain completely transparent?
CIContext *ciContext = [CIContext contextWithMTLCommandQueue:commandQueue
                                                      options:@{ kCIContextWorkingFormat: [NSNumber numberWithInt:kCIFormatRGBAf],
                                                                 kCIContextCacheIntermediates: @NO,
                                                                 kCIContextName: @"Image Processor" }];
NSBundle *bundle = [NSBundle bundleForClass:[CGMetalLookup class]];
NSString *path = [bundle pathForResource:@"CGMetal.bundle/Graphic" ofType:@"png"];
CIImage *cimage = [[CIImage alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]];

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

CIFilter *sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
[sepiaFilter setValue:cimage forKey:kCIInputImageKey];
[sepiaFilter setValue:@(5) forKey:kCIInputIntensityKey];
CIImage *outimage = sepiaFilter.outputImage;

// Initialize the destination MTLTexture
MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm
                                                                                       width:cimage.extent.size.width
                                                                                      height:cimage.extent.size.height
                                                                                   mipmapped:NO];
descriptor.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> _ciTexture = [[CGMetalDevice sharedDevice].device newTextureWithDescriptor:descriptor];

CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGRect rect = outimage.extent;
[ciContext render:outimage toMTLTexture:_ciTexture commandBuffer:commandBuffer bounds:rect colorSpace:colorSpaceRef];

CIImage *tImage = [[CIImage alloc] initWithMTLTexture:_ciTexture options:nil];
CGImageRef cgImage1 = [ciContext createCGImage:tImage fromRect:tImage.extent];
This documentation https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-DontLinkElementID_12 says:
This example shows only the minimal code needed to render with Core Image using Metal. In a real application, you’d likely perform additional rendering passes before or after the one managed by Core Image, or render Core Image output into a secondary texture and use that texture in another rendering pass. For more information on drawing with Metal, see Metal Programming Guide.
So it should support rendering to custom textures; how should I do it? And how should I understand this sentence from the header:
NOTE: Rendering to a texture initialized with a commandBuffer requires encoding all the commands to render an image into the specified buffer.
/* Render 'bounds' of 'image' to a Metal texture, optionally specifying what command buffer to use.
* Texture type must be MTLTexture2D.
* NOTE: Rendering to a texture initialized with a commandBuffer requires encoding all the commands to render an image into the specified buffer.
* This may impact system responsiveness and may result in higher memory usage if the image requires many passes to render.
* To avoid this impact, it is recommended to create a context using [CIContext contextWithMTLCommandQueue:].
*/
- (void)render:(CIImage *)image
toMTLTexture:(id<MTLTexture>)texture
commandBuffer:(nullable id<MTLCommandBuffer>)commandBuffer
bounds:(CGRect)bounds
colorSpace:(CGColorSpaceRef)colorSpace NS_AVAILABLE(10_11,9_0);
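My reading of that header is that the flow should look roughly like this (a sketch only, reusing the variables from my code above; the explicit commit at the end is my assumption about what is missing):

CIContext *ciContext = [CIContext contextWithMTLCommandQueue:commandQueue];

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
[ciContext render:outimage
     toMTLTexture:_ciTexture
    commandBuffer:commandBuffer
           bounds:outimage.extent
       colorSpace:colorSpaceRef];

// Nothing is executed on the GPU until the command buffer is committed,
// so without this the texture would presumably still read as empty/transparent.
[commandBuffer commit];
[commandBuffer waitUntilCompleted];   // or keep encoding later passes on the same queue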

Related

Unable to draw CIImage on GLKView after a few frames since updating to iOS 10.2?

I was using the following code in my application, which performed quite well, to draw a CIImage on a GLKView again and again as received from AVCaptureOutput -didOutputSampleBuffer, until I was on iOS <= 10.1.*.
After updating the device to iOS 10.2.1 it has stopped working. After calling it for a few frames, the app just crashes with a low memory warning, whereas on iOS 10.1.1 and below the app runs smoothly even on older devices like the iPhone 5S.
[_glkView bindDrawable];
if (self.eaglContext != [EAGLContext currentContext])
    [EAGLContext setCurrentContext:self.eaglContext];
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
if (ciImage) {
    [_ciContext drawImage:ciImage inRect:gvRect fromRect:dRect];
}
[_glkView display];
This is how I am making the CIImage.
- (CIImage *)ciImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer ofSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *croppedImage = nil;
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
    if (attachments)
        CFRelease(attachments);
    croppedImage = ciImage;

    CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [scaleFilter setValue:croppedImage forKey:@"inputImage"];
    [scaleFilter setValue:[NSNumber numberWithFloat:self.zoom_Resize_Factor == 1 ? 0.25 : 0.5] forKey:@"inputScale"];
    [scaleFilter setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputAspectRatio"];
    croppedImage = [scaleFilter valueForKey:@"outputImage"];

    NSDictionary *options = @{ (id)kCIImageAutoAdjustRedEye : @(false) };
    NSArray *adjustments = [ciImage autoAdjustmentFiltersWithOptions:options];
    for (CIFilter *filter in adjustments) {
        [filter setValue:croppedImage forKey:kCIInputImageKey];
        croppedImage = filter.outputImage;
    }

    CIFilter *selectedFilter = [VideoFilterFactory getFilterWithType:self.selectedFilterType]; // This line needs to be removed from here
    croppedImage = [VideoFilterFactory applyFilter:selectedFilter OnImage:croppedImage];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return croppedImage;
}
Here is an Imgur link (http://imgur.com/a/u6Vyo) with the VM Tracker and OpenGL ES instruments results, in case it helps. Thanks.
Your GLKView rendering implementation looks fine; the issue seems to come from the amount of processing you're doing on the pixel buffer after converting it into a CIImage.
Also, the Imgur link you shared shows that GLKView is unable to prepare the VideoTexture object correctly, most probably due to the memory overhead created in each iteration. You need to optimise this CIFilter processing.
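As a rough sketch (reusing the names from your code, untested), create the CIContext and the scale filter once and only update their inputs per frame, and consider whether autoAdjustmentFiltersWithOptions: really needs to run on every frame, since it builds a fresh filter chain each time:

// Created once, e.g. in your setup code, not per frame
// (_ciContext and _scaleFilter assumed to be instance variables)
_ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                       options:@{ kCIContextWorkingColorSpace : [NSNull null] }];
_scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];

// Per frame: only update the inputs and read the output
[_scaleFilter setValue:ciImage forKey:kCIInputImageKey];
[_scaleFilter setValue:@(self.zoom_Resize_Factor == 1 ? 0.25 : 0.5) forKey:@"inputScale"];
[_scaleFilter setValue:@(1.0) forKey:@"inputAspectRatio"];
CIImage *scaledImage = _scaleFilter.outputImage;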

CGImageRef consumes a lot of memory

I am creating a blurred image for one of my app's screens; for this I am using the following code:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
blurrImage = [UIImage imageWithCGImage:cgImage];
self.blurrImageView.image = blurrImage;
CGImageRelease(cgImage);
From the above code I am getting the correct blurred image, but the problem is at the line CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];.
Up to this line the memory usage shown is normal, but after this line the memory usage increases abnormally.
Here is the screenshot of the memory usage before the execution; the memory usage keeps increasing along the execution of this method.
And this is after executing the line CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];.
Is this common behaviour? I searched for an answer but didn't find one, so if anyone has faced the same problem please help me with this.
One thing: I am not using ARC.
I experience the same memory consumption problems with Core Image.
If you're looking for alternatives, in iOS 7 you can use the UIImage+ImageEffects category, which is available as part of the iOS_UIImageEffects project on the WWDC 2013 sample code page. It provides a few new methods:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
These don't suffer from the memory consumption issues that you experience with Core Image. (Plus, it's a much faster blurring algorithm.)
This technique is illustrated in WWDC 2013 video Implementing Engaging UI on iOS.
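With the category added to your project, the blur becomes a single call (a sketch, reusing the names from your code):

#import "UIImage+ImageEffects.h"

// Radius 5 to match the CIGaussianBlur inputRadius above; no tint, unchanged saturation, no mask
blurrImage = [image applyBlurWithRadius:5.0
                              tintColor:nil
                  saturationDeltaFactor:1.0
                              maskImage:nil];
self.blurrImageView.image = blurrImage;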
The fact that you are using a screenshot can change the memory usage; on a Retina display it will be more than on a non-Retina device. The doubling is OK in my opinion, because you have the original UIImage and the blurred image living in memory at the same time, and the context will probably keep some memory too. My guess:
You are using a lot of autoreleased objects; they will stay in memory until the pool is drained. Try to wrap the code in an @autoreleasepool block:
@autoreleasepool {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    blurrImage = [UIImage imageWithCGImage:cgImage];
    self.blurrImageView.image = blurrImage;
    CGImageRelease(cgImage);
}

Better performance when editing image contrast for every frame of ffmpeg-decoded video

I'm developing an RTSP player using the ffmpeg library, and I must edit the contrast of every video frame. Searching the web I found this code for editing contrast:
- (UIImage *)contrast
{
    CIImage *beginImage = [CIImage imageWithCGImage:[self CGImage]];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, beginImage,
                                                @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    self = newImg;
    CGImageRelease(cgimg);
    return self;
}
It works perfectly, but on iPad I lose performance, and while the video is decoding a lot of noise shows up on screen. Is there a way to modify the contrast of an image with better performance?
Yes, there is a better way: use OpenGL ES 2.0 shaders, so the heavy work is done on the GPU.

How to create a real-time image effect processing application on iOS

I use AVCaptureSession to receive images from the iPhone camera. It returns each image in a delegate function. In this function, I create the image and call another thread to process it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // static bool isFirstTime = true;
    // if (isFirstTime == false) {
    //     return;
    // }
    // isFirstTime = false;

    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a CGImageRef from the CVImageBufferRef
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst /*kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast*/);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    // Release some components
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    UIImage *uiimage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationDown];
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    //[self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiimage waitUntilDone:YES];
    if (processImageThread == nil || (processImageThread != nil && processImageThread.isExecuting == false)) {
        [processImageThread release];
        processImageThread = [[NSThread alloc] initWithTarget:self selector:@selector(processImage:) object:uiimage];
        [processImageThread start];
    }

    [pool drain];
}
I process the image on another thread, using CIFilters:
- (void)processImage:(UIImage *)image {
    NSLog(@"Begin process");
    CIImage *ciimage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorMonochrome"]; // keysAndValues:kCIInputImageKey, ciimage, "inputRadius", [NSNumber numberWithFloat:10.0f], nil];
    [filter setDefaults];
    [filter setValue:ciimage forKey:@"inputImage"];
    [filter setValue:[CIColor colorWithRed:0.5 green:0.5 blue:1.0] forKey:@"inputColor"];
    CIImage *ciResult = [filter outputImage];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciResult fromRect:[ciResult extent]];
    UIImage *uiResult = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
    CFRelease(cgImage);

    [self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiResult waitUntilDone:YES];
    NSLog(@"End process");
}
And I set the result image on a layer:
- (void)setImageForImageView:(UIImage *)image {
    self.view.layer.contents = image.CGImage;
}
But it is very laggy. I found an open source project that creates a real-time image effect application that runs very smoothly (it also uses AVCaptureSession). So, what is the difference between my code and theirs? How do I create a real-time image effect processing application?
This is the link to the open source project: https://github.com/gobackspaces/DLCImagePickerController#readme
The open source sample that you specified in your question uses an outstanding open source library, GPUImage by Brad Larson, for real-time photo and video processing. This library uses GPU-based filters (OpenGL ES 2.0) for image processing. Comparatively, it is faster than the CPU-based image filtering you are doing with the Core Image framework.
GPUImage
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
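For example, a live filtered camera preview with GPUImage takes only a few lines (a sketch based on the project's README; adjust the session preset and the filter to your needs):

// Camera -> filter -> on-screen view, all processed on the GPU
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:self.view.bounds];

[videoCamera addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredVideoView];
[self.view addSubview:filteredVideoView];

[videoCamera startCameraCapture];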

Capture still UIImage without compression (from CMSampleBufferRef)?

I need to obtain a UIImage from the uncompressed image data in a CMSampleBufferRef. I'm using this code:
[captureStillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                      completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    // that famous function from Apple docs found on a lot of websites
    // does NOT work for still images
    UIImage *capturedImage = [self imageFromSampleBuffer:imageSampleBuffer];
}];
http://developer.apple.com/library/ios/#qa/qa1702/_index.html is a link to the imageFromSampleBuffer function.
But it does not work properly. :(
There is a jpegStillImageNSDataRepresentation:imageSampleBuffer method, but it gives the compressed data (well, because JPEG).
How can I get UIImage created with the most raw non-compressed data after capturing Still Image?
Maybe I should specify some settings for the video output? I'm currently using these:
captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
captureStillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
I've noticed that the output has a default value for AVVideoCodecKey, which is AVVideoCodecJPEG. Can it be avoided in any way, and does it even matter when capturing a still image?
I found something here: Raw image data from camera like "645 PRO", but I just need a UIImage, without using OpenCV, OpenGL ES, or other third-party libraries.
The method imageFromSampleBuffer does work; in fact I'm using a changed version of it. But if I remember correctly, you need to set the outputSettings right: I think you need to set the key as kCVPixelBufferPixelFormatTypeKey and the value as kCVPixelFormatType_32BGRA.
So for example:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[newStillImageOutput setOutputSettings:outputSettings];
EDIT
I am using those settings to take still images, not video.
Is your sessionPreset AVCaptureSessionPresetPhoto? There may be problems with that:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
EDIT 2
The part about saving it to a UIImage is identical to the one from the documentation. That's the reason I was asking for other origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.
Here's a more efficient way:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];

- (NSData *)imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
