GPUImage output image is missing in screen capture - iOS

I am trying to capture a portion of the screen to post the image on social media.
I am using the following code to capture the screen:
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code captures the screen perfectly.
Problem:
My UIView contains a GPUImageView with the filtered image. When I try to capture the screen using the above code, the portion covered by the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it does not return an image.
Note: The following code works and gives perfect output for the swirl effect, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel/10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly and I have the filtered image in my imgSwirl. But I want
        // the filtered image in a UIImage to use elsewhere, e.g. for posting
        // on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also
        // returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why doesn't the screen capture code include the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?

First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to set -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way that framebuffers are handled in memory (although this change did not seem to get communicated very well).
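In other words, a minimal sketch of the capture sequence using the variables from the code above:
[swirlSourcePicture addTarget:swirlFilter];
[swirlFilter addTarget:imgSwirl];

// Tell the filter to keep its framebuffer around for the next rendered frame.
[swirlFilter useNextFrameForImageCapture];
[swirlSourcePicture processImage];

// The framebuffer is still alive, so this now returns the filtered image.
sharingImage = [swirlFilter imageFromCurrentFramebuffer];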

Related

Video stream in AVSampleBufferDisplayLayer doesn't show up in screenshot

I've been using the new Video Toolbox methods to take an H.264 video stream and display it in a view controller using AVSampleBufferDisplayLayer. This all works as intended and the stream looks great. However, when I try to take a screenshot of the entire view, the contents of the AVSampleBufferDisplayLayer (i.e. the decompressed video stream) do not show up in the snapshot. The snapshot shows all other UI buttons/labels/etc. but the screenshot only shows the background color of the AVSampleBufferDisplayLayer (which I had set to bright blue) and not the live video feed.
In the method below (inspired by this post) I take the SampleBuffer from my stream and queue it to be displayed on the AVSampleBufferDisplayLayer. Then I call my method imageFromLayer: to get the snapshot as a UIImage. (I then either display that UIImage in the UIImageView imageDisplay, or I save it to the device's local camera roll to verify what the UIImage looks like. Both methods yield the same result.)
-(void) h264VideoFrame:(CMSampleBufferRef)sample
{
    [self.AVSampleDisplayLayer enqueueSampleBuffer:sample];
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        UIImage *snapshot = [self imageFromLayer:self.AVSampleDisplayLayer];
        [self.imageDisplay setImage:snapshot];
    });
}
Here I simply take the contents of the AVSampleBufferDisplayLayer and attempt to convert it to a UIImage. If I pass the entire screen into this method as the layer, all other UI elements like labels/buttons/images will show up except for the AVDisplayLayer. If I pass in just the AVDisplayLayer, I get a solid blue image (since the background color is blue).
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContextWithOptions([layer frame].size, YES, 1.0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    //UIImageWriteToSavedPhotosAlbum(outputImage, self, nil, nil);
    UIGraphicsEndImageContext();
    return outputImage;
}
I've tried using UIImage *snapshot = [self imageFromLayer:self.AVSampleDisplayLayer.presentationLayer]; and .modelLayer, but that didn't help. I've tried queueing the sample buffer and waiting before taking a snapshot, I've tried messing with the opacity and xPosition of the AVDisplayLayer... I've even tried setting different values for the CMTimebase of the AVDisplayLayer. Any hints are appreciated!
Also, according to this post and this post, other people are having similar trouble with snapshots in iOS 8.
I fixed this by switching from AVSampleBufferDisplayLayer to VTDecompressionSession. In the VTDecompressionSession's didDecompress callback, I send the decompressed image (CVImageBufferRef) into the following method to get a screenshot of the video stream and turn it into a UIImage.
-(void) screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    CGImageRelease(videoImage);
}
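For context, a rough sketch of the output callback that feeds this method (the VTDecompressionOutputCallback signature is the standard one; the MyViewController type and the refCon wiring are assumptions):
#import <VideoToolbox/VideoToolbox.h>

// Output callback registered when creating the VTDecompressionSession.
// decompressionOutputRefCon is assumed to carry a bridged reference to the view controller.
static void didDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration)
{
    if (status != noErr || imageBuffer == NULL) {
        return; // Decode failed or no pixel buffer was produced.
    }
    MyViewController *controller = (__bridge MyViewController *)decompressionOutputRefCon;
    [controller screenshotOfVideoStream:imageBuffer];
}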

How to pixelate an image in iOS?

I want to create a simple app with the following features:
The first page of the app will display a list of images from a server (when we display these images, they should be pixelated).
Once the user clicks on any pixelated image, it opens in a detail view (the pixelated image opens in a new ViewController).
When the user does a single touch on the detail view controller's image, its pixelation level is reduced, and after a few touches the user can see the real image.
My problem is that I am not able to find a way to do this pixelation dynamically. Please help me.
The GPUImage framework has a pixellate filter; since it uses GPU acceleration, applying the filter to an image is very fast and you can vary the pixellation level at runtime.
UIImage *inputImage = [UIImage imageNamed:<#yourImageName#>];
GPUImagePixellateFilter *filter = [[GPUImagePixellateFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:inputImage];
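To vary the pixellation strength at runtime (for example, reducing it on each tap), GPUImagePixellateFilter exposes a fractionalWidthOfAPixel property; a short sketch, with the step values picked arbitrarily:
GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];

// fractionalWidthOfAPixel is the block size as a fraction of the image width (default 0.05).
// Start heavily pixellated, then re-filter with smaller values as the user taps.
pixellateFilter.fractionalWidthOfAPixel = 0.10;
UIImage *heavilyPixellated = [pixellateFilter imageByFilteringImage:inputImage];

pixellateFilter.fractionalWidthOfAPixel = 0.02;
UIImage *lightlyPixellated = [pixellateFilter imageByFilteringImage:inputImage];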
An easy way to pixellate an image would be to use the CIPixellate filter from Core Image.
Instructions and sample code for processing images with Core Image filters can be found in the Core Image Programming Guide.
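For reference, a rough sketch of that Core Image route (the image name and the inputScale value of 20 are placeholders; smaller scale values produce finer blocks):
#import <CoreImage/CoreImage.h>

UIImage *sourceImage = [UIImage imageNamed:@"yourimage"];
CIImage *inputCIImage = [[CIImage alloc] initWithImage:sourceImage];

CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:inputCIImage forKey:kCIInputImageKey];
[pixellate setValue:@20.0 forKey:kCIInputScaleKey]; // block size; lower it to reveal more detail

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:pixellate.outputImage
                                   fromRect:inputCIImage.extent];
UIImage *pixellatedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);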
UIImage *yourImage = [UIImage imageNamed:@"yourimage"];
NSData *imageData1 = UIImageJPEGRepresentation(yourImage, 0.2);
NSData *imageData2 = UIImageJPEGRepresentation(yourImage, 0.3);
and so on, up to
NSData *imageDataN = UIImageJPEGRepresentation(yourImage, 1);
Display the image data with the help of the following:
UIImage *compressedImage = [UIImage imageWithData:imageData1];
Try this. Happy coding.

GPUImage blend filters

I'm trying to apply a blend filter to two images.
I've recently updated GPUImage to the latest version.
To keep things simple, I've modified the SimpleImageFilter example.
Here is the code:
UIImage *image1 = [UIImage imageNamed:@"PGSImage_0000.jpg"];
UIImage *image2 = [UIImage imageNamed:@"PGSImage_0001.jpg"];
twoinputFilter = [[GPUImageColorBurnBlendFilter alloc] init];
sourcePicture1 = [[GPUImagePicture alloc] initWithImage:image1];
sourcePicture2 = [[GPUImagePicture alloc] initWithImage:image2];
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
UIImage *image = [twoinputFilter imageFromCurrentFramebuffer];
The image returned is nil. Setting some breakpoints, I can see that the filter fails inside the method - (CGImageRef)newCGImageFromCurrentlyProcessedOutput; the problem is that framebufferForOutput is nil. I'm using the simulator.
I don't get why it isn't working.
It seems that I was missing this command, as written in the documentation for still image processing:
Note that for a manual capture of an image from a filter, you need to
set -useNextFrameForImageCapture in order to tell the filter that
you'll be needing to capture from it later. By default, GPUImage
reuses framebuffers within filters to conserve memory, so if you need
to hold on to a filter's framebuffer for manual image capture, you
need to let it know ahead of time.
[twoinputFilter useNextFrameForImageCapture];
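Combined with the code from the question, the call goes in just before the processImage that triggers the render you want to capture; a sketch:
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture2 addTarget:twoinputFilter];

[sourcePicture1 processImage];

// Ask the blend filter to hold on to its framebuffer for the next frame.
[twoinputFilter useNextFrameForImageCapture];
[sourcePicture2 processImage];

UIImage *image = [twoinputFilter imageFromCurrentFramebuffer];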

Memory increases when merging multiple high resolution images into single image, iOS

I have to merge multiple images into a single image (all of high resolution), which takes a lot of memory. I save the original images to a local directory and set resized versions on image views placed at different locations on the main image. When saving the final merged image, I read the original images back from the local directory. At that point memory usage increases, which causes a crash (due to memory) for larger numbers of images.
Here is the code that retrieves an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them into memory?
Thanks in advance.
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference that is no longer required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageView.tag]];
    UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil;
    imageView.image = pThumbImage;
    pThumbImage = nil;
    // …
}
Similarly, treat any other image handling that creates intermediate versions the same way: get rid of references that are no longer required as soon as possible (for example by assigning nil or letting them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
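As a concrete example of loading a pre-scaled image straight from disk, Image I/O's thumbnail API decodes a downscaled version without ever materialising the full-resolution bitmap; a sketch (the method name and the choice of max pixel size are assumptions):
#import <ImageIO/ImageIO.h>

// Loads a downscaled CGImage directly from disk; the full-resolution bitmap
// is never decoded into memory. maxPixelSize is the longest edge of the result.
- (UIImage *)downscaledImageAtPath:(NSString *)path maxPixelSize:(CGFloat)maxPixelSize
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) {
        return nil;
    }
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // respect EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (scaled == NULL) {
        return nil;
    }
    UIImage *result = [UIImage imageWithCGImage:scaled];
    CGImageRelease(scaled);
    return result;
}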
I found a solution; I'm posting it as an answer to my own question since it might help other people. Reference: the Image I/O Programming Guide.
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is the code showing how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading it all into memory:
-(UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the options dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }
    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);
    // Make sure the image exists before continuing.
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }
    // Wrap the CGImage in a UIImage and release it so it is not leaked under ARC.
    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage);
    return result;
}
Here is the code where the resized image is simply assigned to the image view; then I perform scaling and rotation on the image view:
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:pOriginalImage];
[imageView setImage:pThumbImage];
Here is the saving code; it runs inside a for loop (iterating up to the number of images to merge onto the main image):
// Get the size of the background image.
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;
// Image view to be merged.
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];
// Rotate the image view.
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];
// Scale the image view.
CGRect imageFrame = [imageView frame];
// Create the final stamp view.
UIView *finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];
// Center the stamp image inside it.
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];
// Create an image from the stamp view.
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Create the final image with the stamp drawn onto the canvas.
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
UIImage *pfinalMainImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something like this recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory increased until the app crashed. The functions I had created returned UIImage, and that was the problem; ARC was not releasing them in time, so I changed them to return CGImageRef and released those explicitly at the proper time.
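A minimal sketch of that pattern (the helper name is made up for illustration); the caller owns the returned CGImageRef and releases it as soon as it has been drawn into the merged context:
// Returns a +1 retained CGImageRef; the caller is responsible for CGImageRelease().
static CGImageRef CreateScaledCGImageFromFile(NSString *path, CGSize targetSize)
{
    @autoreleasepool {
        UIImage *source = [UIImage imageWithContentsOfFile:path];
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
        [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Retain the backing CGImage so it survives the autorelease pool,
        // then hand ownership to the caller.
        return CGImageRetain(scaled.CGImage);
    }
}

// Caller side, inside the merge loop:
CGImageRef piece = CreateScaledCGImageFromFile(piecePath, pieceSize);
// ... draw `piece` into the merge context ...
CGImageRelease(piece); // release immediately instead of waiting for ARC/autorelease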

UIImagePickerController image Scaling and maintain its quality

An image captured using the camera has a size of 720x960.
The captured image is displayed in a UIImageView of 320x436, like this:
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 436.0)];
imgView.image = img; // Image received from camera.
[self.view addSubview:imgView];
This works fine: the 720x960 image is scaled to 320x436 and displayed.
Now the actual problem starts. I have another image of size 72x72. This image is overlaid on the image received from the camera at some arbitrary coordinates, for example:
CGRectMake(0.0,0.0,72.0,72.0);
I am not able to find a good way to handle the scaling and apply the overlay of the other image at the same time while maintaining quality.
The image needs to be sent to a server.
Use the following code to scale images:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
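To also composite the 72x72 overlay while keeping the full camera resolution for the upload, one option is to draw both images into a single context at the original 720x960 size and scale the overlay's coordinates up from the 320x436 view space. A sketch, assuming the view is a straight proportional scale of the photo (method and parameter names are made up for illustration):
// Merge the overlay onto the full-resolution photo so no quality is lost for upload.
// overlayFrameInView is the 72x72 frame the user sees in the 320x436 view.
- (UIImage *)imageByCompositing:(UIImage *)overlay
                      ontoPhoto:(UIImage *)photo
             overlayFrameInView:(CGRect)overlayFrameInView
                       viewSize:(CGSize)viewSize
{
    CGFloat scaleX = photo.size.width / viewSize.width;   // e.g. 720 / 320
    CGFloat scaleY = photo.size.height / viewSize.height; // e.g. 960 / 436
    CGRect overlayFrame = CGRectMake(overlayFrameInView.origin.x * scaleX,
                                     overlayFrameInView.origin.y * scaleY,
                                     overlayFrameInView.size.width * scaleX,
                                     overlayFrameInView.size.height * scaleY);

    UIGraphicsBeginImageContextWithOptions(photo.size, NO, 1.0);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    [overlay drawInRect:overlayFrame];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}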
