Video stream in AVSampleBufferDisplayLayer doesn't show up in screenshot - iOS

I've been using the new Video Toolbox methods to take an H.264 video stream and display it in a view controller using AVSampleBufferDisplayLayer. This all works as intended and the stream looks great. However, when I try to take a screenshot of the entire view, the contents of the AVSampleBufferDisplayLayer (i.e. the decompressed video stream) do not show up in the snapshot. The snapshot shows all other UI buttons/labels/etc. but the screenshot only shows the background color of the AVSampleBufferDisplayLayer (which I had set to bright blue) and not the live video feed.
In the method below (inspired by this post) I take the SampleBuffer from my stream and queue it to be displayed on the AVSampleBufferDisplayLayer. Then I call my method imageFromLayer: to get the snapshot as a UIImage. (I then either display that UIImage in the UIImageView imageDisplay, or I save it to the device's local camera roll to verify what the UIImage looks like. Both methods yield the same result.)
-(void) h264VideoFrame:(CMSampleBufferRef)sample
{
    [self.AVSampleDisplayLayer enqueueSampleBuffer:sample];
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        UIImage *snapshot = [self imageFromLayer:self.AVSampleDisplayLayer];
        [self.imageDisplay setImage:snapshot];
    });
}
Here I simply take the contents of the AVSampleBufferDisplayLayer and attempt to convert it to a UIImage. If I pass the entire screen into this method as the layer, all other UI elements like labels/buttons/images will show up except for the AVDisplayLayer. If I pass in just the AVDisplayLayer, I get a solid blue image (since the background color is blue).
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContextWithOptions([layer frame].size, YES, 1.0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    //UIImageWriteToSavedPhotosAlbum(outputImage, self, nil, nil);
    UIGraphicsEndImageContext();
    return outputImage;
}
I've tried using UIImage *snapshot = [self imageFromLayer:self.AVSampleDisplayLayer.presentationLayer]; and .modelLayer, but that didn't help. I've tried queueing the sample buffer and waiting before taking a snapshot, I've tried adjusting the opacity and xPosition of the AVDisplayLayer... I've even tried setting different values for the CMTimebase of the AVDisplayLayer. Any hints are appreciated!
Also, according to this post and this post, other people are having similar troubles with snapshots in iOS 8.

I fixed this by switching from AVSampleBufferDisplayLayer to VTDecompressionSession. In the VTDecompressionSession didDecompress callback, I send the decompressed image (CVImageBufferRef) into the following method to get a screenshot of the video stream and turn it into a UIImage.
-(void) screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    CGImageRelease(videoImage);
}

Related

GPUImage output image is missing in screen capture

I am trying to capture a portion of the screen to post the image on social media.
I am using the following code to capture the screen.
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code works fine for capturing the screen.
Problem :
My UIView contains a GPUImageView with a filtered image. When I try to capture the screen using the above code, that particular portion of the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it's not returning an image.
Note: The following code works and gives perfect output of the swirl effect on screen, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel/10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly and I have the filtered image in my imgSwirl.
        // But I want the filtered image in a UIImage to use elsewhere,
        // like posting on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why does the screen-capture code not include the GPUImageView portion in the output image?
3) How do I get the filtered image in a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to set -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way that framebuffers are handled in memory (although this change did not seem to get communicated very well).
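A minimal sketch of the fix described above, assuming swirlFilter and swirlSourcePicture are set up as in the question's code:

```objectivec
// Tell the filter to keep its backing framebuffer alive for the next
// rendered frame -- this must happen *before* processImage is triggered.
[swirlFilter useNextFrameForImageCapture];
[swirlSourcePicture processImage];

// The framebuffer is still held, so reading it back now returns a valid image.
UIImage *sharingImage = [swirlFilter imageFromCurrentFramebuffer];
```

Without the -useNextFrameForImageCapture call, GPUImage returns the framebuffer to its cache as soon as the frame is drawn, which is why imageFromCurrentFramebuffer comes back empty.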

Taking a snapshot in MPMoviePlayerController in iOS

I am working on a project where I take an HTTP video stream and display it in an MPMoviePlayerController, and I have to take a snapshot of that streaming video.
I used the following code to do that, but I only get a nil value.
UIImage *thumbnail = [mpPlayer thumbnailImageAtTime:mpPlayer.currentPlaybackTime
                                         timeOption:MPMovieTimeOptionNearestKeyFrame];
You can use UIGraphics to take a screenshot:
CGSize imageSize = set_image_size_here;
UIGraphicsBeginImageContext(imageSize);
CGContextRef imageContext = UIGraphicsGetCurrentContext();
[mpPlayer.view.layer renderInContext:imageContext];
// Retrieve the screenshot image
UIImage *imagefinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageWriteToSavedPhotosAlbum() doesn't save cropped image

I'm trying to save a cropped image to the camera roll.
(I need to do it programmatically, I can't have the user edit it)
This is my (still quite basic) cut and save code:
- (void)cutAndSaveImage:(UIImage *)rawImage
{
    CIImage *workingImage = [[CIImage alloc] initWithImage:rawImage];
    CGRect croppingRect = CGRectMake(0.0f, 0.0f, 3264.0f, 1224.0f);
    CIImage *croppedImage = [workingImage imageByCroppingToRect:croppingRect];
    UIImage *endImage = [UIImage imageWithCIImage:croppedImage scale:1.0f orientation:UIImageOrientationRight];
    self.testImage.image = endImage;
    UIImageWriteToSavedPhotosAlbum(rawImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    UIImageWriteToSavedPhotosAlbum(endImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
The method is called within:
- (void)imagePickerController:(UIImagePickerController*)picker didFinishPickingMediaWithInfo:(NSDictionary*)info
I first create a CIImage using the raw UIImage.
Then I get a cropped CIImage using an instance method of the first one.
After that I create a new UIImage using the cropped CIImage.
At this point, to have some feedback, I set the new cropped UIImage as the backing image of a UIImageView. This works, and I can clearly see the image cropped exactly how I desired.
When I try to save it to the camera roll, however, things stop working.
I can't save the newly created endImage.
As you can see, I added a line to save the original UIImage too, just for comparison. The original one saves normally.
Another confusing thing is that the NSError object passed to the image:didFinishSavingWithError:contextInfo: callback is nil (the callback executes normally for both saving attempts).
EDIT:
Just made an experiment:
NSLog(@"rawImage: %@ - rawImage.CGImage: %@", rawImage, rawImage.CGImage);
NSLog(@"endImage: %@ - endImage.CGImage: %@", endImage, endImage.CGImage);
It looks like only the rawImage (coming from the UIImagePickerController) possesses a backing CGImageRef. The other one, created from a CIImage, doesn't.
Can it be that UIImageWriteToSavedPhotosAlbum works using the backing CGImageRef?
Can it be that UIImageWriteToSavedPhotosAlbum works using the backing CGImageRef?
Correct. A CIImage is not an image, and a UIImage backed only by a CIImage is not an image either; it is just a kind of wrapper. Why are you using CIImage at all here? You aren't using CIFilter so this makes no sense. Or if you are using CIFilter, you must render through a CIContext to get the output as a bitmap.
You can crop easily by drawing into a smaller graphics context.
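A hedged sketch of that suggestion — cropping by drawing into a smaller graphics context — reusing the 3264×1224 rect from the question (the method name is made up for illustration):

```objectivec
- (UIImage *)croppedImageFromImage:(UIImage *)rawImage
{
    CGRect croppingRect = CGRectMake(0.0f, 0.0f, 3264.0f, 1224.0f);
    // The context is only as big as the crop rect, so everything outside it is clipped.
    UIGraphicsBeginImageContextWithOptions(croppingRect.size, NO, rawImage.scale);
    // Drawing at the negated origin shifts the desired region into the context;
    // with an origin of (0,0) the image is drawn as-is and clipped on the right/bottom.
    [rawImage drawAtPoint:CGPointMake(-croppingRect.origin.x, -croppingRect.origin.y)];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped; // backed by a real CGImage, so UIImageWriteToSavedPhotosAlbum should accept it
}
```

Unlike the CIImage-backed endImage in the question, the image produced this way has a backing CGImageRef, which is what the save call appears to require.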
If the UIImage object was initialized using a CIImage object, the
value of the property is NULL.
You can generate UIImage from CIImage like this:
let lecturePicture = UIImage(data: NSData(contentsOfURL: NSURL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!)!)!
let controlsFilter = CIFilter(name: "CIColorControls")
controlsFilter.setValue(CIImage(image: lecturePicture), forKey: kCIInputImageKey)
controlsFilter.setValue(1.5, forKey: kCIInputContrastKey)
let displayImage = UIImage(CGImage: CIContext(options:nil).createCGImage(controlsFilter.outputImage, fromRect:controlsFilter.outputImage.extent()))!

handle memory warning while applying core image filter

I am using the following code to apply image filters. This works fine on scaled-down images.
But when I apply more than 2 filters to a full-resolution image, the app crashes and a memory warning is received.
When I open the 'allocations' instrument, I see that CFData(store) takes up most of the memory used by the program.
When I apply more than 2 filters to a full-resolution image, the 'overall bytes' go up to 54 MB. The 'live bytes' don't seem to exceed 12 MB when I read the numbers directly, but the spikes show that live bytes also climb to that level and come back down.
Where am I going wrong?
- (UIImage *)editImage:(UIImage *)imageToBeEdited tintValue:(float)tint
{
    CIImage *image = [[CIImage alloc] initWithImage:imageToBeEdited];
    NSLog(@"in edit Image:\ncheck image: %@\ncheck value: %f", image, tint);
    [tintFilter setValue:image forKey:kCIInputImageKey];
    [tintFilter setValue:[NSNumber numberWithFloat:tint] forKey:@"inputAngle"];
    CIImage *outputImage = [tintFilter outputImage];
    NSLog(@"check output image: %@", outputImage);
    return [self completeEditingUsingOutputImage:outputImage];
}
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    NSLog(@"check cgimg: %@", cgimg);
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    NSLog(@"check newImage: %@", newImage);
    CGImageRelease(cgimg);
    return newImage;
}
Edit:
I also tried setting cgimg to nil. Didn't help.
I tried putting the context declaration and definition inside the 2nd function. Didn't help.
I tried moving the declarations and definitions of the filters inside the functions. Didn't help.
Also, the crash happens at
CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
The cgimg I was creating took most of the space in memory and was not getting released.
I observed that calling the filter with smaller values brings the CFData (store) memory back down to a smaller value, thus avoiding the crash.
So I apply the filter and afterwards call the same filter with the image set to nil. This takes memory back to around 484 KB from 48 MB after applying all 4 filters.
Also, I am applying these filters on a background thread instead of the main thread. Applying them on the main thread still causes the crash; probably it doesn't get enough time to release the memory. I don't know.
But these things are working smoothly now.
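A common alternative to the nil-image workaround above (not from this answer, just a standard pattern, assuming the same context ivar as the question): wrap each Core Image render in an @autoreleasepool so the temporary buffers are released between filter passes rather than accumulating.

```objectivec
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    UIImage *newImage = nil;
    @autoreleasepool {
        // Render and convert inside a local pool so autoreleased Core Image
        // intermediates (the CFData stores seen in Instruments) are freed
        // as soon as the pool drains, instead of piling up across filters.
        CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
        newImage = [UIImage imageWithCGImage:cgimg];
        CGImageRelease(cgimg);
    }
    return newImage;
}
```

This keeps peak memory closer to the size of a single rendered frame when several filters are applied in sequence.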
// Where is your input filter name? Something like this:
[tintFilter setValue:image forKey:@"CIHueAdjust"];
// I think you have a mistake in outputImage.extent. Just write this:
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];

UIImage face detection

I am trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This would seem to be very straightforward, but my brain is having problems getting around the CoreImage vs. UIImage spaces.
Here's the basics:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
-(UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];
    // For simplicity, I'm grabbing the first one in this code sample,
    // and we can all pretend that the photo has one face for sure. :-)
    CIFaceFeature *faceFeature = [features objectAtIndex:0];
    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
The image that is returned is from the flipped image. I've tried adjusting faceFeature.bounds using something like this:
CGAffineTransform t = CGAffineTransformMakeScale(1.0f,-1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds,t);
... but that gives me results outside the image.
I'm sure there's something simple to fix this, but short of computing the flipped Y coordinate myself and creating a new rect using that, is there a "proper" way to do this?
Thanks!
It's much easier and less messy to just use CIContext to crop your face from the image. Something like this:
CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
Where inputImage is your UIImage object and faceFeature object is of type CIFaceFeature that you get from [CIDetector featuresInImage:] method.
Since there doesn't seem to be a simple way to do this, I just wrote some code to do it:
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              _picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);
This worked like a charm.
There is no simple way to achieve this. The problem is that images from the iPhone camera are always in portrait mode, and metadata settings are used to get them to display correctly. You will also get better accuracy in your face detection call if you tell it the rotation of the image beforehand. Just to make things complicated, you have to pass it the image orientation in EXIF format.
Fortunately there is an Apple sample project that covers all of this, called SquareCam; I suggest you check it for details.
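A hedged sketch of that orientation hand-off, mapping UIImageOrientation to the EXIF orientation value (1-8) that the detector's CIDetectorImageOrientation option expects. The mapping follows the standard TIFF/EXIF orientation table; treat it as an assumption to verify against SquareCam.

```objectivec
// Map a UIKit image orientation to its EXIF orientation value.
static int ExifOrientationForUIImageOrientation(UIImageOrientation o) {
    switch (o) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1; // default to "up" if the enum gains new cases
}

// Usage: pass the orientation when asking the detector for features.
// NSDictionary *opts = @{ CIDetectorImageOrientation :
//     @(ExifOrientationForUIImageOrientation(picture.imageOrientation)) };
// NSArray *features = [detector featuresInImage:ciImage options:opts];
```

Telling the detector the true orientation up front avoids chasing flipped face rects afterwards, which is the root of the coordinate gymnastics above.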