AVPlayer poster frame - ios

When an AVPlayer is at the first frame I would like to show a different frame in the video like a YouTube poster frame. How would you accomplish something like this?

You would set the contents property of any visible layer accordingly:
UIImage *image = [UIImage imageWithCGImage:...]; // replace ... with the CGImage returned by AVAssetImageGenerator
view.layer.contents = (__bridge id)image.CGImage;
Strangely, you have to create a UIImage and then convert it back to a CGImage to set the layer's contents property. You can let go of the UIImage immediately afterwards: under ARC it is released once the last strong reference disappears, or you can bound its lifetime with an @autoreleasepool block.
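For context, here is a minimal sketch (not from the original answer) of how the CGImage might be produced with AVAssetImageGenerator; asset and posterView are assumed to exist already:
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
NSError *error = nil;
CMTime actualTime;
// Grab the frame at roughly 2 seconds in; pick whatever time suits your poster frame.
CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMake(2, 1)
                                       actualTime:&actualTime
                                            error:&error];
if (cgImage) {
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    posterView.layer.contents = (__bridge id)image.CGImage;
    CGImageRelease(cgImage); // copyCGImageAtTime: follows the Create/Copy rule
}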
Don't let anyone tell you that this is memory-consuming or slow. If you move the work off the main thread (for example with NSThread), it's as fast as any other approach; I've displayed 80 UIImages this way on a single screen before.
But, then again, multi-threading and memory management are my strong suits, as you can see here:
https://youtu.be/7QlaO7WxjGg

Related

Why the UIImage is not released by ARC when I used UIGraphicsGetImageFromCurrentImageContext inside of a block

I am trying to download an image from the server using NSURLSessionDownloadTask (an iOS 7 API), and inside the completion block I want to resize the original image and store it locally. So I wrote a helper method to create the bitmap context, draw the image, and get the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, everything works fine and there is no memory growth. No CGImageCreate/Release functions are called, so there is really nothing to release manually, and wrapping the code in @autoreleasepool doesn't fix it either. Is there any way to fix this? I really want to modify the original image after downloading and before storing it.
Here are some snippets showing the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];
// This is the resize method in a UIImage category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if self is made weak here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Update:
When I dig into this with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use NSCache as a memory cache for photos before storing them persistently, and the raster data eats a lot of memory when the memory cache is used. It seems that once a rendered image is cached, all of its drawing data also stays alive in memory until I release every cached image. If I don't memory-cache the image, none of the raster data coming from my image category method stays alive. I just can't figure out why the drawing data is not released after the image is cached. Shouldn't it be released once drawing is done?
Further update:
I still haven't figured out why the raster data is not released while the image used for drawing is alive, and Analyze certainly gives no warnings about it. So I guess I just have to avoid caching the huge images used for drawing at large sizes, and remove cached drawing images when I no longer need them. If I call [UIImage imageNamed:] and draw with the result, it never seems to be released along with its raster data, since the image is cached by the system; so I call [UIImage imageWithContentsOfFile:] instead, and the memory then behaves well. The remaining growth shows up as "non-object" allocations in the instrument, which I can't explain yet. Simulating a memory warning does release the system-cached images created by [UIImage imageNamed:]; as for the raster data, I will run some more tests tomorrow.
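A minimal sketch of the workaround described above; the file path and targetSize are placeholders, and the resize call is the category method from the question:
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
NSString *path = [cachesDir stringByAppendingPathComponent:@"photo.png"]; // placeholder path
UIImage *resized = nil;
@autoreleasepool {
    // imageWithContentsOfFile: bypasses the system cache that imageNamed: uses,
    // so the decoded bitmap can go away once the last strong reference does.
    UIImage *original = [UIImage imageWithContentsOfFile:path];
    resized = [original resizedImageScaleAspectFitToSize:targetSize];
}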
Try making your category method a class method instead. Perhaps the leak is the original CGImage data which you are overwriting when you call [self drawInRect:imageRect];.

How to layer a distinct alpha channel animation on top of video?

I'm developing an iPhone app and am having issues with the AVFoundation API. I'm used to doing lots of image manipulation and just assumed I would have access to an image buffer, but the video API is quite different.
I want to take a 30-frames-per-second animation, generated as PNGs with an alpha channel, and overlay it onto an arbitrary number of video clips composited inside an AVMutableComposition.
I figured that AVMutableVideoComposition would be a good way to go about it; but as it turns out, the animation tool, AVVideoCompositionCoreAnimationTool, requires a particular kind of CALayer animation. It supports animations of basic properties such as spatial transforms, scaling, fading, etc. -- but my animation already exists as a series of PNGs.
Is this possible with AVFoundation? If so, what is the recommended process?
I think you should work with UIImageView and animationImages:
UIImageView *anImageView = [[UIImageView alloc] initWithFrame:frame];
NSMutableArray *animationImages = [NSMutableArray array];
for (int i = 0; i < 500; i++) {
    [animationImages addObject:[UIImage imageNamed:[NSString stringWithFormat:@"image%d", i]]];
}
anImageView.animationImages = animationImages;
anImageView.animationDuration = 500.0 / 30.0; // 500 frames at 30 fps
[anImageView startAnimating];
I would use the AVVideoCompositing protocol along with AVAsynchronousVideoCompositionRequest. Use [AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] to get the CVPixelBufferRef for the video frame. Then create a CIImage with the appropriate PNG based on the timing you want. Then render the video frame onto a GL_TEXTURE, render the CIImage onto a GL_TEXTURE, draw both into your destination CVPixelBufferRef, and you should get the effect you are looking for.
Something like:
CVPixelBufferRef foregroundPixelBuffer;
CIImage *appropriatePNG = [CIImage imageWithContentsOfURL:pngURL];
[someCIContext render:appropriatePNG toCVPixelBuffer:foregroundPixelBuffer];
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
// ... GL code for rendering using a CVOpenGLESTextureCacheRef
You will need to composite each set of input images (background and foreground) down to a single pixel buffer and then encode those buffers into an H.264 video one frame at a time. Just be aware that this will not be super fast, since there is a lot of memory writing and encoding time involved in creating the H.264. You can have a look at AVRender to see a working example of the approach described here. If you would like to roll your own implementation, take a look at this tutorial, which includes source code that can help you get started.
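If you prefer to stay out of GL entirely, the same idea can be sketched with Core Image alone inside a custom AVVideoCompositing implementation. This is only a rough sketch; theTrackID, pngURLForTime() and self.ciContext are placeholders, not real API:
- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)request
{
    CVPixelBufferRef background = [request sourceFrameByTrackID:theTrackID];
    CVPixelBufferRef output = [request.renderContext newPixelBuffer];

    CIImage *backgroundImage = [CIImage imageWithCVPixelBuffer:background];
    // Pick the PNG that matches the current composition time (assumed helper).
    CIImage *overlay = [CIImage imageWithContentsOfURL:pngURLForTime(request.compositionTime)];

    // Source-over composite of the transparent PNG onto the video frame.
    CIImage *composited = [overlay imageByCompositingOverImage:backgroundImage];
    [self.ciContext render:composited toCVPixelBuffer:output];

    [request finishWithComposedVideoFrame:output];
    CVPixelBufferRelease(output); // newPixelBuffer follows the Create/Copy rule
}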

drawViewHierarchyInRect:afterScreenUpdates: delays other animations

In my app, I use drawViewHierarchyInRect:afterScreenUpdates: in order to obtain a blurred image of my view (using Apple’s UIImage category UIImageEffects).
My code looks like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I noticed during development that many of my animations were delayed after using my app for a bit, i.e., my views were beginning their animations after a noticeable (but less than about a second) pause compared to a fresh launch of the app.
After some debugging, I noticed that the mere act of using drawViewHierarchyInRect:afterScreenUpdates: with screen updates set to YES caused this delay. If this message was never sent during a session of usage, the delay never appeared. Using NO for the screen updates parameter also made the delay disappear.
The strange thing is that this blurring code is completely unrelated (as far as I can tell) to the delayed animations. The animations in question do not use drawViewHierarchyInRect:afterScreenUpdates:, they are CAKeyframeAnimation animations. The mere act of sending this message (with screen updates set to YES) seems to have globally affected animations in my app.
What’s going on?
(I have created videos illustrating the effect: with and without an animation delay. Note the delay in the appearance of the "Check!" speech bubble in the navigation bar.)
UPDATE
I have created an example project to illustrate this potential bug. https://github.com/timarnold/AnimationBugExample
UPDATE No. 2
I received a response from Apple verifying that this is a bug. See answer below.
I used one of my Apple developer support tickets to ask Apple about my issue.
It turns out it is a confirmed bug (radar number 17851775). Their hypothesis for what is happening is below:
The method drawViewHierarchyInRect:afterScreenUpdates: performs its operations on the GPU as much as possible, and much of this work will probably happen outside of your app's address space in another process. Passing YES as the afterScreenUpdates: parameter to drawViewHierarchyInRect:afterScreenUpdates: will cause Core Animation to flush all of its buffers in your task and in the rendering task. As you may imagine, there's a lot of other internal stuff that goes on in these cases too. Engineering theorizes that it may very well be a bug in this machinery related to the effect you are seeing.
In comparison, the method renderInContext: performs its operations inside of your app’s address space and does not use the GPU based process for performing the work. For the most part, this is a different code path and if it is working for you, then that is a suitable workaround. This route is not as efficient as it does not use the GPU based task. Also, it is not as accurate for screen captures as it may exclude blurs and other Core Animation features that are managed by the GPU task.
And they also provided a workaround. They suggested that instead of:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I should do this
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
Hopefully this is helpful for someone!
I tried all the latest snapshot methods in Swift. Most of them didn't work for me in the background, but taking the snapshot this way did.
Create an extension that takes the view's layer and bounds as parameters:
extension UIView {
    func asImage(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
            return renderer.image { rendererContext in
                viewLayer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(viewBounds.size)
            viewLayer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return UIImage(cgImage: image!.cgImage!)
        }
    }
}
Usage
DispatchQueue.main.async {
    let layer = self.selectedView.layer
    let bounds = self.selectedView.bounds
    DispatchQueue.global(qos: .background).async {
        let image = self.selectedView.asImage(viewLayer: layer, viewBounds: bounds)
    }
}
We need to read the layer and bounds on the main thread; the remaining work can then run on a background thread. This gives a smooth user experience without any lag or stutter in the UI.
Why do you have this line (from your sample app):
animation.beginTime = CACurrentMediaTime();
Just remove it, and everything will be as you want it to be.
By setting the animation's begin time explicitly to CACurrentMediaTime() you ignore any time transformations that may be present in the layer tree. Either don't set it at all (by default animations start now) or use the time conversion method:
animation.beginTime = [view.layer convertTime:CACurrentMediaTime() fromLayer:nil];
UIKit adds time transformations to the layer tree when you call afterScreenUpdates:YES to prevent jumps in ongoing animations that would otherwise be caused by intermediate Core Animation commits. If you want to start an animation at a specific time (not now), use the time conversion method mentioned above.
And while we're at it: strongly prefer -[UIView snapshotViewAfterScreenUpdates:] and friends over the -[UIView drawViewHierarchyInRect:] family (preferably passing NO for the afterScreenUpdates parameter). In most cases you don't really need a persistent image; a view snapshot is what you actually want. Using a view snapshot instead of a rendered image has the following benefits:
2x-10x faster
Uses 2x-3x less memory
Always uses the correct color space and buffer format (e.g. on devices with a wide-color screen)
Uses the correct scale and orientation, so you don't need to think about how to position your image so it looks good
Works better with accessibility features (e.g. Smart Invert Colors)
A view snapshot will also capture out-of-process and secure views correctly (while drawViewHierarchyInRect will render them black or white).
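For example, a minimal sketch of the snapshot approach; containerView is assumed to be an existing view in your hierarchy:
// Keep a live snapshot view around instead of rendering a UIImage.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
snapshot.frame = containerView.bounds;
[containerView addSubview:snapshot];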
When the afterScreenUpdates parameter is set to YES, the system has to wait until all pending screen updates have happened before it can render the view.
If you're kicking off animations at the same time then perhaps the rendering and the animations are trying to happen together and this is causing a delay.
It may be worth experimenting with kicking off your animations slightly later to prevent this. Obviously not too much later, because that would defeat the purpose, but a small dispatch_after interval would be worth trying.
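Something along these lines, for example; the 50 ms delay is arbitrary and -startAnimations is a hypothetical method that kicks off your CAKeyframeAnimations:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self startAnimations]; // hypothetical: starts the deferred animations
});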
Have you tried running your code on a background thread?
Here's an example using GCD:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // background thread
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        // update UI on the main thread
    });
});

Capturing a preview image with AVCaptureStillImageOutput

Before Stack Overflow members answer with "You shouldn't. It's a privacy violation," let me explain why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. To make this animation not look like absolute crap, I need to grab a freeze frame before starting it.
The only sane answer I have seen is capturing the buffer from AVCaptureVideoDataOutput. That is fine, but then the frames arrive as kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to turn into a CGImage with CGBitmapContextCreate (see "How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS").
When capturing a still photo, are there any serious quality considerations when using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput? The user will be taking both video and still photos (not just freeze-frame preview stills). Also, can someone "explain it to me like I'm five": what are the differences between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides the fact that one doesn't work on old hardware?
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
UIGraphicsBeginImageContext(previewView.frame.size);
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where previewLayer is the AVCaptureVideoPreviewLayer added to previewView. image is rendered from this layer and can be used for your animation.

Memory management for CGImageCreateWithImageInRect used for CAKeyframeAnimation

I'm using CAKeyframeAnimation to animate a few PNG images. I cut them out of a single PNG using CGImageCreateWithImageInRect.
When I run Analyze in Xcode it shows potential memory leaks, and they come from CGImageCreateWithImageInRect. But when I call CGImageRelease after the animation object is created, no images are shown (I know why; yet when I don't release, there are leaks).
Could someone explain this memory situation to me, and what the best solution is? I was thinking about creating a UIImage from the CGImageRef for each "cut", but CAKeyframeAnimation takes an array of CGImageRefs, so I thought it wasn't necessary to create the UIImages.
UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];

cutRect = CGRectMake(0*dimForImg.width, 0*dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image1 = CGImageCreateWithImageInRect([source CGImage], cutRect);
cutRect = CGRectMake(1*dimForImg.width, 0*dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image2 = CGImageCreateWithImageInRect([source CGImage], cutRect);

NSArray *images = [[NSArray alloc] initWithObjects:(__bridge id)image1, (__bridge id)image2, (__bridge id)image2, (__bridge id)image1, (__bridge id)image2, (__bridge id)image1, nil];

CAKeyframeAnimation *myAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
myAnimation.calculationMode = kCAAnimationDiscrete;
myAnimation.duration = kMyTime;
myAnimation.values = images; // NSArray of CGImageRefs
[myAnimation setValue:@"ANIMATION_MY" forKey:@"MyAnimation"];
myAnimation.removedOnCompletion = NO;
myAnimation.fillMode = kCAFillModeForwards;

CGImageRelease(image1);
CGImageRelease(image2); // YES or NO?
Two solutions:
First: Use [UIImageView animationImages] to animate between multiple images.
Second: store the images as ivars and release them in your class's dealloc.
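For the second option, something like this (MyAnimatingView is a placeholder class name); ARC does not manage CGImageRef, so the explicit releases in dealloc are still required:
@implementation MyAnimatingView {
    CGImageRef _frame1; // cut with CGImageCreateWithImageInRect, owned by this object
    CGImageRef _frame2;
}

- (void)dealloc
{
    CGImageRelease(_frame1);
    CGImageRelease(_frame2);
}
@end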
Not sure if this helps, but I ran into something similar: using CGImageCreateWithImageInRect to take sections of an image and then showing those in image views caused huge memory usage. One thing that improved memory usage a lot was encoding the sub-image into PNG data and decoding it back into an image again.
I know, that sounds ridiculous and completely redundant, right? But it helped a lot. According to the documentation of CGImageCreateWithImageInRect:
"The resulting image retains a reference to the original image, which means you may release the original image after calling this function."
It seems as though when the image is shown, the UI copies the image data, including the original image data it references; that's why it uses so much memory. But when you write it out to PNG and decode it back, you create independent image data that is smaller and does not depend on the original. This is just my guess.
Anyway, try it and see if it helps.
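A sketch of that round trip, reusing source and cutRect from the question's code:
CGImageRef subimage = CGImageCreateWithImageInRect(source.CGImage, cutRect);
// Re-encode and decode so the result no longer retains the full source bitmap.
NSData *pngData = UIImagePNGRepresentation([UIImage imageWithCGImage:subimage]);
CGImageRelease(subimage);
UIImage *independentImage = [UIImage imageWithData:pngData];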
