How to layer a distinct alpha channel animation on top of video? - iOS

I'm developing an iPhone app and am having issues with the AVFoundation API; I'm used to doing a lot of image manipulation and just figured I would have access to an image buffer, but the video API is quite different.
I want to take a 30 frame/sec animation, generated as PNGs with a transparency channel, and overlay it onto an arbitrary number of video clips composited inside an AVMutableComposition.
I figured that AVMutableVideoComposition would be a good way to go about it, but as it turns out, the animation tool, AVVideoCompositionCoreAnimationTool, requires a special kind of CALayer animation. It supports basic things like spatial transforms, scaling, and fading, but my animation already exists as a series of PNGs.
Is this possible with AVFoundation? If so, what is the recommended process?

I think you should work with UIImageView and animationImages:
UIImageView *anImageView = [[UIImageView alloc] initWithFrame:frame];
NSMutableArray *animationImages = [NSMutableArray array];
for (int i = 0; i < 500; i++) {
    [animationImages addObject:[UIImage imageNamed:[NSString stringWithFormat:@"image%d", i]]];
}
anImageView.animationImages = animationImages;
anImageView.animationDuration = 500.0 / 30.0; // 500 frames at 30 fps
[anImageView startAnimating];

I would use the AVVideoCompositing protocol along with AVAsynchronousVideoCompositionRequest. Use [AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] to get the CVPixelBufferRef for the video frame. Then create a CIImage from the appropriate PNG based on the timing you want. Then render the video frame into a GL texture, render the CIImage into another GL texture, and draw them both into your destination CVPixelBufferRef; you should get the effect you are looking for.
Something like:
CVPixelBufferRef foregroundPixelBuffer; // allocate this, e.g. from the render context's pixel buffer pool
CIImage *appropriatePNG = [CIImage imageWithContentsOfURL:pngURL];
[someCIContext render:appropriatePNG toCVPixelBuffer:foregroundPixelBuffer];
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
//... GL code for rendering using CVOpenGLESTextureCacheRef
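As a variation, you could let Core Image do the compositing instead of raw GL textures. A rough sketch along the same lines (someCIContext, theTrackID and pngURL as above; error handling omitted):
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
CVPixelBufferRef destinationPixelBuffer = [asynchronousVideoRequest.renderContext newPixelBuffer]; // caller owns this buffer
CIImage *backgroundImage = [CIImage imageWithCVPixelBuffer:backgroundPixelBuffer];
CIImage *overlayImage = [CIImage imageWithContentsOfURL:pngURL]; // the PNG for this frame time
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:overlayImage forKey:kCIInputImageKey];
[composite setValue:backgroundImage forKey:kCIInputBackgroundImageKey];
[someCIContext render:composite.outputImage toCVPixelBuffer:destinationPixelBuffer];
[asynchronousVideoRequest finishWithComposedVideoFrame:destinationPixelBuffer];
CVPixelBufferRelease(destinationPixelBuffer);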

You will need to composite each set of input images (background and foreground) down to a single pixel buffer and then encode those buffers into an H.264 video one frame at a time. Just be aware that this will not be super fast, since there is a lot of memory writing and encoding time involved in creating the H.264. You can have a look at AVRender to see a working example of the approach described here. If you would like to roll your own implementation, take a look at this tutorial, which includes source code that can help you get started.
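For the frame-by-frame encode, a minimal sketch with AVAssetWriter might look like this (outputURL, width, height, frameIndex and composedBuffer are placeholders; real code should use requestMediaDataWhenReadyOnQueue:usingBlock: rather than polling):
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *settings = @{ AVVideoCodecKey : AVVideoCodecH264,
                            AVVideoWidthKey : @(width),
                            AVVideoHeightKey : @(height) };
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
// For each composited frame (here assumed to be 30 fps):
while (!input.readyForMoreMediaData) { /* wait */ }
[adaptor appendPixelBuffer:composedBuffer withPresentationTime:CMTimeMake(frameIndex, 30)];
// After the last frame:
[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ /* encoding done */ }];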

Related

Capturing a preview image with AVCaptureStillImageOutput

Before Stack Overflow members answer with "You shouldn't. It's a privacy violation," let me counter with why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. In order to make this animation not look like absolute crap, I need to grab a freeze frame before it starts.
The only sane answer I have seen is capturing the buffer of AVCaptureVideoDataOutput, which is fine, but then I can't let the user take video/photos in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to turn into a CGImage with CGBitmapContextCreate (see How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS).
When capturing a still photo, are there any serious quality considerations in using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput, given that the user will be taking both video and still photos (not just freeze-frame preview stills)? Also, can someone explain it to me like I'm five: what are the differences between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides one not working on old hardware?
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
UIGraphicsBeginImageContext(previewView.frame.size);
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here previewLayer is the AVCaptureVideoPreviewLayer added to previewView, and "image" is rendered from this layer and can be used for your animation.
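If you do go the AVCaptureVideoDataOutput route for the freeze frame, requesting kCVPixelFormatType_32BGRA in the output's videoSettings makes the buffer-to-UIImage step straightforward. A rough sketch of the delegate callback (illustrative, not production code):
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // BGRA maps to a little-endian 32-bit context with premultiplied alpha first
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *freezeFrame = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    // hand freezeFrame to the UI on the main queue
}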

How to load animation in background in cocos2d?

I am currently developing a cocos2d game for iPad, and it contains a lot of animations. I have previously used Zwoptex for creating sprite sheets and adding animations to my projects.
My game has 10 levels, and after each level is completed an animation plays. The animation is different for each level, and each animation frame is the same size as the device screen, so I did not create a sprite sheet file; instead I am loading the images directly. My problem is that the animation takes too much time to start playing.
How do I fix it? Please, can anyone guide me? Is it possible to load full-screen-size images (1024x768) into a plist? In total there are 10 levels and each level has 20 animation frames, so 10 x 20 = 200 images would need to be loaded into sprite sheets.
I am using this code for the animation:
CCAnimation *animation = [CCAnimation animation];
for (int i = 1; i <= 20; i++)
{
    [animation addFrameWithFile:[NSString stringWithFormat:@"Level%dAni%d.png", level, i]];
}
animation.delayPerUnit = 0.3f;
animation.restoreOriginalFrame = YES;
id action = [CCAnimate actionWithAnimation:animation];
My question: is it possible to load full-screen animations from a sprite sheet? And the animation load time varies and takes too long; how do I fix that?
Please help me.
Do the math and figure out the amount of memory you would need for 200 pics at 1024x768 pixels: at 4 bytes per pixel that is roughly 200 x 1024 x 768 x 4 ≈ 630 MB. Way too much memory for any iSomething device.
If you have a performance problem (i.e. are you running this on a device?), then there are two things you can do to improve image load speed:
Convert your images to .pvr.gz (I recommend TexturePacker for this). They load significantly faster than .png.
Use an RGBA4444 pixel format (again, you can do this with TexturePacker), and set the texture format in cocos2d just prior to loading the images. The images will be smaller and take much less memory.
In your code, where you run the animation:
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
// load your images and do your anim
...
// at the completion of the anim
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
By using Cocos2D you can animate sprite sheet images easily.
Try the following examples using Cocos2D:
http://www.smashious.com/cocos2d-sprite-sheet-tutorial-animating-a-flying-bird/309/ http://www.raywenderlich.com/1271/how-to-use-animations-and-sprite-sheets-in-cocos2d
Try the following examples without using Cocos2D:
http://www.dalmob.org/2010/12/07/ios-sprite-animations-with-notifications/
http://developer.glitch.com/blog/2011/11/10/avatar-animations-in-ios/
Use NSThread and load your animation in this thread.
NSThread *thread = [[[NSThread alloc] initWithTarget:self selector:@selector(preloadFireWorkAnimation) object:nil] autorelease];
[thread start];
-(void)preloadFireWorkAnimation
{
// load your animation here;
}
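If you stay within cocos2d, its texture cache can also do the background load for you. A rough sketch (the file names, the level variable and the callback are placeholders for your own level frames):
// Preload the level's animation frames off the main thread via cocos2d's texture cache.
- (void)preloadLevelAnimation
{
    for (int i = 1; i <= 20; i++) {
        NSString *file = [NSString stringWithFormat:@"Level%dAni%d.png", level, i];
        [[CCTextureCache sharedTextureCache] addImageAsync:file target:self selector:@selector(textureLoaded:)];
    }
}

- (void)textureLoaded:(CCTexture2D *)texture
{
    // called on the main thread once each texture is cached
}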

Convert PIX object to UIImage

I am trying to apply a threshold using the Leptonica image library with the function
l_int32 pixOtsuAdaptiveThreshold()
I believe I have successfully used this function, but as you can see, it returns an int. I am not sure where to go from here: how do I convert the result into a UIImage, or convert the PIX object I passed in into a UIImage? Basically I just want a UIImage back after applying the threshold.
The API for this function can be found here: http://tpgit.github.io/Leptonica/binarize_8c.html#aaef1d6ed54b87144b98c72f675ad7a4c
Does anyone know what I must do to get a UIImage back?
Thanks!
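One way to get a UIImage back, assuming your Leptonica build includes libpng support, is to serialize the thresholded PIX to an in-memory PNG and hand the bytes to UIImage. A rough sketch (pixd is assumed to be the output PIX from pixOtsuAdaptiveThreshold()):
l_uint8 *pngData = NULL;
size_t pngSize = 0;
// gamma of 0.0 means no gamma chunk is written
if (pixWriteMemPng(&pngData, &pngSize, pixd, 0.0f) == 0 && pngData != NULL) {
    NSData *data = [NSData dataWithBytes:pngData length:pngSize]; // copies the buffer
    UIImage *thresholdedImage = [UIImage imageWithData:data];
    lept_free(pngData); // or free(), depending on your Leptonica version
    // use thresholdedImage here
}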
Here is another user converting data to an image:
Reading binary image data from a web service into UIImage
But if you can, I would look into Brad Larson's awesome GPUImage filters, which may be better suited for you: https://github.com/BradLarson/GPUImage. They are very easy to use.
Added answer for using the GPUImage framework: sorry, I can't help with the first approach. As for the second, if you would like to continue with it and simply need a threshold effect, you can use GPUImage as a framework. After setting up adaptive thresholding, I simply used a switch case for the different effects (or call init however you want), and I used a slider to control the effect or select a predetermined value. The code ends up as easy as this:
case GPUIMAGE_ADAPTIVETHRESHOLD:
{
    self.title = @"Adaptive Threshold";
    self.filterSettingsSlider.hidden = NO;
    [self.filterSettingsSlider setMinimumValue:1.0];
    [self.filterSettingsSlider setMaximumValue:20.0];
    [self.filterSettingsSlider setValue:1.0];
    UIImage *newFilteredImage = [[[GPUImageAdaptiveThresholdFilter alloc] init] imageByFilteringImage:[self.originalImageView image]];
    self.myEditedImageView = newFilteredImage;
}; break;

Effectively scaling multiple images in iOS

I have 15 images being displayed on a single view. I need to scale the images based on the user's voice (the louder they speak, the larger the images need to scale). At the moment I am using averagePowerForChannel on the AVAudioRecorder and frequently sampling the audio to scale all the images appropriately. The code I'm using to do the scaling looks something like this:
- (void)scaleImages:(float)scalingFactor {
    for (UIView *imageHolder in self.imageView.subviews) {
        UIView *image = [imageHolder.subviews objectAtIndex:0];
        image.transform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
        image.hidden = scalingFactor <= 0.0f;
    }
}
This works fine when I have a single image, but when I do this for all 15 images it becomes incredibly laggy and unresponsive. I have tried several different options - sampling less frequently, normalizing the sampling output, etc but nothing seems to make a difference.
How would I optimize this?
You might want to try the GPUImage framework. It uses the GPU (via OpenGL ES shaders) to accelerate image transforms.
https://github.com/BradLarson/GPUImage
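As an illustration only, a GPU-side scale of a single image with GPUImage's transform filter might look like this (sourceImage and scalingFactor are placeholders; for 15 views updated on every audio sample you would want to create the filters once and reuse them):
#import "GPUImage.h"

GPUImageTransformFilter *scaleFilter = [[GPUImageTransformFilter alloc] init];
scaleFilter.affineTransform = CGAffineTransformMakeScale(scalingFactor, scalingFactor);
UIImage *scaledImage = [scaleFilter imageByFilteringImage:sourceImage];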

How to implement fast image filters on iOS platform

I am working on an iOS application where the user can apply a certain set of photo filters. Each filter is basically a set of Photoshop actions with specific parameters. These actions are:
Levels adjustment
Brightness / Contrast
Hue / Saturation
Single and multiple overlay
I've replicated all these actions in my code using arithmetic expressions that loop through all the pixels in the image. But when I run my app on an iPhone 4, each filter takes about 3-4 seconds to apply, which is quite a long time for the user to wait. The image size is 640 x 640 px, which is @2x of my view size because it's displayed on a Retina display. I've found that my main problem is the levels modification, which calls the pow() C function every time I need to adjust the gamma. I am using floats, not doubles, of course, because ARMv6 and ARMv7 are slow with doubles. I tried enabling and disabling Thumb and got the same result.
Here is an example of the simplest filter in my app, which nevertheless runs pretty fast (2 secs). The other filters include more expressions and pow() calls, making them slower:
https://gist.github.com/1156760
I've seen some solutions that use the Accelerate framework's vDSP matrix transformations for fast image modifications. I've also seen OpenGL ES solutions. I am not sure they can handle what I need, but maybe it's just a matter of translating my set of changes into some good convolution matrix?
Any advice would be helpful.
Thanks,
Andrey.
For the filter in your example code, you could use a lookup table to make it much faster. I assume your input image is 8 bits per color and you are converting it to float before passing it to this function. For each color, this only gives 256 possible values and therefore only 256 possible output values. You could precompute these and store them in an array. This would avoid the pow() calculation and the bounds checking since you could factor them into the precomputation.
It would look something like this:
unsigned char table[256];
for (int i = 0; i < 256; i++) {
    float tmp = pow((float)i / 255.0f, 1.3f) * 255.0;
    table[i] = tmp > 255 ? 255 : (unsigned char)tmp;
}
for (int i = 0; i < length; ++i)
    m_OriginalPixelBuf[i] = table[m_OriginalPixelBuf[i]];
In this case, you only have to perform pow() 256 times instead of 3*640*640 times. You would also avoid the branching caused by the bounds checking in your main image loop, which can be costly. And you would not have to convert to float either.
An even faster way would be to precompute the table outside the program and just put the 256 coefficients in the code.
None of the operations you have listed there should require a convolution or even a matrix multiply. They are all pixel-wise operations, meaning that each output pixel only depends on the single corresponding input pixel. You would need to consider convolution for operations like blurring or sharpening where multiple input pixels affect a single output pixel.
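To make the distinction concrete, here is a plain C sketch of both kinds of operation on a single-channel buffer (placeholder buffers and kernel; edge pixels skipped for brevity):
// Pixel-wise: each output pixel depends only on the corresponding input pixel.
static void applyPixelwise(unsigned char *buf, int count, const unsigned char table[256]) {
    for (int i = 0; i < count; i++)
        buf[i] = table[buf[i]];
}

// Convolution (blur/sharpen): each output pixel mixes a neighborhood of input pixels.
static void convolve3x3(const unsigned char *in, unsigned char *out,
                        int width, int height, const int kernel[3][3], int divisor) {
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    sum += kernel[ky + 1][kx + 1] * in[(y + ky) * width + (x + kx)];
            sum /= divisor;
            out[y * width + x] = (unsigned char)(sum < 0 ? 0 : (sum > 255 ? 255 : sum));
        }
    }
}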
If you're looking for the absolute fastest way to do this, you're going to want to use the GPU to handle the processing. It's built to do massively parallel operations, like color adjustments on single pixels.
As I've mentioned in other answers, I measured a 14X - 28X improvement in performance when running an image processing operation using OpenGL ES instead of on the CPU. You can use the Accelerate framework to do faster on-CPU image manipulation (I believe Apple claims around a ~4-5X boost is possible here), but it won't be as fast as OpenGL ES. It can be easier to implement, however, which is why I've sometimes used Accelerate for this over OpenGL ES.
iOS 5.0 also brings over Core Image from the desktop, which gives you a nice wrapper around these kind of on-GPU image adjustments. However, there are some limitations to the iOS Core Image implementation that you don't have when working with OpenGL ES 2.0 shaders directly.
I present an example of an OpenGL ES 2.0 shader image filter in my article here. The hardest part about doing this kind of processing is getting the OpenGL ES scaffolding set up. Using my sample application there, you should be able to extract that setup code and apply your own filters using it. To make this easier, I've created an open source framework called GPUImage that handles all of the OpenGL ES interaction for you. It has almost every filter you list above, and most run in under 2.5 ms for a 640x480 frame of video on an iPhone 4, so they're far faster than anything processed on the CPU.
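For example, one of the adjustments from the list above comes down to a couple of lines with GPUImage (inputImage and the contrast value are placeholders):
#import "GPUImage.h"

// Brightness / contrast style adjustment, run on the GPU
GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init];
contrastFilter.contrast = 1.5;
UIImage *filteredImage = [contrastFilter imageByFilteringImage:inputImage];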
As I said in a comment, you should post this question on the official Apple Developer Forums as well.
That aside, one real quick check: are you calling pow( ) or powf( )? Even if your data is float, calling pow( ) will get you the double-precision math library function, which is significantly slower than the single-precision variant powf( ) (and you'll have to pay for the extra conversions between float and double as well).
And a second check: have you profiled your filters in Instruments? Do you actually know where the execution time is being spent, or are you guessing?
I actually wanted to do all this myself, but then I found Silverberg's Image Filters. You can apply various Instagram-type image filters to your images. This is so much better than the other image filters out there - GLImageProcessing or Cimg.
Also check Instagram Image Filters on iPhone.
Hope this helps...
From iOS 5 upwards, you can use the Core Image filters to adjust a good range of image parameters.
To adjust contrast for example, this code works like a charm:
- (void)setImageContrast:(float)contrast forImageView:(UIImageView *)imageView {
    if (contrast > MIN_CONTRAST && contrast < MAX_CONTRAST) {
        CIImage *inputImage = [[CIImage alloc] initWithImage:imageView.image];
        CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIColorControls"];
        [exposureAdjustmentFilter setDefaults];
        [exposureAdjustmentFilter setValue:inputImage forKey:@"inputImage"];
        [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:contrast] forKey:@"inputContrast"]; //default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputSaturation"]; //default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:0.0f] forKey:@"inputBrightness"];
        CIImage *outputImage = [exposureAdjustmentFilter valueForKey:@"outputImage"];
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef outputCGImage = [context createCGImage:outputImage fromRect:outputImage.extent];
        imageView.image = [UIImage imageWithCGImage:outputCGImage];
        CGImageRelease(outputCGImage); // createCGImage returns an owned reference
    }
}
N.B. The default value for contrast is 1.0 (the maximum suggested value is 4.0).
Also, the contrast is calculated here on the imageView's image, so calling this method repeatedly will compound the contrast. That is, if you call this method first with a contrast value of 2.0 and then again with a contrast value of 3.0, you will get the original image with its contrast increased by a factor of 6.0 (2.0 * 3.0), not 5.0.
Check the Apple documentation for more filters and parameters.
To list all available filters and parameters in code, just run this loop:
NSArray* filters = [CIFilter filterNamesInCategories:nil];
for (NSString* filterName in filters)
{
    NSLog(@"Filter: %@", filterName);
    NSLog(@"Parameters: %@", [[CIFilter filterWithName:filterName] attributes]);
}
This is an old thread, but I got to it from another link on SO, so people still read it.
With iOS 5, Apple added support for Core Image, and a decent number of Core Image filters. I'm pretty sure all the ones the OP mentioned are available.
Core Image uses OpenGL shaders under the covers, so it's really fast. It's much easier to use than OpenGL, however. If you aren't already working in OpenGL and just want to apply filters to CGImage or UIImage objects, Core Image filters are the way to go.
