Render alpha image on CVImageBufferRef - iOS

I am trying to achieve the ripple effect you can see in the musical.ly app:
https://drive.google.com/open?id=1uXExnmWQ7OfSGLFXdH7-5imay8tW87vO
Here is my approach: I render a scaled, semi-transparent copy of each frame into the pixel buffer, and I show all the sample buffers using an AVSampleBufferDisplayLayer. I want to show this animation for 3 to 5 seconds; once the user is done, I convert it to MP4 using AVAssetWriter.
The problem is that I am unable to composite the alpha image onto the CVPixelBuffer. If there is a better approach to this animation, please guide me.
First, I read all the sample buffers using AVAssetReaderTrackOutput:
NSDictionary *readerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                                                    outputSettings:readerOutputSettings];
[reader addOutput:readerOutput];
[reader startReading];

while ((sample = [readerOutput copyNextSampleBuffer]))
{
    [samples addObject:(__bridge id)sample];
    CFRelease(sample);
}
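(For reference, showing these decoded buffers on an AVSampleBufferDisplayLayer could look roughly like the sketch below; the layer setup is an assumption, not code from the question.)

AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
displayLayer.frame = self.view.bounds;
displayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
[self.view.layer addSublayer:displayLayer];

// Enqueue each decoded sample buffer for display.
// (Timing/pacing omitted; a controlTimebase would be needed for real playback.)
for (id sampleObject in samples) {
    [displayLayer enqueueSampleBuffer:(__bridge CMSampleBufferRef)sampleObject];
}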
Next, I lower the alpha of the last frame using a CIColorMatrix filter:

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)[samples lastObject]);
CIImage *filteredImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:nil];

CIFilter *theFilter = [CIFilter filterWithName:@"CIColorMatrix"];
[theFilter setDefaults];
[theFilter setValue:filteredImage forKey:@"inputImage"];

// Pass R, G and B through unchanged; scale alpha by 0.5.
CIVector *theRVector = [CIVector vectorWithX:1 Y:0 Z:0 W:0];
[theFilter setValue:theRVector forKey:@"inputRVector"];
CIVector *theGVector = [CIVector vectorWithX:0 Y:1 Z:0 W:0];
[theFilter setValue:theGVector forKey:@"inputGVector"];
CIVector *theBVector = [CIVector vectorWithX:0 Y:0 Z:1 W:0];
[theFilter setValue:theBVector forKey:@"inputBVector"];
CIVector *theAVector = [CIVector vectorWithX:0 Y:0 Z:0 W:0.5];
[theFilter setValue:theAVector forKey:@"inputAVector"];
CIVector *theBiasVector = [CIVector vectorWithX:0 Y:0 Z:0 W:0];
[theFilter setValue:theBiasVector forKey:@"inputBiasVector"];

CIImage *result = [theFilter outputImage];
Then I render the faded CIImage into the first frame's pixel buffer:
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
CIContext *cicontext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
[cicontext render:result
  toCVPixelBuffer:CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)[samples firstObject])
           bounds:CGRectMake(0, 0, [result extent].size.width, [result extent].size.height)
       colorSpace:CGColorSpaceCreateDeviceRGB()];
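Note that -render:toCVPixelBuffer: overwrites the destination pixels rather than blending onto them, which would explain why the alpha never shows up. One way around this (a sketch, reusing the variable names from the code above) is to composite the faded image over the base frame with CISourceOverCompositing first, then render the flattened result instead:

// Base frame that the faded copy should appear on top of.
CVPixelBufferRef basePixelBuffer = CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)[samples firstObject]);
CIImage *baseImage = [CIImage imageWithCVPixelBuffer:basePixelBuffer options:nil];

// Blend the semi-transparent image over the base frame.
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:result forKey:kCIInputImageKey];              // faded/scaled copy
[composite setValue:baseImage forKey:kCIInputBackgroundImageKey]; // original frame
CIImage *blended = [composite outputImage];

// Render the already-flattened image; no destination blending is needed now.
[cicontext render:blended
  toCVPixelBuffer:basePixelBuffer
           bounds:[baseImage extent]
       colorSpace:CGColorSpaceCreateDeviceRGB()];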
I got this result, but the expected result is the effect shown in the linked video.
Thanks in advance.

This image effect appears to be a fixed image with a zoom and a fade applied to a copy over the original. You should not need an animation library for this: just create a PNG and two layers to render it, then zoom one of the layers and fade it out as the zoom happens. Start with a 32 BPP image that contains alpha for the see-through parts and you should be fine. If you want to capture the effect as H.264, that is a totally different problem.
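For illustration, a minimal sketch of that two-layer approach using Core Animation (the image name, duration, and scale factor are assumptions):

// Both layers show the same 32 BPP PNG with alpha.
UIImage *overlay = [UIImage imageNamed:@"ripple.png"]; // hypothetical asset
CALayer *baseLayer = [CALayer layer];
baseLayer.frame = self.view.bounds;
baseLayer.contents = (__bridge id)overlay.CGImage;

CALayer *zoomLayer = [CALayer layer];
zoomLayer.frame = self.view.bounds;
zoomLayer.contents = (__bridge id)overlay.CGImage;

[self.view.layer addSublayer:baseLayer];
[self.view.layer addSublayer:zoomLayer];

// Zoom the top layer up while fading it out.
CABasicAnimation *zoom = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
zoom.fromValue = @1.0;
zoom.toValue = @1.5;

CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;
fade.toValue = @0.0;

CAAnimationGroup *group = [CAAnimationGroup animation];
group.animations = @[zoom, fade];
group.duration = 0.6;
group.repeatCount = HUGE_VALF; // loop while the effect is active
[zoomLayer addAnimation:group forKey:@"ripple"];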

Related

Imageview not resetting to original after sharpness filter used

I am not able to get the original image back after applying a filter to the image view. I want to recover the original image after moving the slider from 1 back to 0 (max to min, i.e. in the reverse direction).
Below is the code for applying the sharpness effect:
- (IBAction)actionSharp:(UISlider *)sender {
    demoImage = _affectView.image;
    CIImage *image = [CIImage imageWithCGImage:demoImage.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSNumber *testNSNumber = [NSNumber numberWithFloat:_sharpActionWithSlider.value];
    NSLog(@"ffghhn %f", _sharpActionWithSlider.value);
    // Despite the variable name, CISharpenLuminance is a sharpen filter, not a blur.
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CISharpenLuminance" keysAndValues:@"inputImage", image, nil];
    [gaussianBlurFilter setDefaults];
    [gaussianBlurFilter setValue:testNSNumber forKey:kCIInputSharpnessKey];
    CIImage *result2 = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result2 fromRect:[result2 extent]];
    UIImage *sharp = [UIImage imageWithCGImage:cgImage];
    self.affectView.image = sharp;
    self.affectView.alpha = sender.value;
    CGImageRelease(cgImage);
}
Just copy the image to which you are adding the filter and keep it aside. Apply effects to the copied image and show the result in your _affectView image view.
On every slider action, start from your original image and apply the effect to that, rather than to the image taken from the image view.
Create a global variable mainImage:
UIImage *mainImage;
In your viewDidLoad, assign the original image to it:
mainImage = [UIImage imageNamed:@"yourMainImage"]; // assign from a local source or a URL
In your IBAction, get a reference to the original image on every slider action instead of fetching the image from the image view:
- (IBAction)actionSharp:(UISlider *)sender {
    UIImage *demoImage = mainImage;
    CIImage *image = [CIImage imageWithCGImage:demoImage.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSNumber *testNSNumber = [NSNumber numberWithFloat:_sharpActionWithSlider.value];
    NSLog(@"ffghhn %f", _sharpActionWithSlider.value);
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CISharpenLuminance" keysAndValues:@"inputImage", image, nil];
    [gaussianBlurFilter setDefaults];
    [gaussianBlurFilter setValue:testNSNumber forKey:kCIInputSharpnessKey];
    CIImage *result2 = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result2 fromRect:[result2 extent]];
    UIImage *sharp = [UIImage imageWithCGImage:cgImage];
    self.affectView.image = sharp;
    self.affectView.alpha = sender.value;
    CGImageRelease(cgImage);
}
You are getting your demo image from _affectView, applying a filter to it, and then saving it into _affectView again. The next time you retrieve _affectView into a UIImage it will not be the original but the modified one, so it is impossible to get back to the original.
Just once, at the start (outside this method), save your original image in demo, and modify demo every time you want.

How can I improve the performance of this CIFilter chain?

I am creating my context (with an EAGLContext) and my filters externally and reusing them; however, I'm still finding that it takes about one second per UIImage to process with this filter chain. And despite the fact that it runs on a background thread, the main thread lags badly and my frame rate suffers. I've also tried the software-rendering option, but I can't tell much of a difference.
+ (CGImageRef)cgImageForImage:(UIImage *)image
                 rotatedByHue:(CGFloat)hue
                   saturation:(CGFloat)saturation
                       matrix:(NSArray *)matrix
                      context:(CIContext *)context
                    hueFilter:(CIFilter *)hueFilter
             saturationFilter:(CIFilter *)saturationFilter
                 matrixFilter:(CIFilter *)matrixFilter {
    CIImage *sourceCore = [CIImage imageWithCGImage:[image CGImage]];
    CIImage *resultCore = nil;

    if (hue != 0.0f) {
        [hueFilter setDefaults];
        [hueFilter setValue:sourceCore forKey:kCIInputImageKey];
        [hueFilter setValue:[NSNumber numberWithFloat:hue] forKey:kCIInputAngleKey];
        resultCore = [hueFilter outputImage];
    }
    if (saturation != 1.0f) {
        [saturationFilter setDefaults];
        [saturationFilter setValue:(resultCore ? resultCore : sourceCore) forKey:kCIInputImageKey];
        [saturationFilter setValue:[NSNumber numberWithFloat:saturation] forKey:kCIInputSaturationKey];
        resultCore = [saturationFilter outputImage];
    }
    if (matrix) {
        [matrixFilter setDefaults];
        [matrixFilter setValue:(resultCore ? resultCore : sourceCore) forKey:kCIInputImageKey];
        [matrixFilter setValue:[self vectorForMatrixRow:matrix[0]] forKey:@"inputRVector"];
        [matrixFilter setValue:[self vectorForMatrixRow:matrix[1]] forKey:@"inputGVector"];
        [matrixFilter setValue:[self vectorForMatrixRow:matrix[2]] forKey:@"inputBVector"];
        [matrixFilter setValue:[self vectorForMatrixRow:matrix[3]] forKey:@"inputAVector"];
        [matrixFilter setValue:[self vectorForMatrixRow:matrix[4]] forKey:@"inputBiasVector"];
        resultCore = [matrixFilter outputImage];
    }
    // Caller is responsible for releasing the returned CGImage.
    return [context createCGImage:resultCore fromRect:[resultCore extent]];
}

+ (CIVector *)vectorForMatrixRow:(NSArray *)row {
    return [CIVector vectorWithX:[row[0] floatValue] Y:[row[1] floatValue] Z:[row[2] floatValue] W:[row[3] floatValue]];
}
Since I am using an EAGLContext, would there be a benefit to rendering each outputImage to a GLKView?
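(For what it's worth, drawing the filter output straight into a GLKView would avoid the createCGImage round trip entirely. A rough sketch, assuming the CIContext passed into the method above was created with contextWithEAGLContext: from the same eaglContext backing the view; all names here are assumptions:)

// A GLKView sharing the EAGL context used to create the CIContext.
GLKView *glkView = [[GLKView alloc] initWithFrame:frame context:eaglContext];
[glkView bindDrawable];

// Draw the CIFilter output into the view's drawable, skipping CGImage creation.
CGRect destRect = CGRectMake(0, 0, glkView.drawableWidth, glkView.drawableHeight);
[context drawImage:resultCore inRect:destRect fromRect:[resultCore extent]];
[glkView display];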

Get hue value from a UIColor?

I am using the following code to change the hue of a UIImage:
UIImage *image = [UIImage imageNamed:@"Image.png"];

// Create a Core Image version of the image.
CIImage *sourceCore = [CIImage imageWithCGImage:[image CGImage]];

// Apply a CIHueAdjust filter
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue:sourceCore forKey:@"inputImage"];
[hueAdjust setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputAngle"];
CIImage *resultCore = [hueAdjust valueForKey:@"outputImage"];

// Convert the filter output back into a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef resultRef = [context createCGImage:resultCore fromRect:[resultCore extent]];
UIImage *result = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
The code works fine; I'm just trying to find a way to get the correct NSNumber value of inputAngle for a specific color.
So can I maybe:
Get the value by converting a [UIColor colorWithRed:0.89 green:0.718 blue:0.102 alpha:1.0],
or maybe just use the UIColor in some way?
Or is there a list with the specific numbers for each color?
Thanks in advance.
This post might answer your question:
Is there function to convert UIColor to Hue Saturation Brightness?
The angle should be the hue value you get there.
Here you can find some information on how the angle has to be understood:
iOS: Values for CIFilter (Hue) from Photoshop
EDIT
Here is some example code based on yours:
First, let's define the color you want to filter (for your inputAngle):

UIColor *myColor = [UIColor redColor]; // the color you want to filter

Then we determine the hue value of that color (that's the actual inputAngle):

CGFloat hue;
CGFloat saturation;
CGFloat brightness;
CGFloat alpha;
[myColor getHue:&hue saturation:&saturation brightness:&brightness alpha:&alpha];

This is your code (unchanged):

UIImage *image = [UIImage imageNamed:@"Image.png"];

// Create a Core Image version of the image.
CIImage *sourceCore = [CIImage imageWithCGImage:[image CGImage]];

// Apply a CIHueAdjust filter
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue:sourceCore forKey:@"inputImage"];

Here we apply the filter using the determined hue value of the chosen color (note that getHue: returns hue in the 0-1 range, while inputAngle is in radians, so you may need to scale by 2 * M_PI):

[hueAdjust setValue:[NSNumber numberWithFloat:hue] forKey:@"inputAngle"];

This is your code (unchanged):

CIImage *resultCore = [hueAdjust valueForKey:@"outputImage"];

// Convert the filter output back into a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef resultRef = [context createCGImage:resultCore fromRect:[resultCore extent]];
UIImage *result = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
Hope this fits your needs.

Fastest way to remove a color from a UIImage?

I have an image that I'm using to create an anaglyph by splitting it into separate red and cyan versions and setting a parallax between them.
The source image is 1146 x 580 (at retina resolution). I'm using a CIColorMatrix filter to create each version. Here's the code for the cyan version, which removes all of the red:
- (UIImage *)createCyan:(UIImage *)anImage
{
    CIImage *inputImage = [CIImage imageWithCGImage:anImage.CGImage];

    CIFilter *matrixFilter = [CIFilter filterWithName:@"CIColorMatrix"];
    [matrixFilter setDefaults];
    [matrixFilter setValue:inputImage forKey:kCIInputImageKey];
    // Zero out the red channel; leave green, blue and alpha untouched.
    [matrixFilter setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputRVector"];
    [matrixFilter setValue:[CIVector vectorWithX:0 Y:1 Z:0 W:0] forKey:@"inputGVector"];
    [matrixFilter setValue:[CIVector vectorWithX:0 Y:0 Z:1 W:0] forKey:@"inputBVector"];
    [matrixFilter setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"];

    CIImage *outputImage = [matrixFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *cyanImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg); // avoid leaking the CGImage
    return cyanImage;
}
The problem is that this takes over a second to process. I need to find a much faster means (if one exists) of accomplishing the same thing.
Is there a better method?
The issue with speed is not so much the filter, but that you're initializing a CIContext, a very expensive process, repeatedly. Create the CIContext beforehand, and your problem is solved.
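A minimal sketch of one way to reuse the context (a dispatch_once-backed shared context; the name sharedContext is an assumption):

// Create the expensive CIContext once and reuse it for every image.
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});

// Then, inside -createCyan:, use the cached context instead of a fresh one:
// CGImageRef cgimg = [sharedContext createCGImage:outputImage fromRect:[outputImage extent]];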

UIImage color correction

I am using AVFoundation to play a video by creating a CGImage from the AVFoundation callback, creating a UIImage from the CGImage and then displaying the UIImage in a UIImageView.
I want to apply some color correction to the images before I display them on the screen. What is the best way to colorize the images I'm getting?
I tried using CIFilters, but that requires me to first create a CIImage from the AVFoundation output, then colorize it, then create a CGImage, and then create a UIImage; I'd rather avoid the extra step of creating the CIImage if possible.
Additionally, the performance of the CIFilters doesn't seem fast enough, at least not when also having to create the additional CIImage. Any suggestions for a faster way to go about doing this?
Thanks in advance.
It seems that using an EAGLContext instead of a standard CIContext is the answer. That gives fast enough performance in creating the colorized images for my needs.
Simple code example here:
On Init:
NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
[options setObject: [NSNull null] forKey: kCIContextWorkingColorSpace];
m_EAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
m_CIContext = [CIContext contextWithEAGLContext:m_EAGLContext options:options];
Set the color:
- (void)setColorCorrection:(UIColor *)color
{
    CGFloat r, g, b, a;
    [color getRed:&r green:&g blue:&b alpha:&a];
    CIVector *redVector = [CIVector vectorWithX:r Y:0 Z:0];
    CIVector *greenVector = [CIVector vectorWithX:0 Y:g Z:0];
    CIVector *blueVector = [CIVector vectorWithX:0 Y:0 Z:b];
    m_ColorFilter = [CIFilter filterWithName:@"CIColorMatrix"];
    [m_ColorFilter setDefaults];
    [m_ColorFilter setValue:redVector forKey:@"inputRVector"];
    [m_ColorFilter setValue:greenVector forKey:@"inputGVector"];
    [m_ColorFilter setValue:blueVector forKey:@"inputBVector"];
}
On each video frame:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

CGImageRef cgImage = nil;
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
[m_ColorFilter setValue:ciImage forKey:kCIInputImageKey];
CIImage *adjustedImage = [m_ColorFilter valueForKey:kCIOutputImageKey];
cgImage = [m_CIContext createCGImage:adjustedImage fromRect:ciImage.extent];

// Balance the lock taken above once the buffer is no longer needed.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
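From there, turning the CGImage into a UIImage for display might look like this (a sketch; the image view name is an assumption):

UIImage *frameImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage returns a +1 reference

// m_ImageView is a hypothetical UIImageView that shows the corrected frames.
dispatch_async(dispatch_get_main_queue(), ^{
    m_ImageView.image = frameImage;
});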
