I am using AVFoundation to play a video by creating a CGImage from the AVFoundation callback, creating a UIImage from the CGImage and then displaying the UIImage in a UIImageView.
I want to apply some color correction to the images before I display them on the screen. What is the best way to colorize the images I'm getting?
I tried using CIFilters, but that requires me to first create a CIImage from the AVFoundation output, then colorize it, then create a CGImage, and then create a UIImage, and I'd rather avoid the extra step of creating the CIImage if possible.
Additionally, it doesn't seem like the performance of the CIFilters is fast enough - at least not when also having to create the additional CIImage. Any suggestions for a faster way to go about doing this?
Thanks in advance.
It seems that using an EAGLContext instead of a standard CIContext is the answer. That gives fast enough performance in creating the colorized images for my needs.
Simple code example here:
On Init:
NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
[options setObject: [NSNull null] forKey: kCIContextWorkingColorSpace];
m_EAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
m_CIContext = [CIContext contextWithEAGLContext:m_EAGLContext options:options];
Set the color:
-(void)setColorCorrection:(UIColor*)color
{
CGFloat r,g,b,a;
[color getRed:&r green:&g blue:&b alpha:&a];
CIVector *redVector = [CIVector vectorWithX:r Y:0 Z:0];
CIVector *greenVector = [CIVector vectorWithX:0 Y:g Z:0];
CIVector *blueVector = [CIVector vectorWithX:0 Y:0 Z:b];
m_ColorFilter = [CIFilter filterWithName:@"CIColorMatrix"];
[m_ColorFilter setDefaults];
[m_ColorFilter setValue:redVector forKey:@"inputRVector"];
[m_ColorFilter setValue:greenVector forKey:@"inputGVector"];
[m_ColorFilter setValue:blueVector forKey:@"inputBVector"];
}
On each video frame:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGImageRef cgImage = nil;
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
[m_ColorFilter setValue:ciImage forKey:kCIInputImageKey];
CIImage *adjustedImage = [m_ColorFilter valueForKey:kCIOutputImageKey];
cgImage = [m_CIContext createCGImage:adjustedImage fromRect:ciImage.extent];
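To complete the per-frame path, a sketch of the remaining steps; m_ImageView is an assumed UIImageView ivar (not shown above), and the callback is assumed to run off the main thread:
// Wrap the rendered CGImage in a UIImage, display it on the main thread,
// then release the CGImage and unlock the pixel buffer locked above.
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
dispatch_async(dispatch_get_main_queue(), ^{
    m_ImageView.image = uiImage;
});
CGImageRelease(cgImage);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);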
I am trying to achieve this effect. You can see the effect in the musical.ly app (the ripple effect):
https://drive.google.com/open?id=1uXExnmWQ7OfSGLFXdH7-5imay8tW87vO
Here is my approach.
I render an alpha-faded and scaled copy of the image into the pixel buffer, and I display all of the sample buffers using AVSampleBufferDisplayLayer. I want to show this animation for 3-5 seconds; once the user is done, I will convert it to mp4 using AVAssetWriter.
I am unable to add the alpha image to the CVPixelBuffer.
If there is a better approach to building this animation, please guide me.
I get all the sample buffers using AVAssetReaderTrackOutput:
NSDictionary *readerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
outputSettings:readerOutputSettings];
[reader addOutput:readerOutput];
[reader startReading];
while ((sample = [readerOutput copyNextSampleBuffer]))
{
[samples addObject:(__bridge id)sample];
CFRelease(sample);
}
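For the on-screen preview mentioned above, the collected buffers can then be enqueued on the AVSampleBufferDisplayLayer; a minimal sketch, where displayLayer is an assumed AVSampleBufferDisplayLayer already installed in the view hierarchy:
// Feed the decoded sample buffers to the display layer in order.
for (id object in samples)
{
    CMSampleBufferRef buffer = (__bridge CMSampleBufferRef)object;
    if (displayLayer.isReadyForMoreMediaData)
    {
        [displayLayer enqueueSampleBuffer:buffer];
    }
}
The filtering below then operates on the last of those buffers.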
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)[samples lastObject]);
CIImage *filteredImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:nil];
CIFilter* theFilter = [CIFilter filterWithName:@"CIColorMatrix"];
[theFilter setDefaults];
[theFilter setValue: filteredImage forKey: @"inputImage"];
CIVector *theRVector = [CIVector vectorWithX:1 Y:0 Z:0 W:0];
[theFilter setValue: theRVector forKey:@"inputRVector"];
CIVector *theGVector = [CIVector vectorWithX:0 Y:1 Z:0 W:0];
[theFilter setValue: theGVector forKey:@"inputGVector"];
CIVector *theBVector = [CIVector vectorWithX:0 Y:0 Z:1 W:0];
[theFilter setValue: theBVector forKey:@"inputBVector"];
CIVector *theAVector = [CIVector vectorWithX:0 Y:0 Z:0 W:0.5];
[theFilter setValue: theAVector forKey:@"inputAVector"];
CIVector *theBiasVector = [CIVector vectorWithX:0 Y:0 Z:0 W:0];
[theFilter setValue: theBiasVector forKey:@"inputBiasVector"];
CIImage* result = [theFilter valueForKey: @"outputImage"];
Then I decrease the alpha of the CIImage and render it into the first pixel buffer:
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
CIContext *cicontext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
[cicontext render:result toCVPixelBuffer:CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)[samples firstObject]) bounds:CGRectMake(0, 0, [result extent].size.width,[result extent].size.height) colorSpace:CGColorSpaceCreateDeviceRGB()];
I got this result.
Expected result
Thanks in advance
This image effect appears to be a fixed image with a zoom and a fade applied to a copy drawn over the original. You should not need an animation library for this: just create a PNG and two layers to render it, then zoom one of the layers and fade it out as the zoom happens, as sketched below. Start with a 32 BPP image that contains alpha for the see-through parts and you should be fine. If you want to capture the effect as .h264, that is a totally different problem.
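For reference, a minimal sketch of that idea with Core Animation; the asset name, view, scale and timings below are placeholders:
// Two layers showing the same PNG (with alpha); the top copy zooms and fades out.
UIImage *ripple = [UIImage imageNamed:@"ripple"]; // hypothetical 32 BPP asset with alpha
CALayer *baseLayer = [CALayer layer];
CALayer *zoomLayer = [CALayer layer];
baseLayer.contents = (__bridge id)ripple.CGImage;
zoomLayer.contents = (__bridge id)ripple.CGImage;
baseLayer.frame = self.view.bounds;
zoomLayer.frame = self.view.bounds;
[self.view.layer addSublayer:baseLayer];
[self.view.layer addSublayer:zoomLayer];
CABasicAnimation *zoom = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
zoom.fromValue = @1.0;
zoom.toValue = @1.6;
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;
fade.toValue = @0.0;
CAAnimationGroup *group = [CAAnimationGroup animation];
group.animations = @[zoom, fade];
group.duration = 0.5;          // one ripple pulse
group.repeatCount = HUGE_VALF; // keep pulsing while the effect is active
[zoomLayer addAnimation:group forKey:@"ripple"];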
I am not able to get the original image back after applying the filter to the image view. I want to get the original image back after moving the slider from 1 to 0 (max to min, i.e. in the reverse direction).
Below is the code for applying the sharpness effect:
- (IBAction)actionSharp:(UISlider *)sender {
demoImage = _affectView.image;
CIImage* image = [CIImage imageWithCGImage:demoImage.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
NSNumber *testNSNumber = [NSNumber numberWithFloat:_sharpActionWithSlider.value];
NSLog(#"ffghhn %f",_sharpActionWithSlider.value);
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:#"CISharpenLuminance" keysAndValues: #"inputImage", image, nil];
[gaussianBlurFilter setDefaults];
[gaussianBlurFilter setValue:testNSNumber forKey:kCIInputSharpnessKey];
[gaussianBlurFilter setDefaults];
CIImage *result2 = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result2 fromRect:[result2 extent]];
UIImage *sharp= [UIImage imageWithCGImage:cgImage];
UIImage *p = sharp;
self.affectView.image= p;
self.affectView.alpha = sender.value;
CGImageRelease(cgImage);
}
Just copy the image to which you are applying the filter and keep it aside. Apply the effects to the copied image and show the result in your _affectView image view.
In your slider action, always start from that original image and apply the effect to it, rather than to the image taken from the image view.
Create a global variable mainImage:
UIImage *mainImage;
In your viewDidLoad, assign the original image to mainImage:
mainImage = [UIImage imageNamed:@"yourMainImage"]; // assign from a local source or URL
In your IBAction, on every slider change, use the reference to the original image instead of fetching the image from the image view:
- (IBAction)actionSharp:(UISlider *)sender {
UIImage *demoImage = mainImage;
CIImage* image = [CIImage imageWithCGImage:demoImage.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
NSNumber *testNSNumber = [NSNumber numberWithFloat:_sharpActionWithSlider.value];
NSLog(#"ffghhn %f",_sharpActionWithSlider.value);
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:#"CISharpenLuminance" keysAndValues: #"inputImage", image, nil];
[gaussianBlurFilter setDefaults];
[gaussianBlurFilter setValue:testNSNumber forKey:kCIInputSharpnessKey];
[gaussianBlurFilter setDefaults];
CIImage *result2 = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result2 fromRect:[result2 extent]];
UIImage *sharp= [UIImage imageWithCGImage:cgImage];
UIImage *p = sharp;
self.affectView.image= p;
self.affectView.alpha = sender.value;
CGImageRelease(cgImage);
}
You are getting your demo image from _affectView, applying a filter to it and then saving it into _affectView again. The next time you retrieve _affectView into a UIImage it will not be the original one but the modified one, so it is impossible to get back to the original.
Just save your original image in demo once at the start (outside this method), and work from demo every time you apply the filter; a sketch follows.
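A minimal sketch of that idea, reusing the demoImage ivar from the question:
// Saved once (e.g. in viewDidLoad), before any filtering happens.
demoImage = _affectView.image;
// Inside actionSharp:, always start from that untouched copy rather than _affectView.image:
CIImage* image = [CIImage imageWithCGImage:demoImage.CGImage];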
I have a partially-transparent UIImage that I would like to convert to a JPEG.
NSData * output = UIImageJPEGRepresentation(myUIImage,.90);
The JPEG always has a white background. I would like it to be black. How can I do that?
Performance is a concern. The image has just been rendered in Core Image, where it would also be possible to set a background color.
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setDefaults];
[filter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[filter setValue:@(amount) forKey:@"inputScale"];
[filter setValue:vector forKey:@"inputCenter"];
CIImage* result = [filter valueForKey:kCIOutputImageKey];
Currently I immediately re-render 'result' into a new UIGraphics image context:
CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0,0},editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage* filledImage = UIGraphicsGetImageFromCurrentImageContext();
but that adds up to 82% more execution time vs skipping the step and having a white JPEG background.
I'd so appreciate help on this. Thank you.
Update: I tried the following with CISourceOverCompositing, which increased runtime by 198% in some cases
CIFilter * constantColorFilter = [CIFilter filterWithName:@"CIConstantColorGenerator"];
[constantColorFilter setValue:[CIColor colorWithCGColor:[backgroundFillColor CGColor]] forKey:kCIInputColorKey];
CIFilter * composeFilter = [CIFilter filterWithName:@"CISourceOverCompositing"];
CIImage * bgColorResult = [constantColorFilter valueForKey:kCIOutputImageKey];
[composeFilter setValue:bgColorResult forKey:kCIInputBackgroundImageKey];
[composeFilter setValue:pixelateResult forKey:kCIInputImageKey];
result = [composeFilter valueForKey:kCIOutputImageKey];
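One detail worth checking with this approach: CIConstantColorGenerator produces an image of infinite extent, so the composited output normally has to be cropped back to a finite rect before createCGImage is called. A sketch, reusing the pixelateResult extent from above:
// Crop the infinite-extent composite back to the pixelated image's extent.
result = [result imageByCroppingToRect:[pixelateResult extent]];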
I tried using singleton CIFilters to avoid re-creating the CIFilter objects, but it had a trivial impact on performance.
I get sub-optimal performance (80% runtime increase), but this is what I use now:
CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage* result = [pixelateFilter valueForKey:kCIOutputImageKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];
UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);
CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0,0},editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage* filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
returnImage = filledImage;
CGImageRelease(cgImage);
I am using the following code to change the hue of a UIImage:
UIImage *image = [UIImage imageNamed:@"Image.png"];
// Create a Core Image version of the image.
CIImage *sourceCore = [CIImage imageWithCGImage:[image CGImage]];
// Apply a CIHueAdjust filter
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue: sourceCore forKey: @"inputImage"];
[hueAdjust setValue: [NSNumber numberWithFloat: 1.0f] forKey: @"inputAngle"];
CIImage *resultCore = [hueAdjust valueForKey: @"outputImage"];
// Convert the filter output back into a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef resultRef = [context createCGImage:resultCore fromRect:[resultCore extent]];
UIImage *result = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
The code works fine; I'm just trying to find a way to get the correct NSNumber value of inputAngle for a specific color.
So can I maybe:
Get the value by converting a [UIColor colorWithRed:0.89 green:0.718 blue:0.102 alpha:1.0],
or maybe just use the UIColor in some way?
Or is there a list with the specific numbers for each color?
Thanks in advance
This post might answer your question:
Is there function to convert UIColor to Hue Saturation Brightness?
The angle should be the hue value you get there.
Here you find some information on how the angle has to be understood:
iOS: Values for CIFilter (Hue) from Photoshop
EDIT
Here is some example code based on yours:
First let's define the color you want to filter (for your inputAngle)
UIColor *myColor = [UIColor redColor]; // The color you want to filter
Then we determine the hue value of that color (that's the actual inputAngle)
CGFloat hue;
CGFloat saturation;
CGFloat brightness;
CGFloat alpha;
[myColor getHue:&hue saturation:&saturation brightness:&brightness alpha:&alpha];
This is your code (unchanged)
UIImage *image = [UIImage imageNamed:@"Image.png"];
// Create a Core Image version of the image.
CIImage *sourceCore = [CIImage imageWithCGImage:[image CGImage]];
// Apply a CIHueAdjust filter
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue: sourceCore forKey: @"inputImage"];
Here we apply the filter using the determined hue value of the chosen color
[hueAdjust setValue: [NSNumber numberWithFloat: hue] forKey: @"inputAngle"];
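One caveat: getHue: reports the hue as a fraction between 0 and 1, while CIHueAdjust's inputAngle is expressed in radians, so depending on the effect you are after you may need to scale the value first:
// Convert the 0-1 hue fraction to radians before handing it to CIHueAdjust.
[hueAdjust setValue:@(hue * 2.0 * M_PI) forKey:@"inputAngle"];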
This is your code (unchanged)
CIImage *resultCore = [hueAdjust valueForKey: @"outputImage"];
// Convert the filter output back into a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef resultRef = [context createCGImage:resultCore fromRect:[resultCore extent]];
UIImage *result = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
Hope this fits your needs.
My app applies filters to a picture after it has been selected or captured, using Core Image. It works fine on small pictures; however, if I take a picture with the camera it takes about 3-4 seconds to process the picture and apply the filter. I read through the Core Image Programming Guide performance topics and came across being able to decide whether to use the CPU or the GPU. I applied what is written there, but it's still slow, and now an error occurs in the console. I'll post the original code, the changes I applied to switch to the GPU, and the error.
Original code:
CIContext *context = [CIContext contextWithEAGLContext:nil]; //[CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer: [NSNumber numberWithBool:NO]}];
CIImage *image = [CIImage imageWithData:UIImagePNGRepresentation(_originalImage)];
CIFilter *filter = [CIFilter filterWithName:_filters[filterName]];
if ([filterName isEqualToString:#"Sepia"]) {
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:#0.8f forKey:kCIInputIntensityKey];
}
if ([filterName isEqualToString:#"B/W"]) {
[filter setValue:image forKey:kCIInputImageKey];
}
if ([filterName isEqualToString:#"Bloom"]) {
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:#1.0f forKey:kCIInputRadiusKey];
[filter setValue:#1.0f forKey:kCIInputIntensityKey];
}
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGRect extent = [result extent];
CGImageRef cgimage = [context createCGImage:result fromRect:extent];
UIImage *filteredImage = [UIImage imageWithCGImage:cgimage];
[_filteredImageView setImage:filteredImage];
tempFilteredImage = filteredImage;
Modified code for the context:
CIContext *context = [CIContext contextWithEAGLContext:nil];
Error occurring after changing the CIContext code:
CIContexts can only be created with ES 2.0 EAGLContexts
Is there something wrong in the code? Am I doing the right thing to accelerate the rendering process? Thank you.
Try making the context like this...
EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:nil];
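A brief usage sketch with that context, reusing result and _filteredImageView from the question; creating the context once and reusing it for every filter pass is assumed:
// Render through the GPU-backed context instead of a fresh CPU context each time.
CGImageRef cgimage = [myContext createCGImage:result fromRect:[result extent]];
UIImage *filteredImage = [UIImage imageWithCGImage:cgimage];
[_filteredImageView setImage:filteredImage];
CGImageRelease(cgimage);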