GPUImage filter chain - iOS

I am trying to apply two filters to an image using GPUImage, but I am only getting the last filter applied to the image.
Here is the code:
UIImage *inputImage = [UIImage imageNamed:@"2.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
// Brightness filter
GPUImageBrightnessFilter *brightnessFilter = [GPUImageBrightnessFilter new];
[brightnessFilter setBrightness:2];
// Contrast filter
GPUImageContrastFilter *contrastFilter = [GPUImageContrastFilter new];
[contrastFilter setContrast:1.0];
[contrastFilter addTarget:brightnessFilter];
[stillImageSource addTarget:contrastFilter];
[contrastFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *outputImage1 = [contrastFilter imageFromCurrentFramebuffer];
imageView.image = outputImage1;
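For reference, a two-filter GPUImage chain is normally captured from the last filter in the chain rather than the first. A minimal sketch of that pattern, reusing the filters above (note that GPUImageBrightnessFilter expects values in the -1.0 to 1.0 range, so setBrightness:2 may simply saturate the output):
[stillImageSource addTarget:contrastFilter];
[contrastFilter addTarget:brightnessFilter];
// Capture from the terminal filter so both effects end up in the output
[brightnessFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *outputImage = [brightnessFilter imageFromCurrentFramebuffer];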

- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
    const float brownsMask[6] = {85, 255, 85, 255, 85, 255};
    UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height));
    CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
    CGRect imageRect = CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height);
    CGContextTranslateCTM(context, 0, imageView.image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    //CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
    CGContextDrawImage(context, imageRect, brownRef);
    //[originalImage drawInRect:CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height)];
    CGImageRelease(brownRef);
    //CGImageRelease(whiteRef);
    UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return doctoredImage;
}
You can try this code to change the specific range of colors defined in the mask. If you want more reference, see this Stack Overflow link: How to change the skin color of face from the source image in ios?
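A hypothetical call site, assuming originalImage is the image whose color range you want to replace (the file name here is made up):
UIImage *originalImage = [UIImage imageNamed:@"photo.jpg"];
imageView.image = [self doctorTheImage:originalImage];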


How to use SVGKImage to change an SVG fill color?
SVGKImage *svgImage = [SVGKImage imageNamed:@"card.svg"];
svgImage.size = self.bounds.size;
SVGKLayeredImageView *imageLayer = [[SVGKLayeredImageView alloc] initWithSVGKImage:svgImage];
SVGKLayer *layer = (SVGKLayer *)imageLayer.layer;
self.layer.mask = layer;
Here is an easy way to get the CAShapeLayer:
SVGKImage *svgImage = [SVGKImage imageNamed:@"test.svg"];
CAShapeLayer *thisLayer = (CAShapeLayer *)svgImage.CALayerTree;
// thisLayer carries the UIBezierPath info from test.svg, so you can do anything with it.
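For example (a sketch; CALayerTree returns the root CALayer, so the cast above assumes the root layer of test.svg is itself a shape layer):
thisLayer.fillColor = [UIColor redColor].CGColor;
[self.view.layer addSublayer:thisLayer];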
You have to walk the CALayer sub-layers to change the color:
SVGKImage *svgImage = [SVGKImage imageNamed:@"Anchor.svg"];
SVGKLayeredImageView *svgImageView = [[SVGKLayeredImageView alloc] initWithSVGKImage:svgImage];
[capturedImageView addSubview:svgImageView];
CALayer *layer = svgImageView.layer;
for (CALayer *subLayer in layer.sublayers) {
    DLog(@"%@", [subLayer class]);
    for (CALayer *subSubLayer in subLayer.sublayers) {
        DLog(@"%@", [subSubLayer class]);
        for (CALayer *subSubSubLayer in subSubLayer.sublayers) {
            DLog(@"%@", [subSubSubLayer class]);
            if ([subSubSubLayer isKindOfClass:[CAShapeLayer class]]) {
                CAShapeLayer *shapeLayer = (CAShapeLayer *)subSubSubLayer;
                shapeLayer.fillColor = [UIColor redColor].CGColor;
            }
        }
    }
}
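If the layer tree is nested more or less deeply than three levels, a recursive walk avoids hard-coding the depth. A minimal sketch of that generalization:
- (void)tintShapeLayersIn:(CALayer *)layer withColor:(UIColor *)color {
    // Tint this layer if it is a shape layer, then recurse into its children
    if ([layer isKindOfClass:[CAShapeLayer class]]) {
        ((CAShapeLayer *)layer).fillColor = color.CGColor;
    }
    for (CALayer *sublayer in layer.sublayers) {
        [self tintShapeLayersIn:sublayer withColor:color];
    }
}
Calling [self tintShapeLayersIn:svgImageView.layer withColor:[UIColor redColor]] then tints every shape layer regardless of depth.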
Hope this helps.
Source : https://github.com/SVGKit/SVGKit/issues/98
I know a method that can change the color of a UIImage.
First, we should get the UIImage from the SVGKImage:
UIImage *img = [SVGKImage imageNamed:@"imageName"].UIImage;
And then, we can define a method like this:
- (UIImage *)changeTintColorWithImage:(UIImage *)img color:(UIColor *)tintColor {
    UIImage *imageIn = img;
    CGRect rect = CGRectMake(0, 0, imageIn.size.width, imageIn.size.height);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageIn.CGImage);
    BOOL opaque = alphaInfo == kCGImageAlphaNoneSkipLast
        || alphaInfo == kCGImageAlphaNoneSkipFirst
        || alphaInfo == kCGImageAlphaNone;
    UIGraphicsBeginImageContextWithOptions(imageIn.size, opaque, imageIn.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, imageIn.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextClipToMask(context, rect, imageIn.CGImage);
    CGContextSetFillColorWithColor(context, tintColor.CGColor);
    CGContextFillRect(context, rect);
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
OK, this method can help us change the color of an image. It mainly uses Core Graphics APIs.
I suggest creating a category on UIImage; it will be convenient to use.
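As a sketch of that suggestion (the category and method names here are hypothetical):
@interface UIImage (Tint)
- (UIImage *)imageTintedWithColor:(UIColor *)tintColor;
@end

@implementation UIImage (Tint)
- (UIImage *)imageTintedWithColor:(UIColor *)tintColor {
    // Same Core Graphics clip-and-fill approach as above, with self as the source
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, rect, self.CGImage);
    CGContextSetFillColorWithColor(context, tintColor.CGColor);
    CGContextFillRect(context, rect);
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
@end
Then tinting becomes a one-liner:
UIImage *tinted = [[SVGKImage imageNamed:@"imageName"].UIImage imageTintedWithColor:[UIColor redColor]];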
You can also change the color of a PNG image itself using template rendering mode.
Objective-C:
UIImage *img = [UIImage imageNamed:@"imagename"];
UIImage *tintedImage = [img imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imgVw = [[UIImageView alloc] initWithImage:tintedImage];
imgVw.frame = CGRectMake(0, 20, 100, 100);
imgVw.tintColor = [UIColor greenColor];
[self.view addSubview:imgVw];
Swift:
let img = UIImage(named: "imagename")
let tintedImage: UIImage? = img?.withRenderingMode(.alwaysTemplate)
let imgVw = UIImageView(image: tintedImage)
imgVw.frame = CGRect(x: 0, y: 20, width: 100, height: 100)
imgVw.tintColor = UIColor.green
view.addSubview(imgVw)
Cheers!

How to crop the detected face

I use Core Image to detect faces, and I want to crop each face after detection. I use this snippet to detect faces:
-(void)markFaces:(UIImageView *)facePicture{
    CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
    for (CIFaceFeature *faceFeature in features)
    {
        // Get the face rect: translate Core Image coordinates to UIKit coordinates
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        faceView = [[UIView alloc] initWithFrame:faceRect];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        UIGraphicsBeginImageContext(faceView.bounds.size);
        [faceView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Blur the UIImage with a CIFilter
        CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
        CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
        [gaussianBlurFilter setValue:[NSNumber numberWithFloat:10] forKey:@"inputRadius"];
        CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
        UIImage *endImage = [[UIImage alloc] initWithCIImage:resultImage];
        // Place the UIImage in a UIImageView
        UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
        newView.image = endImage;
        [self.view addSubview:newView];
        CGFloat faceWidth = faceFeature.bounds.size.width;
        [imageView addSubview:faceView];
        // LEFT EYE
        if (faceFeature.hasLeftEyePosition)
        {
            const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                                                                           leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                                                                           faceWidth*EYE_SIZE_RATE,
                                                                           faceWidth*EYE_SIZE_RATE)];
            NSLog(@"Left Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                  leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                  faceWidth*EYE_SIZE_RATE,
                  faceWidth*EYE_SIZE_RATE);
            leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
            leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
            [imageView addSubview:leftEyeView];
        }
        // RIGHT EYE
        if (faceFeature.hasRightEyePosition)
        {
            const CGPoint rightEyePos = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView *rightEye = [[UIView alloc] initWithFrame:CGRectMake(rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5,
                                                                        rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5,
                                                                        faceWidth*EYE_SIZE_RATE,
                                                                        faceWidth*EYE_SIZE_RATE)];
            NSLog(@"Right Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                  rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,
                  faceWidth*EYE_SIZE_RATE,
                  faceWidth*EYE_SIZE_RATE);
            rightEye.backgroundColor = [[UIColor blueColor] colorWithAlphaComponent:0.2];
            rightEye.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
            [imageView addSubview:rightEye];
        }
        // MOUTH
        if (faceFeature.hasMouthPosition)
        {
            const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);
            UIView *mouth = [[UIView alloc] initWithFrame:CGRectMake(mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                     mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                     faceWidth*MOUTH_SIZE_RATE,
                                                                     faceWidth*MOUTH_SIZE_RATE)];
            NSLog(@"Mouth X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",
                  mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5f,
                  mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5f,
                  faceWidth*MOUTH_SIZE_RATE,
                  faceWidth*MOUTH_SIZE_RATE);
            mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
            mouth.layer.cornerRadius = faceWidth*MOUTH_SIZE_RATE*0.5;
            [imageView addSubview:mouth];
        }
    }
}
What I want is just to crop the face.
You can easily crop the face using this function; it is tested and works properly.
-(void)faceWithFrame:(CGRect)frame{
    CGRect rect = frame;
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.imageView.image CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);   // release the +1 reference to avoid leaking the cropped CGImage
    self.cropedImg.image = croppedImage;
}
Just pass the face frame, and the above function will give you the cropped face image.
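For example, called from inside the detection loop above with the UIKit-space faceRect computed there (note that CGImageCreateWithImageInRect works in the image's pixel coordinates, so the rect may need scaling if the displayed image is resized):
[self faceWithFrame:faceRect];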

I want to generate a gray image in iOS by drawing using CGContextFillXXX, but I failed

CGRect rect = CGRectMake(0, 0, 100, 100);
float white[1] = {0.0f};
float gray[1] = {1.0f};
CGContextSetFillColorSpace(bitmap, colorSpace);
CGContextSetFillColorWithColor(bitmap, CGColorCreate(colorSpace, white));
CGContextClearRect(bitmap, rect);
CGContextSetFillColorWithColor(bitmap, CGColorCreate(colorSpace, gray));
CGContextFillEllipseInRect(bitmap, rect);
CGImageRef imgRef = CGBitmapContextCreateImage(bitmap);
UIImage *image = [[[UIImage alloc] initWithCGImage:imgRef] autorelease];
self.imageView.image = image;
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
The code is above. It doesn't work as I expect, and I'm not familiar with the iOS Core Graphics framework.
See the syntax for CGColorCreate in the Apple documentation.
Example code:
CGFloat components[] = {r, g, b, 1.0f};
drawColor = CGColorCreate(colorSpace, components);
CGColorSpaceRelease(colorSpace);
Note: you have to define r, g, and b as you wish; they are all of CGFloat type.
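For the grayscale case in the question, the same rule applies: CGColorCreate expects one value per color component plus alpha, so a gray color space needs two entries, not one. A minimal sketch, assuming bitmap is the question's grayscale bitmap context:
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGFloat grayComponents[] = {0.5f, 1.0f};   // 50% gray, fully opaque
CGColorRef grayColor = CGColorCreate(graySpace, grayComponents);
CGContextSetFillColorWithColor(bitmap, grayColor);
CGContextFillEllipseInRect(bitmap, CGRectMake(0, 0, 100, 100));
CGColorRelease(grayColor);
CGColorSpaceRelease(graySpace);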

CGImageCreateWithMask image encodes with alpha channel inverted

NOTE: I've fixed the code; look for the Edit note below.
For iOS 5.0+, running on the iPad, I've created a function that lets the user mask an input image, generating two new images: a foreground image and a background image. When I add these to a UIImageView and display them on a device or in the simulator, I get what I expect.
However, when I save these by encoding the data as session data, the resulting images are backwards (i.e. the image matte has been reversed). Two of us have gone over the code; there aren't any places where these are reversed, and there are no copy/paste errors. I thought it could be something to do with kCGImageAlphaPremultipliedFirst vs. kCGImageAlphaPremultipliedLast. When I encode the matted images, they start out with kCGImageAlphaPremultipliedFirst; when they are loaded, they are kCGImageAlphaPremultipliedLast.
Any help or ideas would be greatly appreciated.
Amy@InsatiableGenius
The functions below are called with:
[self createMask];
[self addImageAndBackground:foregroundImg backgroundImg:backgroundImg];
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef sourceImage = [image CGImage];
    CGImageRef imageWithAlpha = sourceImage;
    if ((CGImageGetAlphaInfo(sourceImage) == kCGImageAlphaNone)
        || (CGImageGetAlphaInfo(sourceImage) == kCGImageAlphaNoneSkipFirst)
        || (CGImageGetAlphaInfo(sourceImage) == kCGImageAlphaNoneSkipLast)) {
        imageWithAlpha = CopyImageAndAddAlphaChannel(sourceImage);
    }
    CGImageRef masked = CGImageCreateWithMask(imageWithAlpha, mask);
    CGImageRelease(mask);
    if (sourceImage != imageWithAlpha) {
        CGImageRelease(imageWithAlpha);
    }
    /* EDIT STARTS HERE -- was: return [UIImage imageWithCGImage:masked]; */
    // Added an extra render step to force it to save correct alpha values (not the mask)
    UIImage *retImage = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    UIGraphicsBeginImageContext(retImage.size);
    [retImage drawAtPoint:CGPointZero];
    UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    retImage = nil;
    return newImg;
}
-(void)createMask{
    // Take a whole-screen UIImage from paintView.
    // The user painted black for the mask; set the rest of the window to white.
    [paintView setWhiteBackground:YES];
    // Get the user-painted mask
    UIImage *maskFromPaint = [paintView allocNormalResImageWithBlur:NO/*blur?*/];
    [self dumpTestImg:maskFromPaint name:@"maskFromPaint"];
    UIImage *maskNoAlpha = [maskFromPaint resetImageAlpha:1.0];
    [self dumpTestImg:maskNoAlpha name:@"maskFromPaintNoAlpha"];
    // The mask has to be gray
    UIImage *maskFromPaintGray = [self convertImageToGrayScale:maskNoAlpha];
    [self dumpTestImg:maskFromPaintGray name:@"maskFromPaintGray"];
    // Had to call this normalize function because some PNGs are not compatible (8 bit)
    UIImage *disp_original = [[UIImage alloc] initWithCGImage:[[original normalize] CGImage]];
    // Resize original to screen size (alternatively we could upscale the paint... not sure which for now)
    disp_original = [disp_original resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:inputImageView.frame.size interpolationQuality:kCGInterpolationHigh];
    CGSize imageInViewSize = disp_original.size;
    // Use the size of the displayed original to crop the paintview
    CGRect overlayRect = CGRectMake((int)(inputImageView.frame.size.width - imageInViewSize.width) / 2,
                                    (int)(inputImageView.frame.size.height - imageInViewSize.height) / 2,
                                    (int)imageInViewSize.width,
                                    (int)imageInViewSize.height);
    // Here is the actual crop:
    // get a rectangle from the paint that is the same size as the displayed original
    CGImageRef maskFromPaintimageRef = CGImageCreateWithImageInRect([maskFromPaintGray CGImage], overlayRect);
    UIImage *invertedMaskFromPaint = [UIImage imageWithCGImage:maskFromPaintimageRef];
    self.maskImg = [self invertImage:invertedMaskFromPaint];
    [self dumpTestImg:self.maskImg name:@"maskFromPaintCropped"];
    self.backgroundImg = [self maskImage:disp_original withMask:self.maskImg];
    self.foregroundImg = [self maskImage:disp_original withMask:invertedMaskFromPaint];
    foregroundImgView.image = foregroundImg;
    backgroundImgView.image = backgroundImg;
    foregroundImgView.hidden = NO;
    backgroundImgView.hidden = NO;
    [container bringSubviewToFront:foregroundImgView];
    [container bringSubviewToFront:backgroundImgView];
    [self dumpTestImg:foregroundImg name:@"foregroundImg"];
    [self dumpTestImg:backgroundImg name:@"backgroundImg"];
    // Cleanup
    CGImageRelease(maskFromPaintimageRef);
    maskFromPaint = nil;
    maskFromPaintGray = nil;
    maskNoAlpha = nil;
    disp_original = nil;
    // Put things back
    [paintView setWhiteBackground:NO];
}
CGImageRef CopyImageAndAddAlphaChannel(CGImageRef sourceImage) {
    CGImageRef retVal = NULL;
    size_t width = CGImageGetWidth(sourceImage);
    size_t height = CGImageGetHeight(sourceImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef offscreenContext = CGBitmapContextCreate(NULL, width, height,
                                                          8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    if (offscreenContext != NULL) {
        CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), sourceImage);
        retVal = CGBitmapContextCreateImage(offscreenContext);
        CGContextRelease(offscreenContext);
    }
    CGColorSpaceRelease(colorSpace);
    return retVal;
}
- (UIImage *)invertImage:(UIImage *)sourceImage {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage];
    [filter setValue:inputImage forKey:@"inputImage"];
    CGImageRef cgOut = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgOut];
    CGImageRelease(cgOut);   // createCGImage returns a +1 reference
    return result;
}
-(void)addImageAndBackground:(UIImage *)foregroundImgIn backgroundImg:(UIImage *)backgroundImgIn{
    UIImageView *imgVF = [[UIImageView alloc] initWithImage:foregroundImgIn];
    imgVF.userInteractionEnabled = YES;
    [self dumpTestImg:foregroundImgIn name:@"foregroundIn"];
    UIImageView *imgVB = [[UIImageView alloc] initWithImage:backgroundImgIn];
    imgVB.userInteractionEnabled = YES;
    [self dumpTestImg:backgroundImgIn name:@"backgroundIn"];
}
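One way to narrow down a premultiplied-alpha round trip like this (a debugging sketch, not a fix) is to log each image's alpha info right before encoding and right after decoding, and compare:
CGImageAlphaInfo info = CGImageGetAlphaInfo(foregroundImg.CGImage);
NSLog(@"foreground alpha info = %u", (unsigned)info);   // 2 == kCGImageAlphaPremultipliedFirst, 1 == ...Last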

How can I add text to an image on iPhone?

I need to put text over an image on iPhone, like the Eurosport iPhone app does.
In the same way, I need to put text in my app. How can I do this?
Thanks.
I found two ways.
1:
UIImageView *imageView = [[UIImageView alloc] init];
imageView.frame = CGRectMake(0, 25, 200, 200);
imageView.image = [UIImage imageNamed:@"Choice set.png"];
UILabel *myLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 20)];
myLabel.text = @"Hello There";
myLabel.textColor = [UIColor whiteColor];
myLabel.backgroundColor = [UIColor blueColor];
[imageView addSubview:myLabel];
[self.view addSubview:imageView];
[imageView release];
[myLabel release];
2:
Add text to UIImage
-(UIImage *)addText:(UIImage *)img text:(NSString *)text1{
    int w = img.size.width;
    int h = img.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
    char *text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
    CGContextSelectFont(context, "Arial", 20, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    CGContextShowTextAtPoint(context, 10, 10, text, strlen(text));
    CGImageRef imgCombined = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *retImage = [UIImage imageWithCGImage:imgCombined];
    CGImageRelease(imgCombined);
    return retImage;
}
Then call:
UIImage *image = [UIImage imageNamed:@"myimage.png"];
imageView.image = [self addText:image text:@"Hello there"];
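Note that CGContextSelectFont and CGContextShowTextAtPoint were deprecated in iOS 7. A sketch of the same idea using NSString drawing instead (the method name is hypothetical; assumes iOS 7+):
- (UIImage *)imageByAddingText:(NSString *)text toImage:(UIImage *)img {
    UIGraphicsBeginImageContextWithOptions(img.size, NO, img.scale);
    [img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
    NSDictionary *attributes = @{ NSFontAttributeName: [UIFont systemFontOfSize:20],
                                  NSForegroundColorAttributeName: [UIColor blackColor] };
    // UIKit string drawing handles Unicode and the flipped coordinate system for us
    [text drawAtPoint:CGPointMake(10, 10) withAttributes:attributes];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}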
Or simply make the label's background clear:
yourLabel.backgroundColor = [UIColor clearColor];
