Hello, I'd like to create the following black-and-white Photoshop effect on a UIImage:
https://drive.google.com/file/d/0B5dHxpdDwpPec3dPTWdLVnNhZFk/view?usp=sharing
in which you can change the brightness of each of the six colors (reds, yellows, greens, cyans, blues, magentas).
I used the following to make the image black and white, but it doesn't let me adjust the individual colors:
self.imageView.image = chosenImage;
CIImage *beginImage = [CIImage imageWithCGImage:chosenImage.CGImage];
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", @0.0f, @"inputContrast", @1.1f, @"inputSaturation", @0.0f, nil].outputImage;
CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", @0.7f, nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage];
CGImageRelease(cgiimage); // createCGImage: returns an owned image, so release it
self.imageView.image = newImage;
Thank you for your time.
I think you can accomplish that effect with the following function:
- (UIImage *)grayScaleImageWith:(UIImage *)image blackPoint:(const CGFloat *)blackPoint whitePoint:(const CGFloat *)whitePoint andGamma:(CGFloat)gamma {
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Calibrated grayscale color space (whitePoint and blackPoint are
    // CIE XYZ triples, gamma is the decoding exponent)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateCalibratedGray(whitePoint, blackPoint, gamma);
    // Create a bitmap context with the current image size and the grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw the image into the grayscale context, with the specified rectangle
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // Create a CGImage from the pixel data in the current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    // Release the colorspace, context and bitmap
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);
    // Return the new grayscale image
    return newImage;
}
Then call it filling the black and white values with the values selected on the UI:
CGFloat black[3] = { 0, 0, 0 };       // replace with values selected in the interface
CGFloat white[3] = { 100, 100, 100 }; // replace with values selected in the interface
UIImage *grayImage = [self grayScaleImageWith:image blackPoint:black whitePoint:white andGamma:1.8f];
I have not tested this code yet but I hope at least it points you in the right direction.
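If you want per-color control closer to Photoshop's Black & White mixer, an alternative (an untested sketch, not the method above) is to build the gray value as a weighted sum of the RGB channels with CIColorMatrix, writing the same weights into all three output vectors. The redWeight/greenWeight/blueWeight names are hypothetical values you would bind to your sliders; true six-channel control (separate cyans, magentas and yellows) would need a custom kernel or a lookup table.
// Sketch: grayscale as a slider-weighted sum of R, G and B.
// redWeight/greenWeight/blueWeight are hypothetical slider values
// that should sum to roughly 1.0 to preserve overall brightness.
CGFloat redWeight = 0.4, greenWeight = 0.4, blueWeight = 0.2;
CIImage *input = [CIImage imageWithCGImage:image.CGImage];
CIFilter *mixer = [CIFilter filterWithName:@"CIColorMatrix"];
[mixer setValue:input forKey:kCIInputImageKey];
// Writing the same weight vector into R, G and B makes every output
// channel the same weighted sum, i.e. a gray image.
CIVector *weights = [CIVector vectorWithX:redWeight Y:greenWeight Z:blueWeight W:0];
[mixer setValue:weights forKey:@"inputRVector"];
[mixer setValue:weights forKey:@"inputGVector"];
[mixer setValue:weights forKey:@"inputBVector"];
[mixer setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"];
CIContext *mixerContext = [CIContext contextWithOptions:nil];
CGImageRef mixedCG = [mixerContext createCGImage:mixer.outputImage fromRect:mixer.outputImage.extent];
UIImage *mixedGray = [UIImage imageWithCGImage:mixedCG];
CGImageRelease(mixedCG);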
UIImage *sticky = [UIImage imageNamed:@"Radio.png"];
[_imgViewSticky setImage:sticky];
CIImage *outputImage = [self.originalImage CIImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImg = [context createCGImage:outputImage fromRect:[outputImage extent]];
float widthRatio = [outputImage extent].size.width / 320;
float heightRatio = [outputImage extent].size.height / 480;
CGPoint cgStickyPoint = CGPointMake(_imgViewSticky.frame.origin.x * widthRatio, _imgViewSticky.frame.origin.y * heightRatio);
cgImg = [self setStickyForCGImage:cgImg withPosition:cgStickyPoint];
The last line returns a CGImageRef object.
And I'm assigning the value to the final image like this:
UIImage *finalImage = [UIImage imageWithCGImage:cgImg];
Yet I'm not getting the image. Any ideas why? Any help is much appreciated.
I notice that your CIContext isn't receiving any drawing, which could be why you're not getting an image. I don't have a clear picture of what you want, but this code will superimpose one UIImage on top of another UIImage:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0.0); // Create an image context
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)]; // Draw the first UIImage
[stickerImage drawInRect:stickerRect]; // Draw the second UIImage wherever you want on top of the first image
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // Get the final UIImage result
UIGraphicsEndImageContext(); // Balance the Begin call and pop the context
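For the question's sticker case, stickerRect could be built from the scaled point already computed (a sketch; reusing the sticker view's own size is an assumption):
// Hypothetical bridge to the question's variables: place the sticker
// at the scaled point, using the sticker view's current size.
CGSize stickerSize = _imgViewSticky.frame.size;
CGRect stickerRect = CGRectMake(cgStickyPoint.x, cgStickyPoint.y, stickerSize.width, stickerSize.height);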
I have an image and I want to change its color programmatically.
UPDATE:
Use this method...
-(UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color {
    // load the image
    UIImage *img = [UIImage imageNamed:name];
    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(img.size);
    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the fill color
    [color setFill];
    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    CGContextDrawImage(context, rect, img.CGImage);
    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, img.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);
    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // return the color-burned image
    return coloredImg;
}
Use it like below...
yourImageView.image = [self imageNamed:@"yourImageName" withColor:[UIColor orangeColor]];
Here is the Swift version:
extension UIImage {
    func colorizeWith(color: UIColor) -> UIImage {
        // begin a new image context, to draw our colored image onto
        UIGraphicsBeginImageContext(self.size)
        let context = UIGraphicsGetCurrentContext()
        // set the fill color
        color.setFill()
        // translate/flip the graphics context (CG* coords to UI* coords)
        CGContextTranslateCTM(context, 0, self.size.height)
        CGContextScaleCTM(context, 1.0, -1.0)
        // note: unlike the Objective-C version, this uses the normal blend
        // mode rather than color burn
        CGContextSetBlendMode(context, .Normal)
        let rect = CGRectMake(0, 0, self.size.width, self.size.height)
        CGContextDrawImage(context, rect, self.CGImage)
        // set a mask that matches the shape of the image, then fill with the color
        CGContextClipToMask(context, rect, self.CGImage)
        CGContextAddRect(context, rect)
        CGContextDrawPath(context, .Fill)
        // generate a new UIImage from the graphics context we drew onto
        let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // return the colorized image
        return coloredImage
    }
}
You can try Ankish Jain's answer; it works for me:
theImageView.image = [theImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[theImageView setTintColor:[UIColor redColor]];
Core Image color filters work great for this kind of task, and I find them slightly more straightforward than the Core Graphics (CG*) classes: they let you adjust the RGB and alpha characteristics of an image. I have been using them to change the white background of a QR code to a color. The RGBA of white is (1,1,1,1); in your case I believe you have to reverse the colors. Just check Apple's Core Image filter documentation; there are a few dozen filters available, and CIColorMatrix is just one of them.
CIImage *beginImage = [CIImage imageWithCGImage:image.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filtercd = [CIFilter filterWithName:@"CIColorMatrix"
                               keysAndValues:kCIInputImageKey, beginImage, nil];
[filtercd setValue:[CIVector vectorWithX:0 Y:1 Z:1 W:0] forKey:@"inputRVector"];
[filtercd setValue:[CIVector vectorWithX:1 Y:0 Z:1 W:0] forKey:@"inputGVector"];
[filtercd setValue:[CIVector vectorWithX:1 Y:1 Z:0 W:0] forKey:@"inputBVector"];
[filtercd setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"];
[filtercd setValue:[CIVector vectorWithX:1 Y:1 Z:0 W:0] forKey:@"inputBiasVector"];
CIImage *doutputImage = [filtercd outputImage];
CGImageRef cgimgd = [context createCGImage:doutputImage fromRect:[doutputImage extent]];
UIImage *newImgd = [UIImage imageWithCGImage:cgimgd];
filterd.image = newImgd; // filterd here is the author's UIImageView
CGImageRelease(cgimgd);
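For reference, CIColorMatrix computes each output channel as a dot product of the input pixel s = (r, g, b, a) with the corresponding vector, then adds the bias: r' = dot(s, inputRVector) + bias.x, and likewise for green, blue and alpha. So with the vectors above, a pure red input (1, 0, 0, 1) comes out with r' = 0 + 1 = 1 from the bias alone; work through your own target colors the same way when tuning the vectors.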
As I've answered in iPhone - How do you color an image?, in my opinion the best way to colorize an image from iOS 7 on is to use
myImageView.image = [[UIImage imageNamed:@"myImage"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
and then change the tintColor of the imageView or whatever contains the image.
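For example:
myImageView.tintColor = [UIColor redColor]; // the template image is redrawn in red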
In my project I have to take a screenshot of the screen and apply a blur to it to create a frosted-glass effect. Content can move under the glass, after which the bluredImageWithRect: method is called. I'm trying to optimize the following method to speed up the application. The biggest cost is applying the blur filter to the full-size screenshot, so I'm looking for a way to take the screenshot at a lower resolution, blur it, and then stretch it to fit some rect.
- (CIImage *)bluredImageWithRect:(CGRect)rect {
    CGSize smallSize = CGSizeMake(rect.size.width, rect.size.height);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(nil, smallSize.width, smallSize.height, 8, 0, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
    CGContextClearRect(ctx, rect);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
    CGContextSetShouldAntialias(ctx, NO);
    CGContextSetAllowsAntialiasing(ctx, NO);
    CGContextTranslateCTM(ctx, 0.0, self.view.frame.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    CGImageRef maskImage = [UIImage imageNamed:@"mask.png"].CGImage;
    CGContextClipToMask(ctx, rect, maskImage);
    [self.view.layer renderInContext:ctx];
    CGImageRef imageRef1 = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    NSDictionary *options = @{(id)kCIImageColorSpace : (id)kCFNull};
    CIImage *beforeFilterImage = [CIImage imageWithCGImage:imageRef1 options:options];
    CGImageRelease(imageRef1);
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, beforeFilterImage, @"inputRadius", [NSNumber numberWithFloat:3.0f], nil];
    CIImage *afterFilterImage = blurFilter.outputImage;
    CIImage *croppedImage = [afterFilterImage imageByCroppingToRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
    return croppedImage;
}
Here is a tutorial, iOS image processing with the Accelerate framework, that shows how to do a blur effect that may be fast enough for what you need.
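Independently of that, you can also cut the cost of the screenshot itself by rendering the layer at a reduced scale before blurring and letting the image view stretch the small blurred result back up. A rough sketch (the 0.25 factor is an assumption to tune):
// Sketch: render the view at quarter resolution before blurring.
CGFloat scaleFactor = 0.25; // assumed downsampling factor; tune for speed vs. quality
CGSize smallSize = CGSizeMake(self.view.bounds.size.width * scaleFactor,
                              self.view.bounds.size.height * scaleFactor);
UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextScaleCTM(ctx, scaleFactor, scaleFactor); // the layer draws at reduced size
[self.view.layer renderInContext:ctx];
UIImage *smallShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Blur the small image; the radius can be smaller too, since the
// image itself is smaller.
CIImage *smallCI = [CIImage imageWithCGImage:smallShot.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, smallCI, @"inputRadius", @1.0f, nil];
CIImage *blurred = [blur.outputImage imageByCroppingToRect:smallCI.extent];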
I want to allow the user to take a picture and then show the greyscale version. However, it is very slow because the image file is too big / the resolution is too high.
How can I reduce the quality of the image when the user takes the picture?
Here's the code I am using for the transformation:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);
    /* changes start here */
    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
    // draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);
    // release graphics context
    CGContextRelease(context);
    // make UIImage from grayscale image with alpha mask
    UIImage *grayScaleImage = [UIImage imageWithCGImage:CGImageCreateWithMask(grayImage, mask) scale:image.scale orientation:image.imageOrientation];
    // release the CG images
    CGImageRelease(grayImage);
    CGImageRelease(mask);
    // return the new grayscale image
    return grayScaleImage;
    /* changes end here */
}
How about downsampling the UIImage before passing it on to the grayscale translation? Something like:
NSData *imageAsData = UIImageJPEGRepresentation(imageFromCamera, 0.5);
UIImage *downsampledImaged = [UIImage imageWithData:imageAsData];
You can of course use compression qualities other than 0.5.
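Note that the JPEG quality parameter shrinks the file, not the pixel dimensions, so the grayscale pass still touches the same number of pixels. A sketch of an actual resize (the target width of 640 is an assumption; pick what your UI needs):
// Sketch: scale the image down to a target width before converting.
CGFloat targetWidth = 640.0; // hypothetical target; adjust to taste
CGFloat ratio = targetWidth / imageFromCamera.size.width;
CGSize newSize = CGSizeMake(targetWidth, imageFromCamera.size.height * ratio);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[imageFromCamera drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *downsampled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();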
If you are using AVFoundation to capture the image you can set the quality of the image to be captured by changing the capture session preset like the following:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetLow;
There is a table showing which presets correspond to which resolutions in the AVFoundation Programming Guide.
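If you need something between the extremes, you can check whether the device supports a preset before applying it (a sketch; the actual resolutions depend on the device):
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    session.sessionPreset = AVCaptureSessionPresetMedium; // e.g. 480x360 on many devices
} else {
    session.sessionPreset = AVCaptureSessionPresetLow;    // fall back to the lowest quality
}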
I found this little code snippet that seems to do what I want, but I'm getting yelled at by Xcode saying self.CGImage isn't a property of my view controller (which makes sense, since that's a UIImage property). What changes would I need to make to this code for it to be functional? Thanks!
- (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGContextRef mainViewContentContext;
    CGColorSpaceRef colorSpace;
    UIImage* tempImage;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a bitmap graphics context the size of the image
    mainViewContentContext = CGBitmapContextCreate (NULL, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    // free the rgb colorspace
    CGColorSpaceRelease(colorSpace);
    CGImageRef maskingImage = [maskImage CGImage];
    CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImage.size.width, maskImage.size.height), maskingImage);
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, image.size.width, image.size.height), self.CGImage);
    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    // convert the finished resized image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
    // image is retained by the property setting above, so we can
    // release the original
    CGContextRelease(mainViewContentContext);
    CGImageRelease(mainViewContentBitmapContext);
    maskingImage = nil;
    CGImageRelease(maskingImage);
    // return the image
    return theImage;
}
Try replacing self.CGImage with image.CGImage.
Place this method in a UIImage category (or subclass).
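A minimal category interface might look like this (the UIImage+Masking name and the method name are hypothetical). Inside a category, self is the image itself, so self.CGImage compiles and the separate image parameter can be dropped:
// UIImage+Masking.h -- hypothetical file and category name
#import <UIKit/UIKit.h>

@interface UIImage (Masking)
- (UIImage *)imageMaskedWith:(UIImage *)maskImage; // body as above, with self in place of image
@end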