iOS to Mac GraphicContext Explanation/Conversion

I have been programming for two years on iOS and never on the Mac. I am working on a little utility for handling some simple image needs that I have in my iOS development. Anyway, I have working code on iOS that runs perfectly, but I have absolutely no idea what the equivalents are on the Mac.
I've tried a bunch of different things, but I really don't understand how to start a graphics context on the Mac outside of a "drawRect:" method. On the iPhone I would just use UIGraphicsBeginImageContext(). I know other posts have said to use lockFocus/unlockFocus, but I'm not sure exactly how to make that work for my needs. Oh, and I really miss UIImage's "CGImage" property. I don't understand why NSImage can't have one, though it sounds a bit trickier than just that.
Here is my working code on iOS—basically it just creates a reflected image from a mask and combines them together:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"Mask_Image.jpg" ofType:nil]];
UIImage *image = [UIImage imageNamed:@"Test_Image1.jpg"];
UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen]scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width, image.size.height + (image.size.height * .5)), NO, [[UIScreen mainScreen]scale]);
[image drawInRect:CGRectMake(0,0, image.size.width, image.size.height)];
[maskedImage drawInRect:CGRectMake(0, image.size.height, maskedImage.size.width, maskedImage.size.height)];
UIImage *anotherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//do something with anotherImage
Any suggestions for achieving this (simply) on the Mac?

Here's a simple example that draws a blue circle into an NSImage (I'm using ARC in this example, add retains/releases to taste)
NSSize size = NSMakeSize(50, 50);
NSImage* im = [[NSImage alloc] initWithSize:size];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:size.width
pixelsHigh:size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[im addRepresentation:rep];
[im lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextClearRect(ctx, NSMakeRect(0, 0, size.width, size.height));
CGContextSetFillColorWithColor(ctx, [[NSColor blueColor] CGColor]);
CGContextFillEllipseInRect(ctx, NSMakeRect(0, 0, size.width, size.height));
[im unlockFocus];
[[im TIFFRepresentation] writeToFile:@"/Users/USERNAME/Desktop/foo.tiff" atomically:NO];
The main difference is that on OS X you first have to create the image, then you can begin drawing into it; on iOS you create the context, then extract the image from it.
Basically, lockFocus makes the current context be the image and you draw directly onto it, then use the image.
I'm not completely sure if this answers all of your questions, but I think it covers at least part of it.
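As for the missing CGImage property: on 10.6 and later you can ask an NSImage for one with CGImageForProposedRect:context:hints:, which should let you reuse the masking half of your iOS code more or less as-is. Here's a rough, untested sketch that reuses the mask file name from your question:
NSString *maskPath = [[NSBundle mainBundle] pathForResource:@"Mask_Image" ofType:@"jpg"];
NSImage *mask = [[NSImage alloc] initWithContentsOfFile:maskPath];
NSRect proposedRect = NSMakeRect(0, 0, mask.size.width, mask.size.height);
// Ask the NSImage for a CGImage suitable for drawing into the proposed rect.
CGImageRef maskRef = [mask CGImageForProposedRect:&proposedRect context:nil hints:nil];
// maskRef can now go straight into CGImageMaskCreate / CGImageCreateWithMask,
// just like in the iOS code above. The method follows the Get rule, so there
// is no CGImageRelease to balance here.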

Well, here's the note on UIGraphicsBeginImageContextWithOptions:
UIGraphicsBeginImageContextWithOptions
Creates a bitmap-based graphics context with the specified options.
The OS X equivalent, which is also available on iOS (and which UIGraphicsBeginImageContextWithOptions is possibly a wrapper around), is CGBitmapContextCreate:
Declared as:
CGContextRef CGBitmapContextCreate (
void *data,
size_t width,
size_t height,
size_t bitsPerComponent,
size_t bytesPerRow,
CGColorSpaceRef colorspace,
CGBitmapInfo bitmapInfo
);
Although it's a C API, you could think of CGBitmapContext as a subclass of CGContext. It renders to a pixel buffer, whereas a CGContext renders to an abstract destination.
For UIGraphicsGetImageFromCurrentImageContext, you can use CGBitmapContextCreateImage and pass your bitmap context to create a CGImage.
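To give a rough idea of how those two pieces fit together on either platform, here is a minimal sketch; the size and pixel format below are arbitrary and only for illustration:
// Sketch: a bitmap context that works the same way on OS X and iOS.
size_t width = 200;
size_t height = 200;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,          // let CG allocate the pixel buffer
                                         width, height,
                                         8,             // bits per component
                                         0,             // let CG choose bytesPerRow
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// Draw with ordinary Core Graphics calls.
CGContextSetRGBFillColor(ctx, 0, 0, 1, 1);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, width, height));
// The equivalent of UIGraphicsGetImageFromCurrentImageContext:
CGImageRef result = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
// Wrap result in [[NSImage alloc] initWithCGImage:result size:NSZeroSize] on the Mac
// (or [UIImage imageWithCGImage:result] on iOS), then CGImageRelease(result).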

Here is a Swift (2.1 / 10.11 API-compliant) version of Cobbal's answer
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
pixelsWide: Int(size.width),
pixelsHigh: Int(size.height),
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: 0,
bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.currentContext()?.CGContext
CGContextClearRect(ctx, rect)
CGContextSetFillColorWithColor(ctx, NSColor.blackColor().CGColor)
CGContextFillRect(ctx, rect)
im.unlockFocus()

Swift 3 version of Cobbal's answer
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
pixelsWide: Int(size.width),
pixelsHigh: Int(size.height),
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: 0,
bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.current()?.cgContext
ctx!.clear(rect)
ctx!.setFillColor(NSColor.black.cgColor)
ctx!.fill(rect)
im.unlockFocus()

Related

How to convert a UIImage into a black and white UIImage in Objective-C?

I don't want grayscale but rather for the darker colors to turn black and the lighter to turn white. How can I do this? I found something that looks promising here: Getting a Black and White UIImage (Not Grayscale), but the line below in the code gives me an error that I cannot fix.
CGContextRef contex = CreateARGBBitmapContext(image.size);
You should use GPUImage.
GPUImageAdaptiveThresholdFilter and GPUImageLuminanceThresholdFilter might be what you're looking for.
Example code:
UIImage *image = [UIImage imageNamed:@"yourimage.png"];
GPUImageLuminanceThresholdFilter *filter = [[GPUImageLuminanceThresholdFilter alloc] init];
UIImage *quickFilteredImage = [filter imageByFilteringImage:image];
Hope this helps!
You can convert your image to black and white using the following code.
-(UIImage *)convertOriginalImageToBWImage:(UIImage *)originalImage
{
UIImage *newImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(nil, originalImage.size.width * originalImage.scale, originalImage.size.height * originalImage.scale, 8, originalImage.size.width * originalImage.scale, colorSpace, kCGImageAlphaNone);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetShouldAntialias(context, NO);
CGContextDrawImage(context, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), [originalImage CGImage]);
CGImageRef bwImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *resultImage = [UIImage imageWithCGImage:bwImage];
CGImageRelease(bwImage);
UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, originalImage.scale);
[resultImage drawInRect:CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Or else you can use the GPUImage framework.
It is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies.
GPUImage framework
Maybe it will help you.
For those interested, I created a Swift 3 version of @bHuMiCA's solution:
extension UIImage {
var bwImage: UIImage? {
guard let cgImage = cgImage,
let bwContext = bwContext else {
return nil
}
let rect = CGRect(origin: .zero, size: size)
bwContext.draw(cgImage, in: rect)
let bwCgImage = bwContext.makeImage()
return bwCgImage.flatMap { UIImage(cgImage: $0) }
}
private var bwContext: CGContext? {
let bwContext = CGContext(data: nil,
width: Int(size.width * scale),
height: Int(size.height * scale),
bitsPerComponent: 8,
bytesPerRow: Int(size.width * scale),
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGImageAlphaInfo.none.rawValue)
bwContext?.interpolationQuality = .high
bwContext?.setShouldAntialias(false)
return bwContext
}
}

subimage from UIImage

I've a PNG loaded into a UIImage. I want to get a portion of the image based on a path (i.e. it might not be rectangular). Say, it might be some shape with arcs, etc. Like a drawing path.
What would be the easiest way to do that?
Thanks.
I haven't run this, so it may not be perfect but this should give you an idea.
UIImage *imageToClip = //get your image somehow
CGPathRef yourPath = //get your path somehow
CGImageRef imageRef = [imageToClip CGImage];
size_t width = CGImageGetWidth(imageRef);  
size_t height = CGImageGetHeight(imageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, CGImageGetColorSpace(imageRef), kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CGContextAddPath(context, yourPath);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef clippedImageRef = CGBitmapContextCreateImage(context);
UIImage *clippedImage = [UIImage imageWithCGImage:clippedImageRef];//your final, masked image
CGImageRelease(clippedImageRef);
CGContextRelease(context);
The easiest way is to add a category to UIImage with the following method:
-(UIImage *)scaleToRect:(CGRect)rect{
// Create a bitmap graphics context
// This will also set it as the current context
UIGraphicsBeginImageContext(rect.size);
// Draw the scaled image in the current context
[self drawInRect:rect];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}

Why does this code resizing an image rotate it 90 degrees from original? [duplicate]

This question already has an answer here:
How to get a correctly rotated UIImage from an ALAssetRepresentation?
I have an iPad app where I'm using the camera. The original image is 480 x 640. I am attempting to resize it to 124 x 160 and then store it in CoreData using this code that I found on the internet:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
The image is returned to me rotated counter-clockwise 90 degrees and I don't see why. I have tried commenting out this statement:
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
but it makes no difference. What is wrong here?
Thanks to everybody who made suggestions. I kept looking and found this, which scales and keeps the orientation:
CGRect screenRect = CGRectMake(0, 0, 120.0, 160.0);
UIGraphicsBeginImageContext(screenRect.size);
[image drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to consider the orientation of the image when drawing it like this. Check this answer: iOS - UIImageView - how to handle UIImage image orientation.
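If you would rather keep CGContextDrawImage for the scaling, one common pattern (a sketch, not taken from the linked answer verbatim) is to normalize the orientation first by redrawing the UIImage, since drawInRect: honors imageOrientation while CGContextDrawImage works on the raw, unrotated pixel data:
// Sketch: redraw the image so its pixels match UIImageOrientationUp.
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized; // safe to feed to CGContextDrawImage afterwards
}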

Screenshot low quality iOS

In my project I have to take a screenshot of the screen and apply a blur to create the effect of frosted glass. Content can be moved under the glass, and then the method bluredImageWithRect: is called. I'm trying to optimize the following method to speed up the application. The major cost is applying the blur filter to the screenshot, so I'm looking for a way to take the screenshot at a lower resolution, apply the blur to it, and then stretch it to fit some rect.
- (CIImage *)bluredImageWithRect:(CGRect)rect {
CGSize smallSize = CGSizeMake(rect.size.width, rect.size.height);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, smallSize.width, smallSize.height, 8, 0, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
CGContextClearRect(ctx, rect);
CGColorSpaceRelease(colorSpaceRef);
CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextTranslateCTM(ctx, 0.0, self.view.frame.size.height);
CGContextScaleCTM(ctx, 1, -1);
CGImageRef maskImage = [UIImage imageNamed:@"mask.png"].CGImage;
CGContextClipToMask(ctx, rect, maskImage);
[self.view.layer renderInContext:ctx];
CGImageRef imageRef1 = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
NSDictionary *options = @{(id)kCIImageColorSpace : (id)kCFNull};
CIImage *beforeFilterImage = [CIImage imageWithCGImage:imageRef1 options:options];
CGImageRelease(imageRef1);
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, beforeFilterImage, @"inputRadius", [NSNumber numberWithFloat:3.0f], nil];
CIImage *afterFilterImage = blurFilter.outputImage;
CIImage *croppedImage = [afterFilterImage imageByCroppingToRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
return croppedImage;
}
Here is a tutorial, iOS image processing with the Accelerate framework, that shows how to do a blur effect that may be fast enough for what you need.
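Another option, sketched below and untested, is to render the layer into a smaller context before blurring, since the blur cost scales with the pixel count; downscale is an assumed tuning parameter, not something from your original method:
// Sketch: snapshot the view at reduced resolution, then blur the small image.
- (CIImage *)lowResBlurredSnapshot {
    CGFloat downscale = 0.25; // render at quarter resolution (assumed value)
    CGSize smallSize = CGSizeMake(self.view.bounds.size.width * downscale,
                                  self.view.bounds.size.height * downscale);
    UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(ctx, downscale, downscale);   // shrink while rendering
    [self.view.layer renderInContext:ctx];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:[CIImage imageWithCGImage:snapshot.CGImage] forKey:kCIInputImageKey];
    [blur setValue:@(3.0f * downscale) forKey:@"inputRadius"]; // smaller radius on the small image
    // Crop away the edge the blur spreads past, then stretch back up when displaying.
    return [blur.outputImage imageByCroppingToRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
}
// The masking and rect cropping from the original method would be layered on top
// of this in the same way as before.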

How do I change a partially transparent image's color in iOS?

I have a single-color image that has partial transparency. I have both normal and @2X versions of the image. I would like to be able to tint the image a different color, in code. The code below works fine for the normal image, but the @2X ends up with artifacts. The normal image might have a similar issue; if so, I can't detect it on account of the resolution.
+(UIImage *) newImageFromMaskImage:(UIImage *)mask inColor:(UIColor *) color {
CGImageRef maskImage = mask.CGImage;
CGFloat width = mask.size.width;
CGFloat height = mask.size.height;
CGRect bounds = CGRectMake(0,0,width,height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextClipToMask(bitmapContext, bounds, maskImage);
CGContextSetFillColorWithColor(bitmapContext, color.CGColor);
CGContextFillRect(bitmapContext, bounds);
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
UIImage *result = [UIImage imageWithCGImage:mainViewContentBitmapContext];
return result;
}
If it matters, the mask image is loaded using UIImage imageNamed:. Also, I confirmed that the @2X image is loading when run on the retina simulator.
Update: The above code works. The artifacts I was seeing were caused by additional transforms done by the consumer of the images. This question could be deleted since it's not really a question anymore, or left for posterity.
I have updated the code above to account for retina resolution images:
- (UIImage *) changeColorForImage:(UIImage *)mask toColor:(UIColor*)color {
CGImageRef maskImage = mask.CGImage;
CGFloat width = mask.scale * mask.size.width;
CGFloat height = mask.scale * mask.size.height;
CGRect bounds = CGRectMake(0,0,width,height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextClipToMask(bitmapContext, bounds, maskImage);
CGContextSetFillColorWithColor(bitmapContext, color.CGColor);
CGContextFillRect(bitmapContext, bounds);
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
return [UIImage imageWithCGImage:mainViewContentBitmapContext scale:mask.scale orientation:UIImageOrientationUp];
}
The code in the question is working code. The bug was elsewhere.
