How to get CGImageRef from CGContextRef? - iOS

I am trying to draw two images into a context, but it does not work and throws a
CGBitmapContextCreateImage: invalid context 0x0
error. I use the following code:
//some image
CGImageRef image = ...
//some image as well, but masked. Works perfectly.
CGImageRef blurredAndMasked = CGImageCreateWithMask(blurred, mask);
//Both images are fine. Sure.
//Initializing the context and color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapAlphaInfoMask);
//Drawing images into the context
CGContextDrawImage(ctx, CGRectMake(0, 0, frameSize.width, frameSize.height), image);
CGContextDrawImage(ctx, CGRectMake(0, 0, frameSize.width, frameSize.height), blurredAndMasked);
//getting the resulting image
CGImageRef ret = CGBitmapContextCreateImage(ctx);
//releasing the stuff
CGImageRelease(image);
CGImageRelease(blurred);
CGImageRelease(blurredAndMasked);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
This seems fine, but the resulting images come out all black or very close to it.
What should be changed in the code? Thanks in advance!

kCGBitmapAlphaInfoMask is not a valid value for the last (bitmapInfo) argument to CGBitmapContextCreate. It is a mask (hence the name) you can use with the & operator to get just the CGImageAlphaInfo out of a CGBitmapInfo value. You would never pass kCGBitmapAlphaInfoMask where a CGBitmapInfo or CGImageAlphaInfo is expected.
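For example, the mask is meant for reads like this (a minimal sketch; someImage stands for any existing CGImageRef):
// Pull just the alpha information out of an image's bitmap info
CGBitmapInfo info = CGImageGetBitmapInfo(someImage);
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(info & kCGBitmapAlphaInfoMask);
if (alphaInfo == kCGImageAlphaNone) {
    // the image has no alpha channel
}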
Assuming you don't need a specific byte order, I believe this is the highest performance pixel format with alpha channel on iOS:
CGContextRef ctx = CGBitmapContextCreate(nil,
frameSize.width, frameSize.height, 8, 0, colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
And this should be the highest performance without alpha channel:
CGContextRef ctx = CGBitmapContextCreate(nil,
frameSize.width, frameSize.height, 8, 0, colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
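Putting it together with the question's code (a sketch; frameSize, image, and blurredAndMasked are the question's variables):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// A valid byte-order/alpha combination instead of kCGBitmapAlphaInfoMask
CGContextRef ctx = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(ctx, CGRectMake(0, 0, frameSize.width, frameSize.height), image);
CGContextDrawImage(ctx, CGRectMake(0, 0, frameSize.width, frameSize.height), blurredAndMasked);
CGImageRef ret = CGBitmapContextCreateImage(ctx); // no longer an invalid context
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);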

Related

Fill CGBitmapContext with color

I'm trying to fill a CGBitmapContext with a solid red color and get a CGImageRef from it, but the image comes out fully transparent.
Here's the code
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
1,
1,
8,
0,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGFloat components[] = {1.0,0.0,0.0,1.0};
CGContextSetFillColor(context, components);
CGContextFillRect(context, CGRectMake(0, 0, 1, 1));
CGImageRef imgRef = CGBitmapContextCreateImage(context);
Please don't recommend using UIBezierPath and other UIKit things. I need to use Core Graphics.
The problem is that you forgot to set a fill color space. You must do that in order for CGContextSetFillColor to work correctly (because that tells it how many components to expect and how to interpret them). This works:
CGFloat components[] = {1,0,0,1};
CGContextSetFillColorSpace(context, colorSpace); // <-- this was missing
CGContextSetFillColor(context, components);
// and now fill ...
The problem was solved using CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);. By the way, it would be interesting to know what I was doing wrong. Also, Apple's documentation on CGContextSetFillColor says: "Note that the preferred API to use is now CGContextSetFillColorWithColor."
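For completeness, the whole snippet with the preferred call (a minimal sketch of the corrected flow):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 1, 1, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// The CGColor carries its own color space, so CGContextSetFillColorSpace is not needed
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillRect(context, CGRectMake(0, 0, 1, 1));
CGImageRef imgRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);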

How to change the RGB values of an image in iOS?

I have a UIImage and I want to decrease the RGB value of each pixel. How can I do that?
Or how can I change one color to another color in an image?
[Xcode 8, Swift 3]
This answer only applies if you want to change individual pixels.
First use UIImage's CGImage property to get a CGImage. Next, create a bitmap context using CGBitmapContextCreate with the CGColorSpaceCreateDeviceRGB color space and whatever bitmap info you need.
Then call CGContextDrawImage to draw the image into the context, which is backed by a pixel array provided by you. Clean up, and you now have an array of pixels.
- (uint8_t *)getPixels:(UIImage *)image {
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    uint8_t *pixels = malloc(image.size.width * image.size.height * 4);
    // Bitmap context backed by the malloc'd pixel buffer
    CGContextRef ctx = CGBitmapContextCreate(pixels, image.size.width, image.size.height, 8, image.size.width * 4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(cs);
    // Render the image into the buffer
    CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(ctx);
    return pixels;
}
Modify the pixels however you want, then recreate the image from them:
- (UIImage *)imageFromPixels:(uint8_t *)pixels width:(NSUInteger)width height:(NSUInteger)height {
    // Wrap the caller's buffer and create a CGImage that reads from it
    CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, width * height * 4, nil);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageRef = CGImageCreate(width, height, 8, 32, width * 4, cs, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, provider, nil, NO, kCGRenderingIntentDefault);
    // Redraw into a context-owned copy so the returned UIImage
    // does not depend on the caller's buffer staying alive
    uint8_t *copy = malloc(width * height * 4);
    CGContextRef ctx = CGBitmapContextCreate(copy, width, height, 8, width * 4, cs, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImageRef);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    CGImageRelease(cgImageRef);
    CGDataProviderRelease(provider);
    free(copy);
    return image;
}
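A hypothetical end-to-end use of the two helpers, answering the original "decrease the RGB value of each point" (sourceImage and the 50% factor are illustrative):
uint8_t *pixels = [self getPixels:sourceImage];
NSUInteger w = (NSUInteger)sourceImage.size.width;
NSUInteger h = (NSUInteger)sourceImage.size.height;
for (NSUInteger i = 0; i < w * h; i++) {
    pixels[i * 4 + 0] /= 2; // R
    pixels[i * 4 + 1] /= 2; // G
    pixels[i * 4 + 2] /= 2; // B
    // pixels[i * 4 + 3] is alpha; leave it untouched
}
UIImage *dimmed = [self imageFromPixels:pixels width:w height:h];
free(pixels); // safe: imageFromPixels copies the data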
One of the ways is to use the image as a template and set whatever tint color you want.
extension UIImageView {
    func changeImageColor(color: UIColor) -> UIImage {
        image = image!.withRenderingMode(.alwaysTemplate)
        tintColor = color
        return image!
    }
}
//Change color of logo
logoImage.image = logoImage.changeImageColor(color: .red)

Convert indexed color .png to RGB or greyscale

I'm writing a Today Widget that needs to display an image.
I noticed that every time the widget loads, the image is redrawn. This takes about half a second.
After some investigation, I found out that the culprit is that the image file is in an indexed color space.
So: my question is:
How do I convert this file to something that the iPhone can display more efficiently? For instance, an RGB file. I would then save it to a new file, and load that new file in my UIImageView.
I played around a bit with CGImage, since I believe that is the direction of the solution, but I end up with a white UIImageView.
This is my code:
UIImage * theCartoon = [UIImage imageWithData:imageData];
CGImageRef si = [theCartoon CGImage];
CGDataProviderRef src = CGImageGetDataProvider(si);
CGImageRef imageRef = CGImageCreateWithPNGDataProvider(src, NULL, NO, kCGRenderingIntentDefault);
cartoon.image = [[UIImage alloc] initWithCGImage:imageRef];
Any suggestions on this approach? Some obvious misprogramming?
Try this
// The source image
CGImageRef image = theCartoon.CGImage;
CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
// The result image in RGB color space
CGImageRef result = nil;
// Check color space
CGColorSpaceRef srcColorSpace = CGImageGetColorSpace(image);
if (CGColorSpaceGetModel(srcColorSpace) != kCGColorSpaceModelRGB) {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaNoneSkipLast);
CGRect rect = {CGPointZero, size};
CGContextDrawImage(context, rect, image);
result = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
}
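If the conversion branch ran, the result can go straight into the image view; cartoon and theCartoon are the question's variables:
if (result) {
    cartoon.image = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
} else {
    cartoon.image = theCartoon; // already RGB, nothing to convert
}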
It's been a while since the question was asked, but for others who might need this, here is my solution:
-(UIImage *) convertIndexedColorSpaceToRGB:(UIImage *) sourceImage {
CGImageRef originalImageRef = sourceImage.CGImage;
const CGBitmapInfo originalBitmapInfo = CGImageGetBitmapInfo(originalImageRef);
// See: http://stackoverflow.com/questions/23723564/which-cgimagealphainfo-should-we-use
const uint32_t alphaInfo = (originalBitmapInfo & kCGBitmapAlphaInfoMask);
CGBitmapInfo bitmapInfo = originalBitmapInfo;
switch (alphaInfo)
{
case kCGImageAlphaNone:
bitmapInfo |= kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast;
break;
case kCGImageAlphaPremultipliedFirst:
case kCGImageAlphaPremultipliedLast:
case kCGImageAlphaNoneSkipFirst:
case kCGImageAlphaNoneSkipLast:
break;
case kCGImageAlphaOnly:
case kCGImageAlphaLast:
case kCGImageAlphaFirst:
{
return sourceImage;
}
break;
}
const CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
const CGSize pixelSize = CGSizeMake(sourceImage.size.width * sourceImage.scale, sourceImage.size.height * sourceImage.scale);
const CGContextRef context = CGBitmapContextCreate(NULL,
pixelSize.width,
pixelSize.height,
CGImageGetBitsPerComponent(originalImageRef),
pixelSize.width*4,
colorSpace,
bitmapInfo
);
CGColorSpaceRelease(colorSpace);
if (!context) return sourceImage;
const CGRect imageRect = CGRectMake(0, 0, pixelSize.width, pixelSize.height);
UIGraphicsPushContext(context);
// Flip coordinate system. See: http://stackoverflow.com/questions/506622/cgcontextdrawimage-draws-image-upside-down-when-passed-uiimage-cgimage
CGContextTranslateCTM(context, 0, pixelSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
[sourceImage drawInRect:imageRect];
UIGraphicsPopContext();
const CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImage *image = [UIImage imageWithCGImage:decompressedImageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
CGImageRelease(decompressedImageRef);
return image;
}
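Typical call site (the asset name and imageView property are illustrative):
UIImage *converted = [self convertIndexedColorSpaceToRGB:[UIImage imageNamed:@"indexed_artwork"]];
self.imageView.image = converted;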

Use CGColorSpaceCreateDeviceGray but enable alpha

I have a UIImage with some alpha values and want to make a gray version of it. I've been using the code below, and it works for the non-alpha parts of the image, but since alpha is not supported/turned off, the alpha parts turn out black... How would I successfully turn alpha support on?
(I modified this from code floating around Stack Overflow to support other scales, read: Retina.)
-(UIImage*)grayscaledVersion2 {
// Create image rectangle with current image width/height
const CGRect RECT = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, RECT.size.width, RECT.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// kCGImageAlphaNone = no alpha, kCGImageAlphaPremultipliedFirst/kCGImageAlphaFirst/kCGImageAlphaLast = crash
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, RECT, [self CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage* imageGray = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
DLog(@"greyed %@ (%f, %f %f) into %@ (%f, %f %f)", self, self.scale, self.size.width, self.size.height, imageGray, imageGray.scale, imageGray.size.width, imageGray.size.height);
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
return imageGray;
}
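The thread ends without a posted answer; one common workaround (a sketch, not from the original thread) is to keep the opaque grayscale render from grayscaledVersion2 and clip it back to the original alpha channel with the kCGBlendModeDestinationIn blend mode:
- (UIImage *)grayscaledVersionKeepingAlpha {
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    // Draw the opaque grayscale version first
    [[self grayscaledVersion2] drawInRect:rect];
    // DestinationIn keeps destination pixels only where the source (the original image) has alpha
    [self drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}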

iOS to Mac GraphicContext Explanation/Conversion

I have been programming for two years on iOS and never on the Mac. I am working on a little utility for handling some simple image needs that I have in my iOS development. Anyway, I have working code in iOS that runs perfectly, but I have absolutely no idea what the equivalents are on the Mac.
I've tried a bunch of different things, but I really don't understand how to start a graphics context on the Mac outside of a "drawRect:" method. On the iPhone I would just use UIGraphicsBeginImageContext(). I know other posts have said to use lockFocus/unlockFocus, but I'm not sure how exactly to make that work for my needs. Oh, and I really miss UIImage's "CGImage" property. I don't understand why NSImage can't have one, though it sounds a bit trickier than just that.
Here is my working code on iOS—basically it just creates a reflected image from a mask and combines them together:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"Mask_Image.jpg" ofType:nil]];
UIImage *image = [UIImage imageNamed:@"Test_Image1.jpg"];
UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen]scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width, image.size.height + (image.size.height * .5)), NO, [[UIScreen mainScreen]scale]);
[image drawInRect:CGRectMake(0,0, image.size.width, image.size.height)];
[maskedImage drawInRect:CGRectMake(0, image.size.height, maskedImage.size.width, maskedImage.size.height)];
UIImage *anotherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//do something with anotherImage
Any suggestions for achieving this (simply) on the Mac?
Here's a simple example that draws a blue circle into an NSImage (I'm using ARC in this example, add retains/releases to taste)
NSSize size = NSMakeSize(50, 50);
NSImage* im = [[NSImage alloc] initWithSize:size];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:size.width
pixelsHigh:size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[im addRepresentation:rep];
[im lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextClearRect(ctx, NSMakeRect(0, 0, size.width, size.height));
CGContextSetFillColorWithColor(ctx, [[NSColor blueColor] CGColor]);
CGContextFillEllipseInRect(ctx, NSMakeRect(0, 0, size.width, size.height));
[im unlockFocus];
[[im TIFFRepresentation] writeToFile:@"/Users/USERNAME/Desktop/foo.tiff" atomically:NO];
The main difference is that on OS X you first have to create the image, then you can begin drawing into it; on iOS you create the context, then extract the image from it.
Basically, lockFocus makes the current context be the image and you draw directly onto it, then use the image.
I'm not completely sure if this answers all of your question, but I think it's at least one part of it.
Well, here's the note on UIGraphicsBeginImageContextWithOptions:
UIGraphicsBeginImageContextWithOptions
Creates a bitmap-based graphics context with the specified options.
The OS X equivalent, which is also available in iOS (and which UIGraphicsBeginImageContextWithOptions is possibly a wrapper around), is CGBitmapContextCreate:
Declared as:
CGContextRef CGBitmapContextCreate (
void *data,
size_t width,
size_t height,
size_t bitsPerComponent,
size_t bytesPerRow,
CGColorSpaceRef colorspace,
CGBitmapInfo bitmapInfo
);
Although it's a C API, you could think of CGBitmapContext as a subclass of CGContext. It renders to a pixel buffer, whereas a CGContext renders to an abstract destination.
For UIGraphicsGetImageFromCurrentImageContext, you can use CGBitmapContextCreateImage and pass your bitmap context to create a CGImage.
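So an off-screen round trip on the Mac, without lockFocus, could look roughly like this (a sketch, assuming a 100x100 RGBA canvas; NSColor's CGColor property needs 10.8+):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 100, 100, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGContextSetFillColorWithColor(ctx, [[NSColor blueColor] CGColor]);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, 100, 100));
// The analogue of UIGraphicsGetImageFromCurrentImageContext
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSImage *result = [[NSImage alloc] initWithCGImage:cgImage size:NSMakeSize(100, 100)];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);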
Here is a Swift (2.1 / 10.11 API-compliant) version of Cobbal's answer
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
pixelsWide: Int(size.width),
pixelsHigh: Int(size.height),
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: 0,
bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.currentContext()?.CGContext
CGContextClearRect(ctx, rect)
CGContextSetFillColorWithColor(ctx, NSColor.blackColor().CGColor)
CGContextFillRect(ctx, rect)
im.unlockFocus()
Swift 3 version of Cobbal's answer
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
pixelsWide: Int(size.width),
pixelsHigh: Int(size.height),
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: 0,
bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.current()?.cgContext
ctx!.clear(rect)
ctx!.setFillColor(NSColor.black.cgColor)
ctx!.fill(rect)
im.unlockFocus()
