How to get a CGImageRef from Context-Drawn Images? - ios

OK, using Core Graphics I'm building up an image which will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can i now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see by the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason why I'm using core graphics here and not just building up a UIImage is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask, it will just apply to everything... Further to the point, will I have any problems using a partially-transparent mask using this method?

CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
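Note that this call has to happen before UIGraphicsEndImageContext(), while the bitmap context is still current, and the returned CGImage follows the Create rule, so you release it yourself once the clip is set. A minimal sketch of slotting it into the snippet above (maskContext and maskRect are placeholders for wherever the clipping actually happens):
CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();
CGContextClipToMask(maskContext, maskRect, maskRef);
CGImageRelease(maskRef); // we own maskRef because of the Create rule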

You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
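For example (a sketch; again maskContext and maskRect stand in for wherever the mask is applied):
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGContextClipToMask(maskContext, maskRect, maskImage.CGImage);
Unlike the CGBitmapContextCreateImage route, the CGImage here is owned by the UIImage, so there is nothing extra to release.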

Related

How to render offscreen IOS

I am trying to make a metaball implementation in Swift but have run into this problem on the way. Basically, I need to draw some alpha radial gradients offscreen and then check each pixel value to see whether it is above a certain alpha threshold; if it is, the pixel becomes black, otherwise it is white.
The problem is that I can't figure out how to make an offscreen context that I can draw on and perform calculations on, and then display it on the screen.
I have searched endlessly but I am very confused about the differences between UIKit graphics contexts and CGContext. In my current attempt I use a CGBitmapContext, but to no avail. Any help would be greatly appreciated (preferably in Swift, but anything goes).
You could draw to a bitmap graphics context as described here.
Here is how you can create an image context, draw to it using CG calls, and then get a UIImage:
// Create an N*N image (N stands in for whatever size you need)
CGSize size = CGSizeMake(N, N);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// E.g. fill a rectangle with a light gray background:
CGRect r1 = CGRectMake(0, 0, size.width, size.height);
CGColorRef backColor = [UIColor lightGrayColor].CGColor;
CGContextSetFillColorWithColor(ctx, backColor);
CGContextFillRect(ctx, r1);
// ... more drawing code ...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can get a CGImageRef very easily:
CGImageRef cgImage = image.CGImage;
Finally, you can get the underlying bytes as explained here.
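For the metaball threshold step specifically, one way to get at those bytes is to redraw the image into a bitmap context whose backing buffer you allocate yourself. A minimal sketch in Objective-C, matching the snippet above (the RGBA byte order and the name source for the UIImage produced there are assumptions):
CGImageRef cgSource = source.CGImage;
size_t width = CGImageGetWidth(cgSource);
size_t height = CGImageGetHeight(cgSource);
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height * bytesPerRow, 1);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, rgb, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), cgSource);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t alpha = pixels[y * bytesPerRow + x * 4 + 3]; // RGBA: alpha is the fourth byte
        // e.g. write black into an output buffer when alpha > threshold, white otherwise
    }
}
CGContextRelease(bitmap);
CGColorSpaceRelease(rgb);
free(pixels);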

How to make CGImageCreateWithImageInRect reference transformed image?

Xcode 5, iOS 7
I am loading an image into a UIImage, and then copying it into another UIImage with a mirror transform, i.e.:
self.imageB.image=[UIImage imageWithCGImage:[self.imageA.image CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Next, I'm trying to combine the two images into one when saving (originalImage refers to the loaded image prior to copying/transforming into imageA and imageB):
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
[self.imageA.image drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
[self.imageB.image drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
This works and gives me a single saved mirrored image.
However, I'm trying to do this using only a sub-region of imageA and imageB,
and the result is that the portion from imageB loses its mirrored transformation.
I end up with both sides of the final image having the same orientation! (Note: visRatio is the percentage of the image I want to keep.)
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
CGRect clippedRectA = CGRectMake(0,0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefA = CGImageCreateWithImageInRect([self.imageA.image CGImage], clippedRectA);
UIImage *leftImageA = [UIImage imageWithCGImage:imageRefA];
[leftImageA drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
CGRect clippedRectB = CGRectMake(originalImage.size.width-(originalImage.size.width*visRatio),0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefB = CGImageCreateWithImageInRect([self.imageB.image CGImage], clippedRectB);
UIImage *rightImageB = [UIImage imageWithCGImage:imageRefB];
[rightImageB drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
It's as though "CGImageCreateWithImageInRect" copies the image from the original, not the transformed, data.
How can I accomplish this without losing the mirrored transformations of imageB ?
I figured it out - this will use the original, full-resolution data opened into originalImage and save a clipped, transformed version (I left the mirroring out of this post for simplicity):
//capture the transformed version
CGSize tmpSize=CGSizeMake(originalImage.size.width, originalImage.size.height);
UIGraphicsBeginImageContext(tmpSize);
[self.imageA.image drawInRect:CGRectMake(0,0, tmpSize.width, tmpSize.height)];
UIImage *tmpImageA=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
visRatio=0.5;
//clip it
clippedRectA=CGRectMake(0,0,roundf(originalImage.size.width*visRatio),originalImage.size.height);
imageRefA=CGImageCreateWithImageInRect([tmpImageA CGImage],clippedRectA);
leftImageA=[UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA); // Create rule: we own imageRefA, so release it
savedImage=leftImageA;
UIImageWriteToSavedPhotosAlbum(savedImage, nil, nil, nil);
I believe the key is to draw the image into the context, then use the context for cropping/saving, instead of cropping the main image (imageA) directly. CGImageCreateWithImageInRect operates on the raw CGImage bitmap, which knows nothing about the UIImage's orientation; drawing into a context first bakes that transform into the pixels, so the crop comes from the already-mirrored data.
If you find a more efficient way, please post it!
Otherwise, I hope this helps someone else....
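One possibly more efficient route (a sketch only, assuming leftImageA and newSize have been built as in the snippets above): mirror the right half with a context transform while compositing, so imageB and its intermediate crop are never needed.
UIGraphicsBeginImageContext(newSize);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// left half: the clipped, transformed image as before
[leftImageA drawInRect:CGRectMake(0, 0, newSize.width / 2, newSize.height)];
// right half: the same image, flipped horizontally about the canvas's right edge
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, newSize.width, 0);
CGContextScaleCTM(ctx, -1, 1);
[leftImageA drawInRect:CGRectMake(0, 0, newSize.width / 2, newSize.height)];
CGContextRestoreGState(ctx);
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(combined, nil, nil, nil);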

Crop and recreate UIImage

I want to crop a UIImage (not a UIImageView) before doing pixel operations on it. Is there a reliable way to do this with the iOS frameworks?
Here are the related methods in android I am using:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap, int, int, int, int)
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(int, int, android.graphics.Bitmap.Config)
-(UIImage*)scaleToSize:(CGSize)size image:(UIImage*)image
{
UIGraphicsBeginImageContext(size);
// Draw the scaled image in the current context
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
I would advise you to look at:
The CoreAnimation Apple framework to create an image context, draw in it and save this as a UIImage in RAM or save it to disk.
The CoreImage Apple framework if you want a powerful and fully customizable image manipulation solution.
A third party lib to crop and resize images: CocoaPods is a great way to quickly integrate that kind of lib… Here is a list of some interesting image manipulation pods.
This worked for me:
-(UIImage *)cropCanvas:(UIImage *)input x1:(int)x1 y1:(int)y1 x2:(int)x2 y2:(int)y2{
CGRect rect = CGRectMake(x1, y1, input.size.width-x2-x1, input.size.height-y2-y1);
CGImageRef imageref = CGImageCreateWithImageInRect([input CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageref];
// CGImageCreateWithImageInRect follows the Create rule, so release the CGImage we own
CGImageRelease(imageref);
return img;
}
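A hypothetical call, trimming 10 pixels off each edge (photo is a placeholder UIImage). Note that the rect is applied to the underlying CGImage, so it is expressed in pixels and ignores the UIImage's scale and orientation:
UIImage *trimmed = [self cropCanvas:photo x1:10 y1:10 x2:10 y2:10];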

CGContext Ref as UIImage

Trying to create a UIImage from a draw context.
Not seeing anything. Am I missing something, or am I completely out of my mind?
Code
- (UIImage *)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextMoveToPoint(context, 100, 100);
CGContextAddLineToPoint(context, 150, 150);
CGContextAddLineToPoint(context, 100, 200);
CGContextAddLineToPoint(context, 50, 150);
CGContextAddLineToPoint(context, 100, 100);
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillPath(context);
// Do your stuff here
CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage* img = [UIImage imageWithCGImage:imgRef];
CGImageRelease(imgRef);
CGContextRelease(context);
return img;
}
I'm assuming this is not a -drawRect: method on a view, because the return value is wrong. (-[UIView drawRect:] returns void, not a UIImage*.)
If it is on a UIView, that means you must be calling it directly to get the return value. But that means that UIKit hasn't set up a graphics context, the way it normally does before it calls -drawRect: on the views in a window.
Therefore, you shouldn't assume that UIGraphicsGetCurrentContext() is valid. It's probably nil (have you checked?).
If you just want an image: use UIGraphicsBeginImageContext() to create a context, then UIGraphicsGetImageFromCurrentImageContext() to extract a UIImage (no need for the intermediary CGImage), then UIGraphicsEndImageContext() to clean up.
If you're trying to capture an image of what your view drew: fix your -drawRect: to return void, and find some other way to get that UIImage out of the view -- either stash it in an ivar, or send it to some other object, or write it to a file, whatever you like.
Also (less importantly):
Don't CGContextRelease(context). You didn't create, copy, or retain it, so you shouldn't release it.
No need for the last CGContextAddLineToPoint(). CGContextFillPath will implicitly close the path for you.
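Putting that advice together, a minimal sketch of a method that draws the same diamond into its own image context and returns a UIImage (the 200x250 size is arbitrary, chosen to fit the path's coordinates):
- (UIImage *)diamondImage {
    // NO -> non-opaque (transparent background); scale 0 -> use the screen scale
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 250), NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(ctx, 100, 100);
    CGContextAddLineToPoint(ctx, 150, 150);
    CGContextAddLineToPoint(ctx, 100, 200);
    CGContextAddLineToPoint(ctx, 50, 150);
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillPath(ctx); // implicitly closes the path back to (100, 100)
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}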

MonoTouch convert color UIImage with alpha to grayscale and blur?

I am trying to find a recipe for producing a blurred grayscale UIImage from a color PNG with alpha. There are recipes out there in ObjC, but MonoTouch does not bind the CGRect functions, so I'm not sure how to do this. Any ideas?
Here is one ObjC example of grayscale:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
Monotouch does not bind the CGRect functions so not sure how to do this.
When using MonoTouch, CGRect is mapped to RectangleF. A lot of extension methods exist that should map to every function provided by CGRect. You should not have any problem porting Objective-C code that uses them.
If something is missing, please file a bug report at http://bugzilla.xamarin.com and we'll fix it ASAP (and provide a workaround when possible).
There are recipes out there in ObjC but MonoTouch
If you have links then please edit your question and add them. That will make it easier to help you :)
UPDATE
Here's a line-by-line C# translation of your example. It seems to work for me (and it's much easier on my eyes than Objective-C ;-)
UIImage ConvertToGrayScale (UIImage image)
{
RectangleF imageRect = new RectangleF (PointF.Empty, image.Size);
using (var colorSpace = CGColorSpace.CreateDeviceGray ())
using (var context = new CGBitmapContext (IntPtr.Zero, (int) imageRect.Width, (int) imageRect.Height, 8, 0, colorSpace, CGImageAlphaInfo.None)) {
context.DrawImage (imageRect, image.CGImage);
using (var imageRef = context.ToImage ())
return new UIImage (imageRef);
}
}
I wrote a native port of the blur and tint UIImage categories from WWDC for MonoTouch.
https://github.com/lipka/MonoTouch.UIImageEffects
Sample code for tint and blur:
UIColor tintColor = UIColor.FromWhiteAlpha (0.11f, 0.73f);
UIImage yourImage;
yourImage.ApplyBlur (20f /*blurRadius*/, tintColor, 1.8f /*deltaSaturationFactor*/, null);
