CGContextRef as UIImage - iOS

Trying to create a UIImage from a drawing context.
Not seeing anything. Am I missing something, or am I completely out of my mind?
Code
- (UIImage *)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, 100, 100);
    CGContextAddLineToPoint(context, 150, 150);
    CGContextAddLineToPoint(context, 100, 200);
    CGContextAddLineToPoint(context, 50, 150);
    CGContextAddLineToPoint(context, 100, 100);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextFillPath(context);
    // Do your stuff here
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    return img;
}

I'm assuming this is not a -drawRect: method on a view, because the return value is wrong. (-[UIView drawRect:] returns void, not a UIImage*.)
If it is on a UIView, that means you must be calling it directly to get the return value. But that means that UIKit hasn't set up a graphics context, the way it normally does before it calls -drawRect: on the views in a window.
Therefore, you shouldn't assume that UIGraphicsGetCurrentContext() is valid. It's probably nil (have you checked?).
If you just want an image: use UIGraphicsBeginImageContext() to create a context, then UIGraphicsGetImageFromCurrentImageContext() to extract a UIImage (no need for the intermediary CGImage), then UIGraphicsEndImageContext() to clean up.
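For illustration, a minimal sketch of that approach (the method name and the 200x300 canvas size here are placeholders, not from the original):
- (UIImage *)diamondImage {
    // Create an offscreen image context; UIKit makes it the current context.
    UIGraphicsBeginImageContext(CGSizeMake(200, 300));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, 100, 100);
    CGContextAddLineToPoint(context, 150, 150);
    CGContextAddLineToPoint(context, 100, 200);
    CGContextAddLineToPoint(context, 50, 150);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextFillPath(context); // implicitly closes the path
    // Extract the UIImage directly; no intermediary CGImage needed.
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}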
If you're trying to capture an image of what your view drew: fix your -drawRect: to return void, and find some other way to get that UIImage out of the view -- either stash it in an ivar, or send it to some other object, or write it to a file, whatever you like.
Also (less importantly):
Don't CGContextRelease(context). You didn't create, copy, or retain it, so you shouldn't release it.
No need for the last CGContextAddLineToPoint(). CGContextFillPath will implicitly close the path for you.

Related

CGContextDrawImage off-screen performance

I have been using the following code to create an image off-screen (creating a context in a background thread using GCD) and later render it on-screen in the main thread, only when I receive an updated array of UIImages.
I found that rendering paths, lines, etc. this way was much faster than doing it in the drawRect: method of the UIView.
The issue I'm facing now is that rendering a UIImage with CGContextDrawImage() gets really slow: ~1.4-2.5 sec per render. From searching, the main issue seems to be that it spends most of its time inflating (decompressing) the PNG file. I have tried inflating the PNG image up front and using a CGImageRef, but the rendering time is still long. I don't have control over the data array that contains the list of PNG files; it comes in as-is.
Would breaking the image into small pieces and/or using CALayer help? How can I stitch those images together without using CGContextDrawImage; is that possible?
I'm not sure of the best way to fix this issue. Any ideas?
Thanks in advance.
- (void)createImageInBackground
{
    CGSize size = CGSizeMake(self.frame.size.width, self.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    UIGraphicsPushContext(context);
    for (UIImage *img in imgList)
    {
        ulXY = <calculate>;
        lrXY = <calculate>;
        imRec = CGRectMake(ulXY.x, -ulXY.y, (lrXY.x - ulXY.x), (lrXY.y - ulXY.y));
        CGContextTranslateCTM(context, 0, (lrXY.y - ulXY.y));
        CGContextScaleCTM(context, 1, -1);
        CGContextDrawImage(context, imRec, [img CGImage]);
        //[img drawInRect:imRec];
    }
    UIGraphicsPopContext();
    [outputImage release];
    outputImage = UIGraphicsGetImageFromCurrentImageContext();
    [outputImage retain];
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^(void)
    {
        [self setNeedsDisplay];
        [self setHidden:NO];
    });
}
Then
- (void)drawRect:(CGRect)rect
{
    CGPoint imagePoint = CGPointMake(0, 0);
    [outputImage drawAtPoint:imagePoint];
}
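One standard mitigation for the PNG decompression cost, sketched here as an assumption rather than a confirmed fix: force each image to decode once, in the background, by drawing it into its own image context and keeping the decoded copy. Later CGContextDrawImage calls then work on already-decompressed pixels.
// Hypothetical helper (not from the original code): returns a copy of
// `image` whose pixel data has already been decompressed.
UIImage *ForceDecodedImage(UIImage *image)
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decoded;
}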

Fast screenshot on iOS

In my project I have to take a screenshot of the screen and apply a blur to it to create a frosted-glass effect. The content can be moved under the glass, and the blurred picture changes with it.
I've used Accelerate.framework to speed up the blurring, and I've also used OpenGL to draw the CIImage directly to a GLView.
Now I'm looking for a way to optimize getting the screenshot of the screen.
I use this method to get a screenshot of some area at the bottom of the screen:
CGSize size = CGSizeMake(rect.size.width, rect.size.height);
// get screenshot of self.view
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
CGContextClearRect(ctx, rect);
CGColorSpaceRelease(colorSpaceRef);
CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextTranslateCTM(ctx, 0.0, someView.frame.size.height);
CGContextScaleCTM(ctx, 1, -1);
//add mask
CGImageRef maskImage = [UIImage imageNamed:@"mask.png"].CGImage;
CGContextClipToMask(ctx, rect, maskImage);
[someView.layer renderInContext:ctx];
//get screenshot image
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
It works fine and fast if self.view has 1-2 subviews, but if there are several subviews (or it is a table view), then everything starts to slow down.
So I'm trying to find a fast way to get the pixels from some rect on the screen, maybe using a low-level API.
If you just need the snapshot for some animations, try the -snapshotViewAfterScreenUpdates: or -resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: methods that UIView provides. These methods return a UIView object without rendering into a bitmap image, so they are more efficient.
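For illustration, a minimal usage sketch (someView and targetFrame are placeholders; the snapshot API is available from iOS 7):
// A lightweight snapshot view; no bitmap is created.
UIView *snapshot = [someView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = targetFrame;
[self.view addSubview:snapshot];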

How to get a CGImageRef from Context-Drawn Images?

OK, using Core Graphics, I'm building up an image which will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150); // assumed: matches the context size below
UIGraphicsBeginImageContext(CGSizeMake(150, 150));
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can i now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see from the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason I'm using Core Graphics here and not just building up a UIImage is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask it will just apply to everything... Further to the point, will I have any problems using a partially transparent mask with this method?
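// Call this before UIGraphicsEndImageContext(), while the image context is still current: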
CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
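Putting the two suggestions together, a minimal sketch (targetContext stands in for whatever context the mask will be applied to):
// Grab the image while the image context is still current...
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// ...then use its CGImage in the clipping operation.
CGContextClipToMask(targetContext, CGRectMake(0, 0, 150, 150), maskImage.CGImage);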

MonoTouch convert color UIImage with alpha to grayscale and blur?

I am trying to find a recipe for producing a blurred grayscale UIImage from a color PNG with alpha. There are recipes out there in Objective-C, but MonoTouch does not bind the CGRect functions, so I'm not sure how to do this. Any ideas?
Here is one Objective-C example of grayscale conversion:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create a bitmap context with the current image size and grayscale color space
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw the image into the context, within the specified rectangle
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // Create a bitmap image from the pixel data in the context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    // Release the color space, context, and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    // Return the new grayscale image
    return newImage;
}
MonoTouch does not bind the CGRect functions, so I'm not sure how to do this.
When using MonoTouch, CGRect is mapped to RectangleF. A lot of extension methods exist that should map to every function provided by CGRect. You should not have any problem porting Objective-C code that uses them.
If something is missing, please file a bug report at http://bugzilla.xamarin.com and we'll fix it ASAP (and provide a workaround when possible).
There are recipes out there in ObjC but MonoTouch
If you have links then please edit your question and add them. That will make it easier to help you :)
UPDATE
Here's a line-by-line C# translation of your example. It seems to work for me (and it's much easier on my eyes than Objective-C ;-)
UIImage ConvertToGrayScale (UIImage image)
{
    RectangleF imageRect = new RectangleF (PointF.Empty, image.Size);
    using (var colorSpace = CGColorSpace.CreateDeviceGray ())
    using (var context = new CGBitmapContext (IntPtr.Zero, (int) imageRect.Width, (int) imageRect.Height, 8, 0, colorSpace, CGImageAlphaInfo.None)) {
        context.DrawImage (imageRect, image.CGImage);
        using (var imageRef = context.ToImage ())
            return new UIImage (imageRef);
    }
}
I wrote a native port of the blur and tint UIImage categories from WWDC for MonoTouch.
https://github.com/lipka/MonoTouch.UIImageEffects
Sample code for tint and blur:
UIColor tintColor = UIColor.FromWhiteAlpha (0.11f, 0.73f);
UIImage yourImage;
yourImage.ApplyBlur (20f /*blurRadius*/, tintColor, 1.8f /*deltaSaturationFactor*/, null);

Draw background image using CGContextDrawImage

I want to draw on a UIView that has a background image that I set using the code below:
- (void)setBackgroundImageFromData:(NSData *)imageData {
    UIImage *image = [UIImage imageWithData:imageData];
    int width = image.size.width;
    int height = image.size.height;
    CGSize size = CGSizeMake(width, height);
    CGRect imageRect = CGRectMake(0, 0, width, height);
    UIGraphicsBeginImageContext(size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(currentContext, 0, height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextDrawImage(currentContext, imageRect, image.CGImage);
    UIGraphicsEndImageContext();
}
The initial view is created using the code from Apple's GLPaint example. For the life of me, that background image is not shown. What am I missing?
Thanks!
You create a UIImage and an imageRect successfully. You then begin an image context to draw into, draw the image into the context, and end the context. The problem is that you just let the context expire without doing anything with its result.
In UIKit you don't push new visuals upward; you wait until you're asked to draw. Internal mechanisms cache your images and use them to move things around and otherwise redraw the screen at the usual 60 fps.
If this is a custom UIView subclass, then you probably want to keep hold of the UIImage and composite it as part of your drawRect:. You can mark the contents of your UIView as changed by calling setNeedsDisplay; you'll then be asked to redraw your contents at some point in the future.
If this isn't a custom subclass, then the easiest thing to do is to wrap this view in an outer view and add a UIImageView behind it, to which you can set the UIImage.
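For the custom-subclass route, a minimal sketch (the backgroundImage property is illustrative, not from the question):
- (void)setBackgroundImageFromData:(NSData *)imageData {
    self.backgroundImage = [UIImage imageWithData:imageData]; // keep the image around
    [self setNeedsDisplay]; // ask UIKit to redraw this view later
}
- (void)drawRect:(CGRect)rect {
    [self.backgroundImage drawInRect:self.bounds];
    // ...then draw the rest of the view's content on top.
}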
