How to render offscreen on iOS - ios

I am trying to make a metaball implementation in Swift but have run into this problem along the way. Basically I need to draw some alpha radial gradients offscreen and then check each pixel's value to see whether it is above a certain alpha threshold; if it is, the pixel becomes black, otherwise it becomes white.
The problem is that I can't figure out how to make an offscreen context that I can draw on and perform calculations on, and then display on the screen.
I have searched endlessly but I am very confused by the differences between UIKit image contexts and CGContext. In my current attempt I use a CGBitmapContext, but to no avail. Any help would be greatly appreciated (preferably in Swift, but anything goes).

You could draw to a bitmap graphics context as described here.

Here is how you can create an image context, draw to it using CG calls, and then get a UIImage:
// Create an N*N image (N being whatever size you need)
CGSize size = CGSizeMake(N, N);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// E.g. fill the whole context with a background colour:
CGColorRef backColor = [UIColor lightGrayColor].CGColor;
CGContextSetFillColorWithColor(ctx, backColor);
CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
// ... more drawing code ...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can get a CGImageRef very easily:
CGImageRef cgImage = image.CGImage;
Finally, you can get the underlying bytes as explained here.
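For the thresholding step specifically, a CGBitmapContext whose byte layout you control lets you read the alpha of every pixel directly after drawing. Below is a rough Objective-C sketch, assuming an RGBA buffer with 8 bits per component; N and the 128 cutoff are placeholders for whatever your metaball implementation needs:
// Assumed size and threshold - adjust to your needs
size_t width = N, height = N;
size_t bytesPerPixel = 4;
size_t bytesPerRow = bytesPerPixel * width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pass NULL so Core Graphics allocates the pixel buffer for us
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                         colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

// ... draw your radial alpha gradients into ctx here ...

uint8_t *pixels = (uint8_t *)CGBitmapContextGetData(ctx);
uint8_t threshold = 128; // alpha cutoff
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + y * bytesPerRow + x * bytesPerPixel;
        uint8_t alpha = p[3]; // R,G,B,A byte order with kCGImageAlphaPremultipliedLast
        uint8_t value = (alpha > threshold) ? 0 : 255; // black inside, white outside
        p[0] = value; p[1] = value; p[2] = value; p[3] = 255;
    }
}

// Wrap the modified buffer in an image you can display
CGImageRef thresholded = CGBitmapContextCreateImage(ctx);
UIImage *output = [UIImage imageWithCGImage:thresholded];
CGImageRelease(thresholded);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
Since the question mentions Swift: the same Core Graphics calls are available there (the CGContext initializer and its data property); only the syntax changes.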

Related

Remove UIBezierPath Lines Drawn On UIImage

I am trying to erase lines drawn on a UIImage. I have successfully erased lines drawn on an empty canvas.
What would be the trick for erasing lines drawn on a UIImage? Below are some things I have tried, but I have been unable to get the correct eraser effect:
Use the touch point, get the RGB of the image at that point, and stroke with that colour.
colorWithPatternImage: is too slow.
Kindly suggest any better solution.
What I usually do is draw the image to an offscreen buffer (say a CGBitmapContext, for example), draw the Bezier curves over it, and copy the result to the screen.
To remove one of the Beziers, I draw the image to the offscreen buffer, draw all the Bezier curves, except the one (or ones) I don't want, and then copy the result to the screen.
This also has the advantage that it avoids flicker that can be caused by erasing an element that's already onscreen. And it works properly if the curves overlap, whereas drawing with the image as a pattern would likely erase any overlap points.
EDIT: Here's some pseudo-code (never compiled - just from memory) to demonstrate what I mean:
-(UIImage*)drawImageToOffscreenBuffer:(UIImage*)inputImage
{
    size_t width = (size_t)inputImage.size.width;
    size_t height = (size_t)inputImage.size.height;
    CGContextRef offscreen = CGBitmapContextCreate(... width, height ...);
    CGImageRef cgImage = [inputImage CGImage];
    CGRect bounds = CGRectMake(0, 0, width, height);
    CGContextDrawImage(offscreen, bounds, cgImage);
    // Now iterate through the Beziers you want to keep
    for (int i = 0; i < numBeziers; i++)
    {
        if (drawBezier(i))
        {
            CGContextMoveToPoint(offscreen, ...);
            CGContextAddCurveToPoint(offscreen, ...); // fill in your bezier info here
            CGContextStrokePath(offscreen);           // remember to actually stroke the path
        }
    }
    // Put the result into a CGImage
    size_t rowBytes = CGBitmapContextGetBytesPerRow(offscreen);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, CGBitmapContextGetData(offscreen), rowBytes * height, NULL);
    CGColorSpaceRef colorSpace = CGBitmapContextGetColorSpace(offscreen);
    CGImageRef cgResult = CGImageCreate(width, height, ..., dataProvider, NULL, false, kCGRenderingIntentDefault); // colorSpace and rowBytes go in the elided arguments
    CGDataProviderRelease(dataProvider);
    // Note: colorSpace came from a "Get" function, so the context still owns it - don't release it here
    // Make a UIImage out of that CGImage
    UIImage* result = [UIImage imageWithCGImage:cgResult];
    // imageWithCGImage: retains the image, so release our +1 reference from CGImageCreate
    CGImageRelease(cgResult);
    CGContextRelease(offscreen);
    return result;
}
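If you'd rather stay at the UIKit level, the same redraw-everything-except-the-erased-stroke idea can be sketched with an image context and UIBezierPath. This only illustrates the approach above; paths and indexToRemove are hypothetical names for however you store the strokes, and stroke colour/width handling is simplified:
- (UIImage *)imageByRedrawing:(UIImage *)inputImage withPaths:(NSArray<UIBezierPath *> *)paths skipping:(NSUInteger)indexToRemove
{
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, inputImage.scale);
    // Draw the untouched photo first
    [inputImage drawInRect:CGRectMake(0, 0, inputImage.size.width, inputImage.size.height)];
    [[UIColor blackColor] setStroke];
    for (NSUInteger i = 0; i < paths.count; i++) {
        if (i == indexToRemove) continue; // leave out the "erased" stroke
        [paths[i] stroke];
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Because the image itself is never modified, erasing is just a matter of re-rendering with fewer strokes, which also avoids the flicker mentioned above.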

iOS: Adding an outline/stroke to an image context with a transparent background

The images that go through here are PNGs of different shapes with transparent backgrounds. In addition to merging them (which works fine), I'd like to give the new image an outline a couple of pixels thick, but I can't seem to manage that.
(So just to clarify, I'm after an outline around the actual shapes in the context, not a rectangle around the entire image.)
+ (UIImage *)mergeBackgroundImage:(UIImage *)backgroundImage withOverlayingImage:(UIImage *)overlayImage{
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
[overlayImage drawInRect:CGRectMake(backgroundImage.size.width - overlayImage.size.width, backgroundImage.size.height - overlayImage.size.height, overlayImage.size.width, overlayImage.size.height)];
//Add stroke.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Thanks for your time!
Markus
If you make a CALayer whose contents are set to a CGImage of your image, you can then use it as a masking layer for the layer that requires an outline. Once you've done that, you can render your layer into another context and then get another UIImage from that.
Edit: Something like what's described in this answer.
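The linked answer isn't reproduced here, but a similar alpha-masking idea can be sketched without a CALayer by using blend modes: first flatten the shape into a solid-colour silhouette, then draw that silhouette slightly enlarged behind the overlay in the merge method above. The 2-point inset is an assumption, and because this scales the silhouette rather than stroking it, the outline won't be perfectly uniform on complex shapes:
// Build a solid-colour silhouette of `shape` using its alpha channel
- (UIImage *)silhouetteOfImage:(UIImage *)shape withColor:(UIColor *)color
{
    CGRect r = CGRectMake(0, 0, shape.size.width, shape.size.height);
    UIGraphicsBeginImageContextWithOptions(shape.size, NO, shape.scale);
    [color setFill];
    UIRectFill(r); // flood the context with the outline colour
    [shape drawInRect:r blendMode:kCGBlendModeDestinationIn alpha:1.0]; // keep colour only where the shape is opaque
    UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return silhouette;
}
// Inside mergeBackgroundImage:withOverlayingImage:, draw the enlarged silhouette just before the
// overlay (overlayRect is the rect the overlay is already drawn into), e.g.:
// UIImage *outline = [self silhouetteOfImage:overlayImage withColor:[UIColor whiteColor]];
// [outline drawInRect:CGRectInset(overlayRect, -2, -2)];
// [overlayImage drawInRect:overlayRect];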

Core Graphics and OpenGL Drawing

I have a drawing app where I'm using the OpenGL paint code to draw the strokes, but I want to transfer them to another image after each stroke is complete and then clear the OpenGL view; for that I'm using Core Graphics. I'm running into a problem, however, where the OpenGL view is being cleared before the image is transferred via Core Graphics (even though I clear it afterwards).
(I want it the other way around, i.e. the image to be drawn first and then the painting image erased, to avoid any kind of flickering.)
(paintingView is the OpenGL view)
Here is the code:
// Save the previous line drawn to the "main image"
UIImage *paintingViewImage = [_paintingView snapshot];
UIGraphicsBeginImageContext(self.mainImage.frame.size);
[self.mainImage.image drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
// Get the image from the painting view
[paintingViewImage drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
self.mainImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.paintingView erase];
So the paintingView is being erased before mainImage.image is set from the current image context.
I'm only a beginner with these, so any thoughts are helpful.
Thanks
You're probably better off using FBOs (OpenGL framebuffer objects). You draw into one FBO, then switch drawing to a new FBO while you save off the previous one. You can ping-pong back and forth between the two FBOs. Here are the docs for using FBOs on iOS.
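The docs aren't quoted here, but creating a texture-backed FBO in OpenGL ES 2 looks roughly like the following; width, height, and how you ping-pong between the two buffers are up to your painting view, so treat this as a sketch rather than drop-in code:
// Create a texture and attach it to a framebuffer object
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer is incomplete");
}
// Strokes are drawn while `fbo` is bound; when a stroke finishes, bind the second FBO
// and keep drawing there, and only then read back or composite the first one
// (e.g. with glReadPixels), so nothing gets cleared before it has been copied.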

Cropping an ellipse using Core Image in iOS

I want to crop an ellipse from an image in iOS. Using the Core Image framework, I know how to crop a rectangular region.
Using Core Graphics, I am able to clip the elliptical region, but the size of the cropped image is the same as the size of the original image, since I am applying a mask to the area outside the ellipse.
So the goal is to crop the elliptical region from an image, with the cropped image no larger than the rectangular bounds of the ellipse.
Any help would be greatly appreciated. Thanks in advance.
You have to create a context in the correct size, try the following code:
- (UIImage *)cropImage:(UIImage *)input inElipse:(CGRect)rect {
CGRect drawArea = CGRectMake(-rect.origin.x, -rect.origin.y, input.size.width, input.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextAddEllipseInRect(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height));
CGContextClip(ctx);
[input drawInRect:drawArea];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Maybe you have to adjust the drawArea to your needs, as I did not test it.
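A hypothetical usage, assuming the ellipse you want occupies a 200x100 region starting at (40, 60) in the source image:
UIImage *source = [UIImage imageNamed:@"photo"];
UIImage *oval = [self cropImage:source inElipse:CGRectMake(40, 60, 200, 100)];
// oval is only 200x100; since UIGraphicsBeginImageContext creates a non-opaque context,
// the corners outside the ellipse come out transparent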

How to get a CGImageRef from Context-Drawn Images?

OK, using Core Graphics, I'm building up an image which will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can I now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see from the comment, I'm wondering how I'm actually going to use the image I've built up. The reason I'm using Core Graphics here and not just building up a UIImage is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask it will just apply to everything... Further to the point, will I have any problems using a partially-transparent mask with this method?
CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
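Note that CGBitmapContextCreateImage has to be called before UIGraphicsEndImageContext, since the context goes away afterwards. A rough sketch of then using the result in the masking step (the destination context and the 150x150 rect are just placeholders):
// `result` is the CGImageRef created from the image context above
CGContextRef dest = UIGraphicsGetCurrentContext(); // or whichever context you are masking
CGContextSaveGState(dest);
CGContextClipToMask(dest, CGRectMake(0, 0, 150, 150), result);
// ... drawing done here is limited to the areas the mask lets through ...
CGContextRestoreGState(dest);
CGImageRelease(result); // CGBitmapContextCreateImage returned a +1 reference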
