Core Graphics and OpenGL Drawing - iOS

I have a drawing app where I'm using OpenGL paint code to draw the strokes. When a stroke is complete, I want to transfer it to another image and then clear the OpenGL view; for that, I'm using Core Graphics. I'm running into a problem, however, where the OpenGL view is being cleared before the image is transferred via Core Graphics (even though I clear it afterwards in the code).
I want it the other way around, i.e. the image drawn first and then the painting view erased, to avoid any kind of flickering.
(paintingView is the OpenGL view.)
Here is the code:
// Save the previous stroke into the "main image"
UIImage *paintingViewImage = [_paintingView snapshot];
UIGraphicsBeginImageContext(self.mainImage.frame.size);
// Draw the existing composited image first...
[self.mainImage.image drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
// ...then the snapshot from the painting view on top
[paintingViewImage drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
self.mainImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.paintingView erase];
So the paintingView is being erased before mainImage.image is set from the current image context.
I'm only a beginner with these, so any thoughts are helpful.
Thanks

You're probably better off using FBOs (OpenGL framebuffer objects). You draw into one FBO, then switch drawing to a new FBO while you save off the previous one; you can ping-pong back and forth between the two FBOs. Here are the docs for using FBOs on iOS.
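A rough sketch of that setup in OpenGL ES 2.0 (not the poster's actual code; width and height are assumed to be defined elsewhere, and error checking is omitted):
#import <OpenGLES/ES2/gl.h>

// Create two texture-backed FBOs to ping-pong between (sketch only)
GLuint fbos[2], textures[2];
glGenFramebuffers(2, fbos);
glGenTextures(2, textures);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures[i], 0);
}

// When a stroke completes, flip `current` so new strokes go to the other
// FBO while you read back or composite the finished one.
int current = 0;
glBindFramebuffer(GL_FRAMEBUFFER, fbos[current]);
// ... stroke drawing calls ...
current = 1 - current;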

Related

How to render offscreen on iOS

I am trying to make a metaball implementation in Swift but have run into a problem along the way. Basically I need to draw some alpha radial gradients offscreen and then check each pixel's value to see whether it is above a certain alpha threshold: if it is, the pixel becomes black; otherwise it is white.
The problem is that I can't figure out how to make an offscreen context that I can draw on and perform calculations on, and then display on the screen.
I have searched endlessly but am very confused about the differences between UIKit graphics contexts and CGContext. In my current attempt I use a CGBitmapContext, but to no avail. Any help would be greatly appreciated (preferably in Swift, but anything goes).
You could draw to a bitmap graphics context as described here.
Here is how you can create an image context, draw to it using CG calls, and then get a UIImage:
// Create an N*N image
CGSize size = CGSizeMake(N, N);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// E.g. fill a rectangle with light gray:
CGRect r1 = CGRectMake(0, 0, size.width, size.height);
CGColorRef backColor = [UIColor lightGrayColor].CGColor;
CGContextSetFillColorWithColor(ctx, backColor);
CGContextFillRect(ctx, r1);
// ... more drawing code ...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can get a CGImageRef very easily:
CGImageRef cgImage = image.CGImage;
Finally, you can get the underlying bytes as explained here.
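For example, here is a hedged sketch of reading the pixels back, which is what the alpha-threshold test in the question needs (cgImage comes from the step above):
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                            colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), cgImage);
// RGBA layout: the alpha of pixel (x, y) is pixels[y * bytesPerRow + x * 4 + 3]
uint8_t alpha = pixels[3]; // alpha of pixel (0, 0)
BOOL aboveThreshold = (alpha > 128); // 128 is an arbitrary example threshold
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
free(pixels);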

How to save image with transparent stroke

I followed @Rob's answer and it's drawn as I want... but when I saved this image, the stroke was not transparent anymore.
Objective-C How to avoid drawing on same color image having stroke color of UIBezierPath
To save the image, I wrote this code:
- (void)saveImage {
    UIGraphicsBeginImageContextWithOptions(self.upperImageView.bounds.size, NO, 0);
    if ([self.upperImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
        [self.upperImageView drawViewHierarchyInRect:self.upperImageView.bounds afterScreenUpdates:YES]; // iOS 7+
    else
        [self.upperImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre-iOS 7
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndPDFContext();
    UIImageWriteToSavedPhotosAlbum(self.upperImageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
You are either setting the alpha of the image view with the paths to 1.0 somewhere, or you are using something that doesn't permit transparencies (e.g. UIGraphicsBeginImageContextWithOptions with YES for the opaque parameter, or staging the image in a format that doesn't preserve alpha, such as JPEG).
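For example, if the image is written to disk anywhere along the way, PNG preserves the alpha channel while JPEG flattens it (a sketch; somePath is a placeholder):
NSData *data = UIImagePNGRepresentation(image); // keeps alpha; UIImageJPEGRepresentation would discard it
[data writeToFile:somePath atomically:YES];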
A few additional thoughts:
I notice that you're only drawing upperImageView. If you want the composite image, you need to draw both. Or are you only trying to save one of the image views?
(For those unfamiliar with that other question, the entire point was how to draw multiple paths over an image and have the full set of those paths share the same reduced alpha, rather than having the intersection of paths lead to some additive behavior. This was accomplished by having two separate image views: one for the underlying image, and one for the drawn paths. The key to that answer was to draw the paths at 100% alpha, but add them as a layer to a view that, itself, has a reduced alpha.)
What is the alpha of the image view upon which you are drawing?
NB: In the answer to that other question, when saving a temporary copy of the combined paths, we had to temporarily set the alpha to 1.0. But when saving the final result here, we want to keep the "path" image view's alpha at its reduced value.
Unrelated, but you faithfully transcribed a typo (since fixed) in my original example where I accidentally called UIGraphicsEndPDFContext rather than UIGraphicsEndImageContext. You should use the latter.
So, considering two image views, one with the photograph and one with the drawn path, called imageImageView (with alpha of 1.0) and pathsImageView (with alpha of 0.5), respectively, I can save the snapshot like so:
- (void)saveSnapshot {
    CGRect rect = self.pathsImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    if ([self.pathsImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        [self.imageImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES]; // iOS 7+
        [self.pathsImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES];
    } else {
        [self.imageImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre-iOS 7
        [self.pathsImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
When I did that, the resulting composite image was in my photo album.

iOS: Adding an outline/stroke to an image context with a transparent background

The images that go through here are PNGs of different shapes with a transparent background. In addition to merging them (which works fine), I'd like to give the new image an outline a couple of pixels thick, but I can't seem to manage that.
(So just to clarify, I'm after an outline around the actual shapes in the context, not a rectangle around the entire image.)
+ (UIImage *)mergeBackgroundImage:(UIImage *)backgroundImage withOverlayingImage:(UIImage *)overlayImage {
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [overlayImage drawInRect:CGRectMake(backgroundImage.size.width - overlayImage.size.width,
                                        backgroundImage.size.height - overlayImage.size.height,
                                        overlayImage.size.width,
                                        overlayImage.size.height)];
    // Add stroke here.
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Thanks for your time!
Markus
If you make a CALayer whose contents is set to a CGImage of your image, you can then use it as a masking layer for the layer that requires an outline. Once you've done that, you can render your layer into another context and then get another UIImage from that.
Edit: Something like what's described in this answer.
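For what it's worth, here is a minimal sketch of the same idea using Core Graphics blend modes instead of a CALayer mask (all names are placeholders, not the poster's code): the shape's alpha channel is recolored into a solid silhouette, which is then stamped at small offsets behind the original image to produce the outline.
static UIImage *OutlinedImage(UIImage *shape, UIColor *outlineColor) {
    CGRect bounds = (CGRect){CGPointZero, shape.size};

    // 1. Build a solid-color silhouette: draw the shape, then fill with
    //    kCGBlendModeSourceIn so only its opaque pixels take the color.
    UIGraphicsBeginImageContextWithOptions(shape.size, NO, shape.scale);
    [shape drawInRect:bounds];
    [outlineColor setFill];
    UIRectFillUsingBlendMode(bounds, kCGBlendModeSourceIn);
    UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Stamp the silhouette at 2 pt offsets, then draw the shape on top.
    //    (Edges clip slightly; a real version would grow the canvas first.)
    UIGraphicsBeginImageContextWithOptions(shape.size, NO, shape.scale);
    CGFloat offsets[8][2] = {{-2,0},{2,0},{0,-2},{0,2},{-2,-2},{2,2},{-2,2},{2,-2}};
    for (int i = 0; i < 8; i++) {
        [silhouette drawAtPoint:CGPointMake(offsets[i][0], offsets[i][1])];
    }
    [shape drawInRect:bounds];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}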

Optimizing Image Drawing for iPad 3

I am trying to find the most optimized way to draw images in iOS on the iPad 3. I am generating a reflection for a third party version of coverflow that I am implementing in my app. The reflection is created using NSOperationQueue and then added via UIImageView in the main thread. Because the coverflow part is already using resources for the animations as you scroll through the images, with each new image that is added, there is a bit of a "pop" in the scrolling and it makes the app feel kind of laggy/glitchy. Testing on iPad 1 and 2 the animation is perfectly smooth and looks great.
How can I further optimize the drawing to avoid this? Any ideas are appreciated. I have been looking into "tiling" the reflection so that it presents a little of the reflection at a time, but I'm not sure what the best approach is.
Here is the drawing code:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Carousel-Ref-Mask.jpg" ofType:nil]];
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:self.name ofType:nil]];

// Flip the image vertically in an image context
UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Apply the gradient mask
CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                          CGImageGetHeight(maskRef),
                                          CGImageGetBitsPerComponent(maskRef),
                                          CGImageGetBitsPerPixel(maskRef),
                                          CGImageGetBytesPerRow(maskRef),
                                          CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp];
CGImageRelease(masked);

if (maskedImage) {
    [mainView performSelectorOnMainThread:@selector(imageDidLoad:)
                               withObject:[NSArray arrayWithObjects:maskedImage, endView, nil]
                            waitUntilDone:YES];
} else {
    NSLog(@"Unable to find sample image: %@", self.name);
}
The mask is just a gradient PNG that I am using to mask the image. Also, if I just draw this offscreen but don't add it, there is hardly any lag. The lag comes from actually adding it on the main thread.
So after spending a great deal of time researching this issue and trying out different approaches (and a good while with the Time Profiler in Instruments), I found that the lag came from the image being decoded on the main thread when it was first displayed. By decoding it in the background with all Core Graphics calls, I was able to cut the time in half. That still wasn't good enough.
I further found that the reflection created by my code was taking a long time to display because of its transparent (alpha) pixels. I therefore drew it into a context filled with solid black, and made the view itself transparent instead of the image. This reduced the time spent on the main thread by 83%. Mission accomplished.
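A hedged sketch of that decode-and-flatten step (GCD and ARC assumed; image, reflectionView, and the 0.4 alpha are placeholders, not the poster's actual names):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Drawing into an opaque image context forces the decode off the
    // main thread and flattens alpha pixels onto solid black.
    UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
    [[UIColor blackColor] setFill];
    UIRectFill((CGRect){CGPointZero, image.size});
    [image drawAtPoint:CGPointZero];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    dispatch_async(dispatch_get_main_queue(), ^{
        reflectionView.image = decoded;
        reflectionView.alpha = 0.4; // the view, not the image, is transparent
    });
});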

iPhone SDK: save drawings without the whole context

I want to capture the screen of my view from my iPhone app. I have a white background view, and on it I draw lines on the view's layer using this method:
- (void)draw {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    if (self.fillColor) {
        [self.fillColor setFill];
        [self.path fill];
    }
    if (self.strokeColor) {
        [self.strokeColor setStroke];
        [self.path stroke];
    }
    CGContextRestoreGState(context);
}

- (void)drawRect:(CGRect)rect {
    // Drawing code.
    for (id<Drawable> d in drawables) {
        [d draw];
    }
    [delegate drawTemporary];
}
I have used delegate methods to draw the lines on the layer.
This is the project from which I got help for this:
https://github.com/search?q=dudel&type=Everything&repo=&langOverride=&start_value=1
Now, when I use the following context methods to save only the drawing, I successfully save it without the white background:
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I use the following Bézier path method, I cannot save the drawing without its white background; it saves the whole screen, i.e. the drawing and its background.
UIGraphicsBeginImageContext(self.view.bounds.size);
[dudelView.layer renderInContext:UIGraphicsGetCurrentContext()];
//UIImage *finishedPic = UIGraphicsGetImageFromCurrentImageContext();
So can anybody help me with how I can save only the drawing in this app?
(You've tagged this as MapKit related, but make no mention of MapKit.)
Why don't you just split your drawing sequence into three chunks?
1. Draw your paths into an image context and get a UIImage, as you described.
2. Draw a background color.
3. Draw the UIImage.
Then you can use the UIImage for your "screenshot" as well, as sketched below.
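A minimal sketch of that split, reusing the drawables array from the question (lastDrawing is a hypothetical property added to cache the result):
- (UIImage *)pathsImage {
    // Chunk 1: just the paths, on a transparent background
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    for (id<Drawable> d in drawables) {
        [d draw];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)drawRect:(CGRect)rect {
    self.lastDrawing = [self pathsImage];  // cache for saving later
    [[UIColor whiteColor] setFill];        // chunk 2: the background color
    UIRectFill(self.bounds);
    [self.lastDrawing drawAtPoint:CGPointZero]; // chunk 3: the paths image
}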
I should also note that if the only thing you don't want in your captured UIImage is the background color, you are better off creating a UIImageView, setting its background color (-setBackgroundColor:), and setting its image to be your UIImage.
UIImageView internally has a number of optimizations that allow it to display graphics with much higher performance than you can get with a custom -drawRect: implementation.
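For instance (a sketch; names are placeholders):
UIImageView *drawingView = [[UIImageView alloc] initWithImage:pathsImage];
drawingView.backgroundColor = [UIColor whiteColor]; // background lives on the view, not in the image
[self.view addSubview:drawingView];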
Why don't you just save the drawables array? Those are all the drawings without the underlying image :)
