iOS: Adding an outline/stroke to an image context with a transparent background - ios

The images that go through here are PNGs of different shapes with a transparent background. In addition to merging them (which works fine), I'd like to give the new image an outline a couple of pixels thick, but I can't seem to manage that.
(So just to clarify, I'm after an outline around the actual shapes in the context, not a rectangle around the entire image.)
+ (UIImage *)mergeBackgroundImage:(UIImage *)backgroundImage withOverlayingImage:(UIImage *)overlayImage {
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [overlayImage drawInRect:CGRectMake(backgroundImage.size.width - overlayImage.size.width,
                                        backgroundImage.size.height - overlayImage.size.height,
                                        overlayImage.size.width,
                                        overlayImage.size.height)];
    // Add stroke.
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Thanks for your time!
Markus

If you make a CALayer whose backing is set to a CGImage of your image, you can then use it as a masking layer for the layer that requires an outline. Once you've done that, you can render your layer into another context, and then get another UIImage from that.
// edit: Something like what's described in this answer.
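For reference, here is a minimal sketch of a different route that fills in the "// Add stroke." step directly, without CALayer masking: build a solid-colour silhouette of the overlay (keeping only its alpha), stamp it around a ring of offsets the thickness of the outline, then draw the original image on top. The method name and parameters are illustrative, not part of the original code.

// Sketch: outline the non-transparent shapes of `image` with `strokeColor`,
// `thickness` points thick. Pixels that fall outside the image bounds are clipped.
+ (UIImage *)image:(UIImage *)image outlinedWithColor:(UIColor *)strokeColor thickness:(CGFloat)thickness {
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);

    // 1. Build a solid-colour silhouette of the shape (keeps only the alpha channel).
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [strokeColor setFill];
    UIRectFill(rect);
    [image drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
    UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Stamp the silhouette at offsets around a circle, then draw the image on top.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    for (CGFloat angle = 0; angle < 2 * M_PI; angle += M_PI / 8) {
        CGRect offsetRect = CGRectOffset(rect, cos(angle) * thickness, sin(angle) * thickness);
        [silhouette drawInRect:offsetRect];
    }
    [image drawInRect:rect];
    UIImage *outlined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outlined;
}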

Related

How to save image with transparent stroke

I followed Rob's answer and it's drawn the way I want... but when I saved this image, the stroke was not transparent anymore.
Objective-C How to avoid drawing on same color image having stroke color of UIBezierPath
To save the image I have written this code:
- (void)saveImage {
    UIGraphicsBeginImageContextWithOptions(self.upperImageView.bounds.size, NO, 0);
    if ([self.upperImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
        [self.upperImageView drawViewHierarchyInRect:self.upperImageView.bounds afterScreenUpdates:YES]; // iOS7+
    else
        [self.upperImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndPDFContext();
    UIImageWriteToSavedPhotosAlbum(self.upperImageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
You are either setting the alpha of the image view with the paths to 1.0 somewhere, or you are using something that doesn't permit transparencies (e.g. UIGraphicsBeginImageContextWithOptions with YES for the opaque parameter, or staging the image in a format that doesn't preserve alpha, such as JPEG).
A few additional thoughts:
I notice that you're only drawing upperImageView. If you want the composite image, you need to draw both. Or are you only trying to save one of the image views?
(For those unfamiliar with that other question, the entire point was how to draw multiple paths over an image, and have the full set of those paths with the same reduced alpha, rather than having the intersection of paths lead to some additive behavior. This was accomplished by having two separate image views, one for the underlying image, and one for the drawn paths. The key to the answer to that question was that one should draw the paths at 100% alpha, but to add that as a layer to a view that, itself, has a reduced alpha.)
What is the alpha of the image view upon which you are drawing?
NB: In the answer to that other question, when saving a temporary copy of the combined paths, we had to temporarily set the alpha to 1.0. But when saving the final result here, we want to keep the "path" image view's alpha at its reduced value.
Unrelated, but you faithfully transcribed a typo (since fixed) in my original example where I accidentally called UIGraphicsEndPDFContext rather than UIGraphicsEndImageContext. You should use the latter.
So, considering two image views, one with the photograph and one with the drawn path, called imageImageView (with alpha of 1.0) and pathsImageView (with alpha of 0.5), respectively, I can save the snapshot like so:
- (void)saveSnapshot {
    CGRect rect = self.pathsImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    if ([self.pathsImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        [self.imageImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES]; // iOS7+
        [self.pathsImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES];
    } else {
        [self.imageImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
        [self.pathsImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
When I did that, the resulting composite image was in my photo album:

UIView to UIImage with layer borders

I have a UIView whose layer has two sublayers, each of which has a 1.5 pixel border around the outside. I am trying to create a UIImage from this view with the following code
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
The code does return a UIImage, but the image is clipped – that is, it doesn't include all of the borders on the sublayers. I've tried tweaking the sizes/bounds, but to no effect. Any suggestions of what else I might try?
Thanks!
What happens if you send the parent layer a drawInContext: message instead of telling the view to draw itself?
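For what it's worth, a minimal sketch of the layer-rendering route might look like this, using renderInContext: (which walks the layer and its sublayers) rather than drawViewHierarchyInRect:; the method name is illustrative.

// Sketch: snapshot the view by rendering its layer tree directly.
- (UIImage *)imageFromLayer {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}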

Core Graphics and Open GL Drawing

I have a drawing app where I'm using the OpenGL paint code to draw the strokes, but I want to transfer them to another image after each stroke is complete and then clear the OpenGL view; for that, I'm using Core Graphics. I'm running into a problem, however, where the OpenGL view is being cleared before the image is transferred via CG (even though I clear it afterwards).
(And I want it the other way around, i.e. the image to be drawn first and then the painting image to be erased, to avoid any kind of flickering.)
(paintingView is the openGL view)
Here is the code:
// Save the previous line drawn to the "main image"
UIImage *paintingViewImage = [_paintingView snapshot];
UIGraphicsBeginImageContext(self.mainImage.frame.size);
[self.mainImage.image drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
// Get the image from the painting view
[paintingViewImage drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
self.mainImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.paintingView erase];
So the paintingView is being erased before the mainImage.image property is set to the current image context's image.
I'm only a beginner with these, so any thoughts are helpful.
Thanks
You're probably better off using FBOs (OpenGL frame buffer objects). You draw into one FBO, then switch drawing to a new FBO while you save off the previous one. You can ping-pong back-and-forth between the 2 FBOs. Here are the docs for using FBOs on iOS.
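A rough sketch of the ping-pong setup, assuming an OpenGL ES 2.0 context is current and texture-backed FBOs are wanted; the dimensions and variable names are placeholders, not taken from the question's project.

#import <OpenGLES/ES2/gl.h>

// Two texture-backed FBOs to alternate between: the finished stroke stays in one
// while the other is cleared and drawn into, so nothing visible is lost mid-copy.
GLsizei width = 1024, height = 768;            // placeholder dimensions
GLuint fbos[2], textures[2];
glGenFramebuffers(2, fbos);
glGenTextures(2, textures);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures[i], 0);
}

// Draw the current stroke into one FBO...
int current = 0;
glBindFramebuffer(GL_FRAMEBUFFER, fbos[current]);
// ... stroke drawing happens here ...

// ...then switch to the other FBO for the next stroke and read the finished one back later.
current = 1 - current;
glBindFramebuffer(GL_FRAMEBUFFER, fbos[current]);
glClear(GL_COLOR_BUFFER_BIT);   // safe: the finished stroke lives in the other FBO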

Change color of detail disclosure button dynamically in code

I know similar question(s) have been asked before, but most answers say create a new detail disclosure button image with the required color (e.g. How to change color of detail disclosure button for table cell).
I want to be able to change it to any color I choose, dynamically, at run-time, so using a pre-configured image is not viable.
From reading around, I think there may be a few ways to do this, but I'm not sure which is the best, or exactly how to do it:
Draw the required color circle in code and overlay it with an image of the shadow around the outside of the circle and the right arrow (with a clear alpha channel for the rest, so the drawn color circle is still visible)
Add the image of the shadow around the outside of the circle in a UIImageView and, using it as a mask, flood fill within this shadow circle, then overlay the arrow.
Add a greyscale image, mask it with itself, and overlay the required color (e.g. http://coffeeshopped.com/2010/09/iphone-how-to-dynamically-color-a-uiimage), then overlay that with the arrow image.
What is the best way, and does anyone have any code showing exactly how to do it?
I use the following helper method to tint a grayscale image. It takes the grayscale image, the tint color, and an optional mask image used to ensure the tint only happens in part of the grayscale image. This method also ensures the new image has the same (if any) resizable insets as the original image.
+ (UIImage *)tintImage:(UIImage *)baseImage withColor:(UIColor *)color mask:(UIImage *)maskImage {
    UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);
    CGContextSaveGState(ctx);
    if (maskImage) {
        CGContextClipToMask(ctx, area, maskImage.CGImage);
    } else {
        CGContextClipToMask(ctx, area, baseImage.CGImage);
    }
    [color set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);
    CGContextSetBlendMode(ctx, kCGBlendModeOverlay);
    CGContextDrawImage(ctx, area, baseImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (!UIEdgeInsetsEqualToEdgeInsets(baseImage.capInsets, UIEdgeInsetsZero)) {
        newImage = [newImage resizableImageWithCapInsets:baseImage.capInsets];
    }
    return newImage;
}
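A hypothetical usage of that helper might look like the following; the asset names, the ImageUtilities class holding the method, and the `cell` being configured are all assumptions for illustration.

// Hypothetical usage: tint the grayscale disclosure artwork and use it as the accessory view.
UIImage *base = [UIImage imageNamed:@"disclosure-gray"];    // assumed asset name
UIImage *mask = [UIImage imageNamed:@"disclosure-mask"];    // assumed asset name
UIImage *tinted = [ImageUtilities tintImage:base withColor:[UIColor redColor] mask:mask];

UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
button.frame = (CGRect){CGPointZero, tinted.size};
[button setImage:tinted forState:UIControlStateNormal];
cell.accessoryView = button;   // `cell` is the table view cell being configured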
Here are the non-retina and retina grayscale detail disclosure images, and the corresponding non-retina and retina mask images (the masks are mostly white).
Consider also using an icon font in a label instead of an image. Something like glyphicons. Then you can set the text colour to whatever you want. You also have good options for scaling. The cost is that you don't have quite such precise control in some cases but it should work well for your case.
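A rough sketch of the icon-font idea follows; the font name and glyph code point are assumptions (any bundled icon font with a chevron glyph would do), and `cell` is again the cell being configured.

// Sketch: icon font in a UILabel as the accessory view; the colour can change at run-time.
UILabel *chevron = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 24, 24)];
chevron.font = [UIFont fontWithName:@"GLYPHICONS" size:18];   // assumed font name
chevron.text = @"\u276F";                                     // assumed chevron glyph
chevron.textColor = [UIColor purpleColor];                    // any colour, set whenever you like
chevron.backgroundColor = [UIColor clearColor];
cell.accessoryView = chevron;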

iPhone SDK: save drawings without whole context

I want to capture the screen of my view from my iPhone app. I have a white background view, and I draw lines on that view's layer using this method.
- (void)draw {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    if (self.fillColor) {
        [self.fillColor setFill];
        [self.path fill];
    }
    if (self.strokeColor) {
        [self.strokeColor setStroke];
        [self.path stroke];
    }
    CGContextRestoreGState(context);
}

- (void)drawRect:(CGRect)rect {
    // Drawing code.
    for (id<Drawable> d in drawables) {
        [d draw];
    }
    [delegate drawTemporary];
}
I have used delegate methods to draw lines on the layer.
This is the project link where I got help for this:
https://github.com/search?q=dudel&type=Everything&repo=&langOverride=&start_value=1
Now, when I use the following context methods to save only the drawing, I successfully save it without that white background:
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I use the following method of the Bezier path view, I cannot save the drawing without its white background; it saves the whole screen, i.e. the drawing and its background.
UIGraphicsBeginImageContext(self.view.bounds.size);
[dudelView.layer renderInContext:UIGraphicsGetCurrentContext()];
//UIImage *finishedPic = UIGraphicsGetImageFromCurrentImageContext();
So can anybody help me with how I can save only the drawing in this app?
(You've tagged this as MapKit related, but make no mention of MapKit.)
Why don't you just split your drawing sequence into three chunks?
Draw your paths into an image context and get a UIImage, as you described.
Draw a background color.
Draw the UIImage.
Then you can use the UIImage for your "screenshot" as well.
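A minimal sketch of that first step, assuming the drawables array and the <Drawable> protocol from the question (each drawable draws into whatever context is current); the method name is illustrative.

// Render only the paths into a transparent image context; the NO flag keeps the
// background clear, so no white backdrop ends up in the saved image.
- (UIImage *)imageOfDrawables {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    for (id<Drawable> d in drawables) {
        [d draw];
    }
    UIImage *drawingOnly = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return drawingOnly;
}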
I should also note that if the only thing you don't want in your captured UIImage is the background color, you are better off creating a UIImageView, setting its background color (-setBackgroundColor:), and setting its image to be your UIImage.
UIImageView internally has a number of optimizations that allow it to display graphics with much higher performance than you can get with a custom -drawRect: implementation.
Why don't you just save the drawables array? Those are all the drawings without the underlying image :)
