drawRect: How do I do an "inverted clip" - ios

I am creating an iOS user interface to allow a user to pick a rectangle within an existing image, dragging the corners of that rectangle to the desired size. I now have four custom UIButtons (30% alpha) and a custom view (also with 30% alpha) that draws the dashed lines between the four corner buttons.
To "improve" the interface, I would like my drawRect code to make the cropped portion of the image appear "normal" while everything outside the cropped region is washed out (filled with white color, which will give me the correct effect since the UIView is set to 30% alpha).
The obvious algorithm would be:
Fill the entire image with [UIColor whiteColor] fill
Draw the four dashed lines with a [UIColor clearColor] fill
When I do this, the clear fill isn't showing up. I believe the clear fill in step #2 has no visible effect because the pixels were already set to white in step #1. Perhaps there's a blend mode that will allow me to see the transparency of the second rectangle? I'm not sure about the various blend modes.
My second attempt, which works, does the following:
Draw the four dashed lines with [UIColor clearColor] fill
Draw four additional rectangles with [UIColor whiteColor] fill, each representing the portions to the left, right, above, and below the cropped region.
As I mentioned, this method works, but it seems to me there should be a simpler way that doesn't require calculating these four additional rectangles every single time.
There is a similar question on SO Create layer mask with custom-shaped hole that uses CALayer and masks, but this seems to be overkill for what I need.
Does anybody have any suggestions on how to improve this?

You can set the blend mode to kCGBlendModeCopy and use clearColor to reset a pixel's alpha to zero. You can presumably also use kCGBlendModeClear but I haven't tested that.
You can also set the clipping path to just contain the pixels you want cleared and call CGContextClearRect(gc, CGRectInfinite).
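For illustration, a minimal sketch of the blend-mode approach inside drawRect:, assuming a hypothetical cropRect property holding the user's selected rectangle:

CGContextRef gc = UIGraphicsGetCurrentContext();

// Step 1: wash out the whole view with white.
CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
CGContextFillRect(gc, self.bounds);

// Step 2: punch the hole. With kCGBlendModeCopy the clear color is copied
// straight into the bitmap, replacing the white pixels rather than blending
// over them, so the alpha drops back to zero inside cropRect.
// (The view itself should have opaque = NO so the cleared pixels actually
// reveal the image underneath.)
CGContextSetBlendMode(gc, kCGBlendModeCopy);
CGContextSetFillColorWithColor(gc, [UIColor clearColor].CGColor);
CGContextFillRect(gc, cropRect);
CGContextSetBlendMode(gc, kCGBlendModeNormal); // restore for any later drawing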
If you want to use a clipping mask with a hole in it, you can do so without using a CALayer, and you can build it a little more simply than in the answer you linked, by using the even-odd rule and CGRectInfinite:
CGContextSaveGState(gc); {
CGContextBeginPath(gc);
CGContextAddRect(gc, myRect); // or whatever simple path you want here
CGContextAddRect(gc, CGRectInfinite);
CGContextEOClip(gc);
// drawing code here is clipped to the exterior of myRect
} CGContextRestoreGState(gc);
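With that clip installed, a single white fill of self.bounds (or CGRectInfinite) washes out everything outside myRect in one call, with no per-side rectangle math.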

Related

Make view stand out from the surroundings. Border layer with inverse color?

I have a view (a Button in this case) and the content behind/below it can move around (a ScrollView). The button is just plain white with black text and not bordered with anything. When the content behind the View is white it does not really stand out as I would like it to do. (I can fix this with a black border sure... but...)
I have this idea of having a border around the button that is the opposite color of the view (pixel) behind it. So the border would always contrast with the background and constantly change with the content behind it.
I have googled a bit and looked into visual effects layers and some more complicated (over the top of my head) graphics stuff I don't remember the terminology for.
If you have an idea of how to approach this please tell me. I just really want to see what it would look like.
and have a wonderful day!
I don't think you can create a border whose color is based on the pixels of the layer behind the current layer using CALayer and its border properties.
What I would suggest doing is to add a second CALayer to your view's CALayer, inset by -1 in both dimensions (made 1 point bigger on every side.) Let's call that the surroundLayer. Then make the surroundLayer's borderColor white and the view's layer.borderColor black (or vice versa.) You can make the surroundLayer's borderColor 50% opaque so it just lightens/darkens the pixels under it without completely obscuring them, and that is enough to increase the contrast and make your view's border show up regardless of the contents under it.
I've used this technique before and it works well.
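A minimal sketch of that setup, assuming the view in question is a button (the names here are placeholders, not code from a real project):

// Outer, semi-opaque white ring just outside the button's bounds.
CALayer *surroundLayer = [CALayer layer];
surroundLayer.frame = CGRectInset(button.bounds, -1.0, -1.0); // 1 pt larger on every side
surroundLayer.borderWidth = 1.0;
surroundLayer.borderColor = [UIColor colorWithWhite:1.0 alpha:0.5].CGColor;
[button.layer addSublayer:surroundLayer];

// Dark inner border on the view itself (or swap the two colors).
button.layer.borderWidth = 1.0;
button.layer.borderColor = [UIColor blackColor].CGColor;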
Edit:
Check out the project https://github.com/DuncanMC/MaskableImageView.git. The project demonstrates using an image as a mask layer to hide/reveal the contents of a view.
The class MaskableView in that project draws a circular "cursor" that shows where it is revealing/masking the contents of its subview (an image and a label, in the example app.) The cursor is yellow in the middle, with a partly transparent black outer circle around it. This gives good contrast regardless of the colors in the part of the image it is being drawn over.
The MaskableView class has properties that let the caller set the colors used for the "cursor" circle.
Below I posted a short animation of what the eraser tool with a yellow inner circle and an outer, 1/2 transparent black circle looks like.
Without the outer dark circle the yellow inner circle tends to get lost in brighter parts of an image. With the combination of a bright colored inner circle and a partly transparent, dark outer circle, it's easy to see on ANY background:

Smoothly moving a hole in UIImage?

I have a UIImageView displaying an image. This view's layer is masked with a CAShapeLayer in order to create a circular "hole" in the image. To create the hole I use a UIBezierPath with .usesEvenOddFillRule = true.
It works fine when static. But I need that hole to move with user finger. To do that I create new UIBezierPath with even-odd rule each time user moves their finger. On smaller phones with smaller images it looks OK but on iPhone 6 Plus it is choppy.
Any ideas on how to make it smooth are very welcome. I cannot just move the frame of the masking CAShapeLayer - it would move the hole but also hide some edges of the image. So the only way is to change its .path each time the user moves their finger, and that is slow.
EDIT: matt's answer would work in some scenarios but not in my case: I am not displaying the whole image, only a part of it defined by a UIBezierPath. This part is most often oval (but can be rectangular or a rounded rectangle) and it has a "hole" cut in it. While the hole is moving with the user's finger, the displayed part/shape of the image does not change - it is static.
The inefficient solution that was in place so far was:
Create UIBezierPath with boundary of displayed part of the image
Set the 'even-odd fill rule' on it
Add UIBezierPath of the hole to it
Set it as path of CAShapeLayer with some opaque fill color
Use that CAShapeLayer as mask of the UIImageView
This procedure was repeated each time a user moved their finger. I cannot simply move the whole mask layer as that would also change the part of the image being displayed. I want it to stay static and move only the hole in it.
it would move the hole but also hide some edges of the image
Well, I don't agree. Moving the mask is exactly the way to do this. I don't see why you think there's a problem with that. Perhaps the issue is merely that you have not made the mask layer big enough. It does not have to be the same size as the layer it is masking. In this case, it needs to be about 9 times the size of the masked layer (3 horizontal and 3 vertical), so that it will continue to cover the masked layer no matter how far in any direction the user slides it.
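A minimal sketch of that suggestion, assuming hypothetical imageView and maskLayer properties and a pan gesture recognizer; the hole is built into the oversized mask once, and only the mask's position changes as the finger moves:

- (void)setUpMask {
    CGSize size = self.imageView.bounds.size;
    self.maskLayer = [CAShapeLayer layer];
    // Make the mask 3x the masked layer in each dimension so it keeps
    // covering the image however far the hole is dragged.
    self.maskLayer.frame = CGRectMake(-size.width, -size.height,
                                      size.width * 3.0, size.height * 3.0);
    self.maskLayer.fillRule = kCAFillRuleEvenOdd;

    // Big rectangle plus a circular hole, built once.
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:self.maskLayer.bounds];
    [path appendPath:[UIBezierPath bezierPathWithOvalInRect:
        CGRectMake(size.width * 1.5 - 50.0, size.height * 1.5 - 50.0, 100.0, 100.0)]];
    self.maskLayer.path = path.CGPath;
    self.maskLayer.fillColor = [UIColor blackColor].CGColor; // any opaque color

    self.imageView.layer.mask = self.maskLayer;
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint t = [pan translationInView:self.imageView];
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // no implicit animation while dragging
    CGPoint p = self.maskLayer.position;
    self.maskLayer.position = CGPointMake(p.x + t.x, p.y + t.y);
    [CATransaction commit];
    [pan setTranslation:CGPointZero inView:self.imageView];
}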

How do you re-stroke a path to another color with exact same result?

I'm developing an app which involves drawing lines. Every time the user moves their finger, that point is added to a path and also added to the CGContext, as in the example below.
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddCurveToPoint(cacheContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
CGPathMoveToPoint(path, NULL, point1.x, point1.y);
CGPathAddCurveToPoint(path, NULL, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
Now when I want to add it and stroke it in black I use the following code
CGContextSetStrokeColorWithColor(cacheContext, [UIColor blackColor].CGColor);
CGContextAddPath(cacheContext,path);
CGContextStrokePath(cacheContext);
However, the line that gets stroked this time will be a bit smaller than the one that was drawn before. This will result in a slight border around the stroked path. So my question is: How can I get the stroked path to be identical to the path that was drawn into the CGContext?
The issue is due to anti-aliasing. The path is a geometric ideal. The bitmap generated by stroking the path with a given width, color, etc. is imperfect. The ideal shape covers some pixels completely, but only covers others partially.
The result without anti-aliasing (and assuming an opaque color) is to fully paint pixels which mostly lie within the ideal shape and don't touch the pixels which mostly lie outside of it. That leaves visible jaggies on anything other than vertical or horizontal lines. If you later draw the same path with the same stroke parameters again, exactly the same pixels will be affected and, since they are being fully painted, you can completely replace the old drawing with the new.
With anti-aliasing, any pixel which is only partially within the ideal shape is not completely painted with the new color. Rather, the stroke color is applied in proportion to the percentage of the pixel which is within the ideal shape. The color that was already in that pixel is retained in inverse proportion. For example, a pixel which is 43% within the ideal shape will get a color which is 43% of the stroke color plus 57% of the prior color.
That means that stroking the path a second time with a different color will not completely replace the color from a previous stroke. If you fill a bitmap with white and then stroke a path in red, some of the pixels along the edge will mix a little red with a little of the white to give light red or pink. If you then stroke that path in blue, the pixels along the edge will mix a little blue with a little of the color that was there, which is a light red or pink. That will give a magenta-ish color.
You can disable anti-aliasing using CGContextSetShouldAntialias(), but then you risk getting jaggies. You would have to do this around both strokings of the path.
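For example, a sketch of that workaround, reusing cacheContext and path from the question's code:

// Anti-aliasing must be off for BOTH strokes (the original drawing and the
// re-stroke), so exactly the same pixels are painted each time.
CGContextSetShouldAntialias(cacheContext, false);
CGContextSetStrokeColorWithColor(cacheContext, [UIColor blackColor].CGColor);
CGContextAddPath(cacheContext, path);
CGContextStrokePath(cacheContext);
CGContextSetShouldAntialias(cacheContext, true); // restore for other drawing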
Alternatively, you can clear the context to some background color before redrawing the path. But for that, you need to be able to completely redraw everything you want to appear.

iOS 7.1 Slide To Unlock Text Animation

I'm not sure if this has been asked before, but I'm having a hard time finding it. Perhaps I'm not using the right search terms, so if an answer already exists, if someone could point me in the right direction, it'd be most appreciated!
I just noticed that the glimmer animation on the "slide to unlock" text of the lockscreen has changed with the iOS 7.1 update. The spotlight now has an ovular / diamond shape that cascades across the letters without appearing on the view behind it.
In the past, I've replicated this type of feature by changing the color of individual letters sequentially, but for this, the animation goes through the middle of the letters without affecting the background.
How can I replicate this?
You can animate the label text and use a custom slider for it; I hope it helps you:
CALayer *maskLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
maskLayer.backgroundColor = [[UIColor colorWithRed:0.0f green:0.0f blue:0.0f alpha:0.15f] CGColor];
maskLayer.contents = (id)[[UIImage imageNamed:@"Mask.png"] CGImage];
// Center the mask image on twice the width of the text layer, so it starts to the left
// of the text layer and moves to its right when we translate it by width.
maskLayer.contentsGravity = kCAGravityCenter;
maskLayer.frame = CGRectMake(myLabel.frame.size.width * -1, 0.0f, myLabel.frame.size.width * 2, myLabel.frame.size.height);
// Animate the mask layer's horizontal position
CABasicAnimation *maskAnim = [CABasicAnimation animationWithKeyPath:@"position.x"];
maskAnim.byValue = [NSNumber numberWithFloat:myLabel.frame.size.width];
maskAnim.repeatCount = 1e100f;
maskAnim.duration = 1.5f;
[maskLayer addAnimation:maskAnim forKey:@"slideAnim"];
myLabel.layer.mask = maskLayer;
You should be able to use the mask property of CALayer to create a cutout of the contents of another layer.
Set the mask to contain your text (maybe a CATextLayer can work here). This is what Shimmer says it uses.
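As a rough sketch of that idea (the host view, the gradient colors, and the animation details are assumptions for illustration, not Shimmer's actual implementation):

CGRect frame = host.bounds;

// Letter-shaped cutout.
CATextLayer *textMask = [CATextLayer layer];
textMask.string = @"slide to unlock";
textMask.fontSize = 24.0;
textMask.alignmentMode = kCAAlignmentCenter;
textMask.contentsScale = [UIScreen mainScreen].scale;
textMask.frame = frame;

// The container shows nothing except where the text mask lets content through.
CALayer *glimmerLayer = [CALayer layer];
glimmerLayer.frame = frame;
glimmerLayer.mask = textMask;

// A wide gradient that slides horizontally behind the letter-shaped cutout.
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = CGRectMake(-frame.size.width, 0.0, frame.size.width * 2.0, frame.size.height);
gradient.startPoint = CGPointMake(0.0, 0.5);
gradient.endPoint   = CGPointMake(1.0, 0.5);
gradient.colors = @[(id)[UIColor grayColor].CGColor,
                    (id)[UIColor whiteColor].CGColor,
                    (id)[UIColor grayColor].CGColor];
[glimmerLayer addSublayer:gradient];
[host.layer addSublayer:glimmerLayer];

// Slide the highlight across the letters without touching the background.
CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.byValue = @(frame.size.width);
slide.duration = 1.5;
slide.repeatCount = HUGE_VALF;
[gradient addAnimation:slide forKey:@"slide"];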
Make the foreground color of your label a UIColor initialized with +colorWithPatternImage or -initWithPatternImage, using an animated image, and set the background color of the label to transparent. I've not tried this, but I don't see why it wouldn't work.
The best way to do this is with a multi-layer object.
Top: UILabel with opaque background and clear text
Clear text is rendered in the drawRect: method through a complicated masking process
Middle: Worker View that is performing a repeating animation moving an image behind the top label
Bottom: a UIView that you add the middle and top subview to in that order. Can be whatever color you want the text to be
An example can be seen here
https://github.com/jhurray/AnimatedLabelExample
The most effective way I've found to recreate the glimmering text effect is to use the Shimmer Cocoapod created by Facebook. Below is the example image from the Shimmer GitHub repo, which is located at the following URL: https://github.com/facebook/Shimmer
Shimmer example
There are full instructions to install and use Shimmer on the repo, but the gist is that after installing the Cocoapod you'll add a special subview or layer into which will go the contents you wish to have glimmer/shimmer, then set the effect to start.
Try to have a semi-transparent foreground with transparent cutouts for the letters. The "glimmer" can be moved across behind the cutouts.
Make a layer on top that has cutout layers with an animated PNG or something as the background.
Under this layer, have another layer with exactly the reverse transparency (letters are opaque and space between letters is transparent).
This way, the user sees through the letters to the animation, and between the letters to whatever the letters are over.
Just make sure you have code to keep the layers in the right order.
I think that it's a semi-transparent view, but it's a special view in which drawRect: is overridden to color each pixel of the letters with the same color (but stronger, to make it visible) as the pixel in the view beneath it.
Imagine this like the magnifying view: it displays a magnified version of the view beneath it.

CoreGraphics - Blending only *part* of a view

I recently came across this brilliant article about improving scroll performance with UITableViewCells: http://engineering.twitter.com/2012/02/simple-strategies-for-smooth-animation.html -- While many great tips can be found in this article, there is one in particular that has me intrigued:
Tweets in Twitter for iPhone 4.0 have a drop shadow on top of a subtle textured background. This presented a challenge, as blending is expensive. We solved this by reducing the area Core Animation has to consider non-opaque, by splitting the shadow areas from content area of the cell.
Using the iOS Simulator, clicking Debug - Color Blended Layers would reveal something like this:
The areas marked in red are blended, and the green area is opaque. Great. What the article fails to mention is: How do I implement this? It is my understanding that a UIView is either opaque or it's not. It seems to me that the only way to accomplish this would be with subviews, but the article explicitly states that as being a naive implementation:
Instead, our Tweet cells contain a single view with no subviews; a single drawRect: draws everything.
So how do I section off what is opaque, and what is not in my single drawRect: method?
In the example you show, I don't believe they're showing a background through the view. I think they're simulating a background in core graphics. In other words, in each cell they draw a light gray color for the background. They then draw the shadow (using transparency), and finally they draw the rest of the opaque content on the top. I could be wrong, but I don't believe you can make portions of the view transparent. If so, I'd be very, very interested in it because I use core graphics all the time, but I avoid rounded corners because blending the entire view for it just doesn't seem to be worth it.
Update
After doing some more research and looking through Apple's docs, I don't believe it's possible for only part of a view to be opaque. Also, after reading through Twitter's blog post, I don't think they are saying that they did so. Notice that when they say:
Instead, our Tweet cells contain a single view with no subviews; a single drawRect: draws everything.
They were specifically talking about UILabel and UIImageView. In other words, instead of using those views they're drawing the image directly using Core Graphics. As for the UILabels, I personally use Core Text since it has more font support but they may also be using something simpler like NSString's drawAtPoint:withFont: method. But the main point they're trying to get across is that the content of the cell is all one CG drawing.
Then they move to a new section: Avoid Blending. Here they make a point of saying that they avoid blending by:
splitting the shadow areas from content area of the cell.
The only way to do this is to use different views. There are two approaches they could be using, but first note that the cell dividers are themselves overlays (provided by the tableView). The first way is to use multiple views inside the cell. The second way is to underlay/overlay the shadows/blended-views behind/over the cells by inserting the appropriate views into the UIScrollView. Given their previous statement about having only one view/drawRect for each cell, this is probably what they're doing. Each method will have its challenges, but personally I think it would be easier to split the cell into 3 views (shadow, content, shadow). It would make it a lot easier to handle first/last cell situations.
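As a rough sketch of that three-view split (the strip height and view names are assumptions, not the actual Twitter implementation), the cell's layoutSubviews might look like:

- (void)layoutSubviews {
    [super layoutSubviews];
    CGRect bounds = self.contentView.bounds;
    CGFloat shadowHeight = 8.0; // assumed height of the shadow strips

    // Thin, non-opaque strips where the drop shadow is blended...
    self.topShadowView.frame = CGRectMake(0.0, 0.0, bounds.size.width, shadowHeight);
    self.bottomShadowView.frame = CGRectMake(0.0, CGRectGetMaxY(bounds) - shadowHeight,
                                             bounds.size.width, shadowHeight);
    self.topShadowView.opaque = NO;
    self.bottomShadowView.opaque = NO;

    // ...and one large, fully opaque view that does all of its drawing in a
    // single drawRect:, so Core Animation never has to blend it.
    self.tweetContentView.frame = CGRectInset(bounds, 0.0, shadowHeight);
    self.tweetContentView.opaque = YES;
}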
I'd have to guess something along these lines
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_shadows/dq_shadows.html
CGContextRef context = UIGraphicsGetCurrentContext();
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:self.bounds cornerRadius:10.0f];

// Blended (shadow) section
CGContextSaveGState(context);
CGRect leftRect = CGRectZero; // placeholder: the portion of the cell that needs blending
CGContextClipToRect(context, leftRect);
CGContextSetBlendMode(context, kCGBlendModeNormal);
// Draw the shadow: call CGContextSetShadowWithColor with the appropriate values,
// then perform all the drawing to which you want the shadow applied.
CGContextSetShadowWithColor(context, CGSizeMake(1.0f, 1.0f), 10.0f, [UIColor blackColor].CGColor);
CGContextAddPath(context, path.CGPath);
CGContextDrawPath(context, kCGPathStroke);
CGContextRestoreGState(context);

// Opaque (content) section
CGContextSaveGState(context);
CGRect middleSection = CGRectZero; // placeholder: the portion of the cell that stays opaque
CGContextClipToRect(context, middleSection);
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextSetFillColorWithColor(context, self.backgroundColor.CGColor);
CGContextFillRect(context, self.bounds);
// draw the opaque content here
CGContextRestoreGState(context);
My opinion is: don't let Core Animation draw shadows using the various layer properties. Just draw a prerendered image, which is in fact a shadow, on both sides. To account for the variable height of a cell, a stretched draw may do the trick.
EDIT:
If the background is plain, a prerendered shadow can be applied to both sides without it noticeably affecting the visual appeal.
In case that is not applicable, the table view has to be shrunk to the size without the shadow. Then the shadow can be blended without doing it for every cell, just "on top". It really doesn't scroll. This will only work if the shadow is without any "texture", otherwise one will notice it's just applied on top.
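As a rough sketch of the per-cell stretch-draw idea (the asset name and cap insets are hypothetical), the cell could draw one prerendered, vertically stretchable shadow image on each side:

- (void)drawRect:(CGRect)rect {
    // A small prerendered shadow strip, stretched to the cell's current height.
    UIImage *shadow = [[UIImage imageNamed:@"CellShadow.png"]
                       resizableImageWithCapInsets:UIEdgeInsetsMake(10.0, 0.0, 10.0, 0.0)];
    CGFloat edgeWidth = shadow.size.width;
    // One image drawn on each side; the blending happens once here in
    // drawRect:, not every frame in Core Animation.
    [shadow drawInRect:CGRectMake(0.0, 0.0, edgeWidth, self.bounds.size.height)];
    [shadow drawInRect:CGRectMake(self.bounds.size.width - edgeWidth, 0.0,
                                  edgeWidth, self.bounds.size.height)];
}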
