I'm building a photo filter app (like Instagram, Camera+ and many more). My main screen is a UIImageView that presents the image to the user, plus a bottom bar with some filters and other options.
One of the options is blur: the user can use his fingers to pinch or move a circle that represents the non-blurred part (its radius and position); all the pixels outside of this circle will be blurred.
When the user touches the screen I want to add a semi-transparent layer above my image that represents the blurred part, with a fully transparent circle that represents the non-blurred part.
So my question is: how do I add this layer? I suppose I need some view above my image view, and some mask to get my circle shape? I would really appreciate a good tip here.
One More Thing
I need the circle to not be cut off sharply, but to have a kind of gradient fade, something like Instagram:
And it's very important to get this effect with good performance. I did manage to get this effect with drawRect:, but the performance was very bad on old devices (iPhone 4, iPod touch).
Sharp Mask
Whenever you want to draw a path that consists of a shape (or series of shapes) as a hole in another shape, the key is almost always using an 'even odd winding rule'.
From the Winding Rules section of the Cocoa Drawing Guide:
A winding rule is simply an algorithm that tracks information about each contiguous region that makes up the path's overall fill area. A ray is drawn from a point inside a given region to any point outside the path bounds. The total number of crossed path lines (including implicit lines) and the direction of each path line are then interpreted using rules which determine if the region should be filled.
I appreciate that this description isn't really helpful without the rules as context and diagrams to make it easier to understand, so I urge you to read the links provided above. For the sake of creating our circle mask layer, the following diagrams depict what the even-odd winding rule allows us to accomplish:
Non Zero Winding Rule
Even Odd Winding Rule
Now it's simply a matter of creating the translucent mask using a CAShapeLayer that can be repositioned, expanded, and contracted through user interaction.
Code
#import <QuartzCore/QuartzCore.h>

@interface ViewController ()

@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) CAShapeLayer *blurFilterMask;
@property (assign) CGPoint blurFilterOrigin;
@property (assign) CGFloat blurFilterDiameter;

@end

@implementation ViewController

// Begin the blur masking operation.
- (void)beginBlurMasking
{
    self.blurFilterOrigin = self.imageView.center;
    self.blurFilterDiameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));

    CAShapeLayer *blurFilterMask = [CAShapeLayer layer];
    // Disable implicit animations for the blur filter mask's path property.
    blurFilterMask.actions = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNull null], @"path", nil];
    blurFilterMask.fillColor = [UIColor blackColor].CGColor;
    blurFilterMask.fillRule = kCAFillRuleEvenOdd;
    blurFilterMask.frame = self.imageView.bounds;
    blurFilterMask.opacity = 0.5f;
    self.blurFilterMask = blurFilterMask;
    [self refreshBlurMask];
    [self.imageView.layer addSublayer:blurFilterMask];

    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [self.imageView addGestureRecognizer:tapGesture];

    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
    self.blurFilterOrigin = [sender locationInView:self.imageView];
    [self refreshBlurMask];
}

// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
    // Use some combination of sender.scale and sender.velocity to determine
    // the rate at which you want the circle to expand/contract.
    self.blurFilterDiameter += sender.velocity;
    [self refreshBlurMask];
}

// Update the blur mask within the UI.
- (void)refreshBlurMask
{
    CGFloat blurFilterRadius = self.blurFilterDiameter * 0.5f;

    CGMutablePathRef blurRegionPath = CGPathCreateMutable();
    CGPathAddRect(blurRegionPath, NULL, self.imageView.bounds);
    CGPathAddEllipseInRect(blurRegionPath, NULL, CGRectMake(self.blurFilterOrigin.x - blurFilterRadius, self.blurFilterOrigin.y - blurFilterRadius, self.blurFilterDiameter, self.blurFilterDiameter));

    self.blurFilterMask.path = blurRegionPath;
    CGPathRelease(blurRegionPath);
}
...
(This diagram may help you understand the naming conventions used in the code.)
Gradient Mask
The Gradients section of Apple's Quartz 2D Programming Guide details how to draw radial gradients, which we can use to create a mask with a feathered edge. This involves drawing a CALayer's content directly, either by subclassing it or by implementing its drawing delegate. Here we subclass it to encapsulate the data related to it, i.e. origin and diameter.
Code
BlurFilterMask.h
#import <QuartzCore/QuartzCore.h>
@interface BlurFilterMask : CALayer

@property (assign) CGPoint origin;   // The centre of the blur filter mask.
@property (assign) CGFloat diameter; // The diameter of the clear region of the blur filter mask.

@end
BlurFilterMask.m
#import "BlurFilterMask.h"
// The width in points the gradated region of the blur filter mask will span over.
CGFloat const GRADIENT_WIDTH = 50.0f;
@implementation BlurFilterMask

- (void)drawInContext:(CGContextRef)context
{
    CGFloat clearRegionRadius = self.diameter * 0.5f;
    CGFloat blurRegionRadius = clearRegionRadius + GRADIENT_WIDTH;

    CGColorSpaceRef baseColorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat colours[8] = { 0.0f, 0.0f, 0.0f, 0.0f,   // Clear region colour.
                           0.0f, 0.0f, 0.0f, 0.5f }; // Blur region colour.
    CGFloat colourLocations[2] = { 0.0f, 0.4f };
    CGGradientRef gradient = CGGradientCreateWithColorComponents(baseColorSpace, colours, colourLocations, 2);

    CGContextDrawRadialGradient(context, gradient, self.origin, clearRegionRadius, self.origin, blurRegionRadius, kCGGradientDrawsAfterEndLocation);

    CGColorSpaceRelease(baseColorSpace);
    CGGradientRelease(gradient);
}

@end
ViewController.m (wherever you are implementing the blur filter masking functionality)
#import "ViewController.h"
#import "BlurFilterMask.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()

@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) BlurFilterMask *blurFilterMask;

@end

@implementation ViewController

// Begin the blur filter masking operation.
- (void)beginBlurMasking
{
    BlurFilterMask *blurFilterMask = [BlurFilterMask layer];
    blurFilterMask.diameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));
    blurFilterMask.frame = self.imageView.bounds;
    blurFilterMask.origin = self.imageView.center;
    blurFilterMask.shouldRasterize = YES;
    [self.imageView.layer addSublayer:blurFilterMask];
    [blurFilterMask setNeedsDisplay];
    self.blurFilterMask = blurFilterMask;

    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [self.imageView addGestureRecognizer:tapGesture];

    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
    self.blurFilterMask.origin = [sender locationInView:self.imageView];
    [self.blurFilterMask setNeedsDisplay];
}

// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
    // Use some combination of sender.scale and sender.velocity to determine
    // the rate at which you want the mask to expand/contract.
    self.blurFilterMask.diameter += sender.velocity;
    [self.blurFilterMask setNeedsDisplay];
}
...
(This diagram may help you understand the naming conventions used in the code.)
Note
Ensure the multipleTouchEnabled property of the UIImageView hosting your image is set to YES/true.
Note
For the sake of clarity in answering the OP's question, this answer continues to use the naming conventions originally used. This may be slightly misleading to others: 'mask' in this context does not refer to an image mask but to a mask in a more general sense. This answer doesn't use any image masking operations.
Sounds like you want to use GPUImageGaussianSelectiveBlurFilter, which is contained in the GPUImage framework. It should be a faster, more efficient way to achieve what you want.
You can hook up the excludeCircleRadius property to a UIPinchGestureRecognizer to allow the user to change the size of the non-blurred circle. Then use the excludeCirclePoint property in conjunction with a UIPanGestureRecognizer to allow the user to move the center of the non-blurred circle.
Read more about how to apply the filter here:
https://github.com/BradLarson/GPUImage#processing-a-still-image
In Swift, if anyone needs it (pan gesture added as well):
BlurFilterMask.swift
import Foundation
import QuartzCore
class BlurFilterMask : CALayer {
    private let GRADIENT_WIDTH : CGFloat = 50.0

    var origin : CGPoint?
    var diameter : CGFloat?

    override init() {
        super.init()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func drawInContext(ctx: CGContext) {
        let clearRegionRadius : CGFloat = self.diameter! * 0.5
        let blurRegionRadius : CGFloat = clearRegionRadius + GRADIENT_WIDTH

        let baseColorSpace = CGColorSpaceCreateDeviceRGB()
        let colours : [CGFloat] = [0.0, 0.0, 0.0, 0.0, // Clear region colour.
                                   0.0, 0.0, 0.0, 0.5] // Blur region colour.
        let colourLocations : [CGFloat] = [0.0, 0.4]
        let gradient = CGGradientCreateWithColorComponents(baseColorSpace, colours, colourLocations, 2)

        CGContextDrawRadialGradient(ctx, gradient, self.origin!, clearRegionRadius, self.origin!, blurRegionRadius, .DrawsAfterEndLocation)
    }
}
ViewController.swift
func addMaskOverlay() {
    imageView!.userInteractionEnabled = true
    imageView!.multipleTouchEnabled = true

    let blurFilterMask = BlurFilterMask()
    blurFilterMask.diameter = min(CGRectGetWidth(self.imageView!.bounds), CGRectGetHeight(self.imageView!.bounds))
    blurFilterMask.frame = self.imageView!.bounds
    blurFilterMask.origin = self.imageView!.center
    blurFilterMask.shouldRasterize = true

    self.imageView!.layer.addSublayer(blurFilterMask)
    self.blurFilterMask = blurFilterMask
    self.blurFilterMask!.setNeedsDisplay()

    self.imageView!.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: "handlePinch:"))
    self.imageView!.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "handleTap:"))
    self.imageView!.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: "handlePan:"))
}

func donePressed() {
    // Save photo and add to text view.
    let parent : LoggedInContainerViewController? = self.parentViewController as? LoggedInContainerViewController
    let vc : OrderFlowCareInstructionsTextViewController = parent?.viewControllers[(parent?.viewControllers.count)! - 2] as! OrderFlowCareInstructionsTextViewController
    vc.addImageToTextView(imageView?.image)
    parent?.popViewController()
}

//MARK: Mask Overlay

func handleTap(sender : UITapGestureRecognizer) {
    self.blurFilterMask!.origin = sender.locationInView(self.imageView!)
    self.blurFilterMask!.setNeedsDisplay()
}

func handlePinch(sender : UIPinchGestureRecognizer) {
    self.blurFilterMask!.diameter = self.blurFilterMask!.diameter! + sender.velocity * 3
    self.blurFilterMask!.setNeedsDisplay()
}

func handlePan(sender : UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.imageView!)
    let center = CGPoint(x: self.imageView!.center.x + translation.x,
                         y: self.imageView!.center.y + translation.y)
    self.blurFilterMask!.origin = center
    self.blurFilterMask!.setNeedsDisplay()
}
Related
I have looked at libraries like GaugeKit but they do not solve my problem.
Are there any other libraries for making a gauge view like the one in the image?
If not, how can I go about it?
As @DonMag pointed out, I have tried to make changes in GaugeKit by adding a view on top of the gauge view, but it did not turn out well.
So I am stuck at making the spaces in between the actual gauge segments.
https://imgur.com/Qk1EpcV
I suggest you create your own custom view; it's not so difficult. Here is how I would do it. I have left out some details for clarity, but you can see my suggested solutions for those in the comments.
First, create a subclass of UIView. We will need one property to keep track of the gauge position. This goes into your .h file.
@interface GaugeView : UIView

@property (nonatomic) CGFloat knobPosition;

@end
Next, add the implementation. The GaugeView is a view in itself, so it will be used as the container for the other parts we want. I have used the awakeFromNib method to do the initialization, so that you can use the class for a UIView in a storyboard. If you prefer, you can do the initialization from an init method instead.
I have not provided code for the knob in the center, but I would suggest you simply create one view with a white disc (or two, to make the gray circle) and the labels to hold the text parts, and beneath that you add an image view with the gray pointer. The pointer can be moved by applying a rotational transform to it.
- (void)awakeFromNib {
    [super awakeFromNib];

    // Initialization part could also be placed in init
    [self createSegmentLayers];

    // Add knob views to self
    // :

    // Start somewhere
    self.knobPosition = 0.7;
}
Next, create the segments. The actual shapes are not added here, since they will require the size of the view. It is better to defer that to layoutSubviews.
- (void)createSegmentLayers {
    for (NSInteger segment = 0; segment < 10; ++segment) {
        // Create the shape layer and set fixed properties
        CAShapeLayer *shapeLayer = [CAShapeLayer layer];
        // Color can be set differently for each segment
        shapeLayer.strokeColor = [UIColor blueColor].CGColor;
        shapeLayer.lineWidth = 1.0;
        [self.layer addSublayer:shapeLayer];
    }
}
Next, we need to respond to size changes to the view. This is where we create the actual shapes too.
- (void)layoutSubviews {
    [super layoutSubviews];

    // Dynamically create the segment paths and scale them to the current view width
    NSInteger segment = 0;
    for (CAShapeLayer *layer in self.layer.sublayers) {
        layer.frame = self.layer.bounds;
        layer.path = [self createSegmentPath:segment radius:self.bounds.size.width / 2.0].CGPath;

        // If we should fill or not depends on the knob position
        // Since the knobPosition's range is 0.0..1.0 we can just multiply by 10
        // and compare to the segment number
        layer.fillColor = segment < (_knobPosition * 10) ? layer.strokeColor : nil;

        // Assume we added the segment layers first
        if (++segment >= 10)
            break;
    }

    // Move and size knob images
    // :
}
Then we need the shapes.
- (UIBezierPath *)createSegmentPath:(NSInteger)segment radius:(CGFloat)radius {
    UIBezierPath *path = [UIBezierPath bezierPath];

    // We could also use a table with start and end angles for different segment sizes
    CGFloat startAngle = segment * 21.0 + 180.0 - 12.0;
    CGFloat endAngle = startAngle + 15.0;

    // Draw the path, two arcs and two implicit lines
    // (DEG2RAD is assumed defined elsewhere, e.g. #define DEG2RAD(d) ((d) * M_PI / 180.0))
    [path addArcWithCenter:CGPointMake(radius, radius) radius:0.9 * radius startAngle:DEG2RAD(startAngle) endAngle:DEG2RAD(endAngle) clockwise:YES];
    [path addArcWithCenter:CGPointMake(radius, radius) radius:0.75 * radius startAngle:DEG2RAD(endAngle) endAngle:DEG2RAD(startAngle) clockwise:NO];
    [path closePath];

    return path;
}
Finally, we want to respond to changes to the knobPosition property. Calling setNeedsLayout will trigger a call to layoutSubviews.
// Position is 0.0 .. 1.0
- (void)setKnobPosition:(CGFloat)knobPosition {
    // Rotate the knob image to point at the right segment
    // self.knobPointerImageView.transform = CGAffineTransformMakeRotation(DEG2RAD(knobPosition * 207.0 + 180.0));
    _knobPosition = knobPosition;
    [self setNeedsLayout];
}
This is what it will look like now. Add the knob, some colors and possibly different sized segments and you are done!
Based on the image, the easiest solution might be to create 12 images and then programmatically swap the images as the value being represented grows or shrinks.
I have a class that I've used for a long time that draws a border around a UIView (or anything that inherits from UIView) and give that view rounded corners. I was doing some testing today (after upgrading to Xcode 7 and compiling on iOS 8.3 for the first time) and noticed that the right edge of the UIView is being truncated when I run on iPhone 6/6+ on the simulator (I don't have the actual devices, but I assume the results would be the same).
Here is a simple example. Notice how I've given the superview a red background to make this jump out. The subview is a UIView that has a fixed height and is vertically aligned to the center of the view. That works. The leading and trailing edges are supposed to be pinned to the edge of the superview, as you can see in the constraints in this image. Notice how the inner UILabel and UIButton are centered as they should be, but the UIView container is getting truncated on the right, even though the border is being drawn.
Here are the storyboard settings. The UIView that has the borders is of a fixed height, centered vertically, with leading and trailing edges pinned to the superview:
And finally, here is the code. In the UIViewController, I ask for borders like this. If I comment this code out, the view looks just fine, other than I don't have the borders that I want, of course.
BorderMaker *borderMaker = [[BorderMaker alloc] init];
[borderMaker makeBorderWithFourRoundCorners:_doneUpdatingView borderColor:[SharedVisualElements primaryFontColor] radius:8.0f];
And the BorderMaker class:
@implementation BorderMaker

- (void)makeBorderWithFourRoundCorners:(UIView *)view
                           borderColor:(UIColor *)borderColor
                                radius:(CGFloat)radius
{
    UIRectCorner corners = UIRectCornerAllCorners;
    CGSize radii = CGSizeMake(radius, radius);
    [self drawBorder:corners borderColor:borderColor view:view radii:radii];
}

- (void)drawBorder:(UIRectCorner)corners
       borderColor:(UIColor *)borderColor
              view:(UIView *)view
             radii:(CGSize)radii
{
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:view.bounds
                                               byRoundingCorners:corners
                                                     cornerRadii:radii];

    // Mask the container view's layer to round the corners.
    CAShapeLayer *cornerMaskLayer = [CAShapeLayer layer];
    [cornerMaskLayer setPath:path.CGPath];
    view.layer.mask = cornerMaskLayer;

    // Make a transparent, stroked layer which will display the stroke.
    CAShapeLayer *strokeLayer = [CAShapeLayer layer];
    strokeLayer.path = path.CGPath;
    strokeLayer.fillColor = [UIColor clearColor].CGColor;
    strokeLayer.strokeColor = borderColor.CGColor;
    strokeLayer.lineWidth = 1.5; // The stroke splits the width evenly inside and outside,
                                 // but the outside part will be clipped by the container view's mask.

    // Transparent view that will contain the stroke layer
    UIView *strokeView = [[UIView alloc] initWithFrame:view.bounds];
    strokeView.userInteractionEnabled = NO; // in case your container view contains controls
    [strokeView.layer addSublayer:strokeLayer];

    // Configure and add any subviews to the container view;
    // the stroke view goes in last, above all the subviews.
    [view addSubview:strokeView];
}

@end
Somewhere in that class, it seems that the view's bounds are not reflecting the fact that Auto Layout has stretched the view to fill the larger iPhone 6/6+ screen width. Just a guess, since I am out of ideas. Any help is appreciated. Thanks!
BorderMaker creates various layers and views based on the current size of the input view. How do those layers and views get resized when the input view changes size? Answer: they don't.
You could add code to update the size in various ways, but I wouldn't recommend it. Since you're rounding all four corners anyway, you can solve this better by just using the existing CALayer support for drawing a border, rounding the corners, and masking its contents.
Here's a simple BorderView class:
BorderView.h
#import <UIKit/UIKit.h>
IB_DESIGNABLE
@interface BorderView : UIView

@property (nonatomic, strong) IBInspectable UIColor *borderColor;
@property (nonatomic) IBInspectable CGFloat borderWidth;
@property (nonatomic) IBInspectable CGFloat cornerRadius;

@end
BorderView.m
#import "BorderView.h"
@implementation BorderView

- (void)setBorderColor:(UIColor *)borderColor {
    self.layer.borderColor = borderColor.CGColor;
}

- (UIColor *)borderColor {
    CGColorRef cgColor = self.layer.borderColor;
    return cgColor ? [UIColor colorWithCGColor:cgColor] : nil;
}

- (void)setBorderWidth:(CGFloat)borderWidth {
    self.layer.borderWidth = borderWidth;
}

- (CGFloat)borderWidth {
    return self.layer.borderWidth;
}

- (void)setCornerRadius:(CGFloat)cornerRadius {
    self.layer.cornerRadius = cornerRadius;
}

- (CGFloat)cornerRadius {
    return self.layer.cornerRadius;
}

@end
Now, if you create a view in your storyboard and set its custom class to BorderView, you can set up its border right in the storyboard:
Note that I set “Clip Subviews” in the storyboard, so it'll clip subviews if they happen to go outside the rounded bounds of the BorderView.
If you set up constraints on the BorderView, they'll keep everything sized and positioned:
I solved this. The problem is that I was calling these BorderMaker methods from within the viewDidLoad method of the UIViewController. All I had to do was move the call to viewDidAppear. Presumably, as Rob Mayoff suggested, Auto Layout wasn't finished by the time the view was passed to the BorderMaker class, so it was getting a frame that hadn't accounted for the size of the screen, but was just using the width defined in Interface Builder.
After some trial and error, it seems that viewDidAppear is the earliest lifecycle method I can use where Auto Layout is done with its work.
A CAGradientLayer has two properties startPoint and endPoint. These properties are defined in terms of the unit coordinate space. The result is that if I have two gradient layers with the same start and end points each with different bounds, the two gradients will be different.
How can the startPoint and endPoint of a CAGradientLayer layer be defined not in terms of the unit coordinate space but in standard point coordinates so that the angle/size of the gradient is not affected by the bounds of the layer?
The desired result is that a gradient layer can be resized to any size or shape and the gradient remain in place, although cropped differently.
Qualifications:
I know that this seems like an absolutely trivial transformation between coordinate spaces, but apparently either, yes, I am in fact that dumb, or there's something either broken or extremely counter-intuitive about how CAGradientLayer works. I haven't included an example of what I expect should be the right way to do it, because (assuming I'm just dumb) it would only be misleading.
Edit:
Here is my implementation of a CALayer that adds a CAGradientLayer sublayer and configures its start and end points. It does not produce the desired results.
@interface MyLayer ()

@property (nonatomic, strong) CAGradientLayer *gradientLayer;

@end

@implementation MyLayer

- (instancetype)init {
    if (self = [super init]) {
        self.gradientLayer = [CAGradientLayer new];
        [self addSublayer:self.gradientLayer];
        self.gradientLayer.colors = @[ (id)[UIColor redColor].CGColor, (id)[UIColor orangeColor].CGColor, (id)[UIColor greenColor].CGColor, (id)[UIColor blueColor].CGColor ];
        self.gradientLayer.locations = @[ @0, @.333, @.666, @1 ];
    }
    return self;
}

- (void)layoutSublayers {
    [super layoutSublayers];
    self.gradientLayer.frame = self.bounds;
    self.gradientLayer.startPoint = CGPointMake(0, 0);
    self.gradientLayer.endPoint = CGPointMake(100 / self.bounds.size.width, 40 / self.bounds.size.height);
}

@end
I have a .xib file with a number of MyLayers of different sizes. The gradients of the layers are all different.
You can't define the startPoint and endPoint otherwise. You have two options:
Calculate them based on the view's size (a 50-point offset in a 100-point-tall view = 0.5; in a 200-point-tall view = 0.25).
Create a gradient at the largest fixed size required (e.g. with a height of 568), and add it as a subview of another view that will be resized to your needs, with clipsToBounds enabled. That way, you can achieve what you want (have the gradient always start at the top and clip the bottom, keep the gradient centered and clip the top and the bottom, etc.).
How can the startPoint and endPoint of a CAGradientLayer be defined not in terms of the unit coordinate space but in standard point coordinates, so that the angle/size of the gradient is not affected by the bounds of the layer?
It can't. But you can easily think of a different strategy. For example:
Paint the gradient a different way (using Quartz directly instead of relying on a CAGradientLayer to paint it for you).
Use a mask so that the layer appears to be a certain size and shape, and you can change that size and shape by changing the mask, but actually the layer itself is one big constantly sized layer with the same gradient all the time.
Detect that the gradient layer has changed bounds, and change the gradient startPoint and endPoint to match. Here's a working example of a view whose layer is a gradient layer and does this - but you have to remember to redraw the layer every time the bounds change!
override func drawLayer(layer: CALayer!, inContext ctx: CGContext!) {
    let grad = layer as! CAGradientLayer
    grad.colors = [UIColor.whiteColor().CGColor, UIColor.redColor().CGColor]
    let maxx: CGFloat = 500.0 // or whatever
    let maxy: CGFloat = 500.0 // or whatever
    grad.startPoint = CGPointMake(0, 0) // or whatever!
    grad.endPoint = CGPointMake(maxx / self.bounds.width, maxy / self.bounds.height)
}
The simple UIView below draws a rounded rectangle. When I pass a corner radius of 65 or below, it rounds correctly, but at 66 and above it generates a perfect circle! What is going on here? It should only show a circle when the corner radius is equal to 1/2 the frame width, but it seems to draw a circle when the radius is about 1/3 of it, no matter the size of the view. This behavior appears on iOS 7; on iOS 6 I get the expected behavior.
#import "ViewController.h"

@interface MyView : UIView
@end

@implementation MyView

- (void)drawRect:(CGRect)rect {
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 200, 200) cornerRadius:65];
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextAddPath(c, path.CGPath);
    [[UIColor redColor] set];
    CGContextStrokePath(c);
}

@end

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    MyView *v = [[MyView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
    [self.view addSubview:v];
}

@end
This will probably never be fixed. Here is why.
The math to decide whether a squircle fits is: radius * magicMultiplier * 2. If the result is longer than the shortest side, a squircle can't be made, so a circle is drawn instead.
The magic multiplier is required because, to look like a squircle, the Bézier curve needs to start from a longer distance than the radius. The magic multiplier provides that extra distance.
From my research and playing around with the Bézier function, I believe the magic multiplier is something around 1.0 + 8.0 / 15.0 ≈ 1.533.
So 66 * (1 + 8/15) * 2 = 202.4, which is longer than the shortest side (200), thus it becomes a circle.
However! 65 * (1 + 8/15) * 2 ≈ 199.33, which is smaller than 200, so it squircles correctly.
Possible solutions
Code your own bezier curve function (or get one online)
Use the view's layer.cornerRadius to achieve the same thing since Apple doesn't clamp the corner radius here.
layer.cornerCurve = .continuous
layer.cornerRadius = min(radius, min(bounds.width, bounds.height)/2.0)
// You might want to clamp it yourself
Bear in mind that draw(in ctx) doesn't work with layer.maskedCorners. So you can't use SnapshotTesting with those.
FYI, the rounded-rect-to-circle bug happens at approx. 65% (cornerRadius / maxRadius):
- "INFO --- width/height (SQUARE): 12.121212121212121"
- "INFO --- halfSize == maxRadius: 6.0606060606060606"
- "INFO --- cornerRadius: 3.967272727272727"
- "INFO --- ratioBug: 0.6546"
extension CGRect
{
    // For a generic rectangle
    // 28112022 (bug happens at ~65% of cornerRadius / maxRadius)
    // radiusFactor: [0, 1]
    func getOptimalCornerRadius(radiusFactor: CGFloat) -> CGFloat
    {
        let minSize = min(self.width, self.height) // shortest side
        let maxRadius = minSize / 2
        let cornerRadius = maxRadius * radiusFactor
        return cornerRadius
    }
}
So I effectively have an image I'd like to zoom horizontally, but I also want to respect the location of the pinch. If you pinched on the left, it'd zoom into the left. Ideally, the points where you pinch would stay with your fingers.
My case is a little more specific: I'm plotting data on a graph, so I will instead be manipulating an array of data and taking a subset. However, I'm sure the math is similar. (BUT I can't just use an affine transform, as most examples I've found do.)
Any ideas?
The default pinch gesture handler scales the graph around the point of the gesture. Use a plot space delegate to limit the scaling to only one axis.
Do you use a UIScrollView? As far as I know, you'll get this behaviour for free.
I built the solution below for standard UIViews, where UIScrollView and CGAffineTransform zooms were not appropriate because I did not want the view's subviews to be skewed. I've also used it with a Core Plot line graph.
The zooming is centred on the point where the user starts the pinch.
Here’s a simple implementation:
@interface BMViewController ()

@property (nonatomic, assign) CGFloat pinchXCoord; // The point at which the pinch occurs
@property (nonatomic, assign) CGFloat minZoomProportion; // The minimum zoom proportion allowed; used to control the minimum view width
@property (nonatomic, assign) CGFloat viewCurrentXPosition; // The view-to-zoom's frame's current origin.x value
@property (nonatomic, assign) CGFloat originalViewWidth, viewCurrentWidth; // Original and current frame width of the view

@end

@implementation BMViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Set up the pinch gesture recognizer
    UIPinchGestureRecognizer *pinchZoomRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchZoomGestureDetected:)];
    [self.view addGestureRecognizer:pinchZoomRecognizer];
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    self.originalViewWidth = self.view.frame.size.width;
    self.viewCurrentWidth = self.originalViewWidth;
}

- (void)pinchZoomGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
    CGPoint pinchLocation = [recognizer locationInView:self.view];

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        self.viewCurrentWidth = self.view.frame.size.width;
        self.viewCurrentXPosition = self.view.frame.origin.x;

        // Set the pinch X coordinate to a point relative to the bounds of the view
        self.pinchXCoord = pinchLocation.x - self.viewCurrentXPosition;
        self.minZoomProportion = self.originalViewWidth / self.viewCurrentWidth;
    }

    CGFloat proportion = recognizer.scale;
    CGFloat width = self.viewCurrentWidth * MAX(proportion, self.minZoomProportion); // Set a minimum zoom width (the original size)
    width = MIN(self.originalViewWidth * 4, width); // Set a maximum zoom width (original * 4)

    CGFloat rawX = self.viewCurrentXPosition + ((self.viewCurrentWidth - width) * (self.pinchXCoord / self.viewCurrentWidth)); // Calculate the new X value

    CGRect frame = self.view.frame;
    CGFloat newXValue = MIN(rawX, 0); // Don't let the view move too far right
    newXValue = MAX(newXValue, self.originalViewWidth - width); // Don't let the view move too far left
    self.view.frame = CGRectMake(newXValue, frame.origin.y, width, frame.size.height);

    NSLog(@"newXValue: %f, width: %f", newXValue, width);
}

@end
This is all that needs to be done to resize the horizontal axis of a UIView.
For Core Plot, it is assumed the CPTGraphHostingView is a subview of the UIViewController's view being resized. The CPTGraphHostingView is redrawn when its frame/bounds change, so in the layoutSubviews method of the containing view, set the CPTGraphHostingView's and plot area's frames relative to the bounds of their parent (which is the view you should be resizing). Something like this:
self.graphHostingView.frame = self.bounds;
self.graphHostingView.hostedGraph.plotAreaFrame.frame = self.bounds;
I haven't attempted to change the data on the graph as it zooms, but I can't imagine it would be too difficult. In layoutSubviews on your containing view, call reloadData on your CPTGraph.