I'm developing an iOS 5.0+ app with the latest SDK.
I'm very new to Core Graphics and I don't know how to draw a radial gradient on a CALayer.
I have found that I have to use CGContextDrawRadialGradient to draw a radial gradient.
Searching on Google, I see that I have to add the radial gradient to the CALayer's contents, but to draw it I need a CGContext, and I don't know how to get that CGContext.
Do you know how I can do it?
I have found this tutorial, but it also uses a CGContext.
My code is this:
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIView *testView;
@end
Implementation:
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
- (void)drawRadiantGradient;
@end
@implementation ViewController
@synthesize testView = _testView;
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
[self drawRadiantGradient];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)drawRadiantGradient
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat redBallColors[] = {
1.0, 0.9, 0.9, 0.7,
1.0, 0.0, 0.0, 0.8
};
CGFloat glossLocations[] = {0.05, 0.9};
CGGradientRef ballGradient = CGGradientCreateWithColorComponents(colorSpace, redBallColors, glossLocations, 2);
CGRect circleBounds = CGRectMake(20, 250, 100, 100);
CGPoint startPoint = CGPointMake(50, 270);
CGPoint endPoint = CGPointMake(70, 300);
CGContextDrawRadialGradient(context, ballGradient, startPoint, 0, endPoint, 50, 0);
CGContextAddEllipseInRect(context, circleBounds);
CGContextDrawPath(context, kCGPathStroke);
// Release the CF objects we created; they are not managed by ARC.
CGGradientRelease(ballGradient);
CGColorSpaceRelease(colorSpace);
}
I want to create a CALayer, draw a radial gradient into it, and add this CALayer to _testView.
You would either draw into your own context (for example an image context that you created using UIGraphicsBeginImageContextWithOptions()), or become the delegate of that layer and draw into the layer's graphics context via drawLayer:inContext:.
It sounds to me like you want the second option. You would create the CALayer and set yourself as the delegate, then implement drawLayer:inContext: and use the context that is passed in as an argument.
Setup
CALayer *yourLayer = [CALayer layer];
// other layer customisation
yourLayer.delegate = self;
[yourLayer setNeedsDisplay]; // the layer only asks its delegate to draw once it is marked dirty
And draw
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
// Check that the layer argument is yourLayer (if you are the
// delegate to more than one layer)
// Use the context (second) argument to draw.
}
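Putting the two together, drawLayer:inContext: could host the gradient code from the question. This is a sketch, not a drop-in: the colours and locations are taken from the question, while the centre point and the 50-point end radius are assumptions you would adjust to your layer's geometry:

```objectivec
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat redBallColors[] = {
        1.0, 0.9, 0.9, 0.7,
        1.0, 0.0, 0.0, 0.8
    };
    CGFloat glossLocations[] = { 0.05, 0.9 };
    CGGradientRef ballGradient = CGGradientCreateWithColorComponents(colorSpace, redBallColors, glossLocations, 2);
    // Centre the gradient in the layer instead of using hard-coded view coordinates.
    CGPoint center = CGPointMake(CGRectGetMidX(layer.bounds), CGRectGetMidY(layer.bounds));
    CGContextDrawRadialGradient(ctx, ballGradient, center, 0, center, 50, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(ballGradient);
    CGColorSpaceRelease(colorSpace);
}
```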
UIGraphicsGetCurrentContext() only returns a valid context when called from specific methods, or if you created a new one yourself, e.g. by calling UIGraphicsBeginImageContext().
So what you need to do is either subclass UIView and override its drawRect: method with the code from your drawRadiantGradient method, or use the delegate methods provided by the CALayerDelegate protocol. Just set the layer's delegate to your VC and implement the drawLayer:inContext: method. That could look like this:
in your .h file:
@interface MyVC : UIViewController
@property (weak, nonatomic) IBOutlet UIView *drawingView;
// ...
and in your .m file:
- (void)viewDidLoad
{
[super viewDidLoad];
self.drawingView.layer.delegate = self;
[self.drawingView.layer setNeedsDisplay]; // trigger drawLayer:inContext:
}
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
[self drawRadiantGradientInContext:context];
}
Make a custom subclass of UIView that draws the gradient in its drawRect: method, and have your VC create a radial-gradient view and add that as a subview of its view.
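A minimal sketch of that subclass, reusing the gradient values from the question (the class name is made up, and the geometry is simplified to fill the view's bounds):

```objectivec
// RadialGradientView.m (the assumed header just declares: @interface RadialGradientView : UIView @end)
#import "RadialGradientView.h"

@implementation RadialGradientView

- (void)drawRect:(CGRect)rect
{
    // drawRect: is one of the places where UIGraphicsGetCurrentContext() returns a valid context.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat colors[] = { 1.0, 0.9, 0.9, 0.7,
                         1.0, 0.0, 0.0, 0.8 };
    CGFloat locations[] = { 0.05, 0.9 };
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, locations, 2);
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2;
    CGContextDrawRadialGradient(context, gradient, center, 0, center, radius, 0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(colorSpace);
}

@end
```

The view controller would then add it with something like [self.testView addSubview:[[RadialGradientView alloc] initWithFrame:self.testView.bounds]].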
I can't figure out why I can't draw on the UIView layer. Is that possible?
My example code i am using:
@implementation Menu
UIView *window;
UIWindow *mainWindow;
CGSize viewwidth;
viewwidth=[[UIScreen mainScreen] bounds].size;
// viewwidth.width or viewwidth.height
//My UIView
window = [[UIView alloc]initWithFrame:CGRectMake(0, 0, viewwidth.width, viewwidth.height)];
window.backgroundColor = [UIColor whiteColor];
window.layer.opacity = 1.0f;
[window setNeedsDisplay];
[mainWindow addSubview: window];
My method drawRect for drawing:
- (void)drawRect:(CGRect)rect
{
// Drawing code
CGRect rectangle = CGRectMake(0, 0, 320, 100);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 0.0); //this is the transparent color
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 0.5);
CGContextFillRect(context, rectangle);
CGContextStrokeRect(context, rectangle); //this will draw the border
}
Am I doing something wrong? The problem is that I get an empty white screen.
When you create a project and choose to use Storyboards, a ViewController is assigned to your Main.storyboard in Interface Builder.
There are differences between iOS apps that use SceneDelegates and those that do not (iOS before 13.0), but the ViewController you chose is assigned to the main Window/Scene when the app loads. In AppDelegate.m (before iOS 13) or SceneDelegate.m (iOS 13 and later) you can directly access, or even assign, it yourself via the self.window.rootViewController property.
Due to the inheritance of UIViewController, which comes with a basic UIView ready to work with under self.view (in this example inside ViewController.m), you can add more subviews to that property with [self.view addSubview:yourOtherView].
-(void)drawRect:(CGRect)rect is a method that belongs to UIView and is called when the view needs to update or lay out. It is important to know that this is not the only place where you can put drawing code. Avoiding drawRect: lowers the energy impact and workload of your app when you don't need to redraw repeatedly; the workload is also heavily reduced when you can shrink the CGRect you draw inside.
One word about value naming: code is less confusing when values are named after the content they hold, so naming a property "window" that is actually a UIView will stress you as your project becomes more complex. You also created a local value named mainWindow that is never assigned, so you would have to assign your app's UIWindow to it yourself before [mainWindow addSubview:window] can work. And the UIView you named window is easily confused with _window or self.window, which already exist as part of the AppDelegate or SceneDelegate. As mentioned above, you also already have a UIView that belongs to the UIViewController you assigned in the Storyboard or in code.
Your drawing code itself is valid. It may be invisible to you because of the transparent fill colour you chose.
Below is some code to compare against for your ViewController:
#import "ViewController.h"
#import "MyGraphicView.h"
@interface ViewController ()
@property (nonatomic) MyGraphicView *graphicView;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view.
_graphicView = [[MyGraphicView alloc] initWithFrame:self.view.frame];
[self.view addSubview:_graphicView];
}
@end
and a MyGraphicView.h
#import <UIKit/UIKit.h>
NS_ASSUME_NONNULL_BEGIN
@interface MyGraphicView : UIView
@end
NS_ASSUME_NONNULL_END
and the corresponding MyGraphicView.m:
#import "MyGraphicView.h"
@implementation MyGraphicView
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
// This method is called when a view is first displayed or when an event occurs that invalidates a visible part of the view.
- (void)drawRect:(CGRect)rect {
CGRect rectangle = CGRectMake(0, 0, 320, 100);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 0.0);
CGContextSetRGBStrokeColor(context, 0.0, 1.0, 0.0, 0.5);
CGContextFillRect(context, rectangle);
CGContextStrokeRect(context, rectangle);
}
@end
I would like to get some help with a scenekit related problem.
I have a dae (Collada) file. The scene contains a mesh with bones and one animation. The bones move during the animation, and I would like to draw a line from a fixed point to a selected moving bone. The expected result should look like this: img 1, img 2. One end of the line is fixed and the other end should follow the animated bone, like a rubber band. I tried to implement it in a sample project but I was not able to get the line to follow the animation. I don't know how to follow the animated position per frame, and I don't know how I could update the line on the screen. The code:
#import "ViewController.h"
@import SceneKit;
#import <OpenGLES/ES2/gl.h>
@interface ViewController ()<SCNSceneRendererDelegate>
@property (weak, nonatomic) IBOutlet SCNView *scnView;
@property (strong, nonatomic) SCNNode *topBone;
@property (strong, nonatomic) SCNNode *lineNode;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.scnView.allowsCameraControl = YES;
self.scnView.playing = YES;
self.scnView.delegate = self; // Set the delegate to myself, because during the animation I would like to redraw the line at the correct position
// I have an animation with bones.
SCNScene *mainScene = [SCNScene sceneNamed:@"art.scnassets/test.dae"]; // Load the test scene
self.topBone = [mainScene.rootNode childNodeWithName:@"TopBone" recursively:YES]; // Find the top bone. I would like to draw a line between the top of the bone and a fixed point (5,5,0)
self.lineNode = [self lineFrom:self.topBone.presentationNode.position to:SCNVector3Make(5.0, 5.0, 0.0)]; // Create a node for the line and draw the line
self.scnView.scene = mainScene; // Set the scene
[mainScene.rootNode addChildNode:self.lineNode]; // Add the line node to the scene
}
// I would like to use the render delegate to update the line position. Maybe this is not the right solution...
- (void)renderer:(id<SCNSceneRenderer>)aRenderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time {
SCNVector3 v = self.topBone.presentationNode.position; // Problem 1: the position of the top bone never changes. It is always the same value; it does not follow the animation
// Problem 2: if I could get the correct bone position for the animation frame, how could I redraw the line in the node every frame?
glLineWidth(20); // Set the line width
}
- (SCNNode *)lineFrom:(SCNVector3)p1 to:(SCNVector3)p2 { // Draw a line between two points and return it as a node
int indices[] = {0, 1};
SCNVector3 positions[] = { p1, p2 };
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData primitiveType:SCNGeometryPrimitiveTypeLine primitiveCount:1 bytesPerIndex:sizeof(int)];
SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
line.materials.firstObject.diffuse.contents = [UIColor redColor];
SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
return lineNode;
}
@end
I'm building a photo filter app (like Instagram, Camera+ and many more). My main screen is a UIImageView that presents the image to the user, and a bottom bar with some filters and other options.
One of the options is blur, where the user can use his fingers to pinch or move a circle that represents the non-blurred part (radius and position); all the pixels outside this circle will be blurred.
When the user touches the screen I want to add a semi-transparent layer above my image that represents the blurred part, with a fully transparent circle that represents the non-blurred part.
So my question is, how do I add this layer? I suppose I need to use some view above my image view, and some mask to get my circle shape? I would really appreciate a good tip here.
One More Thing
I need the circle not to be cut off sharply, but to have a kind of gradient fade, something like Instagram:
And what's very important is to get this effect with good performance. I did achieve this effect with drawRect:, but the performance was very bad on old devices (iPhone 4, iPod).
Sharp Mask
Whenever you want to draw a path that consists of a shape (or series of shapes) as a hole in another shape, the key is almost always using an 'even odd winding rule'.
From the Winding Rules section of the Cocoa Drawing Guide:
A winding rule is simply an algorithm that tracks information about each contiguous region that makes up the path's overall fill area. A ray is drawn from a point inside a given region to any point outside the path bounds. The total number of crossed path lines (including implicit lines) and the direction of each path line are then interpreted using rules which determine if the region should be filled.
I appreciate that the description isn't really helpful without the rules as context and diagrams to make it easier to understand, so I urge you to read the links I've provided above. For the sake of creating our circle mask layer, the following diagrams depict what an even odd winding rule allows us to accomplish:
Non Zero Winding Rule
Even Odd Winding Rule
Now it's simply a matter of creating the translucent mask using a CAShapeLayer that can be repositioned and expanded and contracted through user interaction.
Code
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) CAShapeLayer *blurFilterMask;
@property (assign) CGPoint blurFilterOrigin;
@property (assign) CGFloat blurFilterDiameter;
@end
@implementation ViewController
// begin the blur masking operation.
- (void)beginBlurMasking
{
self.blurFilterOrigin = self.imageView.center;
self.blurFilterDiameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));
CAShapeLayer *blurFilterMask = [CAShapeLayer layer];
// Disable implicit animations for the blur filter mask's path property.
blurFilterMask.actions = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNull null], @"path", nil];
blurFilterMask.fillColor = [UIColor blackColor].CGColor;
blurFilterMask.fillRule = kCAFillRuleEvenOdd;
blurFilterMask.frame = self.imageView.bounds;
blurFilterMask.opacity = 0.5f;
self.blurFilterMask = blurFilterMask;
[self refreshBlurMask];
[self.imageView.layer addSublayer:blurFilterMask];
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[self.imageView addGestureRecognizer:tapGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
self.blurFilterOrigin = [sender locationInView:self.imageView];
[self refreshBlurMask];
}
// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
// Use some combination of sender.scale and sender.velocity to determine the rate at which you want the circle to expand/contract.
self.blurFilterDiameter += sender.velocity;
[self refreshBlurMask];
}
// Update the blur mask within the UI.
- (void)refreshBlurMask
{
CGFloat blurFilterRadius = self.blurFilterDiameter * 0.5f;
CGMutablePathRef blurRegionPath = CGPathCreateMutable();
CGPathAddRect(blurRegionPath, NULL, self.imageView.bounds);
CGPathAddEllipseInRect(blurRegionPath, NULL, CGRectMake(self.blurFilterOrigin.x - blurFilterRadius, self.blurFilterOrigin.y - blurFilterRadius, self.blurFilterDiameter, self.blurFilterDiameter));
self.blurFilterMask.path = blurRegionPath;
CGPathRelease(blurRegionPath);
}
...
(This diagram may help understand the naming conventions in the code)
Gradient Mask
The Gradients section of Apple's Quartz 2D Programming Guide details how to draw radial gradients, which we can use to create a mask with a feathered edge. This involves drawing a CALayer's content directly by subclassing it or implementing its drawing delegate. Here we subclass it to encapsulate the data related to it, i.e. origin and diameter.
Code
BlurFilterMask.h
#import <QuartzCore/QuartzCore.h>
@interface BlurFilterMask : CALayer
@property (assign) CGPoint origin; // The centre of the blur filter mask.
@property (assign) CGFloat diameter; // The diameter of the clear region of the blur filter mask.
@end
BlurFilterMask.m
#import "BlurFilterMask.h"
// The width in points the gradated region of the blur filter mask will span over.
CGFloat const GRADIENT_WIDTH = 50.0f;
@implementation BlurFilterMask
- (void)drawInContext:(CGContextRef)context
{
CGFloat clearRegionRadius = self.diameter * 0.5f;
CGFloat blurRegionRadius = clearRegionRadius + GRADIENT_WIDTH;
CGColorSpaceRef baseColorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat colours[8] = { 0.0f, 0.0f, 0.0f, 0.0f, // Clear region colour.
0.0f, 0.0f, 0.0f, 0.5f }; // Blur region colour.
CGFloat colourLocations[2] = { 0.0f, 0.4f };
CGGradientRef gradient = CGGradientCreateWithColorComponents (baseColorSpace, colours, colourLocations, 2);
CGContextDrawRadialGradient(context, gradient, self.origin, clearRegionRadius, self.origin, blurRegionRadius, kCGGradientDrawsAfterEndLocation);
CGColorSpaceRelease(baseColorSpace);
CGGradientRelease(gradient);
}
@end
ViewController.m (wherever you are implementing the blur filter masking functionality)
#import "ViewController.h"
#import "BlurFilterMask.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) BlurFilterMask *blurFilterMask;
@end
@implementation ViewController
// Begin the blur filter masking operation.
- (void)beginBlurMasking
{
BlurFilterMask *blurFilterMask = [BlurFilterMask layer];
blurFilterMask.diameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));
blurFilterMask.frame = self.imageView.bounds;
blurFilterMask.origin = self.imageView.center;
blurFilterMask.shouldRasterize = YES;
[self.imageView.layer addSublayer:blurFilterMask];
[blurFilterMask setNeedsDisplay];
self.blurFilterMask = blurFilterMask;
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[self.imageView addGestureRecognizer:tapGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
self.blurFilterMask.origin = [sender locationInView:self.imageView];
[self.blurFilterMask setNeedsDisplay];
}
// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
// Use some combination of sender.scale and sender.velocity to determine the rate at which you want the mask to expand/contract.
self.blurFilterMask.diameter += sender.velocity;
[self.blurFilterMask setNeedsDisplay];
}
...
(This diagram may help understand the naming conventions in the code)
Note
Ensure the multipleTouchEnabled property of the UIImageView hosting your image is set to YES/true:
Note
For the sake of clarity in answering the OP's question, this answer continues to use the naming conventions originally used. This may be slightly misleading to others: 'mask' in this context does not refer to an image mask but to a mask in a more general sense. This answer doesn't use any image masking operations.
Sounds like you want to use the GPUImageGaussianSelectiveBlurFilter, which is contained in the GPUImage framework. It should be a faster, more efficient way to achieve what you want.
You can hook up the excludeCircleRadius property to a UIPinchGestureRecognizer in order to allow the user to change the size of the non-blurred circle. Then use the excludeCirclePoint property in conjunction with a UIPanGestureRecognizer to allow the user to move the center of the non-blurred circle.
Read more about how to apply the filter here:
https://github.com/BradLarson/GPUImage#processing-a-still-image
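A rough sketch of wiring the filter up for a still image (hedged: this assumes GPUImage's standard still-image pipeline; inputImage is a placeholder for your UIImage, and excludeCirclePoint/excludeCircleRadius are in normalized 0-1 coordinates):

```objectivec
#import "GPUImage.h"

GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageGaussianSelectiveBlurFilter *selectiveBlur = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
selectiveBlur.excludeCirclePoint = CGPointMake(0.5, 0.5); // centre of the clear circle (normalized)
selectiveBlur.excludeCircleRadius = 0.25;                 // radius of the clear circle (normalized)

[source addTarget:selectiveBlur];
[selectiveBlur useNextFrameForImageCapture];
[source processImage];
UIImage *blurredImage = [selectiveBlur imageFromCurrentFramebuffer];
```

Your gesture recognizers would then update excludeCirclePoint and excludeCircleRadius and reprocess the image.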
In Swift if anyone needs it (added pan gesture as well):
BlurFilterMask.swift
import Foundation
import QuartzCore
class BlurFilterMask : CALayer {
private let GRADIENT_WIDTH : CGFloat = 50.0
var origin : CGPoint?
var diameter : CGFloat?
override init() {
super.init()
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func drawInContext(ctx: CGContext) {
let clearRegionRadius : CGFloat = self.diameter! * 0.5
let blurRegionRadius : CGFloat = clearRegionRadius + GRADIENT_WIDTH
let baseColorSpace = CGColorSpaceCreateDeviceRGB();
let colours : [CGFloat] = [0.0, 0.0, 0.0, 0.0, // Clear region
0.0, 0.0, 0.0, 0.5] // blur region color
let colourLocations : [CGFloat] = [0.0, 0.4]
let gradient = CGGradientCreateWithColorComponents (baseColorSpace, colours, colourLocations, 2)
CGContextDrawRadialGradient(ctx, gradient, self.origin!, clearRegionRadius, self.origin!, blurRegionRadius, .DrawsAfterEndLocation);
}
}
ViewController.swift
func addMaskOverlay(){
imageView!.userInteractionEnabled = true
imageView!.multipleTouchEnabled = true
let blurFilterMask = BlurFilterMask()
blurFilterMask.diameter = min(CGRectGetWidth(self.imageView!.bounds), CGRectGetHeight(self.imageView!.bounds))
blurFilterMask.frame = self.imageView!.bounds
blurFilterMask.origin = self.imageView!.center
blurFilterMask.shouldRasterize = true
self.imageView!.layer.addSublayer(blurFilterMask)
self.blurFilterMask = blurFilterMask
self.blurFilterMask!.setNeedsDisplay()
self.imageView!.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: "handlePinch:"))
self.imageView!.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "handleTap:"))
self.imageView!.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: "handlePan:"))
}
func donePressed(){
//save photo and add to textview
let parent : LoggedInContainerViewController? = self.parentViewController as? LoggedInContainerViewController
let vc : OrderFlowCareInstructionsTextViewController = parent?.viewControllers[(parent?.viewControllers.count)!-2] as! OrderFlowCareInstructionsTextViewController
vc.addImageToTextView(imageView?.image)
parent?.popViewController()
}
//MARK: Mask Overlay
func handleTap(sender : UITapGestureRecognizer){
self.blurFilterMask!.origin = sender.locationInView(self.imageView!)
self.blurFilterMask!.setNeedsDisplay()
}
func handlePinch(sender : UIPinchGestureRecognizer){
self.blurFilterMask!.diameter = self.blurFilterMask!.diameter! + sender.velocity*3
self.blurFilterMask!.setNeedsDisplay()
}
func handlePan(sender : UIPanGestureRecognizer){
let translation = sender.translationInView(self.imageView!)
let center = CGPoint(x:self.imageView!.center.x + translation.x,
y:self.imageView!.center.y + translation.y)
self.blurFilterMask!.origin = center
self.blurFilterMask!.setNeedsDisplay()
}
I'm having some problems with drawRect: and creating a new instance of a custom UIView class.
My question is: how can I make use of drawRect:?
I created a UIViewController with a UIScrollView, and inside this UIScrollView about 50 UIViews. Each UIView creates 3 elements inside (custom UIViews), each one with a drawRect: like this:
- (void)drawRect:(CGRect)rect
{
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClearRect(c, self.bounds);
CGContextSetStrokeColorWithColor(c, color.CGColor);
CGContextSetLineCap(c, kCGLineCapRound);
CGContextSetInterpolationQuality(c, kCGInterpolationHigh);
CGContextSetLineWidth(c, thickness);
CGContextAddArc(c, self.frame.size.width/2, self.frame.size.height/2, radius, angleIni*M_PI/180, angleEnd*M_PI/180, 0);
CGContextStrokePath(c);
}
At first it works like a charm, but when I try to remove this UIViewController and create it again I get an EXC_BAD_ACCESS.
If I remove the drawRect: code, everything works again with no EXC_BAD_ACCESS, so I concluded that my problem is my drawRect: method.
Is there a right way to truly remove my UIViewController? In dealloc I'm removing all my UIViews inside the UIScrollView and setting them to nil, and I'm setting the UIScrollView to nil too, like:
for (UIView *item in MyUICscrollView.subviews)
{
[item removeFromSuperview];
item = nil;
}
MyUICscrollView = nil;
Had someone the same issue?
I've got a pretty simple custom subclass of UIView:
#import "BarView.h"
#import <QuartzCore/QuartzCore.h>
@implementation BarView
@synthesize barColor;
- (void)drawRect:(CGRect)rect
{
NSLog(@"drawRect");
// Draw a rectangle.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, self.barColor.CGColor);
CGContextBeginPath(context);
CGContextAddRect(context, self.bounds);
CGContextFillPath(context);
}
- (void)dealloc
{
self.barColor = nil;
[super dealloc];
}
@end
If I call initWithFrame:rect on this class with some non-zero rect, it works fine; but when I call initWithFrame:CGRectZero, drawRect: is never called, even after I modify the view's frame to a non-zero rect.
I certainly understand why a view with a zero frame would never have its drawRect: called, but why is it never called even after the frame is changed?
A quick look at the Apple docs says this
Changing the frame rectangle automatically redisplays the view without calling its drawRect: method. If you want UIKit to call the drawRect: method when the frame rectangle changes, set the contentMode property to UIViewContentModeRedraw.
This is in the documentation for the frame property:
https://developer.apple.com/documentation/uikit/uiview/1622621-frame?language=objc
If you want the view to redraw, just call setNeedsDisplay on it and you should be fine (or, as the docs say, you can set the contentMode to UIViewContentModeRedraw, but that is up to you).
In Swift 5
override init(frame: CGRect) {
super.init(frame: frame)
contentMode = UIView.ContentMode.redraw
}