I am trying to use Apple's method for detecting whether a point is on a UIBezierPath. However, it returns an ‘invalid context’ error.
As you can see from the NSLog output, I am passing a UIBezierPath and a point to check (in my case, a touch point).
I don’t understand why. Can someone explain it to me or point me in the correct direction?
NSLOG -----
Path <UIBezierPath: 0x7f57110>
Contains point Path <UIBezierPath: 0x7f57110>
Touch point 425.000000 139.000000
<Error>: CGContextSaveGState: invalid context 0x0
<Error>: CGContextAddPath: invalid context 0x0
<Error>: CGContextPathContainsPoint: invalid context 0x0
<Error>: CGContextRestoreGState: invalid context 0x0
NO
This is straight from Apple's documentation on how to determine whether a point is in a path:
- (BOOL)containsPoint:(CGPoint)point onPath:(UIBezierPath *)path inFillArea:(BOOL)inFill {
    NSLog(@"contains point Path %@", path);
    NSLog(@"Touch point %f %f", point.x, point.y);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPathRef cgPath = path.CGPath;
    BOOL isHit = NO;
    // Determine the drawing mode to use. Default to detecting hits on the stroked portion of the path.
    CGPathDrawingMode mode = kCGPathStroke;
    if (inFill) { // Look for hits in the fill area of the path instead.
        if (path.usesEvenOddFillRule)
            mode = kCGPathEOFill;
        else
            mode = kCGPathFill;
    }
    // Save the graphics state so that the path can be removed later.
    CGContextSaveGState(context);
    CGContextAddPath(context, cgPath);
    // Do the hit detection.
    isHit = CGContextPathContainsPoint(context, point, mode);
    CGContextRestoreGState(context);
    return isHit;
}
Here is my touchesBegan: method. I keep my paths in an NSMutableArray and iterate over it to check whether any of my paths has been touched.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint curPoint = [[touches anyObject] locationInView:self];
    for (int i = 0; i < [pathInfo count]; i++) {
        NSArray *row = [[NSArray alloc] initWithArray:[pathInfo objectAtIndex:i]];
        UIBezierPath *path = [row objectAtIndex:0];
        NSLog(@"Path %@", path);
        if ([self containsPoint:curPoint onPath:path inFillArea:NO]) {
            NSLog(@"YES");
        } else {
            NSLog(@"NO");
        }
    }
}
The CGContextPathContainsPoint method requires a graphics context, which Apple's sample code gets from UIGraphicsGetCurrentContext. However, UIGraphicsGetCurrentContext only works inside -[UIView drawRect:] or after a call to a function that sets a UI graphics context, like UIGraphicsBeginImageContext.
You can perform your hit-testing without a graphics context by using CGPathCreateCopyByStrokingPath (which was added in iOS 5.0) and CGPathContainsPoint on the stroked copy:
static BOOL strokedPathContainsPoint(CGPathRef unstrokedPath,
    const CGAffineTransform *transform, CGFloat lineWidth,
    CGLineCap lineCap, CGLineJoin lineJoin, CGFloat miterLimit,
    CGPoint point, bool eoFill)
{
    CGPathRef strokedPath = CGPathCreateCopyByStrokingPath(unstrokedPath,
        transform, lineWidth, lineCap, lineJoin, miterLimit);
    BOOL doesContain = CGPathContainsPoint(strokedPath, NULL, point, eoFill);
    CGPathRelease(strokedPath);
    return doesContain;
}
You have to decide what line width and other stroking parameters you want to use. For example:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint curPoint = [[touches anyObject] locationInView:self];
    for (int i = 0; i < [pathInfo count]; i++) {
        NSArray *row = [[NSArray alloc] initWithArray:[pathInfo objectAtIndex:i]];
        UIBezierPath *path = [row objectAtIndex:0];
        NSLog(@"Path %@", path);
        if (strokedPathContainsPoint(path.CGPath, NULL, 10.0f, kCGLineCapRound,
            kCGLineJoinRound, 0, curPoint, path.usesEvenOddFillRule))
        {
            NSLog(@"YES");
        } else {
            NSLog(@"NO");
        }
    }
}
Note that CGPathCreateCopyByStrokingPath is probably somewhat expensive, so you might want to stroke your paths once, and save the stroked copies, instead of stroking them every time you need to test a point.
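For example, a small cache along these lines might work (a sketch only; the strokedPaths ivar and the buildStrokedPathCache name are illustrative, not from the original code):

- (void)buildStrokedPathCache {
    // Hypothetical cache, parallel to pathInfo: build each stroked copy once.
    strokedPaths = [[NSMutableArray alloc] init];
    for (NSArray *row in pathInfo) {
        UIBezierPath *path = [row objectAtIndex:0];
        CGPathRef stroked = CGPathCreateCopyByStrokingPath(path.CGPath, NULL,
            10.0f, kCGLineCapRound, kCGLineJoinRound, 0);
        [strokedPaths addObject:[UIBezierPath bezierPathWithCGPath:stroked]];
        CGPathRelease(stroked);
    }
}

Hit-testing then reduces to a cheap fill test per touch, e.g. CGPathContainsPoint([[strokedPaths objectAtIndex:i] CGPath], NULL, curPoint, NO).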
tl;dr: You should use CGPathContainsPoint( ... ) instead.
What went wrong
Your problem is that there is no current context at the point where you are trying to get one:
CGContextRef context = UIGraphicsGetCurrentContext(); // <-- This line here...
The method UIGraphicsGetCurrentContext will only return a context if there is a valid current context. The two main examples are:
Inside drawRect: (where the context is the view you are drawing into)
Inside your own image context (when you use UIGraphicsBeginImageContext() so that you can draw into an image with Core Graphics, perhaps to pass it to some other part of your code, display it in an image view, or save it to disk).
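As a quick illustration of the second case, here is a sketch of the only way the original helper could work outside of drawRect: (assuming curPoint and path as in the question):

// Sketch: UIGraphicsGetCurrentContext() only returns a context between
// UIGraphicsBeginImageContext... and UIGraphicsEndImageContext (or inside drawRect:).
UIGraphicsBeginImageContext(CGSizeMake(1.0f, 1.0f)); // size is irrelevant for hit testing
BOOL isHit = [self containsPoint:curPoint onPath:path inFillArea:NO]; // now has a context
UIGraphicsEndImageContext(); // the current context is gone again after this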
The solution
I don't know why you were doing all the extra work with contexts, saving and restoring state, and so on. It seems that you missed the simple function CGPathContainsPoint():
BOOL isHit = CGPathContainsPoint(
    path.CGPath,
    NULL,
    point,
    path.usesEvenOddFillRule
);
Edit
If you want to hit test a stroked path, you can use CGPathCreateCopyByStrokingPath() to first create a filled path from the path you are stroking (given a certain width, etc.). Ole Begemann has a very good explanation on his blog of how to do this (including some sample code).
Related
I would like to draw a "disappearing stroke" on a UIImageView, which follows a touch event and self-erases after a fixed time delay. Here's what I have in my ViewController.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    CGPoint lp = lastPoint;
    UIColor *color = [UIColor blackColor];
    [self drawLine:5 from:lastPoint to:currentPoint color:color blend:kCGBlendModeNormal];
    double delayInSeconds = 1.0;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self drawLine:brush from:lp to:currentPoint color:[UIColor clearColor] blend:kCGBlendModeClear];
    });
    lastPoint = currentPoint;
}
- (void)drawLine:(CGFloat)width from:(CGPoint)from to:(CGPoint)to color:(UIColor *)color blend:(CGBlendMode)mode {
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.tempDrawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextSetBlendMode(context, mode);
    CGContextStrokePath(context);
    self.tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    [self.tempDrawImage setAlpha:1];
    UIGraphicsEndImageContext();
}
The draw phase works nicely, but there are a couple problems with the subsequent erase phase.
While the line "fill" is correctly cleared, a thin stroke around the path remains.
The "erase phase" is choppy, nowhere near as smooth as the drawing phase. My best guess is that this is due to the cost of UIGraphicsBeginImageContext run in dispatch_after.
Is there a better approach to drawing a self-erasing line?
BONUS: What I'd really like is for the path to "shrink and vanish." In other words, after the delay, rather than just clearing the stroked path, I'd like to have it shrink from 5pt to 0pt while simultaneously fading out the opacity.
I would just let the view redraw continuously at 60 Hz, each time drawing the entire line from points stored in an array. That way, if you remove the oldest points from the array, they are simply not drawn anymore.
To hook your view up to the display refresh rate (60 Hz), try this:
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
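The update selector referenced above isn't shown; presumably it just marks the view dirty so that drawRect: runs on the next frame, something like:

// Called by the display link on every screen refresh (~60 Hz).
- (void)update
{
    [self setNeedsDisplay]; // triggers drawRect:, which redraws the whole line
}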
Store an age property along with each point, then just loop over the array and remove points which are older than your threshold.
e.g.
@interface AgingPoint : NSObject
@property CGPoint point;
@property NSTimeInterval birthdate;
@end
// ..... later, in the draw call
NSTimeInterval now = CACurrentMediaTime();
AgingPoint *p = [AgingPoint new];
p.point = touchlocation; // get your touch
p.birthdate = now;
// remove old points
while (myPoints.count && now - [myPoints[0] birthdate] > 1)
{
    [myPoints removeObjectAtIndex:0];
}
[myPoints addObject:p];
if (myPoints.count < 2)
    return;
UIBezierPath *path = [UIBezierPath path];
[path moveToPoint:[myPoints[0] point]];
for (int i = 1; i < myPoints.count; i++)
{
    [path addLineToPoint:[myPoints[i] point]];
}
[path stroke];
So on each draw call, make a new Bezier path, move to the first point, then add lines to all the other points. Finally, stroke the path.
To implement the "shrinking" line, you could draw just short lines between consecutive pairs of points in your array, and use the age property to calculate stroke width. This is not perfect, as the individual segments will have the same width at start and end point, but it's a start.
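Here is a minimal sketch of that per-segment idea, assuming the AgingPoint array from above, a one-second lifetime, and that a stroke color has already been set on the current context:

// Sketch: stroke each segment on its own, thinner as its points age out.
NSTimeInterval now = CACurrentMediaTime();
CGFloat maxWidth = 5.0f;       // starting width in points
NSTimeInterval lifetime = 1.0; // seconds before a point disappears
for (int i = 1; i < myPoints.count; i++)
{
    AgingPoint *a = myPoints[i - 1];
    AgingPoint *b = myPoints[i];
    CGFloat width = maxWidth * (1.0 - (now - [b birthdate]) / lifetime); // 5pt -> 0pt
    if (width <= 0)
        continue; // segment has fully shrunk away
    UIBezierPath *segment = [UIBezierPath bezierPath];
    [segment moveToPoint:[a point]];
    [segment addLineToPoint:[b point]];
    segment.lineWidth = width;
    segment.lineCapStyle = kCGLineCapRound;
    [segment stroke];
}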
Important: If you are going to draw a lot of points, performance will become an issue. This kind of path rendering with Quartz is not exactly tuned for speed; in fact, it is very, very slow.
Cocoa arrays and objects are not very fast either.
If you run into performance issues and you want to continue this project, look into OpenGL rendering. You will be able to have this run a lot faster with plain C structs pushed into your GPU.
There were a lot of great answers here. I think the ideal solution is to use OpenGL, as it'll inevitably be the most performant and provide the most flexibility in terms of sprites, trails, and other interesting visual effects.
My application is a remote controller of sorts, designed to simply provide a small visual aid to track motion, rather than leave persistent or high fidelity strokes. As such, I ended up creating a simple subclass of UIView which uses CoreGraphics to draw a UIBezierPath. I'll eventually replace this quick-fix solution with an OpenGL solution.
The implementation I used is far from perfect, as it leaves behind white paths which interfere with future strokes, until the user lifts their touch, which resets the canvas. I've posted the solution I used here, in case anyone might find it helpful.
I have a code block that aims to capture a snapshot of each page of a PDF-based custom view. To accomplish this, I create a view controller in a loop and iterate through the pages. The problem is that even after the view controller is released, the custom view isn't, and it shows up as live in Instruments. Since the loop iterates many times, memory usage balloons (up to 500 MB with 42 live views) and the app crashes.
Here is the iteration code;
do
{
    __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
    CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
    [__pageViewController addMainLayers];
    [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
    UIImage *__snapshotImage = [__pageViewController captureSnapshot];
    [[AOAssetManager sharedManager] saveImage:__snapshotImage
                         forPublicationBundle:_publicationTileViewController.publication.bundle
                                       pageID:(__bridge NSString *)__pageID];
    [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
    __snapshotImage = nil;
    __pageViewController = nil;
    ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
_generatingSnapshots = NO;
And here is the captureSnapshot method:
- (UIImage *)captureSnapshot
{
    CGRect rect = [self.view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return capturedImage;
}
Instruments:
Edit for further details:
The code below is from CUIPDFView, a subclass of UIView:
- (void)drawRect:(CGRect)rect
{
    [self drawInContext:UIGraphicsGetCurrentContext()];
}

- (void)drawInContext:(CGContextRef)context
{
    CGRect drawRect = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, self.bounds.size.height);
    CGContextSetRGBFillColor(context, 1.0000, 1.0000, 1.0000, 1.0f);
    CGContextFillRect(context, drawRect);
    // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system
    // before we start drawing.
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Grab the first PDF page
    CGPDFPageRef page = CGPDFDocumentGetPage(_pdfDocument, _pageNumberToUse);
    // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
    CGContextSaveGState(context);
    // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
    // base rotations necessary to display the PDF page correctly.
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
    // And apply the transform.
    CGContextConcatCTM(context, pdfTransform);
    // Finally, we draw the page and restore the graphics state for further manipulations!
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}
When I delete the drawRect: implementation, the memory problem goes away, but obviously the PDF is no longer drawn.
Try putting an @autoreleasepool inside your loop:
do
{
    @autoreleasepool
    {
        __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
        CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
        [__pageViewController addMainLayers];
        [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
        UIImage *__snapshotImage = [__pageViewController captureSnapshot];
        [[AOAssetManager sharedManager] saveImage:__snapshotImage
                             forPublicationBundle:_publicationTileViewController.publication.bundle
                                           pageID:(__bridge NSString *)__pageID];
        [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
        __snapshotImage = nil;
        __pageViewController = nil;
        ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
    }
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
This will flush the autorelease pool each time through the loop, releasing all the autoreleased objects.
I would be surprised if this is actually a problem with ARC. An object that is still alive still has a strong reference to the layer.
What does [AOAssetManager saveImage:...] do with the image? Are you sure it is not holding onto it?
Is _pageWithID: doing something that is keeping a pointer to CUIPageViewController around?
I am trying to move a plane along a path as the user is defining it, similar to the controls in Flight Control. I am able to draw the path, albeit inefficiently, but I cannot get the plane to animate along the path smoothly.
I originally tried to move the plane along the path by using the implicit animations provided by changing the position, but because it does not allow me to alter the speed of the animation, the plane behaves badly.
I am now trying to use an animation block recursively, but the completion block is being called way too soon, resulting in the plane just following the path being drawn.
This is the setup code:
- (CALayer *)plane {
    if (!_plane) {
        _plane = [CALayer layer];
        UIImage *planeImage = [UIImage imageNamed:@"airplane.png"];
        _plane.bounds = CGRectMake(20.0, 20.0, planeImage.size.width, planeImage.size.height);
        _plane.contents = (id)(planeImage.CGImage);
        [self.layer addSublayer:_plane];
    }
    return _plane;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (isDrawingPath) {
        [self.trackPath addLineToPoint:[[touches anyObject] locationInView:self]];
        NSLog(@"%@", self.trackPath);
        UITouch *touch = [touches anyObject];
        CGPoint toPoint = [touch locationInView:self];
        // now, save each point in order to make the path
        [self.points addObject:[NSValue valueWithCGPoint:toPoint]];
        [self setNeedsDisplay];
        [self goToPointWithIndex:0];
    }
    isDrawingPath = NO;
}
This implementation works, but badly. The plane follows the path, but it is choppy:
- (void)goToPointWithIndex:(NSNumber *)indexer {
    int toIndex = [indexer intValue];
    if (toIndex < self.points.count) {
        // extract the value from the array
        CGPoint toPoint = [(NSValue *)[self.points objectAtIndex:toIndex] CGPointValue];
        CGPoint pos = self.plane.position;
        float delay = PLANE_SPEED * (sqrt(pow(toPoint.x - pos.x, 2) + pow(toPoint.y - pos.y, 2)));
        self.plane.position = toPoint;
        // Allows animation to continue running
        if (toIndex < self.points.count - 1) {
            toIndex++;
        }
        //float delay = 0.2;
        NSLog(@"%f", delay);
        // repeat the method with a new index
        // this method will stop repeating as soon as this "if" gets FALSE
        NSLog(@"index: %d, x: %f, y: %f", toIndex, toPoint.x, toPoint.y);
        [self performSelector:@selector(goToPointWithIndex:) withObject:[NSNumber numberWithInt:toIndex] afterDelay:delay];
    }
}
This is what I am trying to do with blocks. It just skips to the end of the path drawn instead of following the entire thing.
- (void)goToPointWithIndex:(int)toIndex {
    if (self.resetPath) return;
    // extract the value from the array
    if (toIndex < self.points.count) {
        CGPoint toPoint = [(NSValue *)[self.points objectAtIndex:toIndex] CGPointValue];
        NSLog(@"index: %d, x: %f, y: %f", toIndex, toPoint.x, toPoint.y);
        CGPoint pos = self.plane.position;
        //float delay = PLANE_SPEED * (sqrt( pow(toPoint.x - pos.x, 2) + pow(toPoint.y - pos.y, 2)));
        // Allows animation to continue running
        if (toIndex < self.points.count - 1) {
            toIndex++;
        }
        [UIView animateWithDuration:0.2
                         animations:^{
                             self.plane.position = toPoint;
                         }
                         completion:^(BOOL finished) {
                             if (finished == YES) { NSLog(@"Complete"); [self goToPointWithIndex:toIndex]; }
                         }];
    }
}
I have no idea where I'm going wrong. I'm new to Objective-C and blocks. Is the completion block supposed to run right after the animation begins? It doesn't make sense to me, but it's the only explanation I can find.
EDIT: I ended up making the plane a UIView so I could still use block animations. The CALayer transactions did not like my recursion very much at all.
Since self.plane here is a custom CALayer (as opposed to the layer backing a UIView), it will not respect the parameters of UIView animateWithDuration:.
From the docs:
Custom layer objects ignore view-based animation block parameters and use the default Core Animation parameters instead.
Instead use a CABasicAnimation and CALayer's addAnimation:forKey: method (and probably in this case a CAAnimationGroup). How to animate CALayers like this is described in the docs under "Animating Layer Content."
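For a free-form path like this one, a CAKeyframeAnimation with its path property set is probably the most direct route. A rough sketch, reusing the trackPath and plane from the question:

// Sketch: animate the plane layer along the drawn path in one go.
CAKeyframeAnimation *follow = [CAKeyframeAnimation animationWithKeyPath:@"position"];
follow.path = self.trackPath.CGPath;          // the UIBezierPath the user drew
follow.duration = 5.0;                        // tune for the desired speed
follow.calculationMode = kCAAnimationPaced;   // constant speed along the whole path
follow.rotationMode = kCAAnimationRotateAuto; // keep the plane pointing along the path
[self.plane addAnimation:follow forKey:@"followPath"];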
Is there some reason you're only overriding touchesEnded:? That way you will only pick up one point per finger stroke across the screen. It seems to me you should override touchesMoved: and maybe touchesBegan: too (and do more or less the same thing as in touchesEnded:).
I see that the GMSPolyline protocol already defines a color property for its stroke color, but is there a way to shade the inside of its polygon (ideally with transparency)? I’m looking for a Google Maps equivalent to MKPolygon and friends.
A polyline is different from a polygon; polylines have no concept of a fill color. File a feature request for polygons to be added to the SDK.
There is a way, you can get something like this:
The approach is rather simple:
Add a transparent, non-interacting UIView with overridden drawing code, and pass it CGPoints for drawing the polygons.
Get your CLLocationCoordinate2D coordinates for the polygons and convert them to CGPoints for drawing.
Update those CGPoints every time the map moves, so you can redraw the polygons in the right position, and make the UIView redraw itself.
So, what you want to do is add a UIView on top of your map view, transparent and non-user-interactive, with an overridden drawRect: method. It is provided with a double array of CGPoints, like CGPoint **points, accessed as points[i][j], where i indexes the closed polygons and j the individual points of each polygon. The class, let's call it OverView, would be:
#import "OverView.h"
#interface OverView ()
{
CGPoint **points;
int *pointsForPolygon;
int count;
}
#end
#implementation OverView
- (id)initWithFrame:(CGRect)frame andNumberOfPoints:(int)numpoints andPoints:(CGPoint **)passedPoints andPointsForPolygon:(int *)passedPointsForPolygon;{
self = [super initWithFrame:frame];
if (self) {
// You want this to be transparent and non-user-interactive
self.userInteractionEnabled = NO;
self.backgroundColor = [UIColor clearColor];
// Passed data
points = passedPoints; // all CGPoints
pointsForPolygon = passedPointsForPolygon; // number of cgpoints for each polygon
count = numpoints; // Number of polygons
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (int i = 0; i < count; i++) // For each of the polygons, like the blue ones in the picture above
    {
        if (pointsForPolygon[i] < 3) // Require at least 3 points
            continue;
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
        CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
        CGContextSetLineWidth(context, 2.0);
        for (int j = 0; j < pointsForPolygon[i]; j++)
        {
            CGPoint point = points[i][j];
            if (j == 0)
            {
                // Move to the first point
                CGContextMoveToPoint(context, point.x, point.y);
            }
            else
            {
                // Line to the others
                CGContextAddLineToPoint(context, point.x, point.y);
            }
        }
        CGContextClosePath(context); // And close the path
        // Fill and stroke in a single call; filling first would consume the
        // path and leave nothing for a subsequent stroke.
        CGContextDrawPath(context, kCGPathFillStroke);
    }
}

@end
Now, in the original UIViewController with the map view, you need access to all the coordinates that make up the polygons (the same array as points, but consisting of CLLocationCoordinate2D), plus several others:
@interface ViewController () <GMSMapViewDelegate>
{
    CGPoint **points;
    int howmanypoints;
    int *pointsForPolygon;
    CLLocationCoordinate2D **acoordinates;
}
acoordinates is populated wherever you get your coordinates for the polygons; I parse the response string from Fusion Tables. Part of my parser method:
- (void)parseResponse2
{
    NSMutableArray *fullArray = [[self.fusionStringBeaches componentsSeparatedByString:@"\n"] mutableCopy];
    howmanypoints = fullArray.count; // This is the number of polygons
    pointsForPolygon = (int *)calloc(howmanypoints, sizeof(int)); // Number of points for each of the polygons
    points = (CGPoint **)calloc(howmanypoints, sizeof(CGPoint *));
    acoordinates = (CLLocationCoordinate2D **)calloc(howmanypoints, sizeof(CLLocationCoordinate2D *));
    for (int i = 0; i < fullArray.count; i++)
    {
        // Some parsing skipped here
        points[i] = (CGPoint *)calloc(koji, sizeof(CGPoint));
        acoordinates[i] = (CLLocationCoordinate2D *)calloc(koji, sizeof(CLLocationCoordinate2D));
        pointsForPolygon[i] = koji;
        if (koji > 2)
        {
            // Parsing skipped
            for (int j = 0; j < koji; j++)
            {
                CLLocationCoordinate2D coordinate = CLLocationCoordinate2DMake(coordinates[j].latitude, coordinates[j].longitude);
                // Here, you convert the coordinate and add it to the points array to be passed to OverView
                points[i][j] = [self.mapView.projection pointForCoordinate:coordinate];
                // ... and add the coordinate to the array for future access
                acoordinates[i][j] = coordinate;
            }
        }
    }
    // Finally, allocate OverView, passing the points array and the polygon and coordinate counts
    self.overView = [[OverView alloc] initWithFrame:self.view.bounds
                                  andNumberOfPoints:howmanypoints
                                          andPoints:points
                                andPointsForPolygon:pointsForPolygon];
    // And add it to the view
    [self.view addSubview:self.overView];
}
Now you have the polygons where you want them, but you must also observe the - (void)mapView:(GMSMapView *)mapView didChangeCameraPosition:(GMSCameraPosition *)position delegate method, because the drawn polygons won't move with the map. The trick is that you have your 2D array of coordinates, acoordinates, and you can use the helper method [self.mapView.projection pointForCoordinate:coordinate] (which returns a CGPoint) to recalculate the positions, like this:
- (void)mapView:(GMSMapView *)mapView didChangeCameraPosition:(GMSCameraPosition *)position
{
    if (points != nil)
    {
        // Determine new points to pass
        for (int i = 0; i < howmanypoints; i++)
        {
            for (int j = 0; j < pointsForPolygon[i]; j++)
            {
                // Call method to determine the new CGPoint for each coordinate
                points[i][j] = [self.mapView.projection pointForCoordinate:acoordinates[i][j]];
            }
        }
        // No need to pass the points again, as they were passed as pointers; just refresh the view
        [self.overView setNeedsDisplay];
    }
}
And that's it. Hope you got the gist of it. Please, comment if I need to clarify something. I can also make a small complete project and upload it to github so you can research it better.
I need help with drawing something like this:
I have been told that the gray background bar and the purple bar should be drawn on separate layers, and that the dots signifying the chapters of a book (which this slider is about) should be on a layer on top of those two.
I have accomplished the task of creating the gradient on the active bar and drawing it like so:
- (void)drawRect:(CGRect)rect {
    self.opaque = NO;
    CGRect viewRect = self.bounds;
    //NSLog(@"innerRect width is: %f", innerRect.size.width);
    CGFloat perPageWidth = viewRect.size.width / [self.model.book.totalPages floatValue];
    NSLog(@"perpage width is: %f", perPageWidth);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIBezierPath *beizerPathForSegment = [UIBezierPath bezierPath];
    NSArray *arrayFromReadingSessionsSet = [self.model.readingSessions allObjects];
    NSArray *arrayFromAssesmentSet = [self.model.studentAssessments allObjects];
    NSLog(@"array is : %@", self.model.readingSessions);
    CGGradientRef gradient = [self gradient];
    for (int i = 0; i < [arrayFromReadingSessionsSet count]; i++) {
        ReadingSession *tempRSObj = [arrayFromReadingSessionsSet objectAtIndex:i];
        CGFloat pageDifference = [tempRSObj.endPage floatValue] - [tempRSObj.startPage floatValue];
        NSLog(@"startpage is: %@, end page is: %@, total pages are: %@", tempRSObj.startPage, tempRSObj.endPage, self.model.book.totalPages);
        CGRect progressIndicator = CGRectMake(perPageWidth * [tempRSObj.startPage floatValue], viewRect.origin.y, perPageWidth * pageDifference, viewRect.size.height);
        [beizerPathForSegment appendPath:[UIBezierPath bezierPathWithRoundedRect:progressIndicator cornerRadius:13.0]];
    }
    [beizerPathForSegment addClip];
    CGContextDrawLinearGradient(context, gradient,
        CGPointMake(CGRectGetMidX([beizerPathForSegment bounds]), CGRectGetMaxY([beizerPathForSegment bounds])),
        CGPointMake(CGRectGetMidX([beizerPathForSegment bounds]), 0),
        (CGGradientDrawingOptions)0);
}
How do I move this onto a layer, then create the other layers, and stack them on top of one another?
TIA
I’m guessing the person you spoke with was referring to CALayer. In iOS, every view is backed by a CALayer. Instead of implementing -drawRect: in your view, do this:
Link against QuartzCore.
#import <QuartzCore/QuartzCore.h> anywhere you want to use this.
Use your view’s layer property.
Layers behave a lot like views, in that you can have sublayers and superlayers, and layers have properties for things like background color, and they can be animated. A couple of subclasses that will probably be useful for your purposes are CAGradientLayer and CAShapeLayer. For more on how to use layers, refer to the Core Animation Programming Guide.
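As a rough sketch of how those layers might stack for this slider (the geometry and colors, and the names progressWidth and chapterDotsPath, are illustrative assumptions):

#import <QuartzCore/QuartzCore.h>

// Sketch: three stacked sublayers, back to front.
CALayer *track = [CALayer layer]; // gray background bar
track.frame = CGRectMake(0, 0, self.bounds.size.width, 26.0);
track.cornerRadius = 13.0;
track.backgroundColor = [UIColor lightGrayColor].CGColor;
[self.layer addSublayer:track];

CAGradientLayer *progress = [CAGradientLayer layer]; // purple active bar
progress.frame = CGRectMake(0, 0, progressWidth, 26.0); // progressWidth: your computed width
progress.cornerRadius = 13.0;
progress.masksToBounds = YES;
progress.colors = @[(id)[UIColor purpleColor].CGColor,
                    (id)[UIColor magentaColor].CGColor];
[self.layer addSublayer:progress]; // sits above the track

CAShapeLayer *dots = [CAShapeLayer layer]; // chapter markers on top
dots.frame = track.frame;
dots.path = chapterDotsPath.CGPath; // a UIBezierPath of small circles, one per chapter
dots.fillColor = [UIColor whiteColor].CGColor;
[self.layer addSublayer:dots];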