How to add, rotate and drag a line in iOS? - ios

What would be the best way in accomplishing this? Let's say I have a method (not defined) that would allow a line with a fixed size and color to be drawn on the screen. This line would need to then accept rotate gestures and panning gestures in order to move around the screen. It won't resize, it only needs to rotate and translate.
What is the best way of going about this? Should lines be subviews or sublayers to parent view? What is the method for drawing a line in ios? How to handle multiple lines on screen? I just want someone to lead me down the right path in the ios graphics jungle.

Firstly, you need to consider how complex the whole drawing is. From your description it sounds like the task is relatively simple. If that is the case, then Core Graphics would be the way to go. If the drawing is significantly more complex, you should look at OpenGL ES and GLKit, though using OpenGL involves a fair bit more work.
Assuming Core Graphics, I'd store the centre point, angle and length of each line, change the angle and position using the gesture recognizers, and calculate the endpoints to draw using basic trigonometry. Loop over the lines in the view's -drawRect: method and draw each one with the appropriate CG functions; call [view setNeedsDisplay] or [view setNeedsDisplayInRect:areaToRedraw] to trigger the redraws. (The second method only redraws the part of the view you specify, and can be used to improve performance.)
The first of a series of tutorials on Core Graphics is here; the part on 'drawing lines' will be most relevant. I haven't done this one (I used the old edition of this book), but I've followed a lot of others from this site and found them very helpful.
As a side note, you'll probably need a way to focus on a particular line if you have more than one on the screen; an easy way would be to find the line whose centre point is closest to the point the user touched.

It seems that the best API for drawing the lines you want is Core Graphics. Put this code in your UIView's drawRect: method:
/* Set the color that we want to use to draw the line */
[[UIColor redColor] set];
/* Get the current graphics context */
CGContextRef currentContext = UIGraphicsGetCurrentContext();
/* Set the width for the line */
CGContextSetLineWidth(currentContext, 5.0f);
/* Start the line at this point */
CGContextMoveToPoint(currentContext, 50.0f, 10.0f);
/* And end it at this point */
CGContextAddLineToPoint(currentContext, 100.0f, 200.0f);
/* Use the context's current color to draw the line */
CGContextStrokePath(currentContext);
For the gesture recognition, use UIGestureRecognizers with handler methods like the following:
- (IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer

Related

UIView with drawRect/update image performance issue on Retina displays

The problem in short:
It looks like drawRect itself (even an empty one) leads to a significant performance bottleneck depending on the device's resolution: the bigger the screen, the worse things are.
Is there a way to speed up redrawing of view's content?
In more details:
I'm creating a small drawing app on iOS - user moves his finger over a display to draw a line.
The idea behind this is quite simple: while the user moves their finger, touchesMoved accumulates the changes into an offscreen buffer image and invalidates the view to merge the offscreen buffer with the view's content.
A simplified code snippet may look like this:
@interface CanvasView : UIView
...
@end

@implementation CanvasView{
UIImage *canvasContentImage;
UIImage *bufferImage;
CGContextRef drawingContext;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
// prepare drawing to start
UIGraphicsBeginImageContext(canvasSize);
drawingContext = UIGraphicsGetCurrentContext();
...
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
// draw to bufferImage
CGContextMoveToPoint(drawingContext, prevPoint.x, prevPoint.y);
CGContextAddLineToPoint(drawingContext, point.x, point.y);
CGContextStrokePath(drawingContext);
...
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
//finish drawing
UIGraphicsEndImageContext();
//merge canvasContentImage with bufferImage
...
}
-(void)drawRect:(CGRect)rect{
// draw bufferImage - merge it with current view's content
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, self.bounds, canvasContentImage.CGImage);
CGContextDrawImage(context, imageRect, bufferImage.CGImage);
...
}
I've also implemented a small helper class to calculate fps rate.
So the approach above works rather well on non-Retina screens, producing nearly 60fps. However, the fps rate drops dramatically on Retina screens. For example, on a Retina iPad it is about 15-20 fps, which is too slow.
The first obvious cause I thought of was that setNeedsDisplay redraws the full screen, which is a big waste of resources. So I moved to setNeedsDisplayInRect: to update only the dirty region. Surprisingly, it didn't change anything performance-wise (at least nothing noticeable in the measurements or visually).
So I started trying different approaches to find the bottleneck. When I commented out all the drawing logic, the fps rate still stayed at 15-20; it looks like the problem lies outside the drawing logic.
Finally, when I fully commented out the drawRect method, the fps rose to 60. Note that I removed not only the implementation but even the declaration. I'm not sure of my terminology, so here is what that looks like:
// -(void)drawRect:(CGRect)rect{
// // draw bufferImage - merge it with current view's content
// ...
// }
What is more interesting: when I moved all the drawing code from the drawRect method to the touchesMoved method, it didn't impact performance, even though the same amount of drawing/processing logic remained compared to the drawRect version; even updating the entire view every time still gave me 60fps.
One problem is that without drawRect I'm not able to visualize the changes.
So I came back to what the pregenerated drawRect stub warns about:
"Only override drawRect: if you perform custom drawing. An empty
implementation adversely affects performance during animation."
My guess is that the system creates and destroys a graphics context every time a custom drawRect is triggered, leading to the "adversely affects performance" part.
So the questions are:
Is there any way to speed up drawRect calls, like make the system reuse resources from call to call of drawRect or something?
If this is a dead end, what other approaches are available to update the view's content? Moving to OpenGL is not an option at the moment, as there is a lot of code/logic already implemented and porting it would take a lot of effort.
I'll be glad to provide any additional information needed.
And Thanks in advance for any suggestions!
EDIT:
After some more investigation and experiments I came to using a UIImageView and its image property to update the view's content dynamically. It gave a small performance improvement (the drawing is more stable at 19-22 fps). However, it is still far from the target 50-60fps.
One thing I've noticed is that updating only the dirty part of the offscreen buffer image really makes sense: without forcing the view's content to update, the pure logic of redrawing the offscreen buffer gives around 60 fps.
But once I try to assign the updated image to the UIImageView.image property to update its content, the fps drops to the mentioned 19-22. This is reasonable, as assigning the property forces the whole image to be redrawn on the view's side.
So the question still remains: is there any way to update only a specified portion of the view's (UIImageView's) displayed content?
After spending several days on this I came to an unexpected (at least for myself) result.
I was able to achieve 30fps on a Retina iPad, which is an acceptable result for now.
The trick that worked for me was:
Subclass UIImageView
Use the UIImageView.image property to update content - it gives better results compared to an ordinary UIView with the setNeedsDisplay/setNeedsDisplayInRect: methods.
Use [self performSelector:@selector(setImage:) withObject:img afterDelay:0]; instead of plain UIImageView.image = img to set the updated image on the UIImageView.
The last point is a kind of magic spell to me for now, but it gives the minimum necessary frame rate (even though the original image is redrawn fully each frame, not only the dirty regions).
My guess as to why performSelector helped gain fps here is that scheduling setImage: on the run loop smooths over internal stalls that may occur during touch event processing.
This is only my guess and if anyone can provide relevant explanation I'll be glad to post it here or accept that answer.

Best way to handle autoresizing of UIView with custom drawing

I'm working on a custom view, that has some specific Core Graphics drawings. I want to handle the view's autoresizing as efficiently as possible.
If I have a vertical line drawn in UIView, and the view's width stretches, the line's width will stretch with it. I want to keep the original width, therefore I redraw each time in -layoutSubviews:
- (void)drawRect:(CGRect)rect
{
[super drawRect:rect];
// ONLY drawing code ...
}
- (void)layoutSubviews
{
[super layoutSubviews];
[self setNeedsDisplay];
}
This works fine; however, I don't think this is an efficient approach - unless CGContext drawing is blazingly fast.
So is it really fast? Or is there better way to handle view's autoresizing? (CALayer does not support autoresizing on iOS).
UPDATE :
this is going to be a reusable view, and its task is to draw a visual representation of data supplied by the dataSource. So in practice there could really be a lot of drawing. If it is impossible to optimize this any further, then there's nothing I can do... but I seriously doubt I'm taking the right approach.
It really depends on what you mean by "fast" but in your case the answer is probably "No, CoreGraphics drawing isn't going to give you fantastic performance."
Whenever you draw in drawRect (even if you use CoreGraphics to do it) you're essentially drawing into a bitmap, which backs your view. The bitmap is eventually sent over to the lower level graphics system, but it's a fundamentally slower process than (say) drawing into an OpenGL context.
When you have a view drawing with drawRect it's usually a good idea to imagine that every call to drawRect "creates" a bitmap, so you should minimize the number of times you need to call drawRect.
If all you want is a single vertical line, I would suggest making a simple view with a width of one point, configured to layout in the center of your root view and to stretch vertically. You can color that view by giving it a background color, and it does not need to implement drawRect.
Using views is usually not recommended, and drawing directly is actually preferred, especially when the scene is complex.
If you see your drawing code is taking a considerable toll, steps to optimize drawing further is to minimize drawing, by either only invalidating portions of the view rather than entirely (setNeedsDisplayInRect:) or using tiling to only draw portions.
For instance, when a view is resized, if you only need to draw in the areas where the view has changed, you can monitor and calculate the difference in size between the current and previous layout, and only invalidate the regions that have changed. Edit: It seems iOS does not allow partial view drawing, so you may need to move your drawing to a CALayer, and use that as the view's layer.
CATiledLayer can also give a possible solution, where you can cache and preload tiles and draw required tiles asynchronously and concurrently.
But before you take drastic measures, test your code in difficult conditions and see if your code is performant enough. Invalidating only updated regions can assist, but it is not always straightforward to limit drawing to a provided rectangle. Tiling adds even more difficulty, as the tiling mechanism requires learning, and elements are drawn on background threads, so concurrency issues also come in play.
Here is an interesting video on the subject of optimizing 2D drawing from Apple WWDC 2012:
https://developer.apple.com/videos/wwdc/2012/?include=506#506

What's the best most CPU efficient way to draw views with a lot of animations in iOS?

I'm trying to draw a graphic equaliser for an iOS project.
The equaliser will have 7 bars, representing different frequency bands, that move up and down based on real-time audio data.
Can anyone suggest the best way to approach this in iOS?
New frequency data comes in at about 11Hz, so the bars would have to animate to a new size 11 times per second.
Do I create a UIView for each bar and dynamically resize its frame height?
Do I draw the bars as thick CGStrokes and redraw them within the parent view as needed?
Another option?
Thanks in advance
You want to use Core Animation. The basic principle is to create a bunch of "layer" objects, which can either be bitmap images, vector shapes, or text. Each layer is stored on the GPU and most operations can be animated at 60 frames per second.
Think of layers like DOM nodes in an HTML page: they can be nested inside each other and you can apply attributes to each one, similar to CSS. The list of available attributes matches everything the GPU can do efficiently.
It sounds like you want vector shapes. Basically you create all your shapes at startup, for example in the awakeFromNib method of a UIView subclass. For simple rectangles use CALayer and set a background colour. For more complicated shapes create a CAShapeLayer and a UIBezierPath, then apply it with shapeLayer.path = bezierPath.CGPath;.
Then, whenever you want to change something, you apply those changes to the layer object. For example, here I'm rotating a layer with a 1 second linear animation:
[CATransaction begin];
[CATransaction setAnimationDuration:1];
[CATransaction setAnimationTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]];
[self.needleLayer setValue:[NSNumber numberWithFloat:DegreesToRadians(degrees)] forKeyPath:@"transform.rotation.z"];
[CATransaction commit];
// you'll want to declare this somewhere
CGFloat DegreesToRadians(CGFloat degrees)
{
return degrees * M_PI / 180;
}
More complicated animations, eg a series of changes scheduled to execute back to back, can be done using a CAKeyframeAnimation: https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/CoreAnimation_guide/CreatingBasicAnimations/CreatingBasicAnimations.html
Note that Core Animation only does 2D graphics. Apple has Scene Kit, which is basically the same thing for 3D, but so far it's only available on OS X. Hopefully iOS 8 will include it, but until then, if you want 3D graphics on iOS you need to use OpenGL.
CALayers which you resize on demand would probably be the most efficient way to do this, if the bars are solid colours. This allows you to optionally animate between sizes as well.
View resizing triggers layout cycles, which you don't want. Drawing using CG calls is pretty slow.
Obviously the only real way to find out is to profile (on a device) using instruments and the core animation tool. But from experience, layer sizing is faster than drawing.
Definitely not a UIView for each - instead, a single UIView for the entire equalizer. Fill in the drawRect method with the appropriate CG calls to draw whatever is required. You can queue the view to refresh as needed with the appropriate data. Tapping into CADisplayLink will help you get the frame-rate you're looking for.
https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CADisplayLink_ClassRef/Reference/Reference.html
NOTE: You can also subclass CALayer and draw in it if you prefer something lighter-weight than UIView but I think you'll be fine with the former.

Free hand painting and erasing using UIBezierPath and CoreGraphics

I have been trying so much but have found no solution yet. I have to implement painting and erasing on iOS, so I successfully implemented the painting logic using UIBezierPath. The problem is that for erasing I implemented the same logic as for painting, using kCGBlendModeClear, but I can't redraw on the erased area. This is because on each pass through drawRect I have to stroke both the painting and the erasing paths. So is there any way to subtract the erasing path from the drawing path to get the resultant path and then stroke that? I am very new to Core Graphics and am looking forward to your replies and comments. Or any other logic to implement the same. I can't use the background color as an eraser because my background is textured.
You don't need to stroke the path every time, in fact doing so is a huge performance hit. I guarantee if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes, just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to that outside the main loop, and only render it to your rect in the main loop so it saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, then move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. contains the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect render the CGLayer into the current window context ( UIGraphicsGetCurrentContext() ).
On an iPad 3 (the one everyone has the most trouble with, due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT With the new info from your comments, it seems that the answer to this question will benefit you -> Use a CoreGraphic Stroke as Alpha Mask in iPhone App
Hi, here is code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement this in your project.
Here

iOS : need inputs in developing efficient ( performance wise ) drawing app

I have this app with which one can draw basic shapes like rectangles, ellipses, circles, text, etc.
I also allow free-form drawing, which is stored as a set of points, on the canvas.
Also a user can resize and move around these objects by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have following things in mind -
Use UIView's InvalidateRect and drawRect
Have a UIView for the main canvas and for each inserted object: invalidate the corresponding rect and redraw all the objects that intersect that rect in the drawRect function of the UIView.
Have a UIView and use CALayer ?
everyone keeps mentioning CALayer, but I don't have much idea of it; before I venture into this, I wanted a quick input on whether this route is worth taking.
like, https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as canvas and when drawing each object, we do this
i) Draw the object into an offscreen CGContext: basically, create a new context using UIGraphicsBeginImageContext, draw the shape, extract the image out of this context and use that as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object, because you don't have to redraw the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape being modified. In other words:
use a CALayer to draw the shape being added or modified, but don't draw it on the UIImage
use a UIImage to draw the other shapes
But OpenGL is still the most efficient solution; don't bother with it, though, if you don't have too many objects to draw.
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView. I'd say UIView, since you won't be needing UIImageView's properties or methods for drawing.
