Core Graphics using too much CPU - iOS

I am implementing the following drawRect: method, but it uses more than 50% of the CPU. Any idea how I could solve that? I just draw a few random lines, but I want each of them to have a different width.
- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    @autoreleasepool {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGMutablePathRef path = CGPathCreateMutable();
        float width = rect.size.width;
        int nbLine = 10; // I want to draw 10 paths
        for (int iLine = 1; iLine < nbLine; iLine++) {
            float pathWidth = 0.8 * (nbLine - (float)iLine) / nbLine;
            CGContextBeginPath(context);
            CGContextSetLineWidth(context, pathWidth); // each path should have its own width
            CGPathMoveToPoint(path, NULL, 0, 0);
            for (int i = 0; i < 10; i++) {
                float x = width / (i + 1);
                float y = 1; // for this example I just put a fixed number here - it's normally an external variable
                CGPathAddQuadCurveToPoint(path, NULL, x + width / 10, y, x, 0);
            }
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
        }
        CGPathRelease(path);
    }
}
Thank you!

There are a few things you can try.
Use Instruments to find out exactly which line(s) are using the CPU.
Build the paths in UIBezierPath objects once, then stroke them each time drawRect: runs (see the sketch below).
Look at where setNeedsDisplay is being called from. Most likely a single pass of drawing isn't using too much CPU; it is very possible that the problem is that the view is being redrawn rapidly, over and over.
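A minimal sketch of that second suggestion, using the geometry from the question (the WaveView class name and cachedPaths property are mine, not from the original post):

#import <UIKit/UIKit.h>

@interface WaveView : UIView
@property (nonatomic, strong) NSArray<UIBezierPath *> *cachedPaths;
@end

@implementation WaveView

// Build the ten paths once; drawRect: then only strokes them.
- (void)buildPathsForWidth:(CGFloat)width
{
    NSMutableArray<UIBezierPath *> *paths = [NSMutableArray array];
    int nbLine = 10;
    for (int iLine = 1; iLine < nbLine; iLine++) {
        UIBezierPath *path = [UIBezierPath bezierPath];
        path.lineWidth = 0.8 * (nbLine - (CGFloat)iLine) / nbLine; // per-path width
        [path moveToPoint:CGPointZero];
        for (int i = 0; i < 10; i++) {
            CGFloat x = width / (i + 1);
            CGFloat y = 1; // stands in for the external variable
            [path addQuadCurveToPoint:CGPointMake(x, 0)
                         controlPoint:CGPointMake(x + width / 10, y)];
        }
        [paths addObject:path];
    }
    self.cachedPaths = paths;
}

- (void)drawRect:(CGRect)rect
{
    if (!self.cachedPaths) [self buildPathsForWidth:rect.size.width];
    for (UIBezierPath *path in self.cachedPaths) {
        [path stroke]; // each UIBezierPath carries its own lineWidth
    }
}

@end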

If you are pressed for performance you can use a GLKView. Core Graphics is built on OpenGL with a whole bunch of optimizations aimed at graphical clarity and quality, but if those optimizations are slowing you down to the point of non-usability, then GLKView may be your best bet.
My second suggestion is to not call the draw so often. You said it gets called every 4 ms, which is 250 times per second. The user can't perceive detail that fine, so that is extravagant.
My third suggestion is to use a UIView, draw once, then modify its transform based on your y variable. It appears you could achieve what you are trying to do with a simple y scaling (no x changes, and no change in line width after the one-time draw). I could be oversimplifying based on your code, but it would be a good thing to try. You could also mix this suggestion with your current code and redraw only when the y-scale transform becomes too large.
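A sketch of that third suggestion, where waveView is a hypothetical reference to the view that was drawn once:

// Instead of triggering a redraw, stretch the already-rendered content.
- (void)updateWithY:(CGFloat)y
{
    waveView.transform = CGAffineTransformMakeScale(1.0, y);
}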

Related

iOS: Quartz 2D to draw floor plans

I am building an app where I want to show users interactive floor plans: they can tap on an area, zoom in, and find finer details.
I am getting a JSON response from the back end with the floor plans' metadata, shape info, and points. I plan to parse the points and draw the shapes on a view using Quartz (I have just started learning Quartz 2D). As a start I took a simple blueprint, which looks like the image below.
As per the blueprint, the center is at (0,0) and there are 4 points.
Below are the points that I am getting from the back end for this blueprint.
X:-1405.52, Y:686.18
X:550.27, Y:683.97
X:1392.26, Y:-776.79
X:-1405.52, Y:-776.79
I tried to draw this
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    for (Shape *shape in shapes.shapesArray) {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(context, 2.0);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        BOOL isFirstPoint = YES;
        for (Points *point in shape.arrayOfPoints) {
            NSLog(@"X:%f, Y:%f", point.x, point.y);
            if (isFirstPoint) {
                CGContextMoveToPoint(context, point.x, point.y);
                //[bpath moveToPoint:CGPointMake(point.x, point.y)];
                isFirstPoint = NO;
                continue;
            }
            // NB: point.x is passed for both arguments here; point.y was
            // presumably intended as the second one.
            CGContextAddLineToPoint(context, point.x, point.x);
        }
        CGContextStrokePath(context);
    }
}
But I am getting the image below as the result, which does not look correct.
Questions:
Am I heading in the right direction to achieve this?
How do I draw points in the negative direction?
According to the coordinates the drawing will be huge; I want to draw it first and then shrink it to fit the screen, so that users can later zoom in, pan, etc.
UPDATE:
I have achieved some of the basics using translation and scaling. The problem now is how to fit the drawn content to the view bounds: since my coordinates are pretty big, the drawing goes out of bounds, and I want it to fit.
Please find below the test code that I have.
Any idea how to fit it?
DrawshapefromJSONPoints
I have found a solution to the problem. I am now able to draw shapes from a set of points, fit them to the screen, and make them zoomable without quality loss. Please find the link to the project below. It can be tweaked to support colors and different shapes; I am currently handling only polygons.
DrawShapeFromJSON
A few observations:
There is nothing wrong with using Quartz for this, but you might find OpenGL more flexible, since you mention requirements for hit-testing areas and scaling the model in real time.
Your code assumes an ordered list of points, since you plot the path using CGContextMoveToPoint. If that ordering is guaranteed by the data contract, fine; however, you will need to write a more intelligent renderer if your JSON can return multiple closed paths.
Questions 2 and 3 can be covered by a primer in computer graphics (specifically the model-view matrix and transformations). If your world coordinates are centered at (0,0), you can scale the model by applying a scalar to each vertex. Drawing points on the negative axis will make more sense once you stop thinking in the Quartz 2D coordinate system of (0,0)-(w,h).
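To make the transformation idea concrete, here is a minimal sketch; boundingBoxOfShapes is a hypothetical helper that unions the coordinates of all points, and the shape-stroking loop is the one from the question:

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Bounding box of the model in world coordinates.
    CGRect model = [self boundingBoxOfShapes]; // hypothetical helper

    // One uniform scale factor so the whole model fits the view.
    CGFloat scale = MIN(rect.size.width / model.size.width,
                        rect.size.height / model.size.height);

    // Put the world origin at the view's center and flip y so that
    // positive y points up, matching the blueprint.
    CGContextTranslateCTM(context, CGRectGetMidX(rect), CGRectGetMidY(rect));
    CGContextScaleCTM(context, scale, -scale);

    // ...stroke the shapes in their original world coordinates as before...
}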

In which case does drawRect not receive the full frame of the UIView?

I am writing a patch bay control, and I'm using UIViews to draw the links between the patches.
These links are subviews of a large UIView, itself a subview of a UIScrollView.
Links can become quite large, typically four times the size of the screen.
Links need to be redrawn when one of their end patches moves.
However, there are situations where only a part of the link is visible.
Instruments indicates that most of the time is spent in my QCLink drawRect method.
I have checked that the drawRect method is called with the full bounds of the QCLink each time the QCLink needs to be redrawn.
Is this a situation where I should redraw only a part of the UIView (the rect argument of drawRect:)?
Here are some screen captures to help you understand the problem I'm facing.
Your drawRect: implementation should always be prepared to draw only a portion of your view. For some tasks, the default clipping is good.
Just follow the view-invalidation process: if you invalidate a rect, the view system traverses the views and asks them to draw what lies in that rect (considering things like opacity). That rect may be a union of invalidated rects, and it may also be clipped by the system.
So you are probably overdrawing or doing redundant drawing; consider how that may be reduced. For starters, you might want to put all your cords in one view, and do everything you can to minimize surfaces which are not opaque. After that, determine where you are overdrawing; Quartz Debug can point out redundant draws. You should be using setNeedsDisplayInRect: rather than setNeedsDisplay, especially where drawing times are critical.
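For example, a sketch of invalidating only the area covered by a moved cord (oldFrame, newFrame, and cordView are hypothetical names):

// Invalidate just the union of the cord's old and new positions,
// padded by the stroke width, instead of the whole view.
CGRect dirty = CGRectUnion(oldFrame, newFrame);
[cordView setNeedsDisplayInRect:CGRectInset(dirty, -2.0, -2.0)];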
You are probably looking for a CATiledLayer. Tiled layers only draw the parts which are on screen and need to be rendered, making them great for views that may be too large for the screen or where the user is going to pinch-to-zoom. They are core to how views like UIWebView are rendered.
To switch your view to use a tiled layer, you just have to declare this method:
+ (Class)layerClass
{
    return [CATiledLayer class];
}
You will then start seeing calls to drawRect: with values like (0, 0, 256, 256), (0, 256, 256, 256)...
When moving things around, you can get an extra performance win by calling setNeedsDisplayInRect: instead of setNeedsDisplay. This will limit drawing to the invalidated rect.
It's always the full rectangle. What about using CAShapeLayers instead?
Something like:
CAShapeLayer *link = [CAShapeLayer layer];
link.strokeColor = [UIColor greenColor].CGColor;

CGMutablePathRef p = CGPathCreateMutable();
CGPathMoveToPoint(p, NULL, start.x, start.y); // start and end are CGPoints
CGPathAddLineToPoint(p, NULL, end.x, end.y);
link.path = p;
CGPathRelease(p); // the layer retains the path

[parentView.layer addSublayer:link];

iOS CGPath Performance

UPDATE
I got around Core Graphics' limitations by drawing everything with OpenGL. There are still some glitches, but so far it's working much, much faster.
Some interesting points:
GLKView: an iOS-specific view that helps a lot in setting up the OpenGL context and rendering loop. If you're not on iOS, I'm afraid you're on your own.
Shader precision: shader variables in the current version of OpenGL ES (2.0) have 16-bit precision. That was a little low for my purposes, so I emulated 32-bit arithmetic with pairs of 16-bit variables.
GL_LINES: OpenGL ES can natively draw simple lines. Not very well (no joints, no caps; see the purple/grey line at the top of the screenshot below), but to improve that you'll have to write a custom shader, convert each line into a triangle strip, and pray that it works! (Supposedly that's how browsers do it when they tell you that Canvas2D is GPU-accelerated.) A minimal strip-expansion sketch follows this list.
Draw as little as possible. I suppose that makes sense; for instance, you can frequently avoid rendering things that are outside of the viewport.
OpenGL ES has no support for filled polygons, so you have to tessellate them yourself. Consider using iPhone-GLU: it's a port of the MESA code and it's pretty good, although it's a little hard to use (no standard Objective-C interface).
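A minimal sketch of that line-to-strip expansion in plain C, assuming 2D points and a non-zero segment length (the names are mine):

#include <math.h>

typedef struct { float x, y; } Vec2;

// Expand the segment a-b into 4 vertices forming a quad of the given
// width, drawable as GL_TRIANGLE_STRIP (triangles 0-1-2 and 1-2-3).
void lineToStrip(Vec2 a, Vec2 b, float width, Vec2 out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = sqrtf(dx * dx + dy * dy);
    // Unit normal to the segment, scaled to half the stroke width.
    float nx = -dy / len * width * 0.5f;
    float ny =  dx / len * width * 0.5f;
    out[0] = (Vec2){ a.x + nx, a.y + ny };
    out[1] = (Vec2){ a.x - nx, a.y - ny };
    out[2] = (Vec2){ b.x + nx, b.y + ny };
    out[3] = (Vec2){ b.x - nx, b.y - ny };
}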
Original Question
I'm trying to draw lots of CGPaths (typically more than 1000) in the drawRect method of my scroll view, which is refreshed when the user pans with his finger. I have the same application in JavaScript for the browser, and I'm trying to port it to an iOS native app.
The iOS test code is (with 100 line operations, path being a pre-made CGMutablePathRef):
- (void)drawRect:(CGRect)rect {
    // Start the timer
    BSInitClass(@"Renderer");
    BSStartTimedOp(@"Rendering");

    // Get the context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetFillColorWithColor(context, [[UIColor redColor] CGColor]);
    CGContextSetStrokeColorWithColor(context, [[UIColor blueColor] CGColor]);
    CGContextTranslateCTM(context, 800, 800);

    // Draw the points
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // Display the elapsed time
    BSEndTimedOp(@"Rendering");
}
In JavaScript, for reference, the code is (with 10000 line operations):
window.onload = function() {
    canvas = document.getElementById("test");
    ctx = canvas.getContext("2d");

    // Prepare the points before drawing
    var data = [];
    for (var i = 0; i < 100; i++) data.push({x: Math.random() * canvas.width, y: Math.random() * canvas.height});

    // Draw those points, and write the elapsed time
    var __start = new Date().getTime();
    for (var i = 0; i < 100; i++) {
        for (var j = 0; j < data.length; j++) {
            var d = data[j];
            if (j == 0) ctx.moveTo(d.x, d.y);
            else ctx.lineTo(d.x, d.y);
        }
    }
    ctx.stroke();
    document.write("Finished in " + (new Date().getTime() - __start) + "ms");
};
Now, I'm much more proficient in optimizing JavaScript than I am at iOS, but after some profiling it seems that CGPath's overhead is absolutely, incredibly bad compared to JavaScript. Both snippets run at about the same speed on a real iOS device, yet the JavaScript code performs 100x the line operations of the Quartz 2D code!
EDIT: Here is the top of the time profiler in Instruments :
Running Time Self Symbol Name
6487.0ms 77.8% 6487.0 aa_render
449.0ms 5.3% 449.0 aa_intersection_event
112.0ms 1.3% 112.0 CGSColorMaskCopyARGB8888
73.0ms 0.8% 73.0 objc::DenseMap<objc_object*, unsigned long, true, objc::DenseMapInfo<objc_object*>, objc::DenseMapInfo<unsigned long> >::LookupBucketFor(objc_object* const&, std::pair<objc_object*, unsigned long>*&) const
69.0ms 0.8% 69.0 CGSFillDRAM8by1
66.0ms 0.7% 66.0 ml_set_interrupts_enabled
46.0ms 0.5% 46.0 objc_msgSend
42.0ms 0.5% 42.0 floor
29.0ms 0.3% 29.0 aa_ael_insert
It is my understanding that this should be much faster on iOS, simply because the code is native... So, do you know:
...what I am doing wrong here?
...and if there's another, better solution to draw that many lines in real-time?
Thanks a lot!
As you described in your update, using OpenGL is the right solution.
Theoretically you can emulate any kind of graphics drawing with OpenGL, but you need to implement all the shape algorithms yourself. For example, you need to extend the edge corners of lines yourself; there is no real concept of lines in OpenGL. Its line drawing is a utility feature, used almost only for debugging, and you should treat everything as a set of triangles.
I believe 16-bit floats are enough for most drawings. If you're using coordinates with large values, consider dividing space into multiple sectors to keep the coordinate numbers smaller; float precision degrades as values get very large or very small.
Update
I think you will meet this issue soon if you try to display UIKit content over an OpenGL view. Unfortunately, I couldn't find a solution for it either:
How to synchronize OpenGL drawing with UIKit updates
You killed CGPath performance by using CGContextAddPath.
Apple explicitly says this will run slowly; if you want it to run fast, you have to attach your CGPath objects to CAShapeLayer instances.
You're doing dynamic, runtime drawing, which blocks all of Apple's performance optimizations. Try switching to CALayer, and especially CAShapeLayer, and you should see performance improve by a large amount.
(NB: there are other performance bugs in CG rendering that might affect this use case, such as obscure default settings in CG/Quartz/CA, but you need to get rid of the CGContextAddPath bottleneck first.)
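A sketch of that switch, assuming the pre-made path and the scroll view from the question:

// Hand the pre-built CGPath to a CAShapeLayer once; Core Animation then
// composites the already-rendered stroke instead of re-rasterizing it
// in every drawRect: while the user pans.
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = path; // the pre-made CGMutablePathRef
shapeLayer.strokeColor = [UIColor blueColor].CGColor;
shapeLayer.fillColor = nil;
shapeLayer.lineWidth = 2.0;
[scrollView.layer addSublayer:shapeLayer];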

Why is renderInContext so much slower than drawing to the screen?

I have a pretty large CAShapeLayer that I'm rendering. The layer is completely static, but it's contained in a UIScrollView so it can move around and be zoomed -- basically, it must be redrawn every now and then. In an attempt to improve the framerate of this scrolling, I set shouldRasterize = YES on the layer, which worked perfectly. Because I never change any property of the layer it never has a rasterization miss and I get a solid 60 fps. High fives all around, right?
Until the layer gets a little bigger. Eventually -- and it doesn't take long -- the rasterized image gets too large for the GPU to handle. According to my console, <Notice>: CoreAnimation: surface 2560 x 4288 is too large, and it just doesn't draw anything on the screen. I don't really blame it -- 2560 x 4288 is pretty big -- but I spent a while scratching my head before I noticed this in the device console.
Now, my question is: how can I work around this limitation? How can I rasterize a really large layer?
The obvious solution seems to be to break the layer up into multiple sublayers, say one for each quadrant, and rasterize each one independently. Is there an "easy" way to do this? Can I create a new layer that renders a rectangular area from another layer? Or is there some other solution I should explore?
Edit
Creating a tiled composite seems to have really bad performance, because the layers are re-rasterized every time they enter the screen, making for a very jerky scrolling experience. Is there some way to cache those rasterizations? Or is this the wrong approach altogether?
Edit
Alright, here's my current solution: render the layer once to a CGImageRef. Create multiple tile layers using sub-rectangles from that image, and actually put those on the screen.
- (CALayer *)getTiledLayerFromLayer:(CALayer *)sourceLayer withHorizontalTiles:(int)horizontalTiles verticalTiles:(int)verticalTiles
{
    CALayer *containerLayer = [CALayer layer];
    CGFloat tileWidth = sourceLayer.bounds.size.width / horizontalTiles;
    CGFloat tileHeight = sourceLayer.bounds.size.height / verticalTiles;
    // make sure these are integral, otherwise you'll have image alignment issues!
    NSLog(@"tileWidth:%f height:%f", tileWidth, tileHeight);

    // Render the source layer once into a bitmap.
    UIGraphicsBeginImageContextWithOptions(sourceLayer.bounds.size, NO, 0);
    CGContextRef tileContext = UIGraphicsGetCurrentContext();
    [sourceLayer renderInContext:tileContext];
    CGImageRef image = CGBitmapContextCreateImage(tileContext);
    UIGraphicsEndImageContext();

    // Give each tile layer its sub-rectangle of the bitmap via contentsRect
    // (contentsRect uses unit coordinates, hence the division by tile counts).
    for (int horizontalIndex = 0; horizontalIndex < horizontalTiles; horizontalIndex++) {
        for (int verticalIndex = 0; verticalIndex < verticalTiles; verticalIndex++) {
            CGRect frame = CGRectMake(horizontalIndex * tileWidth, verticalIndex * tileHeight, tileWidth, tileHeight);
            CGRect visibleRect = CGRectMake(horizontalIndex / (CGFloat)horizontalTiles, verticalIndex / (CGFloat)verticalTiles, 1.0f / horizontalTiles, 1.0f / verticalTiles);

            CALayer *tile = [CALayer layer];
            tile.frame = frame;
            tile.contents = (__bridge id)image;
            tile.contentsRect = visibleRect;
            [containerLayer addSublayer:tile];
        }
    }

    CGImageRelease(image);
    return containerLayer;
}
This works great... sort of. On the one hand, I get 60 fps panning and zooming of a 1980 x 3330 layer on a Retina iPad. On the other hand, it takes 20 seconds to start up! So while this solution solves my original problem, it gives me a new one: how can I generate the tiles faster?
Literally all of the time is spent in the [sourceLayer renderInContext:tileContext]; call. This seems weird to me, because if I just add that layer directly I can render it about 40 times per second, according to the Core Animation instrument. Is it possible that creating my own image context causes it to not use the GPU or something?
Breaking the layer into tiles is the only solution. You can, however, implement it in many different ways. I suggest doing it manually (creating layers and sublayers on your own), but many recommend using CATiledLayer (http://www.mlsite.net/blog/?p=1857), which is how maps are usually implemented; zooming and rotating are quite easy with it. The tiles of a CATiledLayer are loaded (drawn) on demand, just after they are put on the screen. This implies a short delay (blink) before each tile is fully drawn, and as far as I know it is quite hard to get rid of that behaviour.
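For the CATiledLayer route, a minimal sketch of a zoomable tiled view; the class name and tile parameters are mine, and the per-tile drawing body is left to your shape model:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledShapeView : UIView
@end

@implementation TiledShapeView

+ (Class)layerClass { return [CATiledLayer class]; }

- (instancetype)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(512, 512);
        tiledLayer.levelsOfDetailBias = 3; // keep tiles sharp while zooming in up to 8x
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    // Called once per tile, potentially on several background threads;
    // draw only the content that intersects `rect` here.
}

@end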

Animating the drawing of a line programatically using Quartz 2d on IOS

I'm trying to draw an animated, growing line with Quartz 2D by gradually adding points to an existing line over time. In the drawRect: method of a UIView, I started drawing a new line by obtaining the CGContextRef, setting its draw properties, and moving the first point to (0,0).
CGContextRef context= UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context,[UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 2);
CGContextMoveToPoint(context,0,0);
Later, in my next drawRect: call, I tried extending that line by again obtaining the CGContextRef and adding a new point to it.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddLineToPoint(context, x, y);
But it seems that the current CGContextRef has no record of my previous CGContextMoveToPoint command from the last drawRect: call, and therefore no knowledge that I had already started drawing a line.
Am I doing something wrong here? Is there a way to refer to an already-drawn line?
You basically need to treat each call to drawRect: as if it were starting from scratch. Even if you are only asked to update a subrect of the view, you should assume that any state associated with the graphics context, such as drawing position and colours, has been reset. So in your case, you need to keep track of all the points so far and redraw the whole line each time.
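A minimal sketch of that approach, where points (an NSMutableArray of NSValue-wrapped CGPoints) is an assumed property of the view:

// Append a point and invalidate; drawRect: rebuilds the whole line.
- (void)addPoint:(CGPoint)p
{
    [self.points addObject:[NSValue valueWithCGPoint:p]];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 2);
    CGContextMoveToPoint(context, 0, 0);
    for (NSValue *value in self.points) {
        CGPoint p = [value CGPointValue];
        CGContextAddLineToPoint(context, p.x, p.y);
    }
    CGContextStrokePath(context);
}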
I think the better approach is to animate a thin UIView; see my answer here.
If you need more than just a horizontal line, rotate that view. I think it's better for performance.
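A sketch of that thin-view idea (the frame values are arbitrary):

// A 2 pt tall view whose width grows from 0, reading as a line being drawn.
UIView *line = [[UIView alloc] initWithFrame:CGRectMake(0, 100, 0, 2)];
line.backgroundColor = [UIColor blueColor];
[self.view addSubview:line];
[UIView animateWithDuration:2.0 animations:^{
    line.frame = CGRectMake(0, 100, 320, 2);
}];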
