Efficient way to draw a graph line by line in CALayer - ios

I need to draw a line chart from values that arrive every half second. I've come up with a custom CALayer for this graph which stores all the previous lines and, every two seconds, redraws all of them and adds one new line. I find this solution non-optimal because only one additional line actually needs to be drawn to the layer; there is no reason to redraw potentially thousands of previous lines.
What do you think would be the best solution in this case?

Use your own bitmap context (CGBitmapContext) or UIImage as a backing store. Whenever new data comes in, draw to this context and set your layer's contents property to the context's image.
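A minimal sketch of that approach, assuming the layer's owner keeps a bitmapContext (CGContextRef) property and a graphLayer reference, with previousPoint/currentPoint standing in for the last and newest data points:
// One-time setup: create a bitmap context the size of the layer.
size_t width  = (size_t)self.graphLayer.bounds.size.width;
size_t height = (size_t)self.graphLayer.bounds.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
self.bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                           kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Each time a new value arrives, stroke only the newest segment into the context...
CGContextSetLineWidth(self.bitmapContext, 2.0);
CGContextSetStrokeColorWithColor(self.bitmapContext, [UIColor blueColor].CGColor);
CGContextMoveToPoint(self.bitmapContext, previousPoint.x, previousPoint.y);
CGContextAddLineToPoint(self.bitmapContext, currentPoint.x, currentPoint.y);
CGContextStrokePath(self.bitmapContext);

// ...then hand the accumulated bitmap to the layer (no redraw of older lines).
CGImageRef image = CGBitmapContextCreateImage(self.bitmapContext);
self.graphLayer.contents = (__bridge id)image;
CGImageRelease(image);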

I am looking at an identical implementation. My graph updates every 500 ms. Similarly, I felt uncomfortable drawing the entire graph on each iteration. I implemented a solution 'similar' to what Nikolai Ruhe proposed, as follows:
First some declarations:
#define TIME_INCREMENT 10
@property (nonatomic) UIImage *lastSnapshotOfPlot;
and then the drawLayer:inContext: method of my CALayer delegate:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // Restore the image of the layer from the last time through, if it exists
    if (self.lastSnapshotOfPlot)
    {
        // For some reason the image is being redrawn upside down!
        // This block of code adjusts the context to correct it.
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, layer.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        // Now we can redraw the image right side up but shifted over a little bit
        // to allow space for the new data
        CGRect r = CGRectMake(-TIME_INCREMENT, 0, layer.bounds.size.width, layer.bounds.size.height);
        CGContextDrawImage(ctx, r, self.lastSnapshotOfPlot.CGImage);
        // And finally put the context back the way it was
        CGContextRestoreGState(ctx);
    }
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextBeginPath(ctx);
    // This next section is where I draw the line segment on the extreme right end
    // which matches up with the stored graph on the image. This part of the code
    // is application specific and I have only left it here for
    // conceptual reference. Basically I draw a tiny line segment
    // from the last value to the new value at the extreme right end of the graph.
    CGFloat ppy = layer.bounds.size.height - _lastValue / _displayRange * layer.bounds.size.height;
    CGFloat cpy = layer.bounds.size.height - self.sensorData.currentvalue / _displayRange * layer.bounds.size.height;
    CGContextMoveToPoint(ctx, layer.bounds.size.width - TIME_INCREMENT, ppy); // Move to the previous point
    CGContextAddLineToPoint(ctx, layer.bounds.size.width, cpy); // Draw to the latest point
    CGContextStrokePath(ctx);
    // Finally save the entire current layer to an image. This will include our latest
    // drawn line segment
    UIGraphicsBeginImageContext(layer.bounds.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    self.lastSnapshotOfPlot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Is this the most efficient way?
I have not been programming in Objective-C long enough to know, so all suggestions/improvements are welcome.

Related

How to LIGHTLY erase drawn path in cgcontext?

I managed to implement erasing drawings on a CGContext:
UIImageView *maskImgView = (UIImageView *)[self.view viewWithTag:K_MASKIMG];
UIGraphicsBeginImageContext(maskImgView.image.size);
[maskImgView.image drawAtPoint:CGPointMake(0,0)];
float alp = 0.5;
UIImage *oriBrush = [UIImage imageNamed:_brushName];
//sets the style for the endpoints of lines drawn in a graphics context
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat eraseSize = oriBrush.size.width*_brushSize/_zoomCurrentFactor;
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineJoin(ctx, kCGLineJoinRound);
CGContextSetLineWidth(ctx,eraseSize);
CGContextSetRGBStrokeColor(ctx, 1, 1, 1, alp);
CGContextSetBlendMode(ctx, kCGBlendModeClear);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, lastPoint.x,lastPoint.y);
CGPoint vector = CGPointMake(currentPoint.x - lastPoint.x, currentPoint.y - lastPoint.y);
CGFloat distance = hypotf(vector.x, vector.y);
vector.x /= distance;
vector.y /= distance;
for (CGFloat i = 0; i < distance; i += 1.0f) {
    CGPoint p = CGPointMake(lastPoint.x + i * vector.x, lastPoint.y + i * vector.y);
    CGContextAddLineToPoint(ctx, p.x, p.y);
}
CGContextStrokePath(ctx);
maskImgView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
lastPoint = currentPoint;
The problem is that this TOTALLY erases everything. The alpha set in this function (CGContextSetRGBStrokeColor(ctx, 1, 1, 1, alp);) seems to be ignored completely.
I want to erase only lightly, so that repeated erasing gradually removes the drawing entirely.
Any ideas?
EDIT: As requested, here are more details about this code:
alp = _brushAlpha is a property declared in this view controller class. It ranges from 0.1 to 1.0; for testing I set it to 0.5. This drawing code is triggered by a pan gesture recognizer (in its changed state) and basically follows the finger (drawing/erasing by finger).
You've set the blend mode to clear, which ignores the stroke color entirely. You should play with the various modes a bit, but I suspect you want something like sourceAtop or maybe screen. See the CGBlendMode docs for full details.
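For example, a sketch of keeping the partial alpha in the code above while swapping the mode (which mode gives the "light erase" effect you want is something to experiment with):
// Keep the partial alpha, but use a blend mode that doesn't discard it
// (kCGBlendModeClear ignores the stroke color and alpha completely).
CGContextSetRGBStrokeColor(ctx, 1, 1, 1, alp);        // alp = brush alpha, 0.1 to 1.0
CGContextSetBlendMode(ctx, kCGBlendModeSourceAtop);   // or kCGBlendModeScreen, etc.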
UIView has a property named clearsContextBeforeDrawing; if you set it to YES, the view's drawing buffer is cleared before every draw.
According to the documentation:
A Boolean value that determines whether the view’s bounds should be automatically cleared before drawing.
When set to YES, the drawing buffer is automatically cleared to transparent black before the drawRect: method is called. This behavior ensures that there are no visual artifacts left over when the view’s contents are redrawn. If the view’s opaque property is also set to YES, the backgroundColor property of the view must not be nil or drawing errors may occur. The default value of this property is YES.
If you set the value of this property to NO, you are responsible for ensuring the contents of the view are drawn properly in your drawRect: method. If your drawing code is already heavily optimized, setting this property to NO can improve performance, especially during scrolling when only a portion of the view might need to be redrawn.
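For incremental drawing you could opt out of that behavior, e.g. (a one-line sketch; graphView stands in for your own view):
// The view keeps its previous buffer contents; drawRect: is now responsible
// for every pixel it does not explicitly repaint.
graphView.clearsContextBeforeDrawing = NO;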

ios - How to improve draw and update large image performance?

I'm developing a selective color app in iOS (there are many similar apps in the App Store, for example: https://itunes.apple.com/us/app/color-splash/id304871603?mt=8). My idea is simple: use two UIViews, one for the foreground (black and white image) and one for the background (color image). I use touchesBegan, touchesMoved and related events to track user input in the foreground. In touchesMoved, I use kCGBlendModeClear to erase the path along which the user moved the finger. Finally, I combine the two images in the UIViews to get the result, which is displayed in the foreground view.
To achieve this, I have written two different implementations. They work well with small images but are very slow with large images (> 3 MB).
In first version, I use two UIImageViews (imgForegroundView and imgBackgroundView).
Here is the code that produces the image result when the user moves a finger from point p1 to point p2. It is called from the touchesMoved event:
- (UIImage *)getImageWithPoint:(CGPoint)p1 andPoint:(CGPoint)p2{
    UIGraphicsBeginImageContext(originalImg.size);
    [self.imgForegroundView.image drawInRect:CGRectMake(0, 0, originalImg.size.width, originalImg.size.height)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, p1.x, p1.y);
    CGContextAddLineToPoint(context, p2.x, p2.y);
    CGContextStrokePath(context);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
After getting the image result, I replace imgForegroundView.image with it.
In version 2, I use the idea from http://www.effectiveui.com/blog/2011/12/02/how-to-build-a-simple-painting-app-for-ios/. The background is still a UIImageView, but the foreground is a subclass of UIView. In the foreground, I use a cache context to store the image. When the user moves a finger, I draw on the cache context, then I update the view by overriding the drawRect: method. In drawRect:, I get an image from the cache context and draw it into the current context.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef cacheImage = CGBitmapContextCreateImage(cacheContext);
    CGContextDrawImage(context, self.bounds, cacheImage);
    CGImageRelease(cacheImage);
}
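For reference, a cache context like the one described above is typically created once, roughly along these lines (a sketch; cacheContext is the ivar read in drawRect:, and dirtyRect is a placeholder for a small rect around the newly drawn segment):
// One-time setup (e.g. in initWithFrame:): an offscreen bitmap the view draws from.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
cacheContext = CGBitmapContextCreate(NULL,
                                     (size_t)self.bounds.size.width,
                                     (size_t)self.bounds.size.height,
                                     8, 0, colorSpace,
                                     kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// On each finger move: stroke only the new segment into cacheContext, then
// invalidate just the touched area instead of the whole view.
[self setNeedsDisplayInRect:dirtyRect];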
Then, in the same way as in the first version, I get the image from the foreground and combine it with the background.
With small images (<= 2 MB), both versions work well. But with larger images, performance is terrible: the image only updates 3-5 seconds (depending on image size) after the user moves a finger.
I want my app to achieve near real-time speed like the example app above, but I don't know how. Can anyone give me some suggestions?

How to create a static, actual width MKOverlayPathRenderer subclass?

I've been working on a simple path overlay to an existing MKMapView. It takes a CGMutablePath and draws it onto the map. The goal is for the path to be drawn with an actual real-world width, e.g. the subclass takes a width in meters and converts that width into a representative line width. Here is one version of the code that calculates the line width:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    float mapPoints = meterWidth * MKMapPointsPerMeterAtLatitude(self.averageLatitude);
    float screenPoints = mapPoints * zoomScale; // dividing would keep the apparent width constant
    self.lineWidth = ceilf(screenPoints * 2);
    CGContextAddPath(context, _mutablePath);
    [self applyStrokePropertiesToContext:context atZoomScale:zoomScale];
    CGContextStrokePath(context);
}
Here we first find the number of map points that correspond to our line width and then convert that to screen points. We do the conversion based on the header comments in MKGeometry.h:
// MKZoomScale provides a conversion factor between MKMapPoints and screen points.
// When MKZoomScale = 1, 1 screen point = 1 MKMapPoint. When MKZoomScale is
// 0.5, 1 screen point = 2 MKMapPoints.
Finally, we add the path to the context, apply the stroking properties to the path and stroke it.
However, this gives exceedingly flaky results. The renderer often draws random fragments of line in various places outside the expected live area, or doesn't draw some tiles at all. Sometimes the CPU is pegged redrawing multiple versions of the same tile (as best I can tell) over and over. The docs aren't much help in this case.
I do have a working version, but it doesn't seem like the correct solution as it completely ignores zoomScale and doesn't use -applyStrokePropertiesToContext:atZoomScale:
float mapPoints = meterWidth * MKMapPointsPerMeterAtLatitude(self.averageLatitude);
self.lineWidth = ceilf(mapPoints * 2);
CGContextAddPath(context, _mutablePath);
CGContextSetStrokeColorWithColor(context, self.strokeColor.CGColor);
CGContextSetLineWidth(context, self.lineWidth);
CGContextSetLineJoin(context, kCGLineJoinRound);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextStrokePath(context);
Anyone have pointers on what is wrong with this implementation?

Core Graphics CGPathRef drawn at a specified point

I have a "palette" of paths that I will draw many times; perhaps 100.
I'd like to draw these at a specified location like this:
CGPathRef foo = ...
CGPathRef bar = ...
// do this dozens of times at differing points
[self draw:context path:foo atX:100 andY:50];
[self draw:context path:bar atX:200 andY:50];
What I'm doing now is translating. It works, but I'm not sure that this is the most performant solution. Something like this:
- (CGRect)draw:(CGContextRef)context path:(CGPathRef)path atX:(CGFloat)x andY:(CGFloat)y
{
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, x, y);
    CGRect pathBoundingRect = CGPathGetBoundingBox(path);
    CGContextSetFillColorWithColor(context, drawColor);
    CGContextAddPath(context, path);
    CGContextDrawPath(context, kCGPathFill);
    CGContextRestoreGState(context);
    return pathBoundingRect;
}
Do you have any suggestions for improvement?
If they move, it would probably be much faster to draw each one in its own UIView (so in the beginning, they would all be identical), and position the view itself.
That way, the translation (of the views) would automatically be done on the GPU instead of the CPU, and drawRect: would only need to be called once for each path object.
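A sketch of that idea (names are illustrative, and it assumes each path is defined near the origin so it fits inside its view's bounds):
// A tiny view that draws one CGPath exactly once; moving the view afterwards
// is pure compositing and never calls drawRect: again.
@interface PathView : UIView
@property (nonatomic) CGPathRef path;   // not retained here; manage its lifetime elsewhere
@end

@implementation PathView
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, self.path);
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextDrawPath(context, kCGPathFill);
}
@end

// Usage: create one view per path, then just reposition it.
PathView *fooView = [[PathView alloc] initWithFrame:CGRectMake(100, 50, 40, 40)];
fooView.backgroundColor = [UIColor clearColor];
fooView.path = foo;                        // foo is the CGPathRef from the question
[self.view addSubview:fooView];
fooView.center = CGPointMake(200, 120);    // moving it later does not trigger drawRect: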

Quartz2D performance - how to improve

I am working in iOS and have been presented with this problem: I will be receiving a relatively large set of data (a 500x500 or greater C-style array). I need to construct what is essentially an X/Y plot from this data, where each datapoint in the 500x500 grid corresponds to a color based on its value. The data changes over time, producing a kind of animation, so the calculations have to be fast enough to keep up when a new set of data comes in.
So basically, for every point in the array, I need to figure out which color it should map to, then figure out the square to draw on the grid to represent the data. If my grid were 768x768 pixels, but I have a 500x500 dataset, then each datapoint would represent roughly a 1.5x1.5 pixel rectangle (that's rounded, but I hope you get the idea).
I tried this by creating a new view class and overriding drawRect. However, that met with horrible performance with anything much over a 20x20 dataset.
I have seen some suggestions about writing to image buffers, but I have not been able to find any examples of doing that (I'm pretty new to iOS). Do you have any suggestions or could you point me to any resources which could help?
Thank you for your time,
Darryl
Here's some code that you can put in a method that will generate and return a UIImage in an offscreen context. To improve performance, try to come up with ways to minimize the number of iterations, such as making your "pixels" bigger, or only drawing a portion that changes.
UIGraphicsBeginImageContext(size); // Use your own image size here
CGContextRef context = UIGraphicsGetCurrentContext();

// push context to make it current
// (need to do this manually because we are not drawing in a UIView)
UIGraphicsPushContext(context);

for (CGFloat x = 0.0; x < size.width; x += 1.0) {
    for (CGFloat y = 0.0; y < size.height; y += 1.0) {
        // Set your color here
        CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(context, CGRectMake(x, y, 1.0, 1.0));
    }
}

// pop context
UIGraphicsPopContext();

// get a UIImage from the image context - enjoy!!!
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
[outputImage retain];

// clean up drawing environment
UIGraphicsEndImageContext();

return [outputImage autorelease];
If you want to be fast, you shouldn't call explicit drawing functions for every pixel. You can allocate memory and use CGBitmapContextCreate to build an image with the data in that memory - which is basically a bytes array. Do your calculations and write the color information (a-r-g-b) directly into that buffer. You have to do the maths on your own, though, regarding the sub-pixel accuracy (blending).
I don't have an example at hand, but searching for CGBitmapContextCreate should point you in the right direction.
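A minimal sketch of that direction, assuming a 500x500 dataset and writing premultiplied A-R-G-B bytes straight into the context's backing buffer (the fixed red stands in for your own value-to-color mapping):
const size_t gridSize = 500;                      // one pixel per datapoint in this sketch
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, gridSize, gridSize, 8, 0,
                                            colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Write A,R,G,B bytes directly into the buffer Quartz allocated for the context.
uint8_t *pixels = CGBitmapContextGetData(bitmap);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmap);

for (size_t y = 0; y < gridSize; y++) {
    for (size_t x = 0; x < gridSize; x++) {
        size_t offset = y * bytesPerRow + x * 4;
        // Replace this fixed red with the color computed from your datapoint.
        pixels[offset + 0] = 255;   // A
        pixels[offset + 1] = 255;   // R
        pixels[offset + 2] = 0;     // G
        pixels[offset + 3] = 0;     // B
    }
}

// Pull an image out of the context; hand it to a UIImageView or a layer's contents.
CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
UIImage *outputImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(bitmap);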
