How to cache CGContextRef - iOS

Unsatisfied with my previous results, I have been asked to create a freehand drawing view that will not blur when zoomed. The only way I can imagine this being possible is to use a CATiledLayer, because otherwise drawing a line while zoomed is just too slow. Currently I have it set up so that it redraws every line every time, but I want to know if I can cache the results of the previous lines (not as pixels, because they need to scale well) in a context or something.
I thought about CGBitmapContext, but would that mean I would need to tear down and set up a new context after every zoom? The problem is that on a retina display the line drawing is too slow (on iPad 2 it is so-so), especially when drawing while zoomed. There is an app in the App Store called GoodNotes which beautifully demonstrates that this can be done smoothly, but I can't understand how they are doing it. Here is my code so far (the result of most of today):
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetAllowsAntialiasing(c, true);
    CGContextSetShouldAntialias(c, true);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);

    // Protect the local variables against the multithreaded nature of CATiledLayer
    [mLock lock];
    NSArray *pathsCopy = [mStrokes copy];
    for (UIBezierPath *path in pathsCopy) // **Would like to cache these**
    {
        CGContextAddPath(c, path.CGPath);
        CGContextStrokePath(c);
    }
    if (mCurPath)
    {
        CGContextAddPath(c, mCurPath.CGPath);
        CGContextStrokePath(c);
    }

    CGRect pathBounds = mCurPath.bounds;
    if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
    {
        [mStrokes addObject:mCurPath];
        mCurPath = [[UIBezierPath alloc] init];
    }
    [mLock unlock];
}
Profiling shows the hottest function by far is GCSFillDRAM8by1

First, since path stroking is the most expensive operation, you shouldn't hold the lock around it, as that prevents tiles from being drawn concurrently on different cores.
Second, you can avoid calling CGContextStrokePath several times by adding all the paths to the context and stroking them in a single call:
[mLock lock];
for (UIBezierPath *path in mStrokes) {
    CGContextAddPath(c, path.CGPath);
}
if (mCurPath) {
    CGContextAddPath(c, mCurPath.CGPath);
}

CGRect pathBounds = mCurPath.bounds;
if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
{
    [mStrokes addObject:mCurPath];
    mCurPath = [[UIBezierPath alloc] init];
}
[mLock unlock];

CGContextStrokePath(c);
The CGContextRef is just a canvas in which the drawing operations occur. You cannot cache it, but you can create a CGImageRef containing a flattened bitmap image of your paths and reuse that image. This won't help with zooming (you'd need to recreate the image whenever the level of detail changes), but it can improve performance while the user is drawing a really long path.
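For illustration, here is a minimal sketch of that flattening idea, reusing the mStrokes, mCurPath and mLineWidth names from the question; the flattenStrokes method and the mCachedImage ivar are hypothetical additions, not part of the original code:

- (void)flattenStrokes
{
    // Render every finished stroke once into an offscreen image.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    for (UIBezierPath *path in mStrokes) {
        CGContextAddPath(c, path.CGPath);
    }
    CGContextStrokePath(c);
    mCachedImage = UIGraphicsGetImageFromCurrentImageContext(); // hypothetical UIImage ivar
    UIGraphicsEndImageContext();
}

drawRect: would then just draw mCachedImage into the view's bounds and stroke only mCurPath; the cache has to be rebuilt whenever the level of detail changes, which is why this doesn't help with zooming.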
There is a really interesting WWDC 2012 session video on this subject: Optimizing 2D Graphics and Animation Performance.

The bottleneck was actually the way I was using CATiledLayer. I guess freehand input is just too much for it to keep up with. I had set it up with levels of detail as shown in the docs and online tutorials, but in the end I didn't need that. I just hooked up the scroll view delegate, cleared the layer's contents when zooming finished, and changed the contentsScale of the layer to match the scroll view's zoom scale. The result is beautiful (the layer disappears and fades back in, but that can't be helped).
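Roughly, the zoom handling described above looks something like this (a sketch only; drawingView is an assumed property pointing at the view backed by the CATiledLayer):

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    CATiledLayer *tiledLayer = (CATiledLayer *)self.drawingView.layer;
    tiledLayer.contents = nil;                                      // drop the stale, blurry tiles
    tiledLayer.contentsScale = [UIScreen mainScreen].scale * scale; // re-render at the new resolution
    [self.drawingView setNeedsDisplay];
}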

Related

How to continuously change individual square colours in time with music beats, to make a visualizer on iPhone?

I want to make a visualisation for my music player, so I drew a grid view, and I want to change each square's colour randomly or continuously.
My code for drawing the grid:
- (void)drawRect:(CGRect)rect
{
    for (int i = 0; i < 4; i = i + 1) {
        for (int j = 0; j < 4; j = j + 1) {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 2.0);
            CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
            CGRect rectangle = CGRectMake((j * (100 + 2)) + 2, (i * (100 + 2)) + 2, 100, 100);
            CGContextAddRect(context, rectangle);
            CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
            CGContextFillPath(context);
            CGContextStrokePath(context);
        }
    }
}
In my opinion you are overcomplicating this and limiting future possibilities. If I were you, I would have a grid of UIViews or UIImageViews placed in an array. (You can build it programmatically or with Interface Builder, and you can add the edges by modifying the border properties of each view's layer.)
Then you can do all sorts of things by setting their background colors independently: color the evens, color the odds, randomize them all, anything you want, since all you have to do is cycle through the array and set the colors accordingly on each beat.
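A minimal sketch of that idea (the array, sizes and colors are placeholders, not taken from the question):

// Build a 4x4 grid of plain views, kept in an array for easy recoloring.
// (The layer border properties need QuartzCore.)
NSMutableArray *squares = [NSMutableArray array];
for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 4; j++) {
        UIView *square = [[UIView alloc] initWithFrame:CGRectMake(j * 102 + 2, i * 102 + 2, 100, 100)];
        square.backgroundColor = [UIColor redColor];
        square.layer.borderColor = [UIColor blueColor].CGColor;
        square.layer.borderWidth = 2.0;
        [self.view addSubview:square];
        [squares addObject:square];
    }
}

// On every beat, recolor whichever squares you like, e.g. one at random:
UIView *randomSquare = squares[arc4random_uniform((uint32_t)squares.count)];
randomSquare.backgroundColor = [UIColor colorWithHue:arc4random_uniform(256) / 255.0
                                          saturation:1.0
                                          brightness:1.0
                                               alpha:1.0];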
The beats part is way more complicated than it seems. Check this question; it offers a lot of tips on "music information retrieval":
How to detect the BPM of a song in php
I think you should have a single custom UIView.
Then call setNeedsDisplayInRect: at short intervals with the area of the view that you want to redraw.
Finally, implement drawRect: and make sure to optimize it by only redrawing the specified area, and doing it fast!
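A rough sketch of that single-view variant (squareSize, beatRow, beatColumn and colorForSquareInRect: are hypothetical, just to show the shape of it):

// Called at short intervals, e.g. from a timer driven by the beat detection:
- (void)beatTick
{
    CGRect dirty = CGRectMake(self.beatColumn * squareSize,
                              self.beatRow * squareSize,
                              squareSize, squareSize);
    [self setNeedsDisplayInRect:dirty]; // only this square gets redrawn
}

- (void)drawRect:(CGRect)rect
{
    // Repaint only the area UIKit asks for, not the whole grid.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [self colorForSquareInRect:rect].CGColor);
    CGContextFillRect(context, rect);
}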
As for the music beats, better open a separate question ;)

Scaling an image is slow on an iPad 4th gen, are there faster alternatives?

I'm trying to zoom and translate an image on the screen.
Here's my drawRect:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, NO);
    CGContextScaleCTM(context, senderScale, senderScale);
    [self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
    CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create it with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
draw a simple UIBezierPath (stroked) into it, and then grab the result:
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using CoreAnimation instead of drawing the image in your -drawRect: method. Try creating a view and doing:
myView.layer.contents = self.image.CGImage;
Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect:, you're making it do the hard work of blitting the image for every frame. Doing it via CoreAnimation blits only once and then lets the GPU zoom and translate the layer.
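A sketch of what that might look like in practice (senderScale comes from the question; newCenter stands in for wherever your pan gesture wants to put the view):

// One-time setup: hand the bitmap to the layer and let CoreAnimation composite it.
// (The __bridge cast is needed under ARC.)
myView.layer.contents = (__bridge id)self.image.CGImage;

// Per gesture update: only cheap layer properties change, so the GPU does the work.
myView.transform = CGAffineTransformMakeScale(senderScale, senderScale);
myView.center    = newCenter; // hypothetical CGPoint from the pan gesture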

Efficient way to draw a graph line by line in CALayer

I need to draw a line chart from values that arrive every half second. I've come up with a custom CALayer for this graph which stores all the previous lines and, every two seconds, redraws all the previous lines and adds one new one. I find this solution non-optimal because there is only a need to draw one additional line on the layer; there's no reason to redraw potentially thousands of previous lines.
What do you think would be the best solution in this case?
Use your own CGBitmapContext or UIImage as a backing store. Whenever new data comes in, draw to this context and set your layer's contents property to the context's image.
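A minimal sketch of that backing-store idea (graphContext and graphLayer are hypothetical ivars: a persistent CGContextRef created once with CGBitmapContextCreate, and the CALayer being displayed):

- (void)appendSegmentFrom:(CGPoint)previous to:(CGPoint)latest
{
    // Draw only the new segment into the persistent bitmap context;
    // everything drawn earlier is already in the backing store.
    CGContextSetLineWidth(graphContext, 2.0);
    CGContextSetStrokeColorWithColor(graphContext, [UIColor blueColor].CGColor);
    CGContextBeginPath(graphContext);
    CGContextMoveToPoint(graphContext, previous.x, previous.y);
    CGContextAddLineToPoint(graphContext, latest.x, latest.y);
    CGContextStrokePath(graphContext);

    // Hand the accumulated bitmap to the layer; no old lines are redrawn.
    CGImageRef snapshot = CGBitmapContextCreateImage(graphContext);
    self.graphLayer.contents = (__bridge id)snapshot;
    CGImageRelease(snapshot);
}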
I am looking at an identical implementation. My graph updates every 500 ms. Similarly, I felt uncomfortable drawing the entire graph on each iteration. I implemented a solution 'similar' to what Nikolai Ruhe proposed, as follows:
First some declarations:
#define TIME_INCREMENT 10
@property (nonatomic) UIImage *lastSnapshotOfPlot;
and then the drawLayer:inContext: method of my CALayer delegate:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // Restore the image of the layer from the last time through, if it exists
    if (self.lastSnapshotOfPlot)
    {
        // For some reason the image is being redrawn upside down!
        // This block of code adjusts the context to correct it.
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, layer.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        // Now we can redraw the image right side up but shifted over a little bit
        // to allow space for the new data
        CGRect r = CGRectMake(-TIME_INCREMENT, 0, layer.bounds.size.width, layer.bounds.size.height);
        CGContextDrawImage(ctx, r, self.lastSnapshotOfPlot.CGImage);
        // And finally put the context back the way it was
        CGContextRestoreGState(ctx);
    }

    CGContextStrokePath(ctx);
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextBeginPath(ctx);

    // This next section is where I draw the line segment on the extreme right end
    // which matches up with the stored graph on the image. This part of the code
    // is application specific and I have only left it here for
    // conceptual reference. Basically I draw a tiny line segment
    // from the last value to the new value at the extreme right end of the graph.
    CGFloat ppy = layer.bounds.size.height - _lastValue / _displayRange * layer.bounds.size.height;
    CGFloat cpy = layer.bounds.size.height - self.sensorData.currentvalue / _displayRange * layer.bounds.size.height;
    CGContextMoveToPoint(ctx, layer.bounds.size.width - TIME_INCREMENT, ppy); // Move to the previous point
    CGContextAddLineToPoint(ctx, layer.bounds.size.width, cpy);               // Draw to the latest point
    CGContextStrokePath(ctx);

    // Finally save the entire current layer to an image. This will include our latest
    // drawn line segment
    UIGraphicsBeginImageContext(layer.bounds.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    self.lastSnapshotOfPlot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Is this the most efficient way?
I have not been programming in Objective-C long enough to know, so all suggestions/improvements are welcome.

UIGraphicsGetCurrentContext() short lifetime

I have a view which implements freehand drawing, but I have a small problem. I noticed on the iPad 3 that everything went to hell, so I tried to update my drawing code (probably as I should have done in the first place) to only update the portion that was stroked. However, the first stroke after opening, and the first stroke after about 10 seconds of idle, are extremely slow. After everything is "warmed up" it is smooth as butter and only takes about 0.15 ms per drawRect. I don't know why, but the whole view rectangle gets marked as dirty for the first drawRect, and for the first drawRect after idle (and then it takes about 150 ms to update). The stack trace shows that my rectangle is being overridden by CABackingStoreUpdate_.
I tried not drawing the layer if the rectangle was huge, but then my entire context goes blank (it reappears as I draw over the old areas, like a lotto ticket). Does anyone have any idea what goes on with UIGraphicsGetCurrentContext()? That's the only place I can imagine the trouble is; that is, my view's context got yanked by the context genie, so it needs to render itself fully again. Is there any setting I can use to persist the same context? Or is there something else going on here? There is no need for it to update the full rectangle after the initial display.
My drawRect is very simple:
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = mDrawingLayer ? CGLayerGetContext(mDrawingLayer) : NULL;
    if (!mDrawingLayer)
    {
        c = UIGraphicsGetCurrentContext();
        mDrawingLayer = CGLayerCreateWithContext(c, self.bounds.size, NULL);
        c = CGLayerGetContext(mDrawingLayer);
        CGContextSetAllowsAntialiasing(c, true);
        CGContextSetShouldAntialias(c, true);
        CGContextSetLineCap(c, kCGLineCapRound);
        CGContextSetLineJoin(c, kCGLineJoinRound);
    }

    if (mClearFlag)
    {
        CGContextClearRect(c, self.bounds);
        mClearFlag = NO;
    }

    CGContextStrokePath(c);

    CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), self.bounds, mDrawingLayer);
    NSLog(@"%.2fms : %f x %f", (CFAbsoluteTimeGetCurrent() - startTime) * 1000.f, rect.size.width, rect.size.height);
}
I found a useful thread on the Apple Dev Forums describing this exact problem. It has only existed since iOS 5.0, and the theory is that Apple introduced a double-buffering system, so the first two drawRects will always cover the full area. However, there is no explanation for why this happens again after idle. The theory is that the underlying buffer is not guaranteed by the GPU; it may be discarded at whim and need to be recreated. The solution (until Apple issues some kind of real fix) is to ping the buffer so that it won't be released:
mDisplayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(pingRect)];
[mDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

- (void)pingRect
{
    // Already drawing
    if (mTouchCount > 0) return;
    // Even touching just one pixel will keep the buffer alive
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 1, 1)];
}
The only weakness is if the user keeps their finger perfectly still for more than 5 seconds, but I think that is an acceptable risk.
EDIT: Interesting update. It turns out that simply calling setNeedsDisplay is enough to keep the buffer alive, even if drawRect returns immediately. So I added this to my drawRect method:
- (void)drawRect:(CGRect)rect
{
    if (rect.size.width == 1.f)
        return;
    //...
}
Hopefully this will curb the extra power usage that the refresh method would otherwise cause.

Quartz2D performance - how to improve

I am working in iOS and have been presented with this problem. I will be receiving a relatively large set of data (a 500x500 or larger C-style array). I need to construct what is essentially an X/Y plot from this data, where each datapoint in the 500x500 grid corresponds to a color based on its value. The thing is that this changes over time, making it something of an animation, so the calculations have to be fast in order to update when a new set of data comes in.
So basically, for every point in the array, I need to figure out which color it should map to, then figure out the square to draw on the grid to represent the data. If my grid were 768x768 pixels, but I have a 500x500 dataset, then each datapoint would represent about a 1.5x1.5 rectangle (that's rounded, but I hope you get the idea).
I tried this by creating a new view class and overriding drawRect. However, that met with horrible performance with anything much over a 20x20 dataset.
I have seen some suggestions about writing to image buffers, but I have not been able to find any examples of doing that (I'm pretty new to iOS). Do you have any suggestions or could you point me to any resources which could help?
Thank you for your time,
Darryl
Here's some code that you can put in a method that will generate and return a UIImage in an offscreen context. To improve performance, try to come up with ways to minimize the number of iterations, such as making your "pixels" bigger, or only drawing a portion that changes.
UIGraphicsBeginImageContext(size); // Use your own image size here
CGContextRef context = UIGraphicsGetCurrentContext();

// Push the context to make it current
// (need to do this manually because we are not drawing in a UIView)
UIGraphicsPushContext(context);

for (CGFloat x = 0.0; x < size.width; x += 1.0) {
    for (CGFloat y = 0.0; y < size.height; y += 1.0) {
        // Set your color here
        CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(context, CGRectMake(x, y, 1.0, 1.0));
    }
}

// Pop the context
UIGraphicsPopContext();

// Get a UIImage from the image context - enjoy!!!
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
[outputImage retain];

// Clean up the drawing environment
UIGraphicsEndImageContext();

return [outputImage autorelease];
If you want to be fast, you shouldn't call explicit drawing functions for every pixel. You can allocate memory and use CGBitmapContextCreate to build an image from the data in that memory, which is basically a byte array. Write the color information (a-r-g-b) directly into that buffer; you have to do the maths yourself, though, including any sub-pixel accuracy (blending).
I don't have an example at hand, but searching for CGBitmapContextCreate should head you in the right direction.
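For what it's worth, the skeleton of that approach might look roughly like this (the 500x500 size and the flat red color are placeholders; a real version would map each datapoint's value to a color):

size_t width = 500, height = 500;              // dataset dimensions
size_t bytesPerRow = width * 4;                // 4 bytes per pixel (RGBA)
uint8_t *pixels = calloc(height, bytesPerRow);

for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + y * bytesPerRow + x * 4;
        p[0] = 255; p[1] = 0; p[2] = 0; p[3] = 255;   // write R, G, B, A directly
    }
}

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                            colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(bitmap);
// ...assign 'image' to a layer's contents or wrap it in a UIImage, then clean up:
CGImageRelease(image);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
free(pixels);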
