I have been struggling with this problem for over a month, trying to figure out what is causing it, with no solution. Since the code is pretty long, I can't post all of it here.
Basically, I have made a drawing app. When you double-tap the screen, everything resets, almost as if I am reloading the view. When I reset the scene, processor usage drops to around 9%, but when I start drawing again it climbs back to where I last left off. So, for example, I draw an image and processor usage goes up to 50%; then I double-tap to reset the view to its initial state and it drops back to 9%. When I start drawing again it goes back up to 50%, and the next time 60%, 70%, and so on.
It may be hard to see what is causing the problem given the lack of information, so I can send my source code to anyone interested in helping. Here is the timer and the method it fires:
greentimer = [NSTimer scheduledTimerWithTimeInterval:0.02 target:self selector:@selector(movement2) userInfo:nil repeats:YES];

- (void)movement2 {
    static int intigrer;
    intigrer = (intigrer + 1) % 3;

    UIGraphicsBeginImageContext(CGSizeMake(320, 568));
    [drawImage.image drawInRect:rekked];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 8.0);
    CGContextSetRGBStrokeColor(ctx, r12, g12, b12, 1);
    CGContextBeginPath(ctx);

    if (intigrer == 1 && integrer2 < greenran - greenran2) {
        CGPathMoveToPoint(path, NULL, greentmporary.x, greentmporary.y);
        CGPathAddLineToPoint(path, NULL, greenpoint1.x, greenpoint1.y);
    }

    green.center = greenpoint1;

    if (integrer2 < greenran - greenran2) {
        CGContextMoveToPoint(ctx, greentmporary.x, greentmporary.y);
        CGContextAddLineToPoint(ctx, greenpoint1.x, greenpoint1.y);
    }

    CGContextStrokePath(ctx);

    [drawImage setFrame:CGRectMake(0, 0, 320, 568)];
    drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // [self updatePoint2:YES];

    static BOOL yes;
    if (!yes) {
        [self.view insertSubview:drawImage atIndex:0];
        yes = YES;
    }
    ctx = nil;
}
You need to release the CGPaths and CGImages (and any CGContexts you create yourself); Core Foundation objects are not managed by ARC:
CGPathRelease(path);
CGImageRelease(image);
CGContextRelease(context);
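As a hedged illustration of where this could apply in the question's code: the mutable `path` ivar keeps accumulating segments across timer fires, so one option is to release and rebuild it whenever the scene is reset (the handler name below is assumed, not taken from the original code).
- (void)resetScene {                // hypothetical double-tap reset handler
    CGPathRelease(path);            // dispose of the accumulated segments
    path = CGPathCreateMutable();   // start over with an empty path
    drawImage.image = nil;          // drop the composited drawing as well
}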
I think your problem is that you are making self.view more complex with each fire of the timer because you keep adding subviews to it, so the complexity of your scene increases every time it's rendered until it's completely reset. To be honest, I couldn't really follow all of your code, as I'm not familiar with what you are trying to achieve.
I think a good approach to diagnosing the problem is to run your program with Instruments using the Profile option (instead of doing a normal Run) and select the 'Automation' template.
From there you can issue logElementTree() against your running program, which gives a dump of the UIView hierarchy along with images. There are lots of good articles on it, e.g.
http://cocoamanifest.net/articles/2011/05/uiautomation-an-introduction.html
This seems like such a basic thing to want, I can't believe I'm not able to find out how to do it. To make the description easy to understand, suppose I simply want to draw a bunch of random rectangles on the screen. These random rectangles would keep adding on top of each other repeatedly until something stopped the process. How would one do that?
The closest explanation I've seen is drawing applications, where the basic scheme is to draw into an image view, first copying the previous image into the new image and then adding the new content. Copying the original image sure seems like a waste of effort, and it sure seems like it should be possible to simply write the new content in place over whatever is there. Am I missing something obvious?
Note that drawRect: replaces the entire frame of the view. It works well for drawing a small set of objects, but it quickly becomes awkward when there's an indefinite amount of accumulated history that also needs to be displayed.
Edit: I'm attaching some sample images that are screen prints from a Mix C program that does what I'm after. Essentially, there are cellular automata that move around the screen leaving trails. The color of a trail depends on the logic in the automaton as well as the color of the pixel the automaton has just moved to. The automata should be able to move at rates of hundreds of pixels per second. Because of the logic used by the automata, I need to be able not only to write quickly to the image but also to query the color of a given pixel (or a mirrored copy of the data).
Typically you do this either by creating separate paths or layers for all your rectangles (if you want to keep track of them), or by drawing repeatedly into a CGBitmapContextRef, converting that into an image, and drawing the image in drawRect:. This is basically the approach you're describing ("where the basic scheme is to draw into an image view…"), except there's no need to copy the image: you just keep using the same context and making new images out of it.
The other tool you could use here is a CGLayer. The Core Graphics team discourages its use because of performance concerns, but it does make this kind of drawing much more convenient. When you look at the docs and they say "benefit from improved performance," remember that was written in 2006; when I asked the Core Graphics team about it, they said the faster answer today is CGBitmapContext. But you can't beat CGLayer for convenience on this kind of problem.
This should work fine: maintain a CGBitmapContext that you continually write into (and that you can also read from). When it changes, call setNeedsDisplayInRect:. In drawRect:, create an image from the context and draw it with CGContextDrawImage, using the rect you were passed. (You may be passed the entire bounds.)
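A minimal sketch of that setup, assuming a UIView subclass with a _backingContext ivar (the ivar name and the setup method are mine, not part of any existing API):
// Create the persistent bitmap context once (e.g. from initWithFrame:).
- (void)setUpBackingContext {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGSize size = self.bounds.size;
    _backingContext = CGBitmapContextCreate(NULL,
                                            (size_t)size.width,
                                            (size_t)size.height,
                                            8,            // bits per component
                                            0,            // let CG choose bytes per row
                                            colorSpace,
                                            kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
}

// Composite the accumulated bitmap; new rectangles are drawn into
// _backingContext elsewhere, followed by setNeedsDisplayInRect:.
- (void)drawRect:(CGRect)rect {
    CGImageRef image = CGBitmapContextCreateImage(_backingContext);
    // CGContextDrawImage uses Core Graphics coordinates, so flip the CTM if
    // the image appears upside down relative to UIKit.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
    CGImageRelease(image);
}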
It may be a little more flexible to do this on the CALayer instead of the UIView, but I doubt you'll see a great difference in performance. The view passes drawing to its layer.
The number of times a second this updates isn't really that important. drawRect: will not be called more often than the frame rate (max of 60 fps), no matter how often you call setNeedsDisplayInRect:. So you won't be creating images hundreds or thousands of times a second; just at the time that you need to draw something.
Are you seeing particular performance problems, or are you just concerned that you may in the future encounter performance problems? Do you have any sample code that shows the issue? I'm not saying it can't be slow (it might be if you're trying to do this full screen with retina). But you want to start with the simple solution and then optimize. Apple does a lot of graphics optimizations behind the scenes. Don't try to second guess them too much. They generate and draw images really well.
I've accepted another answer, but I'm including my own answer to show the alternative I used for testing.
- (void)viewWillLayoutSubviews {
    self.pixelCount = 0;
    self.seconds = 0;

    CGRect frame = self.testImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 1.0);
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextMoveToPoint(context, 0.0, 0.0);
    CGContextAddRect(context, frame);
    CGContextFillRect(context, frame);
    CGContextStrokePath(context);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextStrokePath(context);

    UIImage *blank = UIGraphicsGetImageFromCurrentImageContext();
    self.context = context;
    self.testImageView.image = blank;

    // This timer has no delay, so it will draw squares as fast as possible.
    [NSTimer scheduledTimerWithTimeInterval:0.0 target:self selector:@selector(drawRandomRectangle) userInfo:nil repeats:NO];
    // This timer controls the frequency at which the image view's image is updated.
    [NSTimer scheduledTimerWithTimeInterval:1/20.f target:self selector:@selector(updateImage) userInfo:nil repeats:YES];
    // This timer just outputs the counter once per second so I can record statistics.
    [NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(displayCounter) userInfo:nil repeats:YES];
}
- (void)updateImage
{
    self.testImageView.image = UIGraphicsGetImageFromCurrentImageContext();
}

- (void)displayCounter
{
    self.seconds++;
    NSLog(@"%d", self.pixelCount / self.seconds);
}
- (void)drawRandomRectangle
{
    int x1 = arc4random_uniform(self.view.bounds.size.width);
    int y1 = arc4random_uniform(self.view.bounds.size.height);
    int xdif = 20;
    int ydif = 20;
    x1 -= xdif / 2;
    y1 -= ydif / 2;

    CGFloat red   = (arc4random() % 256) / 255.0f;
    CGFloat green = (arc4random() % 256) / 255.0f;
    CGFloat blue  = (arc4random() % 256) / 255.0f;
    UIColor *randomColor = [UIColor colorWithRed:red green:green blue:blue alpha:1.0f];

    CGRect frame = CGRectMake(x1 * 1.0f, y1 * 1.0f, xdif * 1.0f, ydif * 1.0f);
    CGContextSetStrokeColorWithColor(self.context, [UIColor blackColor].CGColor);
    CGContextSetFillColorWithColor(self.context, randomColor.CGColor);
    CGContextSetLineWidth(self.context, 1.0);
    CGContextAddRect(self.context, frame);
    CGContextStrokePath(self.context);
    CGContextFillRect(self.context, frame);
    CGContextStrokePath(self.context);

    if (self.pixelCount < 100000) {
        [NSTimer scheduledTimerWithTimeInterval:0.0 target:self selector:@selector(drawRandomRectangle) userInfo:nil repeats:NO];
    }
    self.pixelCount++;
}
The graph shows image updates per second on the x-axis and the number of 20x20 squares drawn to the context per second on the y-axis.
Unsatisfied with my previous results, I have been asked to create a freehand drawing view that will not blur when zoomed. The only way I can imagine this is possible is to use a CATiledLayer, because otherwise it is just too slow to draw a line while zoomed in. Currently I have it set up so that it redraws every line every time, but I want to know whether I can cache the results of the previous lines (not as pixels, because they need to scale well) in a context or something similar.
I thought about CGBitmapContext, but would that mean I would need to tear down and set up a new context after every zoom? The problem is that on a retina display the line drawing is too slow (on the iPad 2 it is so-so), especially when drawing while zoomed in. There is an app in the App Store called GoodNotes which beautifully demonstrates that this is possible, and possible to do smoothly, but I can't work out how they are doing it. Here is my code so far (the result of most of today):
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetAllowsAntialiasing(c, true);
    CGContextSetShouldAntialias(c, true);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);

    // Protect the local variables against the multithreaded nature of CATiledLayer
    [mLock lock];

    NSArray *pathsCopy = [mStrokes copy];
    for (UIBezierPath *path in pathsCopy) // **Would like to cache these**
    {
        CGContextAddPath(c, path.CGPath);
        CGContextStrokePath(c);
    }

    if (mCurPath)
    {
        CGContextAddPath(c, mCurPath.CGPath);
        CGContextStrokePath(c);
    }

    CGRect pathBounds = mCurPath.bounds;
    if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
    {
        [mStrokes addObject:mCurPath];
        mCurPath = [[UIBezierPath alloc] init];
    }

    [mLock unlock];
}
Profiling shows the hottest function by far is GCSFillDRAM8by1
First, since stroking the paths is the most expensive operation, you shouldn't hold the lock around it, as that prevents you from drawing tiles concurrently on different cores.
Second, I think you could avoid calling CGContextStrokePath several times by adding all the paths to the context and stroking them all at once:
[mLock lock];

for (UIBezierPath *path in mStrokes) {
    CGContextAddPath(c, path.CGPath);
}
if (mCurPath) {
    CGContextAddPath(c, mCurPath.CGPath);
}

CGRect pathBounds = mCurPath.bounds;
if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
{
    [mStrokes addObject:mCurPath];
    mCurPath = [[UIBezierPath alloc] init];
}

[mLock unlock];

CGContextStrokePath(c);
The CGContextRef is just a canvas in which the drawing operations occur. You cannot cache the context itself, but you can create a CGImageRef containing a flattened bitmap of your paths and reuse that image. This won't help with zooming (you'd need to recreate the image when the level of detail changes), but it can be useful for improving performance when the user draws a really long path.
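For illustration, here is a hedged sketch of that flattening step, reusing the mStrokes/mLineWidth ivars from the question (the cachedStrokesImage property is my own name, not from the original code):
// Flatten the committed strokes into a single image once, then draw that
// image in drawRect: instead of re-stroking every path on every pass.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
CGContextRef flat = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(flat, mLineWidth);
CGContextSetLineCap(flat, kCGLineCapRound);
CGContextSetLineJoin(flat, kCGLineJoinRound);
for (UIBezierPath *path in mStrokes) {
    CGContextAddPath(flat, path.CGPath);
}
CGContextStrokePath(flat);
self.cachedStrokesImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();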
There is a really interesting WWDC 2012 Session Video on that subject: Optimizing 2D Graphics and Animation Performance.
The bottleneck was actually the way I was using CATiledLayer. I guess it is just too much to update with freehand drawing. I had set it up with levels of detail, as I saw in the docs and in tutorials online, but in the end I didn't need that much. I just hooked up the scroll view delegate, cleared the layer's contents when zooming finished, and changed the contentsScale of the layer to match the scroll view. The result is beautiful (it disappears and fades back in, but that can't be helped).
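As a rough sketch of what that looks like (the drawingView property name is assumed; this is my reading of the approach described above, not the poster's exact code):
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    CALayer *layer = self.drawingView.layer;
    layer.contents = nil;                                        // drop the now-blurry backing image
    layer.contentsScale = [UIScreen mainScreen].scale * scale;   // redraw at the new zoom level
    [self.drawingView setNeedsDisplay];
}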
I have a view which implements freehand drawing, but I have a small problem. I noticed on the iPad 3 that everything went to hell, so I tried to update my drawing code (as I probably should have done in the first place) to only update the portion that was stroked. However, the first stroke after opening, and the first stroke after about 10 seconds of idle, are extremely slow. After everything is "warmed up" it is smooth as butter and only takes about 0.15 ms per drawRect. I don't know why, but the whole view rectangle gets marked as dirty for the first drawRect, and for the first drawRect after idle (which then takes about 150 ms to update). The stack trace shows that my rectangle is being overridden by CABackingStoreUpdate_.
I tried not drawing the layer if the rectangle was huge, but then my entire context goes blank (it reappears as I draw over the old areas, like scratching a lotto ticket). Does anyone have any idea what is going on with UIGraphicsGetCurrentContext()? That's the only place I can imagine the trouble is: my view's context got yanked by the context genie, so it needs to render itself fully again. Is there any setting I can use to keep the same context? Or is there something else going on here? There should be no need to update the full rectangle after the initial display.
My drawRect is very simple:
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = mDrawingLayer ? CGLayerGetContext(mDrawingLayer) : NULL;
    if (!mDrawingLayer)
    {
        c = UIGraphicsGetCurrentContext();
        mDrawingLayer = CGLayerCreateWithContext(c, self.bounds.size, NULL);
        c = CGLayerGetContext(mDrawingLayer);
        CGContextSetAllowsAntialiasing(c, true);
        CGContextSetShouldAntialias(c, true);
        CGContextSetLineCap(c, kCGLineCapRound);
        CGContextSetLineJoin(c, kCGLineJoinRound);
    }

    if (mClearFlag)
    {
        CGContextClearRect(c, self.bounds);
        mClearFlag = NO;
    }

    CGContextStrokePath(c);

    CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), self.bounds, mDrawingLayer);
    NSLog(@"%.2fms : %f x %f", (CFAbsoluteTimeGetCurrent() - startTime) * 1000.f, rect.size.width, rect.size.height);
}
I found a useful thread on the Apple Dev Forums describing this exact problem. It has only existed since iOS 5.0, and the theory is that Apple introduced a double-buffering system, so the first two drawRects will always be full updates. There is no explanation for why it happens again after idle, however; the theory there is that the underlying buffer is not guaranteed by the GPU and can be discarded at whim, after which it needs to be recreated. The workaround (until Apple issues some kind of real fix) is to ping the buffer so that it won't be released:
mDisplayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(pingRect)];
[mDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

- (void)pingRect
{
    // Already drawing
    if (mTouchCount > 0) return;

    // Even touching just one pixel will keep the buffer alive
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 1, 1)];
}
The only weakness is if the user keeps their finger perfectly still for more than 5 seconds, but I think that is an acceptable risk.
EDIT: An interesting update. It turns out that simply requesting the display is enough to keep the buffer alive, even if drawRect: returns immediately. So I added this early-out to my drawRect: method:
- (void)drawRect:(CGRect)rect
{
    if (rect.size.width == 1.f)
        return;

    //...
}
Hopefully, that will curb the extra power usage that this refresh approach will otherwise incur.
I am totally new to iOS development and I have an idea in mind, but I do not know exactly how to implement it correctly.
What I need is for my program to draw a line that can be controlled by the user by tapping buttons, a bit like the game "Snake". I tried Core Graphics, but I guess it is not quite the right approach. I did the following:
- (void)viewDidLoad {
    [super viewDidLoad];
    [NSTimer scheduledTimerWithTimeInterval:.02 target:self selector:@selector(updateGame:) userInfo:nil repeats:YES];
}

- (void)updateGame:(id)sender {
    double timeInterval = [self.lastDate timeIntervalSinceNow] * -1;
    for (int n = 0; n < playerNum; n++) {
        CGPoint lastPoint = [[locationArray objectAtIndex:n] CGPointValue];
        CGPoint updatedLoc = CGPointMake(lastPoint.x + 100 * timeInterval * sin([[directionArray objectAtIndex:n] doubleValue]),
                                         lastPoint.y + 60 * timeInterval * cos([[directionArray objectAtIndex:n] doubleValue]));
        [locationArray replaceObjectAtIndex:n withObject:[NSValue valueWithCGPoint:updatedLoc]];
        [self.drawingView drawToBufferFrom:lastPoint to:updatedLoc withColor:[colorArray objectAtIndex:n]];
    }
    self.lastDate = [NSDate date];
}
In DrawingView.m
- (void)drawToBufferFrom:(CGPoint)lastLoc to:(CGPoint)currentLoc withColor:(UIColor *)color {
    //[color setStroke];
    CGContextSetStrokeColorWithColor(offScreenBuffer, color.CGColor);
    //CGContextSetRGBStrokeColor(offScreenBuffer, 1, 0, 1, 1);
    CGContextBeginPath(offScreenBuffer);
    CGContextSetLineWidth(offScreenBuffer, 10);
    CGContextSetLineCap(offScreenBuffer, kCGLineCapRound);
    CGContextMoveToPoint(offScreenBuffer, lastLoc.x, lastLoc.y);
    CGContextAddLineToPoint(offScreenBuffer, currentLoc.x, currentLoc.y);
    CGContextDrawPath(offScreenBuffer, kCGPathStroke);

    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    UIImage *screenImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);

    [screenImage drawInRect:self.bounds];
}
The direction changes when the user taps on either side of the screen.
However, what I am facing is major lag on the device itself, so I think there must be an easier way to draw those lines without any lag.
UIBezierPath is easier, but a better way, especially if you want to include overlays or sprites, would be cocos2d or a similar game-development framework.
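To illustrate the UIBezierPath suggestion, here is a hedged sketch under the assumption that the view keeps one path per player in a playerPaths array (a name I'm inventing) together with a colorArray parallel to it, and strokes them in drawRect: instead of maintaining an off-screen buffer:
// Append a new segment each tick instead of drawing into offScreenBuffer.
- (void)appendSegmentForPlayer:(NSUInteger)n from:(CGPoint)lastLoc to:(CGPoint)currentLoc {
    UIBezierPath *path = self.playerPaths[n];
    [path moveToPoint:lastLoc];
    [path addLineToPoint:currentLoc];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    for (NSUInteger n = 0; n < self.playerPaths.count; n++) {
        UIBezierPath *path = self.playerPaths[n];
        path.lineWidth = 10.0;
        path.lineCapStyle = kCGLineCapRound;
        [self.colorArray[n] setStroke];   // one UIColor per player, as in the question
        [path stroke];
    }
}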
I am trying to make a custom animated bar graph for an iPad application (i.e., the bar height increases to a set level when activated). I am quite new to iOS development and I just want feedback on how to approach this task.
I have been playing around with the answer in this entry, and I want to know whether that is the right place to start.
If you just want a solid bar, you can create a UIView of the size and placement you need, set its background color, and add it to your view. That's perfectly decent code; there is no shame in using a UIView to draw solid rectangles. :]
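For example (the frame values here are purely illustrative):
// A solid 40pt-wide, 120pt-tall bar placed somewhere in the view.
UIView *bar = [[UIView alloc] initWithFrame:CGRectMake(20, 200, 40, 120)];
bar.backgroundColor = [UIColor blueColor];
[self.view addSubview:bar];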
For more complicated graphics, you might want to create a custom subclass of UIView and override its drawRect: method to do some custom drawing. For example:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 4.0);
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 0, 1.0); // opaque yellow
    CGContextMoveToPoint(context, x1, y1);                 // for suitable definitions of x1, y1, etc.
    CGContextAddLineToPoint(context, x2, y2);
    CGContextStrokePath(context);
}
or whatever other CGContext* sort of drawing you might want to do (e.g. pie charts, line charts, etc).
To animate a bar that you created by adding a UIView with a background color, run the following whenever the animation starts:
timer = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(onTimer:) userInfo:nil repeats:YES];
self.startTime = [NSDate date];
and then add the following message (note: the bar will grow upwards).
- (void)onTimer:(NSTimer *)firedTimer
{
    float time = [self.startTime timeIntervalSinceNow] * -1;
    if (time > kMaxTime)
    {
        [timer invalidate];
        timer = nil;
        time = kMaxTime;
    }
    int size = time * kPixelsPerSecond;
    myBar.frame = CGRectMake(x, y - size, width, size);
}
I don't know about that link, but you can generate them at http://preloaders.net/; that should give you a good base for making your own.