I recently came across this brilliant article about improving scroll performance with UITableViewCells: http://engineering.twitter.com/2012/02/simple-strategies-for-smooth-animation.html -- While many great tips can be found in this article, there is one in particular that has me intrigued:
Tweets in Twitter for iPhone 4.0 have a drop shadow on top of a subtle textured background. This presented a challenge, as blending is expensive. We solved this by reducing the area Core Animation has to consider non-opaque, by splitting the shadow areas from content area of the cell.
Using the iOS Simulator, clicking Debug - Color Blended Layers would reveal something like this:
The areas marked in red are blended, and the green area is opaque. Great. What the article fails to mention is: how do I implement this? It is my understanding that a UIView is either opaque or it isn't. It seems to me that the only way to accomplish this would be with subviews, but the article explicitly describes that as a naive implementation:
Instead, our Tweet cells contain a single view with no subviews; a single drawRect: draws everything.
So how do I section off what is opaque, and what is not in my single drawRect: method?
In the example you show, I don't believe they're showing a background through the view. I think they're simulating a background in Core Graphics. In other words, in each cell they draw a light gray color for the background. They then draw the shadow (using transparency), and finally they draw the rest of the opaque content on top. I could be wrong, but I don't believe you can make portions of the view transparent. If you can, I'd be very, very interested in it, because I use Core Graphics all the time but avoid rounded corners: blending the entire view for them just doesn't seem worth it.
Update
After doing some more research and looking through Apple's docs, I don't believe it's possible for only part of a view to be opaque. Also, after reading through Twitter's blog post, I don't think they are saying that they did so. Notice that when they say:
Instead, our Tweet cells contain a single view with no subviews; a single drawRect: draws everything.
They were specifically talking about UILabel and UIImageView. In other words, instead of using those views they're drawing the image directly using Core Graphics. As for the UILabels, I personally use Core Text since it has more font support but they may also be using something simpler like NSString's drawAtPoint:withFont: method. But the main point they're trying to get across is that the content of the cell is all one CG drawing.
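For illustration, here's a minimal sketch of that idea (the property names are hypothetical; this is my guess at the approach, not Twitter's actual code):
- (void)drawRect:(CGRect)rect
{
    // Hypothetical single-pass cell drawing: no UIImageView, no UILabel.
    // Draw the avatar image directly with UIKit/Core Graphics.
    [self.avatarImage drawInRect:CGRectMake(5.0f, 5.0f, 48.0f, 48.0f)];

    // Draw the text directly as well (Core Text would also work here).
    [self.tweetText drawAtPoint:CGPointMake(63.0f, 5.0f)
                       withFont:[UIFont systemFontOfSize:14.0f]];
}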
Then they move to a new section: Avoid Blending. Here they make a point of saying that they avoid blending by:
splitting the shadow areas from content area of the cell.
The only way to do this is to use different views. There are two approaches they could be using, but first note that the cell dividers are themselves overlays (provided by the tableView). The first way is to use multiple views inside the cell. The second way is to underlay/overlay the shadows/blended-views behind/over the cells by inserting the appropriate views into the UIScrollView. Given their previous statement about having only one view/drawRect for each cell, this is probably what they're doing. Each method will have its challenges, but personally I think it would be easier to split the cell into 3 views (shadow, content, shadow). It would make it a lot easier to handle first/last cell situations.
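A rough sketch of the three-view split, as I imagine it (the sizes and the image name are my own placeholders, not Twitter's code):
// Only the thin top and bottom strips are non-opaque; the middle stays opaque.
CGFloat w = cell.contentView.bounds.size.width;
CGFloat h = cell.contentView.bounds.size.height;
CGFloat shadowHeight = 5.0f; // placeholder strip height
UIImage *shadowImage = [UIImage imageNamed:@"cell-shadow"]; // placeholder asset

UIImageView *topShadow = [[UIImageView alloc] initWithImage:shadowImage];
topShadow.frame = CGRectMake(0.0f, 0.0f, w, shadowHeight);
topShadow.opaque = NO; // blended, but tiny

UIImageView *bottomShadow = [[UIImageView alloc] initWithImage:shadowImage];
bottomShadow.frame = CGRectMake(0.0f, h - shadowHeight, w, shadowHeight);
bottomShadow.opaque = NO;

// In practice this middle view would be the single drawRect: content view.
UIView *content = [[UIView alloc] initWithFrame:
    CGRectMake(0.0f, shadowHeight, w, h - 2.0f * shadowHeight)];
content.opaque = YES;
content.backgroundColor = [UIColor whiteColor];

[cell.contentView addSubview:content];
[cell.contentView addSubview:topShadow];
[cell.contentView addSubview:bottomShadow];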
I'd have to guess it's something along these lines:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_shadows/dq_shadows.html
CGContextRef context = UIGraphicsGetCurrentContext();
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:self.bounds cornerRadius:10.0f];

// Blended edge: clip to the strip that carries the shadow and draw it normally.
CGContextSaveGState(context);
CGRect leftRect = CGRectMake(0.0f, 0.0f, 20.0f, self.bounds.size.height); // illustrative; use the real shadow strip
CGContextClipToRect(context, leftRect);
CGContextSetBlendMode(context, kCGBlendModeNormal);
// Draw the shadow: call CGContextSetShadowWithColor, then perform all the
// drawing to which the shadow should apply.
CGContextSetShadowWithColor(context, CGSizeMake(1.0f, 1.0f), 10.0f, [UIColor blackColor].CGColor);
CGContextAddPath(context, path.CGPath);
CGContextDrawPath(context, kCGPathStroke);
CGContextRestoreGState(context);

// Opaque middle: clip to the content area and draw with kCGBlendModeCopy so
// nothing there needs blending. The blend mode must be set before drawing.
CGContextSaveGState(context);
CGRect middleSection = CGRectInset(self.bounds, 20.0f, 0.0f); // illustrative; use the real content area
CGContextClipToRect(context, middleSection);
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextSetFillColorWithColor(context, self.backgroundColor.CGColor);
CGContextFillRect(context, self.bounds);
CGContextRestoreGState(context);
My opinion is: don't let Core Animation draw shadows through the various layer properties. Just draw a prerendered image on both sides, which is in effect a shadow. To accommodate variable cell heights, a stretched draw may do the trick.
EDIT:
If the background is plain, a prerendered shadow can be applied to both sides without anyone noticing that it affects the visual appearance.
If that is not applicable, the table view has to be shrunk to the size it would have without the shadow. The shadow can then be blended just once, "on top" of the table view, instead of once per cell; it doesn't actually scroll. This will only work if the shadow has no "texture", otherwise one will notice that it is just applied on top.
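A minimal sketch of the prerendered, stretchable shadow idea (the asset name and metrics are my assumptions; resizableImageWithCapInsets: requires iOS 5):
// Stretch a prerendered shadow image to whatever height the cell has.
CGFloat shadowWidth = 10.0f; // assumed shadow thickness
UIImage *shadow = [[UIImage imageNamed:@"side-shadow"] // assumed asset
    resizableImageWithCapInsets:UIEdgeInsetsMake(10.0f, 0.0f, 10.0f, 0.0f)];

UIImageView *leftShadow = [[UIImageView alloc] initWithImage:shadow];
leftShadow.frame = CGRectMake(0.0f, 0.0f, shadowWidth,
                              cell.contentView.bounds.size.height);
leftShadow.autoresizingMask = UIViewAutoresizingFlexibleHeight; // tracks cell height
[cell.contentView addSubview:leftShadow];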
My view overrides the drawRect method to render graphics.
And I've recently added a gradient background using CAGradientLayer and [view.layer insertSublayer: atIndex:0].
However the CAGradientLayer gets drawn over my graphics instead of underneath.
Setting the alpha of the gradient colours to 0.5 shows that my graphics are still being drawn.
This is an app with a high graphics refresh rate; I cannot afford to redraw the gradient on every refresh, so I was counting on the CAGradientLayer's backing store to keep things performant.
How should I approach this?
Thank you!
Sublayers are always drawn on top of the base layer, much like subviews are always drawn on top of the base view.
What you could try is working at the superview's level and adding the gradient sublayer there. It will be displayed below your view.
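A minimal sketch of that, assuming your drawRect: view is called graphicsView and sits inside a superview (the names are mine):
// Requires QuartzCore (#import <QuartzCore/QuartzCore.h>).
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = graphicsView.frame; // the spot the view occupies in the superview
gradient.colors = @[(id)[UIColor darkGrayColor].CGColor,
                    (id)[UIColor blackColor].CGColor];
// Index 0 puts the gradient below graphicsView's own layer.
[graphicsView.superview.layer insertSublayer:gradient atIndex:0];

// The drawRect: view must be transparent for the gradient to show through.
graphicsView.opaque = NO;
graphicsView.backgroundColor = [UIColor clearColor];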
I have a custom UITableViewCell subclass which shows an image and text over it.
The image is downloaded while the text is readily available at the time the table view cell is displayed.
From various places, I read that to improve performance it is better to have just one view and draw everything in the view's drawRect: method, as compared to having multiple subviews (in this case a UIImageView and two UILabels).
I don't want to draw the image in the custom table view cell's drawRect: because
the image will probably not be available the first time it's called, and
I don't want to redraw the whole image every time someone calls drawRect:.
The image should only be drawn when someone asks for it to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect: method.
The problems:
I am not able to show the image on the screen once it is downloaded.
The code I am currently using is:
- (void)drawImageInView
{
//.. completion block after downloading from network
if (image) { // Image downloaded from the network
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(context, 1.0);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGPoint posOnScreen = self.center;
CGContextDrawImage(context,
                   CGRectMake(posOnScreen.x - image.size.width / 2,
                              posOnScreen.y - image.size.height / 2,
                              image.size.width,
                              image.size.height),
                   image.CGImage);
UIGraphicsEndImageContext();
}
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext() be enough to ensure that the text is rendered on top of the image?
You're right that drawing text yourself will make your application faster, as there's no UILabel object overhead, but UIImageView is highly optimized and you will probably never be able to draw images faster than that class does. Therefore I highly recommend you use UIImageView to draw your images. Don't fall into the optimization pitfall: only optimize when you see that your application is not performing at its best.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
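A minimal sketch, with a hypothetical download helper (the helper, downloader, and property names are my assumptions):
// Hypothetical fetch helper; the point is to set the property on the main thread.
[downloader fetchImageAtURL:url completion:^(UIImage *image) {
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.photoView.image = image; // UIImageView renders it efficiently
    });
}];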
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.
I have 2 UIImageViews that are shown on top of each other. One of them can be dragged around using a Gesture Recognizer.
Is there a way that the ImageViews can be rendered using a blend mode like Multiply? Such that when they move on top of each, they get rendered with that blend mode?
You have to override the drawRect: method of the parent view to achieve something like this:
- (void)drawRect:(CGRect)rect
{
    // UIImage's drawInRect:blendMode:alpha: draws into the current
    // (UIKit-provided) graphics context, so no explicit CGContextRef is needed.
    [image1.image drawInRect:image1.frame blendMode:kCGBlendModeMultiply alpha:1.0f];
    [image2.image drawInRect:image2.frame blendMode:kCGBlendModeMultiply alpha:1.0f];
    [super drawRect:rect];
}
It draws the two images into the parent view's graphics context, using a multiply blend mode.
To be able to see this, you'll need to set the alpha of the two image views to 0, or the newly drawn content will be obscured. Since the parent view is redrawing them, you'll see the resulting multiplied versions.
Also, whenever the images' positions get updated, you'll need to call setNeedsDisplay on the parent view, to force it to call drawRect once again.
I'm certain there are probably more efficient ways to utilize Quartz 2D to achieve what you want, but this is probably the simplest.
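For example, the pan handler might look something like this (a sketch; I'm assuming the handler lives on the parent view):
- (void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    // Move the dragged image view by the pan translation.
    CGPoint translation = [recognizer translationInView:self];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                         recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointZero inView:self];

    // Redraw the blended composite after every move.
    [self setNeedsDisplay];
}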
I'm working on an educational app involving complex scripts, in which I paint parts of different 'letters' different colours. UILabel is out of the question, so I've drilled down into Core Text and am having a surprisingly successful go of painting glyphs in CALayers.
What I haven't managed to do is animate the size of my custom drawn text. Basically I have text on 'tiles' (CALayers) that move around the screen. The moving around is okay, but now I want to zoom in on the ones that users press.
My idea is to try to cache a 'full resolution' tile and then draw it to scale during an animation of the image's bounds. So far I've tried to draw, cache, and then redraw such a tile in the following way:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGContextRef context = UIGraphicsGetCurrentContext();
//do some drawing...
myTextImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then in [CALayer drawInContext:(CGContextRef)context],
I call [myTextImage drawAtPoint:CGPointZero].
When I run the app, the console shows <Error>: CGContextDrawImage: invalid context 0x0. Meanwhile, in that same method, I can continue drawing text into the context perfectly well, even after that error is logged.
So I have two questions: (1) Why isn't this working? Should I be using CGBitmap instead?
And more important: (2) Is there a smarter way of solving the overall problem? Maybe storing my text as paths and then somehow getting CAAnimation to draw it at different scales as the bounds of the enclosing CALayer change?
Okay, this is much easier than I thought. Simply draw the text in the drawInContext: of a CALayer inside of a UIView. Then animate the view using the transform property, and the text will shrink or expand as you like.
Just pay attention to scaling so that the text doesn't get blocky. The easiest way to do that is to make sure the transform scale factors do not go above 1. In other words, make the 'default' 1:1 size of your UIView the largest size you ever want to display it.
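A minimal sketch of that (tileView and the scale factor are my own example values):
// Render the text at its largest 1:1 size, then show it shrunken by default.
tileView.transform = CGAffineTransformMakeScale(0.5f, 0.5f);

// On tap, animate back up to full size; the text scales with the view.
[UIView animateWithDuration:0.3 animations:^{
    tileView.transform = CGAffineTransformIdentity;
}];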
I am creating an iOS user interface to allow a user to pick a rectangle within an existing image, dragging the corners of that rectangle to the desired size. I now have four custom UIButtons (30% alpha) and a custom view (also with 30% alpha) that draws the dashed lines between the four corner buttons.
To "improve" the interface, I would like my drawRect code to make the cropped portion of the image appear "normal" while everything outside the cropped region is washed out (filled with white color, which will give me the correct effect since the UIView is set to 30% alpha).
The obvious algorithm would be:
Fill the entire image with [UIColor whiteColor] fill
Draw the four dashed lines with a [UIColor clearColor] fill
When I do this, the clear fill isn't showing up. I believe this is because the "fill" of the clear color in step #2 isn't being seen: the pixels were already set to white in step #1, and filling with a fully transparent color under the normal blend mode leaves them unchanged. Perhaps there's a blend mode that would let the transparency of the second rectangle show? I'm not sure about the various blend modes.
My second attempt, which works, does the following:
Draw the four dashed lines with [UIColor clearColor] fill
Draw four additional rectangles with [UIColor whiteColor] fill, each representing the portions to the left, right, above, and below the cropped region.
As I mentioned, this method works, but it seems to me there should be a simpler way that doesn't require calculating those four additional rectangles each and every time.
There is a similar question on SO Create layer mask with custom-shaped hole that uses CALayer and masks, but this seems to be overkill for what I need.
Does anybody have any suggestions on how to improve this?
You can set the blend mode to kCGBlendModeCopy and use clearColor to reset a pixel's alpha to zero. You can presumably also use kCGBlendModeClear but I haven't tested that.
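A minimal sketch of that first approach (cropRect stands in for your crop rectangle):
// Wash out the whole view with white first...
[[UIColor whiteColor] setFill];
UIRectFill(self.bounds);

// ...then punch the crop region's alpha back to zero.
// kCGBlendModeCopy writes clearColor's pixels (alpha 0) verbatim.
[[UIColor clearColor] setFill];
UIRectFillUsingBlendMode(cropRect, kCGBlendModeCopy);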
You can also set the clipping path to just contain the pixels you want cleared and call CGContextClearRect(gc, CGRectInfinite).
If you want to use a clipping mask with a hole in it, you can do so without using a CALayer, and you can build it a little more simply than in the answer you linked, by using the even-odd rule and CGRectInfinite:
CGContextSaveGState(gc); {
CGContextBeginPath(gc);
CGContextAddRect(gc, myRect); // or whatever simple path you want here
CGContextAddRect(gc, CGRectInfinite);
CGContextEOClip(gc);
// drawing code here is clipped to the exterior of myRect
} CGContextRestoreGState(gc);
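Within that clipped block, a single fill then washes out everything outside myRect (a sketch using the same names):
    CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
    CGContextFillRect(gc, self.bounds); // only the exterior of myRect is touched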