I'm working on an iPhone app that displays a large scrollable and zoomable surface containing a grid of pictures with text labels over them. I need to be able to change the position of pictures individually and to control the opacity of the labels. I tried to accomplish this using UIScrollView and Core Animation.
The subview of the UIScrollView contains two main sublayers: one for the pictures and one for the labels. The pictures are CALayers with their contents property set to a CGImage, added as sublayers of the pictures layer. The labels are CATextLayers and are sublayers of the second layer. The opacity of the labels layer changes depending on the scroll view's zoom scale.
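Roughly, the setup looks like this (the grid layout, cell size, and zoom-to-opacity mapping below are simplified placeholders, not my actual code):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Simplified version of the setup: contentView is the scroll view's zoomable
// subview; images and titles stand in for the real content.
static void buildGrid(UIView *contentView, UIScrollView *scrollView,
                      NSArray<UIImage *> *images, NSArray<NSString *> *titles)
{
    CALayer *picturesLayer = [CALayer layer];   // holds all picture layers
    CALayer *labelsLayer   = [CALayer layer];   // holds all text layers
    picturesLayer.frame = contentView.bounds;
    labelsLayer.frame   = contentView.bounds;
    [contentView.layer addSublayer:picturesLayer];
    [contentView.layer addSublayer:labelsLayer];

    CGFloat cell = 100.0;                       // placeholder grid cell size
    for (NSUInteger i = 0; i < images.count; i++) {
        CGRect frame = CGRectMake((i % 5) * cell, (i / 5) * cell, cell, cell);

        CALayer *picture = [CALayer layer];
        picture.frame = frame;
        picture.contents = (__bridge id)images[i].CGImage;
        [picturesLayer addSublayer:picture];

        CATextLayer *label = [CATextLayer layer];
        label.frame = frame;
        label.string = titles[i];
        label.fontSize = 14.0;
        label.contentsScale = [UIScreen mainScreen].scale;
        [labelsLayer addSublayer:label];
    }

    // Fade the whole labels layer depending on zoom scale (simplified mapping).
    labelsLayer.opacity = (scrollView.zoomScale > 1.0) ? 1.0f : 0.5f;
}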
And here is the problem: everything works fine when the labels are fully opaque or fully transparent, but when they are semi-transparent, scrolling becomes jerky and the frame rate drops to about 35 FPS. The blending of these layers is obviously slowing everything down, but I couldn't find a way to fix it. I would appreciate any ideas on how to improve performance in this situation. Is there perhaps a better way to draw text labels than CATextLayer?
Is it possible for you to merge the two "main layers" of your UIScrollView into one? Also, is it possible for you to add layers directly to UIScrollView's layer instead of adding additional ones?
I find that I get huge performance wins by reducing the number of layers that exist for the sole purpose of containing other layers.
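For example, something along these lines (just a sketch; how you track the label layers for the later fade is up to you):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch only: add picture and label layers straight to the scroll view's
// layer instead of nesting them inside two container layers.
static void addTile(UIScrollView *scrollView, UIImage *image,
                    NSString *title, CGRect frame)
{
    CALayer *picture = [CALayer layer];
    picture.frame = frame;
    picture.contents = (__bridge id)image.CGImage;
    [scrollView.layer addSublayer:picture];

    CATextLayer *label = [CATextLayer layer];
    label.frame = frame;
    label.string = title;
    label.contentsScale = [UIScreen mainScreen].scale;
    label.name = @"label";   // so the label layers can be found and faded later
    [scrollView.layer addSublayer:label];
}

The trade-off is that you can no longer fade every label by changing one container layer's opacity; you would have to loop over the label layers (or keep just the labels in a container).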
One solution is to add a shadow to the backmost layer of both the image layer and the text layer.
There are a number of shadow properties that you can tweak, i.e. shadowPath, shadowColor, shadowOffset and shadowRadius. Set each of them; don't miss any. Also set yourLayer.masksToBounds = NO.
Don't forget to set yourLayer.shouldRasterize = YES, because rasterizing the layer gives a further performance improvement.
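A minimal sketch of the above, assuming containerLayer is the backmost layer you are decorating (shadowOpacity is added here because the shadow is invisible without it):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static void applyShadowAndRasterize(CALayer *containerLayer)
{
    containerLayer.shadowPath    = [UIBezierPath bezierPathWithRect:containerLayer.bounds].CGPath;
    containerLayer.shadowColor   = [UIColor blackColor].CGColor;
    containerLayer.shadowOffset  = CGSizeMake(0.0, 2.0);
    containerLayer.shadowRadius  = 4.0;
    containerLayer.shadowOpacity = 0.5;   // default is 0, which hides the shadow
    containerLayer.masksToBounds = NO;    // otherwise the shadow is clipped away

    // Cache the composited layer as a bitmap so it isn't re-blended every frame.
    containerLayer.shouldRasterize = YES;
    containerLayer.rasterizationScale = [UIScreen mainScreen].scale;
}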
I have created a nice-looking collection view with image cells inside. I wanted the cells to be round, so I set masksToBounds to true and set the corner radius to half of the image's width. This all works fine, but now I want to add shadow effects to the cells, and there are two problems:
I want to have an outer shadow on some of the cells (which is currently being cropped off because I set masksToBounds to true...
I want to have an inner shadow effect on some other cells (they are supposed to look less highlighted), but I don't even know how to make an inner shadow...
Any help desperately appreciated,
I work with C# in Xamarin.iOS by the way, but I will also understand any other language.
Cheers!
Take a look at the UIView.Layer shadow properties (ShadowOpacity, ShadowOffset, ShadowRadius and so on) and play with their values. That said, I recommend using an image to simulate the effect instead, because layer shadow effects are heavy on the device's processor.
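If you do go the layer route, one common workaround for the clipping problem is to wrap the rounded image view in a container view: the container draws the (unclipped) shadow and the image view does the corner clipping. A sketch in Objective-C (the same layer API is exposed in Xamarin.iOS; names and values are illustrative):

#import <UIKit/UIKit.h>

static UIView *makeRoundCellContent(UIImage *image, CGRect frame)
{
    UIView *container = [[UIView alloc] initWithFrame:frame];
    container.layer.shadowColor   = [UIColor blackColor].CGColor;
    container.layer.shadowOpacity = 0.4;
    container.layer.shadowOffset  = CGSizeMake(0.0, 2.0);
    container.layer.shadowRadius  = 3.0;
    container.layer.masksToBounds = NO;   // do NOT clip here, or the shadow disappears

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:container.bounds];
    imageView.image = image;
    imageView.layer.cornerRadius  = CGRectGetWidth(container.bounds) / 2.0;
    imageView.layer.masksToBounds = YES;  // clip the image to the circle here
    [container addSubview:imageView];

    return container;
}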
Take a look at the XamSvg component. You can create two SVGs with gradients that mimic an inset and an outset shadow. Because an SVG is stretchable, it can be set as your cell's background to simulate the shadow.
Here is the iPad Simulator with four nested UIViews, each drawing a custom background and containing an inner UILabel. I am rotating the top UIView's CALayer by getting its layer and setting its transform to a rotateY CATransform3D, animated on a separate thread (though the changes to the transform are sent on the main thread, naturally):
Note: this animation does not loop correctly, hence it appears to bounce.
The layers do animate as a whole, but curiously, the first child and its descendants appear to be floating above the UIView with the transform applied!
The UIViews themselves are children of another UIView, which has a red background. There are no other transformations applied anywhere else.
The position of each UIView was set with setFrame at the start.
What is causing this strange behaviour, and how can I ensure the child UIViews transform with their parent, giving a flat appearance to the surface as a whole?
Well. Perhaps unsurprisingly, I was doing something silly, but since I'd not used CALayer transforms before, I didn't know whether they were acting up. I had overridden layoutSubviews on the UIViews I was creating, and rotating the CALayer was triggering that call, which then pushed the child components' frames around due to a bug.
The problem is that CALayers don't actually apply 3D perspective by default. To get it, you need to make a minor change to the layer's transform (which is of type CATransform3D).
You want to change the .m34 field of the transform to a small negative value. Try -1/200 to -1/500 as a starting range. If I remember correctly, it should be roughly the negative of 1 over the image width/height.
Change the .m34 property of the layer that you want to appear to "come off the page" and rotate in 3D. When you do that, the layer's Z position does matter: closer layers appear bigger, and layers that are further away can disappear behind other things.
I suggest you do a Google search on "CATransform3D m34" for more information. There is a fair amount of information on the net about it.
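As a rough sketch (the -1/500 value is just a starting point to tweak):

#import <QuartzCore/QuartzCore.h>

// Apply a Y-axis rotation with visible 3D perspective.
static void applyPerspectiveRotation(CALayer *layer, CGFloat angleRadians)
{
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1.0 / 500.0;   // small negative value enables perspective
    transform = CATransform3DRotate(transform, angleRadians, 0.0, 1.0, 0.0);
    layer.transform = transform;
}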
I have multiple views with many UILabels on the views. (all constructed in Interface Builder).
I am then trying to create a "smaller" replica of my view when you pinch the screen.
To do this I apply:
view.transform = CGAffineTransformMakeScale(.5, .5);
and then I also adjust the frame of view.
The problem is that after the transformation, the text in all of my UILabels becomes "blurry". It doesn't stay pixel-perfect as it does in the full-scale view.
Is there a way to keep the labels sharp, rather than pixelated, after the transformation?
Applying a transform to a UIView or CALayer merely scales the rasterized bitmap of that layer or view. This can leave the resulting UI elements blurry, because they aren't re-rendered at the new scale.
If you really want your text or images to be crisp at the new scale factor, you're going to need to manually resize them and cause them to redraw instead of applying a transform. I described one way that I did this with a UIView hosted in a UIScrollView in this answer.
You might be able to create a single method that traverses the view hierarchy of your main view, recursively reads each subview's frame, scales it down, and then forces a redraw of its contents. Transforms are still great for interactive manipulation or animation; you can then trigger a full manual scaling and redraw at the end of the manipulation or animation.
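As a rough sketch of that idea (label fonts are shown as one illustrative special case; a real implementation would need to handle other view types too):

#import <UIKit/UIKit.h>

// Scale every subview's frame and force a redraw instead of applying a transform.
static void scaleViewHierarchy(UIView *view, CGFloat scale)
{
    for (UIView *subview in view.subviews) {
        CGRect f = subview.frame;
        subview.frame = CGRectMake(f.origin.x * scale, f.origin.y * scale,
                                   f.size.width * scale, f.size.height * scale);

        if ([subview isKindOfClass:[UILabel class]]) {
            UILabel *label = (UILabel *)subview;
            label.font = [label.font fontWithSize:label.font.pointSize * scale];
        }

        [subview setNeedsDisplay];          // re-render at the new size
        scaleViewHierarchy(subview, scale); // recurse into children
    }
}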
I'm attempting to optimize my app. It's quite visually rich, so it has a lot of layered UIViews with large images, blending, and so on.
I've been experimenting with the shouldRasterize property on CALayers. In one case in particular, I have a UIView that consists of lots of sub views including a table. As part of a transition where the entire screen scrolls, this UIView also scales and rotates (using transforms).
The content of the UIView remains static, so I thought it would make sense to set view.layer.shouldRasterize = YES. However, I didn't see an increase in performance. Could it be that it's re-rasterizing every frame at the new scale and rotation? I was hoping that it would rasterize once at the beginning, while it still has an identity transform, and then reuse that cache as it scales and rotates during the transition.
If not, is there a way I could force it to happen? Short of adding a redundant extra super-view/layer that does nothing but scale and rotate its rasterized contents...
You can answer your own question by profiling your application with the Core Animation instrument. Note that this instrument is only available when profiling on a device.
Enable "Color Hits Green and Misses Red". If your layer stays red, it means it is indeed being re-rasterized every frame.
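If it does turn out to be re-rasterizing, one thing worth trying (no guarantee it helps in your case) is rasterizing at a fixed scale before the animated transform starts, so the cached bitmap is transformed rather than regenerated:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static void rasterizeForTransition(CALayer *layer)
{
    layer.shouldRasterize = YES;
    layer.rasterizationScale = [UIScreen mainScreen].scale;   // rasterize at screen scale
}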
Blending and offscreen-rendering are both expensive in Core Animation.
Both can be seen with the Core Animation instrument in Instruments, using its debug options.
Here is my case:
I display 50x50 PNG images in UIImageViews and want to round the images with a 6-point corner radius. The first method is to set cornerRadius and masksToBounds on the UIImageView's layer, which causes offscreen rendering. The second method is to make PNG copies with transparent corners, which causes blending (because of the alpha channel).
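For reference, the two approaches look roughly like this (the 6-point radius is the real value; the rest is simplified, and the second approach could just as well be pre-rendered into the assets):

#import <UIKit/UIKit.h>

// 1) Corner radius + masksToBounds on the image view (offscreen rendering).
static void roundWithCornerRadius(UIImageView *imageView)
{
    imageView.layer.cornerRadius = 6.0;
    imageView.layer.masksToBounds = YES;
}

// 2) A copy of the image with transparent rounded corners
//    (no offscreen pass, but the transparent pixels must be blended).
static UIImage *roundedCopyOfImage(UIImage *image)
{
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0); // NO = keep alpha
    [[UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:6.0] addClip];
    [image drawInRect:rect];
    UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rounded;
}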
I've tried both, but I can't see a significant performance difference. However, I still want to know which is worse in theory, and what the best practices are, if any.
Thanks a lot!
Well, the short answer is that the blending has to occur either way to display the transparent corner pixels correctly. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember that scrolling is the most common type of animation). Also, I'm able to recreate situations where cornerRadius causes rendering errors on older devices (the iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime.
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scroll views (e.g. a highly customized UITableViewCell), consider using the drawRect: method to render those portions more efficiently; a sketch follows after these recommendations. Continue using subviews for image elements, so that rendering of the overall view is split between pre-drawn content (the subviews) and "just-in-time" drawing (drawRect:). Obviously, experimentation (frames per second while scrolling) may show that violating this rule of thumb is optimal for your particular views.
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
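Here is a rough sketch of the drawRect: idea from the fourth point (class name, layout, and fonts are illustrative):

#import <UIKit/UIKit.h>

// A flat content view that draws its text directly instead of hosting
// several UILabel subviews.
@interface FlatTextCellView : UIView
@property (nonatomic, copy) NSString *title;
@property (nonatomic, copy) NSString *subtitle;
@end

@implementation FlatTextCellView

- (void)drawRect:(CGRect)rect
{
    [[UIColor whiteColor] setFill];
    UIRectFill(rect);   // opaque background, so no blending is needed

    [self.title drawInRect:CGRectMake(10, 5, rect.size.width - 20, 22)
            withAttributes:@{ NSFontAttributeName: [UIFont boldSystemFontOfSize:16],
                              NSForegroundColorAttributeName: [UIColor blackColor] }];

    [self.subtitle drawInRect:CGRectMake(10, 30, rect.size.width - 20, 18)
               withAttributes:@{ NSFontAttributeName: [UIFont systemFontOfSize:13],
                                 NSForegroundColorAttributeName: [UIColor darkGrayColor] }];
}

@end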
After watching the WWDC videos and experimenting with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means the system needs extra time to calculate the color of the pixels on transparent layers; the more transparent layers you have, and the bigger those layers are, the more time blending takes.
Offscreen rendering means the system performs more than one rendering pass. In the first pass it renders without visualization, just to calculate the bounds and shape of the area that should be rendered. In the following passes it performs the regular rendering (based on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.