I am working on an app that needs to change the strings of a lot of CATextLayers, but only one or two characters of each (in general, the strings are about 2-5 characters long).
At first I went with UILabels, which were extremely slow, so I tried CATextLayer. It was a lot faster, but not fast enough: I am updating about 150 CATextLayers quite often, all at once, and it just doesn't cut it. I feel a lag.
I then tried going even more low-level with Core Text, drawing each string with a CTLine, which performed about the same as CATextLayer, so I went back to CATextLayers because my positioning code for Core Text wasn't perfect.
I started thinking about caching the first two characters of each string (which are always constant) and only changing the other 3 characters, in a layer with smaller bounds, which I assume would be a bit faster. But will it be? After all, it would have to be composited with the other text layer, and all 150 text layers would still have to be updated.
Does anybody have any advice? How would you approach it?
Attached is a screenshot from Instruments showing that the problem lies in the performance of CATextLayer:
Bitmap fonts are probably the best way to solve this problem, as they are far and away the most performant option for font drawing of this nature. But you need to pre-render them at the scale you want to get the best out of them, both visually and in terms of performance.
And you might be best off using Sprite Kit, as it has native handling of them. Here's a GitHub repo with a helper that makes it easier to use bitmaps rendered by a common bitmap-font tool: https://github.com/tapouillo/BMGlyphLabel
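Even short of moving to Sprite Kit, much of the bitmap-font win can be had by pre-rendering each character once and swapping cached CGImages into plain CALayers, which is essentially the caching idea from the question. A minimal sketch, assuming a fixed font and one layer per character slot (GlyphCache is a hypothetical helper, not part of any framework):

```swift
import UIKit

/// Pre-renders each character once and hands back cached CGImages,
/// so updating a layer becomes a cheap `contents` swap instead of a text re-render.
final class GlyphCache {
    private var cache: [Character: CGImage] = [:]
    private let font = UIFont.monospacedDigitSystemFont(ofSize: 17, weight: .regular)

    func image(for character: Character) -> CGImage? {
        if let cached = cache[character] { return cached }
        let string = NSAttributedString(string: String(character),
                                        attributes: [.font: font,
                                                     .foregroundColor: UIColor.white])
        let size = string.size()
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { _ in string.draw(at: .zero) }
        cache[character] = image.cgImage
        return image.cgImage
    }
}

// Usage: one plain CALayer per character slot. Core Animation then only
// composites the cached bitmap on the GPU; no text layout happens per update.
// glyphLayer.contents = glyphCache.image(for: "7")
```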
Let me start off by showing the UIImageView I have set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Using the UIBezierPath class to draw, which would take a lot of trial and error and many overlapping shapes per body part to get things fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Cropping the highlighted body parts out of the original image and positioning them over the UIImageView depending on which UIButton is selected. There would be one image per body part, but this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging and about inflating the app's storage size. I'm not concerned about how much time it takes to implement; I just want to make sure my app doesn't stutter when it draws all the shapes.
Thanks.
It is very, very unlikely that either of those approaches would have any significant impact on CPU or memory, particularly if, in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither approach would drop you below the maximum screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM is a drop in the bucket compared to what you have available on any iOS device released in the last 5 years, unless it's the Apple Watch.
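For option 2, a single asset per body part can double as the semitransparent overlay if it's loaded as a template image and tinted on the fly. A rough sketch, assuming a hypothetical "arm-cutout" asset and the bodyImageView from the question:

```swift
import UIKit

// One tinted overlay per body part, shown on touch-down.
// "arm-cutout" is a hypothetical asset containing just the cutout's alpha shape.
let overlay = UIImageView(image: UIImage(named: "arm-cutout")?
    .withRenderingMode(.alwaysTemplate))
overlay.tintColor = UIColor.red.withAlphaComponent(0.4) // semitransparent tint
overlay.frame = bodyImageView.bounds
bodyImageView.addSubview(overlay)

// Show/hide on the button's touch events, e.g.:
// button.addTarget(self, action: #selector(showOverlay), for: .touchDown)
// button.addTarget(self, action: #selector(hideOverlay),
//                  for: [.touchUpInside, .touchUpOutside, .touchCancel])
```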
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best approach is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, while being a lot easier to implement.
A quick start on tests, and on performance tests in particular, can be found in Apple's Xcode documentation.
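For illustration, a comparison in XCTest could look roughly like this; the two draw methods are placeholders for your own option-1 and option-2 implementations:

```swift
import XCTest

class HighlightPerformanceTests: XCTestCase {
    // Placeholder implementations: swap in your real drawing code.
    func drawWithBezierPath() { /* option 1: UIBezierPath drawing */ }
    func drawWithCutoutImage() { /* option 2: positioned cutout image */ }

    func testBezierPathPerformance() {
        measure { // runs the block repeatedly and reports average and stddev
            self.drawWithBezierPath()
        }
    }

    func testCutoutImagePerformance() {
        measure {
            self.drawWithCutoutImage()
        }
    }
}
```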
I am taking a stab at John Conway's Game of Life [wiki] & [demo]. I have developed a small program in C to calculate the next state, using a 1D array (but with 2D array logic).
I am hoping to make a small iOS app out of this (porting it to Objective-C!), and am wondering about the best and fastest way to render a grid like the one seen in the video. Note that it would have to render every fraction of a second, and would use an array of 1s and 0s to determine each "block's" colour.
Edit: I'm probably looking at around 10 frames/sec, but a very large grid. It'd be rendering out hundreds of thousands of squares. Of course, if this isn't physically possible with iPhone/iPad technology then I'll reduce the grid size. It is variable without issue, just looks more 'epic' on a grand scale.
Any suggestions will help; I've never touched anything like this before.
The best way depends on your criteria. Fastest would probably be to use OpenGL. You might even be able to write a shader to do the entire simulation. However, OpenGL is hard. Really hard.
I suspect that using Core Graphics and implementing code in a view's drawRect method that renders the array of cells onto the screen would be fast enough. It depends on how many cells you have and how many frames/second you want to draw.
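As a rough sketch of that Core Graphics route (the cell layout and cell size here are assumptions), batching all live cells into a single fill call keeps the per-frame cost low:

```swift
import UIKit

final class GridView: UIView {
    var cells: [UInt8] = []      // 1 = alive, 0 = dead (1D array, 2D logic)
    var columns = 0, rows = 0
    var cellSize: CGFloat = 4

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.black.cgColor)
        // Collect every live cell's rect and fill them in one call;
        // one batched fill is far cheaper than thousands of individual ones.
        var liveRects: [CGRect] = []
        for row in 0..<rows {
            for col in 0..<columns where cells[row * columns + col] == 1 {
                liveRects.append(CGRect(x: CGFloat(col) * cellSize,
                                        y: CGFloat(row) * cellSize,
                                        width: cellSize, height: cellSize))
            }
        }
        ctx.fill(liveRects)
    }
}

// Each generation: update `cells`, then call gridView.setNeedsDisplay().
```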
I am making a game, and it involves a sandstorm. I decided that the basic concept would be that I would make an image that looks roughly like a sandstorm, and then decorate it with some particles/whatever else it takes.
I ran into an issue at step one. I threw together a simple image for testing purposes:
I added that to my game, and the FPS dropped by 60%. I was surprised by the effect one image had, but I wasn't too worried about it. I cut the resolution of the image in half, and again, lots of lag.
Is SpriteKit/iOS really that bad at handling moderately sized images with alpha? I read in another question that the simulator is bad at rendering, but that can't be the entire problem.
Is there any hope for getting this to render without slicing my performance? The particles work well, everything else runs at 60fps just fine, but the addition of this image is apparently a severe drain on resources.
EDIT: I tested my game out on my phone, and I got no lag. So apparently, the simulator is just really bad at rendering after all. At the same time, I am curious as to how to speed up performance, as there is clearly some kind of lag going on.
I'm no expert on SpriteKit, but I had similar experiences with plain core animation and layering.
The issue is that an image with alpha, even in its "opaque" parts, triggers a redraw of all the layers underneath it every time it moves. First check whether this is actually the problem, and then try one of these and see if it improves:
SKCropNode could prevent the elements underneath from being rendered.
Tile the image so only the border tiles have alpha.
Snapshot the layers underneath.
Reduce the number of nodes being rendered; hide the ones that are "under the sandstorm".
And you should use real devices to test your game's performance; you cannot rely on the simulator for that.
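As a first check on a device, SpriteKit's built-in debug counters will tell you whether the draw count (and thus blending) actually spikes when the sandstorm appears; a minimal sketch:

```swift
import SpriteKit

// In your view controller, once the SKView is set up:
if let skView = view as? SKView {
    skView.showsFPS = true        // frame rate
    skView.showsNodeCount = true  // how many nodes are actually rendered
    skView.showsDrawCount = true  // draw passes; a high count suggests heavy blending
}
```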
My first question on StackOverflow. So feeling kind of shy ...
I've been working on and tweaking a custom control for some weeks now. It uses ±6 subclassed CALayers for some fancy animations to give the best possible user feedback. Additionally, there are 2 animated UIViews, adding up to some heavy animation and redrawing during user interaction.
I managed to get the responsiveness and performance on an iPhone 5S up to 50+fps. But on an iPhone 4 it really makes me cry: 8~15fps. I tried to figure out what causes this awful performance, but so far I have found nothing other than the fact that I might be asking too much of Core Animation.
Using layer.drawsAsynchronously = YES; on all CALayers increased the responsiveness A LOT. I also took out all unnecessary animations (including implicit ones). But it still isn't enough; the performance on an iPhone 4 is still not what I want.
I notice a lot of improvement when I set layer.opaque = YES; but due to the design of my interface, that really isn't an option.
Is there anything you guys can suggest I look into? Are there any other "magic" properties, like .drawsAsynchronously, I might want to try?
Are there any resources you can suggest on how to debug/analyse the performance?
Any help is appreciated. Thanks in advance!
The quick answer is: don't use drawRect:, because it's very expensive.
CALayers are the best way.
If you need to draw something complex, it's worth considering drawing it once with Core Graphics into an image and adding that image to the view: the one-off drawing happens on the CPU, but from then on the GPU merely composites the cached image, which is much more efficient than redrawing every frame.
Have a look at this link; there is a good explanation of how UIView works and how to write the most efficient code.
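A sketch of that render-once idea (the drawing itself is a placeholder, and `view` is assumed to be a view controller's view):

```swift
import UIKit

// Render the expensive drawing a single time into a UIImage...
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 200, height: 200))
let image = renderer.image { context in
    // Placeholder drawing: replace with your complex Core Graphics code.
    UIColor.blue.setFill()
    UIBezierPath(ovalIn: CGRect(x: 10, y: 10, width: 180, height: 180)).fill()
}

// ...then hand it to a layer; from here on the GPU just composites it,
// with no repeated drawRect: work on the CPU.
let imageLayer = CALayer()
imageLayer.contents = image.cgImage
imageLayer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
view.layer.addSublayer(imageLayer) // assumes this runs inside a view controller
```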
I've been looking at a lot of customized iOS user interfaces. I wonder: is it better to customize the UI using images or using frameworks like Core Graphics/Quartz, or is it a case-by-case decision, as in using frameworks for some elements and images for others?
It is very hard to guess your particular situation. I can state that iOS gives us a lot of leverage to make any custom interface. I would use:
images for complicated graphic elements: buttons, icons, arrows, etc.
images + stretching to get complicated backgrounds/elements (see the sketch below)
custom drawing for anything that consists of lines, ellipses, rectangles, linear and/or radial gradients, simple image preprocessing, etc.
The key idea is to find a balance between memory usage and processing time. Note: in my experience, interfaces based on images created by a professional designer look awesome.
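The "images + stretching" bullet usually means resizable images: a tiny asset whose corners and edges are pinned by cap insets while the middle stretches to any size. A minimal sketch, assuming a hypothetical "panel-bg" asset:

```swift
import UIKit

// "panel-bg" is a hypothetical small asset; the cap insets protect the
// corners and edges while the center region stretches to fill any frame.
let background = UIImage(named: "panel-bg")?
    .resizableImage(withCapInsets: UIEdgeInsets(top: 12, left: 12,
                                                bottom: 12, right: 12),
                    resizingMode: .stretch)
let backgroundView = UIImageView(image: background)
backgroundView.frame = CGRect(x: 0, y: 0, width: 320, height: 120)
```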
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing via Core Graphics (Quartz) uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use Quartz.
It depends on you, as well. Would you rather write code for quartz for an hour and debug it, or would you rather spend an hour in photoshop? How fast are you at PS? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach (nontrivial). Icons are a good example of an image, while large gradients are a good use for drawing. Drawing can take some time and experience to get right, compared to graphic assets, but you can reuse that implementation later, and it uses less memory in many cases (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Staying with the gradient vs. image example, quality and time are also factors: resizing/scaling a simple image can take a lot of CPU or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.
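To make the gradient example concrete: a large gradient drawn with CAGradientLayer costs almost nothing in memory and scales to any size without the artifacts a stretched image can show. A minimal sketch, assumed to run inside a view controller:

```swift
import UIKit

// A full-screen gradient from code: no large asset, no scaling artifacts.
let gradient = CAGradientLayer()
gradient.frame = view.bounds
gradient.colors = [UIColor.black.cgColor, UIColor.darkGray.cgColor]
gradient.startPoint = CGPoint(x: 0.5, y: 0.0) // top center
gradient.endPoint = CGPoint(x: 0.5, y: 1.0)   // bottom center
view.layer.insertSublayer(gradient, at: 0)
```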