iOS - Interface design, images or custom drawing?

I've been looking at a lot of customized iOS user interfaces. I wonder, is it better to customize the UI using images or using frameworks like Core Graphics (Quartz), or is it a case-by-case decision, where I draw some elements in code and use images for others?

It is very hard to guess your particular situation, but iOS gives us a lot of leeway to build any custom interface. I would use:
images for complicated graphic elements: buttons, icons, arrows, etc.
images + stretching for complicated backgrounds/elements
custom drawing for anything made of lines, ellipses, rectangles, linear and/or radial gradients, simple image preprocessing, etc.
The key idea is to find the balance between memory usage and processing time. Note: in my experience, interfaces based on images created by a professional designer look awesome.
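For the stretching point, here is a minimal sketch using UIKit's resizable-image API; the asset name and insets are hypothetical:

    import UIKit

    // "panel_bg" is a hypothetical asset; the cap insets mark the corner
    // regions that must not stretch, so only the middle stretches to fit.
    let background = UIImage(named: "panel_bg")?
        .resizableImage(withCapInsets: UIEdgeInsets(top: 12, left: 12, bottom: 12, right: 12),
                        resizingMode: .stretch)
    let panel = UIImageView(image: background)
    panel.frame = CGRect(x: 0, y: 0, width: 320, height: 200) // any size renders cleanly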

Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing with Core Graphics (Quartz 2D) uses less memory but takes more time.

Case by case. If you want a lot of complex graphics that aren't just lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need Quartz.
It depends on you as well. Would you rather spend an hour writing and debugging Quartz code, or an hour in Photoshop? How fast are you in Photoshop? Do you already know Quartz?

It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach; this is nontrivial. Icons are a good example of where an image fits, while large gradients are a good use for drawing. Drawing can take some time and experience to get right compared to graphic assets, but you can reuse that implementation later, and in many cases it uses less memory (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance.
Using the gradient vs. image example, quality and time are also factors: resizing or scaling a simple image can take a lot of CPU, or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.
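As an illustration of the gradient case, a sketch of drawing one in code with Core Animation instead of shipping it as an image; the colors are arbitrary:

    import UIKit

    // A gradient drawn in code rather than shipped as a PNG: it renders
    // crisply at any size and adds nothing to the app's asset weight.
    func addGradient(to view: UIView) {
        let gradient = CAGradientLayer()
        gradient.frame = view.bounds
        gradient.colors = [UIColor.systemBlue.cgColor, UIColor.systemTeal.cgColor]
        gradient.startPoint = CGPoint(x: 0.5, y: 0.0) // top
        gradient.endPoint = CGPoint(x: 0.5, y: 1.0)   // bottom
        view.layer.insertSublayer(gradient, at: 0)
    }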

Related

Which is a better option for displaying irregular shapes in Swift?

Let me start by showing the UIImageView I have set up in my ViewController (screenshot omitted). Each one of the lines contains a UIButton for a body part. If I select a particular button, it segues appropriately.
What I'd like to do is, when the user taps (but doesn't release) a button, show the appropriate body part highlighted (screenshot omitted).
I can achieve this using 2 options:
The UIBezierPath class to draw the shapes, but it would take a lot of trial and error and many overlapping shapes per body part to get the fit right, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop the highlighted body parts out of the original image and position each over the UIImageView depending on which UIButton is selected. There would be one image per body part, but it is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of cpu processing and memory allocation?
In other words, I'm just concerned about my app lagging and taking up storage space. I'm not concerned about how much time it takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
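A minimal sketch of that tinted-overlay idea, assuming a hypothetical cutout asset and using UIKit's template rendering mode:

    import UIKit

    // bodyImageView is the question's full-figure image view; the asset is a
    // hypothetical cutout whose alpha channel outlines one body part.
    func showHighlight(named assetName: String, over bodyImageView: UIImageView) -> UIImageView {
        let cutout = UIImage(named: assetName)?.withRenderingMode(.alwaysTemplate)
        let highlight = UIImageView(image: cutout)
        highlight.tintColor = UIColor.red.withAlphaComponent(0.4) // semitransparent tint
        highlight.frame = bodyImageView.bounds
        bodyImageView.addSubview(highlight)
        return highlight // remove this view on touch-up
    }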
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time a lot easier to implement.
For a quick start on tests, see here.
Performance tests are covered here.
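A rough sketch of such a comparison with XCTest's measure API; the two drawing routines are hypothetical stand-ins for your own implementations:

    import XCTest

    // Run both variants for one body part and compare the reported averages.
    final class HighlightPerformanceTests: XCTestCase {
        func testBezierPathPerformance() {
            measure {
                // drawBezierHighlight() -- your UIBezierPath-based version
            }
        }

        func testCutoutImagePerformance() {
            measure {
                // loadCutoutOverlay() -- your cropped-image-based version
            }
        }
    }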

Advantage of Custom UI buttons

I have been interviewing for an iOS job and have been getting a lot of questions about custom UI, and more specifically custom UI buttons. I started reading up on it and found that Core Graphics is used to make these custom buttons.
I was wondering what the advantage of custom buttons made with Core Graphics is over using a UIImage (created in Adobe tools or Sketch) and putting a UIButton over it. Is there any specific advantage other than more customization over the process?
As an aside, I was wondering if there are any good Core Graphics (Quartz 2D) tutorials out there for Objective-C; I have found a good amount for Swift, but not so many for Objective-C.
It's interesting that it's an interview question!
You can design buttons in PaintCode, which converts your drawings into code. Supposedly, with Core Graphics the performance is better, and it should look good regardless of the size of the device. PaintCode says about the benefits: "Resolution independence & other benefits. No more @2x resources. Future proof. Creating dynamic, parametric drawings is easy."
For the details, I would check out the FAQ, Question 2. Here are the first few paragraphs:
Using PNG images to draw user interfaces is tedious. PNG images are not resolution-independent, so you have to provide many variants for all kinds of displays. Some effects are also difficult (if not impossible) to achieve using raster images. For example, you might want to draw something with complex resizing behavior, or you might want to alter the color of the drawing based on some outer conditions.
A better approach than using images is to use Objective-C or Swift code to draw the user interface. The code is resolution-independent and very flexible, so it works really well on all kinds of displays.
On a side note, though, I find that it is a lot easier to use images rather than PaintCode. The positioning of elements, taking into consideration the insets in the image itself vs. the insets in code, causes a bunch of problems in practice. PaintCode also uses springs and struts to help with sizing images on different devices, but you have to be careful when combining that with layout constraints in a storyboard. There are things you can do in PaintCode to make your life a bit easier, but it takes some practice to really get the hang of it. Making the @2x and @3x versions of images is really not that bad, so if you can avoid PaintCode, I would, just to avoid the headache.
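For illustration, a minimal sketch (in Swift) of the kind of Core Graphics button the question asks about; the colors and metrics are arbitrary:

    import UIKit

    // The rounded gradient background is drawn in draw(_:), so it stays sharp
    // at any size with no @2x/@3x assets.
    final class GradientButton: UIButton {
        override func draw(_ rect: CGRect) {
            guard let ctx = UIGraphicsGetCurrentContext() else { return }
            ctx.addPath(UIBezierPath(roundedRect: rect, cornerRadius: 8).cgPath)
            ctx.clip()

            let colors = [UIColor.systemBlue.cgColor, UIColor.systemIndigo.cgColor] as CFArray
            guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                            colors: colors, locations: [0, 1]) else { return }
            ctx.drawLinearGradient(gradient,
                                   start: CGPoint(x: rect.midX, y: rect.minY),
                                   end: CGPoint(x: rect.midX, y: rect.maxY),
                                   options: [])
        }
    }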

Valid technique for scalable graphics on iOS?

A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes including as UITableViewCell imageViews, as custom MKMapAnnotations and a few other spots. I actually have a couple sets which include a more static status icon as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that the status is derived from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it) so it can decide what it needs to draw, then does the necessary drawing in drawRect. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a Category which has class method constructors that take the model inputs as well as the size you want to use and constructs the custom views.
Since I also wanted the option of rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions.
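Roughly, the pattern described might look like the sketch below; StatusKind and the oval drawing are stand-ins for the poster's model and icon designs, and it wraps the iOS 7 snapshotting call in the more modern UIGraphicsImageRenderer:

    import UIKit

    enum StatusKind { case ok, warning } // hypothetical stand-in for the model

    final class StatusIconView: UIView {
        var status: StatusKind = .ok { didSet { setNeedsDisplay() } }

        override func draw(_ rect: CGRect) {
            let color: UIColor = (status == .ok) ? .systemGreen : .systemOrange
            color.setFill()
            UIBezierPath(ovalIn: bounds.insetBy(dx: 2, dy: 2)).fill() // bounds-relative
        }

        // Rasterize only when a UIImage is actually needed (e.g. a cell's imageView).
        func snapshotImage() -> UIImage {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            return renderer.image { _ in
                drawHierarchy(in: bounds, afterScreenUpdates: true)
            }
        }
    }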
So what does this give me? Well here's the pros/cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics such as text. Because I have the exact shape data on everything I'm drawing I don't need to guesstimate on the bounds for a text box since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset but I still get all the advantages of the dynamic view since I'm not rasterizing it till I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the draw code in the first place isn't ideal. For simple stuff I can do it straight in code but for more complex things I'll need to create the vector asset in Illustrator or Sketch then bring it into PaintCode and clean up the generated draw code into something more streamlined. This is not the most ideal process.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found much material on techniques for this, and I'm wondering if I'm missing a better way of handling it or if there are any hidden gotchas here. Performance doesn't seem to be an issue in my testing, but I haven't tested on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files, and can export to a UIImage, if desired.

Composed animations, sprites in iOS

Let's say I want to display a customizable (2D, cartoon-like) character, where some properties, e.g. eye color, hair style, clothing, etc., can be chosen from a predefined set of options. Now I want to animate the character. What's the best way to deal with the customization?
1) For example, I could make a sprite sheet for each combination of properties. That's not very memory efficient and not very flexible, but probably gives the best performance.
2) I could compose the character from various layers, where each property only affects one layer. Thus, I could make a sprite-sheet for the body, a collection of sprite-sheets for the eyes (one for each eye color) etc.
2a) In that case, I could merge the selected sprite-sheets in order to generate a single sprite-sheet containing the animation of the customized character (see the sketch after the question).
2b) Alternatively, I could keep the sprite-sheets separate and try to animate them simultaneously as layers. I fear that this might become a problem performance-wise.
3) I could try to modify the layers programmatically, e.g. use a sprite-sheet for the eyes as a mask and map some texture on it before merging it down to a single sprite-sheet. I would think this is a very flexible approach when it comes to simple properties like eye colors, but might become difficult for things like hair-style. I am aware that this depends much on the character and probably a general answer is difficult.
I assume that my problem is not new, so there is probably a standard approach to it.
Concerning the platform, I'm particularly interested in iOS and try to avoid OpenGL (well, I'm open-minded). Maybe there is a nice framework that can help me here?
Thanks!
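For option 2a, a minimal sketch of merging per-property layers into one composited frame; the asset names are hypothetical:

    import UIKit

    // Merge same-sized layer images into one frame up front, so playback
    // shows a single image per frame instead of several stacked layers.
    func composeFrame(layerNames: [String], size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            for name in layerNames { // e.g. ["body_01", "eyes_blue_01", "hair_short_01"]
                UIImage(named: name)?.draw(in: CGRect(origin: .zero, size: size))
            }
        }
    }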
Depending on what you're working on, you might want to create part or all of the animations in another tool, such as Flash; it is much easier to work in a visual environment.
There are tools that take SWF files and create sprite sheets that you would then animate in cocos2d.
That is a common game-creation workflow.
You probably want to take a look at how to create sprites with cocos2d.
cocos2d comes with abstractions that help you animate single parts and compose them (like CCBatchNode or CCNode). There are also companion tools that help you pack sprites into sprite sheets (e.g. TexturePacker) and build levels (e.g. LevelHelper).
cocos2d is an open-source framework and it is widely used. There is also cocos3d, but I have never used it :).

Why is -drawRect faster than using CALayers/UIViews for UITableViews?

I can already hear the wrenching guts of a thousand iOS developers.
No, I am not a noob.
Why is -drawRect faster for UITableView performance than having multiple views?
I understand that compositing operations take place on the GPU. But compositing is a one-time operation; once the layers are committed to memory, it is no different from a cached buffer that, from the point of view of the GPU, gets translated in and out of view. Compare this to using Core Graphics in drawRect, which employs an unknown number of operations on the CPU to produce pixels that end up getting cached in CALayers anyway. What's the difference if it all ends up cached and flattened anyway?
Also, if you're handling cell reuse properly, you shouldn't need to regenerate views on each call to -cellForRowAtIndexPath. In fact, there may be a performance benefit to having the state data (font, font size, text color, attributes, etc) cached by UIView/CALayer objects than having them constantly recreated during -drawRect.
Why the craze for drawRect? Can someone give me pointers?
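For reference, the pattern under debate, one view that paints everything in draw(_:) instead of composing subviews, might look like this sketch:

    import UIKit

    // One flattened bitmap per cell: text is painted directly rather than
    // hosted in label subviews/layers that must be composited.
    final class CellContentView: UIView {
        var title: String = "" { didSet { setNeedsDisplay() } }

        override func draw(_ rect: CGRect) {
            UIColor.white.setFill()
            UIRectFill(rect) // opaque fill avoids blending during scroll
            (title as NSString).draw(at: CGPoint(x: 15, y: 12),
                                     withAttributes: [.font: UIFont.boldSystemFont(ofSize: 16)])
        }
    }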
When you talk about optimization, you need to provide specific situations, conditions, and limitations, because optimization is all about micro-management. Otherwise, it's meaningless.
What's the basis of your "faster"? How did you measure it? What are the numbers?
For example, a no-op or very simple -drawRect: can be faster, but that doesn't mean it always is.
I don't know the internal design of CA either, so here are my guesses.
In case of static content
It's weird that your drawing code is being called constantly, because CALayer caches the drawing result and won't draw again until you send a setNeedsDisplay message. If you don't update a cell's content, it's just the same as a single bitmap layer, and it should be faster than multiple composited layers because it doesn't pay the composition cost. If you're using only a small number of cells, few enough that they can all exist in the reuse pool at the same time, they never need to be updated at all. As RAM gets larger, this is more likely to happen on recent models.
In case of dynamic content
If the content is being updated constantly, it means you're actually updating it yourself, so your layer-composited version would also be updated constantly, which means it gets composited again on every frame. That can be slow in proportion to how complex and large the content is; if it's complex and large with a lot of overlapping areas, it could be slower. I'd guess CA has to draw everything strictly when it can't determine which areas are safe to ignore, whereas in -drawRect: you can choose what to draw and what to skip.
In case of actual drawing is done in CPU
Even if you configure your view as a pure composition of many layers, each sublayer still has to be drawn eventually, and the drawing of their content is not guaranteed to happen on the GPU. For example, I believe CATextLayer draws itself on the CPU (because drawing text with polygons on current mobile GPUs doesn't make sense from a performance perspective), and some filtering effects do too. In those cases, the overall cost would be similar, plus it requires the compositing cost.
In case of well balanced load of CPU and GPU
If your GPU is very busy under heavy load because there are too many layers or direct OpenGL drawings, your CPU may be idle. If your CG drawing can be done within that idle CPU time, it could be faster than piling more load onto the GPU.
Is none of these your case?
If your situation is none of those listed above, I really want to see and check the CG code that draws faster than CA composition. I wish you would attach some source code.
Well, your program could easily end up moving and converting a lot of pixel data if it goes back and forth between GPU- and CPU-based renderers.
As well, many layers can consume a lot of memory.
I'm only seeing half the conversation here, so I might have misunderstood. Based on my recent experiences optimizing CALayer rendering, and investigating the ways Apple does(n't) optimize stuff you'd expect to be optimized...
What's the difference if it all ends up cached and flattened anyway?
Apple ends up creating a separate GPU element per layer. If you have lots of layers, you have lots of GPU elements. If you have one drawRect, you only have one element. Apple often does NOT flatten those, even where they could (and possibly "should").
In many cases, "lots of elements" is no issue. But if they get to be large ... or there are enough of them ... or they're bad sizes for OpenGL ... AND (see below) they get stored in CPU memory instead of on the GPU, then things start to get nasty. NB: in my experience:
"enough": 40+ in memory
"large": 100x100 points (200x200 retina pixels)
Apple's code for GPU elements / buffers is well optimized in MOST places, but in a few places it's very POORLY optimized. The performance drop is like going off a cliff.
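One mitigation worth knowing, since Apple often won't flatten for you: ask Core Animation to flatten a static subtree yourself via shouldRasterize. A judgment call worth profiling, since the cached bitmap also costs memory. A sketch:

    import UIKit

    // Cache the layer subtree as one bitmap. It is re-rasterized whenever the
    // content changes, so this only pays off for subtrees that stay static.
    func flatten(_ view: UIView) {
        view.layer.shouldRasterize = true
        view.layer.rasterizationScale = UIScreen.main.scale // avoid blurry retina output
    }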
Also, if you're handling cell reuse properly, you shouldn't need to regenerate views on each call to -cellForRowAtIndexPath
You say "properly", except ... IIRC Apple's docs tell people not to do it that way; they go for a simpler approach (IMHO: weak docs) and instead re-populate all the subviews on every call. At which point ... how much are you saving?
FINALLY:
...doesn't all this change with iOS 6, where the cost of creating a UIView is greatly reduced? (I haven't profiled it yet, just been hearing about it from other devs)
