Lightest UI Element - iOS

I'm trying to figure out which UI element would have the least impact on the performance of an application when all I'm setting is either the background color or an image. I won't need any user interaction from the element; I only need its content to show on the screen.
From what I've learned so far, CALayers are very light and I'm comfortable using them, but are they the lightest UI element I can use for simple display use cases?
Thanks!

Essentially, I found that CALayer is the lightest option for getting the job done in terms of performance. UIView isn't much heavier at all, but it carries a lot more functionality.
As for UIImageView, it is much heavier, but it has a lot of built-in methods that make image manipulation easy.
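For reference, here is a minimal Swift sketch (the asset name is a placeholder) of the display-only setup the question describes: a bare CALayer that carries just a background color or an image, hosted inside whatever view is already on screen.

```swift
import UIKit

final class StatusHostView: UIView {
    // A bare CALayer is enough for display-only content: no touch handling,
    // no responder chain, just something composited onto the screen.
    private let badgeLayer = CALayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        badgeLayer.backgroundColor = UIColor.systemGreen.cgColor
        // For an image instead of (or on top of) a color, assign its CGImage to `contents`.
        badgeLayer.contents = UIImage(named: "statusIcon")?.cgImage   // placeholder asset name
        badgeLayer.contentsGravity = .resizeAspect
        layer.addSublayer(badgeLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Sublayers are not auto-resized, so position the layer whenever the view lays out.
        badgeLayer.frame = bounds.insetBy(dx: 8, dy: 8)
    }
}
```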
I hope this can help someone!

Related

Designing a Gantt view with Konva

I am trying to build a Gantt control with Konva (does it make sense to use Konva for this?). I have tried to sketch the control below:
I was thinking of breaking down the Konva stage as follows:
One stage with 4 layers: activity names, timeline, activity views, and scrollbar view.
The scrollbar layer would contain a "custom control" mimicking a standard scrollbar control.
At this stage I have a couple of questions:
What would be the best approach for synchronizing the different layers from an event-handling perspective? For example, if the user clicks on the scrollbar's down-arrow shape, I would need to "scroll" all layers one unit down.
How does the Konva coordinate system work? Is the drawing of shapes done relative to the containing layer?
What's the difference between a layer and a group? Does it make more sense to use a group instead of layers?
I realize my questions are very broad in nature, but at this point I need to get the design right.
I am responding here rather than as a comment because I have more to say than a comment allows.
I have made Gantt charts with HTML elements, with another canvas lib, and with Konva. I used divs with jQuery first, and it was viable, but I felt it got quite complicated and it ran out of steam in the area of zooming the view. You can't hide from the complexity, of course. Switching to HTML5 canvas, I realised that a lib like Konva would accelerate production. And zooming in canvas is simple.
As per #lavrton's comment, text is primitive on HTML5 canvas when compared to GDI or other, more mature tech. My answer for the labels on tasks was to draw the text off-screen and then convert it to images, which works very well. For popup editing, I revert to HTML divs etc. I did not use animations in the Gantt, but I have elsewhere, and canvas should be fine - there are plenty of bouncy-ball / particle tests around to confirm that.
As a coding design suggestion, the data model and functionality of the Gantt are consistent whatever tech you use to draw it. I recommend you consider a layered approach where your interaction with drawing functions is wrapped as class methods in a drawing class, so that you can switch out the drawing tech itself should you feel the need. You could insulate yourself from the choice of tech and/or library that way.
Turning to aspects of your question:
Layers are a useful concept. Physically, each layer is an HTML5 canvas element, so multiple layers in one diagram are really multiple canvases over the same stage. The benefit is that you can redraw specific layers instead of the entire canvas, which brings performance savings. But mostly you can ignore the physical side and just get on and use the concept, which works well.
Groups: a group is a collection of shapes on a layer. If you have to draw things made of many shapes, grouping them is very useful because you can move the group as a whole, hide it, delete it, etc. You might, for example, consider making each taskbar, composed of at least a rectangle and text, a group. One consideration for groups is that the location and size of the group are those of the bounding rectangle that encloses the shapes within it. This can cause some confusion until you work out an approach. You will find yourself using layers and groups, but mostly groups, for drawing controls.
Zooming / scaling: this is easy with a canvas. Less easy is the math for how to change the offset to keep the same view as you zoom, but again it is achievable.
Synchronised scrolling of the layers is not going to take much time to develop - just set the y-position on each layer.
Drawing the grid of rows for activity and columns for days/weeks/months/etc should not be underestimated as a task, but as you develop it you will learn the fundamentals of working with Konva.
Final point - the docs and examples for Konva could be a bit better, but the community support here and at https://konvajs.github.io/docs/ is good, and the Konva source code is also at that site so you can delve right in to understand what is happening, though you do not need to do that at all if it is not your thing.

Advantage of Custom UI buttons

I have been interviewing for an iOS job and have been getting a lot of questions about custom UI, and more specifically custom UI buttons. I started trying to read up about it and found that Core Graphics is used to make these custom buttons.
I was wondering what the advantage of using custom buttons made with Core Graphics is over using a UIImage (images created in Adobe tools or Sketch) and then putting a UIButton over that. Is there any specific advantage other than more customization over the process?
As an aside, I was wondering if there are any good Core Graphics (Quartz 2D) tutorials out there for Obj-C; I have found a good amount for Swift, but not so many for Obj-C.
It's interesting that it's an interview question!
You can design buttons in PaintCode, which converts your drawings into code. Supposedly, with Core Graphics the performance is better, and the result should look good regardless of the size of the device. PaintCode says about the benefits: "Resolution independence & other benefits. No more @2x resources. Future proof. Creating dynamic, parametric drawings is easy."
For the details, I would check out the FAQ, Question 2. Here are the first few paragraphs:
Using PNG images to draw user interfaces is tedious. PNG images are not resolution-independent, so you have to provide many variants for all kinds of displays. Some effects are also difficult (if not impossible) to achieve using raster images. For example, you might want to draw something with complex resizing behavior, or you might want to alter the color of the drawing based on some outer conditions.

A better approach than using images is to use Objective-C or Swift code to draw the user interface. The code is resolution-independent and very flexible, so it works really well on all kinds of displays.
On a side note, though, I find that it is a lot easier to use images rather than PaintCode. The positioning of elements, taking into consideration the insets in the image itself vs the insets in code, causes a bunch of problems in practice. PaintCode also uses springs and struts to help with the sizing of images on different devices, but you have to be careful when combining that with layout constraints in a storyboard. There are things you can do in PaintCode to make your life a bit easier, but it takes some practice to really get the hang of it. Making the @2x and @3x versions of images is really not that bad - so if you can avoid PaintCode, I would, just to avoid the headache.
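To make the trade-off concrete, here is a rough Swift sketch of the kind of resolution-independent button drawing being discussed. It is hand-written rather than PaintCode output, the colors and corner radius are arbitrary, and the same approach translates directly to Objective-C.

```swift
import UIKit

// A custom button drawn entirely with Core Graphics. Everything is derived from
// `bounds`, so it stays crisp at any size and scale without @2x/@3x assets.
final class GradientButton: UIButton {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let path = UIBezierPath(roundedRect: bounds.insetBy(dx: 1, dy: 1), cornerRadius: 8)

        // Fill the rounded shape with a simple vertical gradient, clipped to the path.
        ctx.saveGState()
        ctx.addPath(path.cgPath)
        ctx.clip()
        let colors = [UIColor.systemBlue.cgColor, UIColor.systemIndigo.cgColor] as CFArray
        let locations: [CGFloat] = [0, 1]
        if let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                     colors: colors, locations: locations) {
            ctx.drawLinearGradient(gradient,
                                   start: CGPoint(x: bounds.midX, y: bounds.minY),
                                   end: CGPoint(x: bounds.midX, y: bounds.maxY),
                                   options: [])
        }
        ctx.restoreGState()

        // Hairline border on top of the fill.
        path.lineWidth = 1
        UIColor.white.withAlphaComponent(0.4).setStroke()
        path.stroke()
    }
}
```

Setting the button's contentMode to .redraw makes the drawing regenerate whenever its bounds change, which is part of why this kind of code scales cleanly across device sizes.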

Valid technique for scalable graphics on iOS?

A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes, including as UITableViewCell imageViews, as custom MKMapAnnotations, and in a few other spots. I actually have a couple of sets, including more static status icons as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that the status is derived from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it) so it can decide what it needs to draw, then does the necessary drawing in drawRect:. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a category with class-method constructors that take the model inputs as well as the size you want to use and construct the custom views.
Since I also wanted the option to have rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions (see the sketch after this list).
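Something along these lines (a Swift sketch rather than the asker's actual Objective-C category; the type and method names are made up) illustrates the bounds-relative drawing plus on-demand rasterization described above:

```swift
import UIKit

// Hypothetical status view: everything in draw(_:) is derived from `bounds`,
// so the same view renders cleanly at any requested size.
final class StatusIconView: UIView {
    var statusColor: UIColor = .systemGreen   // stand-in for deriving state from the model object

    override func draw(_ rect: CGRect) {
        let inset = bounds.width * 0.1
        let circle = UIBezierPath(ovalIn: bounds.insetBy(dx: inset, dy: inset))
        statusColor.setFill()
        circle.fill()
    }
}

extension StatusIconView {
    // Stand-in for the constructor that returns a rasterized copy. When the view is
    // already on screen, drawHierarchy(in:afterScreenUpdates:) - one of the iOS 7
    // snapshotting APIs the text mentions - is the faster route; rendering the layer
    // is the simplest thing that works for an off-screen view in a sketch like this.
    static func snapshotImage(color: UIColor, size: CGSize) -> UIImage {
        let view = StatusIconView(frame: CGRect(origin: .zero, size: size))
        view.statusColor = color
        view.backgroundColor = .clear
        view.isOpaque = false
        return UIGraphicsImageRenderer(size: size).image { context in
            view.layer.render(in: context.cgContext)
        }
    }
}
```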
So what does this give me? Well here's the pros/cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics such as text. Because I have the exact shape data on everything I'm drawing I don't need to guesstimate on the bounds for a text box since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset but I still get all the advantages of the dynamic view since I'm not rasterizing it till I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the draw code in the first place isn't ideal. For simple stuff I can do it straight in code but for more complex things I'll need to create the vector asset in Illustrator or Sketch then bring it into PaintCode and clean up the generated draw code into something more streamlined. This is not the most ideal process.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found an enormous amount of material on techniques for this sort of thing, and I'm wondering if I'm missing a better way of handling it or if there are any hidden gotchas here. Performance doesn't seem to be an issue in my testing with this approach, but I haven't tested it on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files, and can export to a UIImage, if desired.

Need to make a Gantt chart like control in iOS, to draw or to subview?

I'm about ready to begin creating a Gantt-chart-like control in iOS for my app. I need to show a timeline of events: basically a bunch of rectangles, some lines/arcs for decoration, and possibly a touch point or two to edit attributes. It will basically be a "full screen" control on my phone.
I see two basic paths to implement this custom UIView subclass:
Simply implement drawRect: and go to town using Core Graphics calls, most likely split over a bunch of private methods that do all the drawing work, possibly caching some information as needed to help with any sub-region hit detection.
Rather than "draw" the graphics, add a bunch of UIViews as children using addSubview:, manipulating their layer properties to get them to show the different graphic pieces and their bounds/frame to get them positioned appropriately, and then just let "drawing" take care of itself.
Is one path better than the other? I may end up trying both in the long run just to see, but I figured I'd seek the wisdom of those who've gone before first.
My guess is that the quicker solution would be to go the drawRect: route, and the subview approach would require more code, but maybe be more robust (easier hit detection, animation support, automatic clipping management, etc). I do want to be able to implement pinch to zoom and the like, long term.
UPDATE
I went with the UICollectionView approach, which got me selection and scrolling for free (after some surprises). I've been pretty pleased with the results so far.
Going with Core Graphics is going to force you to write many more lines of code than building with UIViews, although it is more performant and better on memory. However, you're likely going to need a more robust solution for managing all of that content. A UICollectionView seems like an appropriate solution for mapping your data onto a view with a custom UICollectionViewCell subclass. This is going to be much quicker to develop than rolling your own, and it comes with great flexibility through UICollectionViewLayout subclasses. Pinch to zoom isn't supported out of the box, but there are ways to do it. This is also better for memory than using a bunch of UIViews because of cell reuse, but reloading can become slow with a few hundred items that all have different sizes to be calculated.
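As a rough illustration of that suggestion, here is a minimal UICollectionViewLayout sketch that positions one cell per task along a timeline; GanttTask, dayWidth, and rowHeight are invented for the example, and a real layout would also handle invalidation, supplementary views for the timeline header, and so on.

```swift
import UIKit

// Hypothetical model for one row of the chart.
struct GanttTask {
    let startDay: Int
    let durationDays: Int
}

// Minimal custom layout: one row per task, with horizontal position and width
// derived from the task's dates.
final class GanttLayout: UICollectionViewLayout {
    var tasks: [GanttTask] = []
    var dayWidth: CGFloat = 30
    var rowHeight: CGFloat = 44

    private var cache: [UICollectionViewLayoutAttributes] = []

    override func prepare() {
        cache = tasks.enumerated().map { index, task in
            let attributes = UICollectionViewLayoutAttributes(forCellWith: IndexPath(item: index, section: 0))
            attributes.frame = CGRect(x: CGFloat(task.startDay) * dayWidth,
                                      y: CGFloat(index) * rowHeight,
                                      width: CGFloat(task.durationDays) * dayWidth,
                                      height: rowHeight - 8)
            return attributes
        }
    }

    override var collectionViewContentSize: CGSize {
        let maxX = cache.map { $0.frame.maxX }.max() ?? 0
        return CGSize(width: maxX, height: CGFloat(tasks.count) * rowHeight)
    }

    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        cache.filter { $0.frame.intersects(rect) }
    }

    override func layoutAttributesForItem(at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
        cache.indices.contains(indexPath.item) ? cache[indexPath.item] : nil
    }
}
```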
When it comes to performance, a well-written drawRect: is preferred, especially when you would potentially have to render many, many rects. With views, an entire layout system goes to work, and it's much worse with Auto Layout, which can kill your performance. I recently converted our calendar views from view-based to Core Graphics-based rendering for performance reasons.
In all other aspects, working with views is much preferred, of course: Interface Builder, easy gesture-recognizer setup, OO, etc. You could still create logical classes for each element and have each draw itself into the current context (best to pass a context reference and draw on that), but it's still not as straightforward.
On newer devices, view drawing performance is actually quite high. But if you want to support devices like the iPhone 4 and 4S, or the iPad 3, which lack quite a lot in GPU performance, then depending on your chart's potential size you might have to go the Core Graphics way.
Now, you mention pinch to zoom. This is a bitch no matter what. If you write your drawRect: well, you could eventually work your way to tiling and work with that.
If you plan on letting the user move parts of the chart around I would definitely suggest going with the views.
FYI, you will be able to handle pinch to zoom with drawRect just fine.
What would push me toward using UIViews in this case would be the need to support dragging parts of the chart, animating transitions in the chart, and tapping on elements in the chart (though that wouldn't be too hard with drawRect:). Also, if you have elements in your chart that need heavy CPU usage to render, you will get better performance when redrawing sub-portions of your chart with UIViews, since the rendering of each element is cached to a layer and you only need to redraw the pieces you care about, not the entire chart.
If your chart will be VERY big AND you want to use drawRect:, you will probably want to look at using CATiledLayer as your backing layer so that you don't have the entire layer in memory. This can add challenges if you only want to render the requested tiles and not the entire area.
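A minimal sketch of that CATiledLayer-backed approach might look like the following; the bar-geometry helper is hypothetical, and note that tile drawing happens on background threads, so the drawing code must not touch shared mutable state.

```swift
import UIKit

// A chart view backed by CATiledLayer: only the tiles that scroll into view are
// rendered, so a very wide timeline never has to live in memory all at once.
final class TiledChartView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiled = layer as? CATiledLayer {
            tiled.tileSize = CGSize(width: 256, height: 256)
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // With a CATiledLayer backing, draw(_:) is called once per tile and `rect` is
    // that tile's region, so only the bars intersecting it need to be drawn.
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.systemBlue.cgColor)
        for barFrame in barFrames(intersecting: rect) {
            ctx.fill(barFrame)
        }
    }

    private func barFrames(intersecting rect: CGRect) -> [CGRect] {
        // Placeholder: compute bar rectangles from your model, filtered to `rect`.
        []
    }
}
```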

iOS - Interface design, images or custom drawing?

I've been looking at a lot of iOS user interfaces that have been customized. I wonder: is it better to customize the UI using images or using frameworks like Core Graphics/Quartz, or is it decided on a case-by-case basis, as in using drawing code for some elements and images for others?
It is very hard to guess your particular situation. I can state that iOS gives us a lot of leverage to make any custom interface. I would use:
images for complicated graphic elements, buttons, icons, arrows, etc.
images + stretching to get complicated backgrounds/elements (see the sketch after this list)
custom drawing for everything that consists of lines, ellipses, squares, linear and/or circular gradients, simple image preprocessing, etc.
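As a small illustration of the "images + stretching" item, this Swift sketch uses a resizable image with cap insets so one small asset (the name here is a placeholder) can cover a background of any size:

```swift
import UIKit

// One small PNG plus cap insets covers backgrounds of any size without
// distorting the corners. "panelBackground" is a placeholder asset name.
func makeStretchedBackground(frame: CGRect) -> UIImageView {
    let background = UIImage(named: "panelBackground")?
        .resizableImage(withCapInsets: UIEdgeInsets(top: 12, left: 12, bottom: 12, right: 12),
                        resizingMode: .stretch)
    let panel = UIImageView(image: background)
    panel.frame = frame   // the 12-point corners stay crisp at any size
    return panel
}
```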
The key idea is to find a balance between memory usage and processing time. Note: from my experience, interfaces based on images created by a professional designer look awesome.
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing, whether via Core Graphics or Quartz, uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use Quartz.
It depends on you, as well. Would you rather write Quartz code for an hour and debug it, or would you rather spend an hour in Photoshop? How fast are you at PS? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach. Non-trivial icons are a good example of something to keep as an image, while large gradients are a good use for drawing. Drawing can take some time and experience to get right compared to graphic assets, but you can reuse that implementation later and it uses less memory in many cases (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Using the gradient vs. image example, quality and time are also factors: resizing/scaling a simple image can take a lot of CPU or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.
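To make the gradient-versus-image example concrete, here is a hedged Swift sketch of the rendered-gradient side using CAGradientLayer; the colors and geometry are arbitrary.

```swift
import UIKit

// A rendered gradient: cheap to create, scales to any bounds without resampling
// artifacts, and ships no image asset. The raster alternative would be a PNG
// stretched to the same frame, which costs memory proportional to its pixel size
// and can band or blur when scaled.
func makeGradientBackground(frame: CGRect) -> UIView {
    let view = UIView(frame: frame)
    let gradient = CAGradientLayer()
    gradient.frame = view.bounds
    gradient.colors = [UIColor.systemTeal.cgColor, UIColor.systemBlue.cgColor]
    gradient.startPoint = CGPoint(x: 0.5, y: 0)   // top
    gradient.endPoint = CGPoint(x: 0.5, y: 1)     // bottom
    view.layer.addSublayer(gradient)
    return view
}
```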
