I'm using Path's FastImageCache library (https://github.com/path/FastImageCache) in order to have pre-resized images cached ready for use in UIImageViews.
To use FIC, you define FICImageFormats, which include a bunch of data such as the image size. For best performance, this image size should be identical to the size of the UIImageView that the image will be displayed in.
This gives rise to a chicken-and-egg sort of problem: should the code that sets up FIC (in the AppDelegate or wherever you do the rest of your basic init work for your app, presumably?) know the sizes of the UIImageViews in the rest of your app? This has the obvious downside of very tight coupling of your app's startup code with UI implementation details.
An alternative is that you could have your UIs implement a protocol that defines a method such as
+(NSArray *)imageFormats;
which would return an array of FICImageFormat objects representing all image formats that would be required by that bit of UI. Then the startup code would only have to know which classes implement that protocol in order to get a full list of image formats required for the app.
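A minimal sketch of what that could look like (the protocol and class names are hypothetical, and the FICImageFormat property names are the ones from the FIC headers I've used, so double-check them against your version):

// XXImageFormatProvider.h (hypothetical protocol)
#import <Foundation/Foundation.h>

@protocol XXImageFormatProvider <NSObject>
// Each UI class that needs pre-sized images reports the formats it wants.
+ (NSArray *)imageFormats;
@end

// XXContactListViewController.m (hypothetical UI class adopting the protocol)
#import <UIKit/UIKit.h>
#import "FICImageFormat.h"
#import "XXImageFormatProvider.h"

@interface XXContactListViewController : UIViewController <XXImageFormatProvider>
@end

@implementation XXContactListViewController

+ (NSArray *)imageFormats {
    // 44x44 pt matches the imageView in this controller's table view cells.
    FICImageFormat *smallThumbnail = [[FICImageFormat alloc] init];
    smallThumbnail.name = @"com.example.thumbnail.small";   // hypothetical format name
    smallThumbnail.family = @"com.example.thumbnails";      // hypothetical format family
    smallThumbnail.imageSize = CGSizeMake(44, 44);
    smallThumbnail.style = FICImageFormatStyle32BitBGRA;
    smallThumbnail.maximumCount = 250;
    smallThumbnail.devices = FICImageFormatDevicePhone;
    return @[smallThumbnail];
}

@end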
This second approach has the downside of potential duplicate FICImageFormats: it would be suboptimal to have two (or more!) image formats in the same image format family with identical dimensions, because then you'd be caching the exact same data more than once.
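One way to mitigate that would be to have the startup code de-duplicate on family + dimensions before handing the list to FIC. A sketch, reusing the hypothetical XXImageFormatProvider protocol from above (the provider classes are also hypothetical):

// In application:didFinishLaunchingWithOptions: (sketch)
NSArray *providerClasses = @[[XXContactListViewController class],
                             [XXProfileViewController class]];

NSMutableArray *formats = [NSMutableArray array];
NSMutableSet *seenKeys = [NSMutableSet set];

for (Class<XXImageFormatProvider> providerClass in providerClasses) {
    for (FICImageFormat *format in [providerClass imageFormats]) {
        // Key on family + size so the exact same cache isn't created twice.
        NSString *key = [NSString stringWithFormat:@"%@|%.0fx%.0f",
                         format.family, format.imageSize.width, format.imageSize.height];
        if ([seenKeys containsObject:key]) {
            continue;
        }
        [seenKeys addObject:key];
        [formats addObject:format];
    }
}

[[FICImageCache sharedImageCache] setFormats:formats];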
Any other approaches you can think of? Best practices? All thoughts are welcome!
I think what could help here is to start at the design level: agree with the designers on a fixed set of image formats for each image family. Then you have a central list of all your image formats and families in your style guide anyway.
It then matters much less whether the setup code lives in the AppDelegate or in the view layer, because neither the view layer nor the startup code really determines which formats exist; your style guide does. The formats are no longer an implementation detail but part of an external specification.
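In code, that central list could be a single catalogue that mirrors the style guide, with both the startup code and the views referring to formats by name. A rough sketch (all names hypothetical):

// XXImageFormatCatalog.h (hypothetical single source of truth mirroring the style guide)
#import <Foundation/Foundation.h>

extern NSString *const XXImageFormatFamilyThumbnails;
extern NSString *const XXImageFormatNameThumbnailSmall;   // 44x44 pt, list cells
extern NSString *const XXImageFormatNameThumbnailLarge;   // 120x120 pt, profile header

@interface XXImageFormatCatalog : NSObject
// Builds the FICImageFormat list once, from the constants above, for FICImageCache.
+ (NSArray *)allFormats;
@end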
I have been interviewing for an iOS job and have been getting a lot of questions about custom UI, and more specifically custom UI buttons. I started trying to read up about it and found that Core Graphics is used to make these custom buttons.
I was wondering what the advantage is of using custom buttons made with Core Graphics over using a UIImage, created in Adobe tools or Sketch, and then putting a UIButton over it. Is there any specific advantage other than more customization over the process?
As an aside, I was wondering if there are any good Core Graphics (Quartz 2D) tutorials out there for Objective-C. I have found a good amount in Swift, but not so many in Objective-C.
It's interesting that it's an interview question!
You can design buttons in PaintCode, which converts your drawings into code. Supposedly the performance is better with Core Graphics, and the result should look good regardless of the size of the device. PaintCode's own description of the benefits is: "Resolution independence & other benefits. No more @2x resources. Future proof. Creating dynamic, parametric drawings is easy."
For the details, I would check out the FAQ, Question 2. Here are the first few paragraphs:
Using PNG images to draw user interfaces is tedious. PNG images are not resolution-independent, so you have to provide many variants for all kinds of displays. Some effects are also difficult (if not impossible) to achieve using raster images. For example, you might want to draw something with complex resizing behavior, or you might want to alter the color of the drawing based on some outer conditions.
A better approach than using images is to use Objective-C or Swift code to draw the user interface. The code is resolution-independent and very flexible, so it works really well on all kinds of displays.
On a side note though, I find that it is a lot easier to use images rather than PaintCode. The positioning of elements, taking into consideration the insets in the image itself vs. the insets in code, causes a bunch of problems in practice. PaintCode also uses springs and struts to help with sizing images for different devices, but you have to be careful when combining that with layout constraints in a storyboard. There are things you can do in PaintCode to make your life a bit easier, but it takes some practice to really get the hang of it. Making the @2x and @3x versions of images is really not that bad, so if you can avoid PaintCode, I would, just to avoid the headache.
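That said, since you asked about Objective-C specifically, here is a minimal, hand-rolled drawRect: for a gradient, rounded-rect button background, just to show what the code route looks like (the class name, colors and corner radius are arbitrary):

// XXGradientButton.m (minimal custom button background drawn with Core Graphics)
#import <UIKit/UIKit.h>

@interface XXGradientButton : UIButton
@end

@implementation XXGradientButton

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Clip to a rounded rect so the gradient takes on the button's shape.
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:self.bounds cornerRadius:8.0];
    [path addClip];

    // Simple top-to-bottom linear gradient.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSArray *colors = @[(__bridge id)[UIColor colorWithWhite:0.95 alpha:1.0].CGColor,
                        (__bridge id)[UIColor colorWithWhite:0.75 alpha:1.0].CGColor];
    CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, NULL);
    CGContextDrawLinearGradient(context, gradient,
                                CGPointMake(CGRectGetMidX(self.bounds), 0),
                                CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMaxY(self.bounds)),
                                0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(colorSpace);

    // Hairline border on top of the gradient.
    [[UIColor colorWithWhite:0.6 alpha:1.0] setStroke];
    path.lineWidth = 1.0;
    [path stroke];
}

@end

Because everything is drawn relative to self.bounds, the same button renders crisply at any size and scale, which is the resolution-independence argument from the quote above.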
Before clarifying my question, please just consider these two generative portraits by Sergio Albiac:
Since I really like this kind of portrait, I wanted to find a way of producing them myself.
I don't have much for now; the only things I can deduce from these examples are:
each portrait takes at least two inputs: one target image (the portrait) and one or more source images (pictures of text) whose parts are used to generate a stylized portrait
matching the parts from the source images with the target image is done using template matching
What I'd like to know is how to proceed, what things to learn and look for? What other concepts should I consider before trying to make this work?
Cheers
The Cover Maker plugin for Fiji/ImageJ does a similar thing.
It first builds a database from your source images, indexed according to color/intensity. These source images are then used to reconstruct your target image. (Unlike your example images, though, it only works with a constant tile size throughout the image.)
Have a look at the Python source code for details.
EDIT: If you want to avoid the constant tile size, you could use e.g. a quadtree segmentation or a k-means segmentation to get regions of similar intensity/texture in your target image, and then do the template matching on the segmented regions.
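Whatever segmentation you use, the per-region index is usually something simple like mean intensity or mean color. Just to make that concrete, here is a sketch of the indexing step only (in Objective-C/Core Graphics to match the rest of this thread; the Cover Maker plugin itself uses Python, so treat this purely as an illustration):

// Mean luminance of a tile of a UIImage; the same value, computed for every source
// image and every target region, is what gets compared during matching.
// tileRect is in pixel coordinates of image.CGImage.
#import <UIKit/UIKit.h>

static CGFloat XXMeanLuminance(UIImage *image, CGRect tileRect) {
    CGImageRef tile = CGImageCreateWithImageInRect(image.CGImage, tileRect);
    size_t width = CGImageGetWidth(tile);
    size_t height = CGImageGetHeight(tile);

    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), tile);

    double sum = 0;
    for (size_t i = 0; i < width * height; i++) {
        const uint8_t *p = pixels + i * 4;
        sum += 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2];   // Rec. 601 luma weights
    }

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(tile);
    free(pixels);

    return (CGFloat)(sum / (width * height));
}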
A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes including as UITableViewCell imageViews, as custom MKMapAnnotations and a few other spots. I actually have a couple sets which include a more static status icon as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that it derives the status from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it) so it can decide what it needs to draw, then does the necessary drawing in drawRect:. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a Category with class method constructors that take the model inputs as well as the size you want to use and construct the custom views.
Since I also wanted the option to have rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions.
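Roughly, the rasterizing constructor looks like this (the category, view and model class names are hypothetical; the snapshot call is the iOS 7 drawViewHierarchyInRect:afterScreenUpdates: API):

// UIImage+XXStatusIcon.m (hypothetical category on UIImage)
#import <UIKit/UIKit.h>
#import "XXStatusIconView.h"   // hypothetical UIView subclass that draws the icon in drawRect:

@class XXStatusModel;          // hypothetical model object the icon derives its status from

@interface UIImage (XXStatusIcon)
+ (UIImage *)xx_statusIconForModel:(XXStatusModel *)model size:(CGSize)size;
@end

@implementation UIImage (XXStatusIcon)

+ (UIImage *)xx_statusIconForModel:(XXStatusModel *)model size:(CGSize)size {
    CGRect frame = CGRectMake(0, 0, size.width, size.height);
    XXStatusIconView *iconView = [[XXStatusIconView alloc] initWithFrame:frame model:model];   // hypothetical initializer

    // Rasterize only at the moment an image is actually needed (e.g. a UITableViewCell imageView).
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);   // scale 0.0 = main screen scale
    [iconView drawViewHierarchyInRect:iconView.bounds afterScreenUpdates:YES];
    // If a view that was never added to a window snapshots blank, [iconView.layer
    // renderInContext:UIGraphicsGetCurrentContext()] is a reliable fallback for drawRect: content.
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

@end

Plugging the result into a cell is then just cell.imageView.image = [UIImage xx_statusIconForModel:status size:CGSizeMake(40, 40)];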
So what does this give me? Well here's the pros/cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics such as text. Because I have the exact shape data on everything I'm drawing I don't need to guesstimate on the bounds for a text box since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset but I still get all the advantages of the dynamic view since I'm not rasterizing it till I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the drawing code in the first place isn't ideal. For simple stuff I can do it straight in code, but for more complex things I'll need to create the vector asset in Illustrator or Sketch, bring it into PaintCode, and then clean up the generated drawing code into something more streamlined.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found a lot of material on techniques for this sort of thing, and I'm wondering if I'm missing a better way of handling it, or if there are any hidden gotchas here. Performance doesn't seem to be an issue in my testing of this approach, but I haven't tested it on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files, and can export to a UIImage, if desired.
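Basic usage is roughly as follows (API names as I remember them from SVGKit's README, so double-check against the current version):

#import "SVGKit.h"   // https://github.com/SVGKit/SVGKit

// Load an SVG bundled with the app and rasterize it when a bitmap is needed.
SVGKImage *svgImage = [SVGKImage imageNamed:@"status-icon.svg"];
UIImage *rasterized = svgImage.UIImage;   // render the vector data into a UIImage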
I have a set of images and they overlap one another.
So if I were to put them together, I would have a single image larger than any one of them, and it would show me the "full picture". For instance, it could be used for a set of images that together make a full panorama.
I can do this by hand, but that's quite some work.
Is there any freely available program out there that does this?
Especially one that is free of charge and with relaxed license restrictions and works on Unix?
I have tried googling it to no avail, and I have checked through ImageMagick's functionality without finding anything there.
The main work in a feature like this would be to first figure out how the images need to be put together in order to make a single coherent image.
I've been looking at a lot of iOS user interfaces that have been customized. I wonder, is it better to customize the UI using images or using libraries like CoreGraphics and Quartz, or is it on a per case basis, as in I use libs for some elements and images for others?
It is very hard to guess your particular situation. I can say that iOS gives us a lot of flexibility to build any custom interface. I would use:
images for complicated graphic elements, buttons, icons, arrows, etc.
images + stretching (resizable images with cap insets) to get complicated backgrounds/elements; see the sketch below
custom drawing for anything that consists of lines, ellipses, squares, linear and/or circular gradients, simple image preprocessing, etc.
The key idea is to find a balance between memory usage and processing time. Note: in my experience, interfaces based on images created by a professional designer look awesome.
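For the "images + stretching" item, the usual tool is a resizable image with cap insets, e.g. (the asset name and button are hypothetical; this would go somewhere like viewDidLoad):

// The 8-pt edges of the PNG stay crisp; only the middle region stretches.
UIImage *background = [[UIImage imageNamed:@"button_background"]
                       resizableImageWithCapInsets:UIEdgeInsetsMake(8, 8, 8, 8)];
[self.actionButton setBackgroundImage:background forState:UIControlStateNormal];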
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing with Core Graphics (Quartz) uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use Quartz.
It depends on you as well. Would you rather spend an hour writing and debugging Quartz code, or an hour in Photoshop? How fast are you in Photoshop? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach (this is nontrivial). Icons are a good example of where an image fits, while large gradients are a good use for drawing. Drawing can take some time and experience to get right compared to graphic assets, but you can reuse that implementation later, and it uses less memory in many cases (though images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Using the gradient vs. image example: quality and time are also factors; resizing/scaling a simple image can take a lot of CPU or produce artifacts a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.