Supporting Multiple Views on the Fast Compositing Path - iOS

Is it possible to have multiple, distinct views on the fast compositing path (as described here and here), thereby allowing an app to render each view at the native screen resolution on the iPhone 6+ and iPhone 6s+ and avoiding an extra scaling step?
I'm trying to do so, and though I'm able to get one view at a time onto the fast path (as indicated in Instruments, and by the fact that my test pattern shows no scaling artifacts), I can't convince all n of my views to work that way: at most one seems to be handled correctly at any given time, and when I reshuffle the views in my app, the view that's on the fast path changes.
Ideally, I'd like to get things to the point where all the views skip the extra scaling step, but for now I'd be happy just to hear whether this is possible in the first place.

It's not possible: that compositing path can be used by at most one layer at a time.
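For context, a minimal sketch (Swift) of the kind of configuration that gets a single view onto that path in the first place; the exact conditions (backing-store scale matching UIScreen.nativeScale, an opaque, untransformed layer) are assumptions based on common descriptions of the fast path, not documented API guarantees:

```swift
import UIKit

// Sketch: opt one drawRect-based view into native-resolution rendering.
// On the iPhone 6(s) Plus, nativeScale (~2.6) differs from scale (3.0),
// which is where the extra downsampling step normally comes from.
final class NativeScaleView: UIView {
    override func didMoveToWindow() {
        super.didMoveToWindow()
        guard let screen = window?.screen else { return }
        contentScaleFactor = screen.nativeScale
        layer.contentsScale = screen.nativeScale
        isOpaque = true   // assumed requirement: no blending on the fast path
    }
}
```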

Related

How to restrict findTransformECC to a partial affine transform with scale but without shear?

I built a stereoscopic camera mobile app that performs automatic alignment using findTransformECC, and the app works pretty well with it. I know I should probably be using stereoRectifyUncalibrated preceded by keypoint and descriptor matching etc., but I get bad results from that despite many different attempted approaches, and I'm super frustrated. So instead I'm sticking with findTransformECC, at least for now. At the moment I'm using MotionType.Euclidean (restricted to translations and rotations), but I would like to change that.
So far, the app has worked by having the user take one picture and then move to the side to capture the next (the "cha-cha" method). But now I'm adding the ability to have two phones capture simultaneously. The problem is that the focal length and sensor size (and thus the angular field of view) may differ between the two cameras, so to align the two pictures I need to allow scaling/zooming. However, with findTransformECC I can only step up from Euclidean to Affine; there seems to be nothing in between. That is, it seems I cannot allow scaling without also allowing shearing, and I don't want shearing.
To put it another way: I'd like the type of transform you get from estimateRigidTransform(array, array, false) (a partial affine), but rather than using keypoints as that function does, I want to use findTransformECC, because in my experimentation it just seems to be more reliable.
(https://github.com/KRA2008/crosscam/blob/develop/AutoAlignment/OpenCV.cs is the auto-alignment code if that helps at all)
Take a look at a Fourier-Mellin transform based approach: https://github.com/Smorodov/LogPolarFFTTemplateMatcher
It will give you offset, scale and rotation parameters, nothing more.
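As a separate workaround (not from the answer above): if you stay with findTransformECC, you could run it with MotionType.Affine and then project the result onto the closest partial affine, discarding the shear. A hedged sketch; the helper name and matrix convention are illustrative:

```swift
import Foundation

// Project the 2x2 linear part [[a, b], [c, d]] of an affine warp (mapping
// (x, y) to (a*x + b*y + tx, c*x + d*y + ty), as in OpenCV's 2x3 matrices)
// onto the closest rotation-plus-uniform-scale matrix [[p, -q], [q, p]].
// The least-squares solution is p = (a + d) / 2, q = (c - b) / 2.
func closestPartialAffine(a: Double, b: Double, c: Double, d: Double,
                          tx: Double, ty: Double)
    -> (scale: Double, rotation: Double, tx: Double, ty: Double) {
    let p = (a + d) / 2
    let q = (c - b) / 2
    return (scale: (p * p + q * q).squareRoot(),
            rotation: atan2(q, p),   // radians
            tx: tx, ty: ty)
}
```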

Building for multiple screen sizes without size classes

I am currently writing an iPhone app (my first one) and have never used
Size Classes
Auto Layout
Stack Views
to set the views/images/labels etc. on the screen. I didn't use a storyboard either;
everything that is shown is laid out in code.
I didn't start using these tools because, back at the time (a couple of months ago), they seemed to be more of a pain than a true gain for my development process.
But looking back I feel I made a mistake, and now I'm way too deep in code to start all over again with the traditional/designated tools.
So, having said that, are there any known frameworks and/or best practices to make sure that everything drawn is scaled consistently and keeps the appropriate proportions on each device size?
Something like calculating the height/width ratio and somehow applying it to all the views/images, etc.?
Is there any good way to get out of this mess?
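For what it's worth, a minimal sketch of the ratio-based idea described in the question; the 375x667 pt design baseline and the scaledToDevice helper are hypothetical names for illustration (and no substitute for Auto Layout in the long run):

```swift
import UIKit

// Sketch: scale hard-coded frames by the ratio between the current screen and
// the design baseline. The 375x667 pt (iPhone 6/7/8) baseline is an assumption.
enum Design {
    static let baseWidth: CGFloat = 375
    static let baseHeight: CGFloat = 667

    static var widthRatio: CGFloat { UIScreen.main.bounds.width / baseWidth }
    static var heightRatio: CGFloat { UIScreen.main.bounds.height / baseHeight }
}

extension CGRect {
    // A design-time frame, scaled to the current device.
    var scaledToDevice: CGRect {
        CGRect(x: minX * Design.widthRatio,
               y: minY * Design.heightRatio,
               width: width * Design.widthRatio,
               height: height * Design.heightRatio)
    }
}

// Usage: label.frame = CGRect(x: 20, y: 100, width: 120, height: 44).scaledToDevice
```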

Which is a better option for displaying irregular shapes in Swift?

Let me start off by showing the UIImageView I have set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) a button, have the appropriate body part show like this:
I can achieve this using one of two options:
The UIBezierPath class to draw the shapes. This would take a lot of trial and error and many overlapping shapes per body part to get everything fitting nicely, as in a similar previous question: Create clickable body diagram with Swift (iOS)
Crop the highlighted body parts out of the original image and position them over the UIImageView depending on which UIButton is selected. There would be only one image per body part, but this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging and about how much storage it takes up. I'm not concerned with how long it takes to implement; I just want to make sure my app doesn't stutter when it draws all the shapes.
Thanks.
It is very, very unlikely that either of those approaches would have any significant impact on CPU or memory, particularly if, in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither approach would drop you below the maximum screen refresh rate of 60 fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM is a drop in the bucket compared to what you have available on any iOS device released in the last five years (unless it's the Apple Watch).
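For instance, a minimal sketch of that overlay idea; the asset name and the diagram view are hypothetical:

```swift
import UIKit

// Sketch: one grayscale cutout per body part, tinted at runtime.
// "arm_cutout" is a hypothetical asset whose alpha channel defines the shape.
func makeHighlightOverlay(matching diagram: UIImageView) -> UIImageView {
    let overlay = UIImageView(image: UIImage(named: "arm_cutout")?
        .withRenderingMode(.alwaysTemplate))
    overlay.tintColor = UIColor.red.withAlphaComponent(0.4)  // semitransparent tint
    overlay.frame = diagram.frame  // sit exactly over the body diagram
    return overlay
}
```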
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best approach is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time it is a lot easier to implement.
For a quick start on tests, see here; for performance tests, see here.
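For illustration, a minimal sketch of such a performance test; the two draw functions are hypothetical stand-ins for your own implementations of each option:

```swift
import XCTest

// Hypothetical stand-ins for the two approaches under test.
func drawBodyPartWithBezierPaths() { /* option 1: UIBezierPath drawing */ }
func overlayBodyPartCutoutImage() { /* option 2: cutout-image overlay */ }

final class BodyPartDrawingTests: XCTestCase {
    func testBezierPathDrawingPerformance() {
        measure { drawBodyPartWithBezierPaths() }
    }

    func testCutoutOverlayPerformance() {
        measure { overlayBodyPartCutoutImage() }
    }
}
```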

Valid technique for scalable graphics on iOS?

A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes, including as UITableViewCell imageViews, as custom MKMapAnnotations, and in a few other spots. I actually have a couple of sets, which include more static status icons as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution, and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that the status is derived from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it) so it can decide what it needs to draw, then does the necessary drawing in drawRect. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a Category with class-method constructors that take the model inputs as well as the desired size and construct the custom views.
Since I also wanted the option of rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions (sketched below).
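For reference, a hedged sketch of what such a rasterizing constructor might look like; the function and parameter names are illustrative, not the question's actual code:

```swift
import UIKit

// Sketch: render a vector-drawn icon view into a UIImage on demand.
// drawHierarchy(in:afterScreenUpdates:) is the fast iOS 7 snapshotting
// call the question refers to.
func rasterizedIcon(from iconView: UIView, size: CGSize) -> UIImage? {
    iconView.frame = CGRect(origin: .zero, size: size)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)  // 0 = screen scale
    defer { UIGraphicsEndImageContext() }
    iconView.drawHierarchy(in: iconView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```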
So what does this give me? Well here's the pros/cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics such as text. Because I have the exact shape data on everything I'm drawing I don't need to guesstimate on the bounds for a text box since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset but I still get all the advantages of the dynamic view since I'm not rasterizing it till I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the drawing code in the first place isn't ideal. For simple things I can write it directly in code, but for anything more complex I need to create the vector asset in Illustrator or Sketch, bring it into PaintCode, and then clean up the generated drawing code into something more streamlined. Not the smoothest process.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found much material on techniques for this sort of thing, and I'm wondering if I'm missing a better way of handling it, or if there are any hidden gotchas here. Performance doesn't seem to be an issue in my testing, but I haven't tested on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files and can export to a UIImage if desired.
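For example, a minimal sketch of that route, going by SVGKit's documented SVGKImage API (the asset name is hypothetical):

```swift
import SVGKit
import UIKit

// Sketch: load a bundled vector asset and rasterize it at an arbitrary size.
let svg = SVGKImage(named: "statusIcon")     // "statusIcon.svg" is hypothetical
svg?.size = CGSize(width: 44, height: 44)    // scale the vector before rasterizing
let raster: UIImage? = svg?.uiImage          // export to UIImage
```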

Custom tabbar items in MonoTouch

I would like to create custom tabbar items similar to the ones shown here:
I assume these have to be designed and created first in Photoshop or a similar application. Are there any resources or tutorials that demonstrate how to create such items in Photoshop and then use them in MonoTouch?
Creating and bundling bitmaps is one option - and likely the most common one (googling should turn up several tutorials). To get optimal quality, though, you need to supply multiple sets: for the old iPhone/iPod, the newer retina iPhone/iPod, the iPad (1, 2), and the retina iPad 3. Covering each case with beautiful icons can take a lot of space (or it won't look as good).
An alternative is to create the bitmaps at runtime, e.g. using the CoreGraphics API. This might seem counterproductive (and surely can be in many cases), but it has the advantage of requiring less storage space and/or giving better quality (see the note below).
Why? Because if you create them at runtime, you only create the ones for the specific device you're running on. You can even cache them and re-create them when missing (e.g. if iOS flushes your application cache).
If you're not an artist (and I'm not) you might want to look at easily licensable vector icons. The ones in your screenshot look monochrome and could even come from a custom font (bundled, or with the outlines extracted) - like the one provided by FontAwesome (CC BY 3.0) or similar sources.
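To illustrate the runtime approach (Swift shown for brevity; MonoTouch exposes the same UIKit API under C# names), a hedged sketch that rasterizes a single icon-font glyph at the device's scale; the bundled "FontAwesome" font and the U+F015 ("home") glyph are assumptions:

```swift
import UIKit

// Sketch: draw one icon-font glyph into a bitmap at runtime, so only the
// size/scale needed on the current device is ever generated.
func tabBarIcon(glyph: String = "\u{F015}", pointSize: CGFloat = 30) -> UIImage? {
    guard let font = UIFont(name: "FontAwesome", size: pointSize) else { return nil }
    let attributes: [NSAttributedString.Key: Any] = [
        .font: font,
        .foregroundColor: UIColor.gray,
    ]
    let size = (glyph as NSString).size(withAttributes: attributes)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)  // 0 = device scale
    defer { UIGraphicsEndImageContext() }
    (glyph as NSString).draw(at: .zero, withAttributes: attributes)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```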
Note: maybe you noticed (I know I did) that some iPad applications looked beautiful (compared to others) on the iPad 3 even though they were released months before the hardware became available. Vector graphics win ;-)
UPDATE: someone has already made a script to convert the FontAwesome characters into iOS tab bar icons. However, since it's done outside the app, you'll need multiple versions of each bitmap to get the best look on every device.
