Briefly, I want to use Auto Layout (and maybe size classes) properly to get a column of UIViews to rotate in place when the orientation changes. I can get the correct before and after looks (see images) using a UIStackView, but the transition is not what I want: each little image-label combination should rotate individually. I have tried embedding the image-label pair in a wrapper view and setting constraints in various plausible and implausible combinations... going crazy. I would prefer to use IB as much as possible, but I'm willing to go programmatic if necessary.
It's so easy to visualize this that one thinks it should be comparably easy to implement. I know enough linear algebra to do it all by hand, but surely I can stand on some tall shoulders instead?
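To illustrate the kind of thing I mean, here is a rough sketch of rotating each wrapper by hand alongside the transition (wrapperViews is a hypothetical outlet collection, and the angle logic is purely illustrative):

```swift
import UIKit

class ColumnViewController: UIViewController {

    // Hypothetical outlet collection: one wrapper per image-label pair,
    // laid out in a vertical UIStackView in IB.
    @IBOutlet var wrapperViews: [UIView]!

    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        // Rotate each wrapper a quarter turn in landscape and back to
        // identity in portrait, animated alongside the system transition.
        let angle: CGFloat = size.width > size.height ? .pi / 2 : 0
        coordinator.animate(alongsideTransition: { _ in
            for wrapper in self.wrapperViews {
                wrapper.transform = CGAffineTransform(rotationAngle: angle)
            }
        }, completion: nil)
    }
}
```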
Related
Summary: in iOS, how can a view modify the pixels of all the views behind it?
Say you have a view (well, any views), but let's consider a collection view that happens to be just some color blocks:
Say we added a view on top, CleverView, which either just blocks that view (so, white: trivial) or even "cuts a hole" in that layer (relatively easy to do; Google it).
No problem so far. Here's CleverView as just a white square:
But what if we want to do this:
CleverView is changing all the saturation below it,
Or perhaps this:
CleverView is changing the hue or whatever below it.
Note that in the examples it's working in a pixel-wise fashion; it's not ("just") flagging each collection view cell to change the whole cell's color.
So ideally CleverView would do this to anything at all that happens to be behind it (i.e., whatever bunch of views it covers or partly covers; hence the collection view example, which is just 'many views').
Naturally both the stuff underneath and the shape of CleverView can be animating and moving in real time.
Is there a way this could be done in iOS?
(In that specific example, what I do is just have two of the collection views: the bottom one, and a top one with the new color values. With care, simply clip the top one to achieve the effect; a sketch of that clipping follows below. But obviously that's not as Clever as a view that actually "modifies the values of all the pixels behind it".)
{Note too that, obviously, you can just take what is basically a screenshot, munge that image, and show it; not really a great solution.}
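For reference, the clipping part of that workaround looks roughly like this (a sketch; the names are hypothetical):

```swift
import UIKit

// Sketch of the two-collection-view trick described above: the recolored
// copy sits on top and is clipped to CleverView's shape with a layer mask.
func clip(_ topCollectionView: UIView, toShapeOf cleverView: UIView) {
    // Express CleverView's bounds in the top view's coordinate space,
    // then use that rect (or any UIBezierPath) as the mask shape.
    let shape = cleverView.convert(cleverView.bounds, to: topCollectionView)
    let mask = CAShapeLayer()
    mask.path = UIBezierPath(rect: shape).cgPath
    topCollectionView.layer.mask = mask
}
```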
CALayer has a backgroundFilters property to which you could normally add a CIFilter that would do the job. But the documentation states:
Special Considerations: This property is not supported on layers in iOS.
That's annoying, but that's all we have; it's probably due to performance considerations.
I would suggest looking into SceneKit: its primitives are very similar to Core Animation's and are also animatable with CAAnimation, but it provides advanced tools to configure and control many more aspects of the rendering.
For example, SCNNode has filters: https://developer.apple.com/documentation/scenekit/scnnode/1407949-filters?language=objc
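A minimal sketch of attaching a Core Image filter to a node (the plane geometry and filter choice are just illustrative):

```swift
import SceneKit
import CoreImage

// SCNNode.filters applies Core Image filters to the node's rendered
// contents, and unlike CALayer.backgroundFilters it works on iOS.
let node = SCNNode(geometry: SCNPlane(width: 2, height: 2))

if let desaturate = CIFilter(name: "CIColorControls") {
    desaturate.setValue(0.0, forKey: kCIInputSaturationKey)
    node.filters = [desaturate]
}
```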
I'm creating a rather complex animation, using UIViewPropertyAnimator, where I have the need to make precise movements like this one:
I was wondering: what's the difference between achieving this by animating the constraints and by animating the transform?
I achieved what you see in the GIF using transform animations. But I got curious, and I have a few more screens like this to animate, so I wanted to know if there is a preferred way, or any 'animation rules'.
I'm mostly curious whether there is a performance difference, but I'd also like to hear if someone got stuck using one of the mentioned approaches.
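To make the comparison concrete, here are the two variants I'm weighing (a sketch; names and values are hypothetical):

```swift
import UIKit

// Variant 1: animate the transform. Only the render tree changes;
// no layout pass is triggered per frame.
func slideWithTransform(_ view: UIView) {
    let animator = UIViewPropertyAnimator(duration: 0.4, curve: .easeInOut) {
        view.transform = CGAffineTransform(translationX: 0, y: -120)
    }
    animator.startAnimation()
}

// Variant 2: animate a constraint. Layout runs each frame, but the
// view's frame and Auto Layout stay in agreement afterwards.
func slideWithConstraint(_ topConstraint: NSLayoutConstraint, in container: UIView) {
    topConstraint.constant -= 120
    let animator = UIViewPropertyAnimator(duration: 0.4, curve: .easeInOut) {
        container.layoutIfNeeded()
    }
    animator.startAnimation()
}
```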
I am trying to think of a way to render a somewhat complex sub-hierarchy so that it would look good (crisp) when drawn at multiple scales.
I would expect there to be some container view with a size, and this subview could be constrained within it and drawn properly.
If it helps, think of a hierarchy like a calculator keypad, or even a computer keyboard:
You would have a keyContainer view, and a bunch of key views (or layers) of various sizes and positions.
Suppose I wanted this to be drawn in a container that was 320 x 320. I was thinking that if I knew the w:h ratio of each key, and the ratio of a key's width to the width of the whole container, then I could work out the sizes and positions of the keys. In practice, though, I've found the math introduces a lot of rounding errors, and you end up on off-pixel boundaries, so the final rendering does not look great.
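For concreteness, here's a sketch of the kind of ratio math I mean, with one possible mitigation: snapping each result to the pixel grid via the screen scale (the helper names are hypothetical):

```swift
import UIKit

// Round a point value to the nearest whole pixel for the current screen.
func pixelAligned(_ value: CGFloat) -> CGFloat {
    let scale = UIScreen.main.scale   // 2.0 or 3.0 on Retina screens
    return (value * scale).rounded() / scale
}

// keyWidthRatio: key width / container width; keyAspect: the key's w:h.
func keyFrame(in container: CGRect,
              keyWidthRatio: CGFloat,
              keyAspect: CGFloat,
              origin: CGPoint) -> CGRect {
    let width = pixelAligned(container.width * keyWidthRatio)
    let height = pixelAligned(width / keyAspect)
    return CGRect(x: pixelAligned(origin.x),
                  y: pixelAligned(origin.y),
                  width: width,
                  height: height)
}
```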
So how would you tackle a problem like this?
Thanks for any ideas.
I'm having a problem with a scale transformation I have to apply to UIViews in Swift (but it's the same in Objective-C too).
I'm applying CGAffineTransformMakeScale() to multiple views during a gesture recognizer callback.
It's like a loop for a card deck: I remove the one on top, the X others behind scale up, and a new one is added at the back.
The first iteration works as expected. But when I try to swipe the new front one, all the cards reset to their initial frame size, because applying a new transform seems to cancel the previous one and reset the view to its initial state.
How can I definitively apply/commit the first transform change, so that I can then apply a new one based on the UIView's resulting new size?
I tried UIView.commitAnimations(), but no change.
EDIT:
Here's a simple example to explain what I'm trying to do:
Imagine I have an initial UIView of 100x100.
I have a shrink factor of 0.95, which means the next views behind will be 95x95, then 90.25, then 85.73, etc.
If I remove the top one (100x100), I want to scale up the others, so the 95x95 will become 100x100, etc
This is done by applying the inverse of the shrink factor, here 1.052631...
The first time I apply the inverse factor, all views are correctly resized.
My problem is when I trigger, with a swipe on the new front UIView, a new resize of all views (so, for example, the 90.25x90.25 that became 95x95 should now scale to 100x100).
At this moment, the same CGAffineTransformMakeScale() is applied to all views, which all instantly reset to their original frame size (so the now-95x95 resets to 90.25x90.25, and the transformation is then applied to this old size).
As suggested here and elsewhere, using UIView.commitAnimations() at the end of each transformation doesn't change anything, and using CGAffineTransformConcat() keeps compounding the scaling on itself, so of course the views become insanely big...
I hope I've made myself clearer; this isn't easy to explain. Don't hesitate to ask if something is unclear.
After a lot of reading and consulting colleagues who know iOS programming better than I do, here's my conclusion:
Applying a CGAffineTransformMakeScale() only modifies a view visually, not its properties; and since it's difficult (and costly) to modify a view's bounds and/or frame afterward, I should avoid trying to apply a transform, update the bounds, apply another transform, and so on.
Applying the same CGAffineTransformMakeScale() again simply replaces the previous transform rather than applying on top of it.
Applying a CGAffineTransformScale() with the same values on top of the previous CGAffineTransformMakeScale() (or using CGAffineTransformConcat()) compounds the scaling, and it would be very difficult to calculate precisely the new values to apply each time to get the effect I want.
The best way I can go with this is to apply only one CGAffineTransformMakeScale(), whose scale values I keep updating throughout the view's life (see the sketch below).
It means I now have to rework all my implementation logic in reverse, but that's the easiest way to do this right.
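A sketch of what that looks like (the names are hypothetical): each view tracks its own cumulative scale, and every update sets a single absolute transform.

```swift
import UIKit

// Track the cumulative scale yourself and always assign one absolute
// transform. Assigning .transform replaces whatever was there before,
// so the math stays predictable.
final class CardView: UIView {
    private var currentScale: CGFloat = 1.0

    func scale(by factor: CGFloat) {
        currentScale *= factor
        transform = CGAffineTransform(scaleX: currentScale, y: currentScale)
    }
}

// Each swipe then grows every remaining card one step, e.g.:
// card.scale(by: 1 / 0.95)
```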
Thanks all for your tips.
So I'm looking into UIKit Dynamics, and the one problem I have run into is that if I want to create a UIView with a custom drawRect: (for instance, let's say I want to draw a triangle), there seems to be no way to specify the path of the UIView (or rather the UIDynamicItem) to be used for a UICollisionBehavior.
My goal really is to have polygons on the screen that collide with one another exactly how one would expect.
I came up with a solution of stitching multiple views together but this seems like overkill for what I want.
Is there some easy way to do this, or do I really have to stitch views together?
Dan
Watch the WWDC 2013 videos on this topic. They are very clear: for the sake of efficiency and speed, only the (rectangular) bounds of the view matter during collisions.
EDIT In iOS 9, a dynamic item can have a customized collision boundary. You can have a rectangle dictated by the frame, an ellipse dictated by the frame, or a custom shape — a convex counterclockwise simple closed UIBezierPath. The relevant properties, collisionBoundsType and (for a custom shape) collisionBoundingPath, are read-only, so you will have to subclass in order to set them.
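For example, a triangular view might look like this (a sketch, iOS 9+; the docs require a convex, counterclockwise, simple closed path whose coordinates are relative to the item's center):

```swift
import UIKit

// A triangular view whose collision boundary matches its drawn shape.
final class TriangleView: UIView {

    override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
        return .path
    }

    override var collisionBoundingPath: UIBezierPath {
        let w = bounds.width, h = bounds.height
        let path = UIBezierPath()
        // Points are relative to the view's center.
        path.move(to: CGPoint(x: 0, y: -h / 2))         // apex
        path.addLine(to: CGPoint(x: -w / 2, y: h / 2))  // bottom left
        path.addLine(to: CGPoint(x: w / 2, y: h / 2))   // bottom right
        path.close()
        return path
    }
}
```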
If you really want to collide polygons, you might consider SpriteKit and its physics engine (it seems to have a lot in common with UIKit Dynamics). It can mix with UIKit, although maybe not as smoothly as you'd like.