OL3 - ol.layer.Group - zIndex - why?

Does the z-index of a layer group serve any functionality?
As far as I can see, this property has no effect at all. When rendering a frame, the layer states are flattened into a single list, which is then sorted by z-index.
For me this is perfectly fine behavior. Z-index ordering should be completely independent of the grouping hierarchy, so I would rather see this property moved from the ol.layer.Base class to the ol.layer.Layer class. Does this make sense?
I just want to make sure that when I build onto this behavior, it works as I expect and is expected to remain this way in future versions :).

Related

How do I place a button in the background?

I have quite a few labels and buttons on my view controller, plus an image that I have set to the back in the layers panel on the left. However, when I make the image bigger it seems to move to the front and block the other views. I have tried code from several other people who had the same problem, but it still doesn't work. I am using Xcode 8, Swift, and iOS.
In Interface Builder you can change the order of what is in front or behind by changing the order of the views as they appear in the Document Outline...
The higher up the list they appear, the further towards the back they are.
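If you prefer to enforce the ordering in code instead, a minimal sketch (assuming the image view is exposed through a hypothetical outlet named backgroundImageView) could push it behind its siblings once the view has loaded:

import UIKit

class MyViewController: UIViewController {
    // Hypothetical outlet connected to the background image in Interface Builder.
    @IBOutlet weak var backgroundImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Send the image view behind every other subview; older Swift
        // versions spell this call sendSubview(toBack:).
        view.sendSubviewToBack(backgroundImageView)
    }
}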

UIAccessibility - containers

There's a "Containers" rotor option in Voiceover which allows the user to quickly navigate through "high level" sections of the screen via single finger swipe up and swipe down actions. For example, the Calendar app has three high level items: navbar, contents and toolbar.
My app uses custom UIView subclasses and, no matter what I try to do, all my views seem to belong to a single container. I can't split them into logical sections. I tried putting them in separate views implementing the UIAccessibilityContainer protocol and setting a few of the accessibility properties on the parent views.
Does anyone know how to create multiple containers?
I did some digging on this issue and think it's a private trait Apple is using. First I noticed that the only containers recognized are standard UIKit objects such as UITableView, UITabBar, UINavigationBar, etc. So next I used the debugger to inspect the accessibility traits of these components: they are all 0x200000000000. Just to be sure I didn't miss a UIAccessibilityTrait, I checked all of their values; none of them matches this value. Furthermore, if you set your view's accessibility traits to this mysterious value, it works just like you want! I tried to determine where this constant is defined but didn't have much luck. If you want to do more digging, it looks like Apple stores accessibilityTraits using an NSObject category that uses associated objects with a constant named AXTraitsIdentifier.
Practically speaking, you could do something like the code below, but since it's not defined in a public API, its behavior could change in the future:
// Note: the navBar has to go through a VoiceOver pass before the value is set :( or you can just set the value to 0x200000000000 directly.
myContainerView.accessibilityTraits = navBar.accessibilityTraits;
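A Swift translation of the same workaround might look like the sketch below; the raw value is undocumented, so treat it as an experiment that may break in any iOS release:

import UIKit

// Copy the traits of a system container (here an existing UINavigationBar)
// onto a custom view so the "Containers" rotor picks it up.
func markAsContainer(_ view: UIView, like navBar: UINavigationBar) {
    view.accessibilityTraits = navBar.accessibilityTraits
    // Or set the observed, undocumented raw value directly (may change at any time):
    // view.accessibilityTraits = UIAccessibilityTraits(rawValue: 0x200000000000)
}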
I'd love to hear if anyone else has more info on this. So far I haven't found an ideal solution.
I have been able to make the views in my app reachable by single-finger swipe up and swipe down actions (with the "Containers" rotor option selected) by setting the accessibilityContainerType property of my views to semanticGroup.
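For reference, a minimal sketch of that approach (available on iOS 11 and later) on a custom view subclass:

import UIKit

// Marking a custom view as a semantic group so VoiceOver's "Containers"
// rotor treats it as its own high-level section.
final class SectionView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        accessibilityContainerType = .semanticGroup
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        accessibilityContainerType = .semanticGroup
    }
}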

Using size classes programmatically

I (hopefully) watched all the relevant WWDC2014 session videos and read the docs, so this question is mostly to confirm my suspicions, but please educate me.
What I want to do is animate views using Auto Layout. That in itself is not a problem. But these animations' endpoints change with different orientations. I thought I might be able to use size classes to move the views automatically on rotation, but Apple's developer guide says that animations have to be done programmatically, and from what I can gather, size classes are an Interface-Builder-only thing.
Another idea I had was using custom layout guides like the top/bottom ones IB provides, but those seem to be hardcoded.
The last thing I could do is update constraints by hand after listening to rotation events, but that is nothing new, and I feel like size classes should be usable for more than just static interfaces. Am I overestimating their purpose?
TLDR: Given two points A and B that a view can have its origin at (due to animations), how can I move both points using size classes or something similar?
After some more digging in the docs I have finally found something usable. The UIContentContainer protocol defines willTransitionToTraitCollection(_:withTransitionCoordinator:), and that method's first parameter (a UITraitCollection) contains the horizontal and vertical size classes as well as a UIUserInterfaceIdiom (which can be used to tell whether the app is running on an iPhone or an iPad, although size classes should be used for most decisions).
Additionally, since iOS 8 hides the status bar in landscape, traitCollectionDidChange(_:) is the corresponding method that gets called after the change has happened, so the value of UIApplication.sharedApplication().statusBarHidden has already changed by the time this method is called. This can be useful for adjusting a UIScrollView's contentInset, for example.
Lastly, if you need the exact screen size (in points, of course, though the trait collection mentioned above also knows the pixel density), there is viewWillTransitionToSize(_:withTransitionCoordinator:).
Hope this helps someone else as well.
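To make this concrete, here is a minimal sketch in current Swift (where the method is spelled willTransition(to:with:)); the two constraint arrays are hypothetical placeholders for the layout's compact and regular endpoints:

import UIKit

class AnimatedViewController: UIViewController {
    // Hypothetical constraint sets describing the two layout endpoints.
    var compactConstraints: [NSLayoutConstraint] = []
    var regularConstraints: [NSLayoutConstraint] = []

    override func willTransition(to newCollection: UITraitCollection,
                                 with coordinator: UIViewControllerTransitionCoordinator) {
        super.willTransition(to: newCollection, with: coordinator)
        // Swap constraints and animate alongside the size-class transition.
        coordinator.animate(alongsideTransition: { _ in
            if newCollection.verticalSizeClass == .compact {
                NSLayoutConstraint.deactivate(self.regularConstraints)
                NSLayoutConstraint.activate(self.compactConstraints)
            } else {
                NSLayoutConstraint.deactivate(self.compactConstraints)
                NSLayoutConstraint.activate(self.regularConstraints)
            }
            self.view.layoutIfNeeded()
        }, completion: nil)
    }
}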

how to add a UITextField to a CALayer and make it work

I searched on Google and found this, but I'm not sure the answer is correct.
What I need is to add a UITextField to a CALayer and make it work. I used this method, but it's not working:
[layer addSublayer:textField.layer];
Any ideas to solve this problem?
The short answer is, you can't. Layers are lower-level objects. Views have layers, but layers can't have views.
If you were to add another view's layer as a sublayer of your current view's layer, the view would have no knowledge that its layer was being hosted somewhere else. That view would not be part of the view hierarchy, so it would not think it needs to draw itself. And, unless you maintain a strong reference to it somewhere, it would be deallocated, causing the layer that you've added to be deallocated and creating a zombie.
Views contain views. Layers contain layers. Keep them separate. If you want text contents in layers, you can use a CATextLayer, although they don't look as good as text UI objects because they don't do sub-pixel anti-aliasing. I tried using text layers and ended up abandoning them because they looked really bad.
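As a minimal sketch of the two supported routes, you can either add the text field as a real subview (so the view hierarchy manages its backing layer for you), or use a CATextLayer when you only need to display text inside a layer tree:

import UIKit

// Route 1: keep the UITextField in the view hierarchy; its backing layer
// is managed automatically.
func addTextField(to containerView: UIView) {
    let textField = UITextField(frame: CGRect(x: 20, y: 20, width: 200, height: 30))
    textField.borderStyle = .roundedRect
    containerView.addSubview(textField)
}

// Route 2: display-only text directly in a layer tree via CATextLayer.
func addTextLayer(to hostLayer: CALayer) {
    let textLayer = CATextLayer()
    textLayer.frame = CGRect(x: 20, y: 60, width: 200, height: 30)
    textLayer.string = "Layer-only text"
    textLayer.fontSize = 17
    textLayer.foregroundColor = UIColor.black.cgColor
    textLayer.contentsScale = UIScreen.main.scale   // avoids blurry text on Retina displays
    hostLayer.addSublayer(textLayer)
}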

Scrolling items on screen [iOS cocos2d]

OK, so in my game I need users to scroll between items, just like you scroll a web page in Safari. Is there any way to do that? If not, maybe scrolling them to the side, like you do on the Springboard? Thanks.
I am not really sure I have understood what you would like to do, but there is a cocos2d extension that seems appropriate for it: CCScrollLayer.
CCLayer subclass that lets you pass-in an array of layers and it will then create a smooth scroller. Complete with the “snapping” effect.
If you are looking for generic scrolling within your view, I suggest this tutorial or this topic from the cocos2d list.
EDIT:
I have never done it, but I think it should be possible to scale the CCScrollLayer to the size you need.
Otherwise, you might change the contentSize of the layer, or even put the CCScrollLayer into a clipping node.
Anyway, I think it is much easier to start from this and find a way to adapt it to your specific requirements than to start from scratch.
