iOS app as a staged play - ios

This may not be a unique question, but I don't know how to phrase it because my Google skills are insufficient.
I am writing an app framework and I am spending a lot of time writing a system to store the positions and other properties of UIView elements on the screen.
What I want to know is whether something like this already exists, since even though my system works well, I am concerned about memory usage with a large number of elements.
Basically, I subclassed UIControl and added a "state" property that stores position, alpha, color scheme data, and all forms of transforms that apply to a certain state in the interface. This is akin to actors positioned on a stage. When a scene change happens (a button is pressed, or something), the actors (UIViews) know exactly where to be and how to look, based on the stored data.
This means that with a single button press and an NSNotification, I can broadcast a simple integer that identifies the state to be in, and all the necessary properties will be animated accordingly.
Am I wasting my time? Does something like this already exist? It does not appear to be included in the tools that Apple provides.
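For what it's worth, here is a minimal Swift sketch of the kind of state-storing "actor" described above. Everything in it (ActorState, ActorControl, the sceneDidChange notification name, and the "scene" userInfo key) is illustrative, not an existing API:

import UIKit

// A per-state snapshot of the properties the question describes storing.
struct ActorState {
    var center: CGPoint
    var alpha: CGFloat
    var backgroundColor: UIColor
    var transform: CGAffineTransform
}

// The "actor": it keeps one snapshot per scene identifier and animates to
// whichever scene is broadcast.
class ActorControl: UIControl {
    static let sceneDidChange = Notification.Name("SceneDidChange") // assumed name

    private var states: [Int: ActorState] = [:]

    override init(frame: CGRect) {
        super.init(frame: frame)
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(sceneChanged(_:)),
                                               name: ActorControl.sceneDidChange,
                                               object: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Record how this actor should look in a given scene.
    func setState(_ state: ActorState, forScene scene: Int) {
        states[scene] = state
    }

    @objc private func sceneChanged(_ note: Notification) {
        guard let scene = note.userInfo?["scene"] as? Int,
              let target = states[scene] else { return }
        UIView.animate(withDuration: 0.3) {
            self.center = target.center
            self.alpha = target.alpha
            self.backgroundColor = target.backgroundColor
            self.transform = target.transform
        }
    }
}

A scene change is then just NotificationCenter.default.post(name: ActorControl.sceneDidChange, object: nil, userInfo: ["scene": 2]). On the memory concern: each snapshot here is a handful of value types (well under a hundred bytes), so even hundreds of actors with several states each should be negligible compared to the views themselves.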

Related

react-native Mark items as they leave the screen in scroll view

I am writing an iOS app (just iOS for now, so I don't need to consider other platforms at the moment) in React Native.
I have a screen in my app that is a ScrollView of items that were retrieved from a server, and I'd like to mark each item as "read" as it passes out of the top of the screen for the first time (i.e. "mark as read on scroll" functionality). Once an item has been marked as read, it can be marked as unread again, but only through other actions not related to scrolling.
I cannot for the life of me figure out a good way to do this. Ideally the items themselves would be able to detect whether or not they have disappeared off the top of the screen and just update the server that way, but I can't seem to find if that's possible (I easily could have missed something in the docs but I don't think so).
At the moment my solution is to calculate how far down the ScrollView has scrolled, divide that by the height of each item (which is static for now... I don't know what I'll do when it becomes not static, if ever), and that gives how many items I need to mark as read. At that point I do logic to determine whether each local item has already been marked as read, and if not I update the local item and send an update to the server.
A previous solution was to just update the server for each item as it scrolled past, but that got out of hand too quickly because you can scroll pretty fast and each item needs to be marked as read accurately.
The server API calls are idempotent, so sending multiple updates for the same item, while not great, is also not the end of the world. Also, I am running this in the simulator on my Mac and I haven't yet tested it on a real device (I have one, but I am still in kind of early stages of development).
I am happy to provide any other information needed!
The onViewableItemsChanged prop will return a list of items whose visibility has changed. Keep in mind that this visibility is determined by the viewabilityConfig prop.
https://reactnative.dev/docs/sectionlist#onviewableitemschanged

Saving temporary data in userdefaults?

I've got views that are placed on the screen based on Firebase data. When a view is originally planted on the screen, it pulls the saved x and y from Firebase and uses them to place itself.
The views are draggable, so when the user drags a view, leaves the page, and comes back, I want the view to stop pulling from the old Firebase data and use some locally saved new x/y instead. Previously I was just updating the x/y in Firebase on touches end, but that's super impractical with lots of users doing it at once, plus it just seems unnecessary.
The super sloppy idea I had was saving the x/y in UserDefaults when touches end on one of these views, by doing something like:
1) Each view has a firebase UID tied to it, so grab that
2) grab the X and Y positions of the button on touches end
3) Have the key name be String(UID + "xCoord") (and one for Y as well) and save the values under those keys
4) When I'm looking up the x/y values to draw the view, check whether there's a user default set for that UID's xCoord/yCoord; otherwise go to Firebase for it.
Then to clean up my user defaults I could check whether any stored UID belongs to a view I'll never load again, and if so clear out those coordinates (not sure if that's even necessary).
Is this an abysmal way of doing this? Is there a better way to do it? I'd rather not get into Core Data, because I've avoided it like the plague and this seems simple enough not to need it.
Any ideas on how to make this better?
You've basically described how macOS saves the locations of windows. The feature there is called "autosave." User defaults is a fine place to put this.
I'd map just one property for the window, rather than two. You can easily store a CGPoint in user defaults by converting it to a string with NSStringFromCGPoint (an NSValue isn't a property-list object, so it can't be stored directly). I'd probably prefix something on the property name (like "windowPosition:") rather than just using the UID, but it probably doesn't matter that much. You could also store all window locations in a single property that holds a dictionary. Really, whatever is reasonably convenient. This is more or less what user defaults are for, especially on iOS where the user can't directly interact with them. Storing small pieces of data between launches of the program is its whole point.
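As a concrete illustration of the scheme above (the ViewPositionStore name and the "viewPosition:" key prefix are just for the example), a small Swift helper might look like this:

import UIKit

// Stores and retrieves a dragged view's position, keyed by its Firebase UID.
enum ViewPositionStore {
    private static func key(for uid: String) -> String { "viewPosition:\(uid)" }

    // Call this on touches end / at the end of the drag gesture.
    static func save(_ point: CGPoint, for uid: String) {
        UserDefaults.standard.set(["x": Double(point.x), "y": Double(point.y)],
                                  forKey: key(for: uid))
    }

    // Returns the locally saved position, or nil so the caller can fall back to Firebase.
    static func load(for uid: String) -> CGPoint? {
        guard let dict = UserDefaults.standard.dictionary(forKey: key(for: uid)),
              let x = dict["x"] as? Double,
              let y = dict["y"] as? Double else { return nil }
        return CGPoint(x: x, y: y)
    }

    // Optional cleanup for UIDs you know you'll never load again.
    static func clear(for uid: String) {
        UserDefaults.standard.removeObject(forKey: key(for: uid))
    }
}

When placing the view you'd try ViewPositionStore.load(for: uid) first and only hit Firebase when it returns nil, which matches step 4 in the question.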

Detect UI changes

I have a function that continuously takes screenshots of a UI element and then draws it itself. The UI element can change, so I take a screenshot at very short intervals so the second drawing doesn't lag behind (please don't question this and just assume that redrawing it is the right way; the use case is a bit more complicated).
However, taking the screenshot and invalidating the previous drawing to redraw it is quite an expensive operation, and most of the time it isn't needed, since the UI element doesn't update that often. Is there a way to detect when a UI element changes in such a way that it needs redrawing, including when that happens in one of its subviews? One solution would be to copy the state and the states of all its descendants and then compare that, but that doesn't seem like a good solution either. iOS must know internally when it needs to redraw/update the views; is there any way to hook into this? Note that I tagged this UIKit and Core-Animation; I suppose the way to go for this is Core-Animation, but I'm open to a solution that uses either of them.
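There doesn't appear to be a public hook into UIKit's or Core Animation's internal dirty tracking, but if the views being snapshotted are under your control, one possible direction is to have them raise a flag whenever UIKit is asked to redraw or re-lay them out, and only re-take the screenshot on frames where that flag was set. A rough Swift sketch under that assumption (the class names are made up):

import UIKit

// A view that reports when UIKit invalidates it. Only works for views you subclass.
class ChangeReportingView: UIView {
    var onInvalidated: (() -> Void)?

    override func setNeedsDisplay() {
        super.setNeedsDisplay()
        onInvalidated?()
    }

    override func setNeedsLayout() {
        super.setNeedsLayout()
        onInvalidated?()
    }
}

// Re-runs the expensive snapshot only on display-link ticks where something changed.
final class SnapshotCoalescer {
    private var isDirty = true
    private var displayLink: CADisplayLink?
    private let takeSnapshot: () -> Void

    init(takeSnapshot: @escaping () -> Void) {
        self.takeSnapshot = takeSnapshot
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    // Wire this up to ChangeReportingView.onInvalidated.
    func markDirty() { isDirty = true }

    // CADisplayLink retains its target, so call this when you're done snapshotting.
    func invalidate() { displayLink?.invalidate() }

    @objc private func tick() {
        guard isDirty else { return }
        isDirty = false
        takeSnapshot()
    }
}

This only catches invalidations in views you subclass; changes made directly to layers, or inside subviews you don't own, would still slip through, which is exactly the part of the question that remains open.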

Actual difference between UIAccessibilityLayoutChangedNotification and UIAccessibilityScreenChangedNotification?

I’m trying to ascertain what exactly happens differently when posting a UIAccessibilityLayoutChangedNotification versus a UIAccessibilityScreenChangedNotification. From what I can see, I can use them interchangeably everywhere and nothing different happens.
The Apple documentation simply says to use LayoutChanged when (for example) an element has been hidden or shown, and to use ScreenChanged if the entire screen changes, but I’m interested in what THEY do when I provide this information, and what I should see differently when using one or the other.
Can anyone give a clear explanation of implementation differences between the two?
These two notifications are for dynamic content on views, and for communicating those changes to VoiceOver for screen reader users. There is little difference between the two, except for their default behavior and the silly little "boop beep" that accompanies ScreenChanged notifications.
In both instances, the argument in
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, arg);
represents a string to be read out, or an on-screen element that VoiceOver will shift its focus to. In the event of dramatic context changes, it is important to send focus to a place that makes sense, or to announce that such changes have taken place. Either approach is acceptable from an accessibility point of view, though I prefer approaches that involve the least amount of change possible. In the event of simple layout changes, it is almost always best just to announce the context change and leave focus where it was. Sometimes, though, the element that caused the context change is hidden, and then it is clearly necessary to direct VoiceOver to highlight the new content, because the default behavior in this case is undefined (or perhaps deterministic, but determined by a framework that knows absolutely nothing about your app!).
The difference between the two events, given that they both do exactly the same thing, is in their default behavior. If you supply nil to the UIAccessibilityLayoutChangedNotification it is as if you have done nothing. If you supply a nil argument to the UIAccessibilityScreenChangedNotification it will send focus to the first UIObject in your view hierarchy that is marked as an accessibilityElement, once all view hierarchy changes and drawings are complete.
UIAccessibilityLayoutChangedNotification
A good use case example for UIAccessibilityLayoutChangedNotification is for dynamic forms. You want to let users know that, based on decisions they've made in the form, new options are available. For example, if in a form you select that you are a Veteran, additional areas of the form may pop up to provide more input, but these areas may have been hidden to other users who did not care about them. So you could shift focus to these elements after user interaction:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, firstNewFormElement);
Which would shift focus to the provided element and announce its accessibilityLabel.
Or just tell them that the new form elements are there:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, @"Veterans form elements available");
Which would leave focus where it is, but VoiceOver would announce "Veterans form elements available".
Note: This particular behavior is bugged on my iPad (8.1.2).
Or finally you could do this:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil);
Which does absolutely nothing :). Seriously, I don't even think the a11y framework backend cares. This particular line of code is a complete waste!
UIAccessibilityScreenChangedNotification
A good use case example for the UIAccessibilityScreenChangedNotification is a customized tabbed browsing situation, when the entire screen, with the exception of your navigation area, changes. You want to let VoiceOver know that essentially the entire screen changed, but NOT to focus the first element (your first tab); instead it should focus the first content element.
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, firstNonGlobalNavElement);
Which would play the "boop beep" sound and then shift focus to just beneath your global navigation bar. Or you could do this:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, @"You're on a new tab");
Which would wait for the new tab to load, play the "boop beep" sound, announce "You're on a new tab" in VoiceOver, then shift focus to the first element on the screen, and then announce the accessibilityLabel of that element. (PHEW! That's a lot! It is jarring for screen reader users; avoid this scenario unless absolutely necessary.)
And finally you can of course do this:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, nil);
Which is equivalent to:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, firstA11yElement);
Both of which will play the "beep boop" sound, shift VoiceOver focus to the first element on the screen, and then announce it.
Finally
In a comment somebody mentioned caching, and I occasionally comment in my answer about things the A11y Backend may or may not care about. While it is certainly possible that there is some backend magic happening, I don't believe in either of these circumstances, the back end cares at all. The reason I say this is because:
If you've ever used the UIAccessibilityContainer protocol, you can watch as your container of views gets queried. There is no caching going on. Even the accessibilityElementCount property gets pinged each time VoiceOver changes focus to a new accessibility element within your container. Then it goes through the process of checking which element it is on, asking for the next element, and so on. It is designed at its core to handle dynamic situations. If you were to insert a new element into your container after interaction, it would still go through all of these queries and be just fine about it! Furthermore, if you override the properties of the UIAccessibility protocol in order to provide dynamic hints and labels, you can see that those functions get called every time as well.
As such, I believe that the a11y framework backend gleans ABSOLUTELY ZERO information from these notifications. The only information VoiceOver needs to do its job is its currently focused accessibility element and that element's accessibility container. The notifications are simply there for you to make your app more usable for VoiceOver users.
Imagine if this weren't the case how many times Safari would post these notifications!!!! :)
These particular statements can only be confirmed by someone with backend knowledge of the framework, who works with the code, and should be viewed as conjecture. It could be the case that this is highly version/implementation dependent. Definitely open to discussion on these points! The rest of this post is pretty concrete.
For Your Reference
Most of this comes from experience working with the frameworks, but here is a useful reference if you wish to dig further.
https://developer.apple.com/documentation/uikit/accessibility/uiaccessibility
https://developer.apple.com/documentation/uikit/uiaccessibilitylayoutchangednotification
https://developer.apple.com/documentation/uikit/uiaccessibilityscreenchangednotification
And finally, an open source repo of the silly little app I put together to test all this stuff.
https://github.com/chriscm2006/IOS-A11y-Api-Test
UIAccessibilityScreenChangedNotification is to indicate that the whole screen has changed and VoiceOver should reset.
UIAccessibilityLayoutChangedNotification is to indicate that one or more, but not all, elements on the screen have changed.
Use UIAccessibilityScreenChangedNotification when your UI changes dramatically, usually when a user moves into a different part of your app (navigates to a different screen). VoiceOver notifies the user with a tone, clears its caches, and does other preparations to deal with a new set of accessibility data. It alerts VoiceOver that the screen has changed and there may be new elements on the screen, so VoiceOver will rebuild its index of accessibility elements.
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, nil);
Use UIAccessibilityLayoutChangedNotification if some part of your UI changes, but the user hasn’t necessarily jumped to an entirely different part of your app. (Example: in the iTunes Store app, tapping on the price label ($0.99, etc.) next to a song changes it to a “Buy” button.) This notification tells VoiceOver to re-read the current state of all accessible items that are on screen, and by doing so it figures out what has changed and informs the user of those changes. It alerts VoiceOver that the layout has changed and that its current index is out of date because the items on the screen have reordered themselves.
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil);
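For reference, the Objective-C calls quoted in both answers have a renamed Swift spelling, where UIAccessibility.post(notification:argument:) replaces UIAccessibilityPostNotification; the argument behaves the same way:

import UIKit

// `argument` can be a string for VoiceOver to announce, an on-screen element to move focus to, or nil.
func postLayoutChanged(_ argument: Any?) {
    UIAccessibility.post(notification: .layoutChanged, argument: argument)
}

func postScreenChanged(_ argument: Any?) {
    UIAccessibility.post(notification: .screenChanged, argument: argument)
}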

Corona SDK - Managing Game State/Objects/Inventory System/Sprite Animation

I am attempting to determine the best way and most efficient way to handle the following tasks in a game written with the Corona SDK. It seems like there are so many ways to do things that it becomes quite confusing, so I am hoping someone here can help!
I am creating an adventure-type game that will have an inventory system, puzzles, etc. The thought process I've developed so far involves using separate "classes" to handle each specific aspect of the game, such as InventoryManagement.lua, ObjectManagement.lua, PuzzleManagement.lua, etc.
Just a side note: this game really doesn't involve physics, but I would like to have static images with animations that occur (think opening a door or picking up an object).
Here is an example of what I am trying to accomplish:
Say you start a new game and it loads the first scene. I need to set up the player's inventory, the objects in the room, their state, images for these things, etc., which I assume could be defaulted on first game load and loaded subsequently ...
Then the player clicks on a key to pick it up - at this point the key needs to appear in their inventory so now it will be removed from the scene, added to their inventory (Via InventoryManagement?), and the scene will be updated (Via SceneManagement?)...
From now on the key should no longer show in the scene.
Now say they click the key and use it on a door, the door should animate open and remain open from now on.
If the player leaves the room and comes back, the key should not appear.
Now to me it makes sense to load/unload the scene every time you enter/leave the scene, but ... won't this become memory intensive etc. if you do it this way? ... Is there a better way to handle a scene if it has say, 30 objects on screen?
Hopefully that is clear - It is hard to find specific information related to each one of these elements. Everything seems to be related to physics games and I can't seem to find something on how to "Add a key to the scene if, but not if, and if it's been used then animate that door" :(
Thanks!
I had the same problem with unexpected results when jumping between storyboards. One thing I found out:
Always make sure to call Runtime:removeEventListener and stop the timers you use. One alternative when you want to reset everything is to make a reset.lua file, and on enterScene use storyboard.removeScene or storyboard.purgeScene (if you just want to get rid of all the display objects you have rigged up in memory in scene:createScene).
Also check out storyboard.removeAll() and storyboard.purgeAll().
You need to put some things in global space (using _G) to use them between scenes.
To change between scenes, "Storyboard" is very nice.
But mind you, you will need to do some manual memory management if you make things too memory intensive; mainly, before loading a scene you might need to make several calls to collectgarbage() after cleaning up the last scene, otherwise you will hit a curious problem:
You loaded scene "A".
You switch to scene "B".
The game 'unloads' "A", but the garbage collector doesn't run yet; then it loads "B", and the end result is the memory use of "A" + "B".
In the adventure game I was making in Corona, this resulted in several crazy crashes until I figured out the collectgarbage trick.
