UIAccessibility - Trigger for element creation - ios

Is there a way (in Swift) to set up a trigger that fires and does something whenever a certain element (for example, app.staticTexts["sometitle"]) exists and is accessible to Accessibility? In other languages, "stand-by" loops are considered bad practice, so there are latches and callbacks (on creation). Is there something similar in Swift?

While not exactly what you're looking for, UIView provides a pair of methods, -willMoveToWindow: and -didMoveToWindow, that might serve your purposes. Working from the other end, you might consider implementing -didAddSubview: on the containing view or window. For more information, refer to the documentation for UIView.
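If the goal is to react inside the app itself, a minimal Swift sketch of those overrides might look like the following; the class names and the idea of watching a specific label are assumptions, not anything from the question.

```swift
import UIKit

// Hypothetical label subclass that reacts when it becomes part of the view
// hierarchy (which is also when Accessibility can start seeing it).
class ObservedTitleLabel: UILabel {
    override func didMoveToWindow() {
        super.didMoveToWindow()
        // window is non-nil once the label has been attached to a window.
        if window != nil {
            print("Title label is now in the hierarchy: \(text ?? "")")
        }
    }
}

// Working from the other end: the containing view hears about new children.
class ContainerView: UIView {
    override func didAddSubview(_ subview: UIView) {
        super.didAddSubview(subview)
        if let label = subview as? ObservedTitleLabel {
            print("A title label was just added: \(label.text ?? "")")
        }
    }
}
```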

Related

iOS background tasks synchronization app flow

I haven't been programming iOS for very long. I have mainly been writing UI-related code like animations, custom UIControls, etc.
In my new app I need to:
Display a loading activity indicator and, at the same time:
load some remote data from the server, parse it, and store it in local Core Data
load some data from local Core Data
get the user's position from the location service
After that I have all the data needed to display the next view controller and dismiss the loading indicator.
The question is: how can I do all of this? I need to support iOS 9, 10, 11, and 12. I understand that this needs to be done on background threads, and that I then need to merge the data from each task and switch to the next view controller. I can't use any external libraries like RxSwift or PromiseKit. Maybe an experienced iOS developer can give me some guidelines on how to approach this kind of application flow? I can imagine there are a lot of ways to do it, some better and some worse. Any guidelines would be very helpful. Thanks.
It's a very broad question and, as you said, it's possible to solve all of these problems in several ways. But I can give you some core hints about which steps to follow:
Run all the network communication on a separate thread. You can run it on a separate queue using DispatchQueue. Once you have received the data, you can convert it on that same queue and store it directly in a Core Data database.
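A minimal sketch of that step, assuming a private-queue NSManagedObjectContext; the endpoint URL, entity name, and attribute key are placeholders.

```swift
import CoreData
import Foundation

// Placeholder endpoint; the real URL, entity name, and attributes will differ.
let endpoint = URL(string: "https://example.com/api/items")!

func loadRemoteData(into context: NSManagedObjectContext,
                    completion: @escaping (Error?) -> Void) {
    // URLSession already delivers its callback on a background queue.
    URLSession.shared.dataTask(with: endpoint) { data, _, error in
        if let error = error {
            completion(error)
            return
        }
        guard let data = data else {
            completion(nil)
            return
        }
        // Parse and store on the context's own (private) queue.
        context.perform {
            do {
                let items = try JSONSerialization.jsonObject(with: data, options: []) as? [[String: Any]] ?? []
                for item in items {
                    let object = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
                    object.setValue(item["name"] as? String, forKey: "name")
                }
                try context.save()
                completion(nil)
            } catch {
                completion(error)
            }
        }
    }.resume()
}
```

The context here would be one created with .privateQueueConcurrencyType; NSPersistentContainer's newBackgroundContext() also works, but that class requires iOS 10, so on iOS 9 you create the context yourself.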
To store into Core Data you first need to know how it works, so look for a simple tutorial on creating your first data model in Xcode. Once you have been able to run a very simple example, you can move on to the second step and integrate it with the data you previously downloaded from the network. Here is a good article for you: https://www.raywenderlich.com/7569-getting-started-with-core-data-tutorial
Getting the location is a separate topic, because you have to study which background modes iOS allows (and only a few are allowed) and then figure out which category of background-location app your software falls into. After that you have to dig into how protocols and delegates work in Swift/Objective-C so you can properly handle the last location value reported by the sensors. Here is a good article for you: https://www.raywenderlich.com/5247-core-location-tutorial-for-ios-tracking-visited-locations
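A bare-bones sketch of the delegate approach (authorization edge cases and the Info.plist usage string are left out, and the class name is made up):

```swift
import CoreLocation

// Minimal location helper using the delegate pattern mentioned above.
class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var completion: ((CLLocation?) -> Void)?

    func requestLocation(completion: @escaping (CLLocation?) -> Void) {
        self.completion = completion
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // needs a usage string in Info.plist
        manager.requestLocation()                 // one-shot location update
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        completion?(locations.last)
        completion = nil
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        completion?(nil)
        completion = nil
    }
}
```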
Finally, once you have connected all of these flows, you can think about how to display the loading indicator. Basically you drag it from the object library into the storyboard, connect it with an IBOutlet (or an IBAction, depending on when and in which case you want to show it), and then call startAnimating or stopAnimating at the right points in your code flow (it really depends on how you have structured the previous bullet points).
Since your question is very general and includes a lot of sub-steps, it really needs to be studied and analysed thoroughly.
I've tried to sum up the most important points as much as possible. I hope the links I suggested help a little bit. Good luck.
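The answer above doesn't commit to a particular way of waiting for the three tasks; one plain-GCD option that works back to iOS 9 and needs no third-party library is DispatchGroup. Everything in this sketch is hypothetical: the segue identifier, the outlet, and the three load* stubs that stand in for the real work described in the steps above.

```swift
import UIKit

// Hypothetical view controller tying the pieces together with a DispatchGroup.
class LoadingViewController: UIViewController {
    @IBOutlet weak var activityIndicator: UIActivityIndicatorView!

    // Placeholder implementations; the real ones would do the network,
    // Core Data, and Core Location work from the earlier steps.
    func loadRemoteData(completion: @escaping () -> Void) { completion() }
    func loadLocalData(completion: @escaping () -> Void) { completion() }
    func loadUserLocation(completion: @escaping () -> Void) { completion() }

    func loadEverythingAndContinue() {
        activityIndicator.startAnimating()

        let group = DispatchGroup()

        group.enter()
        loadRemoteData { group.leave() }      // network + store in Core Data

        group.enter()
        loadLocalData { group.leave() }       // read from local Core Data

        group.enter()
        loadUserLocation { group.leave() }    // Core Location

        // Runs on the main queue once all three leave() calls have happened.
        group.notify(queue: .main) {
            self.activityIndicator.stopAnimating()
            // "showNextScreen" is a made-up segue identifier.
            self.performSegue(withIdentifier: "showNextScreen", sender: nil)
        }
    }
}
```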

Binding technology to use for iOS app

I am a novice iOS app developer. Basically I need to listen for any change in the UI (e.g., do something when the text changes in a text view) and update the UI to reflect any model change. I was looking into the different technologies available for binding and am a bit overwhelmed.
I know this is very subjective, but I would like to hear your suggestions (for both Swift and Objective-C).
Also, any pointers to best practices would be really helpful.
Welcome to iOS! Cocoa (Cocoa Touch on iOS), Apple's framework, has some design patterns you will want to learn...
These include delegation and target/action. Quickly summarized: delegation is handing some responsibilities off to an object that can take care of them. This provides methods you can use to get stuff (like changes) to that object. Read this link to learn more about delegates: https://useyourloaf.com/blog/quick-guide-to-swift-delegates/
Now, back to the text field example. A handy method in the UITextFieldDelegate protocol, textField(_:shouldChangeCharactersIn:replacementString:), is called whenever the text field is about to change. As for target/action, that's google-able. (Is that a word?) Also, learn MVC. Have fun!
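A small sketch of both patterns applied to a text field; the controller name and outlet are invented:

```swift
import UIKit

class FormViewController: UIViewController, UITextFieldDelegate {
    @IBOutlet weak var nameField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Delegation: this controller is asked about changes before they happen.
        nameField.delegate = self
        // Target/action: this controller is told after the text has changed.
        nameField.addTarget(self, action: #selector(textDidChange(_:)), for: .editingChanged)
    }

    // UITextFieldDelegate: called before the proposed change is applied.
    func textField(_ textField: UITextField,
                   shouldChangeCharactersIn range: NSRange,
                   replacementString string: String) -> Bool {
        // Returning false would reject the edit; update the model here if needed.
        return true
    }

    // Target/action: called after every edit.
    @objc func textDidChange(_ sender: UITextField) {
        print("Model should now reflect: \(sender.text ?? "")")
    }
}
```

The delegate call lets you veto or react to a change before it happens, while the .editingChanged target/action fires after the text has already been updated.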

Sub-classing custom classes

I’ve been a procedural programmer for a long time and most of my iOS code is written with lots of if statements instead of sub-classing. I think I finally understand how to write object-oriented code, but I have a few questions.
I have a class, ScoringToolbar.m, that is used in all of my games. It creates buttons for the bottom of the screen. Which buttons are created varies depending on the game and the options in the game. Here’s a typical screen.
Right now it is a long series of if statements. In addition to being hard to read, it’s definitely not proper object-oriented programming. What I’d like to do is convert the class into a superclass and add a subclass for each game. My first question is: is there a convention for naming the superclass?
Also, I’d like to keep the ScoringToolbar.m name for each of my sub-classes. If I create one sub-class for each of my apps (or group of similar apps), I can move the code from the if statements into it. Then each app would call its own subclass and create the buttons it needs. If I do that, I won’t have to change any of the calling code. However, if I have lots of .m files with the same name, what do I do with the .h files? Do I have just one and make sure it works with all of the .m files? Or is there a way to tell Xcode to use a specific .h file in an app?
Or is this the wrong approach altogether?
I'm not sure subclassing is your best option. Your question about multiple .m files with the same name suggests this approach might get confusing. You might want to think of your ScoringToolbar as a control that your apps configure for their needs. In this way it wouldn't be much different than a UIButton. Each app would be responsible for creating an instance of the ScoringToolbar and setting it up to suit. It could do this in a method in an existing class or in a helper class. The ScoringToolbar takes care of rendering the UI (icons, colors) while the calling app indicates what options it needs (up/down votes, number correct/incorrect, etc).
I think subclassing is not a good option for your problem. You will end up with code that is hard to maintain and modify as your apps, or the number of apps, grow.
Have a look at some design patterns; if I understood your problem correctly, the builder pattern would be one of the options.
Or you can create a configuration file for each game. For example, you could have an array of dictionaries in a plist, where each dictionary represents a UI element in the toolbar: you could store the image name, order, selector, position(?), etc. When the application loads, you create the toolbar elements at run time from the dictionary entries.
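A Swift sketch of that plist idea (the same approach translates directly to Objective-C); the plist name, key names, and the choice of returning plain UIButtons are all assumptions.

```swift
import UIKit

// Invented keys: each dictionary in "ToolbarConfig.plist" might look like
// ["imageName": "undo", "order": 1, "selectorName": "undoTapped"].
func makeToolbarButtons(from plistName: String, target: AnyObject) -> [UIButton] {
    guard let url = Bundle.main.url(forResource: plistName, withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let entries = (try? PropertyListSerialization.propertyList(from: data, options: [], format: nil)) as? [[String: Any]]
    else { return [] }

    return entries
        .sorted { ($0["order"] as? Int ?? 0) < ($1["order"] as? Int ?? 0) }
        .map { entry in
            let button = UIButton(type: .custom)
            if let imageName = entry["imageName"] as? String {
                button.setImage(UIImage(named: imageName), for: .normal)
            }
            if let selectorName = entry["selectorName"] as? String {
                button.addTarget(target, action: Selector(selectorName), for: .touchUpInside)
            }
            return button
        }
}
```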
These are just starting points, but based on your requirements and the extensibility of your apps you may find better solutions.

BlackBerry - setBanner() vs setTitle()

As with my previous "vs" question at BlackBerry: Overriding paint() vs subpaint(), I am wondering whether this comes down mostly to convention and style, or whether there are some real hard-and-fast rules.
As far as I can see, MainScreen.setBanner(Field) and MainScreen.setTitle(Field) have almost exactly the same functionality. I have used setTitle(String) in a simple UI. However, I am working across iOS, Android, and BlackBerry and try to make the UIs similar; my title/banner is a 'pretty' custom manager.
The only difference I can see is the little style element that is inserted automatically under a title.
Is this the only reason I would have to choose between using each of these methods?
Perhaps there are stylistic or convention reasons to use one over the other? Perhaps RIM has some intentions with these methods that I cannot yet see as a new BB developer? Am I making a mistake by treating these methods as direct substitutes?
If you use both, the banner is laid out above the title. My understanding is that if you are using only one, then they are pretty much interchangeable -- the reason to have both is that you can get some stacking behavior if you want to add more information to the top of the screen.
There is an article: "MainScreen explained" that goes into detail on this and other features of MainScreen.

Can subviews count as using undocumented APIs in iOS?

Does using (for example) UIWebView's subviews count as using undocumented APIs? There is no documentation of the fact that the first subview of a UIWebView is a UIScrollView. Does that mean I am not allowed to add children to this UIScrollView?
I'm not using any private calls, but this behaviour isn't documented anywhere. In iOS 3.1 the first subview of a UIWebView is an instance of a class called "UIScroller", which is almost identical to UIScrollView but not documented anywhere. What exactly is allowed?
From the UIView Class Reference:
For complex views declared in UIKit and other system frameworks, any subviews of the view are generally considered private and subject to change at any time. Therefore, you should not attempt to retrieve or modify subviews for these types of system-supplied views. If you do, your code may break during a future system update.
From the App Store Review Guidelines:
Apps that do not use system provided items, such as buttons and icons, correctly and as described in the Apple iOS Human Interface Guidelines may be rejected.
Taken together, I read these as saying: You can look at the subviews of standard components, but mess with them at your own peril -- things will change with no notice, and you'll have nobody to blame but yourself when they break. Furthermore, if you do modify a standard component in a way that's out of keeping with what Apple designed and users expect, your app will likely be rejected.
Modifying the private subviews of UIWebView seems like a poor plan.
I don't think it counts as using private APIs (correct me if I'm wrong), but I wouldn't recommend it. As you say, that's how it works "in iOS 3.1", but it's not guaranteed to work the same on other versions. It may change with an update and the application will break.
By the way, very few people still use iOS 3.1, so I would recommend looking at how it works in 4.3.
EDIT: I have never uploaded an app to the App Store where I've done this, but I can tell you this much: you do not submit any source code to Apple. They run your executable through a tool that detects whether you call any private API methods.
Looping through subviews is allowed. So is adding subviews. They don't even mention this in the guidelines. I can't make any guarantees, since I'm not involved in Apple's review process, but I would be very surprised if they would reject your app for this reason.
If you feel like it would add value to your app I would go ahead and do it. If you submit your app to the app store and get it approved, please come back and leave a comment.
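For what it's worth, a defensive way to do that looping in Swift is to check the subview's type at run time instead of assuming that subview 0 is the scroll view; the helper names below are made up.

```swift
import UIKit

// Walk a web view's subviews and only act if we actually find a scroll view.
// Checking the type at run time means the code degrades gracefully if Apple
// rearranges the private view hierarchy in a future release.
func firstScrollView(in webView: UIWebView) -> UIScrollView? {
    for subview in webView.subviews {
        if let scrollView = subview as? UIScrollView {
            return scrollView
        }
    }
    return nil
}

// Usage: adding our own child view only when the scroll view exists.
func addBadge(to webView: UIWebView) {
    guard let scrollView = firstScrollView(in: webView) else { return }
    let badge = UILabel(frame: CGRect(x: 8, y: 8, width: 80, height: 20))
    badge.text = "Cached"
    scrollView.addSubview(badge)
}
```

If the hierarchy changes in a later iOS version, the guard simply fails and nothing is added, rather than the app crashing or misbehaving.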
