Our team has been working with React Native for almost a year now, and one of the problems we ran into early on was rendering speed. There are multiple components in our application that the user needs to drag, and state changes (combined with shouldComponentUpdate) were simply not fast enough to accomplish this.
We've gotten around this in two ways, and I was wondering to what extent these methods are kosher vs. hacks.
Direct Manipulation and setNativeProps - We've used setNativeProps so often that we've built a Redux-like framework; instead of modifying state, however, it calls setNativeProps for speed. The point was to make setNativeProps usable beyond a single component, though we do still use state changes whenever possible.
ScrollView and TextInput - We've managed to rotate and orient ScrollViews and TextInputs in such a way that they work with setNativeProps, so dragging content is smoother and more native, and text can be updated faster than a state change would allow.
We were wondering how kosher all of this is, as the React Native docs only note the added complexity:
setNativeProps is imperative and stores state in the native layer (DOM, UIView, etc.) and not within your React components, which makes your code more difficult to reason about.
Should we remodel our app?
Calling setNativeProps isn't recommended but I don't consider it a massive red flag if it's letting you achieve the quality you want. As the docs warn, it does make your code harder to reason about since it sets state in your native views that isn't in your React component hierarchy.
Two ways to tame this complexity are to reduce the number of different pieces of code that can set props via JSX or setNativeProps on a given component, and to gradually move away from setNativeProps and reduce rendering with shouldComponentUpdate and PureComponent instead.
Immutable data structures make it easier to implement shouldComponentUpdate as well as Reselect selectors if you're using Redux. If your frequently rendered components have render() methods that create many elements, refactor groups of those elements out into pure components that don't necessarily need to be rendered as often. The constant elements Babel transform will remove most of the cost of elements that never change by creating them only once; when React sees the same element across two consecutive render() passes it won't re-render the element the second time.
Related
I would like to learn the best practice for UI design with respect to ease of coding but, above all, app responsiveness.
As touch screens have grown, the overall area that users can reach with ease (particularly in portrait mode) has shrunk in proportion to screen size.
Area of Efficiency (AOE) is the portion of the screen that users' thumbs can reach (without straining) when holding the device in both hands. Think of the thumbs as windshield wipers originating from the bottom corners of the screen.
When trying to keep all of my UI input objects (buttons, text fields, etc.) in the AOE, I have found myself using the same buttons to perform many different actions at different stages of app usage, for fear that populating the view with many buttons (hidden/shown as needed) would make my app slow or use up memory. To achieve this, my 2-3 buttons each run a few conditional checks on the current app stage to fire the desired action.
So which is theoretically more efficient at runtime?
Many buttons stacked on top of each other, each performing its own single action, shown or hidden as needed at each app stage.
Fewer buttons, each performing many actions selected through “if/else” or “switch” checks (sketched below).
Note: although I would prefer to code as efficiently (fewer lines) as possible, I care 1000x more about app responsiveness and performance. The coder codes once, but the user uses it repeatedly.
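To make the second option concrete, here is a minimal, hypothetical C# sketch; the AppStage enum and the action methods are invented names, not anything from the question:

```csharp
// Hypothetical sketch of "fewer buttons, many actions": one handler
// that branches on the current app stage.
public enum AppStage { MainMenu, Playing, Paused }

public class PrimaryButtonHandler
{
    public AppStage CurrentStage { get; set; } = AppStage.MainMenu;

    // Wired to the button's tap/click event.
    public void OnPrimaryButtonTapped()
    {
        switch (CurrentStage)
        {
            case AppStage.MainMenu: StartGame();  break;
            case AppStage.Playing:  PauseGame();  break;
            case AppStage.Paused:   ResumeGame(); break;
        }
    }

    void StartGame()  { /* ... */ }
    void PauseGame()  { /* ... */ }
    void ResumeGame() { /* ... */ }
}
```

The first option would instead give each stage its own dedicated button and toggle their visibility at each stage change, with no branching in the handlers.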
According to this page, https://docs.unity3d.com/Manual/iphone-basic.html, too many UnityGUI elements is considered bad, but what counts as too many if my game runs entirely on the canvas? At the moment, my UI contains about 100 objects, most of them buttons, and 80 of the objects use all or portions of 3 textures to display themselves.
Does this mean that uGUI cannot or should not be used for iOS games?
That page is talking about something totally different; you are confusing uGUI with UnityGUI/IMGUI.
UnityGUI/IMGUI is an old UI system, and that is what the article is talking about. Don't use it. I've been warning new users about it too; they end up using it because of the old tutorials they are following.
The only time you should use it is when you are writing an Editor script to test your game in the Editor, but it should never be deployed to your mobile device or shipped in a standalone build.
How do you know when you are using UnityGUI/IMGUI, or which tutorials to avoid? When you see OnGUI() anywhere in the code, stop.
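For reference, IMGUI code looks roughly like the hypothetical snippet below; if a tutorial's scripts contain something like this, it is the old system that manual page is warning about:

```csharp
using UnityEngine;

// Old UnityGUI/IMGUI style: the UI is rebuilt from OnGUI every frame.
// This is NOT the uGUI system discussed below.
public class OldImguiMenu : MonoBehaviour
{
    void OnGUI()
    {
        if (GUI.Button(new Rect(10, 10, 150, 40), "Start Game"))
        {
            Debug.Log("Start pressed");
        }
        GUI.Label(new Rect(10, 60, 200, 30), "Score: 0");
    }
}
```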
The latest UI system in Unity is simply called uGUI. I don't know if the name has changed, but this was the original name when it came out. It is only available in Unity 4.6 and above; you can find it in the UnityEngine.UI namespace.
For the new UI you should be reading the Unity Manual's UI section, along with the official UI tutorials.
Does this mean that uGUI cannot or should not be used for iOS games?
uGUI should be used for all your UI work. Again, I am not talking about the UI from the article; I am talking about the UI from the UnityEngine.UI namespace.
my UI will contain about 100 objects, most are buttons and 80 of the objects
uGUI uses a Canvas to drive the UI; a Canvas is the parent GameObject of your UI components. You may want to separate them into different Canvases, for example a MainMenu Canvas, a PauseMenu Canvas, and a GamePlay Canvas. Under each Canvas you can then have your components, such as Buttons and Texts.
When you are on the main menu, you enable the MainMenu Canvas and disable the rest. You can do the same for the other Canvases in your scene depending on the mode of your game. I can't think of any scenario where you need 80 UI components on screen at the same time; you should separate them.
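A minimal sketch of that separation, with hypothetical Canvas references assigned in the Inspector:

```csharp
using UnityEngine;

// Hypothetical manager that keeps only one Canvas active at a time.
public class UiCanvasSwitcher : MonoBehaviour
{
    public Canvas mainMenuCanvas;   // assigned in the Inspector
    public Canvas pauseMenuCanvas;
    public Canvas gamePlayCanvas;

    public void ShowMainMenu()
    {
        mainMenuCanvas.gameObject.SetActive(true);
        pauseMenuCanvas.gameObject.SetActive(false);
        gamePlayCanvas.gameObject.SetActive(false);
    }

    public void ShowGamePlay()
    {
        mainMenuCanvas.gameObject.SetActive(false);
        pauseMenuCanvas.gameObject.SetActive(false);
        gamePlayCanvas.gameObject.SetActive(true);
    }
}
```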
I have a function that continuously takes screenshots of a UI element and then draws it itself. The UI element can change, so I take screenshots at very short intervals so that the second drawing doesn't lag behind (please don't question this and just assume that redrawing it is the right way; the use case is a bit more complicated).
However, taking the screenshot and invalidating the previous drawing to redraw it is quite an expensive operation, and most of the time it isn't needed because the UI element doesn't update that often. Is there a way to detect when a UI element changes in such a way that it needs redrawing, including when the change happens in one of its subviews? One solution would be to copy its state and the states of all its descendants and then compare, but that doesn't seem like a good solution either. iOS must know internally when it needs to redraw/update the views; is there any way to hook into this? Note that I tagged this UIKit and Core Animation; I suppose Core Animation is the way to go, but I'm open to a solution that uses either.
So I have a printing component that serves a Silverlight application. Other modules in the program can signal the printing component and pass it a UIElement, which the printing component then draws to the screen. All well and good. The problem arises when I try to manipulate the UIElement to better format it for the user's selected paper size or anything to that effect: the UIElement that is passed in is frequently the exact same instance as the one on the screen, and the on-screen element changes to match the 'print-only' changes I have made. For now I can manually save the previous values, make my changes, and restore the previous values afterwards, but it would be easier, more robust, more efficient, and more flexible if I had a way to make a copy of the given UIElement and manipulate that freely, without worrying about alterations or state on the original. How can I programmatically copy an instance of a UIElement such that I end up with another instance with the same visual appearance?
I know 2 ways you can try:
Save the object to a XAML string and recreate it from that string (XamlWriter.Save and XamlReader.Parse); a sketch of this follows below.
Serialize the object to a MemoryStream and recreate it from that; it is possible that not all objects are marked serializable, so the first option might be the one to use.
It might seem like a lot, but there are not many ways to create a deep copy, and no standard C# method for it that I know of.
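A sketch of the first option, assuming the WPF System.Windows.Markup APIs named above; note that Silverlight's XamlReader exposes Load rather than Parse and, as far as I know, does not ship XamlWriter, so there you may need to obtain the XAML string some other way:

```csharp
using System.Windows;
using System.Windows.Markup;

public static class UiElementCloner
{
    // Round-trips the element through its XAML representation to get a
    // fresh instance with the same visual appearance.
    public static T DeepClone<T>(T source) where T : UIElement
    {
        string xaml = XamlWriter.Save(source);   // serialize to a XAML string
        return (T)XamlReader.Parse(xaml);        // rebuild a new instance from it
    }
}
```

Keep in mind that event handlers (and usually binding expressions) do not survive this round trip; for a print-only visual copy that is normally acceptable.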
I'm new to iOS, coming from the Java/Swing world, where I'm used to creating UIs programmatically, letting components size themselves, and using various clever layout managers to arrange things.
It already seems clear that the iOS way is to make heavy use of Interface Builder, with a lot of fixed sizing and positioning. I'm not sure IB is ever going to come naturally, but I guess fixed layouts make sense given that you're working with limited space and a fixed window size.
It still seems like I'm writing a lot of boilerplate, though, and violating DRY, and so on.
Can somebody point me to a good primer on laying out iOS UIs, particularly programmatic UIs?
You don't really need to use IB to write MonoTouch apps. I almost never do. The CocoaTouch API is fairly simple and straightforward to develop on.
I haven't really found any write-up on UI development other than the Apple documentation (which is really good, by the way, and worth reading), so here are a couple of tips based on my experience:
Inheritance is key to keeping the code clean. You can inherit from basically any class in the API (buttons, controllers, views, etc.). Inherit and add your customizations in those subclasses. Don't shove everything into the AppDelegate like many examples show. You'll thank me later on.
Have I mentioned inheritance already?
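For instance, here is a hypothetical sketch in that style using the classic MonoTouch namespaces (newer Xamarin.iOS builds use the unified UIKit/CoreGraphics names instead): a small UIViewController subclass that owns its own views rather than having the AppDelegate do everything.

```csharp
using System.Drawing;
using MonoTouch.UIKit;

// Hypothetical controller that builds and owns its own UI.
public class WelcomeController : UIViewController
{
    UIButton startButton;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        View.BackgroundColor = UIColor.White;

        startButton = UIButton.FromType(UIButtonType.RoundedRect);
        startButton.Frame = new RectangleF(20, 60, 280, 44);
        startButton.SetTitle("Start", UIControlState.Normal);
        startButton.TouchUpInside += (sender, e) => {
            // push the next controller, start the game, etc.
        };
        View.AddSubview(startButton);
    }
}
```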
The one thing iOS doesn't have is a layout manager, so if you're used to Java like you mentioned, this will sound a little strange. Contrary to what Java people expect, it is not a big deal. UITableViews help tremendously with this (see the next point).
A lot of iPhone apps are built on top of UITableViewController, even apps that don't look like tables. It's a great framework for anything related to scrolling; learn to use it well. Almost anything that scrolls vertically is a UITableViewController. Follow the guidelines that define when you create and when you dispose of cells and objects.
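A minimal sketch of the cell-reuse guideline, again with classic MonoTouch names and an invented NamesSource class:

```csharp
using MonoTouch.Foundation;
using MonoTouch.UIKit;

// Hypothetical table source that reuses cells instead of creating one per row.
public class NamesSource : UITableViewSource
{
    const string CellId = "nameCell";
    readonly string[] names;

    public NamesSource(string[] names) { this.names = names; }

    public override int RowsInSection(UITableView tableview, int section)
    {
        return names.Length;
    }

    public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
    {
        // Reuse a scrolled-off cell when one is available; create one only otherwise.
        var cell = tableView.DequeueReusableCell(CellId)
                   ?? new UITableViewCell(UITableViewCellStyle.Default, CellId);
        cell.TextLabel.Text = names[indexPath.Row];
        return cell;
    }
}
```

You would hook it up with something like tableView.Source = new NamesSource(myNames); the key point is that cells are dequeued and reused rather than recreated for every row.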
Be careful every time you set a Frame location on a control. Instead of hardcoded values, try using offsets from other locations (x + 40, for example) whenever possible.
Make sure you add your views to the proper container. For example, if you're adding a global "Loading" view, add it to the Window object, while if you're adding an image on the left side of a table cell, use the cell's ContentView. iOS adjusts those special views automatically all the time (resizing the screen to fit the in-call bar at the top, or when the phone rotates). Both tips are sketched below.
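A small sketch covering those last two tips, with invented helper names and classic MonoTouch types: positions derived as offsets from neighbouring views rather than duplicated magic numbers, and each view added to the container iOS expects.

```csharp
using System.Drawing;
using MonoTouch.UIKit;

public static class LayoutExamples
{
    // Position the label relative to the icon instead of hardcoding both frames.
    public static void AddCellContent(UITableViewCell cell, UIImage icon, string text)
    {
        var iconView = new UIImageView(icon) { Frame = new RectangleF(8, 8, 28, 28) };
        var label = new UILabel
        {
            Frame = new RectangleF(iconView.Frame.Right + 8, 8, 200, 28),
            Text = text
        };

        // Cell decorations belong in the cell's ContentView...
        cell.ContentView.AddSubview(iconView);
        cell.ContentView.AddSubview(label);
    }

    // ...while a global "Loading" overlay belongs on the Window itself.
    public static void ShowLoadingOverlay(UIWindow window, UIView loadingView)
    {
        loadingView.Frame = window.Bounds;
        window.AddSubview(loadingView);
    }
}
```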
Miguel de Icaza has created a great framework for managing forms and tables, called MonoTouch Dialog. Take a look, and enjoy.