According to this page, https://docs.unity3d.com/Manual/iphone-basic.html, too many UnityGUI elements is considered bad, but what counts as too many if my game runs entirely on the canvas? At the moment, my UI will contain about 100 objects, most are buttons, and 80 of the objects use all or portions of 3 textures to display themselves.
Does this mean that uGUI cannot or should not be used for iOS games?
That page is talking about something entirely different; you are confusing uGUI with UnityGUI/IMGUI.
UnityGUI/IMGUI is an old UI system, and that is what the article is talking about. Don't use it. I've been warning new users about it too; they end up using it because of the old tutorials they are following.
The only time you should use it is when you are writing an Editor script to test your game in the Editor, but it should never be deployed to your mobile device or shipped in a standalone build.
How do you know when you are using UnityGUI/IMGUI, or which tutorials to avoid? When you see OnGUI() anywhere in the code, stop.
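For illustration, here is a minimal sketch of what IMGUI code looks like, so you can recognize (and avoid) the pattern; the class name is made up:

```csharp
using UnityEngine;

// IMGUI (the old system): everything is drawn imperatively inside
// OnGUI(), which runs every frame. Seeing this in a tutorial is the red flag.
public class OldImguiMenu : MonoBehaviour
{
    void OnGUI()
    {
        // The button is laid out and drawn again on every single frame.
        if (GUI.Button(new Rect(10, 10, 150, 40), "Start Game"))
        {
            Debug.Log("Start clicked");
        }
    }
}
```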
The latest UI system in Unity is simply called uGUI. I don't know if the name has changed, but this was the original name when it came out. It is only available in Unity 4.6 and above, and you can find it in the UnityEngine.UI namespace.
The Unity Manual's UI section is what you should be reading for the new UI, along with the official Unity UI tutorials.
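As a quick illustration of the difference, a uGUI Button is a GameObject under a Canvas and is driven by events rather than per-frame OnGUI calls. A minimal sketch (the class and field names are made up):

```csharp
using UnityEngine;
using UnityEngine.UI;  // the new UI system lives in this namespace

public class StartMenu : MonoBehaviour
{
    [SerializeField] private Button startButton; // assigned in the Inspector

    void Awake()
    {
        // React to the button through its onClick event, not in OnGUI().
        startButton.onClick.AddListener(() => Debug.Log("Start pressed"));
    }
}
```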
Does this mean that uGUI cannot or should not be used for iOS games?
uGUI should be used for all your UI work. Again, I am not talking about the UI from the article; I am talking about the UI from the UnityEngine.UI namespace.
my UI will contain about 100 objects, most are buttons, and 80 of the objects
uGUI uses Canvases to drive the UI; a Canvas is the parent GameObject of your UI components. You may want to separate your UI into different Canvases, for example a MainMenu Canvas, a PauseMenu Canvas, and a GamePlay Canvas. Under each Canvas you can then place components such as Buttons and Texts.
When you are on the main menu, you enable the MainMenu Canvas and disable the rest, and you can do the same with your other Canvases depending on the mode of your game. I can't think of any scenario where you need 80 UI components on screen at the same time. You must separate them.
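A minimal sketch of that idea (the class and field names are invented for illustration):

```csharp
using UnityEngine;

// Switches between UI screens by activating exactly one Canvas at a time.
public class UiScreenSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject mainMenuCanvas;
    [SerializeField] private GameObject pauseMenuCanvas;
    [SerializeField] private GameObject gameplayCanvas;

    public void ShowMainMenu()  { Show(mainMenuCanvas); }
    public void ShowPauseMenu() { Show(pauseMenuCanvas); }
    public void ShowGameplay()  { Show(gameplayCanvas); }

    private void Show(GameObject target)
    {
        // Enable the requested Canvas and disable the others.
        mainMenuCanvas.SetActive(target == mainMenuCanvas);
        pauseMenuCanvas.SetActive(target == pauseMenuCanvas);
        gameplayCanvas.SetActive(target == gameplayCanvas);
    }
}
```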
We are currently developing a multitouch multiuser application which runs on a big touchscreen table.
Obviously the application includes a lot of animations and transformations, e.g. moving and rotating UI elements with touch gestures, menu-opening animations, touch rotations, and so on.
To better understand what kind of application this is going to be, here is a video of a similar application with roughly the behaviour we are targeting (0:16 - 0:27): https://www.youtube.com/watch?time_continue=13&v=qIOR9FiL97w
Back in the day we used Adobe Flash for such applications, since it was the only way to get good performance and interactivity, but since Flash is dead, what is the current best way of implementing such an application?
We tried game engines like Unreal Engine 4 and Unity, but their multitouch support is very meager: most gestures have to be implemented yourself, as do most of the common controls like scroll views, carousel controls, and so on.
We also tried UWP, which has good touchscreen support, implements most of the gestures and behaviour you expect of a touchscreen application, and already provides almost all common user controls. However, we ran into a huge performance problem, especially at higher resolutions like 4K or 8K: the more controls, grids, and layout we put in, the slower the application became, and moving and rotating elements was very slow and ran on only one thread. We had a look at the new Composition API, but it's very complicated and it's not clear what the best way is to combine XAML controls with the Composition API. There is also Win2D, but it only provides basic functionality like drawing rectangles, and it has no touchscreen support except through XAML.
So, is there any good framework or API for implementing applications like the one in the video I linked? Or is Flash still the best way? Is there a Qt library for such things, or even something in JavaScript? Or is Android the way to go?
https://kivy.org
Kivy - Open source Python library for rapid development of applications that make use of innovative user interfaces, such as multi-touch apps.
Cross platform, GPU Accelerated, comes with more than 20 widgets, all highly extensible.
I am making an iOS application using Unity.
I see 3 options for building UI elements such as buttons, user text entry, and image/logo display.
I have about 8 pages in my iOS app. The first page is user login, the second page is selection buttons for different game levels, the third page displays previous results, and so on.
As I build these UI pages, I am wondering which option to choose for the best responsiveness. Since this is my first app using Unity, I would like opinions from experts.
Which of the following options would be best for responsiveness and Unity design?
(1) All pages are designed on different scenes, so I will have 8 scenes. Each scene has its own UI elements.
(2) One scene, but different canvases for different pages. So each canvas has its own UI elements for different pages.
(3) One scene, one canvas, but different panels for different pages. So each panel has its own UI elements for different pages.
What could be the best option for my app?
If somebody could discuss the advantages and disadvantages of the different options, that would be great.
I mostly use a combination of 1 and 3, or only 3.
Passing data between different scenes is more difficult than passing data between GameObjects within the same scene: it requires static members, persistent data, or GameObjects that won't be destroyed when the scene is unloaded (see the sketch after this list).
In the single-scene options, handling the game state becomes more complex (i.e. which GameObjects are active and visible in different phases of the game flow).
It's easier to make transition effects with a single-scene option, especially if two views are visible at the same time.
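A minimal sketch of the cross-scene persistence mentioned above; the class name and field are invented for illustration:

```csharp
using UnityEngine;

// A simple persistent singleton that survives scene loads,
// so data set in one scene can be read in the next.
public class GameSession : MonoBehaviour
{
    public static GameSession Instance { get; private set; }

    public int SelectedLevel; // example of data shared across scenes

    void Awake()
    {
        if (Instance != null) { Destroy(gameObject); return; }
        Instance = this;
        DontDestroyOnLoad(gameObject); // not destroyed when the scene unloads
    }
}
```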
I'd go with number 2 because you can just deactivate irrelevant canvases and draw only those currently active. This will be more performant than option 3 and afford smoother transitions than option 1. If you're running out of memory then split the scene in half, etc. With only 8 pages your state machine doesn't sound like it would be too difficult to manage. You can read about canvas performance here: https://unity3d.com/learn/tutorials/topics/best-practices/fill-rate-canvases-and-input?playlist=30089
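One point from that best-practices material worth calling out: rather than calling SetActive() on the whole page, you can disable the Canvas component itself, which stops the canvas from being drawn while avoiding a full rebuild when it is shown again. A rough sketch, assuming each page has its own Canvas (the class name is made up):

```csharp
using UnityEngine;

// Hides or shows a UI page by toggling its Canvas component instead of
// deactivating the GameObject, so re-showing the page is cheaper.
public class PageCanvas : MonoBehaviour
{
    [SerializeField] private Canvas canvas;

    public void SetVisible(bool visible)
    {
        canvas.enabled = visible; // a disabled Canvas is not drawn
    }
}
```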
Our team has been working with React Native for almost a year now, and one of the problems we ran into early on was rendering speed. There are multiple components in our application that the user needs to drag, and state changes (combined with shouldComponentUpdate) were simply not fast enough to accomplish this.
We've gotten around this in two ways, and I was wondering to what extent these methods are kosher vs. hacks.
Direct Manipulation and setNativeProps - We've used setNativeProps so often that we've created a Redux-like framework; instead of modifying the state, however, it uses setNativeProps for speed. The purpose was to expand setNativeProps beyond its use exclusively within the component; we do still use state changes whenever possible.
ScrollView and TextInput - We've managed to rotate and orient ScrollViews and TextInputs in such a way that they work with setNativeProps, so that dragging content is smoother and more native, and text can be modified faster than a state change would allow.
We were wondering how kosher all of this is, as the React Native website only notes its complexity:
setNativeProps is imperative and stores state in the native layer (DOM, UIView, etc.) and not within your React components, which makes your code more difficult to reason about.
Should we remodel our app?
Calling setNativeProps isn't recommended but I don't consider it a massive red flag if it's letting you achieve the quality you want. As the docs warn, it does make your code harder to reason about since it sets state in your native views that isn't in your React component hierarchy.
Two ways to tame this complexity are to reduce the number of different pieces of code that can set props via JSX or setNativeProps on a given component, and to gradually move away from setNativeProps and reduce rendering with shouldComponentUpdate and PureComponent instead.
Immutable data structures make it easier to implement shouldComponentUpdate as well as Reselect selectors if you're using Redux. If your frequently rendered components have render() methods that create many elements, refactor groups of those elements out into pure components that don't necessarily need to be rendered as often. The constant elements Babel transform will remove most of the cost of elements that never change by creating them only once; when React sees the same element across two consecutive render() passes it won't re-render the element the second time.
Several years ago I was curious about creating some objects (a spoon, a ball, a TV, ...) in a 3D modeling program, exporting the textures, and then having a screen in an iOS app that can open one object at a time, with the ability to rotate and zoom it. This seemed like a quite basic and common use case, but I didn't find any simple, ready-to-use solutions/libraries/plugins, just raw OpenGL ES (GLKit), so I decided against it: it would have required too much knowledge and time, as I hadn't done any 3D work before and my primary work is not related to 3D.
There are also the Unity and Cocos3D engines, and it looks like they let you extend their code with iOS plugins (xibs/storyboards, navigation with view controllers, etc.), but this means you have to structure your app as a Unity/Cocos3D project first and only then add your usual UIKit stuff as a plugin. That is not acceptable, because the project should be written UIKit-first; I expect to add the 3D viewing functionality as a separate component that encapsulates everything it needs as a black box. I don't want to mess up my project, since this 3D feature is optional.
Now, after several years, I've searched for this again, looking for simple 3D viewing plugins/solutions for UIKit, but the situation seems pretty much the same. I saw that iOS 8 will add SceneKit, but I'm not sure whether it will be close to what I expect. So I'm still not sure whether there is any solution that would require minimal time and effort, or whether OpenGL ES is the best option for this need.
Check out the CC3DemoMultiScene demo app in the latest version of Cocos3D. It demonstrates how to include a Cocos3D scene in a standard UIKit storyboard, with the GL view as just one component of a larger UIView.
What commonly expected, user-visible design idioms need to change when moving an iPad app to the Mac, if the app is to provide basically identical functionality yet seem at least reasonably Mac OS X native?
Some of these changes, commonly expected by users, might include:
Move the Settings button and Info button to Menu selections for Preferences... and About...
Move the Settings view and Info view or popover to their own independent Preferences and About windows instead of being views in the main window.
Add some menu items and menu keys for commonly used buttons (like the forward and back buttons in a browser).
Support arrow keys for scrolling any custom view items.
Support mouse-over for help popups or dynamic menus.
If the app supports "documents", allow more than one document to be open at a time, each in its own window.
What else? What's the minimum change required for a simple generic 2D game?
Added clarifications:
Note that I do not consider re-coding similar UI classes to NS classes (for instance UIButtons to NSButtons), with similar look, positions and behaviors, to be a significant change. Those changes are pretty much invisible to the user.
The goal is to change as little as possible so that a user who purchased app X to do Y on an iPad might purchase app X to do Y on their Mac, as a Mac application, but with as close to zero learning curve as possible. But it seems that some changes need to be made, or the app would not seem to be a Mac app (for instance, a missing About... menu item would seem a bit strange.)
to provide basically identical functionality, to seem at least reasonably Mac OS X native?
You've gone off the rails right there. Consider adding this to your list:
Forget everything you know about how your iPad app works. Step back and consider that a user's interaction with and expectation of a desktop application are very different from those of a tablet. Re-think what you're able to do and what the user will want to do with a faster processor, more power, significantly more available storage, less mobility, much faster text entry, and a different user interface model.
We are in the same boat and faced the same question.
Our conclusion is to start with a "fresh" real application for Mac and make it look similar, i.e. using the same or similar UI components and graphics. The app should be otherwise developed as if there was no iPad version.
First, there will be many users who don't have the iPad version. Those users expect a full-blown Mac application, and it doesn't make sense to make it feel like an iPad app in any way.
Second, users coming from the iPad version will feel ripped off if the Mac app is just a pure clone of the iPad version with no added value. Think of the first transitions from iPhone to iPad: paying again for nothing but a pure upscale is frustrating and might harm your business in the long run.
Start out by designing a fresh, streamlined UI, and then think about what you can reuse and make similar. Functionality may differ in one direction or the other; your model code should work in all places anyway.
Not exactly an answer to your question, but take a look at Chameleon. It's essentially a port of UIKit to the Mac. It was created by the Iconfactory to make it easy for developers to port their iOS apps to the Mac. IIRC, Twitterrific was ported to the Mac using Chameleon.
So here's what I did to create a Mac app from an iPad app and have it accepted into the Mac App Store.
Ignored the suggestions to completely redesign the app (users reasonably liked the iPad design).
Created a Mac app project and included a branch of all the iOS source code.
Manually recoded all the UI elements with their corresponding NS elements, resized them to Mac UI guideline sizes, and checked that they all show up in some reasonable place when the main window is resized. Deleted iPad-only delegates, such as rotation handlers. This resulted in completely new view controller code, but almost all of it was just a parallel translation from the other paradigm.
Set the view coordinates to flipped so the Y coordinates wouldn't have to be recalculated for any Core Graphics drawing routines. (The model and CG drawing code pretty much ported straight over without change, except for scale factors for window size and the like.)
Removed settings and help views from the main window's view controller(s), implemented a Preferences window xib and a Help window xib, and put all the settings and help views and controls there. Added one more top-level controller to show/hide the 3 windows.
Added some menu selections with hotkeys for equivalent UIButton actions that a user might want to trigger without reaching for the mouse/trackpad.
Added a credits.html file.
Added an outline shape and transparency masks to the icon design, and stuffed it into an .icns file.
Padded the one-window screenshot out to the much larger required size.