What ActionScript features do not work on iOS?

There seems to be a lot of conflicting information out there. It might be that support has increased recently, or that changes to adobe.com/air have made some information difficult to find - but I can't track down a definitive list of things to avoid.
I know that ActionScript won't run in loaded SWFs, and I know that some people say that filters, blend modes, and Halo components won't work. I've also read many posts saying they will (at least that blend modes will, and that Halo will run, but slowly, so you should still use Spark).
I have a large amount of AS3 code that I'm planning to upgrade to work on iOS, but at the moment I have no idea what will break (or what will break once those things have been fixed!)
Is there a list of unsupported APIs, or iOS dos and don'ts?
Thanks
:S

First, yes: externally loaded SWFs will not run. You can, however, embed SWFs/SWCs into your project and include them inside your package.
As far as Flex components go, stay away from Halo. You should use Flex 4.6 and stick to components with mobile skins. I recommend downloading Tour de Flex http://www.adobe.com/devnet/flex/tourdeflex.html to get an idea of what's available.
As far as blend modes go, I'm not really sure; I haven't used them on mobile yet. Filters, however, are supported, but they are expensive. For drop shadows on rectangles there is something called RectangularDropShadow. This is actually a component and therefore less expensive; however, it can only be used on rectangular groups.
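As an illustration, here is a minimal ActionScript sketch of that approach. The exact package of RectangularDropShadow and the white Rect background are assumptions for this example; check your SDK's documentation.

    import mx.graphics.SolidColor;
    import spark.components.Group;
    import spark.primitives.Rect;
    import spark.primitives.RectangularDropShadow;

    // Sketch: a rectangular panel with a cheap drop shadow. RectangularDropShadow
    // draws the shadow directly instead of applying a DropShadowFilter, which
    // avoids the per-frame filter cost on mobile.
    var panel:Group = new Group();
    panel.width = 240;
    panel.height = 120;

    var shadow:RectangularDropShadow = new RectangularDropShadow();
    shadow.distance = 4;      // same knobs as DropShadowFilter
    shadow.angle = 90;
    shadow.alpha = 0.4;
    shadow.blurX = 8;
    shadow.blurY = 8;
    shadow.width = panel.width;
    shadow.height = panel.height;
    panel.addElement(shadow); // added first so it sits behind the background

    var box:Rect = new Rect();
    box.percentWidth = 100;
    box.percentHeight = 100;
    box.fill = new SolidColor(0xFFFFFF);
    panel.addElement(box);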
You should have access to all of the AIR APIs. You will, however, be restricted when using some of the File-related classes, since I don't believe you can leave your application storage directory.
One big performance tip I can give is to use AS3 over MXML whenever possible, especially when creating item renderers. Use BitmapImage over Image whenever possible, again especially in item renderers. Use cacheAsBitmap whenever you have images that don't change often. And stay away from any Flex component that doesn't have a mobile skin.
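To make those tips concrete, here is a rough sketch of an all-ActionScript mobile item renderer that uses BitmapImage for an icon. The class name, the data items' icon property, and the sizes are assumptions for this example.

    package renderers {
        import spark.components.Group;
        import spark.components.LabelItemRenderer;
        import spark.primitives.BitmapImage;

        // Sketch: LabelItemRenderer already handles the label text; a BitmapImage
        // is added for an icon instead of the heavier spark Image component.
        public class IconRowRenderer extends LabelItemRenderer {

            private var iconHolder:Group;   // BitmapImage is a GraphicElement,
            private var icon:BitmapImage;   // so it needs a Group to live in

            override protected function createChildren():void {
                super.createChildren();
                icon = new BitmapImage();
                iconHolder = new Group();
                iconHolder.addElement(icon);
                iconHolder.cacheAsBitmap = true;  // icons rarely change per row
                addChild(iconHolder);
            }

            override public function set data(value:Object):void {
                super.data = value;
                if (value) {
                    icon.source = value.icon;  // embedded class or BitmapData
                }
            }

            override protected function layoutContents(unscaledWidth:Number,
                                                        unscaledHeight:Number):void {
                super.layoutContents(unscaledWidth, unscaledHeight);
                // Pin a 32x32 icon to the left edge; the label's default layout
                // is left untouched to keep the sketch short.
                setElementSize(iconHolder, 32, 32);
                setElementPosition(iconHolder, 8, (unscaledHeight - 32) / 2);
            }
        }
    }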
You may also want to read up on View and destruction policies (a short example follows the links below).
http://www.adobe.com/devnet/flex/articles/flex-mobile-development-tips-tricks-pt1.html
This link also has some more performance tips:
http://www.adobe.com/devnet/flex/articles/flex-mobile-performance-checklist.html
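For instance, a View that is expensive to rebuild can be told to keep its children alive between navigations. A minimal sketch, assuming a hypothetical MapView subclass:

    package views {
        import spark.components.View;

        // Sketch: keep this View's contents alive when the user navigates away,
        // instead of destroying and recreating them on every push/pop.
        public class MapView extends View {
            public function MapView() {
                super();
                destructionPolicy = "never";  // the default is "auto"
            }
        }
    }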

Related

Embed Unreal Engine 4 project into another app

I've been trying to work on a proof of concept (POC) where I can embed a UE4 project into an existing application (in my case NativeScript), but this could just as easily apply to Kotlin or React Native.
In the proof of concept I've been able to run the projects on my iPhone, launching from UE4, pretty easily by following the Blueprint and C++ tutorials for the FPS. However, the next stage of my POC requires that I embed the FPS into an existing NativeScript application; this application will manage the root menu, chat, and store aspects of the platform in the POC.
The struggle I'm running into is that I can't find out how to interact with the Xcode project generated from the Blueprint tutorial, and the C++ tutorial generates an Xcode project where I'm unsure what the actual root is that I need to wrap.
Has anyone seen a project doing this before, and if so, are there any blogs or guidance you can point me to? I've been Googling and looking around for a couple of weeks and have hit a dead end. I found a feedback post here from April of 2020, which referred to a post from January 2020 that talked about how Unity has a way to embed into other applications, as well as a question from 2014 here. But other than that it's a dead end.
A slightly different approach
Disclaimer: I'm not a UE4 developer. Guilty as charged for seeing an unanswered bounty too big to ignore. So I started thinking and looking - and I've found something that could be bent to your needs. Enter Pixel Streaming.
Pixel Streaming is a beta feature that is primarily designed to allow for embedding the game into a browser. This opens two-way communication between a server, where the GPU-heavy computations happen, and a browser, where the player can interact with the content - the mouse clicks and other events are sent back to the server. Apparently it allows some additional neat stuff; however, that is not relevant to the question at hand.
Since you want to embed the Unreal application into your NativeScript tool (a menu of some kind, if I understood correctly), you could build your application from two separate parts:
One part would run the server.
The second part would handle the overlay via Pixel Streaming.
This reduces the issue of embedding UE4 into an application to the (possibly easier) issue of embedding a browser into your application. (Or if your application is browser-based - voila, problem solved.)
If you don't want to handle the remote communication, just have the server side run on localhost. (With the nice side effect of saving bandwidth.)
Alternatively, if you are feeling adventurous, you could go and write your own WebRTC support on the application side to bypass the need for the browser altogether. It might not be worth the effort though.
Side note: the first of the links you provided is a feature request, which hints at the unfortunate fact that UE4 doesn't support embedding. This is further reinforced by the fact that one of the people there says something along the lines of "Unity can do this, it would be nice if UE4 could as well."
Yet a different approach:
You could embed and use a virtual display to insert the UE4 part into your controller - you would basically be tricking UE4 into thinking that the desired display device is a canvas inside your application.
This thread suggests a similar approach:
In general, the way to connect two libraries like this would be through a platform-dependent window handle, e.g. an HWND under Windows. Check the UE API to see if you find any way to bind the render target to an HWND. Then you could create a wxWindow in wxWidgets and tell UE to render into that window. That would be a first step.
I'm not sure if anything I've listed will be of much help but hey, at least I tried :-). Good luck with your game.
At the same time, the author suggests another option:
Reverse the problem:
Use the UE4 Slate framework and Online Subsystem. You would use the former to create the menus you need directly in UE4, and then use the latter to link to the logic you want to have outside of UE4. However, that is not what you asked for, so I'm listing it only for completeness' sake.

Is there a programmatic way to see what graphics API a game is using?

For games like DOTA 2, which can be run with different graphics APIs such as DX9, DX11, and Vulkan, I have not been able to come up with a viable solution for checking which of the APIs it's currently using. I want to do this to correctly inject a DLL in order to display images over the game.
I have looked into manually checking which DLLs the game has loaded,
with this tool, for example: https://learn.microsoft.com/en-us/sysinternals/downloads/listdlls
However, in the case of DOTA, it loads both the d3d9.dll and d3d11.dll libraries if none is specified in the launch options on Steam. Does anyone have any other ideas as to how to determine the correct graphics API used?
In Vulkan, a clean way would be to implement a Vulkan layer that does the overlay. It is slightly cleaner than outright injecting DLLs, and it could work on multiple platforms.
In DirectX, screen-capture software typically does this, and some software adds an FPS counter and similar overlays. There are open-source projects with similar goals, e.g. https://github.com/GPUOpen-Tools/OCAT. I believe the conventional method is to intercept (i.e. "hook", in Win32 API terminology) all the appropriate API calls.
As for simple detection: if the game calls D3D12CreateDevice, then it is likely using Direct3D 12. But then again, the app could create devices for all the APIs and proceed not to use them. I think API detection is not particularly important if you only want to draw an overlay; just intercept all the present calls and draw your stuff on top.

Best way to create your own level editor

I've created the core mechanic of my game and want to create a level editor for it. My game is not a tile-based one, so my needs are quite specific. The game is written using Swift and Cocos2d-Swift, but I don't think I can figure something out with SpriteBuilder.
What can you advise me? Can I, for example, create a level editor with C# and then use it from Swift code?
And what data structure is best?
I mean, is it possible to serialize classes in a desktop Swift application and then just load them from a file on iOS, or will I need to use JSON/XML?
It might be an old question, but I ended up using a .NET-powered solution. I chose it as it has all the controls you need for creating a rich user interface, and it also has a lot of built-in and third-party solutions for serializing levels any way you want. And the C# syntax is very similar to Swift's.
The only problem is that you might need to run Windows to work with it.
My game in development also needed a non-tile level editor. A few months ago I took some time to make my choice.
Since my project still uses cocos2d 2.x, I don't use the whole new 3.x and SB system. After some investigation I found out that it would be too time-consuming to adjust my whole project to the new system and adapt SB to my needs, mainly because my game engine has been in development for quite some time and is close to finished. Furthermore, I couldn't find the right information to actually make it work for my game (it needed some odd level architecture, I guess).
Finally, I didn't find any other good alternative, so I decided to create my own level editor. This way I had full control and knew exactly how everything worked, which was a huge advantage for me.
Right now my level editor has been finished for some time and works like a charm. I still think that in my situation I made the right choice, also because I learned a lot by building everything from the ground up. Having said that, for my next game I will probably go with the mainstream and use SB from the start. I also advise you to still check out SB and spend some time with it before making an alternative choice...
I'll explain how I did mine. Disclaimer: it has some oddities that only worked in my situation, but hopefully it helps a bit with choosing your own way to go; that's the goal I'm aiming at...
I used:
Max/MSP
Although it's developed for making music and audio-based software, I used Max/MSP because it's also very easy and fast for creating visual and interface-based software.
More important: I happened to be very experienced with it, which shortened the development time tremendously.
JavaScript
Inside the Max/MSP patch runs a JavaScript file. This file is like a bridge between the interface and visual representation of the level being edited, and the database in which the level is saved. About 70% of the editor development went into this file, I think.
SQLite
All the data is written to a SQLite database. Again, this was mainly the choice because it saved a great amount of development time in my case. I could have used XML files, for instance, but my game was already using a SQLite database, and because of this I felt comfortable using it; I had no experience with XML. Also, a big part of the code was already in place, which sped up the whole process a lot.
I'm very happy with the end result. It does everything I need, it's easy to use, and since I made everything myself from the ground up, I know exactly how everything works.
Good luck with your choices.

Creating PDFs from iOS text fields

I'm working on the requirements & specifications for a new iOS app intended for use by certain professionals working "in the field". All day long for weeks on end, these folks have a sizable reporting burden to their superiors using standardized forms that track all different kinds of information. Traditionally, those forms are in PDF, and are simply printed and filled out in ink and then shared with the dozens to hundreds of others working the same operation. Sometimes they'll use a PDF with form fields so the data can be typed and then printed as part of the form. Either way, given their workflow, time and stress pressures, and other factors, it's not a very productive way to get the standardized reporting forms done.
The app we're spec'ing would offer an iOS (and Android, if possible -- but secondary or even tertiary requirement at this point) user interface for tracking the data they enter in the field, organizing it in a logical manner for each individual user, and with the press of a button, take all that data and automatically create a PDF file of it using the standardized form.
Of course, the forms are STRICTLY and rigidly standardized in this industry, and any deviation in format, structure, or presentation is simply not tolerable.
So I was approaching the project by thinking the app would maintain an internal repository of the original standardized forms from the accrediting organization, with each possible data area defined as a field. The app would:
open the necessary PDF form for the task at hand;
parse its dictionary to identify the specific data fields;
for every single field, identify the relevant data from the iOS app's own user interface and data tables, and assign that data to the corresponding field from the PDF/dictionary;
export the PDF to a NEW PDF file, which the app would either email or store through iCloud, Dropbox, or some other form of file sharing.
The catch with #4 is that the PDF file must remain editable by standard PDF applications on Windows and Mac (Acrobat, Preview, etc.), so all the fields need to remain. And the PDF should be viewable just the same on either Windows or Mac.
Now, at NO time will the PDF (neither the original nor the exported final document) EVER need to be displayed inside the iOS app, nor would it make much sense to be able to do so.
I don't know if any of this is possible. This is our first iOS project, and we've been leaning towards building the app using Moai or Corona or some other framework to save development time and make porting across platforms easier. That said, if it cannot be done using Lua and one of these frameworks (I remain skeptical... they seem highly geared towards games), we're not opposed to doing it directly in Objective-C and building an Android version some time down the road.
But either way, I'm at a loss in assessing whether this is even a practical undertaking. Our requirements are clear, and frankly if this can't be done, the project won't be pursued any further. But I could definitely use some help from you folks in identifying what my options are, whether I can do it in Lua, and what SDK(s) would be most useful in accomplishing this.
Based on what you've said, it seems there is little reason to do the PDF-based part of the work on the mobile device itself, since:
you don't need to display it on the iPad
you plan to email it or store it in the cloud
if you write this for iOS you will have to write it again for Android, as you've mentioned
Can you simplify the mobile part of your requirement by focusing on the data collection and validation, then firing the data off to a server to do the document production? That will give you a lot more flexibility in the tools you can use to merge the data into PDF docs. If so, you could look at creating PDFs or populating the fields from code using something like iText (C# or Java). If you don't want to build your own back-end server, you could try something like Docmosis Cloud - but that might not let you achieve your precise layouts.
Certainly the catch you mentioned - needing to keep the PDFs editable with their fields - is a significant gotcha in all cases. If you could convince the stakeholders that it is better to generate the final documents from your system (generate a draft, review, update the data, generate again, etc.) rather than generating editable documents that you then lose control and traceability over, then you will be miles ahead.
Hope that helps.
Did you consider just generating a new PDF using an image of the form as the background and writing the user's data into the required areas over the form image? That would reduce the complexity of trying to parse the original form PDFs.
That's a point of worthwhile discussion, but one we don't have an ideal answer on. I tend to think of that as the almost perfect scenario -- it'd be considerably easier to develop. There are two key issues with this approach that have made us table it except as a very last resort:
The users of this product would be working in the field. That field could be quite literally anywhere--the streets of Manhattan, a disaster-stricken area with infrastructure that's been severely damaged or even destroyed, or the most war-ravaged third world country. If it were the streets of, say, Manhattan, there's no problem--their iOS or Android device will have 3G or Wi-Fi access just about anywhere they go. In the latter two scenarios (which are arguably more common in this industry), that connectivity may be very limited. The concern is whether the end user's ability to be productive or to see and share data with their colleagues will be too greatly restricted if they don't have a decent signal. To be fair though, even today they often aren't even using mobile devices, forcing them to go back to a headquarters type location or use radios to share information, effectively negating my point here. But if we're not going to significantly increase their productivity in the field, it just gives us pause to think through whether or not we have enough of a value proposition to ask them to fairly significantly change their methods of doing things.
To your latter point: no, there's no convincing the stakeholders that this new system is the better approach. Even if there were, it would take years to do so. These forms are part of a well-defined, decades-old standard used by literally thousands of organizations.

Creating your own iOS controls with Flex

I was looking into creating iOS apps (especially for the iPad) with the Adobe Flex framework. It looks very promising to code apps this way.
Is it possible to create your own controls/widgets? In the far future I might want to create my own kind of Gantt calendar or whatever. Is something like this possible, and are there any good tutorials/books out there?
Thanks in advance!
UPDATE: I want to create iOS components that I can use in Flex - controls that are not available by default in Flex. Is that possible? By deriving or something?
UPDATE 2: In the meantime I have found FlexLib to be useful. How hard is it to create stuff like this on your own, especially for mobile devices? Are there any good tutorials, books, etc. out there?
Yes, you can create your own Controls in Flex. They are commonly called Components. I suggest you start by reading the Flex Docs on how to do so. There are also plenty of other resources out there. One is a screencast series that I created for The Flex Show. Here is episode 1.
You had asked:
How hard is it to create stuff like this on your own?
It depends on what the component wants to do. The commercial components we've built at www.flextras.com have taken from three to twelve months to build. Our Calendar is built from scratch, but most of the other components extend existing Flex Framework components.
The Flextras stuff is architected for reuse. A "single-use" component for a specific app can be built in one hour [and up].
Once again, the purpose of a component will affect how long it takes to build.
@chiffre:
OK, maybe I'm guessing wrong, but "iOS controls" makes me think of something other than "Flex controls".
Anyway, with Flex 4.5.1 you can add any controls you want; the only thing you must account for (and it counts a lot) is performance.
Read especially about item renderers, since list scrolling is not so fast on iOS, and about how you can make use of cacheAsBitmap.
Also keep in mind to always use lightweight controls when you can; if needed, extend base classes like UIComponent or Sprite rather than Button when you just need a rectangle (see the sketch below).
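To illustrate that last point, here is a minimal sketch of a lightweight "tap target" that extends UIComponent instead of Button. The class name and color are made up for the example.

    package controls {
        import flash.events.MouseEvent;
        import mx.core.UIComponent;

        // Sketch: a tappable colored rectangle. Extending UIComponent (or even
        // Sprite) keeps it far lighter than a skinned spark Button when all you
        // need is a hit area.
        public class TapRect extends UIComponent {
            private var _color:uint;

            public function TapRect(color:uint = 0x3366CC) {
                super();
                _color = color;
                cacheAsBitmap = true;  // static vector art: rasterize it once
                addEventListener(MouseEvent.CLICK, onClick);
            }

            override protected function updateDisplayList(unscaledWidth:Number,
                                                          unscaledHeight:Number):void {
                super.updateDisplayList(unscaledWidth, unscaledHeight);
                graphics.clear();
                graphics.beginFill(_color);
                graphics.drawRect(0, 0, unscaledWidth, unscaledHeight);
                graphics.endFill();
            }

            private function onClick(event:MouseEvent):void {
                // react to the tap here
            }
        }
    }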
Here are some links
http://blogs.adobe.com/flashplayer/2011/06/adobe-air-2-7-now-available-ios-apps-4x-faster.html
http://www.adobe.com/devnet/flex/articles/mobile-development-flex-flashbuilder.
