MonoDroid (and MonoTouch) look like a great way to develop at least the non-UI part of an app cross-platform, using a common language.
However, how much overhead does MonoDroid add in terms of app size and CPU usage?
The trial (emulator-only) download seems to install 27 MB of Mono plus 12 MB of platform support, but the FAQ says only ~4.4 MB will be added to an app in the final App Store build?
At run time, does Mono run a CLR VM inside a Dalvik VM (i.e. is there any significant CPU overhead for something like writing games)?
To make debugging quicker, MonoDroid installs the Mono runtime and the full set of class libraries on the device instead of packaging and transferring them with your application code every time you make a change.
When you switch your project to Release mode, the Mono runtime and the class assemblies your application actually uses are placed in the apk. Additionally, a linker pass is run to remove classes and methods from those assemblies that your application does not use.
As the FAQ says, the current overhead is ~4.4MB.
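One practical consequence of that linker pass is that members reached only via reflection look unused and can be stripped from a Release build. As far as I remember, Mono for Android provides an Android.Runtime.Preserve attribute to keep such members (treat the exact attribute name as an assumption and check the current docs); a minimal sketch:

using Android.Runtime;

public class ReflectionOnlyHelper
{
    // Called only via reflection, so the linker sees no direct reference to it.
    // Marking it [Preserve] asks the linker to keep it in the Release apk.
    [Preserve]
    public static string BuildGreeting(string name)
    {
        return "Hello, " + name;
    }
}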
The CLR VM runs separately from the Dalvik VM (Android lets you run native code, which is how the Mono runtime is hosted). The two interact any time you use something in the Mono.Android namespace.
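To make that boundary concrete, here is a minimal sketch of a Mono for Android activity (the class name and label are made up): the plain arithmetic never leaves the Mono runtime, while each Mono.Android call goes through the generated bindings into the Dalvik VM.

using Android.App;
using Android.OS;
using Android.Widget;

[Activity(Label = "InteropDemo", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);          // crosses into Dalvik: Activity wraps a Java object

        int answer = 6 * 7;             // pure managed code, stays inside the Mono runtime

        var label = new TextView(this)  // TextView, its Text setter and SetContentView all
        {                               // call through the Mono.Android (JNI) bindings
            Text = "Answer: " + answer
        };
        SetContentView(label);
    }
}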
Background
In the talk AOT compiling Dart for iOS/Android (Dart Developer Summit 2016):
iOS restriction: Can't JIT
And also, in an article from the Dart team, Flutter: Don’t Fear the Garbage Collector, I read:
In debug mode, most of Dart’s plumbing is shipped to the device: the Dart runtime, the just-in-time compiler/interpreter (JIT for Android and interpreter for iOS), debugging and profiling services.
Question
I wonder what the differences are between these two concepts, for Dart specifically. Why can't iOS support JIT compilation but it can support a Dart interpreter?
Non-question
This is not about AOT vs JIT compilation, which is the more common question. You can find out about that here.
I also already know the difference between an interpreter and a compiler: an interpreter executes the code step by step in a higher-level form instead of compiling it to machine code, as JIT and AOT execution do.
There aren't any technical reasons for iOS not to support the Dart JIT compiler; it's due to a security policy set by Apple. This limitation is also not Dart-specific: it affects every JIT compiler out there.
A JIT compiler needs to create new machine code on the fly and write it to executable pages, but on iOS that can only be done by apps with the "dynamic-codesigning" entitlement. This entitlement is granted, for example, to 1st-party apps using JavaScriptCore, but not to 3rd-party apps.
This means App Store apps cannot write to executable memory, and so a JIT cannot be embedded in them. You can run JIT accelerated JavaScript using a WKWebView (which runs in a separate process with 1st party entitlements) but you cannot run your own JIT compiler on iOS.
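To illustrate what "write it to executable pages" means at the lowest level, here is a rough C# sketch of the kind of allocation a JIT needs. The libc P/Invoke and the Darwin constant values are real, but treat the exact failure behaviour in a non-entitled iOS app as an assumption; the point is only that memory which is both writable and executable is what the policy forbids.

using System;
using System.Runtime.InteropServices;

class JitPageDemo
{
    // Raw libc mmap (Unix-only); size_t and off_t are assumed to be 64-bit here.
    [DllImport("libc", SetLastError = true)]
    static extern IntPtr mmap(IntPtr addr, UIntPtr length, int prot, int flags, int fd, long offset);

    const int PROT_READ   = 0x1;
    const int PROT_WRITE  = 0x2;
    const int PROT_EXEC   = 0x4;
    const int MAP_PRIVATE = 0x02;
    const int MAP_ANON    = 0x1000; // Darwin value; it differs on Linux

    static void Main()
    {
        // A JIT needs a page it can both write generated machine code into and execute.
        IntPtr page = mmap(IntPtr.Zero, (UIntPtr)4096,
                           PROT_READ | PROT_WRITE | PROT_EXEC,
                           MAP_PRIVATE | MAP_ANON, -1, 0);

        bool denied = page == new IntPtr(-1); // MAP_FAILED
        Console.WriteLine(denied
            ? "No writable+executable memory: what a 3rd-party iOS app is expected to see."
            : "Got a writable+executable page: desktop, or an app with dynamic-codesigning.");
    }
}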
An interpreter w/o a JIT on the other hand never generates new machine code so it can run without problems on iOS (though the App Store review process still imposes restrictions on what scripts it is allowed to run.)
Over the years there have been several exploits to get a JIT working, for example by switching pages between writable (+w) and executable (+x), but never both at the same time; however, it seems none of those currently work, and using a JIT in a 3rd-party app is simply not possible. Even if such exploits were still available, the Dart team wouldn't rely on iOS vulnerabilities that could be fixed at any time or, worse, cause apps to be flagged as malicious.
Note you'll find articles online talking about how iOS 14 allows 3rd-party apps to use a JIT compiler, but that seems to be limited to apps sideloaded via developer tools so it's not relevant for Dart / Flutter, which target devs who want to distribute their apps via the App Store.
I've been using dart/flutter for a few projects, and I'm really enjoying it.
I've read that when building a mobile app, Dart builds a native app with native code. But I've also read that Dart has its own VM for performance.
What I'm trying to understand is whether that VM is what is used when you build a mobile app, or whether it builds other code that it compiles for the native app. And if it's doing something else, what is the Dart VM still used for?
Short answer: yes, the Dart VM is still being used when you build your mobile app.
Now the longer answer: the Dart VM has two different modes of operation, a JIT one and an AOT one.
In JIT mode, the Dart VM is capable of dynamically loading Dart source, parsing it, and compiling it to native machine code on the fly in order to execute it. This mode is used while you develop your app and provides features such as debugging, hot reload, etc.
In AOT mode, the Dart VM does not support dynamic loading/parsing/compilation of Dart source code; it only supports loading and executing precompiled machine code. However, even precompiled machine code still needs the VM to execute, because the VM provides the runtime system, which contains the garbage collector, various native methods needed for the dart:* libraries to function, runtime type information, dynamic method lookup, etc. This mode is used in your deployed app.
Where does the precompiled machine code for AOT mode come from? It is generated by (a special mode of) the VM from your Flutter application when you build your app in release mode.
You can read more about how Dart VM executes Dart code here.
When the Dart VM is used in release mode, it is not really a VM (virtual machine) in the traditional sense of a virtual computer processor implemented in software, which has its own machine language that is different from the hardware's machine language.
This is what causes the confusion in the original question. In release mode, the Dart VM is basically a runtime library, not much different from the runtime libraries required by all high-level languages.
The Dart VM is perfectly good for server-side applications, particularly using dart:io to access local files, processes, and sockets.
I am writing an F# port of a program I wrote in native code in the past. I used BenchmarkDotNet to measure its performance. I also placed a native EXE in the application's output directory.
I set my native program as the baseline benchmark and saw it was 5x faster than my F# program. Just as I expected!
However, the native program is posted on GitHub and distributed as a Win64 binary only. If somebody on another OS tries to run it, it will crash.
So, how do I specify that this benchmark should run only on 64-bit Windows?
In BenchmarkDotNet, there is a concept of Jobs. Jobs define how the benchmark should be executed.
So, you can express your "x64 only" condition as a job. Note that there are several different x64 JIT compilers depending on the runtime (LegacyJIT-x64 and RyuJIT-x64 for the full .NET Framework, RyuJIT-x64 for .NET Core, and the Mono JIT compiler). You can request not only a specific platform but also a specific JIT compiler (it can significantly affect performance), e.g.:
[<RyuJitX64Job>]
member this.MyAwesomeBenchmark () = // ...
In this case, a user will be notified if it's impossible to compile the benchmark for the required platform.
Unfortunately, there is no way to require a specific OS for now (there is only one option: the current OS). So, in your case, it's probably better to check System.Environment.Is64BitOperatingSystem and System.Environment.OSVersion at startup and not run the benchmarks on unsupported operating systems.
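For example, a guard like the following skips the run on anything other than 64-bit Windows (shown in C# for brevity; the F# translation is direct, and MyBenchmarks stands in for your benchmark class):

using System;
using BenchmarkDotNet.Running;

public static class Program
{
    public static void Main()
    {
        // Only run on 64-bit Windows; everyone else gets a message instead of a crash.
        if (Environment.Is64BitOperatingSystem &&
            Environment.OSVersion.Platform == PlatformID.Win32NT)
        {
            BenchmarkRunner.Run<MyBenchmarks>();  // MyBenchmarks is a placeholder name
        }
        else
        {
            Console.WriteLine("These benchmarks require 64-bit Windows; skipping.");
        }
    }
}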
I have read an article from Xamarin and came across a particular computer-science term: Ahead of Time (AOT).
According to some Google search results, AOT does not allow for code generation at run time.
Does that mean it does not support dynamic features?
I know this question may sound stupid and I have zero knowledge of iOS; hopefully I can get some answers here. Thanks.
First, what is the definition of dynamic? For most people, dynamic code means the application can change its functionality at run time. On the iOS platform, binaries are signed to prevent malware, and Apple doesn't like apps that can load new functionality at run time.
An ahead-of-time (AOT) compiler has nothing to do with dynamic code per se. It has to do with intermediate languages that are normally just-in-time (JIT) compiled. The best-known example of an intermediate language is Java bytecode: compile once, run anywhere. When a Java application executes, the compiled bytecode is JIT-compiled to native machine code. An AOT compiler simply does that work ahead of time, to save time.
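A rough .NET illustration of the difference (the class and method names are made up): on a JIT runtime the IL of a method is normally compiled on its first call, and forcing that compilation early is essentially what an AOT compiler does for every method at build time.

using System;
using System.Runtime.CompilerServices;

public static class JitVsAotDemo
{
    public static int Square(int x) => x * x;

    public static void Main()
    {
        // On a JIT-based runtime, Square's IL would normally be turned into machine
        // code the first time it is called. PrepareMethod forces that translation
        // eagerly, which is roughly what an AOT compiler (e.g. for Xamarin.iOS) does
        // for every method when the app is built, instead of at run time.
        RuntimeHelpers.PrepareMethod(typeof(JitVsAotDemo).GetMethod(nameof(Square)).MethodHandle);

        Console.WriteLine(Square(7)); // 49, already compiled before this call
    }
}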
For the iOS platform, Xcode compiles Objective-C to a native binary for the device.
Another way of looking at this is with an example...
In .NET, you can use the System.Reflection.Emit namespace to generate and compile code at run time.
E.g. you could create an "IDE" with a textbox that accepts C#. When you click a button, that C# could be compiled by the .NET framework into a custom library that is loaded dynamically, or into a fully-fledged executable that is launched as a new process.
This is insanely powerful when combined with the rest of the System.Reflection namespace. You can examine objects at runtime and compile custom code based on any criteria you like.
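As a small, self-contained sketch of what that looks like, here is a brand-new method built and invoked at run time with System.Reflection.Emit (the method it builds is made up, of course):

using System;
using System.Reflection.Emit;

public static class EmitDemo
{
    public static void Main()
    {
        // Build an int Add(int, int) method from raw IL at run time.
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });

        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push first argument
        il.Emit(OpCodes.Ldarg_1);  // push second argument
        il.Emit(OpCodes.Add);      // add them
        il.Emit(OpCodes.Ret);      // return the result

        // Turn the generated IL into a callable delegate (JIT-compiled on first call).
        var adder = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(adder(2, 3)); // 5
    }
}

Note this is exactly the kind of thing a full-AOT platform such as Xamarin.iOS cannot do, because there is no JIT available to compile the freshly generated IL.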
That said... The problems usually outweigh the benefits in most cases. Mainly, it's a massive security issue, especially when running on a consumer device.
It would be possible to create an app that doesn't have anything close to malicious code, get it audited by Apple, then have the app download code from your web server, compile it, and execute it. This new code wouldn't be audited...
There really is no good reason to be doing this in a consumer app.
Are there any migration analysers available for MonoTouch?
I have seen one for Mono, but not for MonoTouch.
Short answer: No, there is none at the moment.
Long answer
The situation is a bit different from Mono. In general, you test a complete .NET application, compiled against a specific version of the framework, with MoMA to get a report of what pieces are missing (or incomplete) in Mono that could affect the execution of your application on other platforms (e.g. OS X and Linux).
Testing a complete application against MonoTouch would report tons of issues, since the UI toolkit is totally different. E.g. anything in System.Windows.Forms, WPF... would always be missing.
However, if your assemblies are already split along the lines of (something like) an MVC design, it would be possible to test some of them (the non-UI parts) against definitions based on the MonoTouch base class library.
Finally, if someone has an immediate need (or is looking for a nice project), MoMA is available as open source and the evaluation versions of MonoTouch contain all the assemblies needed to build the definition files. A bit of extra filtering could make this into a very nice tool.
Alternative
You can see a list of the assemblies that are part of MonoTouch and some platform restrictions (compared to .NET) you should be aware of.