Background
In "AOT compiling Dart for iOS and Android" (Dart Developer Summit 2016):
iOS restriction: Can't JIT
And also, from reading an article by the Dart team, Flutter: Don’t Fear the Garbage Collector, I read:
In debug mode, most of Dart’s plumbing is shipped to the device: the
Dart runtime, the just-in-time compiler/interpreter (JIT for Android
and interpreter for iOS), debugging and profiling services.
Question
I wonder what the differences are between these two concepts, for Dart specifically. Why can't iOS support JIT compilation, yet it can support a Dart interpreter?
Non-question
This is not about AOT vs JIT compilation, which is the more common question. You can find out about that here.
I also already know the difference between an interpreter and a compiler: an interpreter executes the code step by step in a higher-level form instead of compiling it to machine code, as JIT and AOT compilation do.
There aren't any technical reasons iOS couldn't support the Dart JIT compiler; the limitation comes from a security policy set by Apple. It also isn't Dart-specific: it affects every JIT compiler out there.
A JIT compiler needs to create new machine code on the fly and write it to executable pages, but on iOS that can only be done by apps with the "dynamic-codesigning" entitlement. This entitlement is granted, for example, to first-party apps using JavaScriptCore, but not to third-party apps.
This means App Store apps cannot write to executable memory, so a JIT cannot be embedded in them. You can run JIT-accelerated JavaScript using a WKWebView (which runs in a separate process with first-party entitlements), but you cannot run your own JIT compiler on iOS.
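To make that concrete, here is a minimal, hedged sketch (written in Dart via dart:ffi purely for illustration, not something the Dart SDK itself does this way) of the request every JIT has to make: mapping a page of memory that is both writable and executable. The constants are the Darwin values; the entitlement mentioned above is what decides whether the request succeeds.

    import 'dart:ffi';

    // void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
    typedef MmapNative = Pointer<Void> Function(
        Pointer<Void> addr, IntPtr len, Int32 prot, Int32 flags, Int32 fd, Int64 offset);
    typedef MmapDart = Pointer<Void> Function(
        Pointer<Void> addr, int len, int prot, int flags, int fd, int offset);

    const int protWrite = 0x2, protExec = 0x4;       // PROT_WRITE, PROT_EXEC (Darwin)
    const int mapPrivate = 0x0002, mapAnon = 0x1000; // MAP_PRIVATE, MAP_ANON (Darwin)

    void main() {
      final mmap = DynamicLibrary.process().lookupFunction<MmapNative, MmapDart>('mmap');
      // Ask the kernel for one writable *and* executable page, as a JIT code cache would.
      final page = mmap(nullptr, 4096, protWrite | protExec, mapPrivate | mapAnon, -1, 0);
      // Without the "dynamic-codesigning" entitlement, iOS refuses the request and
      // mmap returns MAP_FAILED (address -1); on a desktop OS it typically succeeds.
      print(page.address == -1 ? 'RWX page denied: no JIT possible' : 'RWX page granted');
    }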
An interpreter without a JIT, on the other hand, never generates new machine code, so it can run without problems on iOS (though the App Store review process still imposes restrictions on what scripts it is allowed to run).
Over the years there have been several exploits that got a JIT working, for example by switching pages between writable (+w) and executable (+x), though never both at the same time, but it seems none of those currently work, and using a JIT in a third-party app is simply not possible. Even if such exploits were still available, the Dart team wouldn't rely on iOS vulnerabilities that could be fixed at any time or, worse, cause apps to be flagged as malicious.
Note that you'll find articles online claiming that iOS 14 allows third-party apps to use a JIT compiler, but that seems to be limited to apps sideloaded via developer tools, so it's not relevant for Dart/Flutter, which target developers who want to distribute their apps via the App Store.
Related
I understand that there could be many reasons, but when the developer community has already adopted ES6 and is working hard to make it better, why Dart and not JS?
Is there anything special that makes Dart such a good fit for Flutter?
That's an FAQ and has already been answered extensively:
https://flutter.io/faq/#why-did-flutter-choose-to-use-dart
https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf
I'm not sure why you wrote ES6 and "dart js".
Flutter has nothing to do with JavaScript.
While Dart can be compiled to JavaScript, Flutter doesn't use this feature.
For Flutter, Dart is compiled to native binary code.
I did a little research after being asked this question by a couple of colleagues and thought it would help to summarise what I have read and thought about (it's a very important question for my colleagues).
Language requirements for Flutter
AOT and JIT compilation, for fast reload during development and fast code in release
A good garbage collector to clean up after creating and destroying many objects
Single-threaded, to avoid locks and therefore jank (see the sketch after this list)
An ARM compiler, to avoid needing another engine to run the code on the device (as React Native does)
Dart meets all these requirements. JS (I think) meets all the above pretty closely too, apart from the AOT and JIT compiler part.
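On the single-threaded point, here is a minimal sketch (plain Dart, nothing Flutter-specific, names made up for illustration): all code in a Dart isolate runs on one event loop, so asynchronous tasks interleave cooperatively and can share state without locks.

    // Two asynchronous tasks share `counter` with no lock: within one isolate
    // only one piece of Dart code runs at a time, on a single event loop.
    int counter = 0;

    Future<void> bump(String name) async {
      for (var i = 0; i < 3; i++) {
        counter++; // safe: no other thread can run Dart code in this isolate
        print('$name -> $counter');
        await Future<void>.delayed(const Duration(milliseconds: 10));
      }
    }

    Future<void> main() async {
      await Future.wait([bump('a'), bump('b')]);
    }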
Why didn't Flutter choose JS and build JIT and AOT compilers for it? (Initially they did choose JS, but they switched.) I could guess at the following...
Dart was built with to-binary compilation in mind from the beginning
Dart already had a to-binary JIT compiler; it added the AOT compiler later
Dart is more structured and simpler (it is type safe and has no eval)
JS could implement new language features that might jeopardise Flutter development
Dart can be optimized for Flutter without needing to worry about other uses of JS
Historically long wait times for new JS functionality (the last 3 years have been better)
The Dart and Flutter teams can work together closely
Having said all of that, I can imagine that a JS solution could happen, but it might be a more costly and complicated solution. Dart is pretty good, and Dart 2 has really improved things with its sound type safety.
Dart has a declarative and programmable layout that is easy to read and visualize. Hence, Flutter doesn't require a separate declarative layout language like XML. It is easy for Flutter to provide advanced tooling since all the layout is in one language and in a central place.
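As a small illustration (a hedged sketch built from standard Flutter widgets; the app name and labels are made up), the entire layout below is ordinary Dart code living next to the rest of the program:

    import 'package:flutter/material.dart';

    void main() => runApp(const DemoApp());

    class DemoApp extends StatelessWidget {
      const DemoApp({super.key});

      @override
      Widget build(BuildContext context) {
        // The widget tree *is* the layout: no separate XML/markup file.
        return MaterialApp(
          home: Scaffold(
            appBar: AppBar(title: const Text('Layout written in Dart')),
            body: Column(
              children: const [
                Text('No XML needed'),
                Icon(Icons.check),
              ],
            ),
          ),
        );
      }
    }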
Dart can also be faster than JavaScript, as it can be compiled both AOT and JIT, which helps in building apps in several ways: JIT compilation speeds up development, and AOT compilation is used during the release process for better optimization. Flutter uses both techniques.
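You can even observe from Dart code which way an app was compiled; this is a minimal sketch using the build-mode constants that package:flutter/foundation.dart derives from the compilation environment:

    import 'package:flutter/foundation.dart';

    // kReleaseMode / kProfileMode come from const bool.fromEnvironment values
    // set by the toolchain, so the same source reports whether it is running
    // on the JIT (debug) or as an AOT (profile/release) build.
    void reportBuildMode() {
      if (kReleaseMode) {
        print('AOT-compiled release build');
      } else if (kProfileMode) {
        print('AOT-compiled profile build');
      } else {
        print('JIT-compiled debug build (hot reload available)');
      }
    }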
Follow the links:
https://medium.com/hackernoon/why-flutter-uses-dart-dd635a054ebf
https://insights.daffodilsw.com/blog/why-flutter-uses-dart
I've been using Dart/Flutter for a few projects, and I'm really enjoying it.
I've read that when building a mobile app, Dart builds a native app with native code. But I've also read that Dart has its own VM for performance.
What I'm trying to understand is whether that VM is what is used when you build a mobile app, or whether it is building other code that it compiles for the native app. And if it's doing something else, what is the Dart VM still used for?
Short answer: yes, the Dart VM is still being used when you build your mobile app.
Now the longer answer: the Dart VM has two different operation modes, a JIT one and an AOT one.
In JIT mode, the Dart VM is capable of dynamically loading Dart source, parsing it and compiling it to native machine code on the fly to execute it. This mode is used when you develop your app and provides features such as debugging, hot reload, etc.
In AOT mode, the Dart VM does not support dynamic loading/parsing/compilation of Dart source code. It only supports loading and executing precompiled machine code. However, even precompiled machine code still needs a VM to execute, because the VM provides the runtime system, which contains the garbage collector, various native methods needed for dart:* libraries to function, runtime type information, dynamic method lookup, etc. This mode is used in your deployed app.
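A hedged way to see this difference from Dart code itself (worker.dart is a hypothetical source file; the snippet is only a sketch):

    import 'dart:isolate';

    // In JIT mode the VM can load, parse and compile Dart *source* at runtime,
    // so spawning an isolate from a .dart file works. In an AOT build the call
    // fails, because the runtime can only execute precompiled machine code.
    Future<void> main() async {
      try {
        await Isolate.spawnUri(Uri.parse('worker.dart'), const [], null);
        print('Source was loaded and JIT-compiled at runtime');
      } catch (e) {
        // Taken in AOT mode (and also if the file simply isn't there).
        print('Dynamic loading not available: $e');
      }
    }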
Where does the precompiled machine code for the AOT mode come from? This code is generated by (a special mode of) the VM from your Flutter application when you build your app in release mode.
You can read more about how Dart VM executes Dart code here.
When the Dart VM is used in release mode, it is not really a VM (virtual machine) in the traditional sense of a virtual computer processor implemented in software, which has its own machine language that is different from the hardware's machine language.
This is what causes the confusion in the original question. In release mode, the Dart VM is basically a runtime library (not much different from the runtime libraries required by all high-level languages).
The Dart VM is perfectly good for server-side applications, particularly using dart:io to access local files, processes, and sockets.
My understanding is that Xamarin's Ahead-of-Time (AOT) Compiler compiles Xamarin.iOS applications directly to native ARM assembly code (How Xamarin works).
What I don't get, however, is why it needs to be called "Ahead-of-Time" as opposed to just being an ordinary compiler. Is there any distinction between Xamarin's AOT compiler and a traditional compiler or is this just a marketing term?
How AOT compares to a traditional JIT compiler
Ahead-of-Time (AOT) compilation is in contrast to Just-in-Time compilation (JIT).
In a nutshell, .NET compilers do not generate platform-specific assembly code; they generate .NET bytecode, instructions which are interpreted by the .NET virtual machine. This bytecode is portable: any .NET VM can run it, be it Windows Phone, Mono on Linux, or a JavaScript-based implementation. Unfortunately, because the code has to be interpreted by the VM, it is slower than native code, which can be executed by the processor itself. That's where JIT and AOT come in.
When a .NET application starts up, the JIT compiler analyzes the bytecode, identifies areas that could be sped up by being translated to native code, and compiles them. During execution, the compiler can also identify hot paths for compilation.
Unfortunately for .NET, Java, and any platform that would benefit from JIT, dynamic code generation is disallowed by the App Store terms of service. Since Xamarin can't perform JIT on the device, and they know they're shipping to ARM devices, they run a JIT-style compiler ahead of time (AOT) and bundle the resulting native code into the app binary.
How AOT compares to a machine code compiler
As mentioned above, AOT translates part of the interpreted bytecode to machine code. It doesn't eliminate the need for a virtual machine bytecode interpreter. The VM will run just as it otherwise would, but occasionally see an instruction that says "Execute this chunk of machine code".
Is this just a marketing term?
No. The message that Xamarin was conveying in that paragraph was that their code performs faster than a simple bytecode-based implementation. For both iOS and Android, they are able to execute native code on hot code paths to improve performance. The terms AOT and JIT are technical details about how they do that.
In Xcode 4.3 you can now enable LLDB as the debugger for iOS targets.
What advantages does it have over good old GDB? GDB still works with LLVM, and I cannot see any obvious differences in "everyday" debugging tasks.
The most notable advantage is that LLDB understands dot syntax for properties (with GDB you had to fall back to po [self property]):
po self.property
A quote from the LLVM project blog:
LLDB supports basic command line debugging scenarios on the Mac, is scriptable, and has great support for multithreaded debugging. LLDB is already much faster than GDB when debugging large programs, and has the promise to provide a much better user experience (particularly for C++ programmers). We are excited to see the new platforms, new features, and enhancements that the broader LLVM community is interested in.
Another quote from the LLDB homepage:
LLDB is a next generation, high-performance debugger. It is built as a set of reusable components which highly leverage existing libraries in the larger LLVM Project, such as the Clang expression parser and LLVM disassembler.
Why a new debugger
In order to achieve our goals we decided to start with a fresh architecture that would support modern multi-threaded programs, handle debugging symbols in an efficient manner, use compiler based code knowledge and have plug-in support for functionality and extensions. Additionally we want the debugger capabilities to be available to other analysis tools, be they scripts or compiled programs, without requiring them to be GPL.
There seems to be a lot of confusion regarding deploying Adobe AIR apps to iOS after the restrictions were lifted. Before Apple lifted the restrictions, you had to go through the process documented here: http://blogs.adobe.com/cantrell/archives/2010/09/packager-for-iphone-refresher.html using the Packager for iPhone. But now that the restrictions have been lifted and with the AIR 2.7 update, we can use the same ADT tool in the Flex SDK that we use with all AIR applications.
My understanding is that the old Packager for iPhone (PFI) somehow converted ActionScript code into native Objective-C in order to be accepted by Apple.
If that is correct, does the lifting of the restrictions mean that the ADT tool is no longer converting to Objective-C and is only bundling the AS3 .swf and the AIR player together when creating the .ipa app file?
What exactly changed in the AIR deployment process after Apple lifted its restrictions?
If anyone could point me to some documentation on how the .ipa file is created behind the scenes, I think this would really clear up some confusion.
Thanks for the help
Nothing really changed; Apple just lifted the ban. The ban wasn't just on Flash-created apps, it was on any tool that produced any kind of intermediary language or used a virtual machine, etc.
What the PFI does: it actually uses the LLVM compiler to statically compile ActionScript 3 bytecode (not AS3 source) into native ARM assembly. So essentially, when you're deploying an IPA, it's the same idea as publishing a SWF to an .exe (as in the publish settings), in the sense that both your SWF application and the Flash virtual machine are bundled together, except that instead of being an .exe where the code inside is x86 assembly with AS3 bytecode executed alongside the VM, it's ARM.
The PFI and all its classes were simply merged into the ADT tool. The PFI contained an LLVM DLL which is accessed through various LLVM Java classes that were added to Adobe's internal version of the ASC (ActionScript compiler). These LLVM classes and other associated classes, however, are not open source, which Adobe is allowed to do even though the ASC is open source, because it's licensed under the MPL (Mozilla Public License), which permits the use of the open-source code in proprietary closed-source applications without sharing your changes.
For proof of all that I've told you, just download the new Flex SDK that includes the ADT with the PFI merged in, and you'll find the LLVM DLLs, etc. Further, you can decompile the ADT JAR and see all the LLVM classes. The LLVM classes (I believe) intercept the ASC bytecode through the GlobalOptimizer class, or at least they did back in the day... they've probably changed that. The only other thing that has changed is that Adobe has apparently optimized the PFI (now merged into ADT) quite a lot. More info here:
http://blogs.adobe.com/cantrell/archives/2010/09/packager-for-iphone-refresher.html
http://www.leebrimelow.com/?p=2754
Update
Here is an official Adobe article confirming the things I've told you:
http://www.adobe.com/devnet/logged_in/abansod_iphone.html
I should also clarify that I've really over-simplified the process behind the scenes and appear to be mistaken on one of my points. It seems the PFI actually merges the AS3 bytecode and the VM into a single seamless executable that doesn't use JIT compilation, and thus would technically not be a virtual machine? I'm not sure on that point, but the above article does seem to imply it:
"When you build your application for iOS, there is no interpreted code and no runtime in your final binary. Your application is truly a native iOS app."