What is the Dart Runtime?

I've been studying a lot about Dart and Flutter, and I have some questions regarding the Dart Runtime and VM:
What is the Dart Runtime?
What is the difference between the Dart Runtime and the Dart VM?
Can Dart code compiled with AOT be considered analogous to Java compiling to bytecode, since it needs a VM (or runtime) to run on a machine?

Our video from I/O 2019 explains a lot of this – https://www.youtube.com/watch?v=J5DQRPRBiFI
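To make the JIT/AOT distinction concrete, here is a minimal sketch using the modern dart CLI (command names assume a recent Dart SDK; main.dart is a placeholder):
$ dart run main.dart                   # JIT: the VM compiles your code on the fly at startup
$ dart compile exe main.dart -o main   # AOT: native machine code plus a precompiled runtime
$ ./main                               # runs without a JIT compiler
In both cases a Dart runtime is present (garbage collection, isolates, async scheduling); the difference is whether the compiler runs alongside your program (JIT) or entirely ahead of time (AOT).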

Related

Dart VM initialization failed: Snapshot not compatible

I just ran dart2aot main.dart main.aot to compile my Dart file to a binary, and it runs fine on Windows. But when I try dartaotruntime main.aot on another machine running Linux (Ubuntu), it doesn't work, giving me this error:
VM initialization failed: Snapshot not compatible with the current VM configuration: the snapshot requires 'product use_bare_instructions no-"asserts" no-causal_async_stacks no-bytecode x64-win' but the VM has 'product use_bare_instructions no-"asserts" no-causal_async_stacks no-bytecode x64-sysv'
How can I solve that? Is there a way to generate the AOT for Linux from a Windows machine?
Answer from Vyacheslav Egorov on the Dart SDK team:
We currently don't support cross-compilation of AOT snapshots with the Dart SDK. To generate a snapshot for Linux you would need to run the Linux version of the Dart SDK (e.g. you could try using https://learn.microsoft.com/en-us/windows/wsl/install-win10 for that)
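A minimal sketch of that workaround, assuming WSL with Ubuntu and the Linux Dart SDK installed inside it:
$ dart2aot main.dart main.aot    # run inside WSL; produces an x64-sysv snapshot
$ dartaotruntime main.aot        # now runs on Linux machines with a matching VM
The snapshot embeds the target ABI (x64-win vs. x64-sysv in the error above), which is why a snapshot built on Windows is rejected by a Linux VM.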

Decompiling .dart.snapshot into Dart source code

According to dart-lang/sdk:
Starting in 1.21, the Dart VM also supports application snapshots, which include all the parsed classes and compiled code generated during a training run of a program.
$ dart --snapshot=hello.dart.snapshot --snapshot-kind=app-jit hello.dart arguments-for-training
Hello, world!
$ dart hello.dart.snapshot arguments-for-use
Hello, world!
Now, how can I decompile this hello.dart.snapshot file back into hello.dart?
With an Android APK written in Java, we can decompile the APK and get a JAR file from classes.dex using the dex2jar tool. But when an application is developed with the Flutter framework (written in Dart), how can we decompile it and get the application's Dart classes?
This image shows the snapshot files generated in the APK's assets folder.
In release mode, Flutter compiles the Dart code to machine code, currently only ARMv7 (this procedure is called AOT, Ahead-Of-Time compilation). This is unlike native Android apps, in which the Java is compiled to Dalvik bytecode (usually inspected as Smali disassembly), which can be (pretty easily) decompiled back to Java.
Most of the machine code is compiled into the file "isolate_snapshot_instr", which is written in a special format; the Flutter engine (libflutter.so, also found inside the app) loads it into the app's memory at runtime. Therefore, you have two reasonable options:
1. Reading the app code at runtime (the .text segment). You can use a frida memory dump for that, and extract the compiled Dart code that you need.
2. Patching/using the Flutter engine in order to deserialize the machine code.
If you have an IPA (iOS app), that could be easier, because all of the code is found in App.framework.
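For reference, a quick way to look at those snapshot files yourself (an illustrative sketch; app.apk is a placeholder, and the exact asset layout varies by Flutter version):
$ unzip -o app.apk -d app_unpacked
$ ls app_unpacked/assets/
isolate_snapshot_data  isolate_snapshot_instr  vm_snapshot_data  vm_snapshot_instr
The *_snapshot_data files hold serialized objects, while isolate_snapshot_instr holds the AOT-compiled machine code discussed above.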

Is it possible to use TensorFlow C++ API on Windows?

I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
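A rough sketch of that CMake flow, based on the old tensorflow/contrib/cmake layout (the generator name and paths are assumptions and varied across TensorFlow versions):
> cd tensorflow\contrib\cmake
> mkdir build
> cd build
> cmake .. -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Release
> MSBuild /p:Configuration=Release tf_tutorials_example_trainer.vcxproj
The resulting Visual Studio project is where you would add your own C++ TensorFlow code.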
There is a detailed guide by Joe Antognini and a similar TensorFlow README on GitHub explaining how to build TensorFlow from source via CMake. You also need to have SWIG installed on your machine, which allows connecting C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui).
In the CMake configuration, I used the Visual Studio 15 2017 compiler. Once this stage completes successfully, you can click the Generate button to go ahead with the actual build process.
However, on Visual Studio 2015, when I attempted to build via the ALL_BUILD project, the setup gave me a "build tools for v141 cannot be found" error. This did not go away even when I retargeted my solution. In the end, the solution built successfully with Visual Studio 2017. You also need to manually set the SWIG_EXECUTABLE path in CMake before it configures successfully.
As indicated in the Antognini link, the build took about half an hour for me on a 16 GB RAM, Core i7 machine. Once done, you might want to validate your build by running the tf_tutorials_example_trainer.exe file.
Hope this helps!
For our latest work on building the TensorFlow C++ API on Windows, please look at this GitHub page. It works on Windows 10, currently without CUDA support (CPU only).
PS:
Only the Bazel build method works, because CMake is no longer supported or maintained, resulting in CMake configuration errors.
I had to use a downgraded version of Visual Studio 2017 (from 15.7.5 to 15.4) by adding the "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
If you want to enable GPU computation, do check your graphics card here (Compute Capability > 3.5). Remember to install all the packages first (CUDA Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake...) and add their paths to the PATH environment variable.
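If you prefer the command line to the Visual Studio GUI for that last step, building the solution directly should be equivalent (a sketch; names taken from the steps above):
> cd build
> msbuild tensorflow.sln /p:Configuration=Release /m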
I made a README detailing how I built the TensorFlow DLL and .lib file for the C++ API on Windows with GPU support, building from source with Bazel, for TensorFlow version 1.14.
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware or installing Bazel.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows how to pass the command that creates the .lib and .dll files. Then, to test your .lib, you should link it into your C++ project; the tutorial then shows how to identify and fix missing symbols using the TF_EXPORT macro.
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.
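For orientation, the Bazel step in guides like this typically boils down to a single build command (a sketch; --config=opt is standard, but the exact target name is an assumption here and changed between TensorFlow versions; on Windows the libtensorflow_cc.so output is in fact a PE DLL despite its name):
> bazel build --config=opt //tensorflow:libtensorflow_cc.so
The linked tutorial then covers generating the matching .lib import library and exporting any missing symbols with TF_EXPORT.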

Packaging F# program for Mono

I am currently learning F# and preparing to write my first program. I will be using Visual Studio 10 on Windows 7 to write the code, because the F# support for MonoDevelop is a few versions behind.
My normal day-to-day development environment is Mac OS X 10.7. I have Mono and MonoDevelop installed. After I finish writing my masterpiece, how do I package it for running on OS X? What DLLs do I need to send to other Windows users so that they can run my .exe file? How do they install those DLLs?
In the Java world (where I usually live), I just package my Java code with any dependencies into a monolithic UberJAR that I can send to anyone who has the appropriate version of Java (usually 6) and they can run my code by typing
java -jar MyUberJar.jar
I routinely write code in Scala and include the Scala library, along with any other dependencies.
Is there any easy way to do something similar for .NET, and specifically for F#?
One alternative is to use the --standalone flag to fsc, which will statically compile all the DLLs you need into a single large EXE. The people you send it to will still need to install Mono, but there are no other dependencies.
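A minimal sketch of that workflow (file names are placeholders):
> fsc --standalone --out:MyApp.exe Program.fs    # on Windows: folds FSharp.Core and other dependencies into one EXE
$ mono MyApp.exe                                 # on OS X or Linux, only Mono itself is required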
I think this is what most people use:
http://wix.sourceforge.net/
I say "I think" because at work we've got a release team that builds the installer package for us.

Erlang OTP release compiles with HiPE?

After reading the question Is Erlang the C of the clustered computing world?, I am wondering: is the official Erlang/OTP release compiled with HiPE?
In other words, when I compile my .erl source with OTP release R13 (as an example), does it produce BEAM "object code"?
Looking at http://www.it.uu.se/research/group/hipe/, it does not appear that a standalone HiPE compiler is maintained anymore.
By default, HiPE is not used to compile OTP. It is known, however, that the OTP libraries can be successfully compiled with HiPE, usually with some performance boost (though that depends on your application).
When you run erlc on your .erl file, it produces a BEAM file, which is NOT compiled to native code with HiPE. To compile an .erl file to native code using HiPE, just run erlc +native file.erl.
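A small sketch showing the two modes and how to verify the result from the Erlang shell (the module name is a placeholder; code:is_module_native/1 reports whether a loaded module was native-compiled):
$ erlc mymod.erl             # ordinary BEAM bytecode
$ erlc +native mymod.erl     # HiPE native code embedded in the .beam file
$ erl
1> l(mymod).
{module,mymod}
2> code:is_module_native(mymod).
true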
The standalone HiPE compiler is not maintained anymore, since it has been included in the core Erlang/OTP distribution.
I think this depends on what options you passed to the configure script when you built the Erlang system itself. It can certainly include HiPE, but whether it does by default is another issue.
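For reference, HiPE support in the emulator itself is decided when Erlang/OTP is built from source; --enable-hipe is the relevant configure flag, and you can check a running system like this (a sketch; the reported architecture will vary, and is the atom undefined when HiPE is absent):
$ ./configure --enable-hipe
$ make && make install
$ erl
1> erlang:system_info(hipe_architecture).
amd64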
