Is Dart available for Raspberry Pi?

Is there a standard way to run a Dart VM on a Raspberry Pi?
I haven't found any information about that in mailing lists.

Yes, the Dart VM is available for the Raspberry Pi.
It can be downloaded from this link (labelled as Linux ARMv7):
https://www.dartlang.org/install/archive

Dart ARMv7 builds are now available via the manual download page on the Dart website: https://www.dartlang.org/downloads/archive/
The original Raspberry Pi uses an ARMv6 CPU, so only the newer Raspberry Pi 2 (ARMv7) is supported by these Dart SDK builds.

Currently the ARM port of the Dart VM is still being developed (though answers from Devoxx imply it is getting close). However, nothing prevents you from running a web server on the Pi and serving up client-side Dart code, or Dart code compiled to JavaScript, for client-side apps.
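For example, a rough sketch of that workflow (my own illustration; the file names are placeholders, and it assumes the Dart SDK's dart2js compiler is available on the machine doing the compiling):
dart2js -o web/main.dart.js web/main.dart    # compile the client-side app to JavaScript
# copy the web/ directory to the Pi and serve it with any static web server, e.g.:
python3 -m http.server 8080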

Standard... not yet as far as I know. So I went ahead and built it:
https://plus.google.com/115544353187538281728/posts/bDGc2BzHwZ5

Raspberry Pi builds are now built and published for each version of Dart.
They are not currently listed on the website but you can download them from here:
http://gsdview.appspot.com/dart-archive/channels/stable/release/1.12.2/sdk/

Related

How do I build and install an Electron app for Raspbian using AppImage?

I am working on creating an Electron AppImage for my Raspberry Pi 4 to use in my car. I want to be able to use auto-updates from electron-builder so that I won't have to take apart the R-Pi every time I want to update it.
I have come across many articles,
https://itsfoss.com/use-appimage-linux/
https://www.youtube.com/watch?v=KiehhZ6Wb-4
saying that you can go to the file properties and check "execute file as program", but this is not the case for Raspbian. Raspbian does not have this option in its file properties.
It could be how I am building and releasing my program. For more information, here is the project I am working on: https://github.com/bomeers/Piro/releases/tag/v0.0.3
and here is the source code: https://github.com/bomeers/Piro/tree/dev
Is it even worth using Electron? Should I choose Qt (Python) instead? Anything helps, thanks!
I have been building and running Electron apps in AppImage format on Raspbian for quite a while, and it (mostly) works without any issues. Some advice, however:
If possible, use the latest Raspbian "Buster", as previous versions cannot properly build recent versions of Electron due to a glibc issue.
Set the proper target, armv7l; this (currently) still applies to the RPi 4.
Use at least Electron version 5.0.10, as previous versions of the 5.x branch had a weird issue of AppImage-format apps crashing when you clicked any menu item.
If you build your app using electron-builder, you may need to manually add a working version of mksquashfs, as described here.
Other than that I never found any issues, and it works just fine on the Raspberry Pi 3 / 3+ and 4.
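Regarding the "execute file as program" checkbox that Raspbian's file manager lacks: an AppImage only needs the executable bit set, so you can do the same thing from a terminal (the file name below is just a placeholder for your actual release file):
chmod +x ./MyApp-armv7l.AppImage
./MyApp-armv7l.AppImage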
Edit:
An example of how to configure the build target for Linux / Raspberry Pi 4 in package.json (inside the electron-builder "build" section):
"linux": {
  "target": {
    "target": "AppImage",
    "arch": ["armv7l"]
  }
}
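With that configuration in place, producing the armv7l AppImage should then be a matter of something like the following (a hedged example; exact electron-builder CLI flags can vary between versions):
npx electron-builder --linux --armv7l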

Can I use OpenCV on AIX?

I developed an image processing library with OpenCV and it works well in Windows, Android(Native) and iOS.
Now I want to build my library to run on an AIX server. Unfortunately, I couldn't find any guidance for building OpenCV for AIX.
Can you give me any guidance?
There is no official support for OpenCV on AIX, nor is there a community-driven project.
However, there is a project maintained by IBM called the IBM AIX Toolbox for Linux Applications.
This project is intended for developers and provides most Linux-based (especially GNU) programming languages, tools, and libraries so they can be run on AIX.
You'll have to go through setting up the environment and dependencies, but it should compile just fine; Linux tutorials for building OpenCV using GCC should work just as well.
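As a rough illustration of what those Linux-style instructions boil down to (a generic sketch only, not AIX-specific; it assumes GCC, CMake, and make from the Toolbox are already installed):
cd opencv
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON ..
make -j4
make install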
You might ask the maintainer of the Perzl packages if he could build it; he likely already has the necessary knowledge, tools, and environment. I also find his packages much better than the IBM AIX Toolbox, so if you want to try it yourself, I would start with his versions instead of IBM's.
Groupe Bull used to have a similar set of prebuilt open-source packages, but I don't know where they disappeared to.

Is it possible to use TensorFlow C++ API on Windows?

I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
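To give a feel for the shape of such a C++ TensorFlow program once the CMake-generated Visual Studio project is in place, here is a minimal sketch against the (1.x) C++ API. This is my own illustration, not part of the answer above; the model path and tensor names ("graph.pb", "input", "output") are placeholders for your own graph:
#include <iostream>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

int main() {
  // Create a session and load a frozen GraphDef ("graph.pb" is a placeholder path).
  tensorflow::Session* session = nullptr;
  TF_CHECK_OK(tensorflow::NewSession(tensorflow::SessionOptions(), &session));
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(), "graph.pb", &graph_def));
  TF_CHECK_OK(session->Create(graph_def));

  // Feed one input tensor and fetch one output; the names depend on your graph.
  tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 3}));
  input.flat<float>().setZero();
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));
  std::cout << outputs[0].DebugString() << std::endl;

  TF_CHECK_OK(session->Close());
  delete session;
  return 0;
}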
There is a detailed guide by Joe Antognini and a similar TensorFlow README on GitHub explaining how to build the TensorFlow source via CMake. You also need SWIG installed on your machine, which allows connecting C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui) for the configuration step.
In the CMake configuration, I used the Visual Studio 15 2017 compiler. Once this stage completes successfully, you can click the Generate button to go ahead with the actual build process.
However, on Visual Studio 2015, when I attempted building via the "ALL_BUILD" project, the setup gave me a "build tools for v141 cannot be found" error. This did not go away even when I attempted to retarget my solution. Finally, the solution built successfully with Visual Studio 2017. You also need to manually set the SWIG_EXECUTABLE path in CMake before it configures successfully.
As indicated in the Antognini link, the build took about half an hour for me on a 16GB RAM, Core i7 machine. Once done, you might want to validate your build by attempting to run the tf_tutorials_example_trainer.exe file.
Hope this helps!
For our latest work on building the TensorFlow C++ API on Windows, please look at this GitHub page. This works on Windows 10, currently without CUDA support (CPU only).
PS:
Only the bazel build method works, because CMake is no longer supported or maintained, resulting in CMake configuration errors.
I had to use a downgraded Visual Studio 2017 toolset (from 15.7.5 to 15.4) by adding the "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
If you want to enable GPU computation, do check your graphics card here (Compute Capability > 3.5). Remember to install all the packages (CUDA Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake...) beforehand and add their paths to the environment variables.
I made a README detailing how I built the TensorFlow .dll and .lib files for the C++ API on Windows with GPU support, building from source with Bazel (TensorFlow version 1.14).
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware, installing Bazel etc.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows which commands to run to create the .lib and .dll.
Then, to test your .lib, you should link it into your C++ project.
It then shows how to identify and fix the missing symbols using the TF_EXPORT macro.
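As a quick sanity check that the generated .lib and .dll actually link, a tiny program like the following can help; this is my own sketch rather than part of the linked tutorial, and it only calls the version function from the C API header shipped with the build:
#include <iostream>
#include "tensorflow/c/c_api.h"

int main() {
  // If this compiles, links against tensorflow.lib, and runs next to tensorflow.dll,
  // the basic linking setup is working.
  std::cout << "TensorFlow version: " << TF_Version() << std::endl;
  return 0;
}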
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.

Vala/Genie builds for Win/Mac?

Is the Vala/Genie compiler available on the Windows and Mac OS X platforms? I know that it is possible to use GLib and GTK on Windows and Mac OS X, but there are no official downloads of Vala for either platform.
Vala 0.28 is currently available on Mac OS X in just the same way as the rest of the GLib/GTK platform is. Here are the official instructions for setting up a GLib/GTK development environment on Mac OS X. To build the Vala/Genie compiler, run jhbuild build vala after completing those instructions.
I don't know the answer for Windows.
There are no "official" builds of Vala as such. Vala is officially released as source code only. The source is then built by various distributors who package and distribute the builds.
On Linux this is done by distributions like Fedora and Ubuntu. On Mac OS X the most relevant is probably Homebrew, and on Windows, MSYS2. For more details on all of these, see the Installing Vala section of the Vala wiki.
There are several ways of getting the Vala compiler to work on Windows. The easiest solution is to install MSYS2, which always provides a fresh version of Vala as one of its packages.
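For example, in an MSYS2 MinGW 64-bit shell, something along these lines should install the compiler and let you verify it (the package name is from memory and may differ between MSYS2 repositories):
pacman -S mingw-w64-x86_64-vala
valac --version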

Intel Threading Building Blocks (TBB) in Google Native Client (NaCl, PNaCl)

The current Google Native Client port of OpenCV does not utilise TBB. It says here that TBB can be built under NaCl.
Is there an official port, or has anyone successfully built TBB under NaCl?
Thanks :)
For now there is no official port of Intel TBB for NaCl, and the project team at Intel (which I work in) is unaware of any unofficial one either.
