Unable to Load Dart SDK on Raspberry Pi Zero W - dart

I'm trying to get the Dart SDK on a Pi Zero W.
When I download the SDK archive, extract it, and put it in the /usr/lib folder manually, I get segmentation faults when I try to run any of the command-line tools. To rule out any funky file corruption on the Pi, I reflashed the memory card (32GB, so plenty large) from scratch on an x64 machine and pre-loaded the SDK, and got the same result.
Though I was sure it wouldn't work, I also loaded the ARMv7 version of the SDK and, unsurprisingly, got executable file format incompatibility errors.
I downloaded the .deb package, and got a warning that the file was not meant for my Pi and that I might break things, so I didn't try to install it.
I used the apt-get instructions from the Dart website, and that failed with the error "E: Unable to locate package dart", which seems to indicate that I had the incorrect package name (note: I copied and pasted it directly from the Dart website). I looked through the repository contents and, assuming I looked at the correct file, there were no Dart entries in it, so the error is not surprising.
My Linux competence is suspect, so I could use any ideas. I'd prefer not to build the SDK from scratch since, in my experience, open source build instructions almost always assume the user already knows or has done something that is not explicitly listed, so it tends to be a two-hour effort that ultimately fails (pretty sure I'm not the only one who's had that experience).
Thoughts, anyone?

That is not going to work. Your problem is that the Pi Zero W has a "1GHz single-core ARMv6 CPU (BCM2835)", which means it can only execute programs built for the ARMv6 architecture or lower.
Dart now requires at least ARMv7, since ARMv6 support was removed early this year: https://github.com/dart-lang/sdk/issues/42069
The ARMv6 support was never that great anyway (I did have an old Raspberry Pi): programs ran really slowly and FFI was not supported. So my recommendation is to get a board that supports ARMv7 or ARMv8 (ARM64), which works really well.
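A quick way to confirm this on the board itself (standard Linux commands; the exact strings reported vary a little between kernels and models):
# On a Pi Zero W this prints "armv6l"; an ARMv7 board such as the Pi 2/3 prints "armv7l".
uname -m
# The CPU model line also names the architecture, e.g. "ARMv6-compatible processor".
grep -i -m1 "model name" /proc/cpuinfo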

Related

Is it required to build LLVM in order to build hipSYCL?

I'm running CentOS 7 and am trying to build hipSYCL (see here).
The issue is that hipSYCL needs to have cmake info from the LLVM build (via the LLVM_DIR cmake variable).
This is problematic for me because building LLVM requires a massive 35 GB for the libraries and executables. I don't have that much disk space to spare.
I did find a build of llvm-toolset-8.0 online for CentOS 7 and installed it, but to my surprise, that didn't work with LLVM_DIR because there are no cmake files (since I didn't build LLVM locally).
So, my question would be, is there a way to build hipSYCL using pre-built LLVM-clang?
If I'm missing or misunderstanding something, I'd appreciate any help.
LLVM publishes the necessary cmake files, and the binary OS packages I've seen include them, generally in a directory like /usr/lib/llvm*/lib/cmake and in a package named something like llvm-*-dev.
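For example (a rough sketch; the package name and version below are illustrative, and on CentOS 7 the equivalent development files typically come from an llvm-devel or llvm-toolset-*-llvm-devel package rather than apt):
# Debian/Ubuntu-style example; adjust the package name/version to your distro.
sudo apt install llvm-10-dev
# Find the installed CMake config files that LLVM_DIR should point at.
find /usr -name LLVMConfig.cmake 2>/dev/null
# Point the hipSYCL build at that directory instead of a locally built LLVM.
cmake .. -DLLVM_DIR=/usr/lib/llvm-10/lib/cmake/llvm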

Is it possible to use TensorFlow C++ API on Windows?

I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
There is a detailed guide by Joe Antognini, and a similar TensorFlow README on GitHub, explaining how to build the TensorFlow source via CMake. You also need SWIG installed on your machine, which connects the C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui) for the configuration.
In the CMake configuration, I used the Visual Studio 15 2017 generator. Once this stage completes successfully, you can click the Generate button to produce the Visual Studio solution and go ahead with the actual build.
However, with Visual Studio 2015, attempting to build via the "ALL_BUILD" project gave me a "build tools for v141 cannot be found" error, which did not go away even when I retargeted the solution. The solution finally built successfully with Visual Studio 2017. You also need to set the SWIG_EXECUTABLE path manually in CMake before configuration will succeed.
As indicated in the Antognini link, the build took about half an hour for me on a 16GB RAM, Core i7 machine. Once done, you might want to validate your build by running the tf_tutorials_example_trainer.exe file.
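For example (the checkout and output directories below are assumptions based on a default Release configuration of the CMake project; adjust them to wherever your solution actually placed the binary):
REM Paths are assumptions; adjust to your checkout and build directories.
cd C:\tensorflow\tensorflow\contrib\cmake\build
Release\tf_tutorials_example_trainer.exe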
Hope this helps!
For our latest work on building the TensorFlow C++ API on Windows, please look at this GitHub page. This works on Windows 10, currently without CUDA support (CPU only).
PS:
Only the Bazel build method works, because CMake is no longer supported or maintained, which results in CMake configuration errors.
I had to downgrade my Visual Studio 2017 toolset (from 15.7.5 to 15.4) by adding the "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
If you want to enable GPU computation, do check your graphics card here (it needs Compute Capability > 3.5). Remember to install all the packages (CUDA Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake...) beforehand and add their paths to the PATH environment variable.
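For example, paths matching the cmake invocation above could be put on PATH in the same command prompt before configuring (these locations are only illustrative; use wherever you actually installed each package):
REM Illustrative install locations; adjust to your machine.
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin;%PATH%
set PATH=C:\Program Files\cudnn-9.2-windows10-x64-v7.1\cuda\bin;%PATH%
set PATH=C:\Program Files\swigwin-3.0.12;%PATH%
set PATH=C:\Program Files\Python;C:\Program Files\Python\Scripts;%PATH%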
I made a README detailing how I built the TensorFlow .dll and .lib files for the C++ API on Windows with GPU support, building from source with Bazel (TensorFlow version 1.14).
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware, installing Bazel etc.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows which commands to run to create the .lib and .dll.
Then, to test your .lib, you should link it into your C++ project.
It then shows you how to identify and fix the missing symbols using the TF_EXPORT macro.
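As a rough sketch of that link test (the file names and paths here are hypothetical and depend on where your Bazel outputs landed), compiling a small test program against the generated import library will surface missing symbols as unresolved-external errors:
REM Hypothetical paths; point /I at your TensorFlow source tree and /link at the
REM generated .lib. Unresolved external symbol errors here are the symbols that
REM still need TF_EXPORT applied before rebuilding the DLL.
cl /EHsc /MD my_tf_test.cpp ^
  /I C:\tensorflow ^
  /link C:\tensorflow\bazel-bin\tensorflow\tensorflow.lib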
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.

How to configure mono to use more than 4G memory?

I want to run a .NET executable that needs more than 4GB of RAM on OS X 10.9. I had Xamarin Studio installed, but AFAICT Xamarin doesn't come with a 64-bit mono build, so I decided to make a custom 64-bit mono with the "with-large-heap=yes" configuration and install it in a different location.
git clone https://github.com/mono/mono
cd mono
./configure --prefix=<my-local-dir> --enable-nls=no --with-large-heap=yes
make
make install
(I also built a 64-bit F# and installed it in my-local-dir, following "Option 3" on this page.)
However, when I use the 64-bit mono to run the executable (an F# program built with the canonical "fsharpc" in Xamarin), it still crashes with a System.OutOfMemoryException. I tried this:
export PATH=$PATH:<my-local-dir>/bin
MONO_GC_PARAMS=max-heap-size=5g <my-local-dir>/bin/mono <my-executable>
And it gives a warning
Warning: In environment variable `MONO_GC_PARAMS': `max-heap-size` must be an integer.
(This error message is a bit misleading; I think it really means that 5g is too large and not supported, because it doesn't complain if I put "3g" there.) And the program still crashes with the same exception at the point where it exceeds the memory.
Did I miss anything important? How do I configure mono to have a heap larger than 4GB?
You are still running the 32-bit version of mono (check your PATH env var). This also explains the complaint about parsing 5g for max-heap-size (it will work correctly with 64-bit mono).
The default, as with your configure command above, is to install into /usr/local/bin, so just run your programs with /usr/local/bin/mono program.exe.
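A quick way to check which mono you are actually invoking (this assumes the /usr/local prefix mentioned above; substitute your own install prefix if you set one):
# Show which mono is first on the PATH and whether it is a 64-bit binary.
which mono
file "$(which mono)"
# mono --version also reports the architecture (e.g. "Architecture: amd64").
mono --version
# Invoke the 64-bit build explicitly with the larger heap limit.
MONO_GC_PARAMS=max-heap-size=5g /usr/local/bin/mono program.exe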

EmguCV - nvcuda.dll could not be found

I've been asked to build a real-time face recognition application, and after some looking around I've decided to try EmguCV and OpenCV as the facial recognition library.
The issue I'm having at the moment is trying to get the SDK installed and working. I've followed the instructions found here to try and get it running, but I still can't run the samples. Whenever I try and run them, I get the error
The program can't start because nvcuda.dll is missing from your computer.
Try reinstalling the program to fix this problem.
I've tried most of the usual fixes, such as adding the bin folder to my environment path and copying the DLLs into my System32 folder, but none of it seems to work.
EmguCV version 2.4.2.1777-windows-x64-gpu
Windows 8
AMD Radeon HD 6700 series graphics card.
I'm assuming this is an issue with the fact that I don't have an NVIDIA graphics card, but I'm not sure what I can do about it. For now, I'm going to try recompiling the source rather than using the downloaded .exe, and see if that helps.
Any suggestions?
Had the same problem. EmguCV 2.4.2 (no matter if x86 or x64) is compiled with GPU support, so you have to have an NVIDIA GPU with CUDA support. So if you want, e.g., Fisherfaces from 2.4 in C#, wait for a non-GPU release or buy/borrow a CUDA card ;)
I happen to have the exact same problem as you. Everything is working fine on my computer (WinXP 32-bit) but not on Win7 64-bit computers.
This was because I already had OpenCV 2.4.2 installed on my computer, and when I execute my program, the OpenCV DLL path points to the OpenCV folder and not to the DLLs in the EmguCV folder. The original OpenCV DLLs don't have this dependency on NVIDIA's driver.
I used Dependency Walker to help me find out what was happening, as suggested here.
This link says that only the -gpu packages have GPU processing enabled, but as you say, the latest version (2.4.2) has only a GPU package and no non-GPU package...
I read here that all I needed was to download the latest NVIDIA drivers to get the nvcuda.dll file, but I downloaded many packages (GPU Computing SDK, CUDA Toolkit, display drivers, device drivers...) and never found that file.
My workaround, instead of using an older version of EmguCV/OpenCV, is to use the original DLLs from OpenCV 2.4.2.
I just used nvcuda.dll from dll-files.com.
It seems the issue is that the latest version on the site does not contain a non-GPU-enhanced download, and that the GPU-enhanced download requires an NVIDIA graphics card for CUDA integration.
I successfully downloaded and ran the previous version, which does not have GPU enhancements.
I had a similar problem.
When I compile and run my application on a computer with an NVIDIA GPU, it works fine.
The problem appeared when I moved the app to another computer.
This second computer has no NVIDIA GPU, and the app threw an 'Emgu.CV.CvInvoke' exception.
After many attempts I fortunately solved this problem.
As you mentioned before, for now there is only a GPU package for version 2.4.2.
I didn't notice this before.
For me the solution was:
Copy the files 'cudart64_42_9.dll' and 'npp_42_9.dll' into the Debug (application) folder.
Copy the file 'nvcuda.dll' into the System32 folder.
After these steps the application works on all computers, even without an NVIDIA GPU / CUDA; the copy commands are sketched below.
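A sketch of those copy steps from a command prompt (the source and destination folders are assumptions; use your actual EmguCV x64 binaries folder and your application's output folder, and note that nvcuda.dll would come from one of the stand-alone copies linked in the other answers, since it is normally shipped with the NVIDIA driver):
REM Illustrative paths only; adjust to your EmguCV install and project output folders.
copy "C:\Emgu\emgucv-windows-x64-gpu 2.4.2.1777\bin\x64\cudart64_42_9.dll" "C:\MyApp\bin\Debug\"
copy "C:\Emgu\emgucv-windows-x64-gpu 2.4.2.1777\bin\x64\npp_42_9.dll" "C:\MyApp\bin\Debug\"
REM Copying into System32 requires an elevated (administrator) prompt.
copy "C:\Downloads\nvcuda.dll" "C:\Windows\System32\"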
Another solution might be to use the OpenCV universal GPU version (for now it is 2.4.9 alpha): http://sourceforge.net/projects/emgucv/files/emgucv/2.4.9-alpha/
You can download the EmguCV source from Git and compile it; I have done this and it works:
http://www.emgu.com/wiki/index.php/Download_And_Installation#Building_from_Git
It generates a non-GPU version of the DLLs.
Regards.
Here's also another copy of the DLLs:
http://www.kimchiandchips.com/files/vvvv/nvcuda/
So, two solutions:
Get the NVIDIA CUDA DLLs from the above link. Ideally, rename the 64-bit or 32-bit version to nvcuda.dll based on your required platform and put it next to your OpenCV DLLs.
Upgrade to 2.4.9, which has universal GPU support.
I also had some problems when doing my dissertation using EmguCV for face recognition.
Try to use the most stable version, libemgucv-windows-x86-2.4.0.1717.exe.
Try not to use the GPU download; this version has the fewest bugs, and the 32-bit build is better than the x64 one.
When compiling for the first time, use Visual Studio 2012.
With this version you won't need to install everything mentioned above. You can see this example to know it really works: http://sourceforge.net/projects/emgufacerecog/

Linux Version of Z3: Dependency On Old libgmp.so.3

Z3's dependency on libgmp.so.3 is unresolved in the Linux package, leaving the user to provide this library. However, this library is very old and is not readily available.
Does anyone know a method for getting around this issue? I am currently running x86_64 and cannot get around this missing dependency without a great deal of hassle.
Is it possible the Linux packages could be fixed such that they include the expected library in the distribution?
You can get GMP3 by executing sudo apt install libgmp3-dev.
I'm not a Linux expert, but this is the command I used to install GMP before I compiled Z3.
When I installed the virtual machine for running Linux 64, I don't think I found a package for the more recent versions of GMP.
I will try again. If it doesn't work, I will download the most recent GMP tarball and build it from scratch.
BTW, the Z3 for Linux 32 comes with two .so files. One of them has GMP statically linked.
The trick I used for building this .so file didn't work for the 64-bit version.
As I said, I'm not a Linux expert; any suggestions on how to build a better Z3 library for Linux x86_64 users are welcome.
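One possible workaround (untested here, and assuming the old tarball is still available from gmplib.org) is to build GMP 4.3.x yourself, since the 4.x series is what provides the libgmp.so.3 soname:
# GMP 4.x installs libgmp.so.3; GMP 5.x and later install libgmp.so.10 instead.
wget https://gmplib.org/download/gmp/gmp-4.3.2.tar.bz2
tar xjf gmp-4.3.2.tar.bz2
cd gmp-4.3.2
./configure --prefix=/usr/local
make
sudo make install
sudo ldconfig   # refresh the linker cache so Z3 can resolve libgmp.so.3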
