Compile a C++ project with Bazel for x86 (32 bit) - bazel

I got a C++ project for Bazel, which by default builds for 64-bit Windows on my machine. However, I want to create a 32-bit executable, which, according to the documentation, is supported.
I have tried these commands:
bazel build :knusperli --platforms @bazel_tools//platforms:x86_32
Target @bazel_tools//platforms:x86_32 was referenced as a platform, but does not provide PlatformInfo
bazel build :knusperli --cpu i386_windows
ERROR: No toolchain found for cpu 'i386_windows'.
I thought, since Visual Studio can build 32-bit executables, it would be easy in Bazel as well, but I can't find any information on how to actually do this.

Bazel does not support building 32-bit binaries out-of-the-box. It's possible to add support via a custom CROSSTOOL file.
See:
https://github.com/bazelbuild/bazel/wiki/About-the-CROSSTOOL
https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain
https://github.com/bazelbuild/bazel/wiki/Yet-Another-CROSSTOOL-Writing-Tutorial
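For orientation, the platform half of such a setup might look like the sketch below. This assumes the standard @platforms constraint repository and a hypothetical //config package; the custom CROSSTOOL/toolchain described in the linked pages still has to be written and registered so that it matches these constraints.
# BUILD file in a hypothetical //config package
platform(
    name = "windows_x86_32",
    constraint_values = [
        "@platforms//os:windows",
        "@platforms//cpu:x86_32",
    ],
)
# once a matching 32-bit cc toolchain is registered, the build could be invoked as:
# bazel build :knusperli --platforms=//config:windows_x86_32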

Related

How to provide additional flags only to the native cc compiler?

In our project team we build applications for micro-controller targets with arm-none-eabi-gcc. For running our test runner we compile with the native cc compiler.
In our project team most people use a Linux OS, but some use a Windows OS (for reasons). The issue we run into is that we use the C11 keyword _Static_assert. When compiling on Linux it works, but when compiling under Windows we get the following error: error LNK2019: unresolved external symbol _Static_assert referenced in function {}
This is because the default MSVC compiler implements ANSI C89, which doesn't support _Static_assert. MSVC - C standards support
It also specifies that adding the /std:c11 or /std:c17 compiler options enable the support for _Static_assert.
We already have enabled --enable_platform_specific_config for other reasons.
Simply adding build:windows --copt="/std:c11" to the .bazelrc solves the issue, but it also breaks the normal application build, because the arm-none-eabi-gcc compiler doesn't support the /std:c11 compiler flag.
Question: How can I add the /std:c11 compiler flag so that it only propagates to the native MSVC compiler and not to the rest of my build targets?
When using MSVC for host builds (when building a tool that emits source code which is then #include'd, for example the protocol buffer compiler), add build:windows --host_copt="/std:c11". When using MSVC for test builds, add test:windows --host_copt="/std:c11". Don't put it in build:windows --copt, because that would apply to target builds as well.
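For illustration, the relevant .bazelrc lines might end up looking like this; a sketch under the assumptions above (--enable_platform_specific_config already enabled, MSVC only ever acting as the host/test compiler, never as the arm-none-eabi-gcc target compiler):
# .bazelrc (sketch)
common --enable_platform_specific_config
# applied only on Windows, and only to the host (exec) configuration, i.e. tools built with MSVC
build:windows --host_copt="/std:c11"
# same flag when invoking bazel test on Windows
test:windows --host_copt="/std:c11"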

Is it possible to use TensorFlow C++ API on Windows?

I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
There is a detailed guide by Joe Antognini and a similar TensorFlow README on GitHub explaining how to build the TensorFlow source via CMake. You also need SWIG installed on your machine, which allows connecting the C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui) for this step.
In the CMake configuration I used the Visual Studio 15 2017 generator. Once this stage completes successfully, you can click the Generate button and go ahead with the actual build process.
However, on Visual Studio 2015, when I attempted to build via the "ALL_BUILD" project, the setup gave me a "build tools for v141 cannot be found" error. This did not go away even when I attempted to retarget my solution. In the end the solution built successfully with Visual Studio 2017. You also need to set the SWIG_EXECUTABLE path manually in CMake before it configures successfully.
As indicated in the Antognini link, for me the build took about half an hour on a 16GB RAM, Core i7 machine. Once done, you might want to validate your build by attempting to run the tf_tutorials_example_trainer.exe file.
Hope this helps!
For our latest work on building TensorFlow C++ API on Windows, please look at this github page. This works on Windows 10, currently without CUDA support (only CPU).
PS:
Only the bazel build method works, because CMake is not supported and not maintained anymore, resulting in CMake configuration errors.
I had to use a downgraded version of my Visual Studio 2017 (from 15.7.5 to 15.4) by adding "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
If you want to enable GPU computation, do check your graphics card here (Compute Capability > 3.5). Remember to install all the packages (CUDA Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake...) and add their paths to the environment variables at the beginning.
I made a README detailing how I built the TensorFlow .dll and .lib files for the C++ API on Windows, with GPU support, building from source with Bazel (TensorFlow version 1.14).
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware, installing Bazel etc.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows which commands to run to create the .lib and .dll.
Then, to test your .lib, you link it into your C++ project,
and it then shows you how to identify and fix the missing symbols using the TF_EXPORT macro.
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.

Compiling C Source for iOS

I have some existing source code that is written in C that I want to build and include in my iOS project. The entire source package is very large and is built using existing Makefiles and GCC. It is producing static libraries (.a files) that I would love to move over to my iOS project. However, the static libraries the Makefile produces are for x86 processors, which obviously won't work on iOS.
Is there a way I can switch GCC to build for ARMv7/ARM64 instead, without making changes to the existing source (in most cases)? I know there is the -march switch for GCC or you can download ARM specific GCC compilers, so I know the general concept of building for a different architecture than the build machine.
To build for ARM on Mac OS, will I have to download a different GCC compiler or is that capability built into the default GCC?
I'm sorry for the lack of understanding of basic concepts here; I'm primarily a Java and Objective-C developer, so building source for different architectures is a mostly foreign concept to me.
Whilst GCC supports a good many CPU architectures and platforms, a given build of it usually targets a single one. To compile for ARM you generally need a GCC cross-compiler targeted appropriately.
The default system compiler for MacOSX and iOS, for all architectures, is clang and has been for some time (the last version of GCC Apple shipped in its dev tools is creaking and obsolete, and definitely won't support ARMv8).
The usual way of getting clang is to install Xcode (free from the App Store). There's an option in the installer (and in the UI of Xcode) to install the command-line tools package. This installs symlinks to the compiler in /usr/bin, along with a bunch of other things you might expect, such as make.
clang is (mostly) command-line compatible with gcc, and furthermore, you'll find that if you run gcc from the command-line on a Mac with dev-tools installed, you in fact get clang.
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
clang on MacOSX comes with back-ends for ARMv7, ARMv8, i686 and x86_64, and can be configured to compile for any of these from the command line (see the documentation).
Given the above, there's a fair chance your code will compile with minimal changes to the compiler flags in the existing makefile. You might also want to read the documentation for lipo, which allows you to produce multi-architecture binaries.
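As a rough sketch of what that looks like in practice (file names are placeholders, and the set of architectures you need depends on your deployment target):
# compile the same source once per architecture against the iOS SDK
xcrun --sdk iphoneos clang -arch arm64 -c foo.c -o foo_arm64.o
xcrun --sdk iphoneos clang -arch armv7 -c foo.c -o foo_armv7.o
# archive each object set into a static library
ar rcs libfoo_arm64.a foo_arm64.o
ar rcs libfoo_armv7.a foo_armv7.o
# merge them into one fat library that an iOS project can link against
lipo -create libfoo_arm64.a libfoo_armv7.a -output libfoo.a
With an existing Makefile, the usual approach is to override CC (for example CC="xcrun --sdk iphoneos clang -arch arm64"), run the build once per architecture, and then combine the resulting libraries with lipo.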

How to build os image including gcc g++ tool chain for ARM platform?

I am trying to build an OS image for the TI OMAP4 Pandaboard. The downloaded BSP can be built, but it is very limited without the gcc/g++ compilers. I think it is very difficult to add the toolchain in the QNX Momentics IDE, because there are so many files to be added. Can I manually modify the buildfile to do it? If possible, please give me an example. Thanks in advance.
No, it is not possible to run g++ on your TI OMAP4 Pandaboard (unless you build g++ from sources for the ARM platform using the existing QNX toolchain running on an X86 platform).
Why not possible: QNX releases its build tools only for X86-based hosts. The currently supported host OSes include some variants of Windows, Linux and QNX, but the precondition is that the host hardware is X86-based.
Likely you do not actually want to build your library on the target hardware; it should not matter where you actually do the build (except in very special cases where you build some source code based on user input, etc.)
What you need to do is build your library on your development host using the ARM toolchain (QCC if you want to use the high-level tools; ntoarmv7-g++ if you want to use the familiar g++ interface). Once you have your binary you can include it in the .ifs file. You just need to include a line in the .build file, similar to the following example:
/path/on/targetfs/yourbinary=/path/on/buildmachine/yourbinary
If your build environment is configured so that mkifs finds your binary then you can omit the "path/on/buildmachine" part.
If you are fine with having the binary on your target under /proc/boot then you can omit the "/path/on/targetfs/" part as well.
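For concreteness, the host-side compile step mentioned above might look roughly like this; the -V variant name is an assumption that depends on your QNX SDP version, and main.cpp is a placeholder:
# on the X86 development host, using the QNX ARMv7 toolchain
QCC -V gcc_ntoarmv7le -o yourbinary main.cpp
# or, with the familiar g++-style front end
ntoarmv7-g++ -o yourbinary main.cpp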
For ease of development it would usually be more convenient for you to store your binary on the SD card with a FAT filesystem. Then you can just copy your binary to the SD without having to rebuild the .ifs file.
Finally, once you get experienced you will want to export a part of your host-machine's filesystem via CIFS or NFS and mount it directly from your target. This will save all the trouble of having to copy files (and, possibly, reboot the target) in each build cycle. But this is far off from your original question.
I think you are trying to get the QNX C/C++ compiler to run on your target board. Correct?
If so, rather than installing the Runtime Kit, you install the QNX Software Development Platform and you should be good to go.
You can also use the System Builder to customize your QNX OS, but this is going to be harder than just using the QNX SDP.
One other note: QNX uses qcc for C and QCC for C++ instead of gcc. They both use gcc under the hood, but to compile on QNX, use qcc instead of gcc.

Cross Compiling a library from intel to arm

I am using open source C++ library DCMTK from http://dicom.offis.de/dcmtk.php.en.
I have successfully compiled this library on Windows using VC++ IDE, MacOS Xcode, Mac iOS simulator.
But I am not able to compile this library for the iOS device, as it is an ARM-based architecture.
The DCMTK library compiled very well on the Intel architecture.
Now my problem statement is:
I need to compile this DCMTK C++ library on ARM architecture by cross compilation.
I am using Ubuntu 64 bit machine for cross compilation.
I have installed binaries from GNU ARM tool chain from http://www.gnuarm.com/
I am using the GCC toolchain 4.0 binaries (binutils-2.16.1, gcc-4.0.2-c-c++, newlib-1.14.0, insight-6.4, TAR BZ2 [65.5MB]) on the 64-bit Ubuntu machine for ARM cross compilation.
After installing these binaries on Ubuntu I set the PATH environment variable to
PATH=$PATH/gnu_arm/bin
For configuring the DCMTK C++ library I ran the following command in the shell:
CC=arm-elf-gcc CXX=arm-elf-g++ AR=arm-elf-ar RANLIB=arm-elf-ranlib ARFLAGS=cruv ./configure --prefix=$home_dicom --target=arm-elf --host=arm-elf --enable-std-includes --disable-threads
It creates a makefile properly. Now I am trying to compile the code using the make command, but I am facing many compilation errors, such as:
1) I tried to compile the first dependent C++ library, which is ofstd.
I got errors for the DIR*, struct dirent, opendir() and closedir() calls.
It includes <dirent.h> for these calls, but I did not find any definitions for the above calls in that header file.
2) When I compile another library oflog I got the following errors like
error: ntohs was not declared in this scope
error: ntohl was not declared in this scope
error: htons was not declared in this scope
error: htonl was not declared in this scope.
These calls are networking calls and are not defined in any of the header files from the GNU ARM toolchain.
I tried downloading the sources of the ARM binaries, extracted the tar files, and tried to copy the missing header files into the installed GNU ARM toolchain on Ubuntu.
Some files compile after changes to the copied header files, and others again give compilation errors. There is an endless loop of compilation errors across the files in the DCMTK library, because some of the standard header files are missing.
Please suggest whether there is any other toolchain available for ARM cross compilation on a 64-bit Ubuntu machine, or any other good solution apart from this.
Thanks!!!
Amit
There are many areas where problems can arise when it comes to cross compiling. There are three main flags for cross compiling: --host, --target, and --build. The --host flag names the machine the resulting binaries will run on. The --build flag names the system you are compiling on. The --target flag is only relevant when building tools that will themselves be used for cross compiling, for example when building your own GCC toolchain. So in your case you won't set the target flag, since you're not building a toolchain; the --host flag will be arm-elf, and the --build flag will be amd64.
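For illustration, a configure invocation along those lines might look like this; it is only a sketch (the prefix and feature flags are taken from the question, and the --build triplet is an assumed spelling of a 64-bit Ubuntu host), not a verified DCMTK recipe:
CC=arm-elf-gcc CXX=arm-elf-g++ ./configure --prefix=$home_dicom --host=arm-elf --build=x86_64-pc-linux-gnu --enable-std-includes --disable-threads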
Usually a cross compilation fails if there are inconsistencies between the regular C compiler and the cross compiler. I have compiled several libraries for the AVR32 with a toolchain generated by buildroot, but in some cases (the socat project, for example) it hasn't been possible.
Your host, your target and the CXX flags look OK. I think it is not necessary to set the AR variable (that is the point of the host and target options).
On the other hand, here is an example for the expat libraries for the AVR32:
./configure --host=avr32-linux --prefix=/home/juan/builds/build_expat/ CC=avr32-linux-gcc
make; make install
I can recommend that you try to cross compile from an ia32 architecture; I had several problems with 64-bit Ubuntu in the past.
