In our project team we build applications for micro-controller targets with arm-none-eabi-gcc. For running our test runner we compile with the native cc compiler.
Most of the team uses a Linux OS, but some use a Windows OS (for reasons). The issue we run into is that we use the C11 keyword _Static_assert. When compiling on Linux it works, but when compiling under Windows we get the following error: error LNK2019: unresolved external symbol _Static_assert referenced in function {}
This is because the default MSVC compiler implements ANSI C89, which doesn't support the _Static_assert keyword (see MSVC - C standards support).
That page also specifies that adding the /std:c11 or /std:c17 compiler option enables support for _Static_assert.
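For reference, a minimal use of the keyword we rely on (the asserted condition here is only an example):
#include <assert.h>   /* in C11 this header also provides the static_assert convenience macro */
/* compile-time check: the build fails if the condition does not hold */
_Static_assert(sizeof(int) == 4, "int is expected to be 4 bytes");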
We have already enabled --enable_platform_specific_config for other reasons.
Simply adding build:windows --copt="/std:c11" to the .bazelrc solves the issue, but it also breaks the normal application build, because the arm-none-eabi-gcc compiler doesn't support the /std:c11 compiler flag.
Question: How can I add the /std:c11 compiler flag so that it only propagates to the native MSVC compiler and not to the rest of my build targets? Any help is much appreciated.
When using MSVC for host builds (when building a tool that emits source code which is then #include'd, for example the protocol buffer compiler), add build:windows --host_copt="/std:c11". When using MSVC for test builds, add test:windows --host_copt="/std:c11". Don't put it in build:windows --copt, because that would also apply when building for the target.
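Putting that together, a minimal .bazelrc sketch along the lines of the answer above (it assumes --enable_platform_specific_config is enabled, as in the question, so the windows config is selected automatically on Windows hosts):
build --enable_platform_specific_config
# applied only by the MSVC host/test compiles on Windows, not by arm-none-eabi-gcc
build:windows --host_copt="/std:c11"
test:windows --host_copt="/std:c11"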
I'm having link errors when trying to use HDF5 libraries installed using vcpkg with Visual Studio 2019 on Windows 10.
I installed HDF5 1.12.0 on Windows 10 using vcpkg:
PowerShell: .\vcpkg install hdf5 hdf5:x64-windows
I then attempted to use Visual Studio 2019 to build my project that uses HDF5, but I keep getting the following LNK2001 errors.
unresolved external symbol H5T_IEEE_F64BE_g
unresolved external symbol H5T_STD_I64BE_g
unresolved external symbol H5T_C_S1_g
unresolved external symbol H5T_NATIVE_INT_g
unresolved external symbol H5T_NATIVE_DOUBLE_g
I tried to solve this by directly adding the additional library directories under vcpkg/packages/ for HDF5, SZIP, and ZLIB (which were installed automatically as part of the HDF5 installation step above), and I also added the library files to the additional dependencies in the order prescribed by the HDF5 documentation:
hdf5_hl.lib
hdf5.lib
szip.lib
zlib.lib
But I still have the unresolved external symbol errors.
All those symbols are prefixed with H5_DLLVAR. As such, you need to explicitly set the preprocessor definition H5_BUILT_AS_DYNAMIC_LIB if you are not using CMake and rely only on the MSBuild integration vcpkg provides. You could also open an issue with vcpkg, since it should embed that definition into the correct hdf5 header when the library is built dynamically.
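A minimal sketch of what that looks like in source (equivalently, add H5_BUILT_AS_DYNAMIC_LIB to the project's preprocessor definitions in Visual Studio):
/* must be defined before the HDF5 headers so the H5_DLLVAR globals are
   declared as DLL imports instead of plain externs */
#define H5_BUILT_AS_DYNAMIC_LIB
#include "hdf5.h"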
The vcpkg command that I used installs the dynamic version of the libraries. Instead I installed the static version of the libraries using:
./vcpkg install hdf5:x64-windows-static
No manual inclusion of the library directories or libraries themselves is needed. Just be sure to run this command as well (when first installing vcpkg):
./vcpkg.exe integrate install
Once that was done, Visual Studio 2019 was able to properly use the HDF5 libraries for my project and the linker error was gone (binary produced).
Hope this helps someone in the future!
I have a C++ project that builds with Bazel, which by default produces a 64-bit Windows binary on my machine. However, I want to create a 32-bit executable, which, according to the documentation, is supported.
I have tried these commands:
bazel build :knusperli --platforms=@bazel_tools//platforms:x86_32
Target @bazel_tools//platforms:x86_32 was referenced as a platform, but does not provide PlatformInfo
bazel build :knusperli --cpu i386_windows
ERROR: No toolchain found for cpu 'i386_windows'.
I thought, since Visual Studio can build 32-bit executables, it would be easy in Bazel as well, but I can't find any information on how to actually do this.
Bazel does not support building 32-bit binaries out of the box. It's possible to add support via a custom CROSSTOOL file; see the links below and the platform sketch after them.
See:
https://github.com/bazelbuild/bazel/wiki/About-the-CROSSTOOL
https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain
https://github.com/bazelbuild/bazel/wiki/Yet-Another-CROSSTOOL-Writing-Tutorial
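As a starting point, a hypothetical platform definition you could pair with such a custom toolchain (the target name is illustrative; the constraint labels come from the standard @platforms repository, and a matching 32-bit cc toolchain still has to be written and registered):
# BUILD file sketch: describes 32-bit Windows as a platform to resolve toolchains against
platform(
    name = "windows_x86_32",
    constraint_values = [
        "@platforms//os:windows",
        "@platforms//cpu:x86_32",
    ],
)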
When I compile my code using Borland C++Builder (I am required to use the Borland compiler), bcc32.exe compiles the code successfully. When I build the same code with the cov-build command inside cmd.exe, the build fails with errors like:
cannot open source file "iostream"
What is the possible reason behind this, and how do I debug it?
Here is the code
Coverity requires that you configure your compiler in the same environment that you build in. If you fail to do so, the configuration probes will not pick up your include paths, among other things.
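A rough sketch of that workflow, run from the same cmd.exe session the real build uses (the compiler type and build command below are placeholders, not values taken from the question):
REM probe the Borland compiler from the build environment so its include paths are captured
cov-configure --compiler bcc32.exe --comptype <borland-compiler-type> --template
REM then wrap the actual build from that same session
cov-build --dir cov-int <your normal build command>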
I have some existing source code that is written in C that I want to build and include in my iOS project. The entire source package is very large and is built using existing Makefiles and GCC. It produces static libraries (.a files) that I would love to move over to my iOS project. However, the static libraries the Makefile produces are for x86 processors, which obviously won't work on iOS.
Is there a way I can switch GCC to build for ARMv7/ARM64 instead, without making changes to the existing source (in most cases)? I know there is the -march switch for GCC or you can download ARM specific GCC compilers, so I know the general concept of building for a different architecture than the build machine.
To build for ARM on Mac OS, will I have to download a different GCC compiler or is that capability built into the default GCC?
I'm sorry for the lack of understanding of basic concepts here; I'm primarily a Java and Objective-C developer, so building source for different architectures is a mostly foreign concept to me.
Whilst GCC supports a good many CPU architectures and platforms, a given build of it usually targets a single one. To compile for ARM you generally need an ARM cross-compiling GCC targeted appropriately.
The default system compiler for Mac OS X and iOS, for all architectures, is clang and has been for some time (the last version of GCC Apple shipped in the dev tools is creaking and obsolete, and definitely won't support ARMv8).
The usual way of getting clang is to install Xcode (free from the App Store). There's an option in the installer (and in the UI of Xcode) to install the command-line tools package. This installs symlinks in /usr/bin to the compiler, and installs a bunch of other stuff you might expect, such as make.
clang is (mostly) command-line compatible with gcc, and furthermore, you'll find that if you run gcc from the command-line on a Mac with dev-tools installed, you in fact get clang.
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
clang on Mac OS X ships with support for ARMv7, ARMv8, i686 and x86_64, and can be configured to compile for any of these from the command line (see the documentation).
Given the above, there's a fair chance your code will compile with minimal changes to the compiler flags in the existing Makefile. You might also want to read the documentation for lipo, which allows you to produce multi-architecture binaries.
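For illustration, a rough sketch of what that can look like from the shell or a Makefile (the file names, the chosen architectures, and the use of xcrun are assumptions, not part of the question):
# compile the same source once per iOS architecture against the iOS SDK
xcrun --sdk iphoneos clang -arch arm64 -c foo.c -o foo_arm64.o
xcrun --sdk iphoneos clang -arch armv7 -c foo.c -o foo_armv7.o
# archive each slice, then merge the per-architecture libraries into one fat library
ar rcs libfoo_arm64.a foo_arm64.o
ar rcs libfoo_armv7.a foo_armv7.o
lipo -create libfoo_arm64.a libfoo_armv7.a -output libfoo.a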
I am using the open source C++ library DCMTK from http://dicom.offis.de/dcmtk.php.en.
I have successfully compiled this library on Windows using the VC++ IDE, on Mac OS with Xcode, and for the iOS simulator.
But I am not able to compile this library for an iOS device, since that is an ARM-based architecture.
The DCMTK library compiles fine on the Intel architecture.
Now my problem statement is:
I need to compile this DCMTK C++ library for the ARM architecture by cross compilation.
I am using a 64-bit Ubuntu machine for the cross compilation.
I have installed the binaries of the GNU ARM toolchain from http://www.gnuarm.com/
I am using the GCC 4.0 toolchain binaries (binutils-2.16.1, gcc-4.0.2 C/C++, newlib-1.14.0, insight-6.4, TAR BZ2 [65.5MB]) for a 64-bit Ubuntu machine for the ARM cross compilation.
After installing these binaries on Ubuntu I set the PATH environment variable to
PATH=$PATH/gnu_arm/bin
To configure the DCMTK C++ library I ran the following command in the shell:
CC=arm-elf-gcc CXX=arm-elf-g++ AR=arm-elf-ar RANLIB=arm-elf-ranlib ARFLAGS=cruv ./configure --prefix=$home_dicom --target=arm-elf --host=arm-elf --enable-std-includes --disable-threads
It creates a Makefile properly. Now I am trying to compile the code with the make command, but I am facing many compilation errors, such as:
1) I tried to compile the first dependent C++ library, ofstd.
I got errors for the DIR*, struct dirent, opendir() and closedir() calls.
The code includes <dirent.h> for these calls, but I did not find any definitions for them in that header file.
2) When I compile another library, oflog, I get the following errors:
error: ntohs was not declared in this scope
error: ntohl was not declared in this scope
error: htons was not declared in this scope
error: htonl was not declared in this scope.
These are networking calls and they are not declared in any of the header files shipped with the GNU ARM toolchain.
I downloaded the sources of the ARM binaries, extracted the tar files, and tried to copy the missing header files into the GNU ARM installation on Ubuntu.
Some files then compile after further changes to the copied headers, while others still fail; it turns into an endless loop of compilation errors across the files in the DCMTK library, because some of the standard header files are simply missing.
Please suggest whether there is any other toolchain available for ARM cross compilation on a 64-bit Ubuntu machine.
Or any other good solution apart from this.
Thanks!!!
Amit
There are many areas for problems when it comes to cross compiling. There are three main flags: --host, --target, and --build. The --host flag names the machine the resulting binaries will run on. The --build flag names the system you are compiling on. The --target flag is for building tools that will themselves be used for cross compiling, for example if you were to build your own GCC toolchain. So in your case you don't set the target flag, since you are not building a toolchain; the --host flag will be arm-elf, and the --build flag will be amd64.
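For instance, a hedged sketch of a configure line along those lines (the build triplet is an assumption about the Ubuntu machine; use whatever config.guess reports on your system):
# runs the configure on x86_64 Linux (--build) and produces arm-elf binaries (--host)
CC=arm-elf-gcc CXX=arm-elf-g++ ./configure --build=x86_64-pc-linux-gnu --host=arm-elf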
Usually a cross compilation fails when there are inconsistencies between the regular C compiler and the cross compiler. I have compiled several libraries for the AVR32 with a toolchain generated by buildroot, but in some cases (the socat project, for example) it hasn't been possible.
Your host, your target and the CXX flags look OK. I don't think it is necessary to set the AR variable (that is the point of the host and target options).
On the other hand, here is an example for the expat library on the AVR32:
./configure --host=avr32-linux --prefix=/home/juan/builds/build_expat/ CC=avr32-linux-gcc
make; make install
I can also recommend trying the cross compilation from an ia32 (32-bit x86) machine; I have had several problems with 64-bit Ubuntu in the past.