Does anyone know why AddressSanitizer would be pulling in a whole different set of library functions?
For instance, I was trying to recreate strcmp and comparing my output against the standard strcmp from string.h. Compiled normally with gcc, strcmp returns the difference between the first mismatching characters, but with the -fsanitize=address flag added it only ever returns 1, 0, or -1.
Both gcc and clang behave the same way.
I am on OS X 10.11.6, by the way.
Is this behavior unique to macOS, or do other systems show the same effect?
From what I have read, the strcmp in the GNU C library returns the difference, while Apple's version only returns 1, -1, or 0.
That makes this even more puzzling, because gcc/clang on macOS seems to be using the GNU libc behavior by default and somehow shifts to Apple's version of libc when the -fsanitize=address flag is used.
If anyone can explain this to me I would be very grateful.
Just in case, this is my gcc configuration:
➜ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
-fsanitize=address forces your binary to link against the ASan runtime, which intercepts (overloads) a lot of standard functions, including strcmp. The interception is done so that the input arguments to these functions can be checked. The ASan implementations are generally standard-compliant, but they don't follow all the nits of a particular platform's libc, so that may be the reason for the differences you see.
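As an illustration (a sketch of my own, not the actual ASan or libc source), the C standard only specifies the sign of strcmp's result, so an implementation that returns the raw character difference and one that returns only -1/0/1 are both conforming:

#include <stdio.h>

/* Returns the raw difference of the first mismatching characters,
 * the way glibc's strcmp tends to behave. */
static int strcmp_diff(const char *a, const char *b)
{
    while (*a && *a == *b) { a++; b++; }
    return (unsigned char)*a - (unsigned char)*b;
}

/* Returns only the sign, as Apple's libc and the ASan interceptor apparently do. */
static int strcmp_sign(const char *a, const char *b)
{
    int d = strcmp_diff(a, b);
    return (d > 0) - (d < 0);
}

int main(void)
{
    printf("%d %d\n", strcmp_diff("apple", "apply"),   /* -20 */
                      strcmp_sign("apple", "apply"));  /* -1  */
    return 0;
}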
I would like to create an LLVM pass to optimize OpenCL kernels for NVIDIA cards. I wonder if that is possible.
I have tried the following:
clang -Xclang -load -Xclang lib/simplePass.so main.c
This did not work; it could not alter the kernel code.
Compiling separately, then linking.
That also did not work; it gave me an error that get_global_id is undefined.
Using an offline compiler, then clCreateProgramWithBinary.
I followed Apple's example. It worked with the Intel GPU, but I was not able to use an LLVM pass. When I tried to use it, I got this error:
LLVM ERROR: Sized aggregate specification in datalayout string
When I tried to adapt it to Xubuntu, it did not work.
Is there any other method I can try? I know I could use SPIR-V, but Nvidia does not currently support OpenCL 2.2.
Thank you for your time.
I am using clang to generate LLVM IR for Nvidia OpenCL and CUDA kernels, which I want to subsequently instrument, doing something like this for OpenCL:
clang -c -x cl -S -emit-llvm -cl-std=CL2.0 kernel.cl -o kernel.ll
and what's described here for CUDA.
What I am looking for is a way to go from the instrumented IR to an actual binary. For the CUDA case I know I can use the NVPTX backend to generate PTX and then JIT-compile it as described here (or perhaps use ptxas?). I was wondering if something similar is also possible for the OpenCL case, and if so, perhaps a minimal example. Thanks in advance.
You can in principle extract binaries for loaded and compiled OpenCL kernels by using clGetProgramInfo() with CL_PROGRAM_BINARY_SIZES and CL_PROGRAM_BINARIES.
As far as I'm aware, this will produce binaries in an entirely implementation-defined format. So if you're unlucky, you just get IR code back anyway. With any luck, it might contain PTX machine code on your platform, however.
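Roughly, the extraction looks like this (a sketch only, error checking omitted; prog is assumed to be an already-built cl_program for a single device):

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Dump the device binary of an already-built program.
 * On NVIDIA the resulting blob may well be PTX text. */
static void dump_binary(cl_program prog)
{
    size_t size = 0;   /* CL_PROGRAM_BINARY_SIZES gives one size per device */
    clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, NULL);

    unsigned char *buf = malloc(size);
    unsigned char *bufs[1] = { buf };   /* one pointer per device */
    clGetProgramInfo(prog, CL_PROGRAM_BINARIES, sizeof(bufs), bufs, NULL);

    FILE *f = fopen("program.bin", "wb");
    fwrite(buf, 1, size, f);
    fclose(f);
    free(buf);
}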
I need to compile OpenCL kernels to SPIR-V to use with Vulkan. I tried Google's clspv (https://github.com/google/clspv), but a problem occurs with vectorization: functions like vload8 don't work. So I need to compile OpenCL kernels to SPIR-V using clang.
I'm the project lead for Clspv. Jesse is right overall.
The lack of support for vectors of length 8 and 16 is deliberately out of scope for now.
That's because Vulkan itself does not support such vectors.
We haven't added code to emulate them, and we don't have plans to do so even in the medium term.
There is more info in an old closed issue:
https://github.com/google/clspv/issues/8
Clspv is the only toolchain I'm aware of that compiles OpenCL C to Vulkan-compatible SPIR-V. You'll need to file an issue against Clspv; attaching a kernel that fails to compile properly would help a lot.
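For reference, a kernel of roughly this shape (my own minimal example, not the original failing kernel) is the kind that currently hits the 8-wide vector limitation, because it relies on float8/vload8/vstore8:

__kernel void copy8(__global const float *in, __global float *out)
{
    size_t i = get_global_id(0);
    float8 v = vload8(i, in);   /* 8-wide vector load: no Vulkan SPIR-V equivalent */
    vstore8(v, i, out);
}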
You can follow this Khronos project:
https://github.com/KhronosGroup/SPIR/tree/spirv-1.1
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=c++ \
  -I include kernel.cl -o kernel.spv          # For OpenCL C++
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=CL2.0 \
  -include opencl.h kernel.cl -o kernel.spv   # For OpenCL C
I've been trying to make the switch to LLVM, since I'd like to get more into the whole software-dev scene, and it seems that right now LLVM is the future. I have built LLVM/Clang/LLD/compiler-rt/libcxx from source several times now, both with GNU/GCC and with LLVM/Clang.
The problem appears when I try to use the newly compiled compilers. From what I can see, clang is using GNU ld rather than LLVM's lld. Is this true?
LLD seems to be a very limited program judging from the lld -help output, but from what I have read it is as full-featured as ld. I cannot find documentation on how to use it anywhere -- does anyone know where I can find some kind of comprehensive manual for it?
Thank you.
Pass -fuse-ld=lld to clang to make it use lld for linking. By now, it's in very good shape.
You can pass -v or -### to clang to make it print which linker command it runs or would run.
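For example (a hypothetical invocation, substitute your own source file):

clang -fuse-ld=lld -v main.c -o main

The -v output should show a link line that invokes lld rather than the default system linker.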
There's no manual for the moment, and depending on the platform it may work well enough for you. That said, if lld were "production ready" we would have switched clang to using it by default on the various platforms. It's not there yet, so I wouldn't suggest you use it for your day-to-day development.
The LLVM team says it is production ready, because FreeBSD can compile and link a lot of things with LLD.
The documentation for the LLD project can be found at http://lld.llvm.org/.
It says:
LLD is a drop-in replacement for the GNU linkers that accepts the same command line arguments and linker scripts as GNU.
So you can use the same arguments as with GNU ld.
I know this question is old, but there is a newer solution to it:
To use the ld.lld linker when building any LLVM target, just pass -DLLVM_ENABLE_LLD=ON on the command line to cmake.
//Use lld as C and C++ linker.
LLVM_ENABLE_LLD:BOOL=TRUE
For other cmake projects, pass: -DCMAKE_LINKER=/etc/bin/ld.lld
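Putting that together, typical invocations might look like this (the source directory and the ld.lld path are illustrative; adjust them to your setup):

cmake -G Ninja -DLLVM_ENABLE_LLD=ON ../llvm    # when building LLVM itself
cmake -DCMAKE_LINKER=/usr/bin/ld.lld ..        # for another CMake project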
I'm confused about one aspect of LLVM:
For all the languages it supports, does it support compiling both to the intermediate code AND to straight binary?
For instance, if I write something in C, can LLVM (or Clang?) compile to either binary (like GCC) or intermediate code?
Or can only some languages be converted to intermediate? I guess it goes without saying that this intermediate requires some type of LLVM runtime? I never really hear about the runtime, though.
LLVM is a framework for manipulating LLVM IR (the "bytecode" you're alluding to) and lowering it to target-specific binaries (for example x86 machine code). Clang is a front-end for C/C++ (and Objective C) that translates these source languages into LLVM IR.
With this in mind, answering your questions:
For all the languages it supports, does it support compiling both to
the intermediate code AND to straight binary?
LLVM can compile IR (intermediate code) to binary (or to assembly text).
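Concretely, the llc tool performs that lowering (file names are hypothetical):

llc hello.ll -o hello.s                  # LLVM IR -> target assembly
llc -filetype=obj hello.ll -o hello.o    # LLVM IR -> native object file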
For instance, if I write something in C, can LLVM (or Clang?) compile
to either binary (like GCC) or intermediate code?
Yes. Clang can compile your code to a binary directly (using LLVM as a backend), or just emit LLVM IR if you want that.
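For instance (hypothetical file names):

clang -S -emit-llvm hello.c -o hello.ll   # stop at LLVM IR in its textual form
clang hello.c -o hello                    # compile straight through to a native binary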
Or can only some languages be converted to intermediate? I guess it
goes without saying that this intermediate requires some type of LLVM
runtime?
Theoretically, once you have LLVM IR, the LLVM library can convert it to binary. Some languages require a runtime (say Java, or Python), so any compiler from these languages to LLVM IR will have to provide a runtime in one way or another. LLVM has some support for connecting to such runtimes (for example - GC hooks) but carries no "runtime of its own". The only "runtime" project related to LLVM is compiler-rt, which provides fast implementations of some language/compiler builtins and intrinsics. It's mainly used for C/C++/Objective C. It's not officially part of LLVM, though full toolchains based on Clang often use it.