I am using clang to generate LLVM IR for NVIDIA OpenCL and CUDA kernels, which I want to subsequently instrument, doing something like this for OpenCL:
clang -c -x cl -S -emit-llvm -cl-std=CL2.0 kernel.cl -o kernel.ll
and what's described here for CUDA.
What I am looking for is a way to go from the instrumented IR to an actual binary. For the CUDA case I know I can use the NVPTX backend to generate PTX and JIT-compile it as described here (or perhaps use ptxas?). I was wondering whether something similar is also possible for the OpenCL case, and if so, a minimal example would be appreciated. Thanks in advance.
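For reference, the CUDA path I have in mind is roughly the following minimal sketch (the embedded PTX is just a trivial empty kernel standing in for what llc -march=nvptx64 would emit from the instrumented IR, and "my_kernel" is a placeholder name):

/* ptx_jit.c: JIT-load PTX with the CUDA Driver API (link with -lcuda) */
#include <stdio.h>
#include <cuda.h>

/* Stand-in for the PTX that llc would emit from the instrumented IR. */
static const char *ptx =
    ".version 6.0\n"
    ".target sm_30\n"
    ".address_size 64\n"
    ".visible .entry my_kernel() { ret; }\n";

int main(void) {
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* The driver JIT-compiles the PTX to device code here (what ptxas does offline). */
    if (cuModuleLoadDataEx(&mod, ptx, 0, NULL, NULL) != CUDA_SUCCESS) {
        fprintf(stderr, "PTX JIT failed\n");
        return 1;
    }
    cuModuleGetFunction(&fn, mod, "my_kernel");
    cuLaunchKernel(fn, 1, 1, 1, 1, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();
    return 0;
}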
You can in principle extract binaries for loaded and compiled OpenCL kernels by using clGetProgramInfo() with CL_PROGRAM_BINARY_SIZES and CL_PROGRAM_BINARIES.
As far as I'm aware, this will produce binaries in an entirely implementation-defined format, so if you're unlucky you just get IR back anyway. With any luck, it might contain PTX code on your platform, however.
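A minimal sketch of how that query looks (program and num_devices are assumed to come from your already-built cl_program, and the output file names are arbitrary):

/* dump_binaries.c: save the device binaries of a built cl_program to disk */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static void dump_binaries(cl_program program, cl_uint num_devices) {
    size_t *sizes = malloc(num_devices * sizeof(size_t));
    unsigned char **binaries = malloc(num_devices * sizeof(unsigned char *));

    /* First query the size of each per-device binary, then the binaries themselves. */
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     num_devices * sizeof(size_t), sizes, NULL);
    for (cl_uint i = 0; i < num_devices; ++i)
        binaries[i] = malloc(sizes[i]);
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     num_devices * sizeof(unsigned char *), binaries, NULL);

    /* On NVIDIA this is typically PTX text; elsewhere it is implementation-defined. */
    for (cl_uint i = 0; i < num_devices; ++i) {
        char name[64];
        snprintf(name, sizeof(name), "kernel_dev%u.bin", (unsigned)i);
        FILE *f = fopen(name, "wb");
        fwrite(binaries[i], 1, sizes[i], f);
        fclose(f);
        free(binaries[i]);
    }
    free(binaries);
    free(sizes);
}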
I would like to create an LLVM pass to optimize OpenCL kernels for NVIDIA cards, and I wonder if it is possible.
I have tried the following:
clang -Xclang -load -Xclang lib/simplePass.so main.c
It did not work; it could not alter the kernel code.
Compiling separately, then linking.
This also did not work; it gave me an error that get_global_id is undefined.
Using an offline compiler, then clCreateProgramWithBinary.
I followed Apple's example; it worked with the Intel GPU, but I was not able to use an LLVM pass. When I tried to use one, it gave me this error:
LLVM ERROR: Sized aggregate specification in datalayout string
When I tried to adapt it to Xubuntu, it did not work.
Is there another method I can try? I know I could use SPIR-V IR, but NVIDIA does not currently support OpenCL 2.2.
Thank you for your time.
I need to compile OpenCL kernels to SPIR-V to use with Vulkan. I tried Google's Clspv (https://github.com/google/clspv), but a problem occurs with vectorization: functions like vload8 don't work. So I need to compile OpenCL kernels to SPIR-V using clang.
I'm the project lead for Clspv. Jesse is right overall.
The lack of support for vectors of length 8 and 16 is deliberately out of scope for now.
That's because Vulkan itself does not support that.
We haven't added the support to mimic such support, and don't have plans to do so even in the medium term.
There is more info on an old closed issue:
https://github.com/google/clspv/issues/8
Clspv is the only toolchain I'm aware of that compiles OpenCL C to Vulkan-compatible SPIR-V. You'll need to file an issue against Clspv; attaching a kernel that fails to compile properly would help a lot.
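As an illustration, kernels can often be rewritten to stick to the vector widths Vulkan does model; here is a hypothetical sketch (kernel and argument names are made up) that replaces a vload8 with two vload4 loads:

// copy8.cl: same 8 floats per work-item, but expressed with 4-wide vectors
__kernel void copy8(__global const float *in, __global float *out) {
    size_t i = get_global_id(0);
    // float8 v = vload8(i, in);        // 8-wide: not representable in Vulkan SPIR-V
    float4 lo = vload4(2 * i, in);      // loads in[8*i .. 8*i+3]
    float4 hi = vload4(2 * i + 1, in);  // loads in[8*i+4 .. 8*i+7]
    vstore4(lo, 2 * i, out);
    vstore4(hi, 2 * i + 1, out);
}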
You can follow this Khronos project:
https://github.com/KhronosGroup/SPIR/tree/spirv-1.1
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=c++ -I include kernel.cl -o kernel.spv # For OpenCL C++
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=CL2.0 -include opencl.h kernel.cl -o kernel.spv # For OpenCL C
I have been trying to compile my code with -pg to enable profiling in the simulator, and once I do that I get linker errors.
Compilation command
hexagon-clang++ main.cpp -o hello -mv62 -pg
Error
Error: /tmp/main-924ac3.o(.text+0x30): undefined reference to `mcount'
Error: /tmp/main-924ac3.o(.text+0x130): undefined reference to `mcount'
Fatal: Linking had errors.
This is my first time writing code for a DSP chip, specifically the Hexagon 682. Are there any tutorials or references other than the programmer's reference manual? It hasn't been very useful in helping me understand how things work. In particular, I don't understand how SIMD programming works, and I am not sure what the size of the SIMD registers is. It also seems that using floating point on DSP chips is not a great idea, so would it be better to convert my code to use fixed point?
You can use hexagon-sim to generate the profiling data without rebuilding instrumented binaries.
hexagon-sim --profile ./hello will generate the gmon input file(s) necessary for hexagon-gprof to consume.
e.g. (taken from SDK 3.3.3 Examples/)
hexagon-clang -O2 -g -mv5 -c -o mandelbrot.o mandelbrot.c
hexagon-clang -O2 -g -mv5 mandelbrot.o -o mandelbrot -lhexagon
hexagon-sim -mv5 --timing --profile mandelbrot
hexagon-gprof mandelbrot gmon.t*
Note also that the SDK comes with hexagon-profiler, a richer tool that lets you inspect detailed performance counters, giving information beyond just which code was executed and how often.
See "Hexagon Profiler User Guide" (doc number 80-N2040-10 A) for details.
Are there any tutorials or references other than the programmer's reference manual? It hasn't been very useful in helping me understand how things work.
In particular, I don't understand how SIMD programming works, and I am not sure what the size of the SIMD registers is.
Hexagon's vector programming extension is called "HVX". There's an HVX-specific PRM available at https://developer.qualcomm.com/software/hexagon-dsp-sdk/tools; it describes the different 512-bit and 1024-bit vector modes.
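As a tiny illustration (the intrinsic and header names below are taken from the HVX protos header shipped with the SDK, so check your toolchain version), an HVX add of two vectors of 32-bit words looks like this, compiled with HVX enabled (e.g. hexagon-clang -mv62 -mhvx -c hvx_add.c):

// hvx_add.c: one HVX register holds 512 or 1024 bits depending on the mode
#include <hexagon_types.h>
#include <hvx_hexagon_protos.h>

HVX_Vector add_words(HVX_Vector a, HVX_Vector b) {
    return Q6_Vw_vadd_VwVw(a, b);   // element-wise 32-bit add across the whole vector
}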
Does anyone know why AddressSanitizer would be using a whole different set of library functions?
For instance, I was trying to recreate strcmp and comparing my output with the standard strcmp from string.h. What I realized is that when compiling normally with gcc it outputs the difference between the characters, but with the -fsanitize=address flag added it only gives me outputs of 1, 0, or -1.
Both gcc and clang behave the same way.
I am on OS X 10.11.6, by the way.
Is this behavior unique to macOS, or do other systems show similar effects?
By the way, from what I was reading, the strcmp of the GNU C library outputs the difference, while the Apple version only outputs 1, -1, and 0.
So this is even more puzzling to me, because gcc/clang on macOS seems to use the GNU libc behavior by default and somehow shifts to Apple's version of libc when the -fsanitize=address flag is used.
If anyone can explain this to me I would be very grateful.
By the way, just in case, this is my gcc configuration:
➜ gcc --version
Configured with:
--prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
-fsanitize=address forces your binary to link against the ASan runtime, which intercepts (overloads) a lot of standard functions, including strcmp. The interception is done to check the input arguments of these functions. The ASan implementations are generally standard-compliant but don't follow all the details of any particular platform's libc, so this may be the reason for the differences that you see.
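A small demo of the visible difference (the exact values are implementation-defined either way; the C standard only guarantees the sign of the result):

/* cmp_demo.c */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* glibc-style strcmp typically returns the byte difference ('a' - 'c' == -2),
       while the ASan interceptor compares byte-by-byte and returns only -1, 0, or 1. */
    printf("%d\n", strcmp("a", "c"));
    return 0;
}

Build it once normally and once with -fsanitize=address and compare the printed value.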
I don't understand how the LLVM JIT relates to normal, non-JIT compilation, and the documentation isn't much help.
For example, suppose I use the clang frontend:
Case 1: I compile a C file to native code with clang/LLVM. This flow, I understand, is like the gcc flow: I get my x86 executable and it runs.
Case 2: I compile into some kind of LLVM IR that runs on the LLVM JIT. In this case, does the executable contain the LLVM runtime to execute the IR with the JIT, or how does it work?
What is the difference between these two, and are they correct? Does the LLVM flow include support for both JIT and non-JIT compilation? When would I want to use the JIT, and does it make sense at all for a language like C?
You have to understand that LLVM is a library that helps you build compilers. Clang is merely a frontend for this library.
Clang translates C/C++ code into LLVM IR and hands it over to LLVM, which compiles it into native code.
LLVM is also able to generate native code directly in memory, which can then be called as a normal function. So cases 1 and 2 share LLVM's optimization and code generation.
So how does one use LLVM as a JIT compiler? You build an application which generates some LLVM IR (in memory), then use the LLVM library to generate native code (still in memory). LLVM hands you back a pointer which you can call afterwards. No clang involved.
You can, however, use clang to translate some C code into LLVM IR and load this into your JIT context to use the functions.
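To make that concrete, here is a minimal sketch using LLVM's C API with MCJIT (the function name "sum" is arbitrary; link against the LLVM libraries, e.g. with clang jit_sum.c $(llvm-config --cflags --ldflags --libs core executionengine mcjit native) -o jit_sum):

/* jit_sum.c: build IR for i32 sum(i32, i32) in memory, JIT it, call it */
#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/Target.h>

int main(void) {
    LLVMModuleRef mod = LLVMModuleCreateWithName("jit_demo");
    LLVMTypeRef params[] = { LLVMInt32Type(), LLVMInt32Type() };
    LLVMValueRef sum = LLVMAddFunction(mod, "sum",
                                       LLVMFunctionType(LLVMInt32Type(), params, 2, 0));

    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(sum, "entry"));
    LLVMBuildRet(b, LLVMBuildAdd(b, LLVMGetParam(sum, 0), LLVMGetParam(sum, 1), "tmp"));

    /* Compile the module to native code in memory and get a callable pointer back. */
    LLVMLinkInMCJIT();
    LLVMInitializeNativeTarget();
    LLVMInitializeNativeAsmPrinter();

    char *err = NULL;
    LLVMExecutionEngineRef ee;
    if (LLVMCreateExecutionEngineForModule(&ee, mod, &err) != 0) {
        fprintf(stderr, "EE creation failed: %s\n", err);
        return 1;
    }

    int (*fp)(int, int) = (int (*)(int, int))LLVMGetFunctionAddress(ee, "sum");
    printf("%d\n", fp(2, 3));   /* prints 5 */

    LLVMDisposeBuilder(b);
    LLVMDisposeExecutionEngine(ee);
    return 0;
}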
Real World examples:
Unladen Swallow Python VM
Rubinius Ruby VM
There is also the Kaleidoscope tutorial, which shows how to implement a simple language with a JIT compiler.
First, you get LLVM bitcode (LLVM IR):
clang -emit-llvm -c -o test.bc test.c
Second, you use LLVM JIT:
lli test.bc
That runs the program.
Then, if you want native code, you use the LLVM backend:
llc test.bc
From the assembly output (test.s):
as test.s
I am taking the steps to compile and run JIT'ed code from a mailing list message in the LLVM community:
[LLVMdev] MCJIT and Kaleidoscope Tutorial
Header file:
// foo.h
extern void foo(void);
and the function for a simple foo() function:
//foo.c
#include <stdio.h>

void foo(void) {
    puts("Hello, I'm a shared library");
}
And the main function:
//main.c
#include <stdio.h>
#include "foo.h"

int main(void) {
    puts("This is a shared library test...");
    foo();
    return 0;
}
Build the shared library using foo.c:
gcc foo.c -shared -o libfoo.so -fPIC
Generate the LLVM bitcode for the main.c file:
clang -Wall -c -emit-llvm -O3 main.c -o main.bc
And run the LLVM bitcode through the JIT (and MCJIT) to get the desired output:
lli -load=./libfoo.so main.bc
lli -use-mcjit -load=./libfoo.so main.bc
You can also pipe the clang output into lli:
clang -Wall -c -emit-llvm -O3 main.c -o - | lli -load=./libfoo.so
Output
This is a shared library test...
Hello, I'm a shared library
Source obtained from
Shared libraries with GCC on Linux
Most compilers have a front end, some middle code/structure of some sort, and a backend. When you take your C program, use clang, and compile such that you end up with a non-JIT x86 program you can just run, you have still gone from frontend to middle to backend. The same goes for gcc: it goes from a frontend to a middle thing to a backend. gcc's middle thing is not wide open and usable as-is the way LLVM's is.
Now one thing that is fun/interesting about LLVM, which you cannot do with others (or at least not with gcc), is that you can take all of your source modules, compile them to LLVM bitcode, merge them into one big bitcode file, and then optimize the whole thing. Instead of the per-file or per-function optimization you get with other compilers, with LLVM you can get any level of partial to complete program optimization you like. Then you can take that bitcode and use llc to export it to the target's assembly. I normally do embedded work, so I have my own startup code that I wrap around it, but in theory you should be able to take that assembly file and, with gcc, compile, link, and run it: gcc myfile.s -o myfile. I imagine there is a way to get the LLVM tools to do this without using binutils or gcc, but I have not taken the time.
I like LLVM because it is always a cross compiler; unlike gcc, you don't have to build a new compiler for each target and deal with each target's nuances. What I am saying is that I don't know that I have any use for the JIT; I use LLVM as a cross compiler and as a native compiler.
So your first case is the front, middle, and end, with the process hidden from you: you start with source and get a binary, done. The second case, if I understand it right, is the front and the middle, stopping with some file that represents the middle. Then the middle-to-end step (for the specific target processor) can happen just in time, at runtime. The difference there is the backend: the real-time execution of the middle language in case two is likely different from the backend of case one.