It's said that clang has supported CUDA recently, but I can't find how to use it in the documentation.
Can't find what usage in what document? Is this what you are looking for? http://llvm.org/docs/CompileCudaWithLLVM.html
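For reference, the basic invocation from that page looks roughly like this (sm_35 and the CUDA install path are just examples; adjust them for your GPU and system):

clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 -L<CUDA install path>/lib64 -lcudart_static -ldl -lrt -pthread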
I would like to create an LLVM pass to optimize OpenCL kernels for NVIDIA cards. I wonder if it is possible.
I have tried the following:
clang -Xclang -load -Xclang lib/simplePass.so main.c
It did not work; it could not alter the kernel code.
Compiling separately and then linking (a rough sketch of what I mean is after this list).
This also did not work; it gave me an error that get_global_id is undefined.
Using an offline compiler and then clCreateProgramWithBinary.
I followed Apple's example. It worked with the Intel GPU, but I was not able to use an LLVM pass. When I tried to, it gave me this error:
LLVM ERROR: Sized aggregate specification in datalayout string
When I tried to adapt it to Xubuntu, it did not work.
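For reference, the separate-compile attempt looked roughly like this (the pass library path and pass name are placeholders, and I'm assuming clang's built-in OpenCL header can be pulled in with -Xclang -finclude-default-header):

clang -x cl -cl-std=CL1.2 -Xclang -finclude-default-header -emit-llvm -c kernel.cl -o kernel.bc
opt -load ./lib/simplePass.so -simplepass kernel.bc -o kernel.opt.bc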
Is there any other method that I can try? I know I could use SPIR-V, but Nvidia does not currently support OpenCL 2.2.
Thank you for your time.
I am using clang to generate LLVM IR for Nvidia OpenCL and CUDA kernels, which I want to subsequently instrument, doing something like this for OpenCL:
clang -c -x cl -S -emit-llvm -cl-std=CL2.0 kernel.cl -o kernel.ll
and what's described here for CUDA.
What I am looking for is a way to go from the instrumented IR to an actual binary. For the CUDA case I know I can use the NVPTX backend to generate PTX and JIT compile as described here (or perhaps use ptxas?). I was wondering if something similar is also possible for the OpenCL case, and if so, perhaps a minimal example. Thanks in advance.
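To be concrete, the NVPTX route I have in mind for the CUDA side is roughly this (sm_50 and the file names are just examples):

llc -march=nvptx64 -mcpu=sm_50 kernel.ll -o kernel.ptx
ptxas -arch=sm_50 kernel.ptx -o kernel.cubin

The resulting PTX (or cubin) could then be loaded at runtime with the driver API, e.g. cuModuleLoadData.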
You can in principle extract binaries for loaded and compiled OpenCL kernels by using clGetProgramInfo() with CL_PROGRAM_BINARY_SIZES and CL_PROGRAM_BINARIES.
As far as I'm aware, this will produce binaries in an entirely implementation-defined format, so if you're unlucky you just get IR back anyway. With any luck, however, it might contain PTX on your platform.
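A minimal sketch of that extraction, assuming a cl_program that was built for a single device (error checking omitted; <CL/cl.h>, <stdio.h> and <stdlib.h> are assumed to be included):

void dump_program_binary(cl_program program) {
    size_t binary_size = 0;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES, sizeof(binary_size), &binary_size, NULL);

    unsigned char *binary = malloc(binary_size);
    unsigned char *binaries[1] = { binary };   /* CL_PROGRAM_BINARIES expects one pointer per device */
    clGetProgramInfo(program, CL_PROGRAM_BINARIES, sizeof(binaries), binaries, NULL);

    /* The contents are implementation-defined: possibly PTX text, IR, or a vendor binary. */
    FILE *f = fopen("program.bin", "wb");
    fwrite(binary, 1, binary_size, f);
    fclose(f);
    free(binary);
}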
I need to compile OpenCL kernels to SPIR-V to use with Vulkan. I tried Google's Clspv (https://github.com/google/clspv), but a problem occurs with vectorization: functions like vload8 don't work. So I need to compile OpenCL kernels to SPIR-V using clang.
I'm the project lead for Clspv. Jesse is right overall.
Support for vectors of length 8 and 16 is deliberately out of scope for now.
That's because Vulkan itself does not support them.
We haven't added the support to mimic such support, and don't have plans to do so even in the medium term.
There is more info on an old closed issue:
https://github.com/google/clspv/issues/8
Clspv is the only toolchain I'm aware of that compiles OpenCL C to Vulkan-compatible SPIR-V. You'll need to file an issue against Clspv; attaching a kernel that fails to compile properly would help a lot.
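If it helps, a basic invocation looks like this (file names are placeholders; clspv --help lists the current options):

clspv kernel.cl -o kernel.spv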
You can follow this Khronos project: https://github.com/KhronosGroup/SPIR/tree/spirv-1.1
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=c++ -I include kernel.cl -o kernel.spv   # For OpenCL C++
clang -cc1 -emit-spirv -triple=spir-unknown-unknown -cl-std=CL2.0 -include opencl.h kernel.cl -o kernel.spv   # For OpenCL C
How do I use clang to generate a call graph of C++ code? I understand I need to use this, but I can't find any examples.
I already tried using the Python bindings for this, but they seem to be lacking some important functions.
Can anyone provide a minimal example of using current clang's API for this task?
For C++ (for C, just use clang instead):
clang++ -Xclang -analyze -Xclang -analyzer-checker=debug.ViewCallGraph <file to analyze>
This will give you the call graph as images.
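If you'd rather have a textual dump than a pop-up viewer, I believe there is a sibling checker for that (treat this as a pointer, not a verified recipe):

clang++ -Xclang -analyze -Xclang -analyzer-checker=debug.DumpCallGraph <file to analyze>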
Does Open64 have something equivalent to the Intel Short Vector Math Library (SVML) operations?
Thank you.
OK, I more or less figured it out. AMD's Open64 distribution ships with AMD's math library, ACML. ACML has functions similar to those in Intel's library.
Open64 is just a compiler, so you can link whatever library you need against the object files that the Open64 compiler generates.
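As a sketch, linking ACML when compiling with Open64 might look like this (the ACML path is an assumption; check your installation for the exact location):

opencc -O3 myprog.c -o myprog -L<path to ACML>/lib -lacml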