CUDA shared array not getting values?

I am trying to implement a simple parallel reduction, using the code from the CUDA SDK. But somehow there is a problem in my kernel: the shared array is not getting the values of the global array, and it is all zeroes.
extern __shared__ float4 sdata[];
// each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = dev_src[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s = 1; s < blockDim.x; s *= 2) {
    if (tid % (2 * s) == 0) {
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}
// write result for this block to global mem
if (tid == 0)
    out[blockIdx.x] = sdata[0];
EDIT:
OK, I got it working by removing the extern keyword and giving the shared array a constant size like 512. I am in good shape now. Maybe someone can explain why it was not working with the extern keyword.

I think I know why this is happening, as I have faced this before. How are you calling the kernel?
Remember that in the call kernel<<<blocks, threads, sharedMemory>>>, sharedMemory should be the size of the shared memory in bytes. So if you are declaring 512 elements, the third parameter should be 512 * sizeof(float4). I think you are just calling it as below, which is wrong:
kernel<<<blocks, threads, 512>>> // this is wrong
Hope that helps.
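For reference, a minimal sketch of the corrected launch for the extern case, assuming 512 threads per block and the float4 sdata from the question (the argument names dev_src and out come from the question's code):
kernel<<<blocks, threads, 512 * sizeof(float4)>>>(dev_src, out); // third parameter is the dynamic shared memory size in bytes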

Related

How to correctly manipulate a CV_16SC3 Mat in a CUDA Kernel

I am writing a CUDA program while working with OpenCV. I have an empty Mat of a given size (e.g. 1000x800) which I explicitly converted to a GpuMat with datatype CV_16SC3. I want to manipulate the image in this format in the CUDA kernel. However, trying to manipulate the Mat does not seem to work correctly.
I am calling my CUDA kernel as follows:
my_kernel <<< gridDim, blockDim >>>( (unsigned short*)img.data, img.cols, img.rows, img.step);
and my sample kernel looks like this:
__global__ void my_kernel(unsigned short* img, int width, int height, int img_step)
{
    int x, y, pixel;
    y = blockIdx.y * blockDim.y + threadIdx.y;
    x = blockIdx.x * blockDim.x + threadIdx.x;
    if (y >= height)
        return;
    if (x >= width)
        return;
    pixel = (y * img_step) + (3 * x);
    img[pixel] = 255;     // I know 255 is basically an uchar, this is just part of my test
    img[pixel + 1] = 255;
    img[pixel + 2] = 255;
}
I am expecting this small kernel sample to set all pixels to white. However, after downloading the Mat again from the GPU and visualizing it with imshow, not all the pixels are white, and some weird black lines are present, which makes me believe that somehow I am writing to invalid memory addresses.
My guess is the following. The OpenCV documentation states that cv::Mat::data returns an uchar pointer. However, my Mat has the data type "16U" (unsigned short, to my knowledge). That is why in the kernel launch I am casting the pointer to (unsigned short*). But apparently that is incorrect.
How should I proceed to correctly read and write the Mat data as short in my kernel?
First of all, the input image type should be short instead of unsigned short, because the type of the Mat is 16SC3 (rather than 16UC3).
Now, since the image step is in bytes and the data type is short, the pixel index (or address) should be calculated taking into account the difference in byte width between the two. There are two ways to fix this issue.
Method 1:
__global__ void my_kernel(short* img, int width, int height, int img_step)
{
    int x, y;
    y = blockIdx.y * blockDim.y + threadIdx.y;
    x = blockIdx.x * blockDim.x + threadIdx.x;
    if (y >= height)
        return;
    if (x >= width)
        return;
    // Reinterpret the input pointer as char* to allow jumps in bytes instead of shorts
    char* imgBytes = reinterpret_cast<char*>(img);
    // Calculate the row start address using the newly created pointer
    char* rowStartBytes = imgBytes + (y * img_step); // jump in bytes
    // Reinterpret the row start address back to the required data type
    short* rowStartShort = reinterpret_cast<short*>(rowStartBytes);
    short* pixelAddress = rowStartShort + (3 * x); // jump in shorts
    // Modify the image values
    pixelAddress[0] = 255;
    pixelAddress[1] = 255;
    pixelAddress[2] = 255;
}
Method 2:
Divide the input image step by the size of the required data type (short). This may be done when passing the step as a kernel argument:
my_kernel<<<grid, block>>>(img, width, height, img_step / sizeof(short));
I have used Method 2 for quite a long time. It is a shortcut, but later on, when I got to look at the source code of certain image processing libraries, I realized that Method 1 is actually more portable, since the size of a type can vary across platforms.
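With Method 2, the indexing in the question's original kernel becomes correct as written, because img_step is now measured in shorts rather than bytes. A minimal sketch under that assumption:
__global__ void my_kernel(short* img, int width, int height, int img_step /* in shorts */)
{
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    if (y >= height || x >= width)
        return;
    int pixel = (y * img_step) + (3 * x); // step already converted to short units by the caller
    img[pixel] = 255;
    img[pixel + 1] = 255;
    img[pixel + 2] = 255;
}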

Global device memory size limit when using statically allocated memory in CUDA

I thought the maximum size of global memory should be limited only by the GPU device, no matter whether it is allocated statically using __device__ __managed__ or dynamically using cudaMalloc.
But I found that when using the __device__ __managed__ approach, the maximum array size I can declare is much smaller than the GPU device limit.
A minimal working example is as follows:
#include <stdio.h>
#include <cuda_runtime.h>

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}

#define MX 64
#define MY 64
#define MZ 64
#define NX 64
#define NY 64
#define M (MX * MY * MZ)

__device__ __managed__ float A[NY][NX][M];
__device__ __managed__ float B[NY][NX][M];

__global__ void swapAB()
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    for (int j = 0; j < NY; j++)
        for (int i = 0; i < NX; i++)
            A[j][i][tid] = B[j][i][tid];
}

int main()
{
    swapAB<<<M/256, 256>>>();
    gpuErrchk( cudaPeekAtLastError() );
    gpuErrchk( cudaDeviceSynchronize() );
    return 0;
}
It uses 64^5 * 2 * 4 / 2^30 GB = 8 GB of global memory, and I compiled and ran it on an Nvidia Tesla K40c GPU, which has 12 GB of global memory.
Compile command:
nvcc test.cu -gencode arch=compute_30,code=sm_30
Output warning:
warning: overflow in implicit constant conversion.
When I ran the generated executable, I got this error:
GPUassert: an illegal memory access was encountered test.cu
Surprisingly, if I use dynamically allocated global memory of the same size (8 GB) via the cudaMalloc API instead, there is no compile warning and no runtime error.
I'm wondering whether there is any special limitation on the allocatable size of static global device memory in CUDA.
Thanks!
PS: OS and CUDA: CentOS 6.5 x64, CUDA-7.5.
This appears to be a limitation of the CUDA runtime API. The root cause is this function (in CUDA 7.5):
__cudaRegisterVar(
    void **fatCubinHandle,
    char *hostVar,
    char *deviceAddress,
    const char *deviceName,
    int ext,
    int size,
    int constant,
    int global
);
which only accepts a signed int for the size of any statically declared device variable. This limits the maximum size to 2^31 (2147483648) bytes. The warning you see is because the CUDA front end is emitting boilerplate code containing calls to __cudaRegisterManagedVariable like this:
__cudaRegisterManagedVariable(__T26, __shadow_var(A,::A), 0, 4294967296, 0, 0);
__cudaRegisterManagedVariable(__T26, __shadow_var(B,::B), 0, 4294967296, 0, 0);
It is the 4294967296 that is the source of the problem: the size overflows the signed integer and causes the API call to blow up. So it seems you are limited to 2 GB per static variable for the moment. I would recommend raising this as a bug with NVIDIA if it is a serious problem for your application.
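As a workaround, the same storage can be allocated dynamically with cudaMallocManaged, which takes a size_t and is therefore not subject to the signed-int limit. A minimal sketch using the question's dimensions, with the 3D arrays flattened and indexed manually:
float *A, *B;
size_t bytes = (size_t)NY * NX * M * sizeof(float); // 4 GB per array
gpuErrchk( cudaMallocManaged(&A, bytes) );
gpuErrchk( cudaMallocManaged(&B, bytes) );
// inside the kernel, index as A[((size_t)j * NX + i) * M + tid]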

CUDA: Global memory broadcast to registers on Compute 5.0 architecture

I have the following code:
__global__ void someKernel(unsigned char * global_mem, unsigned int * start) {
    unsigned int size;
    size = *(unsigned int *)&global_mem[start[blockIdx.x]];
    // Do many things with size
}
where all of the threads from the same block will read the same memory location from global memory and do many things with it.
How fast/slow is this load going to be? I have blocks of 256 threads.
Will the global memory value be broadcast to the whole block, or only to a single warp (meaning I will have to do 256/32 reads from global memory)?
Would the following be better, where I first read from global memory into shared memory and then copy into a register (bearing the warp divergence and the synchronization overhead):
__global__ void someKernel(unsigned char * global_mem, unsigned int * start) {
    __shared__ int tmpsize;
    unsigned int size;
    int i = threadIdx.x;
    if (i == 0) {
        tmpsize = *(unsigned int *)&global_mem[start[blockIdx.x]];
    }
    __syncthreads();
    size = tmpsize;
    // Do many things with size
}
Would this be faster, considering that I have a huge grid with blocks of 256 threads, where each block reads a different start location?

malloc using 4 bytes for char

I am writing code to examine how memory is managed between the stack and the heap, for a piece of coursework.
#include <stdio.h>
#include <stdlib.h>

#define NUM_OF_CHARS 100

// function prototype
void f(void);

int main()
{
    f();
    return 0;
}

void f(void)
{
    char *ptr1;
    ptr1 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 1: %016lx\n", (long)ptr1);
    char *ptr2;
    ptr2 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 2: %016lx\n", (long)ptr2);
}
When I run this code I get the following:
Address array 1: 000000000209e010
Address array 2: 000000000209e1b0
My expectation was to see a difference of 100 bytes between the two addresses, but the difference is 416 bytes. When I changed NUM_OF_CHARS to other values (200, 300, ...), the result was always NUM_OF_CHARS * 4 + 16 (for 100: 100 * 4 + 16 = 416), so it seems like malloc is allocating 4 bytes for each char rather than one, plus 16 bytes of some overhead.
Can anyone explain what is happening here?
Memory allocation is platform/compiler dependent. The only thing malloc guarantees is that it allocates enough memory for what you ask, and nothing more.
There is no guarantee that your addresses will be contiguous, due to memory alignment.
Also, you are allocating by the size of int, not char, in your code. That is most likely the reason you see a NUM_OF_CHARS * 4 difference, while the remaining difference can be attributed to padding.
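A minimal fix, assuming the intent was to allocate 100 chars:
ptr1 = malloc(NUM_OF_CHARS * sizeof(char)); // sizeof(char) is 1 by definition, so this allocates 100 bytes
(The cast is unnecessary in C.)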

CUDA: Shift arrays on shared memory

I am trying to load a flattened 2D matrix into shared memory, shift the data along x, and write it back to global memory, shifting also along y. The input data is therefore shifted along both x and y. Here is what I have:
__global__ void test_shift(float *data_old, float *data_new)
{
    uint glob_index = threadIdx.x + blockIdx.y * blockDim.x;
    __shared__ float VAR;
    __shared__ float VAR2[NUM_THREADS];
    // load from global to shared
    VAR = data_old[glob_index];
    // do some stuff on VAR
    if (threadIdx.x < NUM_THREADS - 1)
    {
        VAR2[threadIdx.x + 1] = VAR; // shift (+1) along x
    }
    __syncthreads();
    // write to global memory
    if (threadIdx.y < ny - 1)
    {
        glob_index = threadIdx.x + (blockIdx.y + 1) * blockDim.x; // redefine glob_index to shift along y (+1)
        data_new[glob_index] = VAR2[threadIdx.x];
    }
}
The call to the kernel:
test_shift <<< grid, block >>> (data_old, data_new);
and the grid and block dimensions (blockDim.x is equal to the matrix width, i.e. 64):
dim3 block(NUM_THREADS, 1);
dim3 grid(1, ny);
I am not able to get this working. Could someone please point out what's wrong with it? Should I use a strided index or an offset?
VAR should not have been declared as shared, because in its current form all threads scribble over each other's data when you load from global memory: VAR = data_old[glob_index];.
You also have an out-of-bounds access when you access VAR2[threadIdx.x + 1], so your kernel never finishes (depending on the compute capability of the device; 1.x devices didn't check shared memory accesses as rigorously).
You could have detected the latter by checking the return codes of all calls to CUDA functions for errors.
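A minimal sketch of that kind of check around the launch, using standard CUDA runtime API calls:
test_shift<<<grid, block>>>(data_old, data_new);
cudaError_t err = cudaGetLastError();   // catches launch-configuration errors
if (err == cudaSuccess)
    err = cudaDeviceSynchronize();      // catches errors raised while the kernel runs
if (err != cudaSuccess)
    fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));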
Shared variables are, well, shared by all threads in a single block. This means that you don't have blockDim.y sets of shared variables, but only a single set per block.
uint glob_index = threadIdx.x + blockIdx.y * blockDim.x;
__shared__ float VAR;
__shared__ float VAR2[NUM_THREADS];
VAR = data_old[glob_index];
if (threadIdx.x < NUM_THREADS - 1)
{
    VAR2[threadIdx.x + 1] = VAR; // shift (+1) along x
}
This instructs all threads in a block to write data into a single variable (VAR). Then there is no synchronization, and you use this variable in the second assignment. This will have an undefined result, because threads from the first warp are reading from this variable while threads from the second warp are still trying to write something there.
You should change VAR to be local, or create an array of shared memory variables, one per thread in the block.
if (threadIdx.y < ny - 1)
{
    glob_index = threadIdx.x + (blockIdx.y + 1) * blockDim.x;
    data_new[glob_index] = VAR2[threadIdx.x];
}
In VAR2[0] you still have some garbage (you've never written anything there). Also, threadIdx.y is always zero in your blocks.
And avoid using uints; they have (or used to have) some performance problems.
Actually, for such a simple task you don't need to use shared memory at all:
__global__ void test_shift(float *data_old, float *data_new)
{
    int glob_index = threadIdx.x + blockIdx.y * blockDim.x;
    float VAR;
    // load from global to local
    VAR = data_old[glob_index];
    int glob_index_new;
    // calculate only if we are going to output something
    if ((blockIdx.y < gridDim.y - 1) && (threadIdx.x < blockDim.x - 1))
    {
        glob_index_new = threadIdx.x + 1 + (blockIdx.y + 1) * blockDim.x;
        // do some stuff on VAR
    }
    else // just write 0.0 to remove garbage
    {
        glob_index_new = ((blockIdx.y == gridDim.y - 1) && (threadIdx.x == blockDim.x - 1)) ? 0 :
                         ((blockIdx.y == gridDim.y - 1) ? threadIdx.x : blockIdx.y * blockDim.x);
        VAR = 0.0;
    }
    // write to global memory
    data_new[glob_index_new] = VAR;
}
