Synchronizing Statically Allocated Struct Instances between CPU and GPU Memory

I have a struct that contains an array, and I want to copy the contents from an instance of that struct in CPU memory to another instance in GPU memory.
My question is similar to this one. There are two big differences between this question and the one from the link:
I'm not using an array of structs. I just need one.
All instances of the struct are statically allocated.
In an attempt to answer my own question, I tried modifying the code in the answer as follows:
#include <stdio.h>
#include <stdlib.h>

#define cudaCheckError() { \
    cudaError_t err = cudaGetLastError(); \
    if(err != cudaSuccess) { \
        printf("Cuda error: %s:%d: %s\n", __FILE__, __LINE__, cudaGetErrorString(err)); \
        exit(1); \
    } \
}

struct Test {
    char array[5];
};

__global__ void kernel(Test *dev_test) {
    for(int i=0; i < 5; i++) {
        printf("Kernel[0][i]: %c \n", dev_test[0].array[i]);
    }
}

__device__ Test dev_test; //dev_test is now global, statically allocated, and one instance of the struct

int main(void) {
    int size = 5;
    Test test; //test is now statically allocated and one instance of the struct
    char temp[] = { 'a', 'b', 'c', 'd', 'e' };

    memcpy(test.array, temp, size * sizeof(char));
    cudaCheckError();

    cudaMemcpy(&dev_test, &test, sizeof(Test), cudaMemcpyHostToDevice);
    cudaCheckError();

    kernel<<<1, 1>>>(&dev_test);
    cudaCheckError();

    cudaDeviceSynchronize();
    cudaCheckError();

    // memory free
    return 0;
}
But this code throws a runtime error:
Cuda error: HelloCUDA.cu:34: invalid argument
Is there a way to copy test into dev_test?

When using a statically allocated __device__ variable:
We don't use the cudaMemcpy API; we use the cudaMemcpyToSymbol (or cudaMemcpyFromSymbol) API instead.
We don't pass __device__ variables as kernel arguments. They are at global scope. You just use them in your kernel code.
The following code has these issues addressed:
$ cat t10.cu
#include <stdio.h>

#define cudaCheckError() { \
    cudaError_t err = cudaGetLastError(); \
    if(err != cudaSuccess) { \
        printf("Cuda error: %s:%d: %s\n", __FILE__, __LINE__, cudaGetErrorString(err)); \
        exit(1); \
    } \
}

struct Test {
    char array[5];
};

__device__ Test dev_test; //dev_test is now global, statically allocated, and one instance of the struct

__global__ void kernel() {
    for(int i=0; i < 5; i++) {
        printf("Kernel[0][i]: %c \n", dev_test.array[i]);
    }
}

int main(void) {
    int size = 5;
    Test test; //test is now statically allocated and one instance of the struct
    char temp[] = { 'a', 'b', 'c', 'd', 'e' };

    memcpy(test.array, temp, size * sizeof(char));
    cudaCheckError();

    cudaMemcpyToSymbol(dev_test, &test, sizeof(Test));
    cudaCheckError();

    kernel<<<1, 1>>>();
    cudaCheckError();

    cudaDeviceSynchronize();
    cudaCheckError();

    // memory free
    return 0;
}
$ nvcc -o t10 t10.cu
$ cuda-memcheck ./t10
========= CUDA-MEMCHECK
Kernel[0][i]: a
Kernel[0][i]: b
Kernel[0][i]: c
Kernel[0][i]: d
Kernel[0][i]: e
========= ERROR SUMMARY: 0 errors
$
(Your array usage in the kernel code also didn't make sense: dev_test is not an array, therefore you cannot index into it as dev_test[0]....)
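For completeness: to copy the struct back from the device symbol to the host, cudaMemcpyFromSymbol is the counterpart API. A minimal sketch, assuming the same Test struct, dev_test symbol, and cudaCheckError macro as in t10.cu above:
Test host_copy;
// the symbol itself is passed, not its address; the default kind is cudaMemcpyDeviceToHost
cudaMemcpyFromSymbol(&host_copy, dev_test, sizeof(Test));
cudaCheckError();
printf("Host: %c\n", host_copy.array[0]); // 'a' with the data used above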

Related

How to get stack trace for C/C++ program in CYGWIN environment?

I was looking for a backtrace mechanism; I've compiled some of the solutions found here into a small program for quick reference.
My answer, with a code snippet:
#if defined(__CYGWIN__)
#include <Windows.h>
#include <dbghelp.h>
#include <psdk_inc/_dbg_common.h>
#include <cxxabi.h>
#include <cstring>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;
#define MAX_STACKTRACE_SIZE 64 // missing in the original snippet; any reasonable bound works
class Error // Windows version
{
private:
void *stacktrace[MAX_STACKTRACE_SIZE];
size_t stacktrace_size;
public:
const char* message;
Error(const char* m)
: message(m)
, stacktrace_size(0)
{
// Capture the stack, when error is 'hit'
stacktrace_size = CaptureStackBackTrace(0, MAX_STACKTRACE_SIZE, stacktrace, nullptr);
}
void print_backtrace(ostream& out) const
{
SYMBOL_INFO * symbol;
HANDLE process;
process = GetCurrentProcess();
SymInitialize(process, nullptr, TRUE);
symbol = (SYMBOL_INFO *)calloc(sizeof(SYMBOL_INFO) + 256 * sizeof(char), 1);
symbol->MaxNameLen = 255;
symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
std::string result;
char tempStr[255] = {0};
for (size_t i = 0; i < stacktrace_size; i++)
{
int status = 0;
// '_' is missing in symbol->Name , hence prefix it and concat with symbol->Name
char prefixed_symbol [256] = "_" ;
SymFromAddr(process, (DWORD64)(stacktrace[i]), 0, symbol);
auto backtrace_line = string(symbol->Name);
if (backtrace_line.size() == 0) continue;
// https://en.wikipedia.org/wiki/Name_mangling
// Prefix '_' with symbol name, so that __cxa_demangle does the job correctly
// $ c++filt -n _Z9test_ringI12SmallIntegerIhEEvRK4RingIT_E
strcat (prefixed_symbol, symbol->Name);
char * demangled_name = abi::__cxa_demangle(prefixed_symbol, nullptr, nullptr, &status);
if(status < 0)
{
sprintf(tempStr, "%i: %s - 0x%0X\n", stacktrace_size-i-1, symbol->Name, symbol->Address);
// out << symbol->Name << endl;
}
else
{
sprintf(tempStr, "%i: %s - 0x%0X\n", stacktrace_size - i - 1, demangled_name, symbol->Address);
// out << demangled_name << endl;
}
// Append the extracted info to the result
result += tempStr;
// Free the HEAP allocation made by __cxa_demangle
free((void*)demangled_name);
// Restore the prefix '_' string
prefixed_symbol [1] = '\0';
}
out << result << std::endl; // write to the stream passed in, not unconditionally to cout
free(symbol);
}
};
// stub so the snippet compiles; replace with the real work that can fail
static bool do_something () { return false; }

int main ()
{
try {
bool status = do_something ();
if (false == status) throw Error("SystemError");
}
catch (const Error &error)
{
cout << "NotImplementedError(\"" << error.message << "\")" << endl;
error.print_backtrace(cout);
return 1;
}
return 0;
}
#endif
Command Line Option:
// Use -limagehlp to link the library
g++ -std=c++20 main.cpp -limagehlp
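As a side note, the '_' prefix quirk can be exercised in isolation. A minimal sketch, assuming a g++/Cygwin-style toolchain and reusing the mangled name from the c++filt comment above:

#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    int status = 0;
    // same symbol as in the c++filt example; it already carries the '_Z' prefix
    const char* mangled = "_Z9test_ringI12SmallIntegerIhEEvRK4RingIT_E";
    char* demangled = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
    printf("%s\n", status == 0 ? demangled : mangled);
    free(demangled); // __cxa_demangle allocates with malloc
    return 0;
}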

histogram kernel memory issue

I am trying to implement an algorithm to process images with more than 256 bins.
The main issue with processing a histogram in that case is that it is impossible to allocate more than 32 KB as a local array on the GPU.
All the algorithms I found for 8-bit-per-pixel images use a fixed-size local array.
The histogram is first accumulated in that array, then a barrier is raised, and finally the result is added into the output vector.
I am working with IR images whose dynamic range spans more than 32K bins,
so I cannot allocate a fixed-size array inside the GPU.
My algorithm uses an atomic_add in order to build the output histogram directly.
I am interfacing with OpenCV, so to handle possible saturation my bins use floating point, single or double precision depending on what the GPU supports.
OpenCV doesn't support unsigned int, long, or unsigned long as matrix data types.
I get an error... I think this error is a kind of segmentation fault.
After several days I still have no idea what is wrong.
Here is my code:
histogram.cl:
#pragma OPENCL EXTENSION cl_khr_fp64: enable
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics: enable
// Atomically add 'delta' to the double at 'val' with a compare-and-swap loop:
// reinterpret the bits through a union and retry until no other work-item
// has modified the value in between.
static void Atomic_Add_f64(__global double *val, double delta)
{
union {
double f;
ulong i;
} old;
union {
double f;
ulong i;
} new;
do {
old.f = *val;
new.f = old.f + delta;
}
while (atom_cmpxchg ( (volatile __global ulong *)val, old.i, new.i) != old.i);
}
static void Atomic_Add_f32(__global float *val, float delta)
{
union
{
float f;
uint i;
} old;
union
{
float f;
uint i;
} new;
do
{
old.f = *val;
new.f = old.f + delta;
}
// 32-bit variant: compare/exchange through uint, not ulong
while (atomic_cmpxchg ( (volatile __global uint *)val, old.i, new.i) != old.i);
}
__kernel void khist(
__global const uchar* _src,
const int src_steps,
const int src_offset,
const int rows,
const int cols,
__global uchar* _dst,
const int dst_steps,
const int dst_offset)
{
const int gid = get_global_id(0);
// printf("This message has been printed from the OpenCL kernel %d \n",gid);
if(gid < rows)
{
__global const _Sty* src = (__global const _Sty*)_src;
__global _Dty* dst = (__global _Dty*) _dst;
const int src_step1 = src_steps/sizeof(_Sty);
const int dst_step1 = dst_steps/sizeof(_Dty);
src += mad24(gid,src_step1,src_offset);
dst += mad24(gid,dst_step1,dst_offset);
_Dty one = (_Dty)1;
for(int c=0;c<cols;c++)
{
const _Rty idx = (_Rty)(*(src+c+src_offset));
ATOMIC_FUN(dst+idx+dst_offset,one);
}
}
}
The function Atomic_Add_f64 comes directly from here and there.
main.cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <algorithm> // std::for_each, std::copy_n
#include <chrono>
#include <ctime>     // std::time
#include <fstream>
#include <iostream>
#include <iterator>  // std::ostream_iterator
#include <sstream>
int main()
{
cv::Mat_<unsigned short> a(480,640);
cv::RNG rng(std::time(nullptr));
std::for_each(a.begin(),a.end(),[&](unsigned short& v){ v = rng.uniform(0,100);});
bool ret = false;
cv::String file_content;
{
std::ifstream file_stream("../test/histogram.cl");
std::ostringstream file_buf;
file_buf<<file_stream.rdbuf();
file_content = file_buf.str();
}
int output_flag = cv::ocl::Device::getDefault().doubleFPConfig() == 0 ? CV_32F : CV_64F;
cv::String atomic_fun = output_flag == CV_32F ? "Atomic_Add_f32" : "Atomic_Add_f64";
cv::ocl::ProgramSource source(file_content);
// std::cout<<source.source()<<std::endl;
cv::ocl::Kernel k;
cv::UMat src;
cv::UMat dst = cv::UMat::zeros(1,65536,output_flag);
a.copyTo(src);
atomic_fun = cv::format("-D _Sty=%s -D _Rty=%s -D _Dty=%s -D ATOMIC_FUN=%s",
cv::ocl::typeToStr(src.depth()),
cv::ocl::typeToStr(src.depth()), // this handles cases like a matrix of unsigned short stored as a matrix of float.
cv::ocl::typeToStr(output_flag),
atomic_fun.c_str());
ret = k.create("khist",source,atomic_fun);
std::cout<<"check create : "<<ret<<std::endl;
k.args(cv::ocl::KernelArg::ReadOnly(src),cv::ocl::KernelArg::WriteOnlyNoSize(dst));
std::size_t sz = a.rows;
ret = k.run(1,&sz,nullptr,false);
std::cout<<"check "<<ret<<std::endl;
cv::Mat b;
dst.copyTo(b);
std::copy_n(b.ptr<double>(0),101,std::ostream_iterator<double>(std::cout," "));
std::cout<<std::endl;
return EXIT_SUCCESS;
}
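For illustration (this is my expected expansion, not actual program output): with a CV_16U source and a device that supports double precision, the build-options string assembled above should come out roughly as
-D _Sty=ushort -D _Rty=ushort -D _Dty=double -D ATOMIC_FUN=Atomic_Add_f64
so the preprocessor substitutes the concrete element types and the matching atomic helper into the kernel at compile time.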
Hello, I managed to fix it.
I don't really know where the issue comes from,
but if I pass the output as a pointer rather than a matrix, it works.
The changes I made are these:
histogram.cl:
__kernel void khist(
__global const uchar* _src,
const int src_steps,
const int src_offset,
const int rows,
const int cols,
__global _Dty* _dst)
{
const int gid = get_global_id(0);
if(gid < rows)
{
__global const _Sty* src = (__global const _Sty*)_src;
__global _Dty* dst = _dst;
const int src_step1 = src_steps/sizeof(_Sty);
src += mad24(gid,src_step1,src_offset);
ulong one = 1;
for(int c=0;c<cols;c++)
{
const _Rty idx = (_Rty)(*(src+c+src_offset));
ATOMIC_FUN(dst+idx,one);
}
}
}
main.cpp
k.args(cv::ocl::KernelArg::ReadOnly(src),cv::ocl::KernelArg::PtrWriteOnly(dst));
The rest of the code is the same in the two files.
For me it works fine.
If someone knows why it works when the output is declared as a pointer rather than a vector (a matrix of one row), I am interested.
Nevertheless, my issue is fixed :).
(A plausible explanation: in the original kernel, dst += mad24(gid, dst_step1, dst_offset) advanced the output pointer by one full row stride per work-item, but dst has only a single row, so every work-item except gid == 0 wrote out of bounds; the fixed kernel indexes dst by idx alone.)

Compiler commands for accull while using opencv

I'm trying to accelerate an OpenCV program I wrote using OpenACC, and I'm using the accull compiler to do this. However, I'm having a very hard time finding any documentation or examples that would help me with this issue.
http://scelementary.com/2015/04/30/openacc-on-jetson-tk1.html
I don't have any experience with ACCULL, but I can provide you with an example that uses OpenCV and OpenACC, and maybe that'll help you get moving. This has been tested on x86 with PGI on Ubuntu 14.04. It will read an image, invert the pixels, and write the image back out.
invert.cpp:
void invert(unsigned char *imgData, int w, int h, int ch, int step)
{
int i,j,c;
#pragma acc parallel loop collapse(3) copy(imgData[:h*w*ch])
for ( i = 0; i < h; i++)
for ( j = 0; j < w; j++ )
for ( c = 0; c < ch; c++ )
imgData[i*step + j*ch + c] = 255 - imgData[i*step + j*ch + c];
}
main.cpp:
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/cvaux.h>
#include <opencv/highgui.h>
void invert(unsigned char*,int,int,int,int);
int main(int argc, char* argv[])
{
if (argc < 3)
{
fprintf(stderr,"Usage: %s inFilename outFilename\n",argv[0]);
return -1;
}
IplImage* img = cvLoadImage(argv[1]);
printf("%s: %d x %d, %d %d\n", argv[1],img->width, img->height, img->widthStep, img->nChannels);
invert((unsigned char*)img->imageData,img->width,img->height, img->nChannels, img->widthStep);
if(!cvSaveImage(argv[2],img))
fprintf(stderr,"Failed to write to %s.\n",argv[2]);
cvReleaseImage(&img);
return 0;
}
Makefile:
a.out: main.cpp invert.cpp
pgc++ -fast -ta=tesla -c invert.cpp
pgc++ -fast -ta=tesla -c main.cpp
pgc++ -ta=tesla invert.o main.o -lopencv_legacy -lopencv_highgui -lopencv_core
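Assuming the build succeeds, a run would look something like this (file names are placeholders, per the usage message in main.cpp):
./a.out input.jpg inverted.jpg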

OpenCV load image/video from stdin

I am trying to read a jpg image from stdin using the following code:
int c,count=0;
vector<uchar> buffer; //buffer for coding
/* for stdin, we need to read in the entire stream until EOF */
while ((c = fgetc(stdin)) != EOF) {
buffer.push_back(c);
count++;
}
cout << "Bytes: " << count << endl;
Mat im = imdecode(Mat(buffer),CV_LOAD_IMAGE_COLOR);
cout << "Decoded\n";
namedWindow("image", CV_WINDOW_AUTOSIZE);
imshow("image",im);
cv::waitKey(0);
I run this in cmd:
OpenCVTest < thumb.jpg
This is what I get:
Bytes: 335
Decoded
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in unknown function, file ..\..\..\src\opencv\modules\core\src\array.cpp, line 2482
The error seems reasonable, since the image is about 7 KB but according to the counter only 335 bytes have been read.
What am I doing wrong?
My ultimate goal is to read a video from stdin frame by frame. Would that be possible?
Thank you very much!
The likely cause of your truncated read: on Windows, stdin is opened in text mode, so the first 0x1A (Ctrl-Z) byte inside the JPEG data is treated as end-of-file; the code below therefore switches stdin to binary mode first. The following reads a JPEG sequence from stdin frame by frame. Examples:
ffmpeg
ffmpeg -i input.mp4 -vcodec mjpeg -f image2pipe -pix_fmt yuvj420p -r 10 -|program.exe
mencoder
mencoder input.mp4 -nosound -ovc lavc -lavcopts vcodec=mjpeg -o -|program.exe
curl
curl "http://10.10.201.241/mjpeg/video.cgi&resolution=320x240"|program.exe
code
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
#include <cstdio> // printf
#if defined(_MSC_VER) || defined(WIN32) || defined(_WIN32) || defined(__WIN32__) \
|| defined(WIN64) || defined(_WIN64) || defined(__WIN64__)
# include <io.h>
# include <fcntl.h>
# define SET_BINARY_MODE(handle) setmode(handle, O_BINARY)
#else
# include <unistd.h> // read()
# define SET_BINARY_MODE(handle) ((void)0)
#endif
#define BUFSIZE 10240
int main ( int argc, char **argv )
{
SET_BINARY_MODE(fileno(stdin));
std::vector<char> data;
bool skip=true;
bool imgready=false;
bool ff=false;
int readbytes=-1;
while (1)
{
char ca[BUFSIZE];
uchar c;
if (readbytes!=0)
{
readbytes=read(fileno(stdin),ca,BUFSIZE);
for(int i=0;i<readbytes;i++)
{
c=ca[i];
// 0xFF 0xD8 is the JPEG start-of-image marker: begin collecting bytes
if(ff && c==(uchar)0xd8)
{
skip=false;
data.push_back((uchar)0xff);
}
// 0xFF 0xD9 is the end-of-image marker: the frame is complete
if(ff && c==0xd9)
{
imgready=true;
data.push_back((uchar)0xd9);
skip=true;
}
ff=c==0xff;
if(!skip)
{
data.push_back(c);
}
if(imgready)
{
if(data.size()!=0)
{
cv::Mat data_mat(data);
cv::Mat frame(imdecode(data_mat,1));
imshow("frame",frame);
waitKey(1);
}else
{
printf("warning");
}
imgready=false;
skip=true;
data.clear();
}
}
}
else
{
throw std::string("zero byte read");
}
}
}

cudamemcpy error:"the launch timed out and was terminated"

My code is a parallel implementation that calculates the nth digit of pi. When I finish the kernel and try to copy the memory back to the host, I get a "the launch timed out and was terminated" error.
I used this code for error checking after each cudaMalloc, cudaMemcpy, and kernel launch.
std::string error = cudaGetErrorString(cudaGetLastError());
printf("%s\n", error.c_str());
These calls were saying everything was fine until the first cudaMemcpy call after returning from the kernel. The error happens in the line "cudaMemcpy(avhost, avdev, size, cudaMemcpyDeviceToHost);" in main. Any help is appreciated.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#define mul_mod(a,b,m) fmod( (double) a * (double) b, m)
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
/* return the inverse of x mod y */
__device__ int inv_mod(int x,int y) {
int q,u,v,a,c,t;
u=x;
v=y;
c=1;
a=0;
do {
q=v/u;
t=c;
c=a-q*c;
a=t;
t=u;
u=v-q*u;
v=t;
} while (u!=0);
a=a%y;
if (a<0) a=y+a;
return a;
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
/* return the inverse of u mod v, if v is odd */
__device__ int inv_mod2(int u,int v) {
int u1,u3,v1,v3,t1,t3;
u1=1;
u3=u;
v1=v;
v3=v;
if ((u&1)!=0) {
t1=0;
t3=-v;
goto Y4;
} else {
t1=1;
t3=u;
}
do {
do {
if ((t1&1)==0) {
t1=t1>>1;
t3=t3>>1;
} else {
t1=(t1+v)>>1;
t3=t3>>1;
}
Y4:;
} while ((t3&1)==0);
if (t3>=0) {
u1=t1;
u3=t3;
} else {
v1=v-t1;
v3=-t3;
}
t1=u1-v1;
t3=u3-v3;
if (t1<0) {
t1=t1+v;
}
} while (t3 != 0);
return u1;
}
/* return (a^b) mod m */
__device__ int pow_mod(int a,int b,int m)
{
int r,aa;
r=1;
aa=a;
while (1) {
if (b&1) r=mul_mod(r,aa,m);
b=b>>1;
if (b == 0) break;
aa=mul_mod(aa,aa,m);
}
return r;
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
/* return true if n is prime */
int is_prime(int n)
{
int r,i;
if ((n % 2) == 0) return 0;
r=(int)(sqrtf(n));
for(i=3;i<=r;i+=2) if ((n % i) == 0) return 0;
return 1;
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
/* return the prime number immediatly after n */
int next_prime(int n)
{
do {
n++;
} while (!is_prime(n));
return n;
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
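/* Descriptive note added for readability (my reading of the macro below):
   DIVN(t,a,v,vinc,kq,kqinc) strips every factor of the prime a out of t,
   adding vinc to the exponent counter v per factor removed; kq is a running
   residue of t mod a, updated incrementally by kqinc to avoid a division. */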
#define DIVN(t,a,v,vinc,kq,kqinc) \
{ \
kq+=kqinc; \
if (kq >= a) { \
do { kq-=a; } while (kq>=a); \
if (kq == 0) { \
do { \
t=t/a; \
v+=vinc; \
} while ((t % a) == 0); \
} \
} \
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
__global__ void digi_calc(int *s, int *av, int *primes, int N, int n, int nthreads){
int a,vmax,num,den,k,kq1,kq2,kq3,kq4,t,v,i,t1, h;
unsigned int tid = blockIdx.x*blockDim.x + threadIdx.x;
// GIANT LOOP
for (h = 0; h<1; h++){
if(tid >= nthreads) continue;
a = primes[tid];
vmax=(int)(logf(3*N)/logf(a));
if (a==2) {
vmax=vmax+(N-n);
if (vmax<=0) continue;
}
av[tid]=1;
for(i=0;i<vmax;i++) av[tid]*= a;
s[tid]=0;
den=1;
kq1=0;
kq2=-1;
kq3=-3;
kq4=-2;
if (a==2) {
num=1;
v=-n;
} else {
num=pow_mod(2,n,av[tid]);
v=0;
}
for(k=1;k<=N;k++) {
t=2*k;
DIVN(t,a,v,-1,kq1,2);
num=mul_mod(num,t,av[tid]);
t=2*k-1;
DIVN(t,a,v,-1,kq2,2);
num=mul_mod(num,t,av[tid]);
t=3*(3*k-1);
DIVN(t,a,v,1,kq3,9);
den=mul_mod(den,t,av[tid]);
t=(3*k-2);
DIVN(t,a,v,1,kq4,3);
if (a!=2) t=t*2; else v++;
den=mul_mod(den,t,av[tid]);
if (v > 0) {
if (a!=2) t=inv_mod2(den,av[tid]);
else t=inv_mod(den,av[tid]);
t=mul_mod(t,num,av[tid]);
for(i=v;i<vmax;i++) t=mul_mod(t,a,av[tid]);
t1=(25*k-3);
t=mul_mod(t,t1,av[tid]);
s[tid]+=t;
if (s[tid]>=av[tid]) s[tid]-=av[tid];
}
}
t=pow_mod(5,n-1,av[tid]);
s[tid]=mul_mod(s[tid],t,av[tid]);
}
__syncthreads();
}
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
int main(int argc,char *argv[])
{
int N,n,i,totalp, h;
double sum;
const char *error;
int *sdev, *avdev, *shost, *avhost, *adev, *ahost;
argc = 2;
argv[1] = "2";
if (argc<2 || (n=atoi(argv[1])) <= 0) {
printf("This program computes the n'th decimal digit of pi\n"
"usage: pi n , where n is the digit you want\n"
);
exit(1);
}
sum = 0;
N=(int)((n+20)*logf(10)/logf(13.5));
totalp=(N/logf(N))+10;
ahost = (int *)calloc(totalp, sizeof(int));
i = 0;
ahost[0]=2;
for(i=1; ahost[i-1]<=(3*N); ahost[i+1]=next_prime(ahost[i])){
i++;
}
// allocate host memory
size_t size = i*sizeof(int);
shost = (int *)malloc(size);
avhost = (int *)malloc(size);
//allocate memory on device
cudaMalloc((void **) &sdev, size);
cudaMalloc((void **) &avdev, size);
cudaMalloc((void **) &adev, size);
cudaMemcpy(adev, ahost, size, cudaMemcpyHostToDevice);
if (i >= 512){
h = 512;
}
else h = i;
dim3 dimGrid(((i+512)/512),1,1);
dim3 dimBlock(h,1,1);
// launch kernel
digi_calc <<<dimGrid, dimBlock >>> (sdev, avdev, adev, N, n, i);
//copy memory back to host
cudaMemcpy(avhost, avdev, size, cudaMemcpyDeviceToHost);
cudaMemcpy(shost, sdev, size, cudaMemcpyDeviceToHost);
// end malloc's, memcpy's, kernel calls
for(h = 0; h < i; h++){
sum=fmod(sum+(double) shost[h]/ (double) avhost[h],1.0);
}
printf("Decimal digits of pi at position %d: %09d\n",n,(int)(sum*1e9));
//free memory
cudaFree(sdev);
cudaFree(avdev);
cudaFree(adev);
free(shost);
free(avhost);
free(ahost);
return 0;
}
This is exactly the same problem you asked about in this question. The kernel is getting terminated early by the driver because it is taking too long to finish. If you read the documentation for any of these runtime API functions you will see the following note:
Note:
Note that this function may also return error codes from previous,
asynchronous launches.
All that is happening is that the first API call after the kernel launch is returning the error incurred while the kernel was running - in this case the cudaMemcpy call. The way you can confirm this for yourself is to do something like this directly after the kernel launch:
// launch kernel
digi_calc <<<dimGrid, dimBlock >>> (sdev, avdev, adev, N, n, i);
std::string error = cudaGetErrorString(cudaPeekAtLastError());
printf("%s\n", error.c_str());
error = cudaGetErrorString(cudaThreadSynchronize());
printf("%s\n", error.c_str());
The cudaPeekAtLastError() call will show you if there are any errors in the kernel launch, and the error code returned by the cudaThreadSynchronize() call will show whether any errors were generated while the kernel was executing.
The solution is exactly as outlined in the previous question: probably the simplest way is to redesign the code so it is "re-entrant", splitting the work over several kernel launches, with each launch safely under the display driver watchdog timer limit.
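A minimal sketch of that idea (my own illustration, not code from the question; digi_calc_chunk and chunk_size are hypothetical, standing in for a variant of digi_calc that takes an offset into the prime table):

// process the primes in fixed-size chunks so each launch stays below the watchdog limit
int chunk_size = 1024; // tune so a single launch finishes well under the timeout
for (int start = 0; start < i; start += chunk_size) {
    int count = (i - start < chunk_size) ? (i - start) : chunk_size;
    dim3 dimBlock(count < 512 ? count : 512, 1, 1);
    dim3 dimGrid((count + 511) / 512, 1, 1);
    digi_calc_chunk<<<dimGrid, dimBlock>>>(sdev, avdev, adev, N, n, start, count);
    cudaThreadSynchronize(); // wait for this chunk before launching the next
    // check cudaGetLastError() here, as shown above
}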
CUDA kernel launches are asynchronous. You can queue launches in a loop and the calls return almost immediately, so the loop appears to take no time at all; then, when you call cudaMemcpy, it blocks until all the queued work has finished, and that is where the timeout surfaces. The way to go is to call cudaThreadSynchronize between iterations.
So remember: if a kernel launch appears to take only nanoseconds, it doesn't mean the kernel is that fast; its writes to global memory are only guaranteed to have completed when cudaMemcpy or cudaThreadSynchronize returns.
