malloc using 4 bytes for char - memory

I am writing code for coursework to examine how memory is managed between the stack and the heap.
#include <stdio.h>
#include <stdlib.h>

#define NUM_OF_CHARS 100

// function prototype
void f(void);

int main()
{
    f();
    return 0;
}

void f(void)
{
    char *ptr1;
    ptr1 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 1: %016lx\n", (long)ptr1);

    char *ptr2;
    ptr2 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 2: %016lx\n", (long)ptr2);
}
When I run this code I get the following:
Address array 1: 000000000209e010
Address array 2: 000000000209e1b0
My expectation was to see a difference of 100 bytes between the two addresses, but the difference is 416 bytes. When I changed NUM_OF_CHARS to other values (200, 300, ...) the difference was always NUM_OF_CHARS*4 + 16, so it seems like malloc is allocating 4 bytes for each char rather than one byte, plus 16 bytes of some overhead.
Can anyone explain what is happening here?

Memory allocation is platform/compiler dependent. The only thing malloc guarantees is that it allocates at least as much memory as you ask for, and nothing more.
There is no guarantee that consecutive allocations will be contiguous, due to alignment and allocator bookkeeping.
Also, you are allocating NUM_OF_CHARS * sizeof(int) bytes, not NUM_OF_CHARS * sizeof(char). That is why you see a difference of NUM_OF_CHARS*4; the remaining 16 bytes can be attributed to the allocator's per-block bookkeeping and alignment padding.
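For reference, a minimal sketch of the corrected allocation: ask for NUM_OF_CHARS bytes (sizeof(char) is 1 by definition, so it can be dropped), and print the pointer with the portable %p format:

    char *ptr1 = malloc(NUM_OF_CHARS);             /* 100 bytes, not 400 */
    printf("Address array 1: %p\n", (void *)ptr1);

With that change the two addresses should differ by roughly NUM_OF_CHARS bytes plus the allocator's per-block overhead.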

Related

Global device memory size limit when using statically allocated memory in CUDA

I thought the maximum size of global memory should be limited only by the GPU device, no matter whether it is allocated statically using __device__ __managed__ or dynamically using cudaMalloc.
But I found that when using the __device__ __managed__ approach, the maximum array size I can declare is much smaller than the GPU device limit.
The minimal working example is as follows:
#include <stdio.h>
#include <cuda_runtime.h>

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}

#define MX 64
#define MY 64
#define MZ 64
#define NX 64
#define NY 64
#define M (MX * MY * MZ)

__device__ __managed__ float A[NY][NX][M];
__device__ __managed__ float B[NY][NX][M];

__global__ void swapAB()
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    for (int j = 0; j < NY; j++)
        for (int i = 0; i < NX; i++)
            A[j][i][tid] = B[j][i][tid];
}

int main()
{
    swapAB<<<M/256, 256>>>();
    gpuErrchk( cudaPeekAtLastError() );
    gpuErrchk( cudaDeviceSynchronize() );
    return 0;
}
It uses 64^5 * 2 * 4 / 2^30 GB = 8 GB of global memory, and I compiled and ran it on an NVIDIA Tesla K40c GPU, which has 12 GB of global memory.
Compile command:
nvcc test.cu -gencode arch=compute_30,code=sm_30
Output warning:
warning: overflow in implicit constant conversion.
When I ran the generated executable, I got this error:
GPUassert: an illegal memory access was encountered test.cu
Surprisingly, if I use dynamically allocated global memory of the same size (8 GB) via the cudaMalloc API instead, there is no compile warning and no runtime error.
I'm wondering if there are any special limitations on the allocatable size of static global device memory in CUDA.
Thanks!
PS: OS and CUDA: CentOS 6.5 x64, CUDA-7.5.
This would appear to be a limitation of the CUDA runtime API. The root cause is this function (in CUDA 7.5):
__cudaRegisterVar(
    void **fatCubinHandle,
    char *hostVar,
    char *deviceAddress,
    const char *deviceName,
    int ext,
    int size,
    int constant,
    int global
);
which only accepts a signed int for the size of any statically declared device variable. This would limit the maximum size to 2^31 (2147483648) bytes. The warning you see is because the CUDA front end is emitting boilerplate code containing calls to __cudaRegisterVar like this:
__cudaRegisterManagedVariable(__T26, __shadow_var(A,::A), 0, 4294967296, 0, 0);
__cudaRegisterManagedVariable(__T26, __shadow_var(B,::B), 0, 4294967296, 0, 0);
It is the 4294967296 that is the source of the problem. The size overflows the signed integer argument and causes the API call to blow up. So it seems you are limited to 2^31 bytes (2 GiB) per static variable for the moment. I would recommend raising this as a bug with NVIDIA if it is a serious problem for your application.
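Not part of the original answer, but as an illustration of the dynamic-allocation route the question already found to work, the two arrays could be created with cudaMallocManaged, whose size parameter is a size_t and so is not truncated. The names d_A/d_B and the flattened indexing are placeholders, not the poster's code:

    /* illustrative workaround: allocate the two 4 GiB arrays dynamically */
    float *d_A, *d_B;
    size_t bytes = (size_t)NY * NX * M * sizeof(float);   /* 4 GiB per array */
    gpuErrchk( cudaMallocManaged(&d_A, bytes) );
    gpuErrchk( cudaMallocManaged(&d_B, bytes) );
    /* inside the kernel, A[j][i][tid] becomes d_A[((size_t)j * NX + i) * M + tid] */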

CUDA: Global memory broadcast to registers on Compute 5.0 architecture

I have the following code:
__global__ void someKernel(unsigned char *global_mem, unsigned int *start)
{
    unsigned int size;
    size = *(unsigned int *)&global_mem[start[blockIdx.x]];
    // Do many things with size
}
Here all of my threads from the same block will read the same memory location from global memory and do many things with it.
How fast/slow is the copy going to be? I have a block of 256 threads.
Will the global memory broadcast to the whole block, or to a single warp (meaning I will have to do 256/32 reads from global memory)?
Would the following be better, where I first read from global memory into shared memory and then copy into a register (bearing in mind the warp divergence and synchronization overhead):
__global__ void someKernel(unsigned char *global_mem, unsigned int *start)
{
    __shared__ int tmpsize;
    unsigned int size;
    int i = threadIdx.x;
    if (i == 0) {
        tmpsize = *(unsigned int *)&global_mem[start[blockIdx.x]];
    }
    __syncthreads();
    size = tmpsize;
    // Do many things with size
}
Would this be faster, considering that I have a huge grid with blocks of 256 threads, and each block reads a different start location?

Why doesn't 64-bit aligned XOR run faster than 32-bit aligned XOR?

I want to test the speed of XORing two blocks of memory, so I ran an experiment on a 64-bit machine (4 MB cache), XORing two regions of memory with 32-bit alignment and 64-bit alignment respectively. I thought the 64-bit aligned XOR would be much faster than the 32-bit aligned XOR, but the two versions run at almost exactly the same speed.
Code:
void region_xor_w32(unsigned char *r1,   /* Region 1 */
                    unsigned char *r2,   /* Region 2 */
                    unsigned char *r3,   /* Sum region */
                    int nbytes)          /* Number of bytes in region */
{
    uint32_t *l1;
    uint32_t *l2;
    uint32_t *l3;
    uint32_t *ltop;
    unsigned char *ctop;

    ctop = r1 + nbytes;
    ltop = (uint32_t *) ctop;
    l1 = (uint32_t *) r1;
    l2 = (uint32_t *) r2;
    l3 = (uint32_t *) r3;
    while (l1 < ltop) {
        *l3 = ((*l1) ^ (*l2));
        l1++;
        l2++;
        l3++;
    }
}
void region_xor_w64(unsigned char *r1,   /* Region 1 */
                    unsigned char *r2,   /* Region 2 */
                    unsigned char *r3,   /* Sum region */
                    int nbytes)          /* Number of bytes in region */
{
    uint64_t *l1;
    uint64_t *l2;
    uint64_t *l3;
    uint64_t *ltop;
    unsigned char *ctop;

    ctop = r1 + nbytes;
    ltop = (uint64_t *) ctop;
    l1 = (uint64_t *) r1;
    l2 = (uint64_t *) r2;
    l3 = (uint64_t *) r3;
    while (l1 < ltop) {
        *l3 = ((*l1) ^ (*l2));
        l1++;
        l2++;
        l3++;
    }
}
Result: both versions took essentially the same time (timing plot omitted).
I believe this is due to data starvation. That is, your CPU is so fast and your code is so efficient that your memory subsystem simply can't keep up. Even XORing in a 32-bit aligned way takes less time than fetching data from memory. That's why both 32-bit and 64-bit aligned approaches have the same speed — that of your memory subsystem.
To demonstrate, I have reproduced your experiment, but this time with four different ways of XORing:
non-aligned (i.e. byte-aligned) XORing;
32-bit aligned XORing;
64-bit aligned XORing;
128-bit aligned XORing.
The last one was implemented via _mm_xor_si128(), which is part of the SSE2 instruction set (a sketch of that variant is shown below).
As the measurements show, switching to 128-bit processing gave no performance boost. Switching to per-byte processing, on the other hand, slowed everything down: in that case the memory subsystem still beats the CPU.
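For completeness, a sketch of the 128-bit variant along the lines of the two functions above, assuming nbytes is a multiple of 16 and all three regions are 16-byte aligned (the function name is illustrative):

#include <emmintrin.h>   /* SSE2 intrinsics */

void region_xor_w128(unsigned char *r1, unsigned char *r2,
                     unsigned char *r3, int nbytes)
{
    __m128i *l1 = (__m128i *) r1;
    __m128i *l2 = (__m128i *) r2;
    __m128i *l3 = (__m128i *) r3;
    __m128i *ltop = (__m128i *) (r1 + nbytes);

    while (l1 < ltop) {
        /* load 16 bytes from each source, XOR them, store 16 bytes */
        _mm_store_si128(l3, _mm_xor_si128(_mm_load_si128(l1),
                                          _mm_load_si128(l2)));
        l1++;
        l2++;
        l3++;
    }
}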

How is a pointer stored in memory?

I am working on an OS project and I am just wondering how a pointer is stored in memory. I understand that a pointer is 4 bytes, so how is the pointer spread among those 4 bytes?
My issue is that I am trying to store a pointer in a 4-byte slot of memory. Let's say the pointer is 0x7FFFFFFF. What is stored at each of the 4 bytes?
A pointer is stored the same way as any other multi-byte value. The 4 bytes are stored according to the endianness of the system. Let's say the 4 bytes sit at the addresses below:
Big endian (most significant byte first):
Address Byte
0x1000 0x7F
0x1001 0xFF
0x1002 0xFF
0x1003 0xFF
Little endian (least significant byte first):
Address Byte
0x1000 0xFF
0x1001 0xFF
0x1002 0xFF
0x1003 0x7F
By the way, 4-byte addresses mean a 32-bit system; a 64-bit system has 8-byte addresses.
EDIT:
To reference each individual byte of the pointer, you need a pointer to the pointer. :)
Say you have:
int i = 0;
int *pi = &i;    // say pi == 0x7fffffff
int **ppi = &pi; // from the above example, ppi == 0x1000
Simple pointer arithmetic would get you the pointer to each byte.
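For example, a minimal sketch of that arithmetic, continuing the snippet above (the values assume the little-endian layout from the table):

    unsigned char *bytes = (unsigned char *) ppi;  /* points at the first byte of pi */
    unsigned char b0 = bytes[0];                   /* 0xFF on a little-endian machine */
    unsigned char b3 = bytes[3];                   /* 0x7F on a little-endian machine */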
You should read up on Endianness. Normally you wouldn't work with just one byte of a pointer at a time, though, so the order of the bytes isn't relevant.
Update: Here's an example of making a fake pointer with a known value and then printing out each of its bytes:
#include <stdio.h>

int main(int argc, char *argv[]) {
    int *p = (int *) 0x12345678;
    unsigned char *cp = (unsigned char *) &p;
    int i;
    for (i = 0; i < sizeof(p); i++)
        printf("%d: %.2x\n", i, cp[i]);
    return 0;
}

CUDA shared array not getting values?

I am trying to implement a simple parallel reduction. I am using the code from the CUDA SDK. But somehow there is a problem in my kernel: the shared array is not getting the values of the global array and is all zeros.
extern __shared__ float4 sdata[];

// each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = dev_src[i];
__syncthreads();

// do reduction in shared mem
for (unsigned int s = 1; s < blockDim.x; s *= 2) {
    if (tid % (2*s) == 0) {
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}

// write result for this block to global mem
if (tid == 0)
    out[blockIdx.x] = sdata[0];
EDIT:
OK, I got it working by removing the extern keyword and giving the shared array a constant size like 512. I am in good shape now. Maybe someone can explain why it was not working with the extern keyword.
I think I know why this is happening as I have faced this before. How are you calling the kernel?
Remember that in the call kernel<<<blocks, threads, sharedMemory>>>, sharedMemory should be the size of the shared memory in bytes. So, if you are declaring 512 elements, the third parameter should be 512 * sizeof(float4). I think you are just calling it as below, which is wrong:
kernel<<<blocks,threads,512>>> // this is wrong
Hope that helps
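A sketch of the corrected launch, assuming 512 threads per block and one float4 element per thread; the kernel name reduce and the grid size blocks are placeholders:

    // third argument = dynamic shared memory size in bytes, not element count
    reduce<<<blocks, 512, 512 * sizeof(float4)>>>(dev_src, out);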
