I am working on an OS project and I am wondering how a pointer is stored in memory. I understand that a pointer is 4 bytes (on a 32-bit system), so how is the pointer spread among those 4 bytes?
My issue is that I am trying to store a pointer in a 4-byte slot of memory. Let's say the pointer is 0x7FFFFFFF. What is stored at each of the 4 bytes?
A pointer is stored the same way as any other multi-byte value: its 4 bytes are laid out according to the endianness of the system. Say the pointer is stored starting at address 0x1000:
Big endian (most significant byte first):
Address Byte
0x1000 0x7F
0x1001 0xFF
0x1002 0xFF
0x1003 0xFF
Little endian (least significant byte first):
Address Byte
0x1000 0xFF
0x1001 0xFF
0x1002 0xFF
0x1003 0x7F
By the way, 4-byte addresses imply a 32-bit system; a 64-bit system has 8-byte addresses.
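A quick way to check the pointer size on any given platform (just an illustrative snippet):
#include <stdio.h>

int main(void) {
    printf("%zu\n", sizeof(void *)); /* prints 4 on a 32-bit system, 8 on a 64-bit one */
    return 0;
}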
EDIT:
To reference each individual byte of the pointer, you need another pointer. :)
Say you have:
int i = 0;
int *pi = &i; // say pi == 0x7fffffff
int **ppi = &pi; // from the above example, ppi == 0x1000
Simple pointer arithmetic would get you the pointer to each byte.
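For example, a minimal sketch building on the declarations above (the concrete byte values assume the 0x7FFFFFFF / 0x1000 example):
unsigned char *bytes = (unsigned char *) ppi; /* bytes == 0x1000, the address of pi's first byte */
unsigned char b0 = bytes[0];                  /* 0xFF on little endian, 0x7F on big endian */
unsigned char b3 = bytes[3];                  /* 0x7F on little endian, 0xFF on big endian */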
You should read up on Endianness. Normally you wouldn't work with just one byte of a pointer at a time, though, so the order of the bytes isn't relevant.
Update: Here's an example of making a fake pointer with a known value and then printing out each of its bytes:
#include <stdio.h>
int main(int argc, char* argv[]) {
    int *p = (int *) 0x12345678;              /* fake pointer with a known value */
    unsigned char *cp = (unsigned char *) &p; /* view the pointer itself as bytes */
    int i;

    for (i = 0; i < (int) sizeof(p); i++)
        printf("%d: %.2x\n", i, cp[i]);

    return 0;
}
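On a typical little-endian x86 machine this prints 78, 56, 34, 12 for the first four bytes; if pointers are 8 bytes wide, four trailing 00 bytes follow.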
I thought the maximum size of global memory should be limited only by the GPU device, no matter whether it is allocated statically using __device__ __managed__ or dynamically using cudaMalloc.
But I found that if I use the __device__ __managed__ way, the maximum array size I can declare is much smaller than the GPU device limit.
The minimal working example is as follows:
#include <stdio.h>
#include <cuda_runtime.h>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}
#define MX 64
#define MY 64
#define MZ 64
#define NX 64
#define NY 64
#define M (MX * MY * MZ)
__device__ __managed__ float A[NY][NX][M];
__device__ __managed__ float B[NY][NX][M];
__global__ void swapAB()
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    for (int j = 0; j < NY; j++)
        for (int i = 0; i < NX; i++)
            A[j][i][tid] = B[j][i][tid];
}

int main()
{
    swapAB<<<M/256, 256>>>();
    gpuErrchk( cudaPeekAtLastError() );
    gpuErrchk( cudaDeviceSynchronize() );
    return 0;
}
It uses 64^5 * 2 * 4 / 2^30 GB = 8 GB of global memory, and I compile and run it on an NVIDIA Tesla K40c GPU, which has 12 GB of global memory.
Compiler cmd:
nvcc test.cu -gencode arch=compute_30,code=sm_30
Output warning:
warning: overflow in implicit constant conversion.
When I run the generated executable, I get this error:
GPUassert: an illegal memory access was encountered test.cu
Surprisingly, if I instead use dynamically allocated global memory of the same size (8 GB) via the cudaMalloc API, there is no compile warning and no runtime error.
I'm wondering whether there is any special limitation on the allocatable size of statically declared global device memory in CUDA.
Thanks!
PS: OS and CUDA: CentOS 6.5 x64, CUDA-7.5.
This would appear to be a limitation of the CUDA runtime API. The root cause is this function (in CUDA 7.5):
__cudaRegisterVar(
void **fatCubinHandle,
char *hostVar,
char *deviceAddress,
const char *deviceName,
int ext,
int size,
int constant,
int global
);
which only accepts a signed int for the size of any statically declared device variable. This limits the maximum size to 2^31 (2147483648) bytes. The warning you see is because the CUDA front end is emitting boilerplate code containing calls to __cudaRegisterVar like this:
__cudaRegisterManagedVariable(__T26, __shadow_var(A,::A), 0, 4294967296, 0, 0);
__cudaRegisterManagedVariable(__T26, __shadow_var(B,::B), 0, 4294967296, 0, 0);
It is the 4294967296 which is the source of the problem: the size overflows the signed integer and makes the API call blow up. So it seems you are limited to 2 GB per static variable for the moment. I would recommend raising this as a bug with NVIDIA if it is a serious problem for your application.
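The question already confirms that dynamically allocating the same 8 GB works, so as a workaround here is a minimal sketch using cudaMallocManaged to keep the managed-memory semantics of the original arrays (the flattened indexing is just one possible layout, not anything mandated by the API):
float *A = NULL, *B = NULL;                          // flattened replacements for A[NY][NX][M] and B[NY][NX][M]
size_t bytes = (size_t)NY * NX * M * sizeof(float);  // 4 GB each; compute the size in 64 bits to avoid int overflow
gpuErrchk( cudaMallocManaged(&A, bytes) );
gpuErrchk( cudaMallocManaged(&B, bytes) );
// inside the kernel, index as A[((size_t)j * NX + i) * M + tid]
gpuErrchk( cudaFree(A) );
gpuErrchk( cudaFree(B) );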
I am writing some code for a piece of coursework, to examine how memory is managed between the stack and the heap.
#include<stdio.h>
#include<stdlib.h>
#define NUM_OF_CHARS 100
// function prototype
void f(void);
int main()
{
    f();
    return 0;
}

void f(void)
{
    char *ptr1;
    ptr1 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 1: %016lx\n", (long)ptr1);

    char *ptr2;
    ptr2 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 2: %016lx\n", (long)ptr2);
}
When I run this code I get the following:
Address array 1: 000000000209e010
Address array 2: 000000000209e1b0
My expectation was to see a difference of 100 bytes between the two addresses, but the difference is 416 bytes. When I change NUM_OF_CHARS to other values (200, 300, ...), the difference is always NUM_OF_CHARS*4 + 16, so it seems like malloc is allocating 4 bytes for each char rather than one, plus 16 bytes of some overhead.
Can anyone explain what is happening here?
Memory allocation is platform/compiler dependent. The only thing malloc guarantees is that it gives you at least as much memory as you asked for.
There is also no guarantee that successive allocations will be contiguous, because of alignment and allocator bookkeeping.
Also, you are allocating NUM_OF_CHARS * sizeof(int) bytes, not NUM_OF_CHARS * sizeof(char), in your code. That is why you see a NUM_OF_CHARS*4 difference; the remaining 16 bytes can be attributed to the allocator's per-block overhead and padding.
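A minimal sketch of the corrected allocation (sizeof(char) is 1 by definition, so it can simply be omitted):
char *ptr1 = malloc(NUM_OF_CHARS);                  /* 100 bytes, not 100 * sizeof(int) */
char *ptr2 = malloc(NUM_OF_CHARS * sizeof(char));   /* equivalent, since sizeof(char) == 1 */
Even with this fix, the gap between the two addresses will typically be a bit more than 100 bytes, because the allocator rounds block sizes up and keeps bookkeeping data between blocks.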
I have to pass medical image data retrieved from one proprietary device SDK to an image processing function in another - also proprietary - device SDK from a second vendor.
The first function gives me the image in a planar rgb format:
int mrcpgk_retrieve_frame(uint16_t *r, uint16_t *g, uint16_t *b, int w, int h);
The reason for uint16_t is that the device can be switched to output each color value encoded as 16-bit floating point values. However, I'm operating in "byte mode" and thus the upper 8 bits of each color value are always zero.
The second function from another device SDK is defined like this:
BOOL process_cpgk_image(const PBYTE rgba, DWORD width, DWORD height);
So we get three buffers filled with the following bits (16-bit planar RGB):
R: 00000000 rrrrrrrr 00000000 rrrrrrrr ...
G: 00000000 gggggggg 00000000 gggggggg ...
B: 00000000 bbbbbbbb 00000000 bbbbbbbb ...
And the desired output illustrated in bits is:
RGBA: rrrrrrrrggggggggbbbbbbbb00000000 rrrrrrrrggggggggbbbbbbbb00000000 ....
We don't have access to the source code of these functions and cannot change the environment. Currently we have implemented the following basic "bridge" to connect the two devices:
void process_frames(int width, int height)
{
    uint16_t *r = (uint16_t*)malloc(width*height*sizeof(uint16_t));
    uint16_t *g = (uint16_t*)malloc(width*height*sizeof(uint16_t));
    uint16_t *b = (uint16_t*)malloc(width*height*sizeof(uint16_t));
    uint8_t *rgba = (uint8_t*)malloc(width*height*4);
    int i;

    memset(rgba, 0, width*height*4);

    while ( mrcpgk_retrieve_frame(r, g, b, width, height) != 0 )
    {
        for (i = 0; i < width*height; i++)
        {
            rgba[4*i+0] = (uint8_t)r[i];
            rgba[4*i+1] = (uint8_t)g[i];
            rgba[4*i+2] = (uint8_t)b[i];
        }
        process_cpgk_image(rgba, width, height);
    }

    free(r);
    free(g);
    free(b);
    free(rgba);
}
This code works perfectly fine, but processing takes very long for many thousands of high-resolution images. The two SDK functions for retrieving and processing are very fast, and our bridge is currently the bottleneck.
I know how to do basic arithmetic, logical and shifting operations with SSE2 intrinsics, but I wonder if and how this 16-bit planar RGB to packed RGBA conversion can be accelerated with MMX, SSE2 or [S]SSE3.
(SSE2 would be preferable because there are still some pre-2005 appliances in use.)
Here is a simple SSE2 implementation (it assumes the r, g, b and rgba buffers are 16-byte aligned, which malloc provides on most 64-bit platforms):
#include <emmintrin.h> // SSE2 intrinsics
#include <assert.h>

assert((width*height) % 8 == 0); // NB: total pixel count must be a multiple of 8

for (i = 0; i < width*height; i += 8)
{
    __m128i vr = _mm_load_si128((__m128i *)&r[i]);          // load 8 pixels from r[i]
    __m128i vg = _mm_load_si128((__m128i *)&g[i]);          // load 8 pixels from g[i]
    __m128i vb = _mm_load_si128((__m128i *)&b[i]);          // load 8 pixels from b[i]
    __m128i vrg = _mm_or_si128(vr, _mm_slli_epi16(vg, 8));  // merge r/g
    __m128i vrgba = _mm_unpacklo_epi16(vrg, vb);            // permute first 4 pixels
    _mm_store_si128((__m128i *)&rgba[4*i], vrgba);          // store first 4 pixels to rgba[4*i]
    vrgba = _mm_unpackhi_epi16(vrg, vb);                    // permute second 4 pixels
    _mm_store_si128((__m128i *)&rgba[4*i+16], vrgba);       // store second 4 pixels to rgba[4*i+16]
}
A reference implementation using AVX2 instructions:
#include <immintrin.h> // AVX2 intrinsics
#include <assert.h>
#include <stdint.h>

assert((width*height) % 16 == 0); // total pixel count must be a multiple of 16
assert((uintptr_t)r % 32 == 0 && (uintptr_t)g % 32 == 0 && (uintptr_t)b % 32 == 0 && (uintptr_t)rgba % 32 == 0); // all pointers must have 32-byte alignment

for (i = 0; i < width*height; i += 16)
{
    // 0xD8 swaps the two middle 64-bit quarters so the lane-wise unpacks below produce contiguous pixels
    __m256i vr = _mm256_permute4x64_epi64(_mm256_load_si256((__m256i *)(r + i)), 0xD8); // load 16 pixels from r[i]
    __m256i vg = _mm256_permute4x64_epi64(_mm256_load_si256((__m256i *)(g + i)), 0xD8); // load 16 pixels from g[i]
    __m256i vb = _mm256_permute4x64_epi64(_mm256_load_si256((__m256i *)(b + i)), 0xD8); // load 16 pixels from b[i]
    __m256i vrg = _mm256_or_si256(vr, _mm256_slli_si256(vg, 1));  // merge r/g (the high byte of each g element is zero)
    __m256i vrgba = _mm256_unpacklo_epi16(vrg, vb);               // permute first 8 pixels
    _mm256_store_si256((__m256i *)(rgba + 4*i), vrgba);           // store first 8 pixels to rgba[4*i]
    vrgba = _mm256_unpackhi_epi16(vrg, vb);                       // permute second 8 pixels
    _mm256_store_si256((__m256i *)(rgba + 4*i + 32), vrgba);      // store second 8 pixels to rgba[4*i + 32]
}
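If 32-byte alignment cannot be guaranteed for the buffers, the aligned loads and stores can be replaced with _mm256_loadu_si256 / _mm256_storeu_si256 at a small performance cost.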
I want to test the speed of XORing two blocks of memory, so I ran an experiment on a 64-bit machine (4 MB cache): I XOR two regions of memory using 32-bit accesses and 64-bit accesses respectively. I thought the 64-bit version would be much faster than the 32-bit one, but the two run at practically the same speed.
Code:
void region_xor_w32(unsigned char *r1,   /* Region 1 */
                    unsigned char *r2,   /* Region 2 */
                    unsigned char *r3,   /* Sum region */
                    int nbytes)          /* Number of bytes in region */
{
    uint32_t *l1;
    uint32_t *l2;
    uint32_t *l3;
    uint32_t *ltop;
    unsigned char *ctop;

    ctop = r1 + nbytes;
    ltop = (uint32_t *) ctop;
    l1 = (uint32_t *) r1;
    l2 = (uint32_t *) r2;
    l3 = (uint32_t *) r3;
    while (l1 < ltop) {
        *l3 = ((*l1) ^ (*l2));
        l1++;
        l2++;
        l3++;
    }
}

void region_xor_w64(unsigned char *r1,   /* Region 1 */
                    unsigned char *r2,   /* Region 2 */
                    unsigned char *r3,   /* Sum region */
                    int nbytes)          /* Number of bytes in region */
{
    uint64_t *l1;
    uint64_t *l2;
    uint64_t *l3;
    uint64_t *ltop;
    unsigned char *ctop;

    ctop = r1 + nbytes;
    ltop = (uint64_t *) ctop;
    l1 = (uint64_t *) r1;
    l2 = (uint64_t *) r2;
    l3 = (uint64_t *) r3;
    while (l1 < ltop) {
        *l3 = ((*l1) ^ (*l2));
        l1++;
        l2++;
        l3++;
    }
}
Result: both versions run at essentially the same speed.
I believe this is due to data starvation. That is, your CPU is so fast and your code is so efficient that the memory subsystem simply can't keep up. Even XORing in 32-bit chunks takes less time than fetching the data from memory. That's why both the 32-bit and 64-bit aligned approaches run at the same speed: that of your memory subsystem.
To demonstrate this, I reproduced your experiment, but this time with four different ways of XORing:
non-aligned (i.e. byte-aligned) XORing;
32-bit aligned XORing;
64-bit aligned XORing;
128-bit aligned XORing.
The last one was implemented via _mm_xor_si128(), which is part of the SSE2 instruction set.
In my measurements, switching to 128-bit processing gave no performance boost. Switching to per-byte processing, on the other hand, slowed everything down, because in that case the memory subsystem still beats the CPU.
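For reference, a minimal sketch of the 128-bit variant (it assumes nbytes is a multiple of 16 and that all three buffers are 16-byte aligned; the function name just mirrors the ones in the question):
#include <emmintrin.h> /* SSE2 intrinsics */

void region_xor_w128(unsigned char *r1,  /* Region 1 */
                     unsigned char *r2,  /* Region 2 */
                     unsigned char *r3,  /* Sum region */
                     int nbytes)         /* Number of bytes in region */
{
    int i;
    for (i = 0; i < nbytes; i += 16) {
        __m128i a = _mm_load_si128((__m128i *)(r1 + i));            /* load 16 bytes from region 1 */
        __m128i b = _mm_load_si128((__m128i *)(r2 + i));            /* load 16 bytes from region 2 */
        _mm_store_si128((__m128i *)(r3 + i), _mm_xor_si128(a, b));  /* store the XOR into the sum region */
    }
}
Since the loop is memory-bound, it runs at roughly the same speed as the 32-bit and 64-bit versions on this kind of hardware.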
I need exactly 1 byte for some kind of socket-based application, and I can't find a way to create it.
unsigned char mydata = 3;
[NSMutableData dataWithBytes:&mydata length:sizeof(mydata)];
See the NSData class reference; an unsigned char is used here because it holds exactly 1 byte.