What exactly are the memory transaction metrics reported by nvprof?

I'm trying to figure out what exactly each of the metrics reported by nvprof means. More specifically, I can't figure out which transactions count as System Memory and Device Memory reads and writes. I wrote a very basic code just to help figure this out.
#define TYPE float
#define BDIMX 16
#define BDIMY 16

#include <cuda.h>
#include <cstdio>
#include <cstdlib> // malloc
#include <cmath>   // ceil
#include <iostream>

__global__ void kernel(TYPE *g_output, TYPE *g_input, const int dimx, const int dimy)
{
    __shared__ float s_data[BDIMY][BDIMX];
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    int in_idx = iy * dimx + ix; // index for reading input
    int tx = threadIdx.x;        // thread's x-index into corresponding shared memory tile
    int ty = threadIdx.y;        // thread's y-index into corresponding shared memory tile
    s_data[ty][tx] = g_input[in_idx];
    __syncthreads();
    g_output[in_idx] = s_data[ty][tx] * 1.3;
}
int main(){
    int size_x = 16, size_y = 16;
    dim3 numTB;
    numTB.x = (int)ceil((double)(size_x)/(double)BDIMX);
    numTB.y = (int)ceil((double)(size_y)/(double)BDIMY);
    dim3 tbSize;
    tbSize.x = BDIMX;
    tbSize.y = BDIMY;
    float *a, *a_out;
    float *a_d = (float *) malloc(size_x * size_y * sizeof(TYPE));
    cudaMalloc((void**)&a, size_x * size_y * sizeof(TYPE));
    cudaMalloc((void**)&a_out, size_x * size_y * sizeof(TYPE));
    for(int index = 0; index < size_x * size_y; index++){
        a_d[index] = index;
    }
    cudaMemcpy(a, a_d, size_x * size_y * sizeof(TYPE), cudaMemcpyHostToDevice);
    kernel<<<numTB, tbSize>>>(a_out, a, size_x, size_y);
    cudaDeviceSynchronize();
    return 0;
}
Then I run nvprof --metrics all on the executable to see all the metrics. This is the part I'm interested in:
Metric Name                 Metric Description                  Min  Max  Avg
Device "Tesla K40c (0)"
  Kernel: kernel(float*, float*, int, int)
    local_load_transactions    Local Load Transactions              0    0    0
    local_store_transactions   Local Store Transactions             0    0    0
    shared_load_transactions   Shared Load Transactions             8    8    8
    shared_store_transactions  Shared Store Transactions            8    8    8
    gld_transactions           Global Load Transactions             8    8    8
    gst_transactions           Global Store Transactions            8    8    8
    sysmem_read_transactions   System Memory Read Transactions      0    0    0
    sysmem_write_transactions  System Memory Write Transactions     4    4    4
    tex_cache_transactions     Texture Cache Transactions           0    0    0
    dram_read_transactions     Device Memory Read Transactions      0    0    0
    dram_write_transactions    Device Memory Write Transactions    40   40   40
    l2_read_transactions       L2 Read Transactions                70   70   70
    l2_write_transactions      L2 Write Transactions               46   46   46
I understand the shared and global accesses. The global accesses are coalesced, and since there are 8 warps (256 threads), there are 8 transactions.
But I can't figure out the system memory and device memory write transaction numbers.

It helps if you have a model of the GPU memory hierarchy with both logical and physical spaces, such as the one here.
Referring to the "overview tab" diagram:
gld_transactions refer to transactions issued from the warp targeting the global logical space. On the diagram, this would be the line from the "Kernel" box on the left to the "global" box to the right of it, and the logical data movement direction would be from right to left.
gst_transactions refer to the same line as above, but logically from left to right. Note that these logical global transactions could hit in a cache and not go anywhere after that. From the metrics standpoint, those transaction types only refer to the indicated line on the diagram.
dram_write_transactions refer to the line on the diagram which connects device memory on the right with the L2 cache, and the logical data flow is from left to right on this line. Since the L2 cacheline is 32 bytes (whereas the L1 cacheline, and the size of a global transaction, is 128 bytes), the device memory transactions are also 32 bytes, not 128 bytes. So a global write transaction that passes through L1 (it is a write-through cache, if enabled) and L2 will generate 4 dram_write_transactions. This accounts for 32 of the 40 transactions.
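As a sanity check on those numbers, here is a minimal sketch of the arithmetic, assuming the single 256-thread block launched above, 4-byte floats, 128-byte global transactions, and 32-byte L2/DRAM transactions (the sizes quoted above, not measured values):
#include <cstdio>
int main() {
    const int threads        = 16 * 16; // one 16x16 threadblock
    const int bytesPerThread = 4;       // sizeof(float)
    const int globalTxBytes  = 128;     // one coalesced global transaction
    const int dramTxBytes    = 32;      // L2 cacheline / DRAM transaction size
    int totalBytes = threads * bytesPerThread; // 1024 bytes stored by the kernel
    printf("gst_transactions (expected): %d\n", totalBytes / globalTxBytes);        // 8
    printf("dram writes from the store (expected): %d\n", totalBytes / dramTxBytes); // 32
    return 0;
}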
system memory transactions target zero-copy host memory. You don't seem to have that so I can't explain those.
Note that in some cases, for some metrics, on some GPUs, the profiler may show some "inaccuracy" when very small numbers of threadblocks are launched. For example, some metrics are sampled on a per-SM basis and scaled (device memory transactions are not in this category, however). If disparate work is being done on each SM, perhaps due to a very small number of threadblocks launched, the scaling can be misleading/less accurate. Generally, if you launch a larger number of threadblocks, these effects become insignificant.
This answer may also be of interest.

Related

Integer overflow in Fibonacci number

I was solving this CodeChef problem on Fibonacci numbers. It says the number can have up to 1000 digits, so why doesn't that cause an integer overflow in the tester's solution when it scans the number and stores it in an unsigned long long int? I can't understand how the solution works. Below are the problem and the tester's solution.
The Head Chef has been playing with Fibonacci numbers for a long time. He has learnt several tricks related to Fibonacci numbers. Now he wants to test his chefs on these skills.
A Fibonacci number is defined by the recurrence:
f(n) = f(n-1) + f(n-2) for n > 2
and f(1) = 0
and f(2) = 1.
Given a number A, determine if it is a Fibonacci number.
Input
The first line of the input contains an integer T denoting the number of test cases. The description of T test cases follows.
The only line of each test case contains a single integer A denoting the number to be checked.
Output
For each test case, output a single line containing "YES" if the given number is a Fibonacci number, otherwise output a single line containing "NO".
Constraints
1 ≤ T ≤ 1000
1 ≤ number of digits in A ≤ 1000
The sum of the number of digits in A in all test cases <= 10000.
Example
Input:
3
3
4
5
Output:
YES
NO
YES
Tester's solution:
#include <iostream>
#include <cstdio>
#include <algorithm>
#include <set>
#include <cstring>
using namespace std;

int const mx = 6666;
set<unsigned long long> f;
unsigned long long fib[mx + 10];
char s[mx + 1];

int main(){
    // freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
    fib[0] = 0;
    fib[1] = 1;
    f.insert(1);
    f.insert(0);
    int i;
    for (i = 2; i <= mx; i++){
        fib[i] = fib[i - 1] + fib[i - 2];
        f.insert(fib[i]);
    }
    int tc;
    cin >> tc;
    while (tc--){
        unsigned long long n = 0, ten = 10;
        cin >> s;
        int len = strlen(s);
        for (i = 0; i < len; i++){
            char q = s[i];
            unsigned long long a = q - '0';
            n = n * ten + a;
        }
        if (f.find(n) == f.end()) printf("NO\n");
        else printf("YES\n");
    }
    return 0;
}
From cplusplus.com you will see that:
ULLONG_MAX: Maximum value for an object of type unsigned long long int is 18446744073709551615 (2^64 - 1) or greater.
The actual value depends on the particular system and library implementation, but shall reflect the limits of these types in the target platform.
The above information is just to let you know it's a BIG number. Moreover, the cause of not getting an overflow is not the limit I mentioned.
Most probably, the judge's input file does not contain any input that can cause an overflow.
And it's still possible to construct such an input even while fulfilling the conditions (see the sketch after the list):
1 ≤ T ≤ 1000
1 ≤ number of digits in A ≤ 1000
The sum of the number of digits in A in all test cases <= 10000.
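To make that concrete, here is a minimal sketch (not part of the tester's solution; the helper name parse_ull is made up for illustration) of how the parsing loop could detect that a decimal string no longer fits in an unsigned long long:
#include <cstdio>
#include <climits>

// returns false if the decimal string in s would overflow unsigned long long
bool parse_ull(const char *s, unsigned long long &n) {
    n = 0;
    for (; *s; ++s) {
        unsigned long long digit = (unsigned long long)(*s - '0');
        // n * 10 + digit > ULLONG_MAX  <=>  n > (ULLONG_MAX - digit) / 10
        if (n > (ULLONG_MAX - digit) / 10) return false;
        n = n * 10 + digit;
    }
    return true;
}

int main() {
    unsigned long long n;
    // 20 nines do not fit in 64 bits, so this prints "overflows"
    printf("%s\n", parse_ull("99999999999999999999", n) ? "fits" : "overflows");
    return 0;
}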

Amount of local memory per CUDA thread

I read in the NVIDIA documentation (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications, table #12) that the amount of local memory per thread is 512 KB for my GPU (GTX 580, compute capability 2.0).
I tried unsuccessfully to check this limit on Linux with CUDA 6.5.
Here is the code I used (its only purpose is to test the local memory limit; it doesn't do any useful computation):
#include <iostream>
#include <stdio.h>

#define MEMSIZE 65000 // 65000 -> out of memory, 60000 -> ok

inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=false)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort)
            exit(code);
    }
}

inline void gpuCheckKernelExecutionError(const char *file, int line)
{
    gpuAssert(cudaPeekAtLastError(), file, line);
    gpuAssert(cudaDeviceSynchronize(), file, line);
}

__global__ void kernel_test_private(char *output)
{
    int c = blockIdx.x*blockDim.x + threadIdx.x; // absolute col
    int r = blockIdx.y*blockDim.y + threadIdx.y; // absolute row
    char tmp[MEMSIZE];
    for (int i = 0; i < MEMSIZE; i++)
        tmp[i] = 4*r + c; // dummy computation in local mem
    for (int i = 0; i < MEMSIZE; i++)
        output[i] = tmp[i];
}

int main(void)
{
    printf("MEMSIZE=%d bytes.\n", MEMSIZE);

    // allocate memory
    char output[MEMSIZE];
    char *gpuOutput;
    cudaMalloc((void**)&gpuOutput, MEMSIZE);

    // run kernel
    dim3 dimBlock(1, 1);
    dim3 dimGrid(1, 1);
    kernel_test_private<<<dimGrid, dimBlock>>>(gpuOutput);
    gpuCheckKernelExecutionError(__FILE__, __LINE__);

    // transfer data from GPU memory to CPU memory
    cudaMemcpy(output, gpuOutput, MEMSIZE, cudaMemcpyDeviceToHost);

    // release resources
    cudaFree(gpuOutput);
    cudaDeviceReset();
    return 0;
}
And the compilation command line:
nvcc -o cuda_test_private_memory -Xptxas -v -O2 --compiler-options -Wall cuda_test_private_memory.cu
The compilation is ok, and reports:
ptxas info : 0 bytes gmem
ptxas info : Compiling entry function '_Z19kernel_test_privatePc' for 'sm_20'
ptxas info : Function properties for _Z19kernel_test_privatePc
65000 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 21 registers, 40 bytes cmem[0]
I got an "out of memory" error at runtime on the GTX 580 when I reached 65000 bytes per thread. Here is the exact output of the program in the console:
MEMSIZE=65000 bytes.
GPUassert: out of memory cuda_test_private_memory.cu 48
I also did a test with a GTX 770 GPU (on Linux with CUDA 6.5). It ran without error for MEMSIZE=200000, but the "out of memory" error occurred at runtime for MEMSIZE=250000.
How can this behavior be explained? Am I doing something wrong?
It seems you are running not into a local memory limitation but into a stack size limitation:
ptxas info : Function properties for _Z19kernel_test_privatePc
65000 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
The variable that you had intended to be local is on the (GPU thread) stack, in this case.
Based on the information provided by @njuffa here, the available stack size limit is the lesser of:
The maximum local memory size (512KB for cc2.x and higher)
GPU memory/(#of SMs)/(max threads per SM)
Clearly, the first limit is not the issue. I assume you have a "standard" GTX 580, which has 1.5 GB of memory and 16 SMs. A cc2.x device has a maximum of 1536 resident threads per multiprocessor. That gives 1536 MB / 16 / 1536 = 1 MB / 16 = 65536 bytes of stack. Some overhead and other memory usage subtract from the total available memory, so the stack size limit is somewhat below 65536, apparently somewhere between 60000 and 65000 in your case.
I suspect a similar calculation on your GTX770 would yield a similar result, i.e. a maximum stack size between 200000 and 250000.
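A minimal sketch of that calculation with the CUDA runtime API, assuming the second limit (GPU memory / #SMs / max threads per SM) is the binding one; it prints an upper bound before the overhead mentioned above is subtracted:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // spread total device memory across every thread that could be resident at once
    size_t bound = prop.totalGlobalMem
                 / prop.multiProcessorCount
                 / prop.maxThreadsPerMultiProcessor;
    printf("%s: at most ~%zu bytes of stack per thread (before overhead)\n",
           prop.name, bound);
    return 0;
}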

Memory bandwidth measurement with memset,memcpy

I am trying to understand the performance of memory operations with memcpy/memset. I measure the time needed for a loop containing memset and memcpy; see the attached code (it is in C++11, but in plain C the picture is the same). It is understandable that memset is faster than memcpy. But this is more or less the only thing I understand... The biggest question is:
Why is there such a strong dependence on the number of loop iterations?
The application is single-threaded! And the CPU is an AMD FX(tm)-4100 Quad-Core Processor.
And here are some numbers:
memset: iters=1 0.0625 GB in 0.1269 s : 0.4927 GB per second
memcpy: iters=1 0.0625 GB in 0.1287 s : 0.4857 GB per second
memset: iters=4 0.25 GB in 0.151 s : 1.656 GB per second
memcpy: iters=4 0.25 GB in 0.1678 s : 1.49 GB per second
memset: iters=16 1 GB in 0.2406 s : 4.156 GB per second
memcpy: iters=16 1 GB in 0.3184 s : 3.14 GB per second
memset: iters=128 8 GB in 1.074 s : 7.447 GB per second
memcpy: iters=128 8 GB in 1.737 s : 4.606 GB per second
The code:
/*
-- Compilation and run:
g++ -O3 -std=c++11 -o mem-speed mem-speed.cc && ./mem-speed
-- Output example:
*/
#include <cstdio>
#include <cstdint>
#include <cstdlib>
#include <chrono>
#include <memory>
#include <string.h>
using namespace std;

const uint64_t _KB=1024, _MB=_KB*_KB, _GB=_KB*_KB*_KB;

std::pair<double,char> measure_memory_speed(uint64_t buf_size, int n_iters)
{
    // without returning something from the buffers, the compiler may optimize the memset() and memcpy() calls away
    char retval = 0;
    unique_ptr<char[]> buf1(new char[buf_size]), buf2(new char[buf_size]);

    auto time_start = chrono::high_resolution_clock::now();
    for( int i=0; i<n_iters; i++ )
    {
        memset(buf1.get(), 123, buf_size);
        retval += buf1[0];
    }
    auto t1 = chrono::duration_cast<std::chrono::nanoseconds>(chrono::high_resolution_clock::now() - time_start);

    time_start = chrono::high_resolution_clock::now();
    for( int i=0; i<n_iters; i++ )
    {
        memcpy(buf2.get(), buf1.get(), buf_size);
        retval += buf2[0];
    }
    auto t2 = chrono::duration_cast<std::chrono::nanoseconds>(chrono::high_resolution_clock::now() - time_start);

    printf("memset: iters=%d %g GB in %8.4g s : %8.4g GB per second\n",
           n_iters, n_iters*buf_size/double(_GB), (double)t1.count()/1e9, n_iters*buf_size/double(_GB) / (t1.count()/1e9));
    printf("memcpy: iters=%d %g GB in %8.4g s : %8.4g GB per second\n",
           n_iters, n_iters*buf_size/double(_GB), (double)t2.count()/1e9, n_iters*buf_size/double(_GB) / (t2.count()/1e9));
    printf("\n");

    // divide in floating point: n_iters*buf_size/_GB would truncate to an integer
    double avr = n_iters*buf_size/double(_GB) * (1e9/t1.count() + 1e9/t2.count()) / 2;
    retval += buf1[0] + buf2[0];
    return std::pair<double,char>(avr, retval);
}

int main(int argc, const char **argv)
{
    uint64_t n = 64;
    if( argc == 2 )
        n = atoi(argv[1]);
    for( int i = 0; i <= 10; i++ )
        measure_memory_speed(n*_MB, 1 << i);
    return 0;
}
Surely this is just down to the instruction cache loading, so the code runs faster after the 1st iteration, and the data cache speeding up access to the memcpy/memset data on further iterations. The cache memory is inside the processor, so it doesn't have to fetch from or write to the slower external memory as often, and so it runs faster.
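Along those lines, one way to test the warm-up hypothesis is to touch both buffers once before the timed region, so the first timed iteration no longer pays any one-time costs; a minimal sketch (the warm-up pass is my addition, not part of the original benchmark):
#include <cstring>
#include <memory>

int main()
{
    const size_t buf_size = 64 * 1024 * 1024;
    std::unique_ptr<char[]> buf1(new char[buf_size]), buf2(new char[buf_size]);

    // warm-up: touch every byte of both buffers once, outside any timed region,
    // so one-time costs (cache fills, first-touch page faults) are excluded
    memset(buf1.get(), 0, buf_size);
    memset(buf2.get(), 0, buf_size);

    // ... the timed memset/memcpy loops from the benchmark would go here ...
    return 0;
}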

PGMidi changing pitch sendBytes example

I've been trying for two days to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb};
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events with a MIDI tool, I see a minimum of -8192 and a maximum of +8064. I want to get -8192 and +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
    ''' return a 2-tuple containing (msb, lsb) '''
    if (value < -8192) or (value > 8191):
        raise ValueError
    value += 8192
    return (((value >> 7) & 0x7F), (value & 0x7F))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as @bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.
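As a quick sanity check of the offset-and-split scheme, here is a small C++ sketch (the helper name encodePitchBend is made up for illustration) that prints the two data bytes at the endpoints and center of the range:
#include <cstdio>

// split an offset pitch bend value into two 7-bit MIDI data bytes
void encodePitchBend(int pitch, unsigned char &msb, unsigned char &lsb)
{
    int v = pitch + 8192;  // shift -8192..8191 into 0..16383
    msb = (v >> 7) & 0x7F; // upper 7 bits
    lsb = v & 0x7F;        // lower 7 bits
}

int main()
{
    const int tests[] = { -8192, 0, 8191 };
    for (int t : tests) {
        unsigned char msb, lsb;
        encodePitchBend(t, msb, lsb);
        // expected: -8192 -> 0x00/0x00, 0 -> 0x40/0x00, 8191 -> 0x7F/0x7F
        printf("%6d -> msb=0x%02X lsb=0x%02X\n", t, msb, lsb);
    }
    return 0;
}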

How can I use a page table to convert a virtual address into a physical one?

Let's say I have a normal page table:
Page Table (page size = 4K)
Page #:       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Page Frame #: 3  x  1  x  0  x  2  x  5  x  7  4  6  x  x  x
How can I convert an arbitrary logical address like 51996 into a physical memory address?
If I take log base 2 of 4096, I get 12. I think this is how many bits I'm supposed to use for the offset of my address.
I'm just not sure. 51996 / 4096 = 12.69, so does this mean it lies on page #12 with a certain offset?
How do I then turn that into the physical address of 51996?
To determine the page of a given memory address, take the first P bits of the N-bit address.
P = lg2(numberOfPages)
In your example, P=lg2(16)=4
So the first 4 bits of a given memory address will tell us the page. That means the rest should be the offset from the start of that page.
Your example address, 51996, is 1100101100011100 in binary. I.e. [1100:101100011100].
1100 (12 in decimal) is the page number
101100011100 (2844 in decimal) is the offset
Now we need to find where page 12 is in memory.
Looking at your frame table, page 12 is resident in frame 6. In a system where all memory is pageable (i.e. no memory-mapped IO), frame 6 starts at physical address pageSize*frameNum.
In this case, 4096 * 6 = 24576. (Memory is 0-indexed, so frame 6 spans addresses 24576 through 28671.)
Now all we have to do is add the offset and we have the physical memory address:
24576 + 2844 = 27420 = 0x6B1C
Done!
Hope this was helpful.
Edit:
Thanks to Jel for catching my mistake :)
Thanks to user8 for catching my other mistake (frameNum instead of pageNum).
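A minimal sketch of this translation in code, hard-coding the page table from the question (non-resident entries marked as -1):
#include <cstdio>

int main()
{
    const int pageSize = 4096; // 4K pages -> 12 offset bits
    // page table from the question; -1 marks a page that is not in memory
    const int frameOf[16] = { 3, -1, 1, -1, 0, -1, 2, -1,
                              5, -1, 7,  4, 6, -1, -1, -1 };
    int virtualAddr = 51996;
    int page   = virtualAddr / pageSize; // 12
    int offset = virtualAddr % pageSize; // 2844
    if (frameOf[page] < 0) {
        printf("page %d is not resident (page fault)\n", page);
        return 1;
    }
    int physicalAddr = frameOf[page] * pageSize + offset;
    printf("virtual %d -> page %d, offset %d -> physical %d (0x%X)\n",
           virtualAddr, page, offset, physicalAddr, physicalAddr);
    return 0;
}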
If I understand your question correctly (I probably don't), you want to know how to find the physical address from the virtual address using the page table structures. In that case, pretend you are the processor. Use the 10 most significant bits of the address to find the page table in the page directory (the top-level page table). The next 10 bits are the index into the page table (the lower-level page table). Use the address in that page table entry to find the physical page address. The last twelve bits are the byte offset into the page.
By the way, you would probably find a lot more people who would understand this type of question at an OS oriented site such as OSDev. I couldn't really go into much detail in this answer because I haven't done this type of stuff in years.
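For the two-level scheme described above, here is a minimal sketch of the 10/10/12 bit split (the address value is a made-up example):
#include <cstdio>
#include <cstdint>

int main()
{
    uint32_t va = 0x12345678;                 // hypothetical 32-bit virtual address
    uint32_t dirIndex   = (va >> 22) & 0x3FF; // top 10 bits: page-directory index
    uint32_t tableIndex = (va >> 12) & 0x3FF; // next 10 bits: page-table index
    uint32_t offset     =  va        & 0xFFF; // low 12 bits: offset within the 4K page
    printf("dir=%u table=%u offset=0x%03X\n", dirIndex, tableIndex, offset);
    return 0;
}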
This might help:
import java.util.Arrays;
import java.util.Scanner;

public class Run {

    private static Scanner input = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("////////// COMMANDS //////////");
        System.out.println("Snapshot: S(enter)r, S(enter)m, S(enter)x, S(enter)p, S(enter)d, S(enter)c");
        System.out.println("time: t");
        System.out.println("Terminate: T#");
        System.out.println("Kill: K#");
        System.out.println("Start process: A");
        System.out.println("Quit program: quit");
        System.out.println("Deletes device: delete");
        System.out.println("//////////////////////////////");

        OS myComputer;
        int hdd, cdd, printer, cpu, mem, page;
        hdd = cdd = printer = cpu = 20;
        mem = 1;
        page = 1;

        System.out.println("");
        System.out.println("|||| SYS GEN ||||");
        System.out.println("");
        mem = 0;

        System.out.println("Number of Hard-Drives:");
        while (hdd > 10) {
            String inpt = input.next();
            while (Asset.isInt(inpt) == false)
                inpt = input.next();
            hdd = Integer.parseInt(inpt);
            if (hdd > 10)
                System.out.println("Try something smaller (less than 10)");
        }

        System.out.println("Number of CD-Drives:");
        while (cdd > 10) {
            String inpt = input.next();
            while (Asset.isInt(inpt) == false)
                inpt = input.next();
            cdd = Integer.parseInt(inpt);
            if (cdd > 10)
                System.out.println("Try something smaller (less than 10)");
        }

        System.out.println("Number of Printers:");
        while (printer > 10) {
            String inpt = input.next();
            while (Asset.isInt(inpt) == false)
                inpt = input.next();
            printer = Integer.parseInt(inpt);
            if (printer > 10)
                System.out.println("Try something smaller (less than 10)");
        }

        System.out.println("Amount of Memory:");
        while (mem <= 0) {
            String inpt = input.next();
            if (Asset.isInt(inpt))
                mem = Integer.parseInt(inpt);
            if (mem <= 0)
                System.out.println("The memory size must be greater than zero.");
            else
                break;
        }

        Integer[] factors = Asset.factors(mem);
        System.out.println("Enter a page size: " + Arrays.toString(factors));
        while (true) {
            String inpt = input.next();
            while (Asset.isInt(inpt) == false)
                inpt = input.next();
            if (Asset.inArray(factors, Integer.parseInt(inpt))) {
                page = Integer.parseInt(inpt);
                break;
            } else {
                System.out.println("Page size must be one of these -> " + Arrays.toString(factors));
            }
        }

        System.out.println("Number of CPUs (max 10):");
        while (cpu > 10) {
            String inpt = input.next();
            while (Asset.isInt(inpt) == false)
                inpt = input.next();
            cpu = Integer.parseInt(inpt);
            if (cpu > 10)
                System.out.println("Try something smaller (less than 10)");
        }

        myComputer = new OS(cpu, hdd, cdd, printer, mem, page);
        myComputer.Running();
    }
}
First step: 51996 / 4096 = 12 -> p (page number), remainder 2844 -> d (offset).
Now look at the table: p(12) = 6.
Second step: (6 * 4096) + 2844 = 27420.
The physical address is 27420.
The following page table is for a system with 16-bit virtual and physical addresses and with 4,096-byte pages. The reference bit is set to 1 when the page has been referenced. Periodically, a thread zeroes out all values of the reference bit. A dash for a page frame indicates the page is not in memory. The page-replacement algorithm is localized LRU, and all numbers are provided in decimal.
Page   Page Frame   Reference Bit
0      9            0
1      1            0
2      14           0
3      10           0
4      -            0
5      13           0
6      8            0
7      15           0
8      0            0
9      -            0
10     5            0
11     4            0
12     -            0
13     3            0
14     -            0
15     2            0
a. Convert the following virtual addresses (in hexadecimal) to the equivalent physical addresses (provide answers in hexadecimal AND decimal). Also set the reference bit for the appropriate entry in the page table. (3)
i. 0xBC2C
ii. 0x00ED
iii. 0xEA14
iv. 0x6901
v. 0x23A1
vi. 0xA999
