Odd Metal shader profiler results with atomic functions - iOS

I've implemented a minimal test compute shader in order to get a feel for the performance of Metal atomic functions, specifically atomic_fetch_add_explicit and atomic_compare_exchange_weak_explicit using atomic_uint in the device address space.
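For reference, here is a minimal sketch of the kind of test kernel described (the function and buffer names are assumptions, not my original code); Metal only supports relaxed ordering for these atomics:

#include <metal_stdlib>
using namespace metal;

// Hypothetical sketch: each thread performs one relaxed atomic add on a single
// counter in device memory.
kernel void atomic_add_test(device atomic_uint *counter [[buffer(0)]])
{
    atomic_fetch_add_explicit(counter, 1u, memory_order_relaxed);
}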
Running a compute kernel in which each thread calls atomic_fetch_add_explicit once, dispatched over 307,200 threads, takes around 50 µs on an iPhone 12 Pro. Running atomic_fetch_add_explicit 10 times per thread instead takes about 650 µs, which makes sense. However, I'm confused by the Xcode shader profiler's line-by-line performance metrics:
In the profiler's colour coding, blue, red and yellow mean arithmetic, synchronisation and control flow, respectively. According to Apple's documentation, the reported result means the atomic function calls each take zero percent of the function's total elapsed time:
The statistics for lines in the function body indicate the time as a percent of the function's total elapsed time.
However, that's clearly not the case, as the elapsed time grows roughly ten-fold when I go from one to ten calls per thread.
Is this an Xcode bug, or am I missing something here? Answers appreciated, answers with sources doubly appreciated. :)

Related

OpenCL kernel out of resources based on number of loop iterations INSIDE the kernel. Can the compiled kernel be too large to fit on the GPU?

TL;DR I have an OpenCL kernel that loops for a large number of iterations and calls several user-made functions. The kernel works fine for a few iterations, but increasing the number of iterations causes a CL_OUT_OF_RESOURCES (-5) error. If the same kernel is executed on a better GPU, it is able to loop for more iterations without the error. What can be causing this error based on the number of iterations? Is it possible that the loops are being unrolled and generating code larger than the GPU can hold?
I am developing an OpenCL kernel to run on a GPU that computes a very complex function. To keep things organized, I have a "kernel.cl" file with the main kernel (the __kernel void function) and an "aux_functions.cl" file with ~20 auxiliary functions (they are of type int, int2, int16, but not __kernel) that are called several times by the kernel and by one another.
The problem specification is roughly as follows (which justifies the many loops):
I have two arrays representing full HD images (1920x1080 integers)
For each 128x128 patch of one image, I must find the value of 4 parameters that optimize a given function (the second image is used to evaluate how good it is)
For the same 128x128 patch and the same 4 parameters, each 4x4 sub-patch is transformed slightly differently based on its position inside the larger 128x128 patch
And I tried to model the execution as follows:
Each workgroup will compute the kernel for one 128x128 patch (I started by processing only the first 10 patches -- 10 workgroups)
Each workgroup is composed of 256 workitems
Each workitem will test a distinct set of values (a fraction of a predefined set) for the 4 parameters based on its ID
The main structure of the kernel is as follows:
__kernel void funct(__global int *referenceFrameSamples, __global int *currentFrameSamples, const int frameWidth, const int frameHeight, __global int *result){
    // Initialize some variables and get global and local IDs
    for(executed X times){ // These 4 outer loops are used to test different combinations of parameters to be applied in a given function in the inner loops
        for(executed Y times){
            for(executed Z times){
                for(executed W times){
                    // Simple assignments based on the outer for loops
                    for(executed 32x){ // Each execution of the inner loop applies a function to a 4x4 patch
                        for(executed 32x){
                            // The relevant computation is performed here
                            // Calls a couple of lightweight functions using only the private variables
                            // Calls a complex function that uses the __global int *referenceFrameSamples variable
                        }
                    }
                    // Compute something and use select() to update the best value
                }
            }
        }
    }
    // Write the best value to __global *results buffer
}
The problem is that when the 4 outer loops are repeated only a few times the kernel runs fine, but if I increase the iterations the kernel crashes with the error ERROR! clWaitForEvents returned CL_OUT_OF_RESOURCES (-5). I am testing it on a notebook with a GeForce 940MX GPU with 2 GB, and the kernel starts crashing when X * Y * Z * W = 1024.
The clEnqueueNDRangeKernel() call itself returns no error; only the clWaitForEvents() called after it does. I am using CLIntercept to profile the errors and the running time of the kernel. Also, when the kernel runs smoothly I can measure the execution time correctly (shown next), but when it crashes, the "measured" execution time is ridiculously wrong (billions of milliseconds) even though it crashes within the first seconds.
cl_ulong time_start;
cl_ulong time_end;
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, sizeof(time_end), &time_end, NULL);
double nanoSeconds = (double)(time_end - time_start);   // event timestamps are in nanoseconds
printf("OpenCL execution time is: %0.3f milliseconds \n", nanoSeconds / 1000000.0);
What I tested:
Improving the complex auxiliary function that used the __global variable: instead of passing the __global pointer, I read the relevant part of the array into a private array and passed that as an argument. Outcome: improved running time in the successful cases, but it still fails in the same case.
Reducing workgroups and workitems: even using 1 workgroup and 1 workitem (the absolute minimum) with the same number of iterations yields the same error. For a smaller number of iterations, the running time decreases with fewer workitems/workgroups.
Running the same kernel on a better GPU: after the previous 2 modifications (improved function and reduced workitems) I launched the kernel on a desktop equipped with a Titan V GPU with 12 GB. It is able to run the kernel with a larger number of iterations (I tried up to 1 million) without giving CL_OUT_OF_RESOURCES, and the running time seems to increase linearly with the iterations. Although this is the computer that will actually run the kernel over a dataset to solve my problem, it is a server that must be accessed remotely. I would prefer to do the development on my notebook and deploy the final code on the server.
My guess: I know that function calls are inlined on the GPU. Since the program is crashing based on the number of iterations, my only guess is that these for loops are being unrolled, and with the inlined functions, the compiled kernel is too big to fit on the GPU (even with a single workitem). This would also explain why a better GPU allows increasing the number of iterations.
Question: What could be causing this CL_OUT_OF_RESOURCES error based on the number of iterations?
Of course I could reduce the number of iterations in each workitem, but then I would need multiple workgroups to process the same set of data (the same 128x128 patch) and would need to access global memory to select the best result between workgroups of the same patch. I may end up proceeding in this direction, but I would really like to know what is happening with the kernel at this moment.
Update after #doqtor comment:
Building the program with -cl-nv-verbose reports the following resource usage. Strangely, these values do not change with the number of iterations, whether the program runs successfully or crashes.
ptxas info : 0 bytes gmem, 532 bytes cmem[3]
ptxas info : Compiling entry function 'naive_affine_2CPs_CU_128x128_SR_64x64' for 'sm_50'
ptxas info : Function properties for naive_affine_2CPs_CU_128x128_SR_64x64
ptxas . 66032 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 140 registers, 360 bytes cmem[0]
Running clinfo reports that my GPU has
Registers per block (NV) 65536
Global memory size 2101870592 (1.958GiB)
Max memory allocation 525467648 (501.1MiB)
Local memory size 49152 (48KiB)
It seems that I am not using too many registers, but I don't know how those stack frame, cmem[0] and cmem[3] relate to the memory information reported by clinfo.
Is it possible that the loops are being unrolled, generating code larger than the GPU can hold?
Yes, that is part of the problem. The compiler sees that you have loops with fixed, small ranges, and it automatically unrolls them. With six nested loops the assembly blows up, which leads to register spilling into global memory and makes the application very slow.
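If unrolling is indeed the culprit, one hedged mitigation is to ask the compiler not to unroll the outer loops. "#pragma unroll 1" is an NVIDIA-specific hint (assumed to be honoured by the NVIDIA OpenCL compiler), not something the OpenCL standard guarantees:

// Sketch: keep the outer loops rolled (vendor-specific hint, treat as an assumption)
#pragma unroll 1
for (int x = 0; x < X; x++) {
    #pragma unroll 1
    for (int y = 0; y < Y; y++) {
        // ... remaining loops and "the relevant computation" as before ...
    }
}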
However, even if the compiler does not unroll the loops, every thread performs X*Y*Z*W*32*32 iterations of "the relevant computation", which takes an eternity. The system thinks the GPU has frozen and you get CL_OUT_OF_RESOURCES.
Can you really not parallelize any of these six nested loops? The best solution would be to parallelize them all, that is, include them in the kernel range and launch a few hundred million threads that each do "the relevant computation" without any loops, as sketched below. You should have as many independent threads/workgroups as possible to get the best performance (saturate the GPU).
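As a hedged illustration of that approach, the four parameter loops could be folded into the ND-range as follows. The *_COUNT values and kernel name are hypothetical (e.g. passed via -D options to clBuildProgram), and the best value would be selected in a second reduction pass:

#ifndef X_COUNT                               // assumed build-time constants, e.g. -DX_COUNT=...
#define X_COUNT 4
#define Y_COUNT 4
#define Z_COUNT 8
#define W_COUNT 8
#endif

__kernel void funct_flat(__global const int *referenceFrameSamples,
                         __global const int *currentFrameSamples,
                         const int frameWidth, const int frameHeight,
                         __global int *partialResults)
{
    const int gid = (int)get_global_id(0);    // one (x, y, z, w) combination per work-item

    const int w =  gid % W_COUNT;
    const int z = (gid / W_COUNT) % Z_COUNT;
    const int y = (gid / (W_COUNT * Z_COUNT)) % Y_COUNT;
    const int x =  gid / (W_COUNT * Z_COUNT * Y_COUNT);

    int cost = 0;
    for (int sub = 0; sub < 32 * 32; ++sub) {
        // "The relevant computation" on each 4x4 sub-patch goes here,
        // using x, y, z, w and the two frame buffers.
        cost += x + y + z + w;                // placeholder
    }

    partialResults[gid] = cost;               // best value picked in a later reduction
}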
Remember, your GPU has thousands of cores grouped into warps of 32 and SMs of 2 or 4 warps; if you only launch a single workgroup, it will run on only a single SM with 64 or 128 cores while the remaining cores stay idle.

There can be at most 65535 Thread Groups in each dimension of a Dispatch call

I have a DirectCompute application performing computations on images (like computing the average pixel value, applying a filter, and much more). For some computations, I simply treat the image as an array of integers and dispatch a compute shader like this:
FImmediateContext.Dispatch(PixelCount, 1, 1);
The result is exactly the expected value, so the computation is correct. Nevertheless, at run time, I see the following message in the debug log:
D3D11 ERROR: ID3D11DeviceContext::Dispatch: There can be at most 65535 Thread Groups in each dimension of a Dispatch call. One of the following is too high: ThreadGroupCountX (3762013), ThreadGroupCountY (1), ThreadGroupCountZ (1) [ EXECUTION ERROR #2097390: DEVICE_DISPATCH_THREADGROUPCOUNT_OVERFLOW]
This error is shown only in the debug log; everything else is correct, including the computation result. This makes me think that the GPU somehow manages the very large thread-group count, probably by breaking it into smaller groups executed sequentially.
My question is: should I care about this error or is it OK to keep it and letting the GPU do the work for me?
Thx.
If you only care about it working on your particular piece of hardware and driver, then it's fine. If you care about it working on all Direct3D Feature Level 11.0 cards, then it's not fine as there's no guarantee it will work on any other driver or device.
See Microsoft Docs for details on the limits for DirectCompute.
If you care about robust behavior, it's important to test DirectCompute applications across a selection of cards & drivers. The same is true of basically any use of DirectX 12. Much of the correctness behavior is left up to the application code.
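For reference, a hedged host-side sketch (C++; names are hypothetical) of the usual workaround: fold the linear group count into two dispatch dimensions so neither exceeds the 65535 limit, and rebuild the linear index inside the shader:

#include <d3d11.h>

// Hypothetical sketch: dispatch 'totalGroups' thread groups without exceeding
// the per-dimension limit by splitting the count across X and Y.
void DispatchLinear(ID3D11DeviceContext* context, UINT totalGroups)
{
    const UINT maxDim = D3D11_CS_DISPATCH_MAX_THREAD_GROUPS_PER_DIMENSION; // 65535
    UINT groupsX = (totalGroups < maxDim) ? totalGroups : maxDim;
    UINT groupsY = (totalGroups + groupsX - 1) / groupsX;                  // ceiling division
    context->Dispatch(groupsX, groupsY, 1);
    // The shader would rebuild a linear index as groupId.y * groupsX + groupId.x
    // (with groupsX passed in a constant buffer) and skip indices >= totalGroups.
}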

(When) Does CACurrentMediaTime/mach_system_time wrap around on iOS?

To get accurate time measurements on iOS, mach_absolute_time() should be used. Or CACurrentMediaTime(), which is based on mach_absolute_time(). This is documented in this Apple Q&A, and also explained in several StackOverflow answers (e.g. https://stackoverflow.com/a/17986909, https://stackoverflow.com/a/30363702).
When does the value returned by mach_absolute_time() wrap around? When does the value returned by CACurrentMediaTime() wrap around? Does this happen in any realistic timespan? The return value of mach_absolute_time() is of type uint64, but I'm unsure about how this maps to a real timespan.
The document you reference notes that mach_absolute_time is CPU dependent, so we can't say how much time must elapse before it wraps. On the simulator, mach_absolute_time is in nanoseconds, so if it wraps at UInt64.max, that translates to 585 years. On my iPhone 7+, it's 24,000,000 mach_absolute_time ticks per second, which translates to roughly 24 thousand years. Bottom line, the theoretical maximum amount of time captured by mach_absolute_time will vary with the CPU, but you won't ever encounter this in any practical application.
For what it's worth, consistent with those various posts you found, the CFAbsoluteTimeGetCurrent documentation warns that:
Repeated calls to this function do not guarantee monotonically increasing results. The system time may decrease due to synchronization with external time references or due to an explicit user change of the clock.
So, you definitely don't want to use NSDate/Date or CFAbsoluteTimeGetCurrent if you want accurate elapsed times. Neither ensures monotonically increasing values.
In short, when I need that sort of behavior, I generally use CACurrentMediaTime, because it enjoys the benefits of mach_absolute_time but converts the result to seconds for me, which makes it very simple to use. And neither it nor mach_absolute_time is going to wrap around in any realistic time period.
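For completeness, a minimal C sketch of measuring an interval with mach_absolute_time, converting ticks to nanoseconds via the CPU-dependent timebase:

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);                 // numer/denom convert ticks to nanoseconds

    uint64_t start = mach_absolute_time();
    // ... work to be measured ...
    uint64_t end = mach_absolute_time();

    uint64_t elapsedNs = (end - start) * timebase.numer / timebase.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)elapsedNs);
    return 0;
}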

CUDA 128 bytes read in a single instruction

I am new to CUDA and am currently optimizing an existing molecular dynamics application. It takes an array of double4 with coordinates and computes forces based on the neighbor list. I wrote a kernel with the following lines:
double4 mPos = d_arr_xyz[gid];
while (-1 != (id = d_neib_list[gid*MAX_NEIGHBORS + i])) {
    Calc(gid, mPos, AA, d_arr_xyz, id);
    i++;
}
Calc then takes d_arr_xyz[id] and calculates the force. That gives 1 read of double4 plus 65 reads of (int + double4) across the calls to Calc (65 is the average number of neighbors, i.e. entries not equal to -1, in d_neib_list for each particle).
Is it possible to reduce those reads? Neighbor lists for different particles, i.e. d_arr_xyz[gid] and d_arr_xyz[id], do not correlate, so I cannot use shared memory for the block of threads to cache d_arr_xyz.
What I see is that if the whole list of int*MAX_NEIGHBORS could somehow be loaded into shared memory in one or a few large transactions, that would remove the 65 separate reads of int.
So the question is: is it possible to arrange things so that those 65 reads of int are translated into several large transactions? I read in the documentation that reads can be as large as 128 bytes. What exactly should I write so that the assembler emits one large load?
Update:
Thank you for your replies. Following the answer from user talonmies below, I changed the code to swap the x and y dimensions of the neighbor matrix. Now consecutive threads load consecutive ints, which I guess may result in 128-byte reads. The program runs 8% faster.
All memory transactions are issued (where possible) on a per warp basis. So the 128 byte transaction you are asking about is when all 32 threads in a warp issue a memory load instruction which can be serviced in a single "coalesced" transaction. A single thread can't issue large memory transactions, only a warp of 32 threads can, and only when the memory coalescing requirements of whichever architecture you run the code on can be satisfied.
I couldn't really follow your description of what your code is actually doing, but from first principles alone, the answer would appear to be no.
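For completeness, here is a hedged CUDA sketch of the transposed layout mentioned in the update above, which is what enables the per-warp coalescing described here (names and the neighbor count are illustrative, not the original code):

#include <cuda_runtime.h>

#define MAX_NEIGHBORS 128   // illustrative value standing in for the question's constant

// With the [i * numParticles + gid] indexing, the 32 threads of a warp read 32
// consecutive ints, which the hardware can service as one 128-byte coalesced
// transaction.
__global__ void countNeighbors(const double4* d_arr_xyz,
                               const int* d_neib_list,   // transposed: [neighbor][particle]
                               int* d_counts,
                               int numParticles)
{
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid >= numParticles) return;

    double4 mPos = d_arr_xyz[gid];                     // one double4 read per thread, as before
    (void)mPos;                                        // mPos would be consumed by the real Calc
    int n = 0;
    for (int i = 0; i < MAX_NEIGHBORS; ++i) {
        int id = d_neib_list[i * numParticles + gid];  // coalesced across the warp
        if (id == -1) break;
        // Calc(gid, mPos, AA, d_arr_xyz, id);         // the real force computation from the question
        ++n;
    }
    d_counts[gid] = n;                                 // placeholder output
}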

Profiling with an external counter or best alternative

I'm a non-programmer trying to assess the time spent in (OpenCV) functions. We have an A/D converter which comes with a counter that can count external signals (e.g. from a function generator) at a frequency of 1 MHz, i.e. 1 µs resolution. The current counter value can be queried with a function cbIn32(..., unsigned long *pointertovalue).
So my idea was to query the counter value before and after calling the function of interest and then calculate the difference. However, doubts came up when I calculated the difference without a function call in between, which revealed relatively high fluctuations (values between 80 and 400 µs or so). I wondered whether calculating the average time for calling cbIn32() (approx. 180 µs) and subtracting this from the putative time spent in the function of interest is a valid solution.
So my first two questions:
Is that approach generally feasible or useless?
Where do the fluctuations come from?
Alternatively, we tried using getTickCount(), which seemed to deliver reasonable values. But checking forums revealed that it has a low resolution of about 10 ms, which would be unsatisfactory (100 µs resolution would be appreciated). However, the values we got were in the sub-ms range (a sketch of such a measurement follows after the questions below).
This brings me to the next questions:
How can the time assessed for a function with getTickCount() be in the microseconds range, when the resolution is around 10 ms?
Should I trust the obtained values or not?
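For context, here is a minimal sketch (illustrative only, assuming the sub-millisecond values were obtained roughly this way) of how elapsed time is usually derived from OpenCV's tick counter:

#include <opencv2/core.hpp>
#include <cstdint>
#include <cstdio>

int main() {
    int64_t start = cv::getTickCount();
    // ... function of interest (the OpenCV calls being measured) ...
    int64_t end = cv::getTickCount();

    double seconds = static_cast<double>(end - start) / cv::getTickFrequency(); // ticks -> seconds
    std::printf("elapsed: %.1f microseconds\n", seconds * 1e6);
    return 0;
}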
I also tried gprof, but it gave me "no time accumulated", although I am sure that the time spent in a function containing OpenCV-related calls is at least a few milliseconds. I even tried rebuilding OpenCV with ENABLE_PROFILING=ON, but got the same result. I read somewhere that you need to build static OpenCV libraries to enable profiling, but I am not sure whether this would improve the situation. So the question here is:
What do I have to do so that gprof also "sees" opencv functions?
The next alternative would be the QueryPerformanceCounter() function of the WinAPI. I don't know how to use it yet, but I would fight my way through if you recommend it (a short sketch follows after these questions). Questions about that approach:
Will it be problematic because of multiple cores?
If yes, is there an "easy" way to handle that problem?
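A minimal Win32 sketch of timing with QueryPerformanceCounter (variable names are illustrative); on most modern Windows systems the counter is kept consistent across cores, so thread migration is usually not an issue, but treat that as an assumption to verify on your machine:

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);    // ticks per second, fixed at boot

    QueryPerformanceCounter(&start);
    // ... function of interest ...
    QueryPerformanceCounter(&end);

    double micros = (double)(end.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("elapsed: %.1f microseconds\n", micros);
    return 0;
}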
I also tried it with verysleepy, but it somehow exits too early (it worked fine with other .exe files).
Newbie-friendly answers would be very, very much appreciated. My goal is to find the easiest approach with the highest precision. I'm working on Win7 64-bit, Eclipse with MinGW.
Thx for your help...
