OpenCL - Block processing in image, global & local worksizes - image-processing

I am trying to optimize a block matching algorithm for motion estimation in OpenCL. Basically, the image size is 384x288; supposing the image is divided into non-overlapping macroblocks of size 16x16, a total of 24x18 macroblocks results.
At each macroblock location, the motion between two consecutive frames has to be estimated (this involves searching a nearby region for the sum of absolute differences in gray-level pixel intensity, using 16x16 blocks). Am I correct in setting the global sizes to 24 and 18 respectively while launching the kernel?
My understanding is that when the OpenCL kernel launches, the location of the macroblock on the original image can be worked out as {get_local_size(0) x 16 - 1, get_local_size(1) x 16 - 1}. Is this correct? Also what would be the optimal value for local work group size for this use case?
Thank you

am I correct in setting the global sizes to 24 and 18 respectively
while launching the kernel
If each thread computes a whole macroblock, then yes, you are right about the global size, but the local size should then be something small that divides it, such as 1x1 or 3x2. If instead a single thread computes a single pixel, then no: the global size parameter is the total number of threads, so it should be 384x288 if you calculate one pixel per thread.
The number of groups (macroblocks) follows from the global size divided by the local size.
If there are 16 threads per group and 32 threads in total, there would be only 2 groups of threads. The same holds for 2D and 3D kernel executions.
The location of the macroblock on the original image can be worked out as
x=get_group_id(0) * get_local_size(0)
y=get_group_id(1) * get_local_size(1)
IDs start from zero, and (x, y) points to the upper-left corner of the patch. The lower-right corner would then be
xLast=get_group_id(0) * get_local_size(0)+get_local_size(0)
yLast=get_group_id(1) * get_local_size(1)+get_local_size(1)
Of course, the origin is assumed to be (0, 0) at the top-left corner. Note that xLast and yLast as written are exclusive bounds, i.e. one past the last pixel of the patch.
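To make the index arithmetic concrete, here is a minimal OpenCL C sketch. It assumes (my assumption, not something stated in the question) one 16x16 work-group per macroblock, i.e. a global size of 384x288 with a local size of 16x16; the kernel name and the blockOrigin buffer are placeholders, and the SAD search itself is omitted.

__kernel void macroblock_origin(__global int2 *blockOrigin)
{
    /* upper-left corner of this work-group's 16x16 macroblock */
    int x0 = (int)(get_group_id(0) * get_local_size(0));
    int y0 = (int)(get_group_id(1) * get_local_size(1));

    /* one work-item per group records the block origin, for illustration */
    if (get_local_id(0) == 0 && get_local_id(1) == 0) {
        int block = (int)(get_group_id(1) * get_num_groups(0) + get_group_id(0));
        blockOrigin[block] = (int2)(x0, y0);
    }

    /* each work-item's own pixel is (x0 + get_local_id(0), y0 + get_local_id(1));
       the per-pixel SAD accumulation over the search window would go here */
}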
Also what would be the optimal value for local work group size for
this use case?
If you leave the local size parameter empty (NULL), the OpenCL implementation chooses it itself (a workable size, though not necessarily the best), so the number of groups is not known in advance.
Global size and local size will differ depending on whether you use one thread per pixel, one thread per macroblock, or even more than one thread per pixel. For example, if 2 new frames are to be computed from 5 older frames, 2 threads per pixel could be used. Or you can do all the work for a pixel in a single thread, all 16x16 pixels of a macroblock in a single thread, or everything in a single thread. The choice is yours; you should test (or foresee) whether your algorithm is embarrassingly parallel or essentially serial.
I guess the estimation is something like a 5- (or 11-) point stencil (a 2D time differentiation?), so a single thread would add and multiply values appropriately, apply the result to a pixel, do the same for the other frame's pixel, then for all 16x16 pixels of the macroblock, then for all macroblocks. In that case it should use 1 thread per pixel (re-using the already computed stencil values for both frames, and presumably a single gray channel).
You could start with working code (or rewrite it yourself), then parallelize its nested loops: for example, scan lines (1D kernel), scan pixels (2D kernel), or scan pixels and their sub-pixels (3D?), such that i becomes get_global_id(0) and j becomes get_global_id(1).
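As a host-side illustration of the two strategies above, here is a minimal sketch; the function name launch_block_matching is mine, and queue and kernel are assumed to be created and set up elsewhere.

#include <CL/cl.h>

/* Sketch of the two launch strategies discussed above. */
static void launch_block_matching(cl_command_queue queue, cl_kernel kernel)
{
    /* (a) one work-item per macroblock: global 24x18, local size left to the runtime */
    size_t globalBlocks[2] = {24, 18};
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, globalBlocks, NULL, 0, NULL, NULL);

    /* (b) one work-item per pixel: one 16x16 work-group per 16x16 macroblock */
    size_t globalPixels[2] = {384, 288};
    size_t localPixels[2]  = {16, 16};
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, globalPixels, localPixels, 0, NULL, NULL);
}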

Related

Fourier Transform of an image and associated units

I have the following rather simple problem and unfortunately I am not making any progress.
Imagine a simple 2D image with pixels and a unique value for each pixel of the image.
For example, let the image be 512x512 pixels in size and 10 nm x 10 nm in dimension. Since I want to view the image in frequency space, I calculate the Fourier transform of the image.
Of course, the image still has 512x512 pixels, but what about the units? I would say the physical units are now 1/m, but what happens to the 10 nm?
So what would be the dimensions of my image after the Fourier transform? Would be very grateful for any help.
The first frequency bin (after the DC bin) stands for "1 cycle" across the whole domain, so that means 1 / (10 nm) in your case, i.e. a wavelength of 10 nm. The pattern continues: the next bin stands for 2 cycles, or 2 / (10 nm), i.e. a wavelength of 5 nm.
I'm not sure it makes sense to translate the length units in this process.
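To make the unit bookkeeping concrete, here is a small sketch (mine, using the sizes from the question) that prints the spatial frequency and wavelength of the first few bins along one axis:

#include <stdio.h>

/* Bin n of an N-pixel axis spanning a physical length L corresponds to a
 * spatial frequency of n / L (cycles per metre), i.e. a wavelength of L / n. */
int main(void)
{
    const int    N = 512;      /* pixels along one axis                */
    const double L = 10e-9;    /* physical extent of that axis, metres */

    for (int n = 1; n <= 4; n++) {
        double freq       = n / L;   /* cycles per metre */
        double wavelength = L / n;   /* metres per cycle */
        printf("bin %d: %.3e cycles/m, wavelength %.2f nm\n",
               n, freq, wavelength * 1e9);
    }
    /* bins above N/2 hold the negative frequencies in the usual FFT layout */
    printf("Nyquist (bin %d): %.3e cycles/m\n", N / 2, (N / 2) / L);
    return 0;
}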

What are Apple's Metal (Metal Shader Language) texture coordinates?

In iOS or OS X, what texture coordinates are used in a Metal Shading Language kernel function? For example, given an MTLTexture and uint2 gid [[thread_position_in_grid]], are gid.x and gid.y between 0..1 (x and y are floats) or 0..inTexture.get_width() (x and y are integers)?
Thanks in Advance
thread_position_in_grid is an index (an integer) in the grid that takes values in the ranges you specify in dispatchThreadgroups:threadsPerThreadgroup:. It's up to you to decide how many thread groups you want, and how many threads per group.
In typical sample code you will see that threadsPerGroup.width * numThreadgroups.width == inputImage.width and threadsPerGroup.height * numThreadgroups.height == inputImage.height. In that case, a position in the grid is simply a non-normalized (integer) pixel coordinate.
Each launch of a compute shader in Metal is accompanied by a dense rectangular 3D grid of thread IDs. The dimensions of the grid are set when you call [MTLComputeCommandEncoder dispatchThreadgroups:threadsPerThreadgroup:]. You can, for example, have a threadgroup size of {16,16,1} (256 threads in a threadgroup as a 16x16x1 square) and a threadgroup count of {1,2,1}, which will cause two threadgroups to be launched with a total of 512 threads in the shape {16,32,1}. These are the integers that appear at the top of your kernel as [[thread_position_in_grid]]. The thread position is the way you tell which thread you are, just like the threadID parameter passed to a block by dispatch_apply().
Metal specifies no mapping from [[thread_position_in_grid]] to coordinates in a texture. That mapping is done by you, in software, in your compute shader. If you want to read every other pixel in a region of a texture at some offset in the image, then you need to multiply the threadID by two and add an offset in your kernel before passing the new coordinate to texture2d.sample. Since Metal cannot launch partial threadgroups, it is up to you to make sure that unneeded threads do no harm. For example, when applied to a smaller texture, the full size of your 16x32 launch might cause you to write off the end of your texture. In this case you must check the threadID to see if the thread would write off the end and, if so, either return out of the shader or skip the texture write for that thread.
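To illustrate that arithmetic, here is the same index math written as plain C (not Metal API, purely an illustration; the 15x30 texture size is invented): a {16,16} threadgroup dispatched {1,2} times gives a 16x32 grid of integer thread positions, and each thread guards against positions past the edge of the smaller texture.

#include <stdio.h>

int main(void)
{
    const int groupW = 16, groupH = 16;  /* threadsPerThreadgroup    */
    const int numGX  = 1,  numGY  = 2;   /* threadgroupsPerGrid      */
    const int texW   = 15, texH   = 30;  /* texture being written to */

    int writing = 0, guarded = 0;
    for (int gy = 0; gy < groupH * numGY; gy++) {
        for (int gx = 0; gx < groupW * numGX; gx++) {
            /* (gx, gy) plays the role of [[thread_position_in_grid]] */
            if (gx >= texW || gy >= texH) { guarded++; continue; }
            writing++;   /* here the shader would write texel (gx, gy) */
        }
    }
    printf("threads launched: %d, writing: %d, guarded out: %d\n",
           groupW * numGX * groupH * numGY, writing, guarded);
    return 0;
}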
thread_position_in_grid is always made up of unsigned integers and can be declared with these options, none of which are related to texture coordinates:
16- or 32-bit
1D, 2D, or 3D
It may be helpful to ask another, related question, because you seem to be conflating the idea of textures and kernel functions.

Image 2x downsampling with Lanczos filter

I'm trying to implement image downsampling with Lanczos2.
However, the kernel seems to have zeros everywhere (since sin(pi*x)=0 if x is integer) except at the center pixel.
Thus, if the downsampling factor is an integer number (e.g. the output size is 1/2 of the original size at each dimension), then the Lanczos downsampling yields the exact same result as nearest neighbor interpolation (just taking every other pixel in 2X downsampling).
I believe that this is not intended to be the case, so my question is:
What am I missing?
How do I use the Lanczos2 filter for 2x downsampling, and is the result expected to be different from simply taking every other pixel?
The kernel for 2x downsampling is given in section "Decimation by factor of 2 with the Lanczos2 sinc function" on page 10 of the reference you linked, with the coefficients:
0, -0.032, 0, 0.284, 0.496, 0.284, 0, -0.032, 0
This kernel is obtained by evaluating the given lanczos2(x) function at values x = 0.5n, where n is the (integer) sample number. This reflects the fact that the output rate is half the original sampling rate (and therefore a half-band filter is required before decimation to avoid aliasing).
P.S.: the kernel you obtained, with zeros everywhere except at the center pixel, would typically be used (although implementations would usually optimize it out as a simple pixel copy) in conjunction with a phase-1/2 kernel for interpolation by a factor of 2.
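As a sanity check of that construction, the following small sketch (mine) evaluates lanczos2 at x = n/2 and normalizes the taps to unit DC gain, which reproduces the coefficients listed above:

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Normalized sinc: sin(pi x) / (pi x) */
static double sinc(double x)
{
    return (x == 0.0) ? 1.0 : sin(M_PI * x) / (M_PI * x);
}

/* Lanczos2 kernel: sinc(x) * sinc(x/2) for |x| < 2, else 0 */
static double lanczos2(double x)
{
    return (fabs(x) < 2.0) ? sinc(x) * sinc(x / 2.0) : 0.0;
}

int main(void)
{
    double tap[9], sum = 0.0;
    for (int n = -4; n <= 4; n++) {     /* sample at x = n/2: half-band taps */
        tap[n + 4] = lanczos2(n / 2.0);
        sum += tap[n + 4];
    }
    for (int i = 0; i < 9; i++)         /* normalize so the taps sum to 1 */
        printf("%.3f ", tap[i] / sum);  /* 0.000 -0.032 0.000 0.284 0.496 ... */
    printf("\n");
    return 0;
}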

How to choose the number of bins when creating HSV histogram?

I was reading some documentation about HSV histogram, and in several refs the Saturation channel was quantized into 256 values. Why is that? Is there any reason behind choosing this number?
I have the same questions for the Hue channel, often it is quantized into 180 values.
Disclaimer: Off-hand answers (i.e., not backed up by any documentation):
"256" is a popular number for a bin size because Programmers Like Round Numbers -- it fits in a single byte. And "180" because the HSB circle is "360 [degrees]", but "360" does not fit into a single byte.
For many image formats, the range of RGB values is limited to 0..255 per channel -- 3 bytes in total. To store the same amount of data (ignoring any artifacts of converting to another color model), Saturation and Brightness are often expressed in single bytes as well. The same could be done for Hue, by scaling the original range of 0..359 (as Hue is usually expressed as a value in degrees on the HSB color wheel) into the byte range 0..255. However, probably because it's easier to do calculations with a number close to the original 360° full circle, the range is instead halved to 0..179. That way the value still fits in a single byte (and thus "HSB" uses as much memory as "RGB") and can be converted trivially back to (close to) its original value -- multiply by 2. Obviously, keeping within the storage space wins over fidelity.
Given 256 values for both S and B, and 180 for H, you end up with a color space of 256*256*180 = 11,796,480 colors. To inspect the distribution of colors, you build a histogram: an array from which you can read out the total number of pixels of a certain color or within a certain color range. Using a color range here, instead of actual values, significantly cuts down the memory requirements.
For an RGB color image, with the colors fairly evenly distributed, you could shift down each channel a certain number of bits. This is how a straightforward conversion from 24-bit "true-color" RGB down to 15-bit RGB "high-color" space works: each channel gets divided by 8, reducing 256 values down to 32 (5 bits per channel). Conversion to a 16-bit high-color RGB space works the same; the bit that got left over in the 15-bit conversion is assigned to green. Thus, the range of colors for green is doubled, which is useful since the human eye is more sensitive to shades of green than to the other two primaries.
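For reference, here is a minimal sketch (mine) of that 24-bit to 16-bit "high-color" conversion, with 5 bits for red and blue and the leftover sixth bit given to green:

#include <stdint.h>
#include <stdio.h>

/* RGB888 -> RGB565: red and blue keep 5 bits each, green keeps 6 */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) |   /* 5 bits of red   */
                      ((g >> 2) << 5)  |   /* 6 bits of green */
                       (b >> 3));          /* 5 bits of blue  */
}

int main(void)
{
    printf("0x%04X\n", rgb888_to_rgb565(255, 128, 64));   /* prints 0xFC08 */
    return 0;
}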
It gets more complicated when the colors in the input image are not evenly distributed. A naive solution is to create a [256][256][256] array, initialize it to zero, count the colors of the image into it, and finally sort the counts. There are better alternatives -- let me consult my old Computer Graphics [1] here. Hold on.
13.4 Reproducing Color mentions the names of two different approaches from Heckbert (Color Image Quantization for Frame Buffer Display, SIGGRAPH 82): the popularity and the median-cut algorithms. (Unfortunately, that's all they say about this topic. I assume efficient code for both can be googled for.)
A rough guess:
The size of each bin (H, S, B) should reflect what you are trying to use the histogram for. This older SO question, for example, uses a large number of bins for hue -- color is considered the most important -- and only 3 different values each for saturation and brightness. Thus, bright images with some subdued areas (say, a comic book) will give a good spread in this histogram, but a natural-color photograph will not so much.
The main constraint is that the numbers of bins, multiplied with each other, should use a reasonably small amount of memory, yet cover enough of each component to get evenly filled. Perhaps some trial and error comes into play here. You could initially distribute all of the H, S, and B components evenly over the available memory in your histogram and process a small part of the image, say 1 out of 4 pixels horizontally and vertically. If you notice one component's bins fill up too fast while others stay untouched, adjust the ranges and restart.
If you need to analyze multiple pictures, make sure they are all alike in their color gamut. You cannot expect a single bin layout to work on all sorts of images; you would end up with an even distribution where all matches are only so-so.
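As a concrete (and entirely illustrative) example of such a binned histogram, here is a small sketch; the bin counts H_BINS, S_BINS, and V_BINS are arbitrary choices of mine, not a recommendation:

#include <stdio.h>

#define H_BINS 30   /* hue:        0..179 -> 30 bins of 6  */
#define S_BINS 4    /* saturation: 0..255 -> 4 bins of 64  */
#define V_BINS 4    /* value:      0..255 -> 4 bins of 64  */

static unsigned histogram[H_BINS][S_BINS][V_BINS];

/* Map an HSV pixel (H in 0..179, S and V in 0..255) to its bin and count it */
static void add_pixel(unsigned char h, unsigned char s, unsigned char v)
{
    histogram[h * H_BINS / 180][s * S_BINS / 256][v * V_BINS / 256]++;
}

int main(void)
{
    add_pixel(90, 200, 50);   /* one sample pixel */
    printf("%u\n", histogram[90 * H_BINS / 180][200 * S_BINS / 256][50 * V_BINS / 256]);
    return 0;
}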
[1] Computer Graphics: Principles and Practice (1997), J.D. Foley, A. van Dam, S.K. Feiner, and J.F. Hughes, 2nd ed., Reading, MA: Addison-Wesley.

Dealing with Boundary conditions / Halo regions in CUDA

I'm working on image processing with CUDA and I have a question about pixel processing.
What is often done with the boundary pixels of an image when applying an m x m convolution filter?
With a 3 x 3 convolution kernel, ignoring the 1-pixel boundary of the image is easier to deal with, especially when the code is optimized with shared memory. Indeed, in this case one does not need to check whether a given pixel has its whole neighbourhood available (e.g. the pixel at coordinates (0, 0) has no left, upper-left, or upper neighbours). However, dropping the 1-pixel boundary of the original image produces only partial results.
On the other hand, I'd like to process all the pixels within the image, also when using shared-memory optimizations, e.g. loading 16 x 16 pixels but computing only the inner 14 x 14. Also in this case, ignoring the boundary pixels produces clearer code.
What is usually done in this case?
Does anyone use my approach of ignoring the boundary pixels?
Of course, I'm aware the answer depends on the type of problem; for example, adding two images pixel-wise does not have this problem.
Thanks in advance.
A common approach to dealing with border effects is to pad the original image with extra rows & columns based on your filter size. Some common choices for the padded values are listed below (a small index-extension sketch follows the list):
A constant (e.g. zero)
Replicate the first and last row / column as many times as needed
Reflect the image at the borders (e.g. column[-1] = column[1], column[-2] = column[2])
Wrap the image values (e.g. column[-1] = column[width-1], column[-2] = column[width-2])
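The following sketch (mine, with invented function names) expresses each of these padding rules as "map an index i that may fall outside 0..n-1 back into the valid range":

#include <stdio.h>

static int clamp_idx(int i, int n)     /* replicate first/last element */
{
    return i < 0 ? 0 : (i >= n ? n - 1 : i);
}

static int reflect_idx(int i, int n)   /* mirror: -1 -> 1, -2 -> 2 */
{
    return i < 0 ? -i : (i >= n ? 2 * n - 2 - i : i);
}

static int wrap_idx(int i, int n)      /* periodic: -1 -> n-1, -2 -> n-2 */
{
    return ((i % n) + n) % n;
}

static int constant_idx(int i, int n)  /* -1 signals "use the constant value" */
{
    return (i < 0 || i >= n) ? -1 : i;
}

int main(void)
{
    const int n = 8;
    for (int i = -2; i <= 1; i++)
        printf("i=%2d  clamp=%d  reflect=%d  wrap=%d  constant=%d\n",
               i, clamp_idx(i, n), reflect_idx(i, n), wrap_idx(i, n), constant_idx(i, n));
    return 0;
}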
tl;dr: It depends on the problem you're trying to solve -- there is no solution for this that applies to all problems. In fact, mathematically speaking, I suspect there may be no "solution" at all since I believe it's an ill-posed problem you're forced to deal with.
(Apologies in advance for my reckless abuse of mathematics)
To demonstrate let's consider a situation where all pixel components and kernel values are assumed to be positive. To get an idea of how some of these answers could lead us astray let's further think about a simple averaging ("box") filter. If we set values outside the boundary of the image to zero then this will clearly drag down the average at every pixel within ceil(n/2) (manhattan distance) of the boundary. So you'll get a "dark" border on your filtered image (assuming a single intensity component or RGB colorspace -- your results will vary by colorspace!). Note that similar arguments can be made if we set the values outside the boundary to any arbitrary constant -- the average will tend towards that constant. A constant of zero might be appropriate if the edges of your typical image tend towards 0 anyway. This is also true if we consider more complex filter kernels like a gaussian however the problem will be less pronounced because the kernel values tend to decrease quickly with distance from the center.
Now suppose that instead of using a constant we choose to repeat the edge values. This is the same as making a border around the image and copying rows, columns, or corners enough times to ensure the filter stays "inside" the new image. You could also think of it as clamping/saturating the sample coordinates. This has problems with our simple box filter because it overemphasizes the values of the edge pixels. A set of edge pixels will appear more than once yet they all receive the same weight w=(1/(n*n)).
Suppose we sample an edge pixel with value K 3 times. That means its contribution to the average is:
K*w + K*w + K*w = K*3*w
So effectively that one pixel has a higher weight in the average. Note that since this is an average filter the weight is a constant over the kernel. However this argument applies to kernels with weights that vary by position too (again: think of the gaussian kernel..).
Suppose we wrap or reflect the sampling coordinates so that we're still using values from within the boundary of the image. This has some valuable advantages over using a constant but isn't necessarily "correct" either. For instance, how many photos do you take where the objects at the upper border are similar to those at the bottom? Unless you're taking pictures of mirror-smooth lakes, I doubt this is true. If you're taking pictures of rocks to use as textures in games, wrapping or reflecting could be appropriate. I'm sure there are significant points to be made here about how wrapping and reflecting will likely reduce any artifacts that result from using a Fourier transform. However this comes back to the same idea: that you have a periodic signal which you do not wish to distort by introducing spurious new frequencies or overestimating the amplitude of existing frequencies.
So what can you do if you're filtering photos of bright red rocks beneath a blue sky? Clearly you don't want to add orange-ish haze in the blue sky and blue-ish fuzz on the red rocks. Reflecting the sample coordinate works because we expect similar colors to those pixels found at the reflected coordinates... unless, just for the sake of argument, we imagine the filter kernel is so big that the reflected coordinate would extend past the horizon.
Let's go back to the box filter example. An alternative with this filter is to stop thinking about using a static kernel and think back to what this kernel was meant to do. An averaging/box filter is designed to sum the pixel components and then divide by the number of pixels summed. The idea is that this smooths out noise. If we're willing to trade a reduced effectiveness in suppressing noise near the boundary, we can simply sum fewer pixels and divide by a correspondingly smaller number. This can be extended to filters with similar what-I-will-call-"normalizing" terms -- terms that are related to the area or volume of the filter. For "area" terms you count the number of kernel weights that are within the boundary and ignore those weights that are not. Then use this count as the "area" (which might involve an extra multiplication). For volume (again: assuming positive weights!) simply sum the kernel weights. This idea is probably awful for derivative filters because there are fewer pixels to compete with the noisy pixels and differentials are notoriously sensitive to noise. Also, some filters have been derived by numeric optimization and/or empirical data rather than from ab-initio/analytic methods and thus may lack a readily apparent "normalizing" factor.
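Here is a minimal sketch (mine) of that renormalized box filter idea: near the border, sum only the taps that fall inside the image and divide by their count. On a constant image the output stays constant all the way to the corners, i.e. no dark border.

#include <stdio.h>

#define W 6
#define H 4

/* 3x3 box filter that renormalizes by the number of in-bounds taps */
static float box3x3_renorm(float img[H][W], int x, int y)
{
    float sum   = 0.0f;
    int   count = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            int xx = x + dx, yy = y + dy;
            if (xx < 0 || xx >= W || yy < 0 || yy >= H)
                continue;               /* skip taps outside the image */
            sum += img[yy][xx];
            count++;
        }
    return sum / (float)count;          /* divide by the taps actually used */
}

int main(void)
{
    float img[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = 1.0f;           /* constant image */

    /* both prints give 1.000000: the corner is not darkened */
    printf("corner: %f, interior: %f\n",
           box3x3_renorm(img, 0, 0), box3x3_renorm(img, 2, 2));
    return 0;
}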
Your question is somewhat broad and I believe it mixes two problems:
dealing with boundary conditions;
dealing with halo regions.
The first problem (boundary conditions) is encountered, for example, when computing the convolution between an image and a 3 x 3 kernel. When the convolution window runs across the boundary, one has the problem of extending the image outside of its boundaries.
The second problem (halo regions) is encountered, for example, when loading a 16 x 16 tile within shared memory and one has to process the internal 14 x 14 tile to compute second order derivatives.
For the second issue, I think a useful question is the following: Analyzing memory access coalescing of my CUDA kernel.
Concerning the extension of a signal outside of its boundaries, a useful tool is provided in this case by texture memory, thanks to the different addressing modes it provides; see The different addressing modes of CUDA textures.
Below, I'm providing an example of how a filter with periodic boundary conditions can be implemented using texture memory (for simplicity, the kernel below computes a 3-point moving average rather than a true median).
#include <stdio.h>
#include <stdlib.h>

#include "TimingGPU.cuh"
#include "Utilities.cuh"

texture<float, 1, cudaReadModeElementType> signal_texture;

#define BLOCKSIZE 32

/*************************************************/
/* KERNEL FUNCTION FOR MEDIAN FILTER CALCULATION */
/*************************************************/
__global__ void median_filter_periodic_boundary(float * __restrict__ d_vec, const unsigned int N){

    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    if (tid < N) {

        float signal_center = tex1D(signal_texture, tid - 0);
        float signal_before = tex1D(signal_texture, tid - 1);
        float signal_after  = tex1D(signal_texture, tid + 1);

        printf("%i %f %f %f\n", tid, signal_before, signal_center, signal_after);

        d_vec[tid] = (signal_center + signal_before + signal_after) / 3.f;
    }
}

/********/
/* MAIN */
/********/
int main() {

    const int N = 10;

    // --- Input host array declaration and initialization
    float *h_arr = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) h_arr[i] = (float)i;

    // --- Output host and device array vectors
    float *h_vec = (float *)malloc(N * sizeof(float));
    float *d_vec; gpuErrchk(cudaMalloc(&d_vec, N * sizeof(float)));

    // --- CUDA array declaration and texture memory binding; CUDA array initialization
    cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
    //Alternatively
    //cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);

    cudaArray *d_arr; gpuErrchk(cudaMallocArray(&d_arr, &channelDesc, N, 1));
    gpuErrchk(cudaMemcpyToArray(d_arr, 0, 0, h_arr, N * sizeof(float), cudaMemcpyHostToDevice));

    cudaBindTextureToArray(signal_texture, d_arr);
    signal_texture.normalized = false;
    signal_texture.addressMode[0] = cudaAddressModeWrap;

    // --- Kernel execution
    median_filter_periodic_boundary<<<iDivUp(N, BLOCKSIZE), BLOCKSIZE>>>(d_vec, N);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());

    gpuErrchk(cudaMemcpy(h_vec, d_vec, N * sizeof(float), cudaMemcpyDeviceToHost));

    for (int i = 0; i < N; i++) printf("h_vec[%i] = %f\n", i, h_vec[i]);

    printf("Test finished\n");

    return 0;
}

Resources