NEON acceleration for 12-bit to 8-bit - image-processing

I have a buffer of 12-bit data (stored in 16-bit words)
and need to convert it to 8-bit (shift right by 4).
How can NEON accelerate this processing?
Thank you for your help,
Brahim

I took the liberty of assuming a few things, explained below, but this kind of code (untested, may require a few modifications) should provide a good speedup compared to a naive non-NEON version:
#include <arm_neon.h>
#include <stdint.h>
void convert(const uint16_t *restrict input, // the buffer to convert
             uint8_t *restrict output,       // the buffer in which to store the result
             int sz) {                       // their (common) size
    /* Assuming the buffer size is a multiple of 8 */
    for (int i = 0; i < sz; i += 8) {
        // Load a vector of 8 16-bit values:
        uint16x8_t v = vld1q_u16(input + i);
        // Shift it right by 4, narrowing it to 8-bit values.
        uint8x8_t shifted = vshrn_n_u16(v, 4);
        // Store it in the output buffer.
        vst1_u8(output + i, shifted);
    }
}
Things I assumed here:
that you're working with unsigned values. If that's not the case, it is easy to adapt anyway (uint* -> int*, *_u8 -> *_s8 and *_u16 -> *_s16)
as the values are loaded 8 by 8, I assumed the buffer length was a multiple of 8 to avoid edge cases. If that's not the case, you should probably pad it artificially to a multiple of 8, or finish the last few elements with a scalar loop as sketched after this list.
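A minimal way to handle a buffer whose length isn't a multiple of 8 (my own sketch, not part of the original answer; untested) is to run the vector loop on whole groups of 8 and fall back to plain C for the remaining elements:
#include <arm_neon.h>
#include <stdint.h>

void convert_any_size(const uint16_t *restrict input, uint8_t *restrict output, int sz) {
    int i = 0;
    // Vector part: 8 elements per iteration.
    for (; i + 8 <= sz; i += 8) {
        uint16x8_t v = vld1q_u16(input + i);
        vst1_u8(output + i, vshrn_n_u16(v, 4));
    }
    // Scalar tail: at most 7 leftover elements.
    for (; i < sz; i++)
        output[i] = (uint8_t)(input[i] >> 4);
}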
Finally, the two NEON documentation resource pages I used: one about loads and stores of vectors, and one about shifting vectors.
Hope this helps!

prototype: void dataConvert(void *pDst, void *pSrc, unsigned int count);
@ AAPCS: r0 = pDst, r1 = pSrc, r2 = count (in elements, assumed to be a multiple of 32)
1:
    vld1.16     {q8-q9},   [r1]!   @ load 16 16-bit source values
    vld1.16     {q10-q11}, [r1]!   @ load 16 more (32 elements per iteration)
    vqrshrn.u16 d16, q8,  #4       @ shift right by 4, round, saturate, narrow to 8 bits
    vqrshrn.u16 d17, q9,  #4
    vqrshrn.u16 d18, q10, #4
    vqrshrn.u16 d19, q11, #4
    vst1.16     {q8-q9},   [r0]!   @ store the 32 result bytes
    subs        r2, #32            @ 32 elements consumed this iteration
    bgt         1b
The q in vqrshrn means saturation; the r means rounding.
Change u16 to s16 in case of signed data.
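For reference, a rough C intrinsics equivalent of the loop above (my own sketch, untested; the function name and the multiple-of-16 size assumption are mine). vqrshrn_n_u16 is the intrinsic counterpart of vqrshrn.u16, i.e. a rounding, saturating shift-right-and-narrow:
#include <arm_neon.h>
#include <stdint.h>

// Sketch: rounding + saturating 12-bit -> 8-bit conversion, 16 elements per iteration.
void dataConvertIntrinsics(uint8_t *pDst, const uint16_t *pSrc, unsigned int count) {
    for (unsigned int i = 0; i < count; i += 16) {
        uint16x8_t a = vld1q_u16(pSrc + i);
        uint16x8_t b = vld1q_u16(pSrc + i + 8);
        uint8x8_t lo = vqrshrn_n_u16(a, 4);  // shift right by 4, round, saturate, narrow
        uint8x8_t hi = vqrshrn_n_u16(b, 4);
        vst1q_u8(pDst + i, vcombine_u8(lo, hi));
    }
}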

Related

How to implement decay towards zero in signed fixed point math, in sse?

There are many decay-like physical processes (for example body friction or charge leak) that are usually modelled in iterations like x' = x * 0.99, which is very easy to write in floating-point arithmetic.
However, I need to do this in 16-bit "8.8" signed fixed point, in SSE. For an efficient implementation on a typical ALU the formula above can be rewritten as x = x - x/128, or x = x - (x>>7), where >> is an "arithmetic", sign-extending right shift.
And I'm stuck here, because _mm_sra_epi16() produces totally counterintuitive behaviour, which is easily verifiable with the following example:
#include <cstdint>
#include <iostream>
#include <emmintrin.h>

using namespace std;

int main(int argc, char** argv) {
    cout << "required: ";
    for (int i = -1; i < 7; ++i) {
        cout << hex << (0x7fff >> i) << ", ";
    }
    cout << endl;
    cout << "produced: ";
    __m128i a = _mm_set1_epi16(0x7fff);
    __m128i b = _mm_set_epi16(-1, 0, 1, 2, 3, 4, 5, 6);
    auto c = _mm_sra_epi16(a, b);
    for (auto i = 0; i < 8; ++i) {
        cout << hex << c.m128i_i16[i] << ", ";
    }
    cout << endl;
    return 0;
}
Output would be as follows:
required: 0, 7fff, 3fff, 1fff, fff, 7ff, 3ff, 1ff,
produced: 0, 0, 0, 0, 0, 0, 0, 0,
It only applies the first shift count to all elements, as if it were actually an _mm_sra1_epi16 function, accidentally named sra and given a __m128i second argument by a funny clause for no reason. So this cannot be used in SSE.
On the other hand, I've heard that the division algorithm is enormously complex, which is why _mm_div_epi16 is absent from SSE and also cannot be used.
What to do, and how can I implement/vectorize this popular "decay" technique?
x -= x>>7 is trivial to implement with SSE2, using a constant shift count for efficiency. This compiles to 2 instructions if AVX is available, otherwise a movdqa is needed to copy v before a destructive right-shift.
__m128i downscale(__m128i v){
    __m128i dec = _mm_srai_epi16(v, 7);
    return _mm_sub_epi16(v, dec);
}
GCC even auto-vectorizes it (Godbolt).
void foo(short *__restrict a) {
    for (int i=0 ; i<10240 ; i++) {
        a[i] -= a[i]>>7;   // inner loop uses the same psraw / psubw
    }
}
Unlike float, fixed-point has constant absolute precision over the full range, not constant relative precision. So for small positive numbers, v>>7 will be zero and your decrement will stall. (Negative inputs underflow to -1, because arithmetic right shift rounds towards -infinity.)
If small inputs where the shift truncates to 0 are a problem, you might want to OR with _mm_set1_epi16(1) to make sure the decrement is non-zero. This has a negligible effect on large-ish inputs. However, it will eventually make a downscale chain go from 0 to -1 (and then back up to 0, because -1 | 1 == -1 in 2's complement).
__m128i downscale_nonzero(__m128i v){
    __m128i dec = _mm_srai_epi16(v, 7);
    dec = _mm_or_si128(dec, _mm_set1_epi16(1));
    return _mm_sub_epi16(v, dec);
}
If starting negative, the sequence would be -large, logarithmic until -128, linear until -4, -3, -2, -1, 0, -1, 0, -1, ...
Your code got all-zeros because _mm_sra_epi16 uses the low 64 bits of the 2nd source vector as a 64-bit shift count that applies to all elements. Read the manual. So you shifted all the bits out of each 16-bit element.
It's not idiotic, but per-element shift counts require AVX2 (for 32/64-bit elements) or AVX512BW for _mm_srav_epi16 or 64-bit arithmetic right shifts, which would make sense for the way you're trying to use it. (But the shift count is unsigned, so -1 is also going to shift out all the bits.)
Indeed, that instruction should be named _mm_sra1_epi16()
Yup, that would make sense. But remember that when these were named, AVX2 _mm_srav_* didn't exist yet. Also, that specific name would not be ideal because 1 and i are not the most visually distinct. (i for immediate, for the psraw xmm1, imm8 form instead of the psraw xmm1, xmm2/m128 form of the asm instruction: http://felixcloutier.com/x86/PSRAW:PSRAD:PSRAQ.html).
The other way it makes sense is that the MMX/SSE2 asm instruction has two forms: immediate (with the same count for all elements of course), and vector. Instead of forcing you to broadcast the count to all elements, the vector version takes the scalar count in the bottom of a vector register. I think the intended use-case is after a movd xmm0, eax or something.
If you need per-element-variable shift counts without AVX512, see various Q&As about emulating it, e.g. Shifting 4 integers right by different values SIMD.
Some of the workarounds use multiplies by powers of 2 for variable left-shift, and then a right shift to put the data where needed. (But you need to somehow get the 1<<n SIMD vector prepared, so this works if the same set of counts is reused for many vectors, or especially if it's a compile-time constant).
With 16-bit elements, you can use just one _mm_mulhi_epi16 to do runtime-variable right shift counts with no precision loss or range limits. mulhi(x, y) is exactly like (x*(int)y) >> 16, so you can use y = 1<<14 to right shift by 16-14 = 2 in that element.
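To make that concrete, here is a small sketch of the mulhi trick (my own illustration, not from the answer; the function name is made up). Each lane's multiplier is 1 << (16 - count), which works for counts in the range 2..15 because the multiplier then still fits in a positive signed 16-bit value:
#include <emmintrin.h>

// Per-lane arithmetic right shift for 16-bit elements, counts in [2, 15].
// pow2 must hold 1 << (16 - count) in each lane.
static inline __m128i sra_epi16_by_lane(__m128i x, __m128i pow2) {
    // high 16 bits of x * 2^(16-count) == x >> count (arithmetic shift)
    return _mm_mulhi_epi16(x, pow2);
}

// Example: shift lanes 0..7 right by 2,3,4,5,6,7,8,9 respectively.
// __m128i pow2 = _mm_set_epi16(1 << 7, 1 << 8, 1 << 9, 1 << 10,
//                              1 << 11, 1 << 12, 1 << 13, 1 << 14);
// __m128i shifted = sra_epi16_by_lane(v, pow2);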

Calculating constants for CRC32 using PCLMULQDQ

I'm reading through the following paper on how to implement CRC32 efficiently using the PCLMULQDQ instruction introduced in Intel Westmere and AMD Bulldozer:
V. Gopal et al. "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction." 2009. http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
I understand the algorithm, but one thing I'm not sure about is how to calculate the constants $k_i$. For example, they provide the constant values for the IEEE 802.3 polynomial:
k1 = x^(4*128+64) mod P(x) = 0x8833794C
k4 = x^128 mod P(x) = 0xE8A45605
mu = x^64 div P(x) = 0x104D101DF
and so on. I can just use these constants as I only need to support the one polynomial, but I'm interested: how did they calculate those numbers? I can't just use a typical bignum implementation (e.g. the one provided by Python) because the arithmetic must happen in GF(2).
It's just like regular division, except you exclusive-or instead of subtract. So start with the most significant 1 in the dividend. Exclusive-or the dividend by the polynomial, lining up the most significant 1 of the polynomial with that 1 in the dividend to turn it into a zero. Repeat until you have eliminated all of the 1's above the low n bits, where n is the order of the polynomial. The result is the remainder.
Make sure that your polynomial has the high term in the n+1th bit. I.e., use 0x104C11DB7, not 0x4C11DB7.
If you want the quotient (which you wrote as "div"), then keep track of the positions of the 1's you eliminated. That set, shifted down by n, is the quotient.
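As a tiny worked example (mine, not from the answer): take p(x) = x^3 + x + 1 (binary 1011, with the high term in the 4th bit) and compute x^4 mod p(x). The dividend is 10000; XOR it with the polynomial lined up under its most significant 1 (10110) to get 00110. No 1's remain above the low 3 bits, so the remainder is 110, i.e. x^2 + x, and the single elimination (done at shift 1) gives a quotient of 10, i.e. x.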
Here is how:
/* Placed in the public domain by Mark Adler, Jan 18, 2014. */

#include <stdio.h>
#include <inttypes.h>

/* Polynomial type -- must be an unsigned integer type. */
typedef uintmax_t poly_t;
#define PPOLY PRIxMAX

/* Return x^n mod p(x) over GF(2). x^deg is the highest power of x in p(x).
   The positions of the bits set in poly represent the remaining powers of x in
   p(x). In addition, returned in *div are as many of the least significant
   quotient bits as will fit in a poly_t. */
static poly_t xnmodp(unsigned n, poly_t poly, unsigned deg, poly_t *div)
{
    poly_t mod, mask, high;

    if (n < deg) {
        *div = 0;
        return poly;
    }
    mask = ((poly_t)1 << deg) - 1;
    poly &= mask;
    mod = poly;
    *div = 1;
    deg--;
    while (--n > deg) {
        high = (mod >> deg) & 1;
        *div = (*div << 1) | high;  /* quotient bits may be lost off the top */
        mod <<= 1;
        if (high)
            mod ^= poly;
    }
    return mod & mask;
}
/* Compute and show x^n modulo the IEEE 802.3 CRC-32 polynomial. If showdiv is
   true, also show the low bits of the quotient. */
static void show(unsigned n, int showdiv)
{
    poly_t div;

    printf("x^%u mod p(x) = %#" PPOLY "\n", n, xnmodp(n, 0x4C11DB7, 32, &div));
    if (showdiv)
        printf("x^%u div p(x) = %#" PPOLY "\n", n, div);
}
/* Compute the constants required to use PCLMULQDQ to compute the IEEE 802.3
   32-bit CRC. These results appear on page 16 of the Intel paper "Fast CRC
   Computation Using PCLMULQDQ Instruction". */
int main(void)
{
    show(4*128+64, 0);
    show(4*128, 0);
    show(128+64, 0);
    show(128, 0);
    show(96, 0);
    show(64, 1);
    return 0;
}
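As an aside (my addition, not part of the original answer), the same routine works for any polynomial. A hypothetical variant of show() for the CRC-32C (Castagnoli) polynomial, reusing the poly_t and PPOLY definitions above, could look like this:
/* Sketch: same as show(), but for the CRC-32C (Castagnoli) polynomial. */
static void show_crc32c(unsigned n, int showdiv)
{
    poly_t div;

    printf("x^%u mod p(x) = %#" PPOLY "\n", n, xnmodp(n, 0x1EDC6F41, 32, &div));
    if (showdiv)
        printf("x^%u div p(x) = %#" PPOLY "\n", n, div);
}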

Why is the global memory version faster than constant memory in my CUDA code?

I am working on a CUDA program and I wanted to speed up computation using constant memory, but it turned out that using constant memory makes my code ~30% slower.
I know that constant memory is good at broadcasting reads to whole warps and I thought that my program could take advantage of it.
Here is the constant memory code:
__constant__ float4 constPlanes[MAX_PLANES_COUNT];

__global__ void faultsKernelConstantMem(const float3* vertices, unsigned int vertsCount, int* displacements, unsigned int planesCount) {
    unsigned int blockId = __mul24(blockIdx.y, gridDim.x) + blockIdx.x;
    unsigned int vertexIndex = __mul24(blockId, blockDim.x) + threadIdx.x;

    if (vertexIndex >= vertsCount) {
        return;
    }

    float3 v = vertices[vertexIndex];
    int displacementSteps = displacements[vertexIndex];

    //__syncthreads();

    for (unsigned int planeIndex = 0; planeIndex < planesCount; ++planeIndex) {
        float4 plane = constPlanes[planeIndex];
        if (v.x * plane.x + v.y * plane.y + v.z * plane.z + plane.w > 0) {
            ++displacementSteps;
        }
        else {
            --displacementSteps;
        }
    }

    displacements[vertexIndex] = displacementSteps;
}
The global memory code is the same, but it has one more parameter (a pointer to the array of planes) and uses it instead of the constant array.
I thought that those first global memory reads
    float3 v = vertices[vertexIndex];
    int displacementSteps = displacements[vertexIndex];
may cause "desynchronization" of the threads, so that they would no longer take advantage of the broadcasting of constant memory reads, so I tried calling __syncthreads(); before reading constant memory, but it did not change anything.
What is wrong? Thanks in advance!
System:
CUDA Driver Version: 5.0
CUDA Capability: 2.0
Parameters:
number of vertices: ~2.5 million
number of planes: 1024
Results:
constant mem version: 46 ms
global mem version: 35 ms
EDIT:
So I've tried several things to make the constant memory faster, such as:
1) Commenting out the two global memory reads to see if they have any impact; they do not. Global memory was still faster.
2) Processing more vertices per thread (from 8 to 64) to take advantage of the constant-memory cache. This was even slower than one vertex per thread.
2b) Using shared memory to store displacements and vertices - loading all of them at the beginning, processing, and saving all displacements. Again, slower than the constant memory example shown.
After this experience I really do not understand how constant-memory read broadcasting works and how it can be "used" correctly in my code. This code probably cannot be optimized with constant memory.
EDIT2:
Another day of tweaking. I've tried:
3) Processing more vertices (8 to 64) per thread with memory coalescing (every thread advances with a stride equal to the total number of threads in the system) -- this gives better results than a stride of 1, but still no speedup.
4) Replacing this if statement
    if (v.x * plane.x + v.y * plane.y + v.z * plane.z + plane.w > 0) {
        ++displacementSteps;
    }
    else {
        --displacementSteps;
    }
which gives 'unpredictable' results, with a little bit of math to avoid branching, using this code:
    float dist = v.x * plane.x + v.y * plane.y + v.z * plane.z + plane.w;
    int distInt = (int)(dist * (1 << 29));  // distance is in range (0 - 2), stretch it to int range
    int sign = 1 | (distInt >> (sizeof(int) * CHAR_BIT - 1));  // compute sign without using ifs
    displacementSteps += sign;
Unfortunately this is a lot slower (~30%) than using the if, so branches are not as big an evil as I thought.
EDIT3:
I am concluding this question: this problem probably cannot be improved by using constant memory; those are my results*:
*Times reported as the median of 15 independent measurements. When constant memory was not large enough to hold all the planes (4096 and 8192), the kernel was invoked multiple times.
Although a compute capability 2.0 chip has 64 KB of constant memory, each of the multiprocessors has only 8 KB of constant-memory cache. Your code has each thread requiring access to all 16 KB of the constant memory (1024 planes * sizeof(float4)), so you are losing performance through cache misses. To effectively use constant memory for the plane data, you will need to restructure your implementation.
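One possible restructuring (a sketch of my own, untested; it assumes a 1D launch for brevity, passes the planes through global memory, and the name TILE_PLANES and the tile size are made up) is to stage the planes through shared memory in tiles, so every thread in a block reads each plane from fast on-chip memory:
#define TILE_PLANES 256  // 256 * sizeof(float4) = 4 KB of shared memory per block

__global__ void faultsKernelTiled(const float3* vertices, unsigned int vertsCount,
                                  int* displacements, const float4* planes,
                                  unsigned int planesCount) {
    __shared__ float4 tile[TILE_PLANES];

    unsigned int vertexIndex = blockIdx.x * blockDim.x + threadIdx.x;
    float3 v = (vertexIndex < vertsCount) ? vertices[vertexIndex] : make_float3(0.f, 0.f, 0.f);
    int steps = (vertexIndex < vertsCount) ? displacements[vertexIndex] : 0;

    for (unsigned int base = 0; base < planesCount; base += TILE_PLANES) {
        unsigned int chunk = (planesCount - base < TILE_PLANES) ? (planesCount - base) : TILE_PLANES;

        // All threads cooperate to load the current tile of planes.
        for (unsigned int p = threadIdx.x; p < chunk; p += blockDim.x)
            tile[p] = planes[base + p];
        __syncthreads();

        for (unsigned int p = 0; p < chunk; ++p) {
            float4 plane = tile[p];
            steps += (v.x * plane.x + v.y * plane.y + v.z * plane.z + plane.w > 0) ? 1 : -1;
        }
        __syncthreads();  // don't overwrite the tile while other threads are still reading it
    }

    if (vertexIndex < vertsCount)
        displacements[vertexIndex] = steps;
}
Whether this beats the plain global-memory version depends on the GPU; since the planes are read in a broadcast pattern anyway, the hardware caches may already be doing a similar job.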

Clipping when converting signed 16-bit PCM samples to unsigned 8-bit PCM samples

I have signed mono 16-bit PCM audio samples stored in an SInt16 buffer and I am trying to convert them to unsigned mono 8-bit PCM samples stored in a UInt8 buffer. I've written the following basically-working code:
for (int i=0; i < numSamples; i++) {
    SInt8 tempSigned8Bit = signed16BitBuffer[i]/127;  // In 2 passes
    unsigned8BitBuffer[i] = tempSigned8Bit + 127;     // for clarity
}
However, I can hear clipping at the maximum amplitudes in the resulting audio, or at least that is my impression of where the distortion is occurring. Is this an artifact of the re-quantization, or do I need to include some sort of clamping as described in this question about a similar conversion but without any signedness conversion:
Convert 16 bit pcm to 8 bit
Bitwise optimizations are unnecessary, but I certainly wouldn't say no to them.
This will fail for large values because you need to divide by 256 not 127. Also the offset needs to be 128, not 127.
for (int i = 0; i < numSamples; i++) {
    SInt8 tempSigned8Bit = signed16BitBuffer[i] / 256;
    unsigned8BitBuffer[i] = tempSigned8Bit + 128;
}
The conversion for +/- full scale and zero looks like this:
Signed 16-bit    Divide     Add
sample           by 256     128

 32767      ->    127   ->  255   ; Full scale +
     0      ->      0   ->  128   ; 0
-32768      ->   -128   ->    0   ; Full scale -
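Since the question mentions bitwise optimizations: a shift-based variant (my own sketch) gives the same mapping in one expression. Note that right-shifting a negative value is technically implementation-defined in C, but common compilers treat it as an arithmetic shift; it also rounds towards minus infinity rather than towards zero, so negative samples can differ by one step from the division version:
for (int i = 0; i < numSamples; i++) {
    // (sample >> 8) is in [-128, 127]; adding 128 maps it to [0, 255].
    unsigned8BitBuffer[i] = (UInt8)((signed16BitBuffer[i] >> 8) + 128);
}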

how to make a CUDA Histogram kernel?

I am writing a CUDA kernel for a histogram of a picture, but I have no idea how to return an array from the kernel when the array is being changed by other threads at the same time. Any possible solution for it?
__global__ void Hist(
    TColor *dst,  // input image
    int imageW,
    int imageH,
    int *data
){
    const int ix = blockDim.x * blockIdx.x + threadIdx.x;
    const int iy = blockDim.y * blockIdx.y + threadIdx.y;

    if (ix < imageW && iy < imageH)
    {
        int pixel = get_red(dst[imageW * (iy) + (ix)]);
        // this assigns the specific RED value of the image to pixel
        data[pixel]++;  // ?? problem statement ...
    }
}
#para d_dst: input image. TColor is equal to float4.
#para data: the array for the histogram, size [255]
extern "C" void
cuda_Hist(TColor *d_dst, int imageW, int imageH, int *data)
{
    dim3 threads(BLOCKDIM_X, BLOCKDIM_Y);
    dim3 grid(iDivUp(imageW, BLOCKDIM_X), iDivUp(imageH, BLOCKDIM_Y));
    Hist<<<grid, threads>>>(d_dst, imageW, imageH, data);
}
Have you looked at the SDK sample? The "histogram" sample is available in the CUDA SDK (currently version 3.0 on the NVIDIA developer site, version 3.1 beta available for registered developers).
The documentation with the sample explains nicely how to handle your summation, either using global memory atomics on the GPU or by collecting the results for each block separately and then doing a separate reduction (either on the host or the GPU).
Histogramming is not particularly efficient when implemented with CUDA (or with GPGPU in general) - typically you need to generate lots of partial histograms in shared memory and then sum them. You might want to consider keeping this particular task on the CPU.
You will have to either use atomic functions to stop other threads from writing to the same memory location, or use partial histograms. Either way it's not that efficient unless the input image is very, very large.
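For illustration, a minimal sketch of the atomic approach applied to the kernel above (my own code, untested; it assumes 256 bins, the same TColor/get_red helpers as in the question, and a GPU with shared-memory atomics). It builds a per-block histogram in shared memory and then merges it into the global array, so there are far fewer global atomic operations:
__global__ void HistAtomic(TColor *dst, int imageW, int imageH, int *data)
{
    __shared__ int localHist[256];

    // Zero the per-block histogram.
    int t = threadIdx.y * blockDim.x + threadIdx.x;
    for (int i = t; i < 256; i += blockDim.x * blockDim.y)
        localHist[i] = 0;
    __syncthreads();

    const int ix = blockDim.x * blockIdx.x + threadIdx.x;
    const int iy = blockDim.y * blockIdx.y + threadIdx.y;
    if (ix < imageW && iy < imageH) {
        int pixel = get_red(dst[imageW * iy + ix]);
        atomicAdd(&localHist[pixel], 1);  // safe concurrent increment within the block
    }
    __syncthreads();

    // Merge the per-block histogram into the global one.
    for (int i = t; i < 256; i += blockDim.x * blockDim.y)
        atomicAdd(&data[i], localHist[i]);
}
The data array would need to be zeroed on the device (e.g. with cudaMemset) before launching the kernel.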
