How to de-interleave image channels in SSE - image-processing

Is there any way to de-interleave 32bpp image channels in SSE, similar to the NEON code below?
// Read interleaved r,g,b,a pixels into 4 registers (8 pixels per channel)
uint8x8x4_t SrcPixels8x8x4 = vld4_u8(inPixel32);
// Widen the R channel into two vectors of 4 x 32-bit elements
uint32x4_t ChannelR1_32x4 = vmovl_u16(vget_low_u16(vmovl_u8(SrcPixels8x8x4.val[0])));
uint32x4_t ChannelR2_32x4 = vmovl_u16(vget_high_u16(vmovl_u8(SrcPixels8x8x4.val[0])));
Basically, I want each color channel in a separate vector, with every vector holding 4 elements of 32 bits, so I can do some calculations on them. I'm not very familiar with SSE and couldn't find such an instruction there; can someone suggest a good way to do this? Any help is highly appreciated.

Since the 8-bit values are unsigned you can just do this with shifting and masking, much like you would for scalar code, e.g.
__m128i vrgba;   // 4 interleaved RGBA pixels, e.g. loaded with _mm_loadu_si128
__m128i vr = _mm_and_si128(vrgba, _mm_set1_epi32(0xff));
__m128i vg = _mm_and_si128(_mm_srli_epi32(vrgba, 8), _mm_set1_epi32(0xff));
__m128i vb = _mm_and_si128(_mm_srli_epi32(vrgba, 16), _mm_set1_epi32(0xff));
__m128i va = _mm_srli_epi32(vrgba, 24);
Note that I'm assuming your RGBA elements have the R component in the LS 8 bits and the A component in the MS 8 bits, but if they are the opposite endianness you can just change the names of the vr/vg/vb/va vectors.
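For context, here's a minimal sketch of how this could sit in a loop. The pointer name inPixel32 is borrowed from the question; the function name and the assumption that numPixels is a multiple of 4 are mine:
#include <emmintrin.h>  // SSE2
// De-interleave 4 RGBA pixels (16 bytes) per iteration into 4 x 32-bit channel vectors.
void deinterleave_rgba(const unsigned int *inPixel32, int numPixels)
{
    for (int i = 0; i < numPixels; i += 4)
    {
        __m128i vrgba = _mm_loadu_si128((const __m128i *)(inPixel32 + i));
        __m128i vr = _mm_and_si128(vrgba, _mm_set1_epi32(0xff));
        __m128i vg = _mm_and_si128(_mm_srli_epi32(vrgba, 8),  _mm_set1_epi32(0xff));
        __m128i vb = _mm_and_si128(_mm_srli_epi32(vrgba, 16), _mm_set1_epi32(0xff));
        __m128i va = _mm_srli_epi32(vrgba, 24);
        // ... per-channel 32-bit math goes here, then re-interleave and store ...
        (void)vr; (void)vg; (void)vb; (void)va;  // silence unused-variable warnings in this sketch
    }
}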

Related

SSE Shift Instruction zeroes the vector with _mm_set1_epi32() for the count vector?

Here's the situation: m3 = _mm_srli_epi32(m2, 23); does exactly what is expected, but m3 = _mm_srl_epi32(m2, shift); (with shift initialized as __m128i shift = _mm_set1_epi32(23);) yields zero.
I've checked, and shift does have the value it should have. Is there something simple I may be missing?
_mm_srl_epi32(__m128i a, __m128i count) takes the count from the low 64 bits of the count vector. _mm_set1_epi32(23) puts (23<<32) | 23 in those low 64 bits, which is a huge number that shifts out all the bits.
SSE shifts saturate the count (unlike scalar shifts which mask the count).
You want _mm_cvtsi32_si128(int) to zero-extend a single int into a __m128i, or if your shift count is already in a vector you need to isolate it in the low 64 bits of a vector with an AND, shuffle, or whatever.
movq xmm,xmm can zero-extend a 64-bit element to 128, but there's no equivalent for 32-bit elements.
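To make that concrete, here's a minimal sketch of both options; the names m2 and shift are from the question, and the helper function names are mine:
#include <emmintrin.h>
__m128i shift_right_23(__m128i m2)
{
    // Count zero-extended into the low 64 bits of a vector: this works.
    __m128i shift = _mm_cvtsi32_si128(23);
    return _mm_srl_epi32(m2, shift);
}
__m128i shift_right_by_low_element(__m128i m2, __m128i counts)
{
    // If the count arrived broadcast into every element (e.g. via _mm_set1_epi32),
    // isolate the low 32 bits so the low 64-bit count isn't huge.
    __m128i shift = _mm_and_si128(counts, _mm_cvtsi32_si128(-1));
    return _mm_srl_epi32(m2, shift);
}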

SSE: shuffle (permutevar) 4x32 integers

I have some code using the AVX2 intrinsic _mm256_permutevar8x32_epi32 aka vpermd to select integers from an input vector by an index vector. Now I need the same thing but for 4x32 instead of 8x32. _mm_permutevar_ps does it for floating point, but I'm using integers.
One idea is _mm_shuffle_epi32, but I'd first need to convert my 4x32 index values to a single integer, that is:
imm[1:0] := idx[31:0]
imm[3:2] := idx[63:32]
imm[5:4] := idx[95:64]
imm[7:6] := idx[127:96]
I'm not sure what's the best way to do that, and moreover I'm not sure it's the best way to proceed. I'm looking for the most efficient method on Broadwell/Haswell to emulate the "missing" _mm_permutevar_epi32(__m128i a, __m128i idx). I'd rather use 128-bit instructions than 256-bit ones if possible (i.e. I don't want to widen the 128-bit inputs then narrow the result).
It's useless to generate an immediate at run-time, unless you're JITing new code. An immediate is a byte that's literally part of the machine-code instruction encoding. That's great if you have a compile-time-constant shuffle (after inlining + template expansion), otherwise forget about those shuffles that take the control operand as an integer1.
Before AVX, the only variable-control shuffle was SSSE3 pshufb. (_mm_shuffle_epi8). That's still the only 128-bit (or in-lane) integer shuffle instruction in AVX2 and I think AVX512.
AVX1 added some in-lane 32-bit variable shuffles, like vpermilps (_mm_permutevar_ps). AVX2 added lane-crossing integer and FP shuffles, but somewhat strangely no 128-bit version of vpermd. Perhaps because Intel microarchitectures have no penalty for using FP shuffles on integer data. (Which is true on Sandybridge family, I just don't know if that was part of the reasoning for the ISA design). But you'd think they would have added __m128i intrinsics for vpermilps if that's what you were "supposed" to do. Or maybe the compiler / intrinsics design people didn't agree with the asm instruction-set people?
If you have a runtime-variable vector of 32-bit indices and want to do a shuffle with 32-bit granularity, by far your best bet is to just use AVX _mm_permutevar_ps.
_mm_castps_si128( _mm_permutevar_ps (_mm_castsi128_ps(a), idx) )
On Intel at least, it won't even introduce any extra bypass latency when used between integer instructions like paddd; i.e. FP shuffles specifically (not blends) have no penalty for use on integer data in Sandybridge-family CPUs.
If there's any penalty on AMD Bulldozer or Ryzen, it's minor and definitely cheaper than the cost of calculating a shuffle-control vector for (v)pshufb.
Using vpermd ymm and ignoring the upper 128 bits of input and output (i.e. by using cast intrinsics) would be much slower on AMD (because its 128-bit SIMD design has to split lane-crossing 256-bit shuffles into several uops), and also worse on Intel where it makes it 3c latency instead of 1 cycle.
@Iwill's answer shows a way to calculate a shuffle-control vector of byte indices for pshufb from a vector of 4x32-bit dword indices. But it uses SSE4.1 pmulld, which is 2 uops on most CPUs and could easily be a worse bottleneck than the shuffles themselves. (See discussion in comments under that answer.) That matters especially on older CPUs without AVX, some of which can do 2 pshufb per clock, unlike modern Intel (Haswell and later have only 1 shuffle port and easily bottleneck on shuffles; Ice Lake will add another shuffle port, according to Intel's Sunny Cove presentation).
If you do have to write an SSSE3 or SSE4.1 version of this, it's probably best to stick to SSSE3 only: use another pshufb to duplicate the index byte within each dword, plus a left shift, before ORing the byte offsets 0,1,2,3 into each dword, instead of pmulld. SSE4.1 pmulld is multiple uops and even worse than pshufb on some CPUs with slow pshufb. (You might not benefit from vectorizing at all on CPUs that have only SSSE3 and not SSE4.1, i.e. first-gen Core 2, because its pshufb is slow-ish.)
On 2nd-gen Core2, and Goldmont, pshufb is a single-uop instruction with 1-cycle latency. On Silvermont and first-gen Core 2 it's not so good. But overall I'd recommend pshufb + pslld + por to calculate a control-vector for another pshufb if AVX isn't available.
An extra shuffle to prepare for a shuffle is far worse than just using vpermilps on any CPU that supports AVX.
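If it helps, here's a minimal sketch of that pshufb + pslld + por sequence; the helper name vperm_ssse3 is mine, and it assumes the dword indices are already in the 0..3 range:
#include <tmmintrin.h>  // SSSE3
__m128i vperm_ssse3(__m128i a, __m128i idx)
{
    // Replicate the low byte of each dword into all 4 bytes of that dword.
    const __m128i rep = _mm_set_epi8(12,12,12,12, 8,8,8,8, 4,4,4,4, 0,0,0,0);
    __m128i ctrl = _mm_shuffle_epi8(idx, rep);
    // Scale dword indices to byte indices (x4) and add the 0,1,2,3 byte offsets.
    ctrl = _mm_slli_epi32(ctrl, 2);
    ctrl = _mm_or_si128(ctrl, _mm_set1_epi32(0x03020100));
    return _mm_shuffle_epi8(a, ctrl);
}
Compared with the SSE4.1 pmulld version shown further down, this costs one more instruction, but every step is a cheap single-uop operation on CPUs with fast pshufb.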
Footnote 1:
You'd have to use a switch or something to select a code path with the right compile-time-constant integer, and that's horrible; only consider that if you don't even have SSSE3 available. It may be worse than scalar unless the jump-table branch predicts perfectly.
Although Peter Cordes is correct in saying that the AVX instruction vpermilps and its intrinsic _mm_permutevar_ps() will probably do the job, if you're working on machines older than Sandy Bridge, an SSE4.1 variant using pshufb works quite well too.
AVX variant
Credits to @PeterCordes
#include <stdio.h>
#include <immintrin.h>

__m128i vperm(__m128i a, __m128i idx){
    return _mm_castps_si128(_mm_permutevar_ps(_mm_castsi128_ps(a), idx));
}

int main(int argc, char* argv[]){
    __m128i a   = _mm_set_epi32(0xDEAD, 0xBEEF, 0xCAFE, 0x0000);
    __m128i idx = _mm_set_epi32(1,0,3,2);
    __m128i shu = vperm(a, idx);
    printf("%04x %04x %04x %04x\n", ((unsigned*)(&shu))[3],
                                    ((unsigned*)(&shu))[2],
                                    ((unsigned*)(&shu))[1],
                                    ((unsigned*)(&shu))[0]);
    return 0;
}
SSE4.1 variant
#include <stdio.h>
#include <immintrin.h>

__m128i vperm(__m128i a, __m128i idx){
    idx = _mm_and_si128  (idx, _mm_set1_epi32(0x00000003));
    idx = _mm_mullo_epi32(idx, _mm_set1_epi32(0x04040404));
    idx = _mm_or_si128   (idx, _mm_set1_epi32(0x03020100));
    return _mm_shuffle_epi8(a, idx);
}

int main(int argc, char* argv[]){
    __m128i a   = _mm_set_epi32(0xDEAD, 0xBEEF, 0xCAFE, 0x0000);
    __m128i idx = _mm_set_epi32(1,0,3,2);
    __m128i shu = vperm(a, idx);
    printf("%04x %04x %04x %04x\n", ((unsigned*)(&shu))[3],
                                    ((unsigned*)(&shu))[2],
                                    ((unsigned*)(&shu))[1],
                                    ((unsigned*)(&shu))[0]);
    return 0;
}
This compiles down to the crisp
0000000000400550 <vperm>:
400550: c5 f1 db 0d b8 00 00 00 vpand 0xb8(%rip),%xmm1,%xmm1 # 400610 <_IO_stdin_used+0x20>
400558: c4 e2 71 40 0d bf 00 00 00 vpmulld 0xbf(%rip),%xmm1,%xmm1 # 400620 <_IO_stdin_used+0x30>
400561: c5 f1 eb 0d c7 00 00 00 vpor 0xc7(%rip),%xmm1,%xmm1 # 400630 <_IO_stdin_used+0x40>
400569: c4 e2 79 00 c1 vpshufb %xmm1,%xmm0,%xmm0
40056e: c3 retq
The AND-masking is optional if you can guarantee that the control indices will always be the 32-bit integers 0, 1, 2 or 3.

What does this x86 SSE code do?

I see this piece of code in OpenCV.
__m128i delta = _mm_set1_epi8(-128),
        t     = _mm_set1_epi8((char)threshold),
        K16   = _mm_set1_epi8((char)K);
(void)K16;
(void)delta;
(void)t;
Can someone explain to me what it does? I understand what the SSE functions do, but what happens in the next three lines is unclear.
_mm_set1_epi8 sets the 128-bit value by broadcasting the signed char input into every 8-bit lane:
http://msdn.microsoft.com/en-us/library/6e14xhyf(v=vs.90).aspx
The (void)K16; / (void)delta; / (void)t; lines simply mark the variables as (for now) unused so the compiler doesn't emit unused-variable warnings.
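If it helps, a tiny sketch showing that broadcast in action; the threshold value here is just a placeholder I picked:
#include <emmintrin.h>
#include <stdio.h>
int main(void)
{
    int threshold = 10;                              // placeholder value
    __m128i t     = _mm_set1_epi8((char)threshold);  // 16 copies of 0x0a
    __m128i delta = _mm_set1_epi8(-128);             // 16 copies of 0x80
    unsigned char bytes[16];
    _mm_storeu_si128((__m128i *)bytes, t);
    for (int i = 0; i < 16; i++) printf("%02x ", bytes[i]);
    printf("\n");
    _mm_storeu_si128((__m128i *)bytes, delta);
    for (int i = 0; i < 16; i++) printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}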

OpenCV : How do I find the minimum element along a specific dimension?

I'm a new user to OpenCV. I'm using version 2.3.2 (from the SVN repository).
I have a specific 3-dimensional cv::Mat structure which is 288 x 384 x 10. This represents a 288 x 384 image, and the other 10 channels represent a disparity value. I want to find the minimum element and its location. There is a minMaxElem function in OpenCV, but it doesn't work with multi-dimensional arrays. Any idea how I can use the channel-splitting functions in OpenCV to perform this?
You can use minMaxIdx function to find minimum/maximum on multidimensional array:
void minMaxIdx(InputArray src, double* minVal, double* maxVal,
               int* minIdx=0, int* maxIdx=0, InputArray mask=noArray());
Non-zero minIdx and maxIdx should point to the arrays having enough length to store indexes for all dimensions (3 for 3-dimensional Mat).
minVal and maxVal are used to return single minimum/maximum value. They can be 0 if you don't need the values.
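A minimal sketch of how that might look for the 288 x 384 x 10 volume from the question; the fill value and the planted minimum are just illustrative assumptions:
#include <opencv2/core/core.hpp>
#include <cstdio>
int main()
{
    int sizes[3] = {288, 384, 10};
    cv::Mat vol(3, sizes, CV_32F, cv::Scalar(1.0));  // 3-dimensional cost volume
    vol.at<float>(10, 20, 3) = -5.0f;                // plant a known minimum
    double minVal;
    int minIdx[3];                                   // one index per dimension
    cv::minMaxIdx(vol, &minVal, 0, minIdx, 0);
    std::printf("min %f at (%d, %d, %d)\n", minVal, minIdx[0], minIdx[1], minIdx[2]);
    return 0;
}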

Converting RGB to grayscale/intensity

When converting from RGB to grayscale, it is said that specific weights to channels R, G, and B ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.
It is said that the reason for this is different human perception of, or sensitivity to, these three colors. Sometimes it is also said these are the values used to compute the NTSC signal.
However, I didn't find a good reference for this on the web. What is the source of these values?
See also these previous questions: here and here.
The specific numbers in the question are from CCIR 601 (see Wikipedia article).
If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
Here are some more links on color in general:
Wikipedia: Luma
Bruce Lindbloom's outstanding web site
Chapter 4 on Color in the book "Information Visualization" by Colin Ware, ISBN 1-55860-819-2; this long link to Ware on books.google.com may or may not work
cambridgeincolor: excellent, well-written "tutorials on how to acquire, interpret and process digital photographs using a visually-oriented approach that emphasizes concept over procedure"
Should you run into "linear" vs "nonlinear" RGB, here's part of an old note to myself on this. Repeat: in practice you won't see much difference.
### RGB -> ^gamma -> Y -> L*
In color science, the common RGB values, as in html rgb( 10%, 20%, 30% ), are called "nonlinear" or Gamma corrected. "Linear" values are defined as
Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma
where gamma is 2.2 for many PCs. The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here I'll drop the '.
Brightness on a CRT display is proportional to RGBlin = RGB ^ gamma, so 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness. (LCD displays are more complex; furthermore, some graphics cards compensate for gamma.)
To get the measure of lightness called L* from RGB, first divide R G B by 255, and compute
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
This is Y in XYZ color space; it is a measure of color "luminance". (The real formulas are not exactly x^gamma, but close; stick with x^gamma for a first pass.)
Finally,
L* = 116 * Y^(1/3) - 16
"... aspires to perceptual uniformity [and] closely matches human perception of lightness." -- Wikipedia Lab color space
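A small sketch of that RGB -> ^gamma -> Y -> L* chain in code, using the x^gamma approximation described above; the function name and the 8-bit input range are my assumptions:
#include <cmath>
#include <cstdio>
// Approximate CIE L* (0..100) from 8-bit nonlinear (gamma-corrected) RGB.
double lightness_Lstar(unsigned char R8, unsigned char G8, unsigned char B8)
{
    const double gamma = 2.2;
    double R = std::pow(R8 / 255.0, gamma);  // "linearize" each channel
    double G = std::pow(G8 / 255.0, gamma);
    double B = std::pow(B8 / 255.0, gamma);
    double Y = 0.2126 * R + 0.7152 * G + 0.0722 * B;  // relative luminance
    return 116.0 * std::pow(Y, 1.0 / 3.0) - 16.0;     // L*, ignoring the linear segment near black
}
int main()
{
    std::printf("L* of 50%% gray: %.1f\n", lightness_Lstar(128, 128, 128));  // ~54
    return 0;
}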
I found this publication referenced in an answer to a previous similar question. It is very helpful, and the page has several sample images:
Perceptual Evaluation of Color-to-Grayscale Image Conversions by Martin Čadík, Computer Graphics Forum, Vol 27, 2008
The publication explores several other methods to generate grayscale images with different outcomes:
CIE Y
Color2Gray
Decolorize
Smith08
Rasche05
Bala04
Neumann07
Interestingly, it concludes that there is no universally best conversion method, as each performed better or worse than others depending on input.
Here's some code in C to convert RGB to grayscale.
The commonly used weighting for RGB-to-grayscale conversion is roughly 0.3R + 0.59G + 0.11B.
These weights aren't absolutely critical, so you can play with them.
I have made them 0.25R + 0.5G + 0.25B, which produces a slightly darker image.
NOTE: The following code assumes xRGB 32bit pixel format
unsigned int *pntrBWImage=(unsigned int*)..data pointer..; //assumes 4*width*height bytes with 32 bits i.e. 4 bytes per pixel
unsigned int fourBytes;
unsigned char r,g,b;
for (int index=0;index<width*height;index++)
{
    fourBytes=pntrBWImage[index];   //caches 4 bytes at a time
    r=(fourBytes>>16);              //red byte of xRGB (truncation keeps the low 8 bits)
    g=(fourBytes>>8);               //green byte
    b=fourBytes;                    //blue byte
    I_Out[index] = (r>>2) + (g>>1) + (b>>2);    //0.25R + 0.5G + 0.25B; runs in 0.00065s on my pc and produces slightly darker results
    //I_Out[index]=((unsigned int)(r+g+b))/3;   //pure average; runs in 0.0011s on my pc
}
Check out the Color FAQ for information on this. These values come from the standardization of RGB values that we use in our displays. Actually, according to the Color FAQ, the values you are using are outdated, as they are the values used for the original NTSC standard and not modern monitors.
What is the source of these values?
The "source" of the coefficients posted are the NTSC specifications which can be seen in Rec601 and Characteristics of Television.
The "ultimate source" are the CIE circa 1931 experiments on human color perception. The spectral response of human vision is not uniform. Experiments led to weighting of tristimulus values based on perception. Our L, M, and S cones1 are sensitive to the light wavelengths we identify as "Red", "Green", and "Blue" (respectively), which is where the tristimulus primary colors are derived.2
The linear light3 spectral weightings for sRGB (and Rec709) are:
Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y
These are specific to the sRGB and Rec709 colorspaces, which are intended to represent computer monitors (sRGB) or HDTV monitors (Rec709), and are detailed in the ITU documents for Rec709 and also BT.2380-2 (10/2018)
FOOTNOTES
(1) Cones are the color detecting cells of the eye's retina.
(2) However, the chosen tristimulus wavelengths are NOT at the "peak" of each cone type - instead tristimulus values are chosen such that they stimulate one particular cone type substantially more than another, i.e. separation of stimulus.
(3) You need to linearize your sRGB values before applying the coefficients. I discuss this in another answer here.
Starting a list to enumerate how different software packages do it. Here is a good CVPR paper to read as well.
FreeImage
#define LUMA_REC709(r, g, b) (0.2126F * r + 0.7152F * g + 0.0722F * b)
#define GREY(r, g, b) (BYTE)(LUMA_REC709(r, g, b) + 0.5F)
OpenCV
nVidia Performance Primitives
Intel Performance Primitives
Matlab
nGray = 0.299F * R + 0.587F * G + 0.114F * B;
These values vary from person to person, especially for people who are colorblind.
Is all this really necessary? Human perception and CRT vs. LCD will vary, but the R G B intensity does not. Why not L = (R + G + B)/3 and set the new RGB to L, L, L?
