SSE comparison returns vector of NaNs

I am trying something like this:
__m128 cA = _mm_set_ps1(-2.0f);
__m128 cB = _mm_set_ps1(2.0f);
__m128 df = _mm_cmpgt_ps(cA, cB);
In this case df returns with zeros.
But if I do:
__m128 cA = _mm_set_ps1(2.0f);
__m128 cB = _mm_set_ps1(-2.0f);
__m128 df = _mm_cmpgt_ps(cA, cB);
It returns all -nan. Is this expected behaviour? If it is, how do I evaluate those NaNs?
Using an Intel CPU, MS Visual Studio 2017

SIMD compares produce a mask: all-ones bits in elements where the compare was true, all-zero bits where it was false. All-ones is the bit-pattern for -NaN, and all-zeros is the bit-pattern for +0.0.
They're not intended to be interpreted as float. Use them with _mm_movemask_ps, blends, or bitwise operations like _mm_and_ps.
e.g. _mm_and_ps(vec, cmp_result) zeroes the elements where the compare was false. You can use this to do a conditional add by zeroing some input elements before an add, as in the sketch below.
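For example, a minimal sketch of both uses of the mask (the data values and names here are made up for illustration, not taken from the question):
#include <stdio.h>
#include <xmmintrin.h>   // SSE

int main(void) {
    __m128 a    = _mm_set_ps(3.0f, -1.0f, 4.0f, -2.0f);
    __m128 acc  = _mm_set1_ps(0.0f);
    __m128 mask = _mm_cmpgt_ps(a, _mm_setzero_ps()); // all-ones where a > 0, else all-zeros

    // 1) Turn the per-element results into a 4-bit integer bitmap
    int bits = _mm_movemask_ps(mask);                // bit i set if element i compared true
    printf("compare bitmap: 0x%x\n", bits);

    // 2) Conditional add: elements that compared false are zeroed before the add
    __m128 contrib = _mm_and_ps(a, mask);
    acc = _mm_add_ps(acc, contrib);

    float out[4];
    _mm_storeu_ps(out, acc);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}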
For more about how to use SIMD, look up a guide/tutorial. https://stackoverflow.com/tags/sse/info

Related

SSE: shuffle (permutevar) 4x32 integers

I have some code using the AVX2 intrinsic _mm256_permutevar8x32_epi32 aka vpermd to select integers from an input vector by an index vector. Now I need the same thing but for 4x32 instead of 8x32. _mm_permutevar_ps does it for floating point, but I'm using integers.
One idea is _mm_shuffle_epi32, but I'd first need to convert my 4x32 index values to a single integer, that is:
imm[1:0] := idx[31:0]
imm[3:2] := idx[63:32]
imm[5:4] := idx[95:64]
imm[7:6] := idx[127:96]
I'm not sure what's the best way to do that, and moreover I'm not sure it's the best way to proceed. I'm looking for the most efficient method on Broadwell/Haswell to emulate the "missing" _mm_permutevar_epi32(__m128i a, __m128i idx). I'd rather use 128-bit instructions than 256-bit ones if possible (i.e. I don't want to widen the 128-bit inputs then narrow the result).
It's useless to generate an immediate at run-time, unless you're JITing new code. An immediate is a byte that's literally part of the machine-code instruction encoding. That's great if you have a compile-time-constant shuffle (after inlining + template expansion), otherwise forget about those shuffles that take the control operand as an integer¹.
Before AVX, the only variable-control shuffle was SSSE3 pshufb. (_mm_shuffle_epi8). That's still the only 128-bit (or in-lane) integer shuffle instruction in AVX2 and I think AVX512.
AVX1 added some in-lane 32-bit variable shuffles, like vpermilps (_mm_permutevar_ps). AVX2 added lane-crossing integer and FP shuffles, but somewhat strangely no 128-bit version of vpermd. Perhaps because Intel microarchitectures have no penalty for using FP shuffles on integer data. (Which is true on Sandybridge family, I just don't know if that was part of the reasoning for the ISA design). But you'd think they would have added __m128i intrinsics for vpermilps if that's what you were "supposed" to do. Or maybe the compiler / intrinsics design people didn't agree with the asm instruction-set people?
If you have a runtime-variable vector of 32-bit indices and want to do a shuffle with 32-bit granularity, by far your best bet is to just use AVX _mm_permutevar_ps.
_mm_castps_si128( _mm_permutevar_ps (_mm_castsi128_ps(a), idx) )
On Intel at least, it won't even introduce any extra bypass latency when used between integer instructions like paddd; i.e. FP shuffles specifically (not blends) have no penalty for use on integer data in Sandybridge-family CPUs.
If there's any penalty on AMD Bulldozer or Ryzen, it's minor and definitely cheaper than the cost of calculating a shuffle-control vector for (v)pshufb.
Using vpermd ymm and ignoring the upper 128 bits of input and output (i.e. by using cast intrinsics) would be much slower on AMD (because its 128-bit SIMD design has to split lane-crossing 256-bit shuffles into several uops), and also worse on Intel where it makes it 3c latency instead of 1 cycle.
@Iwill's answer shows a way to calculate a shuffle-control vector of byte indices for pshufb from a vector of 4x32-bit dword indices. But it uses SSE4.1 pmulld which is 2 uops on most CPUs, and could easily be a worse bottleneck than shuffles. (See discussion in comments under that answer.) Especially on older CPUs without AVX, some of which can do 2 pshufb per clock, unlike modern Intel (Haswell and later only have 1 shuffle port and easily bottleneck on shuffles. Ice Lake will add another shuffle port, according to Intel's Sunny Cove presentation.)
If you do have to write an SSSE3 or SSE4.1 version of this, it's probably best to still use only SSSE3 and use pshufb plus a left shift to duplicate a byte within a dword before ORing in the 0,1,2,3 into the low bits, not pmulld. SSE4.1 pmulld is multiple uops and even worse than pshufb on some CPUs with slow pshufb. (You might not benefit from vectorizing at all on CPUs with only SSSE3 and not SSE4.1, i.e. first-gen Core2, because it has slow-ish pshufb.)
On 2nd-gen Core2, and Goldmont, pshufb is a single-uop instruction with 1-cycle latency. On Silvermont and first-gen Core 2 it's not so good. But overall I'd recommend pshufb + pslld + por to calculate a control-vector for another pshufb if AVX isn't available.
An extra shuffle to prepare for a shuffle is far worse than just using vpermilps on any CPU that supports AVX.
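A minimal sketch of that pshufb + pslld + por control-vector construction, assuming the index dwords are already in the 0..3 range (the function name vperm_ssse3 and the broadcast constant are mine, not from the answer):
#include <tmmintrin.h>   // SSSE3

__m128i vperm_ssse3(__m128i a, __m128i idx) {
    // Broadcast the low byte of each dword (the 0..3 index) to all 4 bytes of that dword
    __m128i bcast = _mm_shuffle_epi8(idx,
        _mm_set_epi8(12,12,12,12, 8,8,8,8, 4,4,4,4, 0,0,0,0));
    // A dword left-shift by 2 multiplies every byte by 4 (no carry between bytes since idx <= 3)
    __m128i scaled = _mm_slli_epi32(bcast, 2);
    // OR in the 0,1,2,3 byte offsets within each dword, then shuffle
    __m128i ctrl = _mm_or_si128(scaled, _mm_set1_epi32(0x03020100));
    return _mm_shuffle_epi8(a, ctrl);
}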
Footnote 1:
You'd have to use a switch or something to select a code path with the right compile-time-constant integer, and that's horrible; only consider that if you don't even have SSSE3 available. It may be worse than scalar unless the jump-table branch predicts perfectly.
Although Peter Cordes is correct in saying that the AVX instruction vpermilps and its intrinsic _mm_permutevar_ps() will probably do the job, if you're working on machines older than Sandy Bridge, an SSE4.1 variant using pshufb works quite well too.
AVX variant
Credits to @PeterCordes
#include <stdio.h>
#include <immintrin.h>

__m128i vperm(__m128i a, __m128i idx){
    return _mm_castps_si128(_mm_permutevar_ps(_mm_castsi128_ps(a), idx));
}

int main(int argc, char* argv[]){
    __m128i a   = _mm_set_epi32(0xDEAD, 0xBEEF, 0xCAFE, 0x0000);
    __m128i idx = _mm_set_epi32(1, 0, 3, 2);
    __m128i shu = vperm(a, idx);
    printf("%04x %04x %04x %04x\n", ((unsigned*)(&shu))[3],
                                    ((unsigned*)(&shu))[2],
                                    ((unsigned*)(&shu))[1],
                                    ((unsigned*)(&shu))[0]);
    return 0;
}
SSE4.1 variant
#include <stdio.h>
#include <immintrin.h>

__m128i vperm(__m128i a, __m128i idx){
    idx = _mm_and_si128 (idx, _mm_set1_epi32(0x00000003));
    idx = _mm_mullo_epi32(idx, _mm_set1_epi32(0x04040404));
    idx = _mm_or_si128  (idx, _mm_set1_epi32(0x03020100));
    return _mm_shuffle_epi8(a, idx);
}

int main(int argc, char* argv[]){
    __m128i a   = _mm_set_epi32(0xDEAD, 0xBEEF, 0xCAFE, 0x0000);
    __m128i idx = _mm_set_epi32(1, 0, 3, 2);
    __m128i shu = vperm(a, idx);
    printf("%04x %04x %04x %04x\n", ((unsigned*)(&shu))[3],
                                    ((unsigned*)(&shu))[2],
                                    ((unsigned*)(&shu))[1],
                                    ((unsigned*)(&shu))[0]);
    return 0;
}
This compiles down to the crisp
0000000000400550 <vperm>:
400550: c5 f1 db 0d b8 00 00 00 vpand 0xb8(%rip),%xmm1,%xmm1 # 400610 <_IO_stdin_used+0x20>
400558: c4 e2 71 40 0d bf 00 00 00 vpmulld 0xbf(%rip),%xmm1,%xmm1 # 400620 <_IO_stdin_used+0x30>
400561: c5 f1 eb 0d c7 00 00 00 vpor 0xc7(%rip),%xmm1,%xmm1 # 400630 <_IO_stdin_used+0x40>
400569: c4 e2 79 00 c1 vpshufb %xmm1,%xmm0,%xmm0
40056e: c3 retq
The AND-masking is optional if you can guarantee that the control indices will always be the 32-bit integers 0, 1, 2 or 3.

Why does Metal constant with h suffix produce bad byte range results?

I have run into a very strange problem in my Metal shader that has to do with a byte value in the range (0, 255). This byte value is represented as a ushort that is converted to half precision by code like (x / 255.0h). What is strange is that this literal constant divide seems to be optimized incorrectly when run on an A7 device (an A10 does not do this). Has anyone else run into this? Is there some way I can write Metal code that is used only on this GPU family 1 device?
I found a workaround, by leaving the h suffix off of the inline constant:
// This method accepts 4 byte-range input values and encodes them as a half4 vector
// that works properly on A7 class hardware. The issue with A7 devices is that
// there seems to be a compiler bug or range issue with an operation like (x / 255.0h).
// What should be the same operation (x / 255.0) does not show the range problem on A7.
half4
encodeBytesAsHalf4(const ushort4 b4) {
    return half4(b4.x/255.0, b4.y/255.0, b4.z/255.0, b4.a/255.0);
}

How to deinterleave image channels in SSE

Is there any way to de-interleave the channels of a 32bpp image in SSE, similar to the NEON code below?
// Read all r,g,b,a pixels into 4 registers
uint8x8x4_t SrcPixels8x8x4 = vld4_u8(inPixel32);
ChannelR1_32x4 = vmovl_u16(vget_low_u16(vmovl_u8(SrcPixels8x8x4.val[0])));
ChannelR2_32x4 = vmovl_u16(vget_high_u16(vmovl_u8(SrcPixels8x8x4.val[0])));
Basically I want all color channels in separate vectors, with each vector holding 4 elements of 32 bits, so I can do some calculations. I am not very familiar with SSE and could not find such an instruction there; or can someone suggest a better way to do this? Any help is highly appreciated.
Since the 8 bit values are unsigned you can just do this with shifting and masking, much like you would for scalar code, e.g.
__m128i vrgba;
__m128i vr = _mm_and_si128(vrgba, _mm_set1_epi32(0xff));
__m128i vg = _mm_and_si128(_mm_srli_epi32(vrgba, 8), _mm_set1_epi32(0xff));
__m128i vb = _mm_and_si128(_mm_srli_epi32(vrgba, 16), _mm_set1_epi32(0xff));
__m128i va = _mm_srli_epi32(vrgba, 24);
Note that I'm assuming your RGBA elements have the R component in the LS 8 bits and the A component in the MS 8 bits, but if they are the opposite endianness you can just change the names of the vr/vg/vb/va vectors.
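A runnable sketch of this, assuming four RGBA pixels packed as 32-bit values with R in the least-significant byte (the pixel data and variable names are made up):
#include <stdio.h>
#include <emmintrin.h>   // SSE2

int main(void) {
    unsigned int pixels[4] = {0x44332211u, 0x88776655u, 0xCCBBAA99u, 0x00FFEEDDu};
    __m128i vrgba = _mm_loadu_si128((const __m128i*)pixels);

    __m128i vr = _mm_and_si128(vrgba, _mm_set1_epi32(0xff));
    __m128i vg = _mm_and_si128(_mm_srli_epi32(vrgba, 8),  _mm_set1_epi32(0xff));
    __m128i vb = _mm_and_si128(_mm_srli_epi32(vrgba, 16), _mm_set1_epi32(0xff));
    __m128i va = _mm_srli_epi32(vrgba, 24);

    unsigned int r[4], g[4], b[4], a[4];
    _mm_storeu_si128((__m128i*)r, vr);
    _mm_storeu_si128((__m128i*)g, vg);
    _mm_storeu_si128((__m128i*)b, vb);
    _mm_storeu_si128((__m128i*)a, va);
    for (int i = 0; i < 4; i++)
        printf("pixel %d: r=%02x g=%02x b=%02x a=%02x\n", i, r[i], g[i], b[i], a[i]);
    return 0;
}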

loaddup_pd/unpacklo_pd on Xeon Phi

If I have the following doubles in a 512-wide SIMD vector, as in a Xeon Phi register:
m0 = |b4|a4|b3|a3|b2|a2|b1|a1|
is it possible to make it into:
m0_d = |a4|a4|a3|a3|a2|a2|a1|a1|
using a single instruction?
Also, since there are no bitwise intrinsics for doubles, is the following still a valid way to achieve the above?
m0_t = _mm512_swizzle_pd(m0,_MM_SWIZ_REG_CDAB);//m0_t->|a4|b4|a3|b3|a2|b2|a1|b1|
__m512d res = _mm512_mask_or_epi64(m0,k1,zero,m0_t);//k1 is 0xAA
Can be achieved as follows:
m0_d = _mm512_mask_swizzle_pd(m0,0xAA,m0,_MM_SWIZ_REG_CDAB);
It might seem that the swizzle operation is limited, but with the masked variant we can achieve other permutations too.

linear transformation function

I need to write a function that takes 4 bytes as input, performs a reversible linear transformation on this, and returns it as 4 bytes.
But wait, there is more: it also has to be distributive, so changing one byte on the input should affect all 4 output bytes.
The issues:
if I use multiplication it won't be reversible after it is reduced mod 256 by storing it as a byte (and it needs to stay as a byte)
if I use addition it can't be reversible and distributive
One solution:
I could create an array of bytes 256^4 long and fill it in with a one-to-one mapping. This would work, but there are issues: it means I have to search a graph of size 256^8 because I have to search for free numbers for every value (note that the distributivity should be pseudo-random, based on a 64*64 array of bytes). This solution also has the MINOR (lol) issue of needing 8GB of RAM, making it nonsense.
The domain of the input is the same as the domain of the output, and every input has a unique output; in other words, a one-to-one mapping. As I noted in "one solution", this is very possible, and I have used that method when a smaller domain (just 256) was in question. The fact is, as the numbers get big that method becomes extraordinarily inefficient: the delta flaw was O(n^5) and omega was O(n^8), with similar crappiness in memory usage.
I was wondering if there was a clever way to do it. In a nutshell, it's a one-to-one mapping of the domain (4 bytes or 256^4). Oh, and such simple things as N+1 can't be used; it has to be keyed off a 64*64 array of byte values that are pseudo-random but recreatable for reverse transformations.
Balanced Block Mixers are exactly what you're looking for.
Who knew?
Edit: it is not possible if you indeed want a linear transformation. Here's the mathy argument:
You've got four bytes, a_1, a_2, a_3, a_4, which we'll think of as a vector a with 4 components, each of which is a number mod 256. A linear transformation is just a 4x4 matrix M whose elements are also numbers mod 256. You have two conditions:
From Ma, we can deduce a (this means that M is an invertible matrix).
If a and a' differ in a single coordinate, then Ma and Ma' must differ in every coordinate.
Condition (2) is a little trickier, but here's what it means. Since M is a linear transformation, we know that
M(a - a') = Ma - Ma'
On the left, since a and a' differ in a single coordinate, a - a' has exactly one nonzero coordinate. On the right, since Ma and Ma' must differ in every coordinate, Ma - Ma' must have every coordinate nonzero.
So the matrix M must take a vector with a single nonzero coordinate to one with all nonzero coordinates. So we just need every entry of M to be a non-zero-divisor mod 256, i.e., to be odd.
Going back to condition (1), what does it mean for M to be invertible? Since we're considering it mod 256, we just need its determinant to be invertible mod 256; that is, its determinant must be odd.
So you need a 4x4 matrix with odd entries mod 256 whose determinant is odd. But this is impossible! Why? The determinant is computed by summing various products of entries. For a 4x4 matrix, there are 4! = 24 different summands, and each one, being a product of odd entries, is odd. But the sum of 24 odd numbers is even, so the determinant of such a matrix must be even!
Here are your requirements as I understand them:
Let B be the space of bytes. You want a one-to-one (and thus onto) function f: B^4 -> B^4.
If you change any single input byte, then all output bytes change.
Here's the simplest solution I have thus far. I avoided posting for a while because I kept trying to come up with a better solution, but I haven't thought of anything.
Okay, first of all, we need a function g: B -> B which takes a single byte and returns a single byte. This function must have two properties: g(x) is reversible, and x^g(x) is reversible. [Note: ^ is the XOR operator.] Any such g will do, but I will define a specific one later.
Given such a g, we define f by f(a,b,c,d) = (a^b^c^d, g(a)^b^c^d, a^g(b)^c^d, a^b^g(c)^d). Let's check your requirements:
Reversible: yes. If we XOR the first two output bytes, we get a^g(a), but by the second property of g, we can recover a. Similarly for the b and c. We can recover d after getting a,b, and c by XORing the first byte with (a^b^c).
Distributive: yes. Suppose b,c, and d are fixed. Then the function takes the form f(a,b,c,d) = (a^const, g(a)^const, a^const, a^const). If a changes, then so will a^const; similarly, if a changes, so will g(a), and thus so will g(a)^const. (The fact that g(a) changes if a does is by the first property of g; if it didn't then g(x) wouldn't be reversible.) The same holds for b and c. For d, it's even easier because then f(a,b,c,d) = (d^const, d^const, d^const, d^const) so if d changes, every byte changes.
Finally, we construct such a function g. Let T be the space of two-bit values, and h : T -> T the function such that h(0) = 0, h(1) = 2, h(2) = 3, and h(3) = 1. This function has the two desired properties of g, namely h(x) is reversible and so is x^h(x). (For the latter, check that 0^h(0) = 0, 1^h(1) = 3, 2^h(2) = 1, and 3^h(3) = 2.) So, finally, to compute g(x), split x into four groups of two bits, and take h of each quarter separately. Because h satisfies the two desired properties, and there's no interaction between the quarters, so does g.
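A small C sketch of this construction, just to make it concrete (the function names and the round-trip test are mine, not part of the answer):
#include <stdint.h>
#include <stdio.h>

/* h on 2-bit values: 0->0, 1->2, 2->3, 3->1 */
static const uint8_t h_tab[4]  = {0, 2, 3, 1};
/* inverse of x -> x^h(x) (which maps 0,1,2,3 to 0,3,1,2); coincidentally equal to h */
static const uint8_t xh_inv[4] = {0, 2, 3, 1};

static uint8_t g(uint8_t x) {                     /* apply h to each 2-bit group */
    uint8_t r = 0;
    for (int i = 0; i < 8; i += 2) r |= (uint8_t)(h_tab[(x >> i) & 3] << i);
    return r;
}
static uint8_t un_xg(uint8_t y) {                 /* recover x from x ^ g(x) */
    uint8_t r = 0;
    for (int i = 0; i < 8; i += 2) r |= (uint8_t)(xh_inv[(y >> i) & 3] << i);
    return r;
}
static void f(const uint8_t a[4], uint8_t o[4]) {
    o[0] = a[0] ^ a[1] ^ a[2] ^ a[3];
    o[1] = g(a[0]) ^ a[1] ^ a[2] ^ a[3];
    o[2] = a[0] ^ g(a[1]) ^ a[2] ^ a[3];
    o[3] = a[0] ^ a[1] ^ g(a[2]) ^ a[3];
}
static void f_inv(const uint8_t o[4], uint8_t a[4]) {
    a[0] = un_xg(o[0] ^ o[1]);                    /* o0 ^ o1 == a0 ^ g(a0) */
    a[1] = un_xg(o[0] ^ o[2]);
    a[2] = un_xg(o[0] ^ o[3]);
    a[3] = o[0] ^ a[0] ^ a[1] ^ a[2];
}
int main(void) {
    uint8_t in[4] = {0x12, 0x34, 0x56, 0x78}, out[4], back[4];
    f(in, out);
    f_inv(out, back);
    printf("%02x %02x %02x %02x -> %02x %02x %02x %02x -> %02x %02x %02x %02x\n",
           in[0], in[1], in[2], in[3], out[0], out[1], out[2], out[3],
           back[0], back[1], back[2], back[3]);
    return 0;
}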
I'm not sure I understand your question, but I think I get what you're trying to do.
Bitwise Exclusive Or is your friend.
If R = A XOR B, R XOR A gives B and R XOR B gives A back. So it's a reversible transformation, assuming you know the result and one of the inputs.
Assuming I understood what you're trying to do, I think any block cipher will do the job.
A block cipher takes a block of bits (say 128) and maps them reversibly to a different block with the same size.
Moreover, if you're using OFB mode you can use a block cipher to generate an infinite stream of pseudo-random bits. XORing these bits with your stream of bits will give you a transformation for any length of data.
I'm going to throw out an idea that may or may not work.
Use a set of linear functions mod 256, with odd prime coefficients.
For example:
b0 = 3 * a0 + 5 * a1 + 7 * a2 + 11 * a3;
b1 = 13 * a0 + 17 * a1 + 19 * a2 + 23 * a3;
If I remember the Chinese Remainder Theorem correctly, and I haven't looked at it in years, the a values are recoverable from the b values. There may even be a quick way to do it.
This is, I believe, a reversible transformation. It's linear, in that a*f(x) mod 256 = f(a*x) and (f(x) + f(y)) mod 256 = f(x + y). Clearly, changing one input byte will change all the output bytes.
So, go look up the Chinese Remainder Theorem and see if this works.
What do you mean by a "linear" transformation?
O(n), or a function f with f(c * (a+b)) = c * f(a) + c * f(b)?
An easy approach would be a rotating bit shift (not sure if this fulfils the above mathematical definition). It's reversible and every byte can be changed, but it does not enforce that every byte actually changes.
EDIT: My solution would be this:
b0 = (a0 ^ a1 ^ a2 ^ a3)
b1 = a1 + b0 ( mod 256)
b2 = a2 + b0 ( mod 256)
b3 = a3 + b0 ( mod 256)
It would be reversible (just subtract the first byte from the others, then XOR the 3 resulting bytes into the first), and a change in one bit would change every byte (as b0 is derived from all the input bytes and feeds into all the others); see the sketch below.
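A minimal sketch of that forward/inverse pair, assuming plain byte arrays (the function names are mine):
#include <stdint.h>

void mix(const uint8_t a[4], uint8_t b[4]) {
    b[0] = a[0] ^ a[1] ^ a[2] ^ a[3];
    b[1] = (uint8_t)(a[1] + b[0]);      /* mod 256 via uint8_t wrap-around */
    b[2] = (uint8_t)(a[2] + b[0]);
    b[3] = (uint8_t)(a[3] + b[0]);
}

void unmix(const uint8_t b[4], uint8_t a[4]) {
    a[1] = (uint8_t)(b[1] - b[0]);      /* subtract the first byte from the others */
    a[2] = (uint8_t)(b[2] - b[0]);
    a[3] = (uint8_t)(b[3] - b[0]);
    a[0] = b[0] ^ a[1] ^ a[2] ^ a[3];   /* XOR them back into the first */
}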
Stick all of the bytes into a 32-bit number and then do a shl or shr (shift left or shift right) by one, two or three bits. Then split it back into bytes (you could use a variant record). This will move bits from each byte into the adjacent byte.
There are a number of good suggestions here (XOR, etc.) I would suggest combining them.
You could remap the bits. Let's use ii for input and oo for output:
oo[0] = (ii[0] & 0xC0) | (ii[1] & 0x30) | (ii[2] & 0x0C) | (ii[3] & 0x03)
oo[1] = (ii[0] & 0x30) | (ii[1] & 0x0C) | (ii[2] & 0x03) | (ii[3] & 0xC0)
oo[2] = (ii[0] & 0x0C) | (ii[1] & 0x03) | (ii[2] & 0xC0) | (ii[3] & 0x30)
oo[3] = (ii[0] & 0x03) | (ii[1] & 0xC0) | (ii[2] & 0x30) | (ii[3] & 0x0C)
It's not linear, but significantly changing one byte in the input will affect all the bytes in the output. I don't think you can have a reversible transformation such that changing one bit in the input will affect all four bytes of the output, but I don't have a proof.
