bitcast integer to vector of char - clang

I just compiled a small piece of C code using clang 3.7:
typedef unsigned char char4 __attribute__ ((vector_size (4)));
char4 f1 (char4 v)
{
    return v / 2;
}
That function compiles to (I removed the debug info):
define <4 x i8> @f1(<4 x i8> %v) {
entry:
  %div = udiv <4 x i8> %v, bitcast (<1 x i32> <i32 2> to <4 x i8>)
  ret <4 x i8> %div
}
According to the LLVM documentation, a bitcast operation doesn't change any bits, meaning that bitcasting <1 x i32> <i32 2> to <4 x i8> should yield <2, 0, 0, 0> (or <0, 0, 0, 2>, depending on endianness). Am I right?
Therefore, I'll get a division by zero in three of the four lanes.
The code I wrote was intended to be a broadcast (or splat), not a bitcast.
Could someone please explain what’s happening?
Thanks!

Actually, it looks like a bug in clang:
https://llvm.org/bugs/show_bug.cgi?id=27085
This input code should either not compile, generate a warning, or compile to a vector splat.
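In the meantime, one way to express the intended splat explicitly is to divide by a vector constant instead of a scalar. A minimal sketch using clang's vector extensions (the names f1_splat and two are just illustrative):
typedef unsigned char char4 __attribute__ ((vector_size (4)));

char4 f1_splat (char4 v)
{
    char4 two = {2, 2, 2, 2};  // explicit element-wise splat of the scalar 2
    return v / two;            // lane-wise division of <4 x i8> by <2, 2, 2, 2>
}
Since both operands are already char4 vectors, the question of how the scalar 2 should be converted never arises.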

Related

How would one create a bitwise rotation function in dart?

I'm in the process of creating a cryptography package for Dart (https://pub.dev/packages/steel_crypt). Right now, most of what I've done is either exposed from PointyCastle or simple-ish algorithms where bitwise rotations are unnecessary or replaceable by >> and <<.
However, as I move toward complicated cryptography solutions, which I can do mathematically, I'm unsure of how to implement bitwise rotation in Dart with maximum efficiency. Because of the nature of cryptography, the speed part is emphasized and uncompromising, in that I need the absolute fastest implementation.
I've ported a method of bitwise rotation from Java. I'm pretty sure this is correct, but unsure of the efficiency and readability:
My tested implementation is below:
int INT_BITS = 64; // Dart ints are 64 bit

static int leftRotate(int n, int d) {
  // In n << d, the last d bits are 0.
  // To put the first d bits of n at the end,
  // bitwise-or n << d with n >> (INT_BITS - d).
  return (n << d) | (n >> (INT_BITS - d));
}

static int rightRotate(int n, int d) {
  // In n >> d, the first d bits are 0.
  // To put the last d bits of n at the front,
  // bitwise-or n >> d with n << (INT_BITS - d).
  return (n >> d) | (n << (INT_BITS - d));
}
EDIT (for clarity): Dart has no unsigned shift operators, meaning that >> is an arithmetic (sign-extending) right shift, which bears more significance than I might have thought. It poses a challenge that other languages don't in terms of devising an answer. The accepted answer below explains this and also shows the correct method of bitwise rotation.
As pointed out, Dart has no >>> (unsigned right shift) operator, so you have to rely on the signed shift operator.
In that case,
int rotateLeft(int n, int count) {
  const bitCount = 64; // make it 32 for JavaScript compilation.
  assert(count >= 0 && count < bitCount);
  if (count == 0) return n;
  return (n << count) |
      ((n >= 0) ? n >> (bitCount - count) : ~(~n >> (bitCount - count)));
}
should work.
This code only works for the native VM. When compiling to JavaScript, numbers are doubles, and bitwise operations are only done on 32-bit numbers.

Get floating/decimal portion of a float

I would like:
unknown_function(123.456) -> 456
unknown_function(1234.56) -> 56
Or
unknown_function(123.456) -> "456"
Is there a builtin for this? The builtin trunc/1 does the opposite:
2> trunc(123.456).
123
There is this answer for C: Extract decimal part from a floating point number in C and this for Java: How to get the decimal part of a float?
No, there is no BIF for this, but you can do this:
decimal_point(X, DecimalDigits) when X < 0 ->
    decimal_point(-X, DecimalDigits);
decimal_point(X, DecimalDigits) ->
    round((X - trunc(X)) * math:pow(10, DecimalDigits)).

> decimal_point(2.33, 2).
33
> decimal_point(-2.33, 2).
33
This is inspired by @Dogbert's comment.
The algorithm doesn't work using native floats due to floating-point representation limits and rounding errors.
However, using https://github.com/tim/erlang-decimal:
frac_to_denom_int(Num, Denom, Precision) ->
    {X, _} = string:to_integer(
                 lists:nth(2,
                           string:tokens(
                               decimal:format(
                                   decimal:divide(Num, Denom, [{precision, Precision}])),
                               "."))),
    X.
E.g.,
frac_to_denom_int("1.0", "3.0", 1000).
> 3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
If you don't have a fraction,
d_to_denom_int(D_Tup) ->
    string:to_integer(lists:nth(2, string:tokens(decimal:format(D_Tup), "."))).
d_to_denom_int({0, 123456, -3}).
> 456
Based on @Dogbert's comment, passing one more flag, compact, in the float_to_list/2 call will help:
lists:nth(2, string:tokens(float_to_list(123.456, [{decimals, 10}, compact]), ".")).
% "456"
If you go over {decimals, 14}, you'll start to see those rounding errors.

How to generate LLVM SSA Format

I wrote the following C code, where the variable x is assigned twice:
int main()
{
    int x;
    x = 10;
    x = 20;
    return 0;
}
I compile and generate the IR representation using the following command:
clang -emit-llvm -c ssa.c
The generated IR:
; Function Attrs: nounwind uwtable
define i32 @main() #0 {
entry:
  %retval = alloca i32, align 4
  %x = alloca i32, align 4
  store i32 0, i32* %retval
  store i32 10, i32* %x, align 4
  store i32 20, i32* %x, align 4
  ret i32 0
}
If my understanding of SSA form is correct, we should see two LLVM IR variables, x1 and x2, generated in this example and assigned the values 10 and 20 respectively. Is there some specific option we should compile with to get the SSA IR representation, or is my understanding of the IR representation incorrect? Please advise.
EDIT: as suggested in one answer, using the -mem2reg optimization pass gives me the following output:
clang -c -emit-llvm ssa.c -o ssa.bc
opt -mem2reg ssa.bc -o ssa.opt.bc
llvm-dis ssa.opt.bc
cat ssa.opt.ll
Resulting IR:
; Function Attrs: nounwind uwtable
define i32 @main() #0 {
entry:
  ret i32 0
}
It looks like the entire x assignment got optimized away by the mem2reg pass. Is there any other way to generate and retain the different x values?
The LLVM passes mem2reg and reg2mem convert code to and from SSA form. You can run them using the opt tool.
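For what it's worth, a sketch of one way to keep x around: if x is actually used (and assigned on more than one path), mem2reg has to keep SSA values for it rather than deleting dead stores. The file name ssa2.c, the function f, and the expected phi node are illustrative assumptions about how mem2reg normally behaves:
// ssa2.c -- illustrative only: x is used and assigned on two paths,
// so mem2reg should promote it to SSA values joined by a phi node.
// For example:
//   clang -O0 -emit-llvm -c ssa2.c
//   opt -mem2reg -S ssa2.bc
int f(int cond)
{
    int x;
    x = 10;
    if (cond)
        x = 20;   // second reaching definition
    return x;     // this use keeps both assignments alive
}
In the original example, x is never read, so its stores are dead and mem2reg is free to throw them away along with the alloca.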

pow function in Objective-c

I am implementing a math calculation in Objective-C in which I have used the pow(<#double#>, <#double#>) function, but it behaves unexpectedly.
I am trying to evaluate the expression below:
100 *(0.0548/360*3+powf(1+0.0533, 3/12)/powf(1+0.0548, 3/12))
For the same expression, the results from Excel and Xcode are different.
Excel output = 100.01001 (correct)
NSLog(@"-->%f",100 *(0.0548/360*3+powf(1+0.0533, 3/12)/powf(1+0.0548, 3/12)));
Xcode output = 100.045667 (wrong)
Now, as everyone knows, 3/12 = 0.25.
When I replace 3/12 with 0.25 in the above expression, Xcode returns the correct result, as below:
Excel output = 100.01001 (correct)
NSLog(#"-->%f",100 *(0.0548/360*3+powf(1+0.0533, 0.25)/powf(1+0.0548, 0.25)));
Xcode output = 100.010095 (correct)
Does anyone know why the pow function behaves like this?
Note: I have also used powf, but the behavior is still the same.
3/12, when you're doing integer math, is zero. In languages like C, C++, ObjC and Java, an expression like x / y containing only integers gives you an integral result, not a floating-point one.
I suggest you try 3.0/12.0 instead.
The following C program (identical behaviour in this case to ObjC) shows this in action:
#include <stdio.h>
#include <math.h>

int main (void) {
    // Integer math.
    double d = 100 *(0.0548/360*3+powf(1+0.0533, 3/12)/powf(1+0.0548, 3/12));
    printf ("%lf\n", d);

    // Just using zero as the power.
    d = 100 *(0.0548/360*3+powf(1+0.0533, 0)/powf(1+0.0548, 0));
    printf ("%lf\n", d);

    // Using a floating point power.
    d = 100 *(0.0548/360*3+powf(1+0.0533, 3.0/12.0)/powf(1+0.0548, 3.0/12.0));
    printf ("%lf\n", d);

    return 0;
}
The output is (annotated):
100.045667 <= integer math gives wrong answer.
100.045667 <= the same as if you simply used 0 as the power.
100.010095 <= however, floating point power is okay.

How to set all elements in a __m256d to, say, the 3rd element of another __m256d?

With 4 packed floats (__m128), I can use the SSE intrinsic
__m128 X;
__m128 H = _mm_shuffle_ps(X,X,_MM_SHUFFLE(3,3,3,3));
to set all elements of H to the third element of X (is this the fastest way?)
Now, I want to do the same with 4 packed doubles (__m256d). I naively coded
__m256d X;
__m256d H = _mm256_shuffle_pd(X,X,_MM_SHUFFLE(3,3,3,3));
but this doesn't do the right thing! Instead it sets H={X[1],X[1],X[3],X[3]}.
So, how to do it right?
EDIT
using Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
It is not always optimal, but asking your compiler what it thinks can be a nice hint.
#include <x86intrin.h>

__m256d f(__m256d x){
    __m256i m = {3, 3, 3, 3};
    return __builtin_shuffle(x, m);
}
With gcc-4.8, this generates:
vpermilpd $15, %ymm0, %ymm0
vperm2f128 $17, %ymm0, %ymm0, %ymm0
clang has a different builtin for shuffling, I don't know if other compilers have something.
Okay, after Mystical's comments, I could work it out myself:
template<int K>
inline __m256d pick_single(__m256d x)
{
    __m256d t = _mm256_permute2f128_pd(x, x, K&2 ? 49 : 32);
    return _mm256_permute_pd(t, K&1 ? 15 : 0);
}
yields the desired result. Thanks for your help, Mystical!
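For completeness, a small usage sketch (untested, assuming an AVX-capable target, e.g. compiled with -mavx) that broadcasts element 3 into all four lanes; pick_single is repeated from the answer above so the example is self-contained:
#include <cstdio>
#include <immintrin.h>

// pick_single<K> as in the answer above: broadcast element K of x to all lanes.
template<int K>
inline __m256d pick_single(__m256d x)
{
    __m256d t = _mm256_permute2f128_pd(x, x, K&2 ? 49 : 32);  // select the 128-bit lane holding element K
    return _mm256_permute_pd(t, K&1 ? 15 : 0);                // duplicate the odd/even element within each lane
}

int main()
{
    __m256d x = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);  // elements: x[0]=1.0 ... x[3]=4.0
    __m256d h = pick_single<3>(x);                  // expect {4.0, 4.0, 4.0, 4.0}
    double out[4];
    _mm256_storeu_pd(out, h);
    std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}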
